» Archive for the 'Knowledge Management' Category

When Too Much Knowledge Becomes a Dangerous Thing

Thursday, June 17th, 2010 by Jonathan Spira

Socrates was relentless in his pursuit of knowledge and truth, and this eventually led to his death.  In The Apology, Plato writes that Socrates believed that the public discussion of important issues was necessary for a life to be of value: “The unexamined life is not worth living.”

Danger, Professor Robinson?

In the olden days, before the Web, someone wishing to leak secret government documents would adopt a code name (think “Deep Throat” of the Watergate era) and covertly contact a journalist.  The reporter would then publish the information if, in the view of the reporter, editor, and publisher, it did not cross certain lines, such as placing the lives of covert CIA agents in danger.

Enter WikiLeaks.

WikiLeaks, founded in 2006, describes itself as “a multi-jurisdictional public service designed to protect whistleblowers, journalists and activists who have sensitive materials to communicate to the public.”

The site was founded to support “principled leaking of information.”  A classic example of an individual following this line of reasoning, namely that leaking classified information is necessary for the greater good, is that of Daniel Ellsberg, who leaked the Pentagon Papers, thereby exposing the U.S. government’s attempts to deceive the U.S. public about the Vietnam War.  The decision by the New York Times to publish the Pentagon Papers is credited with shortening the war and saving thousands of lives.

Time magazine wrote that WikiLeaks, located in Sweden, where laws protect anonymity, “… could become as important a journalistic tool as the Freedom of Information Act.”

On the other hand, the U.S. government considers WikiLeaks to be a potential threat to security.  In a document eventually published on the WikiLeaks site, the Army Counterintelligence Center wrote that WikiLeaks “represents a potential force protection, counterintelligence, operational security (OPSEC), and information security (INFOSEC) threat to the US Army.”  The document also states that “the identification, exposure, termination of employment, criminal prosecution, legal action against current or former insiders, leakers, or whistleblowers could potentially damage or destroy this center of gravity and deter others considering similar actions from using the WikiLeaks.org web site.”

Ten days ago, Wired magazine reported that U.S. officials had arrested Spc. Bradley Manning, a 22-year-old army intelligence analyst who reportedly leaked hundreds of thousands of classified documents and records as well as classified U.S. combat videos to WikiLeaks.

Although WikiLeaks’ confidentiality has never been breached, Manning reportedly bragged about his exploits, resulting in his apprehension.

According to Wired, Manning took credit for leaking the classified video of a helicopter air strike in Baghdad that also claimed the lives of several civilian bystanders.  The previously-referenced Army Counterintelligence Center document also reportedly came from Manning.

The case of Manning is perhaps the tip of the iceberg.  Several million people in the U.S. hold security clearances and, while their motives may vary from clear (e.g. trying to end a war as in the case of Ellsberg) to unclear (e.g. Manning), the genie is clearly out of the bottle.

Socrates, a social and moral critic, preferred to die for his beliefs rather than recant them.  Indeed, Plato referred to Socrates as the “gadfly” of the state.  The motives of today’s leakers may not be as virtuous as Socrates’, but today’s technology virtually ensures that a secret may not remain a secret for very long.

Jonathan B. Spira is CEO and Chief Analyst at Basex.

Searching for Needles in Haystacks: How our brain sabotages our searches

Thursday, January 28th, 2010 by Cody Burke

In a recent study funded by the U.S. Department of Homeland Security (DHS) and reported in LiveScience, researchers found that subjects’ expectations of finding something had a direct effect on their success rates for finding the items in question.

Found it yet?

In the study, subjects looked at X-ray scans of checked baggage and tried to identify the presence of guns and knives.  In the first trial, a gun or knife was present in 50% of the bags, and subjects only missed the weapons 7% of the time.  In the second trial, the guns and knives were in only 2% of the bags, and the subjects missed the weapons 30% of the time.  In short, when something is harder to find, our accuracy in identifying it drops significantly.
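The effect of prevalence on accuracy can be made concrete with a little arithmetic using the study's own figures (the per-10,000-bag framing is ours, for illustration):

```python
# Illustrative arithmetic from the DHS screening study figures cited above:
# prevalence is the fraction of bags containing a weapon, miss_rate the
# fraction of those weapons the screener failed to spot.
def missed_weapons(bags: int, prevalence: float, miss_rate: float) -> int:
    """Expected number of weapons that slip through screening."""
    weapons_present = bags * prevalence
    return round(weapons_present * miss_rate)

# Trial 1: weapons in 50% of bags, 7% miss rate.
# Trial 2: weapons in only 2% of bags, 30% miss rate.
for prevalence, miss_rate in [(0.50, 0.07), (0.02, 0.30)]:
    missed = missed_weapons(10_000, prevalence, miss_rate)
    print(f"prevalence {prevalence:.0%}: {missed} weapons missed per 10,000 bags")
```

Note that the rarer target produces fewer absolute misses only because there are so few targets; the miss *rate* more than quadruples, which is the finding that matters.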

This is a trick our brain plays on us: when we repeatedly fail to find what we are looking for, the brain becomes bored and stops paying attention, which means we then miss things when they do appear.

While the implications for airline security are obvious and somewhat chilling, the implications for the enterprise are also worth examining.  Knowledge workers spend ca. 15% of their day searching for content.  Applying the lessons learned in the DHS study, we can assume that if a search query returns fewer correct results in relation to incorrect results, the knowledge worker’s accuracy in picking out the relevant items will decline.

Conversely, just as in the DHS study, if the correct to incorrect ratio is better, meaning there is a higher number of correct results, then the knowledge worker is much more likely to find more of them.

For knowledge-based organizations and providers of software to these groups, the lessons from this study are clear: search tools must be improved to provide better ratios of relevant, useful results.  Today’s search tools focus on returning large sets of results and the answers to a search query may very well lie somewhere within these.  However, the low signal-to-noise ratio virtually ensures low accuracy even if one were to comb through every last result.

Search results need to be highly contextual and limited in volume to ensure accuracy and provide a favorable ratio of correct to incorrect results.  This keeps the knowledge worker engaged and not feeling that he is looking for a needle in a haystack; this, in turn, increases the probability of identifying the needed content.

Cody Burke is a senior analyst at Basex.

The Christmas Day Terrorism Plot: How Information Overload Prevailed and Counterterrorism Knowledge Sharing Failed

Monday, January 4th, 2010 by Jonathan Spira

There is no question that analyzing mountains of information and determining what is important, urgent, and worthy of follow-up (three separate and distinct categories) is a daunting task in any organization.

Are we sharing all of our knowledge yet?

When the organization is the United States Federal Government and the amount of information that has to be addressed daily dwarfs what most people can conceptualize, lives may be at stake when an individual or system fails to connect the dots.

Such a failure occurred on December 25, 2009, but it need not have.

The tools to manage information on a massive scale do indeed exist and it is clear that the U.S. government is either not deploying the right ones or not using them correctly.

The National Counterterrorism Center, created in 2004 following recommendations of the 9/11 Commission, has a mission to break “the older mold of national government organizations” and serve as a center for joint operational planning and joint intelligence.  In other words, various intelligence agencies were ordered to put aside decades-long rivalries and share what they know and whom they suspect.  Unfortunately, while this sounds good in theory, in practice this mission may not yet be close to being fully carried out.

In addition to the fact that old habits die hard (such as a disdain for inter-agency information sharing), it appears that the folks at the NCTC failed to grasp basic tenets of knowledge sharing, namely that search, in order to be effective, needs to be federated and contextual, that is to say it needs to simultaneously search multiple data stores and present results in a coherent manner.

Discrete searches in separate databases will yield far different results from a federated search that spans multiple databases.  All reports indicate that intelligence agencies were still looking at discrete pieces of information in separate and distinct databases, and the agencies themselves were not sharing all that they knew.

In this case, much was known about Umar Farouk Abdulmutallab, the Nigerian man accused of trying to blow up Northwest Flight 253.  In May, Britain put him on a watch list and refused to renew his visa.  In August, the National Security Agency overheard Al Qaeda leaders in Yemen discussing a plot involving a Nigerian man.  In November, the accused’s father warned the American Embassy (and a CIA official) in Abuja that his son was a potential threat.  As a result, the son was put on a watch list that flagged him for future investigation.  He bought his plane ticket to Detroit with cash and boarded the flight with no luggage.  Yet, almost unbelievably, no one saw a pattern emerge here.

Shouldn’t a system somewhere have put the pieces of this puzzle together and spit out “Nigerian, Abdulmutallab, Yemen, visa, plot, cash ticket purchase, no luggage = DANGER!”?
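Such a system is, at its core, a federated search across the separate stores.  A minimal sketch of the idea, with hypothetical record stores standing in for the agencies' databases and contents drawn from the publicly reported facts of the case:

```python
# A minimal sketch of federated vs. discrete search. The Record class and
# the store names are hypothetical; real intelligence systems are far more
# complex, but the principle of querying all stores at once is the same.
from dataclasses import dataclass

@dataclass
class Record:
    source: str
    subject: str
    note: str

# Three separate "agency" data stores, each holding one fragment.
stores = {
    "embassy": [Record("embassy", "Abdulmutallab", "father warned embassy")],
    "nsa": [Record("nsa", "Abdulmutallab", "chatter about a Nigerian man")],
    "consular": [Record("consular", "Abdulmutallab", "visa refused, watch-listed")],
}

def federated_search(query: str):
    """Search every store simultaneously and merge the hits into one view."""
    hits = []
    for records in stores.values():
        hits.extend(r for r in records if query.lower() in r.subject.lower())
    return hits

# A discrete search of any single store returns one fragment; the federated
# search assembles all three, which is where the pattern becomes visible.
results = federated_search("abdulmutallab")
print(f"{len(results)} records across {len({r.source for r in results})} stores")
```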

Information Overload is partially to blame as well.  Given the vast amount of intelligence that the government receives every day on suspected terrorists and plots, it could very well be that analysts were simply overwhelmed and did not notice the pattern.  Rather than being immune from the problem, given the sheer quantity of the information it deals with, the government is more of a poster child for it.

Regardless of what comes out of the numerous investigations of the Christmas Day terrorism plot and the information-sharing failures of the various intelligence agencies, one thing was abundantly clear by Boxing Day: the Federal Government needs to greatly improve its ability to leverage the intelligence it gathers and connect the dots.

Clearly, there are many changes that need to occur in order to improve security but one relatively simple way for the government to proceed is to take the first steps to lower the amount of Information Overload and raise the signal-to-noise ratio so that critical information can rise to the top.

Jonathan B. Spira is CEO and Chief Analyst at Basex.

In the Briefing Room: Kana 10

Wednesday, November 25th, 2009 by Cody Burke

One can find software for virtually any purpose today, yet this very fact highlights a key paradox in the knowledge economy. 

Kana 10 allows companies to create process flows visually

Today’s software tools can handle almost any task but, since they are mostly not integrated with one another, they force users to shift constantly between windows and interfaces in the course of completing a task.  This results in significant amounts of wasted time, and perhaps more critically, missed opportunities to obtain valuable information needed to execute tasks effectively.

The need to constantly shift between tools is a problem that will have to be addressed as companies move towards the deployment of a true Collaborative Business Environment (CBE), our vision for the future of the knowledge worker’s workspace that will drive efficiencies.  The CBE’s basic principles are the One Environment Rule (a single work environment), Friction-Free Knowledge Sharing, and Embedded Community.  Clearly, the problem of too many tools and interfaces is at loggerheads with the concept of the One Environment Rule.

Software companies have taken note and are moving to provide solutions.  Kana, a CRM company, has begun to address the problem for call center agents and managers with its Kana 10 platform.  Kana 10 is a CRM system that aims to optimize the experience for customers by providing agents with information that is contextual to the call they are on, without requiring them to leave the environment.

The primary point of interface for Kana 10 is the Adaptive Desktop, a single desktop environment that changes based on the user’s needs, presenting the modules, information, and cues that guide an agent through a given process, such as a conversation with a customer.  The offering hinges on the idea that providing all relevant information in the context of what the agent is doing will improve service and efficiency.

To this end, Kana focuses on Service Experience Management (SEM), which in layman’s terms means that the experience is controlled in near real-time as the agent progresses through a customer interaction.  Changes that are made to processes are reflected quickly, with no IT department involvement required.  Process creation and changes are done through a simple drag-and-drop interface that builds a process flow.  The ability to create flows that automate functions reduces the steps the agent must take, such as shuffling between windows and cutting and pasting information.
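One way to picture such a flow is as an ordered list of steps that the system walks the agent through; the step names below are invented for illustration and do not reflect Kana's actual flow model:

```python
# A sketch of a call-center process flow defined as data, in the spirit of
# the drag-and-drop flow builder described above (step names are invented).
flow = [
    {"step": "greet_customer", "prompt": "Verify account number"},
    {"step": "lookup_account", "auto": True},  # automated: no window shuffling
    {"step": "resolve_issue",  "prompt": "Offer troubleshooting script"},
    {"step": "close_call",     "prompt": "Confirm resolution"},
]

def next_step(flow, current: str):
    """Guide the agent from the current step to the next one in the flow."""
    names = [s["step"] for s in flow]
    i = names.index(current)
    return flow[i + 1] if i + 1 < len(flow) else None

print(next_step(flow, "greet_customer"))
```

Because the flow is data rather than code, a manager could edit it without IT involvement, which is the point the briefing makes.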

Kana 10 gives organizations the tools to build call center work environments that exhibit many of the positives that a true Collaborative Business Environment has to offer.

Cody Burke is a senior analyst at Basex.

In the briefing room: Dow Jones Companies & Executives Sales

Thursday, November 12th, 2009 by Cody Burke

In an age of ubiquitous social networking tools and near exponential content creation, rapidly rising levels of information that may or may not be relevant to a particular individual inhibit one’s ability to keep track of contacts, key industry news, and business intelligence.

Dow Jones Companies & Executives Sales

The ideal situation would be for sales and business development professionals to be presented with current and accurate information as it is needed.  Prior to a sales meeting, having a summary of recent company and industry news automatically delivered would prepare the sales executive and increase the likelihood of a successful conclusion.  This requires tools that automatically surface relevant information.  In addition to the time saved from eliminating manual searching, this type of system solves a fundamental problem with searching for information: one has to know what is being looked for as well as how to use traditional search tools effectively.  Often, the most valuable information is that which is unexpected, for instance a surprise executive position change at a company that opens up the possibility for new business.

We wrote about these dynamics in great detail in our report, Searching for a Connection: Leveraging Enterprise Contacts with Social Software.  In that report, we discussed the acquisition of Generate, a business intelligence company, by Dow Jones, as well as various issues relating to the value of up-to-date information, the limitations of search technology, and what could be done to improve search in the enterprise.

Dow Jones has since incorporated Generate’s technology into the company’s business to business sales and marketing intelligence offering, Dow Jones Companies and Executives Sales.  The latest version of the offering makes some impressive strides towards delivering relevant information in a contextual and timely manner.  Users can set up triggers such as executive changes, product announcements, venture funding, and partnerships, which when detected result in an alert that includes company profile information, relevant executives and contacts, current news, and related documents.  The information itself comes from unstructured news content, Dow Jones’ owned and licensed content that includes company and executive profiles and records, CRM contact and account information, and personal contact lists imported from Outlook or LinkedIn.

Once a trigger event occurs, the system presents contacts that are weighted for relevancy to enable the user to follow up on leads exposed by the trigger event.  A contact from LinkedIn, for example, is weighted highly because it is presumed to be a personal contact.  This enables sales and business development professionals to find the shortest connection path to a prospect or contact via their work history, CRM system, and personal contact lists.

Dow Jones Companies and Executives Sales is a significant step towards presenting useful information as it is needed without requiring extraneous effort, and will help to surface critical information that would otherwise have gone unnoticed.  It is a strong starting platform with potential for exciting features and functionality, and we are eager to see how it develops.

Cody Burke is a senior analyst at Basex.

Seeking the Forest of Experts Through the Trees

Thursday, October 29th, 2009 by Jonathan Spira

The problem of Information Overload has significantly exacerbated the problem of finding an expert.

Where are the experts?

Do a search and you’ll get 564,768 results.  Of course searches only address explicit information.  Most information is tacit knowledge that resides in the minds of experts.  When those experts leave the office in the evening, they take that knowledge with them.  Of course, some may never return and take their expertise to a new job and/or employer.

Knowledge workers use myriad methods to locate an expert, but two stick out in my mind:  1.) asking a few close colleagues and getting a limited number of answers, and 2.) sending an “all-hands” e-mail to hundreds or thousands of colleagues in which one’s query is stated.  In thousands of companies across the land, e-mail messages that state “does anyone know anything about…” or “does anyone know anyone who knows the VP of marketing at…” are commonplace.

Of course, the all-hands method is very disruptive and adds to the problem of Information Overload.  An e-mail query that should only have gone to a handful of colleagues but went to 500 cost the company 1.7 days (ca. 40 hours) in lost man-hours when you calculate the impact of the interruption to 488 people who didn’t have to receive it.  Of course, that e-mail dance probably happens multiple times a week despite the results of this search technique being modest at best.
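The arithmetic behind that figure can be sketched as follows; the roughly five minutes lost per unnecessary recipient is an assumption chosen to match the article's numbers:

```python
# Reproducing the interruption-cost arithmetic above. The ~5 minutes lost
# per unnecessary recipient is an assumed figure that yields the article's
# roughly 40 hours (ca. 1.7 days) for the 488 people who did not need the
# e-mail; actual per-interruption cost varies by study and task.
def interruption_cost_hours(recipients: int, minutes_lost_each: float) -> float:
    """Total man-hours lost across all unnecessary recipients."""
    return recipients * minutes_lost_each / 60

hours = interruption_cost_hours(488, 5)
print(f"{hours:.1f} man-hours lost, ca. {hours / 24:.1f} days")
```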

Expertise location tools, which have been around for well over a decade, have been unable to keep up with what people know (no surprise there) and who the experts are.  A Basex research report from 2002 profiled expertise location and management platforms from eight companies.  Five of them have disappeared, one was purchased by Oracle, and the other two (Lotus and Sopheon) were not focused solely on solving the problem.

Social software may give us more breadcrumbs in determining the answer to the “who knows what” question but much work is still needed to create a platform that integrates expertise location into commonly-used enterprise tools (both to locate as well to rate experts).

Before that, however, one should focus efforts on lowering the overall amount of Information Overload within an organization, as doing that will make it much easier to see the forest of experts through the trees.

A wealth of Information Overload resources including a three-minute movie on the topic featuring senior executives discussing how Information Overload impacts them may be found on our Information Overload microsite.

Jonathan B. Spira is CEO and Chief Analyst at Basex.

In the briefing room: Mindjet Catalyst

Thursday, October 15th, 2009 by Cody Burke

Collaboration should be a given in practically every task a knowledge worker undertakes.

Mindjet Catalyst

Frequently, however, it isn’t, and, in many cases where collaboration does take place, it is not used to its best advantage.  Part of the reason is that much collaborative work takes place without a full picture of the project at hand.

Indeed, there exist different dimensions to collaboration and there is a significant need to connect knowledge workers, the collaborative process itself, and the organization with relevant complex information, ideas, and processes.  Given the trend towards both a dispersed workforce and the need for collaboration among multiple entities, the need to effectively manage a project requires new approaches to joining people with information.

One approach that will make collaboration between knowledge workers more effective is to ensure that the supporting information is captured in a form that adds context and is easily shareable.  To add context, information must be linked to people, documents, and other supporting content.  One method of doing this is to create a mind map.  Mind mapping is a technique for brainstorming and organizing data that starts with a central point and then adds branches with related content such as links, documents, attachments, notes, and tasks.  The resulting diagram is a visual guide to a set of information that allows knowledge workers to see the big picture and understand the context of what they are doing.
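The structure of a mind map, a central topic with branches carrying related content, can be sketched as a simple tree; the class and field names here are illustrative, not any vendor's actual data model:

```python
# A minimal sketch of a mind map: a central topic with branches that carry
# related content (links, notes, and further branches), as described above.
from dataclasses import dataclass, field

@dataclass
class MapNode:
    topic: str
    links: list = field(default_factory=list)     # URLs, documents
    notes: list = field(default_factory=list)
    branches: list = field(default_factory=list)  # child MapNodes

    def add_branch(self, topic: str) -> "MapNode":
        """Attach a new branch and return it for further population."""
        child = MapNode(topic)
        self.branches.append(child)
        return child

# Build a tiny map: a central point plus two branches of supporting content.
root = MapNode("Product launch")
research = root.add_branch("Market research")
research.links.append("competitor-analysis.doc")
root.add_branch("Timeline").notes.append("beta in Q3")
```

Rendering such a tree visually is what gives the knowledge worker the big-picture context the paragraph above describes.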

One company active in this space is Mindjet.  The company has made its reputation through the development of mind mapping products, such as its MindManager product line.  Adding further value to the company’s mind mapping capabilities, Mindjet recently launched Mindjet Catalyst, an online visual collaboration platform.  The offering has clear roots in the visual approach to mind mapping that the company is known for, and adds a team-based collaborative element.

Catalyst is an online service that can be accessed from anywhere via a Web browser and hooks into standards-based document repositories such as SharePoint.  Multiple users can make edits and attach supporting documents and other content to a mind map and have the changes reflected in real time.  The offering also includes pre-built map templates for common business situations such as online meetings or idea generation sessions.  Once maps are generated, they are shareable with colleagues (both users of Catalyst and those who do not use the product) via links that are e-mailed or posted on social networking sites.  Workspaces are assigned permission levels that grant reader, author, and owner access.  In addition, the environment is persistent, meaning that users are able to see changes that have occurred.

Catalyst also features integrated online chat functionality, and (optional) Web conferencing capabilities.  The integrated online chat embeds community into the work environment and allows for communication between colleagues without forcing them to leave the environment and switch tools.  The Web conferencing module includes desktop sharing, video and VoIP support, file transfer, and session recording.

Mindjet has taken a good and underappreciated idea, the visual mapping of information, and successfully integrated collaborative capabilities and tools into it.  Displaying information in a visual and connected way gives the knowledge worker context that is critically important for making informed decisions, capturing new information, and understanding business processes.  The addition of powerful collaborative elements extends the value of mind mapping by allowing knowledge workers to use the environment for the kind of collaborative team-based work that is a reality in the knowledge economy.

Cody Burke is a senior analyst at Basex.

In the Briefing Room: eDev inteGreat

Thursday, October 1st, 2009 by Jonathan Spira and Cody Burke

Many people think of software development as lone programmers working in isolation, perhaps reminded of Douglas Coupland’s 1995 classic Microserfs, where programmers slide flat foods, such as “Kraft singles, Premium Plus Crackers, Pop-Tarts, grape leathers, and Freeze Pops” under the door of a fellow coder after they hadn’t seen him in days.  In reality, the process of software development is a collaboration-intensive activity that would benefit greatly from improved knowledge management technology and thinking, much in the way knowledge sharing and collaboration happen between workers in far less technical occupations.  Unfortunately, many managers fail to realize the necessity of actively managing knowledge and facilitating collaboration in this area.

Companies typically spend vast amounts of time and money to document their requirements and it is far from easy to keep such documentation up to date.  At the same time, they struggle to find ways to interrelate information, given that such information comes from diversified sources.  In other words, how does one create a document that leverages information that is anywhere and everywhere and still be able to make sense out of it?

One company that provides a tool in this area is eDev Technologies via the company’s inteGreat offering.  The product is a requirements management solution that allows for the creation and reuse of requirements through the development of a central body of knowledge, which the company refers to as iBoK (Integrated Body of Knowledge).  This knowledge base is a collection of reusable requirements.  InteGreat allows developers to create requirements using a drag-and-drop interface and then relate them to one another to aid in reuse.

Requirements are then visually mapped out as process flows using MS Visio, and are saved either as inteGreat files or exported as Visio files.  Users also have the ability to create mockups using an included simulation tool.  Once a process is created, generated documents are exported via MS Word, Excel, or Visio, or saved within inteGreat.

As in any form of knowledge work, the recreation of content, in this case requirements, is a huge and costly problem, and is essentially a problem of finding things and avoiding recreating that which already exists.  If knowledge workers cannot find information, be it a document or a requirement, they will have to recreate it, increasing project costs, squandering limited resources, and impacting an organization’s bottom line.  The end result of enabling the reuse of requirements is that, for future projects, there will be a reduction in the time and cost of gathering requirements, as well as a lessened burden of maintaining software.
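The "find before you recreate" principle can be illustrated with a toy repository; the class below is our sketch of the idea, not eDev's actual iBoK implementation:

```python
# A sketch of the "find before you recreate" idea behind a reusable
# requirements repository. The class and method names are illustrative.
class RequirementsRepository:
    def __init__(self):
        self._reqs = {}  # requirement name -> requirement text

    def find_or_create(self, name: str, build) -> str:
        """Reuse an existing requirement; only author it if it is truly new."""
        if name not in self._reqs:
            self._reqs[name] = build()  # the costly authoring step
        return self._reqs[name]

repo = RequirementsRepository()
r1 = repo.find_or_create("user-login", lambda: "User shall authenticate via SSO")
r2 = repo.find_or_create("user-login", lambda: "duplicate effort avoided")
# r2 reuses r1's text; the second, redundant authoring step never runs.
```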

In inteGreat, the ability to reuse requirements once they are developed adds a much needed knowledge management aspect to the development of requirements, affording software developers the same KM capabilities that other knowledge workers now take for granted.  In turn, as more companies adopt similar solutions, they will see increases in efficiency and a reduction in the time spent recreating requirements.

Jonathan B. Spira is CEO and Chief Analyst at Basex.
Cody Burke is a senior analyst at Basex.

In the Briefing Room: Liaise

Thursday, September 24th, 2009 by Jonathan Spira and Cody Burke

Think carefully about the last action item you sent someone.  It was in an e-mail and it’s been several days and there’s been no acknowledgement.  In fact, you are not sure that the recipient is even aware of its existence.  So you send another e-mail and wait.

Your last action item is now with umpteen others that have not seen the light of day.

How many action items and requests fall through the cracks?  Some tasks, due to the nebulous nature of how they are communicated, may not even appear to the recipient as a task at all.  Some tasks are unimportant, busy work that is not critical and should never make it on to a task list.  However, others may be extremely important, yet these may not be recognized for what they are: steps that need to be undertaken as part of a process.

It is simply not possible for knowledge workers to recall on their own everything that has been done and what has not yet been addressed.

In a sense, e-mail is a pit that we tend to throw requests into, hoping that they will resurface, completed.  The problem is that the content of e-mail is static: once sent, it is locked into the e-mail and not linked to other content or systems in any meaningful way.

However, there are some potential solutions looming on the horizon.

One, the eponymously-named Liaise, is a new inbox add-on (currently only available for Outlook) that scans e-mail messages as they are being composed and creates a task list based on any action items it finds in the e-mail.  The underlying technology, called KeyPoint Intelligence, automatically finds, identifies and captures key points in a message.  Over time, the system learns and adapts to a user’s writing style in order to improve performance.

Liaise differentiates between issues (the report is late) and action items (review the report), and compiles all of these into a separate task list.  The tasks are scanned to determine the nature of the task, who is involved, and when it is due.  When an e-mail is sent, any new tasks are automatically added to the user’s list.  If the recipient does not have Liaise, the e-mail is delivered as usual and, when it is replied to, the system scans the message and updates the task list accordingly.  If both users have Liaise, then both see the new tasks in their respective task lists, and any changes or progress made are automatically reflected without further e-mail being sent around a team.
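To illustrate the general idea of mining e-mail for action items (and not Liaise's proprietary KeyPoint Intelligence technology, which is certainly far more sophisticated), here is a deliberately naive keyword-based sketch:

```python
# A deliberately naive sketch of scanning an e-mail body for action items:
# split into sentences, then keep those that look like requests for action.
import re

ACTION_VERBS = ("review", "send", "schedule", "update", "prepare")

def extract_action_items(body: str) -> list:
    """Return sentences that look like requests for action."""
    sentences = re.split(r"(?<=[.!?])\s+", body.strip())
    return [s for s in sentences
            if any(s.lower().startswith(v) or f"please {v}" in s.lower()
                   for v in ACTION_VERBS)]

email = ("The report is late. Please review the draft by Friday. "
         "Send the revised figures to accounting.")
tasks = extract_action_items(email)
print(tasks)  # the first sentence is an issue, not a task, and is skipped
```

A real system would also have to extract who is involved and when the task is due, and adapt to the writer's style, as the paragraph above describes.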

Additionally, Liaise allows a knowledge worker who is about to go into a meeting to automatically see information such as all e-mail, tasks, and issues associated with the attendees.  This provides context to the knowledge worker and gives a quick overview of where people stand on projects they have been assigned.  Liaise shows the people in the meeting, the level of interaction that they all have, and relevant open matters.

Liaise is an exciting new tool for e-mail and task management that has great potential to reduce Information Overload by cutting down on the overall amount of e-mail in the inbox.  More significantly, Liaise has the potential to illuminate the dark pit that often is the knowledge worker’s inbox by extracting the important tasks, issues, and action items that otherwise would be lost in a sea of noise.

Jonathan B. Spira is CEO and Chief Analyst at Basex.
Cody Burke is a senior analyst at Basex.

In the Briefing Room: Gist

Thursday, September 17th, 2009 by Cody Burke

In an age of Information Overload, the inbox has come to dominate the knowledge worker’s world.  E-mail is, however, far from alone in competing for the knowledge worker’s precious time.  The rise of social networking tools such as LinkedIn, Facebook, and Twitter, along with other sources of content that have exploded in use, such as wikis and blogs, has created a tidal wave of content that more often than not swamps the knowledge worker.  It isn’t only the sheer volume, but the disparate sources of content that create Information Overload, which in turn impairs the knowledge worker’s ability to process information, make decisions, and get things done.

One solution to this overwhelming amount of content from various sources is aggregation, where the tidal wave is filtered down to a manageable stream, with only the most important and relevant content being presented to the user.  This reduces the harmful effects of Information Overload by limiting the non-essential content that is presented as well as dramatically reducing the time that would have been spent locating that content manually.

One company that is addressing this pain point is Gist, which launched an open beta of its eponymously-named relationship and information aggregation offering.  Gist retrieves content from sources such as Gmail, Outlook, LinkedIn, Facebook, and Twitter, ranks the content, and then prioritizes it based on relevancy (determined by analysis of a user’s inbox habits).  The content is also enhanced with further material culled from the Web, such as blog posts and news stories that set context.  The result is a dashboard presenting a snapshot of the user’s contacts and wider social network, combined with supplementary relevant information.  Assembling the same array of information manually would be a time-consuming process; Gist does this automatically, by parsing and rearranging the data in a meaningful way, depending on the context.
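As a rough illustration of relevancy ranking from inbox habits, here is a sketch that weights contacts by message frequency and recency; this is our assumption about one plausible approach, not Gist's actual algorithm:

```python
# A simplified sketch of relevancy ranking from inbox habits. Weighting by
# message frequency, discounted by age, is an assumed heuristic for
# illustration only.
from collections import Counter

def rank_contacts(messages: list) -> list:
    """messages: list of (contact, days_ago) tuples drawn from the inbox.
    Score each contact by frequency, with recent mail counting for more."""
    scores = Counter()
    for contact, days_ago in messages:
        scores[contact] += 1.0 / (1 + days_ago)  # recency discount
    return [contact for contact, _ in scores.most_common()]

inbox = [("alice@example.com", 1), ("bob@example.com", 30),
         ("alice@example.com", 2), ("carol@example.com", 0)]
print(rank_contacts(inbox))
```

Here a contact mailed today outranks one mailed twice a day or two ago, which in turn outranks a contact last heard from a month ago.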

In practice, Gist is useful for drilling down on a person, a meeting attendee for instance, and quickly compiling information, past communications, and other relevant data.  From the individual’s page, which presents contact information, blog posts, and aggregated communications, the user can pivot to a company’s page, which presents the same variety of information, giving context on the person for an upcoming meeting or sales call.

The service can be accessed from an account on the Gist Web page, via an Outlook plug-in, or from within Salesforce.  Gist has three options for inputting data: names of people may be added manually, and the system will then compile content on them; a list of contacts may be uploaded, such as a list of meeting attendees; or the system can run automatically and pull information from e-mail accounts and contact lists.

Gist, as its name suggests, is meant to provide the user with a general understanding of what is going on with their contacts and allow for deeper drill downs as needed, reducing the information flood to a manageable, and critically, relevant stream.  The offering does an excellent job of extending the functionality of Outlook via the dedicated plug-in, adding some much needed capabilities to the knowledge worker’s inbox.  As Gist moves through its beta phase, it shows great potential as a remedy for Information Overload and is certainly worth keeping an eye on.

Cody Burke is a senior analyst at Basex.

