Archive for the 'Content' Category

In the briefing room: Dow Jones Companies & Executives Sales

Thursday, November 12th, 2009 by Cody Burke

In an age of ubiquitous social networking tools and near-exponential content creation, rapidly rising volumes of information that may or may not be relevant to a particular individual inhibit one’s ability to keep track of contacts, key industry news, and business intelligence.

Dow Jones Companies & Executives Sales

The ideal situation would be for sales and business development professionals to be presented with current and accurate information as it is needed. Prior to a sales meeting, having a summary of a company’s and industry’s recent news automatically delivered would prepare the sales executive and increase the likelihood of a successful conclusion. This requires tools that automatically surface relevant information. In addition to the time saved by eliminating manual searching, this type of system solves a fundamental problem with searching for information: one has to know what one is looking for, as well as how to use traditional search tools effectively. Often, the most valuable information is that which is unexpected, for instance a surprise executive position change at a company that opens up the possibility of new business.

We wrote about these dynamics in great detail in our report, Searching for a Connection: Leveraging Enterprise Contacts with Social Software.  In that report, we discussed the acquisition of Generate, a business intelligence company, by Dow Jones, as well as various issues relating to the value of up-to-date information, the limitations of search technology, and what could be done to improve search in the enterprise.

Dow Jones has since incorporated Generate’s technology into the company’s business-to-business sales and marketing intelligence offering, Dow Jones Companies & Executives Sales. The latest version of the offering makes some impressive strides towards delivering relevant information in a contextual and timely manner. Users can set up triggers, such as executive changes, product announcements, venture funding, and partnerships, which when detected result in an alert that includes company profile information, relevant executives and contacts, current news, and related documents. The information itself comes from unstructured news content, Dow Jones’ owned and licensed content (including company and executive profiles and records), CRM contact and account information, and personal contact lists imported from Outlook or LinkedIn.

Once a trigger event occurs, the system presents contacts weighted for relevancy, enabling the user to follow up on leads exposed by the trigger event. A contact from LinkedIn, for example, is weighted heavily because it is presumed to be a personal contact. This enables sales and business development professionals to find the shortest connection path to a prospect or contact via their work history, CRM system, and personal contact lists.
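
To make the weighting concrete, here is a minimal sketch of how source-based relevancy scoring might work. The contact sources, weights, and scoring rule are illustrative assumptions for this example, not Dow Jones’ actual algorithm:

    # Hypothetical sketch of source-weighted contact ranking for a trigger
    # event; the sources, weights, and scoring rule are assumptions made
    # for illustration, not Dow Jones' actual implementation.
    from dataclasses import dataclass

    # Presumed-personal sources outrank shared or public ones.
    SOURCE_WEIGHTS = {
        "linkedin": 1.0,   # imported personal contact
        "outlook": 0.9,    # personal address book
        "crm": 0.6,        # shared account/contact record
        "news": 0.3,       # name surfaced only in news content
    }

    @dataclass
    class Contact:
        name: str
        source: str
        at_trigger_company: bool  # tied to the company in the alert?

    def relevancy(contact: Contact) -> float:
        """Score a contact for a trigger event such as an executive change."""
        score = SOURCE_WEIGHTS.get(contact.source, 0.1)
        if contact.at_trigger_company:
            score += 0.5
        return score

    contacts = [
        Contact("A. Chen", "linkedin", True),
        Contact("B. Patel", "crm", True),
        Contact("C. Ruiz", "news", False),
    ]
    for c in sorted(contacts, key=relevancy, reverse=True):
        print(f"{c.name:10s} {relevancy(c):.2f}")

The design point is simply that presumed-personal sources surface first, so the shortest personal path to a prospect rises to the top of the alert.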

Dow Jones Companies & Executives Sales is a significant step towards presenting useful information as it is needed without requiring extraneous effort, and it will help surface critical information that would otherwise have gone unnoticed. It is a strong starting platform with great potential for exciting features and functionality, and we are eager to see how it develops.

Cody Burke is a senior analyst at Basex.

Change Afoot in the Content Management Space

Thursday, October 1st, 2009 by David Goldes

Content management systems are taking on increasing importance in organizations of all sizes.

The content management market is seeing dramatic change thanks to new open source and commercial open source entries that are making significant inroads with customers. In addition, just to make things a bit more complex, companies need to prepare to manage multiple forms of content including wikis, blogs, RSS feeds, social networks, podcasts, and video.

This in turn has significantly changed the process of selecting a content management solution, a process that was never exactly straightforward, as it requires an in-depth understanding of both the organization’s needs and what the market has to offer.

Consider that companies that spend hundreds of thousands of dollars for content management systems might do equally well with platforms that cost one-tenth that amount.

Content management is no longer a nice-to-have tool; given the critical role of content (in all of its forms) in the enterprise, CM platforms have now been accorded the status of essential IT infrastructure.  That’s why one sees names such as EMC, IBM, and Oracle in the space.

Basex estimates that the U.S. market for content management was $4.1 billion in 2008 and will reach $10 billion by 2014.  Open source content management is gaining traction in some circles and the overall open source software market is growing rapidly.

Our increasing reliance on content, and the amount of content being created in the enterprise, make it even more critical that companies manage content effectively in order to avoid the problem of Information Overload.

To help companies navigate the space, Basex just released The Definitive Guide to Today’s Content Management Systems and Vendors, a 150-page report series.  The report series looks at 32 key content management vendors and 43 platforms and provides in-depth analysis — including market trends, drivers, and barriers — to guide decision makers in the selection process.

The good news is that companies today can find a wide range of content management systems at varying price points. The bad news is that selecting the RIGHT platform is more critical than ever to a company’s future, and most companies don’t have the resources to thoroughly investigate their options. Managers have to understand total cost of ownership, support options, and functionality when making that decision.

The report series is being published on a subscription basis and includes an in-depth industry survey, Content Management Systems: The New Math for Selecting Your Platform, and 16 Vendor Profiles of key content management providers and their offerings.

The vendor profiles provide a comprehensive analysis of content management offerings from Autonomy, Acquia, Alfresco, Bluenog, Day Software, EMC, EpiServer CMS, FatWire, Hippo, IBM, Microsoft, MindTouch, Nuxeo, Oracle, Open Text and Xerox.

You can purchase the report at a special introductory price from the Basex Web site.

David M. Goldes is the president of Basex.

The Content Management Interoperability Standard

Thursday, August 27th, 2009 by Jonathan Spira

Editor’s note: The following article was published in conjunction with the release of the Basex report series, The Definitive Guide to Today’s Content Management Systems and Vendors.

For organizations with multi-vendor, multi-repository content management environments, the time and money that must be spent to integrate these systems with other enterprise tools, as well as to get disparate content management platforms to somehow talk to one another, is significant.  Until such integration occurs, a sizable amount of content is accessible only within its original platform.  This means that most organizations have not even come close to unlocking the full value of their content.

As companies move deeper into the knowledge economy, content management is no longer a platform that can evolve separately from other key application platforms in a company’s information infrastructure: it has to be fully integrated.

The future of the knowledge workers’ desktop lies in a fully-integrated Collaborative Business Environment, a workspace that supersedes the traditional desktop metaphor and provides the knowledge worker with access to all forms of information, resources (including people), tools, and applications that support his work.  A true Collaborative Business Environment will include systems that integrate multiple content repositories and provide seamless access to enterprise content.

Content management vendors recognized that a common standard was needed, one that would allow knowledge workers to access disparate repositories, and in 2006, EMC, IBM, and Microsoft began discussions towards that end. The result was Content Management Interoperability Services, or CMIS. The new standard is a jointly developed specification that uses Web Services to enable application interoperability with disparate content management repositories. By the time CMIS was announced in September of 2008, the three partners had been joined by Alfresco, BEA (now Oracle), Open Text, and SAP. At that time, the standard was turned over to OASIS (Organization for the Advancement of Structured Information Standards) for advancement through its standards development process.

The goal for CMIS is to reduce the IT burden of maintaining multi-vendor, multi-repository content management platforms. Companies typically incur high costs to create and maintain code that integrates different ECM systems within their organizations, and software vendors have to create platform-specific applications that work with a specific CM platform. The CMIS specification is designed to support integration across multiple vendors and repositories, making such added expense a thing of the past.

CMIS, which is development platform and language agnostic, is designed to support existing content repositories, meaning that organizations will be able to unlock content they already have built up, in some cases, over several decades.  It will decouple Web services and content from the repository itself, thereby allowing organizations to manage content independently.  It also supports the development of composite applications and mash-ups.
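
In practice, the interoperability means one client can run unchanged code against different back ends, since CMIS exposes every compliant repository through the same query and object model. Below is a minimal sketch using cmislib, an open source Python CMIS client; the endpoint URL, credentials, path, and query are placeholder assumptions:

    # A minimal CMIS client sketch using cmislib, an open source Python
    # client for the standard; the endpoint URL, credentials, folder path,
    # and query below are placeholder assumptions for illustration.
    from cmislib import CmisClient

    # The same code runs against any CMIS-compliant repository
    # (Alfresco, SharePoint, Nuxeo, ...); only the URL changes.
    client = CmisClient('http://cms.example.com/cmisatom', 'user', 'secret')
    repo = client.defaultRepository

    # CMIS defines a SQL-like query language over typed content.
    for result in repo.query("SELECT cmis:name FROM cmis:document "
                             "WHERE CONTAINS('quarterly report')"):
        print(result.properties['cmis:name'])

    # Objects can also be addressed by path, independent of vendor.
    doc = repo.getObjectByPath('/Finance/2009/Q2-report.pdf')
    print(doc.getName())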

Currently, multiple vendors and platforms support CMIS, including Acquia, Alfresco, Day Software, Drupal, Ektron, EMC, FatWire, IBM, Joomla, Microsoft, Nuxeo, Open Text, Optaros, and Vignette (recently acquired by Open Text), among others.

Jonathan B. Spira is the CEO and Chief Analyst at Basex.

Information Overload – It Isn’t Just Too Much E-mail

Thursday, August 20th, 2009 by Jonathan Spira

One might assume that pinpointing the sources of Information Overload is relatively black and white, i.e., it’s just too much e-mail. In reality, nothing could be further from the truth.

The problem of Information Overload is multifaceted and impacts each and every organization, whether top executives and managers are aware of it or not. In addition to e-mail, Information Overload stems from the proliferation of content, growing use of social networking tools, unnecessary interruptions in the workplace, failed searches, new technologies that compete for the worker’s attention, and improved and ubiquitous connectivity (making workers available anytime, regardless of their location). Information Overload is harmful to employees in a variety of ways, as it lowers comprehension and concentration levels and adversely impacts work-life balance. Since almost no one is immune to its effects, a typical organization loses hundreds of thousands of hours, representing as much as 25% of the work day.

So what else besides e-mail overload is at issue here?  Here’s a quick rundown.

- Content
We have created billions of pictures, documents, videos, podcasts, blog posts, and tweets, yet if these remain unmanaged it will be impossible for anyone to make sense of any of this content, because we have no mechanism to separate the important from the mundane. Going forward, we face a monumental paradox. On the one hand, we have to ensure that what is important is somehow preserved. If we don’t preserve it, we are doing a disservice to generations to come; they won’t be able to learn from our mistakes as well as from the great breakthroughs and discoveries that have occurred. On the other hand, we are creating so much information of uncertain importance that we routinely keep everything. If we continue along this path, which we most certainly will, there is no question that we will require far superior filtering tools to manage that information.

- Social Networking
For better or worse, millions of people use a variety of social networking tools to inform their friends – and the world at large – about their activities, thoughts, and observations, right down to the mundane and the absurd. Not only are people busily engaged in creating such content, but each individual’s output may ultimately be received by dozens, if not thousands, of friends, acquaintances, or curious bystanders. Just do the math.

- Interruptions
We’ve covered this topic many times (http://www.basexblog.com/?s=unnecessary+interruptions), but our prime target is unnecessary interruptions and the recovery time (the time it takes the worker to get back to where he was) that each interruption causes, typically 10 to 20 times the duration of the interruption itself. It only takes a few such interruptions for a knowledge worker to lose an hour of his day (see the back-of-the-envelope arithmetic after this rundown).

- Searches
50% of all searches fail, and the searcher knows about the failure. What isn’t generally recognized, and what comes out of our research, is that 50% of the searches that appear to succeed have in fact failed, but the person doing the search didn’t realize it. As a result, that person uses information that is perhaps out of date, incorrect, or simply not the right data. This has a cascading effect that further propagates the incorrect information.

- New technologies
We crave shiny new technology toys, those devices that beep and flash for our attention, as well as shiny new software.  Each noise they emit takes us away from other work and propels us further down Distraction Road.  It’s a wonder we get any work done at all.  Even tools that have become part of the knowledge workers’ standard toolkit can be misused.  Examples here include e-mail (overuse of the reply-to-all function, gratuitous thank you notes, etc.) and instant messaging (sending an instant message to someone to see if he has received an e-mail).
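
The arithmetic behind the interruption and search figures above is worth making explicit. Below is a back-of-the-envelope sketch; the one-minute interruption and the count of four are illustrative assumptions, while the 10-20x recovery multiple and the two 50% figures come from the discussion above.

    # Back-of-the-envelope arithmetic for two of the figures above.
    # Assumptions: a one-minute interruption, four of them per day.
    # The 10-20x recovery multiple and the 50% figures come from the text.

    interruption_minutes = 1.0     # a quick "got a second?" ping
    recovery_multiples = (10, 20)  # recovery runs 10-20x the interruption
    cost = [interruption_minutes * (1 + m) for m in recovery_multiples]
    print(f"one interruption costs {cost[0]:.0f}-{cost[1]:.0f} minutes")
    print(f"four of them cost {4 * cost[0]:.0f}-{4 * cost[1]:.0f} minutes")

    # Searches: 50% visibly fail; of the apparent successes, half fail silently.
    true_success = 0.5 * 0.5
    print(f"effective search success rate: {true_success:.0%}")  # 25%

Four one-minute interruptions thus cost roughly 44 to 84 minutes, which is how a knowledge worker loses an hour to just a handful of them.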

Jonathan B. Spira is CEO and Chief Analyst at Basex.

Walter Cronkite – Before the Age of Information Overload

Friday, July 17th, 2009 by Jonathan Spira

The passing of Walter Cronkite, a man so closely associated with television news that the word for news anchor in several countries is a variation of Cronkiter, serves as a demarcation between an information age and the age of information overload.

For much of the 20th century, news was delivered once a day, first for 15 minutes and later for 30. That concept is foreign to generations that have grown up in an age of CNN and, later, the Internet. Even when Cronkite retired as managing editor of the CBS Evening News in 1981, the 24-hour news cycle barely existed (CNN, the first 24-hour news network, had been founded in 1980 but was still virtually unheard of at the time).

Most people in America expected to get their news, the good and the bad, from one person, Walter Cronkite (with the help of correspondents, of course). Given the tremendous fragmentation in the media today, with dozens of 24-hour news stations competing not only against one another but also against Internet-based sources, the phenomenon of a single news source is unlikely to recur, which also means that the world may never again see someone with the presence and stature that Cronkite had during his tenure.

Today people are used to a barrage of news and information, generally from people far less informed and insightful than Cronkite, which really is a shame. News programs today tend towards sensationalism, entertainment, and opinion – a far cry from the traditional values of in-depth reporting, verification, relevance, and context. Today, bloggers who masquerade as journalists post stories online that they are almost certain are not true, for the sole purpose of getting more hits on their sites.

Information can be a wonderful thing but too much information can have a toxic effect.  Regardless of the medium and technology, the solution to information overload is almost always better filtering systems.  For 19 years, Walter Cronkite was the filter for America’s news.  It’s unlikely we’ll ever see a better filter.

Jonathan B. Spira is CEO and Chief Analyst at Basex.

The New York Times’ Ironic Piece on Blogging

Tuesday, June 9th, 2009 by Jonathan Spira

The New York Times’ coverage of rumor-mongering in blogs is perhaps well intentioned, but ultimately the article is as flawed as the practice of printing pieces masquerading as news articles without the benefit of fact checking.

The fact that the writers of posts touting the possibility of Apple purchasing Twitter (which was completely unfounded) suspected the story was false and didn’t care does little to advance the cause of bloggers.  The fact that the Times journalist, Damon Darlin, seems to celebrate news gathering without fact checking – given recent events in the New York Times’ own history – is nothing short of irresponsible.

Bloggers (whatever that term actually means right now) claim they want to be recognized as a form of legitimate media but a formal admission that “‘Getting it right is expensive’ [but] ‘Getting it first is cheap’” is the modus operandi does little to advance the cause.

Jonathan B. Spira is the CEO and Chief Analyst at Basex.

The Googlification of Search

Thursday, March 19th, 2009 by Jonathan Spira

Google’s clean home page, combined with the simple search box, has made it easy to look up something online.  Indeed, using Google may just be too easy.

Google uses keyword search.  The concept sounds simple.  Type a few words into a search box and out come the answers.  Unfortunately, it isn’t that simple and it doesn’t really work that way.

Search is a 50-50 proposition. Perhaps 50% of the time, you will get what appear to be meaningful results from such a search. The other 50% of the time, you will get rubbish. If you’re lucky, that is.

Why does this only work some of the time? Because there are two types of searchers, or more accurately, two types of searches. One is keyword search; the second is category, or taxonomy, search.

It is possible to get incredibly precise search results with keyword search.  Indeed, there is no question that keyword search is a powerful search function.  Being able to enter any word, term, or phrase allows for great precision in some situations – and can result in an inability to find useful information in many others.

However, the use of a taxonomy, or categories, in search allows the knowledge worker to follow a path that both provides guidance and limits the number of extraneous search results returned. Using a taxonomy can improve search recall and precision due to the following factors (a toy example follows the list):

1.)    In keyword search, users simply do not construct their search terms to garner the best results.
2.)    Users also do not use enough keywords to narrow down the search.
3.)    Google’s search results reflect Google’s view of the importance of a Web page as determined by the company’s PageRank technology, which looks at the number of high-quality Web sites that link to a particular page.  This doesn’t necessarily mean that the first pages in the search results have the best content but only that they are the most popular.
4.)    Web site owners can manipulate Google and other search engine results through search engine optimization (SEO).  There is an entire industry built around this service and the use of SEO can dramatically impact the positioning of a Web site on the results page.
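
A toy example makes the keyword-versus-taxonomy distinction concrete. In the sketch below, which uses invented documents and category labels, a bare keyword query returns every sense of an ambiguous term, while constraining the same query to a taxonomy branch filters out the noise.

    # Toy sketch of keyword search vs. taxonomy-constrained search;
    # the documents and category labels are invented for illustration.
    docs = [
        {"title": "Jaguar posts quarterly results", "category": "business/automotive"},
        {"title": "Jaguar habitat shrinking in the Amazon", "category": "science/wildlife"},
        {"title": "Jaguar XF road test", "category": "business/automotive"},
    ]

    def keyword_search(term):
        # Bare keyword match: every document mentioning the term, any sense.
        return [d for d in docs if term.lower() in d["title"].lower()]

    def taxonomy_search(term, branch):
        # Same keywords, but constrained to one branch of the taxonomy.
        return [d for d in keyword_search(term) if d["category"].startswith(branch)]

    print(len(keyword_search("jaguar")))               # 3 hits, mixed senses
    print(len(taxonomy_search("jaguar", "science/")))  # 1 hit, the animal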

Unfortunately, in part thanks to Google’s ubiquity as well as its perceived ease of use, the concept of search to most people seems to equal keyword search.  As more and more Web sites and publications (the New York Times being one prominent example) move to a Google search platform, the ability to find relevant information may be compromised.

In the case of the New York Times, much of the functionality previously available disappeared when the Times deployed Google Custom Search.  Only those visitors who know to click on “advanced search” can specify a date range and whether they want to search by relevancy, newest first, or oldest first, although even the “advanced” search experience is still lacking compared to the Times’ earlier system.  Thanks to the Googlification of search, however, most visitors only access the search box, and their ability to find the answers they are seeking is hobbled by the system’s limitations.

Jonathan B. Spira is the CEO and Chief Analyst at Basex.

Rekindling the Flame – Amazon Introduces Kindle 2

Tuesday, February 10th, 2009 by Jonathan Spira

When the original Amazon Kindle was introduced, I tried very hard to like it. While there were many things that it did well (see my original review), the reader experience was ultimately unsatisfying. At the time of its introduction, however, the Kindle was certainly the latest and probably greatest eBook reader, a concept that goes back to Sony’s introduction of the Data Discman in 1990 and the Bookman in 1991.

The original Bookman weighed two pounds and could play full-length audio CDs. It was, essentially, an 80286-based, MS-DOS-compatible computer with a 4.5″ monochrome display. Even before the Bookman, Sony had introduced the Data Discman Electronic Book Player. The Discman weighed only 1.5 pounds, and books had to be created using the Sony Electronic Book Authoring System. Its three-hour battery life, relatively low resolution, and limited content greatly limited its utility and, ultimately, led to its lack of success.

All of these designs, including the newest Kindle, overlook the rather profound question of what makes for a satisfying book-reading experience.

It all boils down to the fact that reading a book is just that, something one does with paper.  No amount of searchable text, clickable links, and video wizardry will replace that experience, and putting a table of contents, page numbers, and an index around words that come to the reader electronically is a different reading experience.

Books also have other advantages, including a drop-proof, shock-proof chassis, extremely low power consumption, and a bulletproof operating system.

What we read from did migrate once before. By the end of antiquity, the codex had replaced the scroll.  The codex user interface was improved over time with the separation of words, use of capital letters, and the introduction of punctuation, as well as tables of contents and indices.  This worked so well, in fact, that 1500 years later, the format remains largely unchanged.

With the original Kindle, the reader experience, while light-years ahead of reading a book on a laptop, was still greatly lacking compared to the pleasure readers continue to derive from paper books (it appears we are at the cusp of having to create a retronym, “paper books,” to describe the non-eBook variety). My 1996 “invention” of the Lazerbook, an in-home device that printed books on demand on reusable paper, has still not been built, but I suspect that, were it to arrive on the scene today, readers would still prefer paper.

This week Amazon introduced Kindle 2. Units are not yet available for purchase or testing (although Amazon is accepting pre-orders now), but I suspect that I will like this Kindle a whole lot more. In addition to the new Kindle, Amazon said it would start to sell e-books that can be read on non-Kindle devices, including mobile phones. It also announced an exclusive short story by Stephen King.

Kindle 2, sporting a new design with round keys and a short, joystick-like controller, has seven times the memory of the original version, a sharper display, and it turns pages faster.  Despite these improvements, the price remains the same: $359.  At the launch, Amazon CEO Jeff Bezos told the audience that “our vision is every book, ever printed, in any language, all available in less than 60 seconds.”  Amazon also announced Whispersync, a feature that allows the reader to start a book on one Kindle and continue where he left off on another Kindle or supported mobile device.

Apple and Google, not traditional book publishers, represent the greatest challenge to the Kindle beyond, of course, the codex.  Google has, to date, scanned millions of books, many out of print and hence not easily available in traditional form.  Readers can find several e-book programs online for the iPhone and iPod Touch.

What will the future hold? Check with me in, say, 1500 years.

You can order the new Kindle from Amazon.

Jonathan B. Spira is the CEO and Chief Analyst at Basex.

Trying to Like the Amazon Kindle

Thursday, June 26th, 2008 by Jonathan Spira

If you are looking for an electronic book reader, the Amazon Kindle is head and shoulders above the competition. But the question really is: do you want an electronic book reader?

I really wanted to like the Kindle, with its high-resolution E Ink display that gives an almost print-like appearance, free wireless connectivity (limited to the U.S. because it uses Sprint’s EVDO network), and postmodern interpretation of, well, a book.

But I found the experience of reading a book or newspaper on the Kindle strangely unsatisfying.

At 10.3 ounces (without the cover), the Kindle felt heavier than a trade paperback book, although it is similarly sized. The E Ink technology takes a second to refresh when you change pages (it fades to black and blinks), which interrupts the flow of reading and is quite jarring. On the plus side, you can read the Kindle in direct sunlight, so the display technology has both strengths and weaknesses.

While reading a book on the Kindle was somewhat akin to reading a book on paper, reading a newspaper was unsettling if you like to scan stories, as the Kindle makes less than a full paragraph of a story visible at one time, on average.

Navigating through the Amazon.com store was relatively easy and a big plus of electronic book reader technology is that you can quickly download sample chapters of books you might want to read before making a purchase.

You can bookmark interesting or key passages and edit and export notes. You can also e-mail documents to the Kindle, including PDF files. The Kindle always saves your place so you can pick up where you left off. Newspapers, which are normally free on the Web, require a paid subscription on the Kindle (the New York Times costs $13.99 a month), so you are paying for convenience; many books (more than 130,000 are available) cost $9.99, a bargain. Finally, if you lose your Kindle as opposed to a throwaway paper or paperback book, well…

You can purchase the Kindle at Amazon.com.

Jonathan B. Spira is the CEO and Chief Analyst at Basex.

Lazerbook

Tuesday, April 2nd, 1996 by Jonathan Spira

The LazerBook is Basex’s foray into the future of book publishing and distribution and was conceptualized by Jonathan Spira. It does not, of course, exist today, and probably will not be practical for some time.

However, in carefully analyzing the direction that the book publishing industry must take, it has become apparent to us that the prediction that “books will disappear” is ill-advised. Most notably, pundits have predicted that books, as we now know them, will be replaced by electronic tablets, perhaps similar to the screen of a laptop computer. Sony, in fact, tried this approach with its ill-fated Bookman product, introduced in 1991. In our view, customers were predictably slow to turn to a pocket television-screen-sized device for their reading pleasure.

It is the last word, “pleasure,” that is perhaps most important to the concept of the LazerBook.  Books are enjoyable; they elicit a reaction, and the experience of reading a book is not limited to the words on a page.  There is a sensory experience also associated with reading a book.  Opening a musty, leather-bound tome gives rise to a heightened sense of adventure.  The binding itself adds to the reading event, as does the quality of the paper, the typeface used (and sometimes even specially designed for a particular work), and the ability to gauge the progress you are making, as the unread pages slowly diminish.

It is clear that the Bookman did little to emulate this experience.

What, then, would? Let us first consider that there are three broad categories of books on the market today: reference works (e.g., encyclopedias, travel books, collections of articles, cartoons, art books, etc.); works of non-fiction (such as biographies, business texts, and treatises on various matters); and fiction (which constitutes our traditional body of literature).

Reference book publishers are in the knowledge business; they compile knowledge, such as in-depth information on travel, which can then be resold to someone who requires such information. In the present distribution model, experts sell “information” to information warehousers (publishers), who create a medium for the information and resell it to information distributors (booksellers). Booksellers sell it to the book buyer (information consumer). Within the existing model of the World Wide Web, the expert can place his information online, available for direct purchase by the information consumer, thus bypassing publishers and booksellers. The information, however, does not yet form a traditional reference work; the output, perhaps printed on regular paper stock, limits the overall reading experience. In contrast to this model, LazerBook can compile a fully customized and bound travel guide on demand. Furthermore, the information consumer can purchase only the desired information.

Non-fiction works have a distribution model similar to reference works, with the exception that there is generally one author and it is a marketed item; unlike a reference work, which is a collection of information from different sources, a biography or treatise would usually be by one scholar who has in-depth knowledge of his subject.   LazerBook would produce the tome on demand and, after it was no longer wanted, recycle it.

The model varies slightly for works of fiction. A reader might wish to have an anthology of works in a genre, a collection of short stories by one author, or some other combination. Or the reader might desire a classic, bound novel. In any of these instances, LazerBook delivers the desired work to the reader, day or night, even a “rare” book, perhaps out of print, in an alternative distribution model.

The payment mechanism for LazerBook-produced products is likely to follow the e-cash scenario touted so highly today. The reader inserts his electronic purse and makes the purchase. It can be that simple. Alternatively, if the reader is a member of the LazerBook-of-the-Month club, he might receive pricing and benefits similar to the off-line Book-of-the-Month Club that exists today.

Many futurists argue that the computer will enhance the book-creation process, because it will facilitate reader involvement in the creation of a story.  This, however, changes the book into more of an on-line game. I believe that the LazerBook, like the traditional book, will have its story determined by the author, with little reader interaction.

Jonathan B. Spira is the CEO and Chief Analyst at Basex.

