
Cloud Burst – Dark Skies Ahead for Cloud Computing?

Wednesday, April 27th, 2011 by Jonathan Spira

Last week, Amazon’s side business of selling online computing resources suffered a major failure and businesses of all sizes were significantly impacted.

Time to come in from the storm?

In some cases, a company’s internal systems, as well as its Web sites, went down for several days; even as this is written, some sites are still affected.

Amazon, which entered the cloud computing business five years ago, has been a leader in a field that has become popular as more and more organizations look to move computing from their own data centers onto the Internet.

With cloud computing, companies essentially purchase raw computing power and storage.  They do not need to invest in computers or operating systems; that part is handled by Amazon and its competitors.

Two major models have emerged in this market.  One is utility computing à la Amazon; the other sells companies cloud technology that they own and manage themselves (the so-called private cloud).

In recent years, more and more data storage and processing have been off-loaded into the cloud without the appropriate precautions being taken to ensure data accessibility and redundancy.

Amazon has data centers in North America, Europe, and Asia, and the problems seem to have centered around a major data center in Northern Virginia.  As of last Thursday, dozens of companies ranging from Quora, a question-and-answer site, to Foursquare, a social networking site, reported downed Web sites, service interruptions, and an inability to access data stored in Amazon’s cloud.

Some companies using Amazon’s servers were unaffected because they had designed their systems to leverage Amazon’s redundant cloud architecture, which means that a malfunction in one data center does not render a system or Web site inaccessible.  Less sophisticated companies typically have neither the budget nor the know-how to do this, and they paid the price in the form of downed systems and sites.
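
As a rough sketch of what such a design involves, the Python snippet below illustrates the basic failover pattern: keep a replica reachable in a second data center and switch to it when the primary stops answering.  The host names and the /health endpoint are hypothetical; this shows the general pattern rather than Amazon’s actual architecture or APIs.

# Minimal failover sketch.  The host names and the /health endpoint are
# hypothetical; this shows the pattern, not Amazon's actual API.
from urllib.request import urlopen

ENDPOINTS = [
    "https://app.us-east.example.com",   # primary data center
    "https://app.us-west.example.com",   # replica in a second data center
]

def first_healthy_endpoint(endpoints, timeout=2):
    """Return the first endpoint whose health check responds, or None."""
    for base in endpoints:
        try:
            urlopen(base + "/health", timeout=timeout)
            return base
        except OSError:
            continue  # this data center is unreachable; try the next one
    return None

if __name__ == "__main__":
    active = first_healthy_endpoint(ENDPOINTS)
    print("Serving from", active if active else "nowhere: all data centers down")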

As of Saturday, April 23, (effectively, day 3 of the outage), Amazon’s status page continued to show problems in the Northern Virginia data center including “Instance connectivity, latency and error rates” in the Amazon Elastic Compute Cloud and “Database instance connectivity and latency issues” in the Amazon Relational Database service.

Visitors to the Web site of BigDoor, a software company, saw the following message on Saturday: “We’re still experiencing issues due to the current AWS outage.  Our publisher account site and API are recovering now, but apparently AWS thinks our corporate site is too awesome for you to see right now.”

At 4:09 EDT on the 25th, Amazon posted the following: “We have completed our remaining recovery efforts and though we’ve recovered nearly all of the stuck volumes, we’ve determined that a small number of volumes (0.07% of the volumes in our US-East Region) will not be fully recoverable. We’re in the process of contacting these customers.”

Even without such outages, moving your organization to a cloud computing-based architecture still entails risks.  Unlike typical scenarios where in-house IT staff control access to your sensitive information, a cloud computing provider may, by design or otherwise, allow privileged users access to this information.  Data is typically not segregated but stored alongside data from other companies.  In addition, a cloud computing provider could conceivably go out of business, suffer an outage, or be taken over, and all of these could impact data accessibility.

Amazon’s outage serves to reinforce that cloud computing is not immune from risk and is not the magic bullet that companies offering the service would like their customers to believe.  Rather, it is in many respects no different from any other distributed system, and distributed systems are nothing new: they have been around for decades, and IT professionals should have learned enough about fault tolerance and security by now.

Backing up and replicating data and applications across multiple sites should be old hat by now, but the Amazon incident proves this not to be the case.  Until we begin to think about cloud computing as being no different from any other type of computing, we will continue to experience major systems failures such as Amazon’s.

Jonathan B. Spira is CEO and Chief Analyst at Basex.

Is The Future of Work Less E-mail?

Thursday, April 21st, 2011 by Cody Burke

This week, Skype and GigaOM Pro released The Future of Workplaces, a study on the changing nature of work.  The report focuses on the increase in remote working and the ways in which changes in technology use are shaping that trend.  In conjunction with the report, Skype launched a series of video interviews on the subject of remote work and the changing workplace; one of the featured speakers is Basex’s chief analyst, Jonathan Spira.

The study’s overall conclusion, that remote work is on the upswing thanks to new technologies entering the business world from the consumer market, should come as no surprise to most knowledge workers.  Effective collaboration technologies have been sorely needed to address the complications of working across disparate time zones, between ad hoc teams, and across large enterprise environments that may lack effective knowledge-sharing and collaboration tools.  The rise of effective desktop video tools, VoIP calling, and increasingly powerful mobile devices supports a move away from what Spira calls “Dilbertian” work environments.

The study also contains some interesting statistics about Information Overload and communications tools, which bear some discussion in light of our own ongoing research in these areas.

According to the report, 42% of those surveyed felt that the workplace is increasingly suffering from Information Overload.  This correlates with our own survey data, which shows that over 50% of knowledge workers feel that the amount of information they are presented with on a daily basis is detrimental to getting their work done, and that 94% at some point have felt overwhelmed by information to the point of incapacitation.

Interestingly, the GigaOM study also found that 35% of respondents felt that e-mail was the number one contributor to Information Overload.  E-mail is clearly a large contributor, but when conceptualizing Information Overload, one must also consider non-technology sources, such as interruptions, meetings, and the impact of multitasking.

The narrative of late in the media is that e-mail is in decline, and the data shows that, although it is still a major business communications tool, there are signs that its prominence is slipping.  The Future of Workplaces study states that use of e-mail (as well as of the office landline) is likely to decline.  Only 35% of respondents said that they would use e-mail more in the future than they do presently, compared to 40% who said they now use it more than they did last year.  Although the concept of expected use is problematic and should be taken with a grain of salt, this points to a decline in the expected use of the tool; by contrast, almost all the other technologies studied showed higher rates of expected future use than present use.  Expected use of video conferencing and calling, VoIP calling, instant messaging, texting, and mobile phones all showed increases.  Interestingly, the numbers for social networking tools were essentially level: 29% said they were likely to use these tools more in the future, and 30% said they were already using them more than last year.

This may be a good sign, as a reduction in e-mail portends benefits for the knowledge worker.  However, caution is warranted here, because knowledge workers are likely simply to shift what they once sent via e-mail to another communications medium.  In some cases, such as an IM conversation about what to have for lunch, that is a good thing.  In other cases, such as a poorly filtered activity feed full of irrelevant information that must be sorted through, a move away from e-mail may actually introduce more Information Overload.

Cody Burke is a senior analyst at Basex.

The Impact of Interruptions and Multitasking On Knowledge Worker Efficiency and Effectiveness

Thursday, April 14th, 2011 by Cody Burke

So this is going to get worse as I get older?

Interruptions and multitasking are two afflictions that take a tremendous toll on our ability to focus, complete tasks, and be productive.  Our own research on interruptions shows that the recovery time, that is, the time it takes an individual to return to a task after being interrupted, can be as much as 10 to 20 times the length of the original interruption.  This means a 30 second interruption can result in an average of five minutes of recovery time, and that is optimistically assuming that one returns to the original task and does not abandon it.
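
To make the arithmetic concrete, here is a short back-of-the-envelope calculation using the figures above; the number of interruptions per day is an assumed value for illustration only.

# Back-of-the-envelope cost of interruptions, using the 10x recovery-time
# multiplier cited above.  interruptions_per_day is an assumed figure.
interruption_seconds = 30
recovery_multiplier = 10           # low end of the 10x-20x range
interruptions_per_day = 20         # hypothetical

recovery_seconds = interruption_seconds * recovery_multiplier       # 300 s = 5 min
lost_per_interruption = interruption_seconds + recovery_seconds     # 330 s
lost_per_day_minutes = interruptions_per_day * lost_per_interruption / 60

print(f"Recovery from one 30-second interruption: {recovery_seconds / 60:.0f} minutes")
print(f"Estimated time lost per day: {lost_per_day_minutes:.0f} minutes")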

It has already been established that the human brain cannot truly multitask with any efficiency; what passes for multitasking is really just a series of interruptions, or task switches.  Multitasking results in lowered efficiency in all of the tasks being performed: there is no substitute for focused thinking on a single task.

New research from the University of California, San Francisco, that was published this week in the Proceedings of the National Academy of Sciences shows that the impact of multitasking and interruptions on older people is even more pronounced.  The study took 20 young adults with an average age of 25, and 20 older adults, with an average age of 69, and showed both groups a landscape picture.  They were told to keep the picture in their mind, and were then shown an image of a face and were asked several questions about it.  Then the subjects were shown another landscape picture, and asked to determine if it matched the first picture they were shown.

While the subjects were being shown the images, their brains were scanned using an fMRI machine to measure brain activity.  Both groups were able to switch from the landscape picture to the face image with the same proficiency; however, the brain scans showed that the elderly subjects took longer to switch from thinking about the image of the face back to the landscape picture.  (The younger subjects were negatively impacted as well, but not as severely as the older subjects.)

Dubbed an “interruption recovery failure” by the researchers, the findings suggest that, as we age, our ability to recover from interruptions declines.  Another (albeit unlikely) interpretation is that cultural factors are also at work, such as the younger test group’s greater exposure to distraction and interruptions while growing up.

A critical outcome of the study was that the initial hypothesis, that older people experience more detrimental effects from interruptions because they fixate on the new interruption more than younger people do, proved false.  In fact, the degree to which the subjects switched focus to the interruption was the same regardless of age; it was the “interruption recovery failure,” or what we call recovery time, that set the groups apart.

For the knowledge worker, young or old, the study demonstrates not only the existence of the recovery time phenomenon but also that it may increase in severity with age.  We don’t yet fully understand the impact that excessive multitasking and interruptions have on the brain as it develops and ages, but we do know that there is a very real impact on brain activity, and we should redouble our efforts to reduce both the interruptions we are subjected to and those we inflict on others.

Cody Burke is a senior analyst at Basex.

Google +1: Does Search Need to be Social?

Thursday, April 7th, 2011 by Cody Burke

The searcher can see that one user has given the first search result a +1, and can click on the +1 button themselves to recommend any of the search results.

If you have a public Google profile, “Google +1” is a new feature that can be enabled on your account.  Essentially a clone of Facebook’s “Like” button, +1 is an attempt by Google to harness social search, an elusive concept that purports to improve one’s search results by factoring in the opinions and preferences of one’s social contacts.

Google +1 allows a user to recommend Web pages by clicking on the +1 button next to a result after conducting a search.  Google also plans to let site owners embed the +1 button directly on their own pages, much like Facebook’s increasingly ubiquitous Like button.  If the feature is enabled and the user is logged into his Google account, search results will highlight relevant pages to which the user’s contacts have given a +1.

Google’s motivation for providing this service is somewhat self-serving: by gathering even more information on user behavior and preferences, the company will be able to further refine its targeted advertising.  In theory, there should also be a benefit for the user, as more relevant results are elevated when other users, specifically one’s contacts, recommend them.  Additionally, the new feature may help to clamp down on Web spam by pushing relevant and useful pages higher in the search results.
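
Conceptually, the benefit rests on a simple re-ranking idea: results endorsed by one’s contacts get a boost.  The Python sketch below illustrates that idea with made-up URLs, scores, and a made-up boost value; it is not Google’s actual ranking code.

# Illustrative social re-ranking: results a contact has +1'd get a score boost.
# URLs, scores, and the boost value are made up for illustration.
def rerank(results, contact_endorsements, boost=0.5):
    """results: list of (url, base_score); contact_endorsements: url -> count."""
    def social_score(item):
        url, base_score = item
        return base_score + boost * contact_endorsements.get(url, 0)
    return sorted(results, key=social_score, reverse=True)

results = [("http://example.com/a", 1.0), ("http://example.com/b", 0.9)]
endorsements = {"http://example.com/b": 2}    # two contacts gave this page a +1
print(rerank(results, endorsements))          # /b now outranks /a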

Improving search is critical because unsuccessful searches represent a huge and costly problem: ca. 50% of all searches end in failure, and of those searches that the knowledge worker believes to have succeeded, a further 50% fail because they result in the knowledge worker unknowingly using incorrect or out-of-date information.  Searching also takes up ca. 10% of the knowledge worker’s day, representing a significant consumption of time.
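
Those two 50% figures compound, which is easy to miss; the short calculation below works through what they imply for a batch of 100 searches.

# Compounding the approximate failure figures cited above.
searches = 100
fail_outright = searches * 0.50                  # ~50 searches fail outright
apparent_successes = searches - fail_outright    # ~50 appear to succeed
bad_information = apparent_successes * 0.50      # ~25 rely on wrong or stale data
true_successes = apparent_successes - bad_information

print(f"Of {searches} searches, only about {true_successes:.0f} truly succeed")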

The ability of +1 to improve search results is not yet clear, and may be severely limited by the relatively small number of people who have public Google profiles and who choose to participate.  Facebook has had success with its “Like” feature, but the company also has a huge user base (600 million users as of January 2011), which has allowed it to export the “Like” metaphor around the Web.

It is important to note that there are virtually no privacy controls with +1.  Sites that a user gives approval to are viewable by anyone, because the feature requires a user to have a public Google profile.  This means users must keep in mind that there is a record of their +1 sites, which is accessible via a tab in their public Google profile.  Even Facebook, for all its failings, has privacy settings to prevent a completely unknown person from seeing a history of a given user’s “Likes.”

In what appears to be a complete lack of foresight on the part of Google, the +1 feature was announced on the same day that the company settled its privacy complaint with the FTC over its Buzz service, which drew outrage for automatically connecting people based on a user’s Gmail contact list.  While +1 is different in the sense that users must have opted in to a public Google profile, some lingering privacy concerns remain.

Like Google’s previous lackluster entries into social software, which include Buzz, Lively, Wave, and Orkut, +1 is a gamble.  We as consumers and knowledge workers hope that it can improve Google’s search results and decrease the search failure rate, but if past social experiments by Google are any indication, it may be a challenge to get +1 to catch on.

Cody Burke is a senior analyst at Basex.

