Enjoy It While It Lasts: E-mail Overload to Resume Next Week

August 30th, 2012 by David Goldes

Traffic resumes next week

In the last week of August, nothing seems to get done.  E-mail goes unanswered, meetings are rescheduled, and even my favorite local coffee shop is empty as people sneak away for their vacations.

Indeed, last week, Jonathan Spira’s commentary in this space was a mere 83 words long as he rushed out for a holiday trip.

I enjoy this time of year, but therein lies the rub.  Starting next week, all those same people will be back at work and back in their inboxes, refreshed and flush with a sense of false urgency.

But do we really need to bring those stress levels back to normal?

While you think about it, I’m outta here…

David M. Goldes is the president of Basex.

Gone Fishin’ – For Information

August 24th, 2012 by Jonathan Spira
E-mail free: the Lake Neusiedl, Burgenland, Austria

It’s August. The end of August. A Friday at the end of August, specifically.

The number of e-mails has declined dramatically as the number of people away from the office increases.

I practically shake my laptop to see if there are any new e-mails as so few are arriving this morning.

If nothing else, my experience, which I am told is not uncommon, does show that we do know how to disconnect.

And that’s what I am going to do right now…

(Picture: Jonathan Spira)

Limits on Recording Everything: Is the Genie Already Out of the Bottle?

August 17th, 2012 by Jonathan Spira

Ted Nelson, the inventor of hypertext, famously recorded numerous moments of his life on tape, video, notepads, and the like.  Nelson, who coined the term “hypertext” and whose work in the field dates back to the early 1960s, was ahead of his time not only in this respect but also in documenting his own life (he claimed that his reason for doing so was his poor memory).

An article in the New York Times this past week called my attention to a white paper by John Villasenor, a senior fellow at the Brookings Institution and electrical engineer by trade, entitled Recording Everything: Digital Storage as an Enabler of Authoritarian Governments.

While Mr. Villasenor’s point-of-departure relates to the potential for governmental abuse, I was far more interested in the fact that he quantified what I had long suspected, namely that the cost of storage has dropped to the point where anything and everything can be recorded.

The fact that we can is interesting.  But this raises the question: should we?

Most individuals now generate a vast amount of information each day.  Starting with our conversations and meetings, we move on to e-mail, text messages, social networks, website visits, and cameras.  Our activities, whether using a credit card, placing a phone call, or sending a text, create additional information (and record our location) on an ongoing basis.

Imagine if all of this were recorded centrally.

Mr. Villasenor estimates that merely storing the audio from a typical knowledge worker’s phone calls throughout a year would require 3.3 gigabytes and cost a mere 17 cents.  That figure, he points out, will drop to two cents by 2015.

Given his focus on authoritarian regimes, he points out that it would cost just $2.5 million to store one year’s worth of phone calls from every person above the age of 14 in Syria (which has a population of 15 million people over the age of 14).  While most of our readers are not planning to record the conversations of their fellow citizens, the $2.5 million figure is mind-numbingly low.
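Mr. Villasenor’s arithmetic is easy to verify.  The sketch below (the function name is mine, and the implied per-gigabyte price is simply backed out from his 17-cent figure) reproduces the $2.5 million estimate:

```python
# Rough reconstruction of Villasenor's storage arithmetic.  The 3.3 GB
# and 17-cent figures come from his paper as quoted above; the
# per-gigabyte price is backed out from them, an assumption on my part.
GB_PER_PERSON_PER_YEAR = 3.3
COST_PER_GB_USD = 0.17 / 3.3  # implied ~5 cents per GB

def cost_to_record(population: int) -> float:
    """Annual cost (USD) to store one year of phone-call audio."""
    return population * GB_PER_PERSON_PER_YEAR * COST_PER_GB_USD

# Syria's population over the age of 14, per the article
print(round(cost_to_record(15_000_000)))  # -> 2550000, i.e. ~$2.5 million
```

At 15 million people, the per-person pennies multiply out to exactly the figure he cites, which is what makes the number so striking.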

Clearly, things will not end with simply storing the data.  The question is what happens to the data afterwards.  We need to think through all the ramifications of gathering it, including security and privacy.  Despite their limitations, today’s search tools are more than capable of finding multiple needles in haystacks of recordings.  The question that intrigues me, however, is what the impact on an already overloaded society will be if and when we start to record our every movement.

Right now, doing so is a curiosity, something an eccentric such as Ted Nelson or a researcher at MIT can do but most mainstream knowledge workers couldn’t and wouldn’t.

There are numerous other issues here besides Information Overload, most prominent among them privacy and government overreach.  At the moment, since we’re at the very beginnings of gathering information on such a massive scale, society does not yet perceive this as a problem.  However, once we really start the ball rolling, we’ll most likely find that it’s impossible to put the genie back in the bottle.

Jonathan B. Spira is CEO and Chief Analyst at Basex and author of Overload! How Too Much Information Is Hazardous To Your Organization.

(Photo: Hannes Grobe)

Advertising to the Robots: Marketing Dollars Well Spent?

August 9th, 2012 by Cody Burke

The robot approves this message

Something strange is going on with social advertising. Two interesting stories in the past few weeks offer contrasting views: Google is buying social ad startup Wildfire, while in a different corner of the Internet, one company is pulling its social ads from Facebook after alleging that only 20% of clicks on its ads were from humans.

The optimistic view first. Google’s acquisition of software developer Wildfire will allow the company to deliver software and services to brands for running social marketing and ad campaigns on Facebook, Google+, LinkedIn, Pinterest, Twitter, and YouTube. Wildfire was Google’s second choice; the company bid on Buddy Media earlier this year but lost out to Salesforce. Interestingly, Wildfire only offers ads through its partner Adaptly, somewhat limiting the utility of the company’s offerings, although that is subject to change after the acquisition.

Google will enter the social ad space through the acquisition and will soon be positioned not only to sell services on its own Google+ platform but also to provide its customers with marketing tools that can be used on other companies’ platforms, which of course include Facebook.

But back to the pessimistic view of social ads. Facebook has stumbled as of late, with its share price nearly 45% below its initial $38 per share IPO price on May 18. Concerns about slowing user growth and the lack of an effective mobile strategy are spooking investors. Just days before the IPO, General Motors pulled the $10 million it was spending on Facebook ads, citing a failure of the ads to impact consumer purchases.

Now, Limited Run, a music industry-focused e-commerce startup, is pulling its ads from Facebook as well.  The company claims that, after analyzing who was clicking on its Facebook ads, it determined that nearly 80% of clicks came from automated bots, not real human potential customers.  As an experiment, Limited Run tried ad campaigns on various other social platforms and came up with the same results: only 15%-20% of clicks could be verified as coming from flesh-and-blood humans.

In July, the BBC conducted an experiment by setting up a fake company on Facebook, VirtualBagel.  The BBC team set up some ads and then left the page alone to see what would develop.  Within 24 hours, the fake company had over 1,600 likes; within a week, it had 3,000.  After analyzing the data, the team found that the majority of likes came from profiles of 13-to-17-year-old Egyptians with suspiciously similar and sometimes outright contradictory profile information, indicating the presence of nonhuman profiles.  For its part, Facebook admits that 9%, or ca. 54 million, of its profiles are fake, a number that is nothing to sneeze at in a market where every advertising dollar counts.
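Limited Run has not published its analytics code, so the log format and the field names below are purely illustrative assumptions; the one grounded detail is the heuristic itself, since the company reported that most suspect clicks came from clients that never executed the page’s JavaScript.  A minimal sketch of that kind of bot-rate estimate:

```python
# Hypothetical click log -- the schema is invented for illustration.
# The heuristic (bots rarely execute page JavaScript) matches what
# Limited Run described, but this is not the company's actual code.
clicks = [
    {"ip": "198.51.100.7", "ran_javascript": True},
    {"ip": "203.0.113.9",  "ran_javascript": False},
    {"ip": "203.0.113.10", "ran_javascript": False},
    {"ip": "198.51.100.8", "ran_javascript": True},
    {"ip": "203.0.113.11", "ran_javascript": False},
]

def human_click_rate(clicks: list) -> float:
    """Fraction of clicks that can be verified as human."""
    verified = sum(1 for c in clicks if c["ran_javascript"])
    return verified / len(clicks)

print(f"{human_click_rate(clicks):.0%}")  # -> 40%
```

On a real campaign, a verified-human rate of only 15%-20% (as Limited Run reported) would mean paying full price for an audience that is mostly machines.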

For Google, now entering the social ad space directly, it is critical not only to solve the automated bot problem but also to demonstrate the clear value of social ads.  Social everything has been hyped beyond belief, and as Facebook’s falling stock price suggests, there may be some value in being anti-social, at least when it comes to advertising.

Cody Burke is a senior analyst at Basex. He can be reached at cburke@basex.com

Tweeting Away Your Vacation

August 3rd, 2012 by Jonathan Spira

Even THEY get a vacation.

A vacation – at least as defined by the American Heritage Dictionary – is “a period of time devoted to pleasure, rest, or relaxation, especially one with pay granted to an employee.”

That means that, during a vacation, one takes a break from what is considered to be work the other 48-50 weeks a year.

A few years ago, people blogged – sometimes incessantly – about their vacations, typically after the fact. Now, people take their fans and followers along on the journey, a point somewhat driven home by a recent Wall Street Journal piece that focused on how those actively engaged in social media could not – in many cases – take a break.

As the article put it, “the chatter keeps flowing.”

There are two reasons for this, at least as far as the author of the piece was concerned:

  • Fans and followers won’t accept substitute tweeters and posters
  • So-called “power tweeters” risk losing traction with their readers

The problem is that, just as with information in general, the number of Facebook posts and tweets is growing by leaps and bounds.  One or two posts may be the equivalent of a needle in a haystack and will simply go unnoticed.

Douglas Quint, a co-founder of Big Gay Ice Cream, has over 37,500 followers, many of whom want to know where his ice cream truck is on a given day. “We need to appear active,” Mr. Quint says. “We want to appear in people’s Twitter feeds once or twice a day.”

Soon, that may not be enough. As quantity increases, social media posters fight to be noticed. That may mean that, where once just a few posts per day sufficed, following that practice now might not even get one noticed.

Andrew Zimmern, who hosts a Travel Channel program called Bizarre Foods with Andrew Zimmern, tweets as often as 50 times a day to his 410,800 followers.  But these numbers should give one pause: using these figures as an example, Zimmern generates what amounts to 20.5 million discrete messages in a single day.
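The 20.5 million figure is simply posts multiplied by followers, as a quick sketch shows (the function name is mine, and the count is a naive upper bound that assumes every follower sees every tweet once):

```python
def daily_impressions(posts_per_day: int, followers: int) -> int:
    """Naive upper bound: every post reaches every follower once."""
    return posts_per_day * followers

# Figures from the article: 50 tweets a day to 410,800 followers
print(daily_impressions(50, 410_800))  # -> 20540000, i.e. ~20.5 million
```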

Who has time to follow someone who can post 50 messages a day? For that matter, who has time to post 50 messages a day? This article reminded me of one thing – why I’ve stayed away from Twitter. The temptation is great. It would be easy to get sucked in. But once that happens, I suspect it’s the opposite of a Roach Motel: messages go out but nothing meaningful comes in.

Jonathan B. Spira is CEO and Chief Analyst at Basex and author of Overload! How Too Much Information Is Hazardous To Your Organization.

Trouble Finding Time to Complete Tasks? Take a Deep Breath…

July 25th, 2012 by Jonathan Spira

Breathe and go to your happy place...

A knowledge worker’s effectiveness depends on completing assignments. Sometimes, work just gets in the way of getting things done.

I’ve been thinking a lot about work. Specifically, what I’ve been contemplating is an increasing inability to spend substantive time writing during the day.

Perhaps it’s indicative of the problem that I am writing this at 10 p.m. Monday evening. I thought about starting to write this several times during the day, but work got in the way.

Of course, for me, writing is my work, so what happened? It isn’t writer’s block; indeed, I have plenty to say (and write).

Let’s look at when I was, and was not, productive for a moment. Over the weekend, I was able to spend a total of six or so relatively quiet hours writing. I finished three articles.

It’s not that I was constantly being interrupted during the workdays of the previous week. Today, with the prevalence of e-mail, my phone rings only once or twice a day.

During the normal workday, the typical nine-to-five, I find that many people are exchanging information and messages in an automaton-like fashion. They equate volume with a depth of understanding, facts with knowledge. They are deluding themselves.

Looking at the world in this fashion is not only misguided but wrong. But I digress.

Our jobs are not to move mounds of information from one pile to another. As knowledge workers, it is our job to digest information and extract a kind of wisdom from it.

Nonetheless, it’s easy to get caught up with my fellow automatons out there pushing out information, and the day is over before you know it.

Even though I’ve cut back tremendously on my sources of information, I still find that my curiosity gets the best of me. There are so many things to be curious about and so much is available with the tap of a few keys that one can get lost in an abyss without even trying.

A few years ago, I scaled back from two 22” LCD monitors to the single 13” display built into my laptop.  I had thought I was being far more efficient and effective with wall-to-wall information, but I was fooling myself: I was simply giving myself more ways to succumb to distraction.

Last year, I cut back on my news intake – and found that the absence of a constant barrage of small bits of news I really didn’t need gave me back the ability to concentrate more effectively.

This year, I find myself fighting to regain more time for thought and reflection. Alert readers will recall that we found that only 5% of the day is typically available to a knowledge worker for thinking and reflecting. That isn’t nearly enough time for a workforce that, essentially, thinks for a living.

What is starting to work for me is practicing deep breathing techniques.  By doing so, I basically turn off all of my thoughts (it was a struggle at first, but it becomes easier with practice) and focus on my breathing.  I feel more relaxed and more focused afterwards; the result for me was a productive weekend of writing.

Jonathan B. Spira is CEO and Chief Analyst at Basex and author of Overload! How Too Much Information Is Hazardous To Your Organization.

Office 2013: Microsoft Goes All In On Mobile And The Cloud

July 20th, 2012 by Cody Burke

Will this office cloud be different?

Microsoft unveiled the latest version of its flagship desktop productivity suite this week, and took aim at more than a few of its challengers (namely Apple and Google) in the industry with new features and functionality.

Essentially, Microsoft is using three main areas to separate itself from potential competitors: cloud storage, mobile access, and new features via integrations.  There is a host of other new bells and whistles, but from a strategic standpoint, the key developments in the new release are improvements in those areas.

Microsoft has tightly tied Office 2013 to its SkyDrive cloud storage offering, allowing automatic syncing and access to files from any computer.  The suite will ship with the SkyDrive app, which enables syncing of files between multiple devices and the cloud drive.  Hopefully, this will help prevent a major problem in document management, namely losing track of multiple versions of documents.

Office 2013 will also be available via Office 365, the company’s cloud-based subscription version of the suite.  Unlike previous iterations of Office 365, which were based completely on cloud access, this version lets users download the full applications to multiple computers, all of which sync files via SkyDrive.

Microsoft started down this road with Office 2010, when the company introduced basic browser-based versions of Word, PowerPoint, and Excel.  Today, the goal is to keep users and businesses from being tempted by Google’s productivity offerings, which were built from the ground up with cloud access in mind, and by Apple’s iCloud services, which offer content syncing for its popular iOS devices.

In the mobile arena, Office 2013 (as well as the upcoming Windows 8 operating system) has been optimized for touch interfaces and tablets, with relatively small interface changes such as having the ribbon disappear when not in use, thus preserving screen real estate, and the addition of a full onscreen numeric keypad for working in Excel.  According to early reviews (full disclosure: I have not used it), the onscreen keyboard functions well, and the touch gestures introduced in Windows 8 make using Office 2013 apps on a tablet a viable option rather than an emergency back-up plan.

Further addressing the mobile market, Office 2013 comes in a version that will run on ARM chip-based devices, such as the base model of the company’s recently announced Surface tablet.  The Office 2013 RT version is to be included on future ARM chip-based tablets running Windows 8 RT.  Surprisingly, no iPad version of Office 2013 was announced, although rumors persist that Microsoft is working on bringing the suite to iOS.

Integration with collaborative tools also plays a major role in Office 2013.  Social networking functionality from Yammer, a company Microsoft recently acquired, will be bundled into the suite to introduce activity streams and document sharing into the applications.  Microsoft has also announced that it will leverage technology from Skype to provide presence functionality within its Lync communications platform.

Due to its long history in the desktop productivity space and its extremely large established user base, at this time, Microsoft still appears to be the most capable of the major technology players to deliver a complete desktop productivity offering.  It is also clear that the company is not resting on its laurels and that with Office 2013 it is betting big on a future of computing that values cloud access and mobility (a vision that Apple and Google have already embraced).  We will report back with a full review of the new features of the applications soon.

Cody Burke is a senior analyst at Basex.

Warning: The Cloud Isn’t All It’s Cracked Up To Be

June 30th, 2012 by Jonathan Spira

This is not the first time I am writing this column.

Cloudy forecast for cloud computing?

While I last tackled the topic of Amazon’s outages some 14 months ago, when Amazon’s cloud-based data center service went down in a big way, Amazon has since had several more highly publicized outages, most recently in April and in the middle of June.

These outages impact businesses and consumers alike.  Home movie fans who tried to use Netflix last Friday were disappointed.  Instagram users were in shock because they couldn’t share photos.  And Amazon has thousands of other customers, many of whom found themselves in the same boat.

This time, the problems were caused by the severe East Coast storms that left over two million people without power, but the cause doesn’t matter.  It’s the lack of preparedness for dealing with storms and outages that worries me.

If you’re thinking, “oh, but this is in the cloud,” I have some news for you.  The cloud has to have an earthly connection somewhere, and redundancy doesn’t seem to be Amazon’s strong suit (or that of its customers).  The failure occurred at an Amazon location in Virginia, and that’s where many of these companies had their data.  They didn’t seem to think it made sense to put it in a second place, perhaps for safekeeping or in case a storm blew in and knocked one location offline.

Amazon did do a good job of updating its status messages with somewhat terse language (example: we are “investigating elevated error rates impacting a limited number” of customers) but that didn’t bring the data center back online any faster.

The problems weren’t over Saturday morning, although the company said that some of its servers and services were back online.  “We are continuing our recovery efforts for the remaining EC2 instances,” the company posted shortly before noon.

If you were an Amazon customer, these messages were all you would get.  Presumably your websites were offline as were any services, so you had to hope that Twitter wasn’t using Amazon’s cloud in Virginia that day as that’s how many Amazon customers were telling their own customers about the outage.

The point of my bringing this up is simple and something I’ve said before, numerous times: more and more data storage and processing have been off-loaded into the cloud without the appropriate precautions being taken to ensure data accessibility and redundancy.

I’m sure that we’ll see people questioning the viability of cloud-based services and storage because of this occurrence – but taking steps to ensure that a single outage doesn’t take thousands of companies down in one fell swoop seems to be quickly forgotten until the next time it happens.

Ensuring uptime isn’t up to Amazon.  If you’re an Amazon customer, you’re clearly on your own – and you need to be working on a plan of your own.  If you don’t make one, your customers will, by going somewhere else.

Jonathan B. Spira is CEO and Chief Analyst at Basex and author of Overload! How Too Much Information Is Hazardous To Your Organization.

Google X Lab Hunts YouTube for Cats

June 29th, 2012 by Cody Burke

That computer is watching me...

When left to its own devices, what does one of the largest neural networks for machine learning in the world use its 16,000 computer processors and one billion connections to do?  Solve complex environmental problems?  Crunch scientific data to illuminate the mysteries of deep space?  No and no.  It turns out that the huge, powerful neural network taught itself to recognize cats (and humans) in only three days.

Researchers from Google and Stanford working at Google’s secretive X Lab connected 16,000 processors (still far short of a human’s estimated 80 billion neurons) and fed the neural network digital images extracted at random from 10 million YouTube videos. The machine was then left alone to learn what it could with no instruction on how to proceed. For three days, the network pored through the images, making connections and finding commonalities between objects. Next, the researchers attempted to see what the computer could identify from a list of 20,000 items.

By learning from the most commonly occurring images, the computer was able to achieve 81.7% accuracy in identifying human faces, 76.7% accuracy in identifying human body parts, and 74.8% accuracy in identifying cats.  These figures represent a 70% jump in accuracy compared to previous studies.

The network constructed a rough image of what a cat would look like by extracting general features as it was exposed to the 10 million images, in much the same way that a human brain uses repeated firing of specific neurons in the visual cortex to train itself to recognize a particular face. What the experiment proved is that it is possible for the computer to learn what something is without it being labeled; the neural network created the concept of both humans and cats without being prompted.

Machine learning uses algorithms that allow computers to evolve behavior based on data, recognizing complex patterns and making intelligent, data-based decisions.  The problem is that, when faced with complex problems and large data sets, it is next to impossible to cover all possible variables, so the system must generalize from the data it has.  In the case of the cats, Google’s neural network was able to generalize what humans and cats look like with relatively high accuracy.
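The X Lab network is vastly larger and more sophisticated, but the core idea of discovering categories in unlabeled data can be illustrated with a toy k-means clustering in pure Python; nothing here reflects the actual architecture of Google’s system, only the principle that structure can emerge without labels:

```python
# Toy unsupervised learning: the algorithm is never told which group a
# point belongs to, yet it recovers the two underlying "concepts".
def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            groups[i].append(p)
        # Update step: move each center to the mean of its group
        centers = [tuple(sum(c) / len(g) for c in zip(*g))
                   for g in groups if g]
    return centers, groups

# Two unlabeled clusters: points near (0, 0) and points near (10, 10)
points = [(0, 1), (1, 0), (1, 1), (10, 9), (9, 10), (10, 10)]
centers, groups = kmeans(points, centers=[(0, 0), (5, 5)])
print(centers)  # centers converge near (0.67, 0.67) and (9.67, 9.67)
```

Google’s network performed the same kind of trick at an incomparably larger scale, forming the “cat” and “human” categories from 10 million frames rather than six points.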

The potential applications of this kind of neural network are broad. Speech and facial recognition, as well as translation software would benefit from machine learning that only requires vast amounts of data with no hints or guidance from human operators. The current Big Data movement is providing huge quantities of data, and it is encouraging to know that there may be actual applications for it.

Reflecting the confidence that Google has in the project, the company is moving the neural network research out of X Lab and into its division charged with search and related services. Expect to see larger neural networks with even higher accuracy rates constructed in the near future. Hopefully we can move on to using them for something more important… perhaps even recognizing dogs?

Cody Burke is a senior analyst at Basex. He can be reached at cburke@basex.com

(Photo: Stavrolo)

Microsoft Gets in the Tablet Game

June 21st, 2012 by Cody Burke

What am I? Tablet or laptop?

Against the backdrop of continued Apple dominance of the tablet market, Microsoft has thrown its hat in the ring with the Microsoft Surface.  The new tablet comes in two models: a slimmed-down version running Windows RT (the company’s simplified mobile Windows for tablets) and a professional offering that runs full Windows 8.  Both enter a crowded but lopsided tablet market in which the iPad dominates the full-size tablet space and a confusing plethora of Android tablets fight for the scraps.

Taking cues from Apple, Microsoft staged the announcement with much secrecy leading up to the event and, by all accounts, put on a good show for the eager crowd.  This is Microsoft’s first attempt to build its own tablet hardware; until now, the company has relied on third-party hardware developers, which are set to release multiple upcoming Windows 8 tablets.  Microsoft’s last attempt to take over hardware design on its own and do battle with Apple was the ill-fated Zune, which, despite positive reviews for its design and functionality, never gained enough market share and now lives on only as software.

Although the new Surface tablets looked good in the demos, many questions remain about the devices.  What we do know is that both models feature a 10.6”, 16:9 “ClearType” 1920 x 1080 display, front- and rear-facing cameras, an SD slot, a USB port, magnesium casing, Gorilla Glass, and a kickstand for landscape view.  Wi-Fi is enabled via a 2×2 MIMO antenna.  The Surface for Windows RT is 9.3 mm thick (the iPad is 9.4 mm), weighs 0.68 kg, and runs on an ARM processor.  It runs Metro apps, so it will deliver a mobile experience comparable to competing Android tablets and the iPad.

The Surface also features a pressure-sensitive keyboard hidden in the tablet’s cover, called Type Cover. When closed, the cover looks similar to the iPad’s Smart Cover, but when open, a full keyboard is revealed on the inside. Using the tablet’s kickstand allows the screen to be propped up in landscape orientation with the keyboard rolled out in front, approximating the experience of working on an Ultrabook or netbook.

Most intriguing is the Surface for Windows 8 Pro, which runs full Windows 8 on an Intel Ivy Bridge Core i5 x86 processor. It is slightly thicker at 13.5 mm, adds a USB 3.0 port, and is available with either 64 or 128 GB of storage. The critical point about the professional Surface tablet is that it is essentially a full-featured PC in a tablet form factor. When combined with the Type Cover, the Surface is almost bypassing the iPad to go up against the MacBook Air and other Ultrabooks (for better or worse). If the typing experience is good, that is a win for Microsoft as it adds functionality that the iPad currently lacks and allows it to attempt to woo mobile workers. If Type Cover turns out to be gimmicky and not suited for real work, then the Surface will remain in the tablet arena.

Running a full version of Windows 8 allows the device to run Office, Photoshop, or any other professional desktop application. Not having to depend on mobile apps could be a game changer, particularly since Windows 8 has been designed from the ground up as a touch-screen friendly iteration of Windows. Past Windows tablets have struggled with trying to squeeze the desktop metaphor onto a touchscreen tablet, with poor results.

The question now is who will actually buy the Surface. The base model may struggle to compete with Android tablets and the iPad unless it truly brings something new to the tablet experience (I’m not sure the fold out keyboard will be enough). The professional model however, if it does indeed provide a full Windows 8 workspace and the ability to run desktop applications, may be able to differentiate itself from the iPad in ways that the BlackBerry Playbook, HP TouchPad, and the legions of Android tablets have not been able to.

Regardless, the Surface is a significant and bold move for Microsoft, and you have to respect the audacity of going up against the dominant iPad and maybe, just maybe, coming out ahead on some features.  For Microsoft’s (and consumers’) sake, let’s hope that the Surface is not a repeat of the Zune.

Cody Burke is a senior analyst at Basex. He can be reached at cburke@basex.com
