Open Source

It’s only a matter of time until the consumerization of IT bleeds over from your non-IT employees into your IT department. While this may sound far-fetched, iPad-like systems, such as appliances or workload optimized systems, are already finding a foothold in your datacenter, and the trend is not about to stop.

Consumerization of IT is here to stay
As InfoWorld’s Galen Gruman explains, the consumerization of IT is in full force, with employees choosing hardware and software that best meet their needs without regard for corporate IT standards. The trend started well before iPhones and iPads made their way into the enterprise, but these technologies are important because they highlight the choices being made by employees. These choices are often markedly different from the choices an IT professional would typically make when making corporate purchase decisions.

These technologies offered fewer choices and were less open than the alternatives already in use within a given IT department. Terms like walled garden or lock-in were often associated with the iPhone and iPad in their early days of enterprise usage. In many respects, these concerns still apply. And yet, these technologies have somehow found their way onto the corporate standard list. This doesn’t mean they are the preferred technologies in every case, but they have a role to play within today’s modern IT department.

The value versus control spectrum
Consumerization of IT hits close to home for me. I started to type this post on my iPad and then later on my MacBook Air. I use both for work purposes at varying times, and both are my personal devices.

It occurred to me that in choosing an iPad and a MacBook Air, I made choices I’d never have expected to make even two years ago.

For the better part of 15 years, I’d purchased hardware and software that I could tinker with and had broad control over. However, the “it just works” nature of the iPad and the performance, portability and, yes, aesthetics of the MacBook Air became important decision factors.

By moving to a Mac after years on a PC, most of my applications, tools and custom scripts stopped being useful. I have fewer application choices and much lower configurability on my MacBook Air and iPad. It wasn’t a painless transition. I still need to keep a Windows 7 and VMware Fusion license around because my tax program of choice only supports Windows.

However, the value I perceived from a simpler-to-use and better-integrated system helped me get over my historical approach to IT systems and software. I highly doubt that I’m alone in this progression along the spectrum of control and configurability versus integrated-system ease of use and performance.

Growing use of appliances and workload optimized systems in datacenters
The very same concerns I had when considering an iPad or MacBook Air are relevant for IT professionals tasked with doing more with less. The notion of giving up control and choice is often viewed in a negative light by IT professionals. But when the value of a workload optimized system is considered, especially if it’s based on open standards, the attractiveness of these systems begins to outweigh the reduced control and configurability.

The very same professionals reading this blog and running countless IT departments are happily toting iPhones, iPads, Samsung Galaxy Tabs or MacBook Airs. The ease of use and performance at certain tasks that these integrated systems provide is bound to affect standards decisions in their IT roles.

Think about all the time and effort spent on building systems from piece parts, applying fixes and upgrades to individual pieces of the system. How much more valuable work could you do for your company if you didn’t spend hours or months on these tasks? How much time do you spend keeping your iPad up to date? Virtually no time at all.

This idea clicked for me a few months ago, and it’ll take hold with more and more IT professionals. Some will ignore the logical conclusions, while others will question whether their current approach to building, maintaining and upgrading systems is optimal for every situation. Note, however, that there is no reason to think the growing use of workload optimized systems means the end of the custom-built systems market. Both types of systems have a role to play in a modern datacenter.

For instance, appliances are already a growing part of the IT landscape. IT has long been comfortable with appliances for important, but non-differentiating layers of the IT stack, such as firewalls.

Customers are increasingly looking at appliances for higher value IT capabilities like business analytics. Oracle’s Exadata and IBM’s Netezza TwinFin are two appliances that have been growing by narrowing choice and configurability while optimizing for a particular task. In fact, Oracle made a point of highlighting the growth of Exadata as a bright spot in an otherwise disappointing quarter.

While we’re likely decades away from replacing your systems of choice with a big fat tablet device, the consumerization of IT will increase the willingness of IT professionals to adopt appliances and appliance-like systems in enterprise datacenters. Is your IT department ready for this shift?

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

With the holiday season upon us, and tablets at the top of many gift lists, it’s all but certain that millions of new users will get exposed to an open source-based Android tablet. By all accounts, Amazon’s Kindle Fire is expected to leapfrog into at least the number-two position in the tablet market. While this would appear to be good news for Android tablets and the Android OS, it may actually be exactly what Apple and Microsoft asked for this Christmas (or any other holiday these companies choose to celebrate).

Great price and Amazon content versus clunky user experience
I’m not going to do a blow-by-blow review of the Kindle Fire. Glen has a good review of the Kindle Fire versus the Apple iPad. I’d also recommend Instapaper developer Marco Arment’s review of the Kindle Fire from a user-experience standpoint.

The first common thread across reviews is that the price of a Kindle tablet, at $199, can’t be beat. Some have referred to the Kindle Fire as the people’s tablet.

Second, reviews are virtually unanimous that the Kindle Fire is great when restricted to Amazon’s content, even if some magazines aren’t optimal for a 7-inch screen. The Kindle Fire becomes less attractive as users venture outside of Amazon’s content garden. Even the new Silk browser, touted to speed up on-device browsing, appears to be a letdown.

Finally, many reviews describe a less than delightful user experience while using the Kindle Fire operating system and user interface. The Kindle Fire OS responsiveness is said to lag user input, sometimes forcing users to redo an action only to find that the first input was in fact registered.

The 7-inch form factor, while easier to hold than a 10-inch tablet, presents the added complication of smaller targets for users to press in order to carry out their intended tasks. One of Arment’s issues with the Kindle Fire interface is that “Many touch targets throughout the interface are too small, and I miss a lot. It’s often hard to distinguish a miss from interface lag.”

Like it or not, iPad is Kindle Fire’s comparison
There are many older users who don’t need a laptop and could benefit from a small, moderately priced tablet for email, browsing and reading. A Kindle Fire seems like a great solution. It’s likely that many in this cohort will receive a Kindle Fire from a well-meaning family member or friend. In fact, my wife suggested getting a Kindle Fire for several retired members of our family.

However, the usability issues that Arment brings up, especially the interface lag and smaller touch targets, will undoubtedly have an impact on whether they use the device or store it away with that interesting-looking tie received over the holidays.

It seems that a lack of comfort with new computing devices, fat thumbs and poor eyesight (something we all have to look forward to) aren’t great ingredients for being delighted with the Kindle Fire.

Even younger users, many of whom own or have used an iPod touch or iPhone, are at risk of being annoyed by the lag and user interface roughness of the Kindle Fire.

Some have argued that you can’t compare a $499 iPad with a $199 Kindle Fire. That’s true, on paper. In practice, users are going to compare their Kindle Fire experience with an iPad. There isn’t a tablet market; there’s an iPad market. It’s the reason most Kindle Fire reviews compare it to the leading entry in the market, the iPad, and not to other Android or 7-inch tablets.

A poor Kindle Fire experience reflects on Android
When the Kindle Fire is perceived to deliver a less enjoyable experience than an iPad, the real risk is that the Android tablet market will be viewed in the same light as the Kindle Fire. That may not be fair, considering Amazon has forked the Android OS and Android continues to get better. However, since the Kindle Fire is expected to reach vastly more users than other Android tablets, and considering Amazon’s reach, don’t be surprised if typical users generalize their Kindle Fire experience to Android tablets.

Earlier this week Bloomberg BusinessWeek’s Ashlee Vance wrote on his Twitter feed: “Just opened up the old Kindle Fire. Android sure has a Windows 3.0 feel, dunnit?”

That is exactly the type of comment that should make Apple happy and give Microsoft a faint hope in their tablet plans. If Amazon, with its great content and proven track record with Kindle devices, can’t pull off a device users prefer to an iPad, then what’s the likelihood that any Android vendor can?

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

While the DevOps movement is centered on reducing the friction between developers and operations teams and the ability for developers to deliver applications faster, neither outcome is possible without standardization. But are developers and operations teams ready to agree on standard environments?

Why developers and operations teams are focused on DevOps
A recent post from Matt Asay, now at cloud vendor Nodeable, links to a survey from Puppet Labs of 750-plus respondents. While DevOps is often considered a developer-led movement, nearly four times as many operations staff and administrators responded to the survey as developers. Surprisingly, as Matt points out, operations teams appear to see the same potential benefits from DevOps as developers.

In the survey, Puppet Labs finds that 55 percent of respondents ranked the automation of configuration and management tasks as the top benefit expected from the DevOps movement. Another 13 percent ranked it among their top three expected benefits.

You can’t automate what you can’t standardize
I’ve been spending time learning more about the interaction between developers and operations teams. One thing I’ve come to understand is that these two groups tend to think differently, even if they are using the same words and nodding in agreement.

It’s no surprise that developers want to adopt tools and processes that allow them to become more efficient at delivering net new applications and continuous updates to existing applications. Today, these two tasks are hindered, to a degree, by the operations teams who are responsible for production environments. As Matt points out in his post, developers seek automation, which, as an aside, is a reason enterprise developers have been very open to using public clouds.

Operations and administrations teams, as the Puppet Labs survey shows, are also drawn to the automation of manual tasks they must do today, some of which result in the delays that developers experience when they’re waiting on the operations team to provision new resources or promote applications into production.

While both sides of the DevOps coin value automation, neither is explicitly calling out the fact that you can’t automate without standardization. Well, maybe “can’t” is too strong a word. In IT we can do pretty much anything with enough elbow grease.

But ask yourself: could you automate the provisioning of a development environment, or the deployment of a middleware stack, without standardizing what exactly is being automated? No, not really. Operations teams understand this; it’s why they promote enterprise-wide standards. In most cases, these standards, whether for operating systems, hypervisors, databases, application servers or the like, allow for some limited degree of variability. For instance, many IT shops support Linux and Windows, but if you want Solaris, sorry, you’re outside of the corporate standard.
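To make that concrete, here’s a minimal sketch, with entirely invented component names, of the gate that has to exist before any automated provisioning can run: checking a requested stack against the corporate standard.

```python
# Hypothetical corporate standard with limited variability per component.
# Names are illustrative, not any real IT shop's catalog.
CORPORATE_STANDARD = {
    "os": {"linux", "windows"},           # Solaris falls outside the standard
    "app_server": {"websphere", "tomcat"},
    "database": {"db2", "mysql"},
}

def validate_request(request: dict) -> list:
    """Return the (component, choice) pairs that fall outside the standard."""
    violations = []
    for component, choice in request.items():
        allowed = CORPORATE_STANDARD.get(component)
        if allowed is None or choice not in allowed:
            violations.append((component, choice))
    return violations

# A developer asking for Solaris gets stopped before automation even starts.
print(validate_request({"os": "solaris", "database": "db2"}))  # [('os', 'solaris')]
```

The point of the sketch is that the automation step is trivial once the standard exists; the hard part is getting development and operations to agree on the contents of `CORPORATE_STANDARD`.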

This is the environment developers are working in today: you can select among enterprise-wide standards. And yet, this is the exact environment that is frustrating developers. Sometimes the corporate standard doesn’t fit the developer’s skills or interests, or the project’s needs. Hearing “tough luck, champ” is what’s driving developers toward DevOps. However, developers automating IT stacks that don’t fit the corporate standard won’t help bridge the gap between development and operations.

What’s needed is a joint agreement, across development and operations, on the acceptable environments that make up the corporate standard, and which are, in turn, automated.

This isn’t a technical issue. It’s a cultural issue. Before your teams push forward a DevOps initiative, get them beyond the buzzwords and into the nitty-gritty around corporate standards, degrees of variability and if or how to allow developers to experiment outside of the corporate standard. If you don’t, you’ll have developers and operations expecting different outcomes while using similar words.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

The Open Compute Project Foundation recently announced results from Facebook’s attempts to build an efficient data center at the lowest possible cost. The foundation claims to have reduced the cost of building a data center by 24 percent and improved ongoing efficiency by 38 percent versus state-of-the-art data centers.

Open Compute Project design specifications
The Open Compute Project foundation released design specifications for servers and data center technology earlier this week.

The servers themselves fit into a chassis that is slightly taller than a 1.5U standard server chassis. The servers can use either an Intel or AMD motherboard. The v2.0 Intel specification provides double the compute density of v1.0, using two next-generation Sandy Bridge-based Intel processors per board. The v2.0 AMD specification also doubles compute density, with support for two AMD G34 Magny-Cours or Interlagos processors per board.

Open Compute servers are racked into three adjoining 42U racks, dubbed Triplets. Each rack column contains 30 Open Compute Project servers, for a total of 90 servers per Triplet. Each rack column has two top of rack switches.

A battery pack rack cabinet sits between a pair of Triplets providing DC power in the event of loss of AC power.

Bringing deep data center engineering skills to the masses
By releasing the cost savings figures and, more importantly, the underlying hardware specifications for the motherboards, power supply and chassis, the foundation hopes to bring efficient, lower-cost data centers to companies that don’t have the engineering depth of a Facebook, Google, or Amazon.

Facebook deserves kudos for its work on the project. Getting together a board of directors including Andy Bechtolsheim from Arista Networks, Don Duet from Goldman Sachs, Mark Roenigk from Rackspace and Jason Waxman from Intel couldn’t have been easy. Then again, cost reduction and efficiency figures upwards of 20 percent must have attracted attention from prospective board members and the long list of hardware, software and institutional partners, including the likes of Dell, Intel, Huawei, Red Hat, Netflix, and North Carolina State University, to name but a few.

Nothing to sell here? OK, but where’s the certification?
At the Open Compute Project Summit this week, Andy Bechtolsheim was quoted saying “Open Compute Foundation is not a marketing org. There’s nothing to sell here”.

While the foundation has nothing to sell, it’s critical that hardware vendors quickly release Open Compute Project-certified hardware. There isn’t a certification process for hardware as yet, but this is something the foundation needs to work on immediately.

As GigaOM reports, “when the effort launched in April Dell and Hewlett-Packard both showed off servers that incorporated some of the elements of Open Compute.” The phrase “some of the elements” should be worrisome to the Open Compute Project and to potential buyers. Without a certification process, “Open Compute Project-based” hardware will proliferate without any standard basis of comparison across vendor offerings, as vendors rush to capitalize on the Open Compute Project’s buzz by repackaging existing offerings under a different marketing banner.

Silicon Mechanics, a rack-mount server manufacturer and member of the Open Compute foundation, announced an Open Compute Triplet based on the Open Compute Project specifications. A 90-node Triplet with entry-level processors, RAM and disk, and without any operating system or software, starts at $287,755 and can grow to $2 million and above.
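As a rough sanity check on these figures (the per-node cost is my own back-of-the-envelope division of the quoted entry price, not Silicon Mechanics’ pricing):

```python
# Triplet layout from the Open Compute specifications quoted above.
racks_per_triplet = 3
servers_per_rack = 30
servers_per_triplet = racks_per_triplet * servers_per_rack  # 90 servers

# Entry configuration price quoted by Silicon Mechanics, in USD.
entry_price = 287_755
price_per_node = entry_price / servers_per_triplet

print(servers_per_triplet, round(price_per_node, 2))  # 90 nodes at ~$3,197 each
```

Roughly $3,200 per bare entry-level node, before any software, gives a feel for where the Triplet sits against conventional rack servers.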

Good progress so far, more work to do
In a post, Frank Frankovsky, Director of Technical Operations at Facebook and Chairman/President of the Open Compute Project foundation, wrote: “… what began a few short months ago as an audacious idea — what if hardware were open? — is now a fully formed industry initiative, with a clear vision, a strong base to build from and significant momentum. We are officially on our way.”

Yes, the Open Compute Project foundation is officially on its way.

You’re encouraged to read through the design specifications and compare them to your current or future data center plans. However, until the Open Compute Project foundation comes out with a certification process, buyers are urged to ask vendors which parts of a product align with the Open Compute Project specifications and which parts fall outside them. In some ways, it’s buyer beware when it comes to products claiming to be Open Compute Project-based, for now at least.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

Lync, Microsoft’s unified communications platform combining voice, web conferencing and instant messaging, is reportedly poised to become the next billion-dollar business at Microsoft. It’s time you considered alternatives before Lync becomes ingrained in your IT environment, much like SharePoint has for many companies.

Lync follows in SharePoint’s billion-dollar footsteps
According to reports from Microsoft’s Worldwide Partner Conference (WPC) 2011, the company has high expectations for Lync, with several Microsoft managers telling MSPmentor editorial director Joe Panettieri that Lync’s sales trajectory will make it Microsoft’s next billion-dollar platform.

With Lync, formerly Office Communications Server, Microsoft is following a strategy similar to SharePoint’s, another billion-dollar-plus business.

With Lync, as with SharePoint before it, Microsoft has built a set of applications that leverages Microsoft Office’s massive install base. Microsoft is now accelerating partner involvement to shift Lync from a set of applications to a platform that partners can manage and customize.

Microsoft expects to target the 10 million legacy voice over IP (VoIP) phone lines that Cisco currently controls, largely in the enterprise space. However, as Panettieri explains, Microsoft has the install base and partner channel to grow Lync in the small and medium business market.

Lync is available on the Office 365 cloud but is expected to garner higher on-premises interest, driven by a more complete on-premises feature set, which is an attractive point for Microsoft’s managed service provider partners.

Consider alternatives before Lync arrives at your door
Lync only furthers your company’s reliance on Microsoft Office – a smart strategy for Microsoft.

As Microsoft partners get more involved with Lync, you’ll be getting briefings on the benefits of Lync in your business. Now would be a good time to start considering alternatives, especially a few in the open source arena, to be ready for Lync conversations with your friendly neighborhood Microsoft partner.

As Lync grows by selling into the Microsoft Office install base, the first alternative to consider is Google Apps, a direct cloud competitor to Microsoft’s Office 365. While Google doesn’t yet offer a PBX, OnState Communications offers a cloud-based PBX through the Google Apps Marketplace. It also stands to reason that Google will add some degree of PBX capability to Google Apps.

Twilio, a self-described cloud communications vendor, offers a platform for building voice and SMS applications using simple APIs. Twilio also offers an open source phone system through its OpenVBX offering. Twilio is targeted at developers, while Lync is a ready-to-use platform for companies. However, systems integrators or managed service providers could take the Twilio APIs and build a repeatable solution that offers much of Lync’s capability.

While several open source PBX phone systems are available, the open source Asterisk project is by far the best known. Companies could consider Asterisk as a piece of a Lync alternative. However, Asterisk, as a PBX product, does not itself offer the full platform for voice, web conferencing and instant messaging as yet.

Perhaps the best alternative to Lync, especially for small and medium-sized businesses, is a unified communications offering from the likes of Cisco or Avaya.

Earlier this year Cisco announced the Cisco Unified Communications 300 Series, aimed at companies with up to 24 employees. Cisco also offers the Cisco Unified Communications Manager Business Edition 3000, for companies with up to 300 users.

It would be interesting for a Cisco competitor, such as Avaya, to acquire Twilio and build a customer and developer friendly offering that rivals Cisco’s unified communications platform and Microsoft Lync.

Whatever alternatives to Lync you ultimately decide to consider, ensure that you’ve done this due diligence before Lync arrives at your company’s doorstep. Make no mistake, Lync offers value, but it also further entrenches Microsoft into critical pieces of your IT and communications environment.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

VMware’s vSphere 5 brings new features and performance improvements, as InfoWorld’s Ted Samson reports. vSphere 5 also introduces a new licensing approach, one which many users claim will significantly increase prices. Will your organization be impacted by the new pricing? If so, consider using an open source hypervisor for certain workloads.

New vRAM licensing model
With the introduction of vSphere 5, VMware is evolving its product licensing model to give customers a “pay for consumption” approach to IT.

The new licensing model is still licensed per processor, but does away with the limit on physical RAM per server license. Instead, VMware has introduced the notion of virtual memory, or vRAM in VMware’s terminology. vRAM is defined as the virtual memory configured for virtual machines.

vSphere 5 is licensed per processor with a varying amount of pooled vRAM entitlements based on the vSphere package purchased.

According to VMware’s whitepaper on the new licensing model for vSphere 5, vRAM helps customers better share capacity across their IT environment:

An important feature of the new licensing model is the concept of pooling the vRAM capacity entitlements for all processor licenses. The vRAM entitlements of vSphere CPU licenses are pooled–that is, aggregated–across all CPU licenses managed by a VMware vCenter instance (or multiple linked VMware vCenter instances) to form a total available vRAM capacity (pooled vRAM capacity). If workloads on one server are not using their full vRAM entitlement, the excess capacity can be used by other virtual machines within the VMware vCenter instance. At any given point in time, the vRAM capacity consumed by all powered-on virtual machines within a pool must be equal or lower than the pooled vRAM capacity.

Since vRAM entitlements can be shared amongst multiple host servers, VMware suggests that customers may require fewer vSphere licenses.
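The pooling rule described in the whitepaper can be sketched roughly like this (illustrative numbers and names, not VMware’s actual tooling):

```python
# Sketch of VMware's pooling rule: the vRAM configured for all powered-on
# VMs managed by a vCenter instance must not exceed the pooled entitlement.

def pool_has_capacity(cpu_licenses: int, vram_per_license_gb: int,
                      powered_on_vm_vram_gb: list) -> bool:
    """True if the powered-on VMs fit within the pooled vRAM entitlement."""
    pooled_capacity = cpu_licenses * vram_per_license_gb
    return sum(powered_on_vm_vram_gb) <= pooled_capacity

# Two Enterprise Plus licenses at 48GB vRAM each = 96GB pooled capacity.
print(pool_has_capacity(2, 48, [16, 32, 40]))  # 88GB configured -> True
print(pool_has_capacity(2, 48, [48, 48, 8]))   # 104GB configured -> False
```

Note the pool is checked against configured vRAM of powered-on VMs, not physical RAM, which is what lets one host’s unused entitlement absorb another host’s excess.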

Prepare for higher VMware vSphere license counts due to available RAM
VMware doesn’t mention that the new vRAM-based licensing model could lead to significantly higher license requirements, since each per-CPU license of vSphere 5 now carries its own vRAM limit.

If your configuration has more vRAM than is entitled for use with the CPU license of vSphere 5, then you would need additional licenses.

For example, the vSphere Enterprise Plus package, priced at $3,495 per CPU, allows up to 48GB of vRAM.

Let’s evaluate a scenario where you have a two-socket server, with no more than 12 cores per socket, and 256GB of RAM. The two processors would require two vSphere licenses, resulting in an entitlement of 96GB of vRAM (2 x 48GB of vRAM per licensed CPU). However, your server has 256GB of RAM, all of which needs to be licensed. As a result, you must buy four additional licenses of vSphere 5 Enterprise Plus. In total, you would need six licenses of Enterprise Plus, which would entitle you to 288GB of vRAM, sufficient for your 256GB of physical RAM.
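The arithmetic in that example generalizes to a small helper (a hypothetical calculation of my own, not VMware tooling; it assumes all physical RAM is licensed as vRAM and ignores any per-core caps):

```python
import math

def licenses_needed(sockets: int, ram_gb: int,
                    vram_per_license_gb: int = 48) -> int:
    """Licenses required: one per socket, or more if vRAM is the constraint.

    The 48GB default matches the Enterprise Plus entitlement quoted above.
    """
    by_socket = sockets
    by_vram = math.ceil(ram_gb / vram_per_license_gb)
    return max(by_socket, by_vram)

# The two-socket, 256GB server from the example above.
print(licenses_needed(2, 256))  # 6: two cover the sockets, vRAM forces six
```

Here the vRAM entitlement, not the socket count, is the binding constraint, which is exactly why heavily RAM-loaded servers see the sharpest license increases.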

Many users shocked by new VMware vSphere prices
User response to the new licensing at VMware’s community forum has been decidedly negative.

One person commenting on the VMware forum writes: “We just purchased ten dual-socket servers with 192GB RAM each (enterprise license level) and we’ll need to triple our license count to be able to use all available RAM if allocated by VMs.”

Another person claims that their small and medium business will see a 300 percent increase in price as a result of the new model.

The general tone of responses on the VMware community forum has been one of shock. Fear of having to explain to one’s boss that the cost of VMware virtualization licensing is going to be two or three times higher than expected is, not surprisingly, a key concern.

Echoing the comments of many on the forum, Vince77 writes:

Also, when virtualizing servers the only bottleneck I run into is Memory, VMware also knows that so they now build their licensing (moneymaker) based on that.

And every new version of Windows “likes” more ram to make it run smooth.

Now it’s time to really take a good look at Xenrver or even….. .. HyperV!

An opportunity for open source hypervisors
The new pricing model further increases the price gap between VMware and Red Hat Virtualization or Citrix Xen virtualization solutions.

For instance, Red Hat offers a 1 year subscription for up to 6 managed sockets, regardless of cores per socket, for $4,495 per year. Red Hat’s virtualization offering also doesn’t have any restrictions on RAM entitled for use with a licensed socket.

Over a 5 year period, Red Hat’s solution would cost $26,970.

To compare with Red Hat, a customer would buy at least six processor licenses of VMware vSphere Enterprise Plus at $3,495 per processor license, plus five years of product support and subscription at $874 per year per processor.

Over a 5 year period, VMware’s solution would cost at least $47,190, or at least 74 percent higher than Red Hat.

I stress “at least” to take into account the fact that VMware’s pricing could be significantly higher if additional processor licenses were required to cover the amount of RAM being used.
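For readers who want to check the comparison, the figures above work out as follows (the Red Hat five-year total is taken as quoted in this post; the VMware total assumes the six-license scenario):

```python
# Five-year totals using the prices quoted above, in USD.
vmware_licenses = 6 * 3_495                      # six Enterprise Plus licenses
vmware_sns = 6 * 874 * 5                         # support & subscription, 5 years
vmware_total = vmware_licenses + vmware_sns      # 47,190

red_hat_total = 26_970                           # as quoted above

premium = (vmware_total - red_hat_total) / red_hat_total
print(vmware_total, f"{premium:.0%}")            # 47190, roughly 75% higher
```

The premium only grows from here: every extra vRAM-driven license adds $3,495 plus five years of support to the VMware side while the Red Hat subscription stays flat up to six sockets.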

As the pricing gap for VMware versus a leading open source virtualization solution approaches 100 percent, and open source virtualization solutions become more mature, customers will have to reconsider open source options. I’m not suggesting a wholesale shift from VMware to an open source alternative – such migrations seldom happen, and seldom as quickly as pundits suggest.

I am suggesting that you evaluate the new vSphere pricing and your usage of server virtualization to determine whether a portion of your virtualization needs could be better served, at a lower cost, by an open source solution.

This balance between enterprise-grade commercial software and less mature, but compelling, open source software has been playing out across much of the software market.

VMware’s vSphere 5 pricing could simply serve to accelerate the shift towards a mixture of commercial and open source in the virtualization arena.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

Simon Wardley writes an interesting post claiming that Microsoft’s biggest enemy is not Google, Facebook or Apple; it’s Microsoft. Wardley’s research suggests that inertia makes it easy for Microsoft to continue doing what it has done in the past with great success. Inertia is also an important force within IT departments. IT decision makers seeking to help their businesses differentiate from the competition should guard against technology inertia.

Bill Gates: Success is a lousy teacher
Wardley spoke at OSCON 2010 about how open source vendors could disrupt market incumbents by taking advantage of the incumbent’s existing business model.

Wardley quotes Bill Gates, who once noted “Success is a lousy teacher”. Wardley explains:

That’s one of those basic lessons which often gets forgotten in business. In this world of competition, there are two fronts to fight on. The external front includes those competitors who attempt to either gain a creative leadership position or to disrupt your existing model. The other front is internal and against your own past success.

Vendors often focus on external competition, but the ability to compete effectively externally is directly impacted by the degree to which a vendor’s corporate culture allows it to look beyond past success.

Historical success in a given product area creates sacred products, which must be protected and definitely not commoditized when considering new opportunities or new competitors.

Wardley claims that Microsoft’s recent cloud moves, while admirable, aren’t enough to compete in a services-based marketplace built around open source:

Whilst MSFT has made much of a fanfare about its recent moves into the cloud, it was a probably a significant internal battle for MSFT just to make the change from products to services. However, this new world is likely to be rapidly commoditized to marketplaces based around open source and hence the real question becomes whether MSFT will be able to make the further change necessary to survive in that world?

Microsoft’s future business should be intertwined with open source in the domain of utility services. Unfortunately, the last group of people who are usually willing to accept such a change are those who have built careers in the previous domain e.g. products.

I’ve seen the scenario Wardley lays out play out in my product areas and across IBM. However, most of the time, we’ve been able to look beyond sacred products and try new business models that on the surface could commoditize our most important products. These actions have typically helped grow the overall IBM revenue base, and in many cases, further grow the penetration of those sacred products. Looking beyond past success isn’t easy for vendors, but it’s critical for long term viability.

IT departments must also fight inertia
There’s another angle to consider before concluding that vendors simply follow their inertia – they do. However, customers also follow their own corporate IT inertia. This in turn makes it possible for vendors to continue viewing the market as they have in the past.

Whether it’s past success or “just the way we do it here”, many IT departments I’ve interacted with put a premium on existing process, technologies, skills and buying preferences.

One can hardly blame IT decision makers considering the financial, and more importantly, skills investments that their companies have made with a given technology.

However, as the fate of vendors that cling too closely to sacred products makes evident, IT decision makers should look beyond inertia if they hope to deliver value to the business five years from now.

One approach to doing so is to allocate a portion of the IT budget to projects and technologies that run counter to the IT department’s technology and process inertia. Start with a less critical project, and learn from unforeseen challenges before applying these new technology choices throughout the IT department.

Developers, startups and perennial early adopters don’t let IT inertia get in the way. Companies that tend to fall into the early or late majority should also build plans to innovate outside of their comfort zones.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

As open source usage has grown into the mainstream, have users started to contribute less time and money to open source projects, thereby putting the future of the project at risk? One CEO of a leading open source based company thinks so.

Open source loses its cachet
Today vendors are adding “cloud” into the description of their company or product for two reasons. First, in the hopes of riding the hype around cloud computing. Second, in order to shape the definition of what a cloud company or product is.

The above held true for “open source” five years ago.

Since then, open source has become much better understood by IT decision makers as a development, distribution, pricing and licensing model. As a result, as The 451 Group’s Matt Aslett explains, the term “open source” holds less value as a differentiator for vendors. Aslett writes:

“…but these are among the highest profile open source-related vendors, so the fact that half of them have dropped open source as an identifying differentiator in the last 12 months (and another two long before that) is not insignificant.”

User contributions on a decline?
Brian Gentile, CEO of JasperSoft, an open source business intelligence vendor, agrees with Aslett’s conclusion that the term “open source” has lost its differentiating ability as open source is a mainstream option for many companies.

As open source has become mainstream, Gentile writes that he is seeing a decline in user contribution of time and money to open source communities. Gentile defines user contributions as follows:

“Open source communities thrive based on the community members donating either their time and/or money. Donating money typically comes in the form of buying or subscribing to the commercial versions of the open source products. Donating time can come in a wide variety of forms, including providing technical support for one another in forums, reviewing and commenting on projects, helping to QA a release candidate, assisting in localization efforts, and of course contributing code improvements (features, bug fixes and the like).”

Results from the 2010 Eclipse Survey support Gentile’s claims about user contributions of time declining.

In 2010, 41 percent of respondents, up from 27 percent in 2009, claimed they use open source software without contributing to the project.

Part of the decline in contribution is surely linked to corporate policies. The Eclipse survey found that 35 percent of respondents in 2010, down from 48 percent in 2009, claimed their employer’s corporate policies allowed employees to contribute to open source projects.

Open source users and customers are different
Is user contribution of money to an open source project also on a decline as Gentile worries?

James Dixon, CTO and founder of Pentaho, also an open source business intelligence vendor, disagrees with Gentile’s notion of users contributing money to a project.

Dixon believes that attempting to sell an enterprise version of software and services to community members is a mistake, one which misses the distinction between users and customers.

“As a commercial open source (COSS) company you can provide tools for your community members to persuade their employers to become customers, and you can explain how this benefits both companies involved and the community. For most COSS companies it is impossible to monetize the community directly, and therefore ridiculous to try.”

Users can contribute time, customers can contribute money
It’s important to separate, as Dixon does, the expectations placed on users versus customers. For enterprise software, users seldom have the budget authority to become paying customers, but they can encourage IT decision makers to become customers.

Gentile is correct in stating that users of an open source project, especially an early stage project, contribute their time to a project.

As the project becomes widely adopted by users, companies decide to adopt the resulting product from the open source project. These companies contribute money to the vendor, who in turn uses the funds to further enhance the product and the open source project.

A decline in user contributions of time is not necessarily an issue. Nor should it be concerning that community users aren’t contributing money to a project, for the simple reason that they don’t have the budget authority to do so within an enterprise.

Over time, user contribution declines, but the project is sustained by the funds made available to the project through corporate purchasers of the product. In a sense, as projects mature, user contribution of time is inversely proportional to customer contribution of money.

Follow me on Twitter at SavioRodrigues. I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

NoSQL is still not well understood, as a term or a database market category, by enterprise IT decision makers. However, one NoSQL vendor, 10gen, creator of open source MongoDB, appears to be growing into enterprise accounts and distancing itself from competitors. If you’re considering, or curious about, NoSQL databases, spend some time looking at MongoDB.

Understanding not only SQL, aka NoSQL
While the term NoSQL suggests a product category that is anti-SQL or anti-relational databases, the term has evolved to mean “not only SQL”.

According to, there are over 122 NoSQL database products to date. These products differ from traditional relational databases in that they don’t rely on a relational model, are often schema free, and favor eventual consistency over ACID transactions.
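To make “schema free” concrete, here is a minimal sketch in plain Python, with an ordinary list standing in for a document collection. It illustrates the idea only; it is not any particular NoSQL product’s API.

```python
# In a document store, records in the same collection may carry different
# fields, unlike rows in a relational table, which must fit one fixed schema.
events = []  # stands in for a schema-free "collection"

# Two documents with different shapes coexist in the same collection.
events.append({"type": "click", "page": "/catalog", "ts": 1309478400})
events.append({"type": "purchase", "sku": "A-42", "qty": 2, "coupon": "SPRING"})

# Queries simply skip fields a document doesn't have.
clicks = [e for e in events if e.get("type") == "click"]
print(len(clicks))  # 1
```

Adding a new field later requires no schema migration; new documents just carry it.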

While companies needing to manage terabytes of data across commodity servers, such as Facebook, Foursquare or Shutterfly, have been early adopters of NoSQL databases, traditional enterprises such as Disney and Intuit are joining the NoSQL customer list.

Max Schireson, president of 10gen, asserts that relational databases are here to stay and have an important role to play in tomorrow’s enterprise.

Schireson sees NoSQL and relational databases both being used within a given enterprise, albeit for different applications.

If this positioning sounds familiar, recall that MySQL attempted to paint a picture of co-habitation with enterprise database vendors.

If an application is processing sales orders and needs absolute guaranteed transactions, a relational database supporting ACID transactions is a must. If an application is processing millions of events, such as click streams, in order to better optimize an online sales catalog, and losing a few of those events is less critical than being able to scale the application and data across commodity servers, then a NoSQL database could be a perfect fit.
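The trade-off can be sketched with Python’s built-in sqlite3 module, which supplies real ACID transactions; the click-stream side is mimicked with a plain in-memory list rather than any specific NoSQL product’s API:

```python
import sqlite3

# ACID side: a sales order must be all-or-nothing.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL NOT NULL)")
try:
    with db:  # opens a transaction; commits on success, rolls back on error
        db.execute("INSERT INTO orders (total) VALUES (?)", (99.95,))
        raise RuntimeError("payment gateway failed mid-transaction")
except RuntimeError:
    pass
count = db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 0 -- the half-finished order was rolled back, never half-saved

# Click-stream side: losing the odd event is tolerable, so events are
# appended fire-and-forget; there is no transaction to roll back.
clickstream = []
clickstream.append({"page": "/catalog", "item": "A-42"})
print(len(clickstream))
```

The point is not the storage engine but the guarantee: the order either fully commits or fully disappears, while the event log trades that certainty for cheap, scalable writes.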

MongoDB distances itself from NoSQL alternatives
While NoSQL databases such as Cassandra, originally developed and used by Facebook, or CouchDB get a lot of media attention, MongoDB appears to be the product to catch in this hot market.

Worldwide Google searches for various NoSQL product names show a marked increase in MongoDB and Mongo searches since January 2011. Google searches for MongoDB and Mongo exceeded searches for CouchDB, Couchbase, Membase, Cassandra, and HBase combined.

According to, jobs seeking MongoDB or Mongo skills have outpaced other leading NoSQL products. MongoDB and Mongo now represent the most sought after NoSQL skills amongst companies hiring on

Recently announced platform as a service offerings from Red Hat and VMware both feature MongoDB at the data services layer.

Schireson shared some stats on 10gen’s commercial business growth into the enterprise with MongoDB.

Six months ago the majority of 10gen customers were startups; today the majority are traditional enterprise customers. In fact, 10gen counts five Fortune 100 companies amongst its more than 200 paying customers.

With over 100,000 downloads per month and developer attendance at MongoDB conferences up 400 percent to nearly 2,000 across San Francisco, New York and Beijing, MongoDB traction continues to increase.

Schireson explained that many enterprises have developers interested in MongoDB, as the above download and conference attendance data backs up. However, enterprises are waiting for their peers to go first into the world of NoSQL.

Schireson revealed that securing Disney as a public MongoDB reference has led to increased enterprise interest in 10gen from MongoDB users.

MySQL poster child Craigslist adopts MongoDB
Another recent coup for 10gen was winning Craigslist as a MongoDB customer.

Craigslist’s Jeremy Zawodny, author of the popular High Performance MySQL book, recently spoke about Craigslist adopting MongoDB to handle Craigslist’s multi-billion document deployment. Zawodny explains Craigslist’s evolution from being a MySQL everywhere shop to selecting the appropriate database technology based on varying data and performance needs.

When Zawodny, a MySQL performance guru, gets behind MongoDB, it’s time for enterprises interested in NoSQL to consider MongoDB.


An ex-Google employee recently expressed concerns about the antiquity of Google’s software infrastructure. This is the same software infrastructure underpinning Google’s App Engine. Learn more before your enterprise considers Google App Engine.

Engineer claims Google’s software infrastructure is obsolete
In a post explaining why he’s leaving Google, former Google Wave engineer Dhanji R. Prasanna wrote:

Here is something you may have heard but never quite believed before: Google’s vaunted scalable software infrastructure is obsolete. Don’t get me wrong, their hardware and datacenters are the best in the world, and as far as I know, nobody is close to matching it. But the software stack on top of it is 10 years old, aging and designed for building search engines and crawlers. And it is well and truly obsolete.

Protocol Buffers, BigTable and MapReduce are ancient, creaking dinosaurs compared to MessagePack, JSON, and Hadoop. And new projects like GWT, Closure and MegaStore are sluggish, overengineered Leviathans compared to fast, elegant tools like jQuery and mongoDB. Designed by engineers in a vacuum, rather than by developers who have need of tools.

Prasanna argues that Google’s software infrastructure hasn’t kept up with alternatives developed in an open community forum.
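For example, part of JSON’s appeal over schema-compiled formats such as Protocol Buffers is that it is self-describing and needs no generated code; Python’s standard library reads it directly. This is a trivial sketch with made-up field names, not Google’s or Prasanna’s code:

```python
import json

# JSON carries field names alongside the data, so a reader needs no
# separately compiled schema to make sense of the payload.
payload = json.dumps({"user": "example", "wave_id": 42})
decoded = json.loads(payload)
print(decoded["wave_id"])  # 42
```

A Protocol Buffers equivalent would first require writing a .proto schema and generating language bindings from it, which is exactly the tooling overhead Prasanna is pointing at.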

Don’t take Prasanna’s statements as those of a disgruntled ex-employee. He writes that working for Google was the “best job I’ve ever had, by a long way”. Additionally, Prasanna had built up serious technical credibility, both within and outside of Google, especially in the Java arena.

As interesting as Prasanna’s comments may be, Google’s software infrastructure has little impact on your enterprise, right? Correct, unless you’re considering Google App Engine.

Google’s software infrastructure bleeds through Google App Engine
Google isn’t going to open source its back end software infrastructure. However, as the Register’s Cade Metz writes, Google’s software infrastructure is surfaced for enterprise usage through Google’s App Engine.

Google App Engine product manager Sean Lynch, who has since left the company, explains how Google exposes its internal software infrastructure to third party developers and enterprises. Lynch states:

We decided we could take a lot of this infrastructure and expose it in a way that would let third-party developers use it – leverage the knowledge and techniques we have built up – to basically simplify the entire process of building their own web apps: building them, managing them once they’re up there, and scaling them once they take off.

Make no mistake, Google App Engine is a success with developers, with over 100,000 developers accessing the online console each month and serving up 1.5 billion page views a day according to Metz’s story. However, keep in mind the difference between success with developers versus success with enterprises.

Lynch stated that Google App Engine is a long term business focused on the enterprise space. Later this year, Google App Engine expects to exit a three-year beta period and introduce enterprise class service level agreements.

Google App Engine started as a way to expose Google’s vaunted, to use Prasanna’s description, software infrastructure to third party developers. But it appears that the market has moved faster than Google’s internal users demanded.

Enterprises seek vendors whose core business is linked to the software platform they’re selling
I would propose that part of this gap between Google and outside technologies is driven by the fact that offering a cloud platform as a service is not core to Google’s business. This is why I’ve argued that commercial software vendors have little to fear from vendors that produce software primarily for their own use and then opt to secondarily open source the code. Unless and until the open sourced code is picked up by one or more vendors whose core business is tied to the project, enterprises will shy away from adoption.

Consider where Hadoop, first developed and open sourced by Yahoo!, would be if not for Cloudera and other vendors whose core business is linked to the enterprise success of Hadoop.

In the case of Google App Engine, a key question to ask is how your enterprise’s needs will be prioritized against the needs of internal Google developers.

While both user groups share some feature requests, there are undoubtedly features that enterprises will seek and Google developers will not need. With both groups vying for the next item on their wish-lists, will Google address the needs of its internal developers or of third party enterprises first? Keep in mind that revenue from the projects internal developers work on will far outweigh revenue from third party enterprises using Google App Engine for the foreseeable future.

According to Prasanna, developers and enterprises can get more innovative, state of the art and high performance software infrastructure from open source projects that have replicated some of Google’s best ideas, like Hadoop or MongoDB, than by using Google’s software infrastructure itself.

Combining individual best of breed software building blocks into a cloud platform as a service environment that offers the functionality of Google App Engine requires enterprises to do a lot more work than simply using Google App Engine.

Another alternative would be to consider a vendor whose business is tied to the success, or failure, of the cloud platform as a service offering. For instance, VMware, Red Hat, Microsoft, and IBM, to name but a few vendors, offer environments designed and built for third party enterprise use.

Keep these considerations in mind while evaluating Google App Engine, or any cloud platform as a service offering for enterprise use.

