VMware’s new vSphere 5 licensing model raised eyebrows and tempers across the industry just three weeks ago. Bowing to the negative response, VMware has announced three changes that should appease customers, though many will still likely face higher virtualization costs than they did under vSphere 4.

Customer backlash against the original vSphere 5 licensing model
When first announced, the new vSphere 5 licensing model was positioned as a positive for customers. What VMware didn’t mention was that each vSphere 5 CPU-based license comes with a fixed virtual RAM (vRAM) entitlement. If your configuration uses more vRAM than your CPU licenses entitle, you need to buy additional licenses.

As I’d previously covered, VMware customers complained of 2x to 3x higher licensing costs for vSphere 5 versus vSphere 4.

VMware announced three changes that attempt to address concerns around vRAM as a licensing metric and the resulting cost increases. Users commenting on VMware’s community forum seem less irate about the new changes but still question the vRAM model.

Increasing vRAM entitlements per license
First, the vRAM entitlement per vSphere edition has been increased from 24/24/24/32/48 to 32/32/32/64/96 gigabytes for vSphere Essentials, Essentials Plus, Standard, Enterprise and Enterprise Plus respectively.

Let’s evaluate the result of this change on our previously discussed customer example: a two-socket server, with no more than 12 cores per socket, and 256GB of RAM.

Under the original vSphere 5 licensing model, the customer needed six CPU-based vSphere Enterprise Plus licenses, not two, to be entitled to use the full 256GB of RAM on the system.

Under the revised vSphere 5 licensing model, the customer would need three CPU-based vSphere Enterprise Plus licenses, not two.

The first two CPU-based licenses, sufficient for the two CPUs on the system, would have provided 96GB of vRAM entitlement each, for a total of 192GB.

However, since the scenario includes 256GB of RAM, the customer would have to buy a third CPU-based vSphere license in order to use the full 256GB.

Under the old vSphere 4 licensing model, the customer would only need to purchase two CPU-based vSphere Enterprise Plus licenses. As a result, a customer in this situation would still be paying 50 percent more than they did with vSphere 4, but at least it’s not triple the cost, as was the case with the original vSphere 5 licensing model.
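
To make the license arithmetic concrete, here’s a minimal sketch of the counting rule described above. The entitlement figures (48GB originally, 96GB after the revision) and the two-socket, 256GB server come from this example; the helper function is purely illustrative, not VMware’s tooling.

```python
import math

def licenses_needed(sockets, ram_to_license_gb, vram_per_license_gb):
    # At least one license per socket, plus enough licenses so the pooled
    # vRAM entitlement covers the RAM you plan to allocate to VMs.
    return max(sockets, math.ceil(ram_to_license_gb / vram_per_license_gb))

print(licenses_needed(2, 256, 48))  # original vSphere 5 entitlement: 6 licenses
print(licenses_needed(2, 256, 96))  # revised entitlement: 3 licenses
# Under vSphere 4, the same server needed only its 2 per-socket licenses.
```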

Capping counted vRAM to encourage virtual machines with large amounts of vRAM
The second licensing change VMware announced is that the amount of vRAM counted against any one virtual machine is capped at 96GB. Using our example above, if your system has 256GB of physical RAM and you allocate 128GB of vRAM to each of two virtual machines, only 96GB per VM counts against your vRAM pool. As a result, only 192GB is drawn from your pooled vRAM entitlement, 64GB less than the 256GB of vRAM actually configured across the two virtual machines.

VMware noted that customers could run a virtual machine configured with 1TB of vRAM while drawing only 96GB from their pooled vRAM allotment.
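
Here’s a small sketch of how the per-VM cap changes what counts against the pool, using the 128GB and 1TB virtual machine examples above; the function is illustrative only.

```python
VRAM_CAP_PER_VM_GB = 96  # per-VM cap under the revised vSphere 5 licensing

def pooled_vram_consumed(vm_vram_sizes_gb):
    # Each powered-on VM counts at most 96GB against the pooled entitlement.
    return sum(min(size, VRAM_CAP_PER_VM_GB) for size in vm_vram_sizes_gb)

print(pooled_vram_consumed([128, 128]))  # 192GB counted, not the 256GB configured
print(pooled_vram_consumed([1024]))      # a 1TB VM draws only 96GB from the pool
```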

Customers would still need sufficient vSphere CPU-based licenses to cover the 1TB, or however much, physical RAM is available on the system. As such, this change only affects how much vRAM is drawn from the vRAM pool. That matters for the additional vSphere CPU-based licenses a customer may need in subsequent years if they surpass their total pooled vRAM allotment in the previous year.

VMware aims for Tier 1 applications
With the third licensing change, VMware now calculates a 12-month average of consumed vRAM rather than tracking the high-water mark of vRAM used.

By limiting the vRAM counted per VM to 96GB and tracking a 12-month average of vRAM usage, it’s less likely that a customer will use more than their vRAM allotment in a given year.

The combination of the second and third licensing changes makes it much more attractive for customers to run multiple Tier 1 applications on VMware by reducing the licensing hurdles for doing so.

The risk, however, especially after the current licensing fiasco, is how licensing may change in the future, once customers are heavily reliant on VMware for even their Tier 1 applications.

As we’ve discussed before, it’s good to evaluate options, especially as open source hypervisors become more mature.

As you move more business-critical applications to VMware, consider moving some of your existing, less critical VMware applications and environments to an open source hypervisor. Using a mixture of VMware and an open source hypervisor is likely your best long-term option for balancing costs and flexibility.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

A new cloud infrastructure provider expects to “disrupt and democratize cloud computing” using open source cloud software and commodity hardware.

Nebula hopes to simplify private cloud creation
Nebula, founded by former NASA CTO Chris Kemp, was launched at OSCON this week. Nebula borrows its name and initial technology from a project that Kemp led at NASA, which NASA later open sourced into a project named OpenStack.

Nebula plans to sell hardware appliances to create private clouds using your existing or new compute and storage hardware. OpenStack is used to allocate compute and storage resources to a given user or application in an elastic fashion.

Each Nebula hardware appliance is able to control up to 20 compute and storage nodes within your private cloud. If your private cloud has hundreds of nodes, as would be expected, you’ll need multiple Nebula appliances.

A recent survey of 500 enterprises found that the average enterprise maintains 662 physical servers. Creating a private cloud out of those 662 physical servers would require 34 Nebula appliances, and that’s before including any storage nodes in the calculation.
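
The appliance arithmetic is straightforward; the 20-nodes-per-appliance and 662-server figures come from the paragraphs above.

```python
import math

nodes_per_appliance = 20
average_enterprise_servers = 662
print(math.ceil(average_enterprise_servers / nodes_per_appliance))  # 34 appliances
```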

According to VentureBeat, Kemp is quoted saying:

You buy 10 or 100 of our boxes and plug a whole rack of servers into our boxes. It is data center infrastructure, offered as a service. This is the kind of shift that has to happen if the data center revolution is going to proceed.

Depending on the pricing for these Nebula appliances, buying tens or hundreds of Nebula appliances could start adding up to significant costs. That said, Nebula claims to be able to help build a private cloud in minutes, not months, thereby providing time to value benefits that Nebula would seek to monetize.

Nebula’s attempt to differentiate through openness
Nebula’s approach is interesting and follows the growing adoption of appliances optimized for specific purposes, as well as the trend of using appliances as building blocks for a private cloud platform.

Vendors such as IBM, Oracle and VMware/Cisco/EMC (VCE) already offer, to varying degrees, appliances to help build out your private cloud. Even Microsoft has spoken about an Azure appliance, although it’s been delayed several times.

Nebula hopes to differentiate from better known IT vendors by leveraging the openness of its cloud infrastructure software layer.

Nebula claims that the appliance is built on the same APIs and runtime as OpenStack, but adds numerous security, management, and platform enhancements. It remains to be seen whether these additional enhancements will also be open sourced. If these enhancements are not open sourced, the system’s openness would come into question.

Does an open source foundation matter in the cloud?
Nebula’s product page claims the following key value propositions: Open Software, Open Hardware, DevOps Compatible, Self-Service, Security, Massive Scalability, Elastic Infrastructure and High Availability.

The only value proposition on this list that IBM, Microsoft, Oracle, or VMware/Cisco/EMC couldn’t claim equally well is “open software.” I say “equally well” because “open software” is a broad term.

Nebula’s key differentiator is that their solution is based on an open stack. But does that matter to buyers? Would it matter to you?

Microsoft’s Gianugo Rabellino, Senior Director for Open Source at Microsoft, explained Microsoft’s stance that as long as the APIs and protocols for the cloud are open, customers care less about the openness of the underlying platform.

This view is shared by the newly launched Open Cloud Initiative, whose director, Sam Johnston, writes:

…so long as the interfaces and formats are open, and there are “multiple full, faithful and interoperable implementations” (at least one of which is Open Source) then it’s “Open Cloud”.

Enterprise IT vendors have a long history of cooperating on standards and competing on the basis of their implementation. This too will occur at the cloud level. And when it does, differentiation will shift towards things like ease of use, interoperability with existing assets, high performance and total cost of ownership.

Therein lies the challenge for Nebula. Their key differentiator, openness, isn’t a sustainable advantage.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

Lync, Microsoft’s unified communications platform combining voice, web conferencing and instant messaging, is reportedly poised to become the next billion-dollar business at Microsoft. It’s time you considered alternatives before Lync becomes ingrained in your IT environment, much as SharePoint has at many companies.

Lync follows in SharePoint’s billion dollar footsteps
According to reports from Microsoft’s Worldwide Partner Conference (WPC) 2011, the company has high expectations for Lync, with several Microsoft managers telling MSPmentor editorial director Joe Panettieri that Lync’s sales trajectory will make it Microsoft’s next billion-dollar platform.

With Lync, formerly Office Communications Server, Microsoft is following a strategy similar to the one it used with SharePoint, another billion-dollar-plus business.

With Lync, as with SharePoint before it, Microsoft has built a set of applications that leverages Microsoft Office’s massive install base. Microsoft is now accelerating partner involvement to shift Lync from a set of applications to a platform that partners can manage and customize.

Microsoft expects to target the 10 million legacy voice over IP (VoIP) phone lines that Cisco currently controls, largely in the enterprise space. However, as Panettieri explains, Microsoft has the install base and partner channel to grow Lync in the small and medium business market.

Lync is available through the Office 365 cloud but is expected to garner more on-premises interest, driven by a more complete on-premises feature set, which is an attractive point for Microsoft’s managed service provider partners.

Consider alternatives before Lync arrives at your door
Lync only furthers your company’s reliance on Microsoft Office – a smart strategy for Microsoft.

As Microsoft partners get more involved with Lync, you’ll be getting briefings on the benefits of Lync in your business. Now would be a good time to start considering alternatives, especially a few in the open source arena, to be ready for Lync conversations with your friendly neighborhood Microsoft partner.

As Lync grows by selling into the Microsoft Office install base, the first alternative to consider is Google Apps, a direct cloud competitor to Microsoft’s Office 365. While Google doesn’t yet offer a PBX, OnState Communications offers a cloud-based PBX through the Google Apps Marketplace. It also stands to reason that Google will add some degree of PBX capability to Google Apps.

Twilio, a self-described cloud communications vendor, offers a platform for building voice and SMS applications using simple-to-use APIs. Twilio also offers an open source phone system through its OpenVBX offering. Twilio is targeted at developers, while Lync is a ready-to-use platform for companies. However, systems integrators or managed service providers could take the Twilio APIs and build a repeatable solution that offers much of Lync’s capability.
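
As a rough illustration of Twilio’s developer-oriented model, here’s a minimal sketch that sends an SMS with Twilio’s Python helper library; the credentials and phone numbers are placeholders, and the exact client class name has varied across library versions.

```python
from twilio.rest import Client  # Twilio's Python helper library

# Placeholder credentials and numbers; substitute your own account values.
client = Client("ACCOUNT_SID", "AUTH_TOKEN")

message = client.messages.create(
    to="+15555550100",     # destination number (placeholder)
    from_="+15555550199",  # your Twilio-provisioned number (placeholder)
    body="Conference bridge opens at 3pm",
)
print(message.sid)
```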

While several open source PBX phone systems are available, the open source Asterisk project is by far the best known. Companies could consider Asterisk as one piece of a Lync alternative. However, Asterisk, as a PBX product, does not yet itself offer a full platform for voice, web conferencing and instant messaging.

Perhaps the best alternative to Lync, especially for small and medium-sized businesses, is a unified communications offering from the likes of Cisco or Avaya.

Earlier this year Cisco announced the Cisco Unified Communications 300 Series, aimed at companies with up to 24 employees. Cisco also offers the Cisco Unified Communications Manager Business Edition 3000, for companies with up to 300 users.

It would be interesting for a Cisco competitor, such as Avaya, to acquire Twilio and build a customer and developer friendly offering that rivals Cisco’s unified communications platform and Microsoft Lync.

Whatever alternatives to Lync you ultimately decide to consider, ensure that you’ve done this due diligence before Lync arrives at your company’s doorstep. Make no mistake, Lync offers value, but it also further entrenches Microsoft into critical pieces of your IT and communications environment.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

VMware’s vSphere 5 brings new features and performance improvements, as InfoWorld’s Ted Samson reports. vSphere 5 also introduces a new licensing approach, one which many users claim will significantly increase prices. Will your organization be impacted by the new pricing? If so, consider using an open source hypervisor for certain workloads.

New vRAM licensing model
With the introduction of vSphere 5, VMware is evolving its product licensing model to give customers a “pay for consumption” approach to IT.

The new licensing model is still based on CPUs, but does away with per-license limits on cores per CPU and physical RAM per server. Instead, VMware has introduced the notion of virtual memory, or vRAM in VMware’s terminology, defined as the virtual memory configured for virtual machines.

vSphere 5 is licensed per processor, with a varying pooled vRAM entitlement based on the vSphere edition purchased.

According to VMware’s whitepaper on the new licensing model for vSphere 5, vRAM helps customers better share capacity across their IT environment:

An important feature of the new licensing model is the concept of pooling the vRAM capacity entitlements for all processor licenses. The vRAM entitlements of vSphere CPU licenses are pooled–that is, aggregated–across all CPU licenses managed by a VMware vCenter instance (or multiple linked VMware vCenter instances) to form a total available vRAM capacity (pooled vRAM capacity). If workloads on one server are not using their full vRAM entitlement, the excess capacity can be used by other virtual machines within the VMware vCenter instance. At any given point in time, the vRAM capacity consumed by all powered-on virtual machines within a pool must be equal or lower than the pooled vRAM capacity.

Since vRAM entitlements can be shared amongst multiple host servers, VMware suggests that customers may require fewer vSphere licenses.
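
Here’s a minimal sketch of the pooling rule quoted above: the vRAM configured on all powered-on virtual machines under a vCenter instance must stay at or below the aggregated entitlement. The function is illustrative, not VMware’s implementation.

```python
def pool_is_compliant(cpu_licenses, vram_per_license_gb, powered_on_vm_vram_gb):
    # Entitlements are aggregated across all CPU licenses managed by a
    # vCenter instance; consumption is the vRAM of powered-on VMs.
    pooled_capacity_gb = cpu_licenses * vram_per_license_gb
    return sum(powered_on_vm_vram_gb) <= pooled_capacity_gb

# Four Enterprise Plus licenses (4 x 48GB = 192GB pool) across two hosts;
# VMs on one host can use entitlement that another host isn't consuming.
print(pool_is_compliant(4, 48, [64, 64, 48]))  # True: 176GB <= 192GB
```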

Prepare for higher VMware vSphere license counts due to available RAM
VMware doesn’t mention that the new vRAM-based licensing model could lead to significantly higher license requirements, since each per-CPU vSphere 5 license carries only a fixed vRAM entitlement.

If your configuration has more vRAM than is entitled for use with the CPU license of vSphere 5, then you would need additional licenses.

For example, the vSphere Enterprise Plus package, priced at $3,495 per CPU, allows up to 48GB of vRAM.

Let’s evaluate a scenario where you have a two-socket server, with no more than 12 cores per socket, and 256GB of RAM. The two processors would require two vSphere licenses, resulting in an entitlement of 96GB of vRAM (2 x 48GB of vRAM per licensed CPU). However, your server has 256GB of RAM, all of which needs to be covered. As a result, you must buy four additional vSphere 5 Enterprise Plus licenses. In total, you would need six Enterprise Plus licenses, which would entitle you to 288GB of vRAM, sufficient for your 256GB of physical RAM.
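
A quick sketch of that arithmetic, using the list price and entitlement stated above (the calculation is illustrative; real quotes will vary):

```python
import math

ENTERPRISE_PLUS_PRICE = 3495  # per-CPU list price cited above
VRAM_PER_LICENSE_GB = 48      # Enterprise Plus vRAM entitlement at launch

sockets, ram_gb = 2, 256
licenses = max(sockets, math.ceil(ram_gb / VRAM_PER_LICENSE_GB))
print(licenses)                          # 6 licenses, entitling 288GB of vRAM
print(licenses * ENTERPRISE_PLUS_PRICE)  # $20,970 in licenses alone
```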

Many users shocked by new VMware vSphere prices
User response to the new licensing at VMware’s community forum has been decidedly negative.

One person commenting on the VMware forum writes: “We just purchased ten dual-socket servers with 192GB RAM each (enterprise license level) and we’ll need to triple our license count to be able to use all available RAM if allocated by VMs.”

Another person claims that their small and medium business will see a 300 percent increase in price as a result of the new model.

The general tone of responses on the VMware community forum has been one of shock. Fear of having to explain to one’s boss that the cost of VMware virtualization licensing is going to be two or three times higher than expected is, not surprisingly, a key concern.

Echoing the comments of many on the forum, Vince77 writes:

Also, when virtualizing servers the only bottleneck I run into is Memory, VMware also knows that so they now build their licensing (moneymaker) based on that.

And every new version of Windows “likes” more ram to make it run smooth.

Now it’s time to really take a good look at Xenrver or even….. .. HyperV!

An opportunity for open source hypervisors
The new pricing model further increases the price gap between VMware and Red Hat Virtualization or Citrix Xen virtualization solutions.

For instance, Red Hat offers a one-year subscription covering up to six managed sockets, regardless of cores per socket, for $4,495 per year. Red Hat’s virtualization offering also doesn’t place any restrictions on the RAM entitled for use with a licensed socket.

Over a 5 year period, Red Hat’s solution would cost $26,970.

To compare with Red Hat, a customer would buy at least six processor licenses of VMware vSphere Enterprise Plus at $3,495 per processor license, plus five years of product support and subscription at $874 per year per processor.

Over a 5 year period, VMware’s solution would cost at least $47,190, or at least 74 percent higher than Red Hat.

I stress “at least” to take into account the fact that VMware’s pricing could be significantly higher if additional processor licenses were required to cover the amount of vRAM being used.
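
Here’s the back-of-the-envelope arithmetic behind that five-year comparison, using the list prices and the Red Hat figure cited above (illustrative only; actual negotiated pricing will differ):

```python
# VMware: 6 Enterprise Plus licenses plus 5 years of support and subscription
licenses = 6
vmware_total = licenses * 3495 + licenses * 874 * 5  # 20,970 + 26,220 = 47,190

redhat_total = 26970  # five-year Red Hat figure cited above

print(vmware_total)                                    # 47190
print(round((vmware_total / redhat_total - 1) * 100))  # ~75, i.e. "at least 74 percent" higher
```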

As the pricing gap approaches 100 percent using VMware versus a leading open source virtualization solution, and as open source virtualization solutions mature, customers will have to reconsider open source options. I’m not suggesting a wholesale shift from VMware to an open source alternative – such migrations seldom happen, and seldom as quickly as pundits would suggest.

I am suggesting that you evaluate the new vSphere pricing and your usage of server virtualization to determine whether a portion of your virtualization needs could be better served, at a lower cost, by an open source solution.

This balance between enterprise-grade commercial software and less mature but compelling open source alternatives has been playing out across much of the software market.

VMware’s vSphere 5 pricing could simply serve to accelerate the shift towards a mixture of commercial and open source in the virtualization arena.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

Simon Wardley writes an interesting post claiming that Microsoft’s biggest enemy is not Google, Facebook, or Apple – it’s Microsoft itself. Wardley suggests that inertia makes it easy for Microsoft to keep doing what it has done in the past with great success. Inertia is also an important force within IT departments. IT decision makers seeking to help their businesses differentiate from the competition should guard against technology inertia.

Bill Gates: Success is a lousy teacher
Wardley spoke at OSCON 2010 about how open source vendors could disrupt market incumbents by taking advantage of the incumbent’s existing business model.

Wardley quotes Bill Gates, who once noted “Success is a lousy teacher”. Wardley explains:

That’s one of those basic lessons which often gets forgotten in business. In this world of competition, there are two fronts to fight on. The external front includes those competitors who attempt to either gain a creative leadership position or to disrupt your existing model. The other front is internal and against your own past success.

Vendors often focus on external competition, but the ability to compete effectively externally is directly impacted by the degree to which a vendor’s corporate culture allows it to look beyond past success.

Historical success in a given product area creates sacred products which must be protected and definitely not commoditized when considering new opportunities or new competitors.

Wardley claims that Microsoft’s recent cloud moves, while admirable, aren’t enough to compete in a services-based marketplace built around open source:

Whilst MSFT has made much of a fanfare about its recent moves into the cloud, it was a probably a significant internal battle for MSFT just to make the change from products to services. However, this new world is likely to be rapidly commoditized to marketplaces based around open source and hence the real question becomes whether MSFT will be able to make the further change necessary to survive in that world?

Microsoft’s future business should be intertwined with open source in the domain of utility services. Unfortunately, the last group of people who are usually willing to accept such a change are those who have built careers in the previous domain e.g. products.

I’ve seen the scenario Wardley lays out play out in my product areas and across IBM. However, most of the time, we’ve been able to look beyond sacred products and try new business models that on the surface could commoditize our most important products. These actions have typically helped grow the overall IBM revenue base, and in many cases, further grow the penetration of those sacred products. Looking beyond past success isn’t easy for vendors, but it’s critical for long term viability.

IT departments must also fight inertia
There’s another angle to consider before concluding that vendors simply follow their inertia – they do. However, customers also follow their own corporate IT inertia. This in turn makes it possible for vendors to continue viewing the market as they have in the past.

Whether it’s past success or “just the way we do it here”, many IT departments I’ve interacted with put a premium on existing process, technologies, skills and buying preferences.

One can hardly blame IT decision makers considering the financial, and more importantly, skills investments that their companies have made with a given technology.

However, as is evident when considering the fate of vendors that cling too closely to sacred products and inertia, IT decision makers should look beyond inertia if they want to still be delivering value to the business in five years.

One approach to doing so is to allocate a portion of the IT budget for projects and technologies that run counter to the IT department’s technology and process inertia. Start with a less critical project initially, and learn from unforeseen challenges before applying these new technology choices throughout the IT department.

Developers, startups and perennial early adopters don’t let IT inertia get in the way. Companies that tend to fall into the early or late majority should also build plans to innovate outside their comfort zones.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

As open source usage has grown into the mainstream, have users started to contribute less time and money to open source projects, thereby putting the future of those projects at risk? The CEO of one leading open source-based company thinks so.

Open source loses its cachet
Today vendors are adding “cloud” into the description of their company or product for two reasons. First, in the hopes of riding the hype around cloud computing. Second, in order to shape the definition of what a cloud company or product is.

The above held true for “open source” five years ago.

Since then, open source has become much better understood by IT decision makers as a development, distribution, pricing and licensing model. As a result, as The 451 Group’s Matt Aslett explains, the term “open source” holds less value as a differentiator for vendors. Aslett writes:

“…but these are among the highest profile open source-related vendors, so the fact that half of them have dropped open source as an identifying differentiator in the last 12 months (and another two long before that) is not insignificant.”

User contributions on the decline?
Brian Gentile, CEO of JasperSoft, an open source business intelligence vendor, agrees with Aslett’s conclusion that the term “open source” has lost its differentiating ability now that open source is a mainstream option for many companies.

As open source has become mainstream, Gentile writes that he is seeing a decline in user contribution of time and money to open source communities. Gentile defines user contributions as follows:

“Open source communities thrive based on the community members donating either their time and/or money. Donating money typically comes in the form of buying or subscribing to the commercial versions of the open source products. Donating time can come in a wide variety of forms, including providing technical support for one another in forums, reviewing and commenting on projects, helping to QA a release candidate, assisting in localization efforts, and of course contributing code improvements (features, bug fixes and the like).”

Results from the 2010 Eclipse Survey support Gentile’s claims about user contributions of time declining.

In 2010, 41 percent of respondents, up from 27 percent in 2009, claimed they use open source software without contributing to the project.

Part of the decline in contribution is surely linked to corporate policies. The Eclipse survey found that 35 percent of respondents in 2010, down from 48 percent in 2009, claimed their employer’s corporate policies allowed employees to contribute to open source projects.

Open source users and customers are different
Is user contribution of money to an open source project also on the decline, as Gentile worries?

James Dixon, CTO and founder of Pentaho, also an open source business intelligence vendor, disagrees with Gentile’s notion of users contributing money to a project.

Dixon believes that attempting to sell an enterprise version of software and services to community members is a mistake, one which misses the distinction between users and customers. He writes:

“As a commercial open source (COSS) company you can provide tools for your community members to persuade their employers to become customers, and you can explain how this benefits both companies involved and the community. For most COSS companies it is impossible to monetize the community directly, and therefore ridiculous to try.”

Users can contribute time, customers can contribute money
It’s important to separate, as Dixon does, the expectations on users versus customers. For enterprise software, users seldom have the budget authority to become paying customers. Users can encourage IT decision makers to become customers.

Gentile is correct in stating that users of an open source project, especially an early stage project, contribute their time to a project.

As the project becomes widely adopted by users, companies decide to adopt the resulting product from the open source project. These companies contribute money to the vendor, who in turn uses the funds to further enhance the product and the open source project.

A decline in user contributions of time is not necessarily an issue. Nor should it be concerning that community users aren’t contributing money to a project, for the simple reason that they don’t have the budget authority to do so within an enterprise.

Over time, user contribution declines, but the project is sustained by the funds made available to the project through corporate purchasers of the product. In a sense, as projects mature, user contribution of time is inversely proportional to customer contribution of money.

Follow me on Twitter at SavioRodrigues. I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

NoSQL is still not well understood, as a term or a database market category, by enterprise IT decision makers. However, one NoSQL vendor, 10gen, creator of the open source MongoDB database, appears to be growing into enterprise accounts and distancing itself from competitors. If you’re considering, or curious about, NoSQL databases, spend some time looking at MongoDB.

Understanding not only SQL, aka NoSQL
While the term NoSQL suggests a product category that is anti-SQL or anti-relational databases, the term has evolved to mean “not only SQL”.

According to Nosql-database.org, there are over 122 NoSQL database products to date. These products differ from traditional relational databases in that they don’t rely on a relational model, are often schema-free, and favor eventual consistency over ACID transactions.

While companies needing to manage terabytes of data across commodity servers, such as Facebook, Foursquare or Shutterfly, have been early adopters of NoSQL databases, traditional enterprises such as Disney and Intuit are joining the NoSQL customer list.

Max Schireson, president of 10Gen, asserts that relational databases are here to stay and have an important role to play in tomorrow’s enterprise.

Schireson sees NoSQL and relational databases both being used within a given enterprise, albeit for different applications.

If this positioning sounds familiar, recall that MySQL attempted to paint a picture of co-habitation with enterprise database vendors.

If an application is processing sales orders and needs guaranteed transactions, a relational database supporting ACID transactions is a must. If an application is processing millions of events, such as click streams, to better optimize an online sales catalog, and losing a few of those events matters less than being able to scale the application and data across commodity servers, then a NoSQL database could be a perfect fit.
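
To make that distinction concrete, here’s a minimal sketch of the clickstream case using MongoDB’s Python driver, PyMongo; the connection string, database, collection and field names are illustrative assumptions, not a prescribed schema.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Illustrative connection; point this at your own MongoDB deployment.
client = MongoClient("mongodb://localhost:27017")
events = client["analytics"]["click_events"]

# Schema-free: each event document carries whatever fields the page emits.
events.insert_one({
    "session_id": "abc123",
    "path": "/catalog/item/42",
    "referrer": "search",
    "ts": datetime.now(timezone.utc),
})

# Aggregate clicks per catalog path to feed the optimization job.
for doc in events.aggregate([
    {"$group": {"_id": "$path", "clicks": {"$sum": 1}}},
    {"$sort": {"clicks": -1}},
    {"$limit": 10},
]):
    print(doc["_id"], doc["clicks"])
```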

MongoDB distances itself from NoSQL alternatives
While NoSQL databases such as Cassandra, originally developed and used by Facebook, or CouchDB get a lot of media attention, MongoDB appears to be the product to catch in this hot market.

Worldwide Google searches for various NoSQL product names show the marked increase in MongoDB and Mongo searches since January 2011. Google searches for MongoDB and Mongo exceeded searches for CouchDB, Couchbase, Membase, Cassandra, and HBase combined.

According to Indeed.com, jobs seeking MongoDB or Mongo skills have outpaced other leading NoSQL products. MongoDB and Mongo now represent the most sought after NoSQL skills amongst companies hiring on Indeed.com.

Recently announced platform-as-a-service offerings from Red Hat and VMware both feature MongoDB at the data services layer.

Schireson shared some stats on 10Gen’s commercial business growth into the enterprise with MongoDB.

Six months ago the majority of 10Gen customers were startups; today the majority are traditional enterprise customers. In fact, 10Gen counts five Fortune 100 companies amongst its over 200 paying customers.

With over 100,000 downloads per month and developer attendance at MongoDB conferences up 400 percent to nearly 2,000 across San Francisco, New York and Beijing, MongoDB traction continues to increase.

Schireson explained that many enterprises have developers interested in MongoDB, as the above download and conference attendance data backs up. However, enterprises are waiting for their peers to go first into the world of NoSQL.

Schireson revealed that securing Disney as a public MongoDB reference has led to increased enterprise interest in 10gen from MongoDB users.

MySQL poster child Craigslist adopts MongoDB
Another recent coup for 10Gen was winning Craigslist as a customer of MongoDB.

Craigslist’s Jeremy Zawodny, author of the popular High Performance MySQL book, recently spoke about Craigslist adopting MongoDB to handle Craigslist’s multi-billion document deployment. Zawodny explains Craigslist’s evolution from being a MySQL everywhere shop to selecting the appropriate database technology based on varying data and performance needs.

When Zawodny, a MySQL performance guru, gets behind MongoDB, it’s time for enterprises interested in NoSQL to consider MongoDB.

Follow me on Twitter at SavioRodrigues. I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”