April 2010

As open source usage becomes mainstream, it’s important to ensure you’re using products your company can rely on in the future and that your use complies with open source licensing.

Open source support provider OpenLogic reports that enterprises have over 330,000 open source software packages to choose from. Finding the right open source project, with the right license and the assurance of a viable future, can be difficult for enterprises, to say the least.

Finding the right open source product:
OpenLogic mines through these 330,000 packages to certify, and provide direct support subscriptions for, over 500 of them. OpenLogic uses a 42-point certification process to reduce the risk associated with a given open source package. By narrowing the field from 330,000 to approximately 500, OpenLogic helps enterprises focus their open source selections on projects with, amongst other things, a viable community, well-understood licensing, documentation and active maintenance by the project leader.

New to the open source project evaluation arena is SOS Open Source, an automated methodology from open source strategist Roberto Galoppini. The tool enables companies to determine the level of risk associated with using any given open source software package. SOS Open Source uses 24 metrics and information collected from open source project directories, forges and meta-forges. Galoppini explains that SOS Open Source is keenly focused on project strength, measured by the stability and maturity of the project and whether the project is backed by a predictably viable community. Related to the quality of the community, Galoppini’s methodology also measures the level of community or vendor support available. Finally, the methodology attempts to rate the possibility of project evolution, whether by the current project committers or by third parties. Funambol, an open source provider of cloud synchronization and push email, was recently rated highly using Galoppini’s SOS Open Source evaluation.

Ensuring compliance with open source licensing:
But what if your developers are already using open source without your knowledge? Well, there’s an app for that. Amongst others, Black Duck Software, OpenLogic and Protecode offer services that can crawl through your enterprise and report back on the use of open source software. In fact, these vendors can even crawl through the source code of your internally developed applications to ensure that open source libraries or code fragments are not being used in contravention of their associated licenses.
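At its simplest, this kind of scan is a recursive crawl that matches file contents against known license text. The sketch below is a toy illustration of that idea only, not how any of these commercial products actually work; the license signatures and file paths are hypothetical examples.

```python
import os

# Toy signatures: short phrases that identify common licenses.
# A real scanner matches fingerprints of code fragments against large
# databases of known packages, not literal strings like these.
LICENSE_SIGNATURES = {
    "GPL-2.0": "GNU General Public License",
    "Apache-2.0": "Apache License, Version 2.0",
    "MIT": "Permission is hereby granted, free of charge",
}

def scan_tree(root):
    """Recursively scan files under root; return (path, license) matches."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file; a real tool would log this
            for license_id, signature in LICENSE_SIGNATURES.items():
                if signature in text:
                    findings.append((path, license_id))
    return findings
```

Even this naive version conveys why such scans matter: a single vendored file carrying a copyleft license header can put obligations on the whole application it was pasted into.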

If your company hasn’t already set an open source usage policy, there’s no better time than the present to start down that path.

Follow me on twitter at: SavioRodrigues

PS: I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.”

Red Hat’s cloud strategy appears focused on retaining existing customers, not attracting new customers.

There’s little argument that Red Hat is the undisputed leader in the enterprise Linux market by the measure that counts most, revenue. However, Red Hat’s position as the leading Linux vendor for cloud workloads remains in dispute at best, and far from reality at worst. All signs point to Ubuntu as the future, if not current, leader in the Linux cloud workload arena.

First, data from the Ubuntu User Survey decidedly points to Ubuntu’s readiness for mission-critical workloads, with over 60 percent of respondents considering Ubuntu a viable platform for cloud-based deployments.

Second, statistics taken from Amazon EC2 and synthesized by The Cloud Market clearly point to Ubuntu’s leading position against other cloud operating systems in EC2 instances today.

With these facts in hand, one could have expected Red Hat to take steps to grow Red Hat Enterprise Linux (RHEL) adoption in cloud environments. In fact, Red Hat’s Cloud Access marketing page boldly claims:

“Red Hat is the first company to bring enterprise-class software, subscriptions, and support capabilities all built in to business and operational models that were designed specifically for the cloud.”

However, Red Hat announced a program in which existing customers, or new customers willing to purchase at least 25 active subscriptions of RHEL Advanced Platform Premium or RHEL Server Premium, could deploy unused RHEL subscriptions on Amazon EC2. With a minimum support price of $1,299 for RHEL Advanced Platform Premium, and a minimum of 25 subscriptions, the price of entry is $32,475. Well, you’ll actually need at least 26 subscriptions, so you can move subscription number 26 to Amazon EC2 with full 24×7 Red Hat support. As such, the price of entry is $33,774. I’m assuming that customers have to pay the full cost of Premium support per year even if the Amazon EC2 instance is not running 24x7x365. If it were otherwise, one would expect Red Hat’s marketing page to point out this nice feature. Additionally, once a customer elects to move an unused RHEL subscription to Amazon EC2, the subscription must remain there for a minimum of six months according to current eligibility guidelines.

These requirements seem at odds with the low cost of entry, ease of trial, selection and disposal, and pay-per-usage of software and hardware on public cloud infrastructure.

The other alternative is to use the beta of Red Hat Enterprise Linux on Amazon EC2 for a Basic EC2 server. This hourly beta offering provides unlimited email support with a 2-business-day response time at a cost of $0.21 per hour. This is a much easier way for customers new to RHEL to try it out in a cloud environment. If a customer decided to deploy a cloud workload on RHEL requiring 24×7 support, however, they would face the $33,774 price of entry calculated above.
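The arithmetic behind these price-of-entry figures is easy to verify. The sketch below recomputes both figures from the April 2010 list prices quoted above, and adds an upper bound on the hourly beta for comparison:

```python
RHEL_ADVANCED_PREMIUM = 1299  # USD per subscription per year (minimum)
MIN_SUBSCRIPTIONS = 25        # Cloud Access eligibility floor
BETA_HOURLY_RATE = 0.21       # USD per hour, RHEL beta on a Basic EC2 server

# Entry price if all 25 minimum subscriptions stay in the datacenter.
base_entry = RHEL_ADVANCED_PREMIUM * MIN_SUBSCRIPTIONS  # 25 x $1,299 = $32,475

# In practice a 26th subscription is needed so one is free to move to EC2.
entry_with_cloud = RHEL_ADVANCED_PREMIUM * (MIN_SUBSCRIPTIONS + 1)  # $33,774

# Upper bound on the hourly beta: one instance running around the clock.
beta_annual_max = BETA_HOURLY_RATE * 24 * 365  # $1,839.60 per year
```

Even running the beta nonstop for a full year costs under $2,000, more than an order of magnitude below the supported price of entry, which illustrates the gap between the two offerings.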

I wouldn’t be surprised if Red Hat, and frankly, other software vendors, try several different pricing models before finding the approach that balances the vendor’s revenue potential from licenses deployed in customers’ datacenters with the flexibility and freedom of pay-by-usage pricing in the cloud.


Fears about the future of an open source vendor or project are laid to rest with the knowledge that “the source is available”. Trust in an open source vendor, project or product is often linked to source code availability. However, this trust is, at the very least, overstated.

I wrote about a license change to Solaris 10 a few weeks ago and made the conscious decision not to talk about OpenSolaris. Seeing as Oracle was still working through its acquisition of Sun, I felt it prudent to give Oracle more time to lay out its vision for OpenSolaris. My feelings haven’t changed. I use OpenSolaris below as an example based on currently available information. I do not want to imply anything about the future of OpenSolaris. Large companies can take time to make decisions.

OpenSolaris is an open source operating system and Sun Solaris is a proprietary distribution of OpenSolaris. While Sun helped form an OpenSolaris Governing Board (OGB), decisions surrounding OpenSolaris releases and technology roadmaps remained within Sun’s control.

We can fork, right?!??:
Earlier this week the OpenSolaris mailing list was abuzz about the lack of information about a forthcoming release and the overall development model for OpenSolaris. Not surprisingly, one commenter suggested that the OGB take a stand and ask Oracle to clarify the future of OpenSolaris releases or threaten to sever ties with Oracle. Said differently, “the source is available, so be warned, Oracle”. OpenSolaris community member Ben Rockwood wrote a measured response:

“Here is where I want to be careful. Asking for autonomy at this juncture would be very foolish I think. If they grant it, they will essentially expect us to fork and re-establish the community without Sun/Oracle resources. That means the website goes, communication is severed, employees are instructed not to putback to the autonomous codebase, etc. I think it would go very very badly and we’d essentially help kill the community.

The size of the community at present is pretty small and relatively inactive. Support for Nexenta, Belenix, etc, is orders of magnitude less so. These projects are productive and active, but the numbers are tiny compared to the official community. Add it all up and I think we have little reason to think that an autonomous community would really have any real support unless we get a sudden and massive influx of contributing developers. So it is, imho, a non-starter.”

Another community member, Martin Bochnig, also cautious about suggesting a fork, asked:

“How are you going to replace the bright brilliant skilled and experienced Sun kernel engineers?”

Finally, community member, Damian Wojslaw, wrote:

“We do have a community. It’s alive and well. It’s a community of users and administrators. We don’t have community of developers. And this is why we can’t just fork off and start anew.”

A vibrant community can exist even while that community wouldn’t be reasonably able to support a fork. This is true for many open source communities – lots of users, very few of whom would be qualified to continue developing the open source project should a fork occur, and even fewer who would be interested in doing so.

Insurance when forking isn’t viable:
Adopting open source products from multi-vendor communities is the best insurance for enterprises. There are, by definition, multiple stakeholders that could reasonably be expected to guide the open source project forward should a fork be required. However, open source products developed by a single-vendor-controlled community are far more common. In these cases, the likelihood that an organization could reasonably support a fork is higher if a strong partner or system integrator ecosystem exists around the open source project. Another suggestion is to use the paid version of the open source project. Paying the vendor for its work is a good way of ensuring that the vendor isn’t motivated to do awkward things that boost revenue while hurting users.

The best advice may be to simply ignore, or at least put much less weight on, the availability of source code when making a product selection.


Lately there’s been lots of blog and Twitter chatter about how to recognize an open source product.  While an interesting intellectual exercise, the debate could also have real-world impact on IT purchasing decisions.

Open source purity:
I used to spend time debating the open source “purity” of a given open source vendor.  I moved on when Shaun Connolly, of JBoss at the time, wrote this post titled “Open Source Community and Barack Obama”.  In 2010, it’s incredibly difficult to define an “open source vendor”, because virtually every IT vendor uses open source in its products, contributes to open source or provides services around open source.

The recent debate about open source “purity” extends beyond the vendor, and instead focuses on products.  The debate is being spurred by the increasingly popular open core licensing approach and the delivery of software products through cloud offerings.  The 451 Group’s Matt Aslett writes:

“It ought to be simple: either the software meets the Open Source Definition or it does not. But it is not always easy to tell what license is being used, and in the case of software being delivered as a service, does it matter anyway?

The ability to deliver software as a hosted service enables some companies that are claimed to be 100% open source to offer customers software for which the source code is not available.”

In the perfect world, customers would pay vendors for the value they receive from usage of free and open source products.  Since that hasn’t really panned out, open core licensing and cloud delivery of open source software are gaining attention as leading approaches to capture revenue around open source products.

Keep an eye on freedom of action:
Customers using or considering purchasing a product that falls into the open core licensing category should be aware that the enterprise commercial product they purchase is unlikely to offer the same freedoms as the open source community edition that their developers likely used and became advocates for.  Some enterprise open core commercial products don’t even offer source code access.  This obviously limits freedom of future action versus using the open source community edition.  Other open core commercial products do provide source code access, but only as long as your subscription license is current.  As such, it’s important to understand how easily your company can shift from the enterprise commercial open core product to the open source community edition.  It’s also important to understand whether the enterprise features are really product extensions or integral to your usage.  Gartner analyst Brian Prentice has argued that customers will eventually need to evaluate and price the enterprise commercial version of an open core product.  This is all the more true if there isn’t a clear distinction between which kinds of features fall into the open source community edition versus the enterprise open core commercial product.

Customers using or considering using an offering that falls into the cloud delivery of open source category need to consider two elements of freedom of action.  First, is it possible to run the product on another cloud infrastructure or within the customer’s own data center?  Second, and often more important, is the customer’s data locked into the vendor’s cloud offering?

While usage of open source licensed products is heading in only one direction, it’s important for decision makers to understand that “open source” is used in many shapes and forms.  With no malice intended, your interpretation may not match your vendor’s interpretation.


With the iPad’s imminent release and growing adoption of touchscreen smartphones, it’s only a matter of time before natural user interfaces become a mainstream IT requirement.

Useful, usable and desirable applications:
I’ve been spending some time learning about enterprises that are evolving their existing web applications for devices other than a personal computer.  Several increasingly related trends are behind this evolution.  First, enterprises are adopting Web 2.0 design and interaction practices – yes, Web 2.0 is still an investment area for enterprises.  Second, these enterprises are being pushed by their users, and their competitors, to expose enterprise applications to mobile devices.  Third, enterprises are beginning to expand social interactivity and communications enablement inside their web applications.  Finally, enterprises are beginning to expose their web application content to third-party sites by exposing APIs to their enterprise web applications.  This is being done in order to deliver content to users where they are, rather than expecting that users will always end up on the enterprise website.  The central driver behind these four trends is, not surprisingly, to deliver better user experiences.  However, “better” only tells half the story.  After reading Forrester analyst Mike Gualtieri’s post about user experience, I realized that “better” really means experiences that are useful, usable and desirable.

Growing interest in natural user interfaces:
As serendipity would have it, Forrester’s Jeffrey Hammond just wrote about natural user interfaces, which absolutely embody useful, usable and desirable user experiences.  A Forrester and Dr. Dobb’s developer survey conducted in 3Q09 suggests that multi-touch/natural user interfaces weren’t exactly at the top of the list of emerging trends respondents were interested in.  However, “Mobile Apps”, “RIAs” and “Social Networking Apps” are very much related to natural user interfaces.  An enterprise building out an RIA or social networking application has to consider how that application will behave on a mobile device.  As such, the interest in natural user interfaces is likely understated and growing every day.

Jeffrey goes on to write:

“We’ve had a few inquires this quarter into NUIs, and whether the time is right to start firing up R&D efforts within large application development shops. In general, I think the answer is “Yes” when it comes to multi-touch, not just because of Mobile devices like iPhone and Android phones, but also because of the native capabilities built into .NET 4.0. As organizations refresh PCs and move toward Windows 7 and .NET 4.0, the number of multi-touch ready devices is about to increase dramatically.”

Next, add Walt Mossberg’s review of the iPad:

“I believe this beautiful new touch-screen device from Apple has the potential to change portable computing profoundly, and to challenge the primacy of the laptop. It could even help, eventually, to propel the finger-driven, multitouch user interface ahead of the mouse-driven interface that has prevailed for decades.”

If the iPad can live up to even half of its hype, enterprises will soon begin to target it as they have the iPhone and iPod. For instance, here’s a great iPhone application from USAA which lets users deposit checks by taking a picture of the check.  Appcelerator just released mobile developer survey data that continues to show interest in building applications for devices that enable natural user interfaces, such as the iPad, iPhone and Android platform.

Broad reach or highly tailored experience:
One of the biggest challenges that enterprises face in building useful, usable and desirable user experiences is selecting the device to design for.  An application that receives rave reviews from iPad users won’t necessarily run on an Android device or a BlackBerry.  Open source solutions from PhoneGap, Appcelerator and Rhomobile seek to address this issue by insulating developers and applications from the underlying mobile device the application will run on.  It remains to be seen whether enterprises will take the device-agnostic or the native-device route when designing new application experiences.  The former approach allows the enterprise to reach a larger customer base than the latter; a very important consideration when facing constrained IT budgets.  If the mobile device and operating system market ends in a two-horse race between iPhone/iPod/iPad and Android, we may well see enterprises targeting each with native applications.  However, today, the BlackBerry and Symbian platforms are too large to ignore.  In any case, now would be a good time for IT departments to begin proofs of concept to consider whether a device-agnostic or native-device application is appropriate for the needs of the business and its end users.

As a user, I can’t help but get excited about these new user experiences. Oh, and I still want flying cars.


While Novell’s ownership of Unix was confirmed by a jury earlier this week, Novell’s future as an independent company, at least in its current form, is far from secure.  With the recent jury ruling, a Novell acquisition could impact Linux vendors and customers.

Novell recently secured a jury decision against SCO pertaining to the ownership of Unix. Here are two relevant questions and answers from Ian Bruce, Novell’s director of PR:

Q: Given that SCO barely exists any more, what is the real relevance of all this?
A: The jury has confirmed Novell’s ownership of the Unix copyrights, which SCO had asserted to own in its attack on Linux. An adverse decision would have had profound implications for the Linux community.

Q: If Novell owns the copyrights to Unix, what does that mean for Linux?
A: We own the copyrights and we will continue to protect the open source community, including Linux.

Consider that Novell’s board rejected an unsolicited takeover offer from investment fund Elliott Associates just two weeks ago.  Novell’s board said the offer “undervalues the company’s franchise and growth prospects.”  However, the board did commit to a review of its alternatives, including an outright sale.

Many IT vendors could be considered viable candidates for acquiring Novell or part of its assets.  For instance, rumors, jokes and suggestions that Microsoft should or could acquire Novell go back to 2007 and at least one April Fools’ article.  To date, as Gartner analyst Brian Prentice noted at OSBC, Microsoft’s open source strategy remains muddled: it is an enabler of other open source firms rather than an open source vendor in its own right.  Acquiring Novell and distributing SUSE Linux would dramatically change that position.  It would also allow Microsoft to differentiate against Red Hat in a way that Red Hat could not match: choice.  Most customers I speak to have heterogeneous systems, so finding a customer that uses both Windows servers and Linux servers is the norm, not the exception.  While Microsoft and Novell can, and aim to, jointly address these heterogeneous customers today, a streamlined development, marketing and sales process could benefit customers and Microsoft.  This being April Fools’ Day, one also has to consider the notion of Microsoft acquiring Novell in order to own the copyrights to Unix, which could be used in thinly veiled threats against Linux users and customers.  Personally, I don’t think suing customers is good for business. [Update 2010-04-01: I am not suggesting that Microsoft would or could legally do this. I am not a lawyer. I included this idea because everyone jumps to it when Novell’s future is discussed.  But as @Kirovs comments here, Novell has released the code under the GPL, thereby affecting the legal rights of Novell’s potential acquirer and of other Linux vendors.]
