Cloud Computing

With Microsoft’s Windows Azure striving for greater relevance and adoption, a relatively unknown vendor, Tier 3, is providing a cloud alternative for Microsoft .NET applications. Tier 3 is using VMware’s open source code as the basis of its offering, which opens the door to direct competition between VMware and Microsoft for .NET cloud workloads in the future.

Tier 3’s .NET play
Colleague J. Peter Bruzzese recently provided an update on new pricing, open source support and a free trial of Windows Azure. Support for Node.js and Apache Hadoop on Azure is sure to attract developer attention. Whether the attention, and the free trial, will turn into paying users is an open question. That said, Azure remains the leading cloud destination for Microsoft development shops seeking a platform as a service offering. That’ll change if Tier 3, and maybe VMware, has a say.

Tier 3 recently open sourced Iron Foundry, a platform for cloud applications built using Microsoft’s .NET Framework. Iron Foundry is a fork of VMware’s Cloud Foundry open source platform as a service. According to Tier 3,

we’ve been big supporters of Cloud Foundry–the VMware-led, open-source PaaS framework–from the beginning. That said, we’re a .NET shop and many of our customers’ most critical applications are .NET-based.

It was natural, then, to start with the Cloud Foundry code and extend it to support .NET. Tier 3 is continuing its efforts to align elements of the core Cloud Foundry code to better support Windows and .NET technologies in areas such as command-line support on Windows, which Cloud Foundry currently delivers through a Ruby application. Tier 3 is also working with the Cloud Foundry community to contribute elements of Iron Foundry back into Cloud Foundry and into other Tier 3-led open source projects.

Tier 3 offers users two routes to Iron Foundry. Open source-savvy users can download the Iron Foundry code from GitHub under the Apache 2 license and run it as they wish. Alternatively, users can use a test bed environment of Iron Foundry, hosted on Tier 3’s infrastructure, for 90 days at no charge. Pricing for the hosted offering has not been released; as I’ve discussed before, this should raise concerns about committing to a platform before knowing what it will cost.

VMware’s path to .NET support
It’ll be interesting to see how Microsoft and VMware react to Iron Foundry over time. VMware appears to have the most to gain, and the least to lose, with Iron Foundry.

Since Iron Foundry is a fork of Cloud Foundry, there’s just enough of a relationship between the two that VMware can claim .NET support with Cloud Foundry. In fact, VMware can claim that support with very little direct development effort of its own, an obvious benefit of its open source and developer outreach strategy around Cloud Foundry.

VMware could, at a later time, take the open sourced Iron Foundry code and offer native .NET support within the base Cloud Foundry open source project and related commercial offerings from VMware. Considering that Microsoft is aggressively pushing Hyper-V into VMware ESX environments, there’s sure to be a desire within VMware to fight back on Microsoft’s turf.

On the other hand, Iron Foundry is a third-party offering over which VMware holds little say. If it falls flat against Windows Azure, VMware loses very little, and didn’t have to divert its development attention away from its Java-based offerings on Cloud Foundry.

Microsoft, on the other hand, faces the threat of Iron Foundry attracting developer attention away from Windows Azure. Until now, Microsoft has been able to expand Windows Azure into areas such as Tomcat, Node.js and Hadoop support without having to worry about its bread-and-butter offering: support for .NET-based applications in the cloud. Having to compete for .NET application workloads will take resources away from efforts to grow platform support for non-Microsoft technologies on Windows Azure.

Request details from Tier 3 and VMware
As a user, the recommendation to understand pricing before devoting time and resources holds true for Tier 3’s offering. The added dynamic of an established vendor like VMware potentially seizing the torch from Tier 3, either through acquisition or a competitive offering, could prove attractive to some .NET customers seeking an alternative to Windows Azure.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

Reading a recent interview with Eucalyptus CEO Marten Mickos, I’m beginning to reconsider my views on Eucalyptus versus OpenStack becoming the dominant open source cloud platform.

OpenStack’s rise
The vendor attention around OpenStack of late has been nothing short of amazing. OpenStack began as a project controlled by Rackspace; since then, vendors such as Dell, Citrix, and HP have joined the OpenStack open source community. Rackspace has given control of the project to the OpenStack Foundation, apparently at the behest of the large vendors contributing to the project.

However, as Mickos states, OpenStack is still a work in progress and not production ready – yet.

A tale of two open source projects
Like many, I’d assumed that the community around OpenStack gave it the critical mass required for OpenStack to become the leading open source cloud platform. I’m questioning that assumption now.

To explain why, let’s look back at two leading open source projects, Linux and the Apache HTTP Server. I use “Linux project” in the broadest sense, including the Linux kernel and all the various open source packages that round out a typical Linux distribution.

History has shown us that when an open source project is dealing with a valuable layer of the software stack, that project has tended to be controlled by a single vendor who can directly monetize the project. The term “value” is used to represent differentiation that can be monetized. While multiple implementations or distributions may result from the project, a single vendor becomes the dominant provider in the space. For Linux, that’s Red Hat with its Red Hat Enterprise Linux products. In the open source database space, MySQL would fit this model.

History also shows us that when an open source project is dealing with a commodity layer of the software stack, the project tends to be controlled by a foundation. In these cases, the project is used as a piece of a higher value product which provides differentiated value, and hence can be monetized. Said differently, the open source project itself is indirectly monetized through the higher value product. The Apache HTTP Server, used within most commercial application server products, or Eclipse, used within many commercial application development products, are two examples.

Eucalyptus and OpenStack’s future
If history is to repeat itself, then we need to consider whether an open source cloud platform is a valuable and directly monetizable part of the software stack or not. If it is, then a single vendor controlled open source project has a higher potential of success than a foundation-controlled project.

Open source foundations are great and play a valuable role with various open source projects. However, the mixture of ten or one hundred vendor motivations makes it increasingly difficult to meet the needs of the project and the monetization goals of each vendor.

Keep in mind that this only applies if the project is a directly monetizable layer of the software stack. As an outsider looking in, this appears to be the fundamental difference between Eucalyptus and the overall OpenStack community.

The OpenStack community, especially vendors such as Dell, HP and Rackspace, view OpenStack as addressing a part of the software stack that isn’t directly monetizable. These vendors would rather use OpenStack to build a higher value product that can be monetized. For instance, Dell and HP would likely sell “Cloud Platform Ready” hardware systems, rather than selling an OpenStack software product itself.

Clearly Eucalyptus disagrees, and Mickos claims to be growing customers “at an amazing rate”. Eucalyptus has grown from 15 to 70 employees over the past year and added a new headquarters in London to grow in EMEA.

In the end, IT buyers will decide whether Eucalyptus or the OpenStack community made the right bet. I tend to agree with Eucalyptus that a cloud platform is indeed a valuable, and hence directly monetizable, layer of the software stack.

What do you think?


I’ve been on the road with clients and partners of late, and one thing I can attest to, other than the fact that trains are a much more civilized form of travel than planes, is that enterprise interest in cloud greatly outpaces actual cloud investments.

The second thing I can attest to is, at the highest levels of companies, there’s a realization that today’s approach to IT is suboptimal. Cloud computing is supposed to help, but C-level folks aren’t convinced. Why? Because IT is stuck in the weeds and still isn’t thinking about what end users care about, and how to serve end users through cloud computing.

IT values infrastructure, while end users value applications
Applications have value to end users; all the storage, networking, compute, operating systems, hypervisors and middleware that underpin these applications are, from an end user’s standpoint, irrelevant. We in IT find these piece parts incredibly relevant, sometimes even sexy. Many careers in IT are spent going deep on one of these piece parts, and many services hours are spent integrating products from each piece part into a platform to run the application, you know, the thing the end user cares about.

It pains us as IT professionals not to have control over each and every layer of the stack mentioned above. We want not only control; we want to tinker with each layer of the stack. Vendors provide best practices for their layer of the stack and ask us to follow these guidelines. Sometimes we do, but most times we think our particular environment is so different from everyone else’s that we need those five additional configuration tweaks. We love the control.

Giving up a little control for a lot of benefit
I couldn’t fathom why any self-respecting IT professional would buy an iPhone. Sure, it was beautiful and easy to use, but could I install additional memory? Could I change the battery? Could I run any application I wanted? Simply put, would I have the same level of control over the device as I’d become accustomed to?

Some developers asked whether building an iOS application would give them the same level of control and flexibility they were accustomed to with Web and Windows applications.

The answer was no. I couldn’t do any of those things, and developers had to live within the confines of iOS APIs.

And yet, just look at how much better life is for end users and iOS developers as a result of Apple saying “no” to the degree of control, configuration and tinkering we’re all so accustomed to within any IT organization.

Cloud vendors still stuck in IT weeds, but for how much longer?
Try applying lessons from the iPhone to today’s cloud offerings. To date, the most successful cloud provider, Amazon, enables IT to remain stuck in the weeds, with virtually all of the control and complexity they’re used to. Is it any wonder that C-level folks aren’t rushing to approve a “cloud project”?

OpenStack, the open source cloud computing platform, is firmly rooted in the infrastructure as a service layer of the cloud computing spectrum. For all its aspirations, OpenStack doesn’t remove the complexity of piecing together storage, networking, compute resources and hypervisors from varying vendors.

Nebula, an OpenStack based startup that I’ve previously covered, tries to simplify the IT infrastructure piece through an appliance offering. But there’s still a lot of work to provision a platform for the things your end users, and your C-level managers, care about: applications.

In announcing Oracle’s public cloud offerings, Larry Ellison called out Salesforce.com as the “Roach Motel” of cloud services. While true, to a degree, what Larry neglected to mention is the immense value that Salesforce.com provides to developers, and ultimately end users, by providing a platform for applications. Sure, those applications have to fit within the APIs supported by Salesforce.com. But the fact that Salesforce.com’s platform as a service is not standards based, as Ellison pointed out in a roundabout fashion, should not be held against platform as a service cloud offerings in general.

Make no mistake that enterprise vendors, many of whom are bringing out enterprise cloud offerings, are going to take a page out of the Apple playbook. In fact, some already are. IBM talks about workload optimized systems. Oracle talks about hardware and software engineered together.

These offerings take away much of the time and challenge of building IT environments from piece parts. These environments fast-track the delivery of applications to end users. Some IT departments will resist these pre-integrated products, especially in the cloud arena. As I mentioned, we IT folk like control. The fact that an order of magnitude too much control leads to complexity and gets in the way of providing applications to end users is often an afterthought. For how much longer?


HP announced intentions to take on Amazon in the public cloud infrastructure as a service (IaaS) arena. However, the beta announcement has little to no information about why anyone should consider HP Cloud over Amazon and other public cloud IaaS providers.

Little to differentiate HP Cloud Services thus far
HP’s recently launched beta of HP Cloud Services provides users access to two initial offerings, HP Cloud Compute and HP Cloud Object Storage. HP describes the beta as an opportunity to try these two services “through our easy to use, web-based UI on top of open, RESTful APIs, based on HP’s world-class hardware and software, and OpenStack technology.”
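HP hasn’t published the details of those APIs at this point, so purely as a sketch: OpenStack-style RESTful compute APIs typically take a small JSON body naming the server, image, and instance size, authenticated with a token header. Everything below, the path, the IDs, and the token, is a placeholder for illustration, not HP Cloud’s actual interface:

```python
import json

def build_create_server_request(name, image_ref, flavor_ref, auth_token):
    """Build an OpenStack Nova-style 'create server' request without sending it.

    The path, IDs and token here are placeholders for illustration,
    not HP Cloud's actual endpoints or credentials.
    """
    body = json.dumps({"server": {
        "name": name,
        "imageRef": image_ref,    # which OS image to boot
        "flavorRef": flavor_ref,  # which instance size to use
    }})
    headers = {
        "Content-Type": "application/json",
        "X-Auth-Token": auth_token,  # obtained from a prior authentication call
    }
    return ("POST", "/v1.1/servers", headers, body)

method, path, headers, body = build_create_server_request(
    "demo-node", "placeholder-image-id", "placeholder-flavor-id", "placeholder-token")
```

The appeal of this style is that the whole provisioning surface is just HTTP plus JSON, which any language or tool can drive.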

These two cloud offerings compete directly with Amazon’s Elastic Compute Cloud (EC2) and Simple Storage Service (S3) cloud services.

In describing HP Cloud Compute or HP Cloud Object Storage, HP makes no claims about why a company, ISV or developer should be interested in HP’s public cloud, over Amazon’s AWS cloud or other alternatives.

Enterprise-grade SLAs, management and monitoring, hybrid cloud support, or differentiated pricing would all have been areas HP could have used to differentiate HP Cloud Services.

But no. Instead, there is a seemingly random point about the HP Cloud being based on OpenStack technology. A point that received a lot of press, mind you. But let’s look at the reality here. HP joined the nascent OpenStack project on July 27, 2011. Knowing a thing or two about launching products within a large company, it’s very difficult to believe that HP could have altered its HP Cloud offerings in a meaningful fashion in a month.

HP’s cloud blog does make the OpenStack effort a little more real. HP’s Emil Sayegh writes: “HP developers are already active and many of our ideas will be shared at the upcoming OpenStack Design Summit and Conference, of which we are a sponsor.”

At this point, the OpenStack linkage with HP Cloud Services seems like a distraction. Hopefully this will change over time.

Why HP didn’t make more reference to its monitoring and management capabilities, areas where HP could clearly differentiate itself from Amazon AWS, is an open question. It could be that HP is targeting the broad market and is less interested in enterprises at this time. The fact that HP is requiring a credit card for billing could be a tip-off here.

Asking clients interested in public clouds why they’re not using Amazon AWS today, I’ve often heard responses to the effect: “because my IT department doesn’t run on a credit card.”

Billing through a credit card absolutely lowers the barrier to entry for HP Cloud Services. But it also turns off enterprise IT departments.

HP’s silence on pricing poses barrier to entry
Staying on the pricing thought, HP states the following on its website:

Stay tuned for information on pricing. We’ll communicate more before we begin charging for services.

Developers and enterprise IT should be concerned about devoting time to HP Cloud Services before pricing is known. It’s curious that HP decided to launch the beta without any pricing details just weeks after Google faced developer backlash after substantially raising prices once App Engine left preview mode.

Considering HP’s enterprise software and hardware heritage, one could argue that HP will price higher than Amazon’s AWS but offer higher value to enterprises. However, the focus on broad-based developers, and the requirement of a credit card for access to the beta, suggests aggressive pricing versus Amazon’s AWS. We’ll have to wait for additional information from HP to know for sure. If that makes you uncomfortable about approving proof-of-concept usage of HP Cloud Services, it should.

Ask for clarity before devoting your time
Taking my vendor hat off for a minute, it’s absolutely within your rights as buyers and users to ask vendors, HP in this case, for clarity before making investment decisions. You and your teams have too much on your plates to work on proof of concepts without understanding how your business will benefit and what it’ll cost.


As with any new trend hyped up by vendors and pundits, developers and CIOs interested in cloud must invest their time and budgets cautiously. Even with all the great new product and vision announcements at VMworld and DreamForce this week, two announcements will make it more difficult for developers and CIOs to leap into their next cloud investment with confidence; Google, VMware, and Salesforce.com, three vendors vying for cloud leadership status, share the blame.

Preview pricing has no place in the enterprise
Google products are well known for their beta status well into their public life cycles. The beta, or preview, moniker is fun and cutesy, until you’re trying to establish an enterprise foothold, which Google App Engine is trying to do.

The problem with betas and previews, aside from the lack of SLA support for enterprise production workloads, is the uncertain pricing associated with pre-GA products and offerings.

This point became crystal clear when Google announced new pricing for its App Engine cloud platform. The Hacker News and Google Groups message boards dedicated to App Engine are filled with developers complaining about dramatic cost increases, anywhere from 50 percent to over 2,800 percent. One developer facing a 2,800 percent increase writes: “we are moving 22 servers away. Already started the process to move to AWS”.

Amazon Web Services appears to be the beneficiary of Google’s new pricing announcement. Enterprise developer and CIO confidence in using pre-GA cloud services definitely takes a hit with Google’s new pricing.

Complex cloud pricing poses a barrier for enterprises
It’s been said before that Google, for all its greatness, just doesn’t understand the enterprise software market; take a look at the current App Engine pricing model for proof.

Pricing per usage of bandwidth or compute instances is increasingly well understood by IT. In fact, these were the key elements of the original App Engine pricing model when the service was still in preview mode.

Pricing for five different API uses, as Google has introduced with the new App Engine pricing, is overly complex at best. Does the priced-API model better reflect Google’s costs, and give developers and CIOs an opportunity to reduce their costs by choosing cost-effective APIs? Yes. But it’s also confusing and complex. In some respects, the new pricing model feels like Google let really smart engineers, or actuaries, set the pricing model as a fun math exercise.
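To see the complexity argument concretely, compare the two models with hypothetical rates (the numbers below are illustrative only, not Google’s actual prices): a flat instance-hour model needs one input, while a per-API model needs a usage forecast for every metered API before a bill can even be predicted.

```python
# All rates below are hypothetical, for illustration only -- not Google's actual prices.
FLAT_RATE_PER_INSTANCE_HOUR = 0.08

PER_API_RATES = {          # the per-API model: a separate rate per metered operation
    "datastore_write": 1.0e-6,
    "datastore_read":  7.0e-7,
    "channel_open":    1.0e-4,
    "email_sent":      1.0e-4,
    "xmpp_stanza":     1.0e-6,
}

def flat_bill(instance_hours):
    """Flat model estimate: one input is enough."""
    return instance_hours * FLAT_RATE_PER_INSTANCE_HOUR

def per_api_bill(instance_hours, api_calls):
    """Per-API estimate: needs a monthly call-count forecast for every metered API."""
    base = instance_hours * FLAT_RATE_PER_INSTANCE_HOUR
    return base + sum(PER_API_RATES[name] * count
                      for name, count in api_calls.items())

hours = 720  # one always-on instance for a month
simple_estimate = flat_bill(hours)
complex_estimate = per_api_bill(hours, {"datastore_write": 5_000_000,
                                        "datastore_read": 20_000_000})
```

Under the per-API model, a developer who cannot forecast, say, datastore writes per month cannot forecast the bill at all, which is exactly the budgeting problem enterprises will balk at.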

For enterprises, the dramatically increased prices and added complexity of App Engine’s new pricing model will become a cautionary tale for anyone pushing an enterprise to adopt a cloud offering before its pricing and pricing metrics are established.

Cloud leaders aim to control the entire technology stack
The second announcement, or lack thereof, that will affect cloud adoption is the news that “VMforce is dead”, to borrow words from Gartner analyst Yefim Natis.

A little over a year ago, Salesforce.com and VMware made news by announcing a strategic alliance to let VMware and Spring developers build and deploy applications onto Salesforce.com’s cloud platform.

Yefim broke the news about VMforce:

Yesterday at the VMworld conference Tod Nielsen, a VMware executive leading its platform efforts, announced that VMforce will not be delivered, CloudFoundry technology will not run in the Salesforce.com data center, and users of Force.com will be enabled to access CloudFoundry in some unspecified way as a compensating feature. Today Byron Sebastian, Salesforce.com platform executive, confirmed it. VMforce is dead.

Yefim repeats a long-standing Gartner maxim, “the only real partnerships are acquisitions”. Salesforce.com went out and acquired Heroku to replace the VMware capabilities in VMforce.

Platform vendors, such as IBM, Microsoft, Oracle and SAP, control the entire technology stack underlying their platforms. As Yefim points out, this strategy will be replicated in the cloud arena. It’ll happen because cloud vendors, such as Salesforce.com, are vying to join the ranks of platform vendors.

Enterprises and developers relying on cloud providers whose platforms are a collection of partnerships and strategic alliances are walking a slippery slope.

When these partnerships break down, developer and IT investments in applications that relied on these partnerships need to be migrated, rewritten or thrown away, resulting in wasted time, effort and money.

Enterprise developers and CIOs attending or following the news from VMworld and DreamForce 2011 have lots of exciting products and services to consider spending their time and money on. However, they’ll be wary of doing so without clear, reliable long-term pricing and platforms that a single vendor can deliver. This higher level of scrutiny is good for the cloud market, for clients and vendors alike.


With a string of recent distribution and collaboration announcements, it’s time to look at Cloud Foundry’s progress since its beta announcement in April 2011.

VMware harvests fruits of SpringSource Acquisition
VMware acquired SpringSource a little over two years ago. At the time, I wrote that the hype surrounding the deal greatly overshadowed the real opportunities that SpringSource brought to VMware.

After re-reading my original analysis, I still stand by the post. Even today, while SpringSource technology underpins VMware products like vFabric and Cloud Foundry, neither could be viewed as helping to move VMware’s revenue needle in a noticeable fashion.

VMware CEO Paul Maritz confirmed this during VMware’s 2Q2011 earnings call:

…as well as we continue to invest in the Spring Framework and the combination of the Spring Framework with Cloud Foundry. But I think it would be fair to say we’re still plowing the ground there. And we expect those investments to pay off well over the longer term, but we’re still in the development phases of the market.

Considering the typical five-year payback periods used to evaluate acquisitions, all I can think is that VMware has a busy three years ahead of it to justify the nearly $420 million it paid for SpringSource. That said, with SpringSource technology, VMware is in a significantly better position to grow beyond a hypervisor vendor into a platform vendor alongside the likes of IBM, Microsoft and Oracle.

Whether VMware can pull it off is still to be seen.

VMware expands Cloud Foundry distribution channels
This week, VMware announced deals with Canonical, Dell and enStratus to significantly expand distribution channels for Cloud Foundry technology.

Of these, the Canonical deal appears to be most interesting. Cloud Foundry can benefit from Ubuntu’s leading share of Linux cloud and virtualization deployments.

In explaining the collaboration with Canonical, VMware staff wrote:

Now starting with the 11.10 release both the (Cloud Foundry) VMC Client, and VCAP server functionality will be available directly as Ubuntu packages created by Canonical. With over 20 million active desktop users and a strong IaaS server OS popularity it represents an important milestone for the open source distribution of Cloud Foundry, and is just the beginning of an ongoing collaboration with Canonical. Having the VMC client pre-installed and ready on millions of developer desktops makes a Cloud Foundry app deployment just a few commands away for anyone using Ubuntu.

Cloud Foundry interest expanding, but not yet a game changer
Against this backdrop of potential opportunity for Cloud Foundry adoption is the reality of usage and interest to date.

During VMware’s 2Q2011 earnings release, VMware’s prepared comments highlighted 25,000 developers signing up for Cloud Foundry. That is certainly a respectable number of interested users in the three months since the beta announcement. It will be interesting to watch this figure over time; it’s not uncommon for new products to gain interest when first announced, only to trail off in the long run.

It is interesting, however, that the various Cloud Foundry Git repositories on GitHub are “watched”, a proxy for interest level among GitHub users, by fewer than 800 users, while leading repositories count well over 5,000 watchers.
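For anyone who wants to track this proxy themselves, repository metadata, including watcher counts, is exposed as JSON by GitHub’s public repository API. A minimal sketch (the sample JSON below is a trimmed illustration, not live data):

```python
import json

def watcher_count(repo_json):
    """Pull the watcher count out of a GitHub v3 /repos/{owner}/{repo} JSON response."""
    return json.loads(repo_json)["watchers_count"]

# A trimmed, illustrative sample of the JSON such a call returns
# (fetched in practice from https://api.github.com/repos/<owner>/<repo>):
sample = '{"name": "vcap", "watchers_count": 780, "forks_count": 150}'
print(watcher_count(sample))  # 780
```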

Of note, interest in the Cloud Foundry project targeted at Java applications is less than 20 percent of the interest of the overall Cloud Foundry project.

Considering the revenue that Java attracts from enterprises, even in the face of languages such as Ruby or PHP, Cloud Foundry’s growth into enterprise accounts could be less than smooth.

Looking at Google search trends for open source-based platform as a service offerings, VMware Cloud Foundry, Red Hat OpenShift, Amazon Elastic Beanstalk, and CloudBees’ self-titled platform, it’s clear that the market is still wide open, with each offering in the 15 to 25 percent range.

Add Google App Engine into the comparison, and interest in Google App Engine dwarfs the interest in Cloud Foundry and others by an order of magnitude.

I purposefully did not include the established platform vendors, IBM, Microsoft and Oracle in the comparison above. As much attention as Google App Engine has received, and offerings like Cloud Foundry are getting today, they’ve yet to crack the enterprise market in a meaningful way.

In conclusion, Cloud Foundry appears to be making good progress, but the road to enterprise acceptance, adoption and revenue still lies well ahead of it.


A new cloud infrastructure provider expects to “disrupt and democratize cloud computing” using open source cloud software and commodity hardware.

Nebula hopes to simplify private cloud creation
Nebula, founded by former NASA CTO Chris Kemp, was launched at OSCON this week. Nebula borrows its name and initial technology from a project that Kemp led at NASA, which NASA later open sourced as part of the OpenStack project.

Nebula plans to sell hardware appliances to create private clouds using your existing or new compute and storage hardware. OpenStack is used to allocate compute and storage resources to a given user or application in an elastic fashion.

Each Nebula hardware appliance is able to control up to 20 compute and storage nodes within your private cloud. If your private cloud has hundreds of nodes, as would be expected, you’ll need multiple Nebula appliances.

A recent survey of 500 enterprises found that the average enterprise maintains 662 physical servers. Creating a private cloud out of those 662 physical servers would require 34 Nebula appliances, and that’s before including any storage nodes in the calculation.
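The appliance count follows from simple ceiling division, since a partly filled group of nodes still needs its own appliance:

```python
import math

NODES_PER_APPLIANCE = 20  # the stated per-appliance limit

def appliances_needed(node_count, per_appliance=NODES_PER_APPLIANCE):
    """Nebula appliances required to control node_count nodes, rounding up."""
    return math.ceil(node_count / per_appliance)

# The survey's average of 662 physical servers:
print(appliances_needed(662))  # 34: 33 appliances would cover only 660 nodes
```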

According to VentureBeat, Kemp is quoted saying:

You buy 10 or 100 of our boxes and plug a whole rack of servers into our boxes. It is data center infrastructure, offered as a service. This is the kind of shift that has to happen if the data center revolution is going to proceed.

Depending on the pricing of these Nebula appliances, buying tens or hundreds of them could add up to significant costs. That said, Nebula claims to be able to help build a private cloud in minutes, not months, providing time-to-value benefits that Nebula will seek to monetize.

Nebula’s attempt to differentiate through openness
Nebula’s approach is interesting and follows the growing adoption of appliances optimized for specific purposes and the specific trend of using appliances as building blocks for a private cloud platform.

Vendors such as IBM, Oracle and VMware/Cisco/EMC (VCE) already offer, to varying degrees, appliances to help build out your private cloud. Even Microsoft has spoken about an Azure appliance, although it’s been delayed several times.

Nebula hopes to differentiate from better known IT vendors by leveraging the openness of its cloud infrastructure software layer.

Nebula claims that the appliance is built on the same APIs and runtime as OpenStack, but adds numerous security, management, and platform enhancements. It remains to be seen whether these additional enhancements will also be open sourced. If these enhancements are not open sourced, the system’s openness would come into question.

Does an open source foundation matter in the cloud?
Nebula’s product page claims the following key value propositions: Open Software, Open Hardware, DevOps Compatible, Self-Service, Security, Massive Scalability, Elastic Infrastructure and High Availability.

The only value proposition on this list that IBM, Microsoft, Oracle, or VMware/Cisco/EMC couldn’t claim equally as well is that they provide “open software”. I say equally as well, as “open software” is a broad term.

Nebula’s key differentiator is that its solution is based on an open stack. But does that matter to buyers? Would it matter to you?

Microsoft’s Gianugo Rabellino, Senior Director for Open Source at Microsoft, explained Microsoft’s stance that as long as the APIs and protocols for the cloud are open, customers care less about the openness of the underlying platform.

This view is shared by the newly launched Open Cloud Initiative. Open Cloud Initiative director Sam Johnston writes:

…so long as the interfaces and formats are open, and there are “multiple full, faithful and interoperable implementations” (at least one of which is Open Source) then it’s “Open Cloud”.

Enterprise IT vendors have a long history of cooperating on standards and competing on the basis of their implementation. This too will occur at the cloud level. And when it does, differentiation will shift towards things like ease of use, interoperability with existing assets, high performance and total cost of ownership.

Therein lies the challenge for Nebula. Its key differentiator, openness, isn’t a sustainable advantage.

