After more than a decade of Linux vendors pushing into the enterprise, and with Red Hat, the poster child for Linux, approaching $1 billion in annual revenue, it’s easy to presume that Linux is pervasive in the enterprise. It is, but, as the Linux Foundation’s enterprise survey finds, there are still some barriers to overcome. The survey also presents new data showing Windows, not Unix, as the primary source of migrations to Linux.

The Linux Foundation recently released the results of a survey of 428 IT professionals from organizations with 500 or more employees or at least $500 million in annual revenue. North America represented 42 percent of respondents, while Europe and Asia represented 21 percent and 15 percent, respectively.

It’s unclear how large a percentage of respondents were sourced from the Linux Foundation’s end-user council versus a broader sampling of IT professionals. That said, the survey results bode well for further Linux growth, and they serve as a caution for Microsoft.

Linux growth at Windows’ expense
The survey results show that 84 percent of respondents reported their company’s usage of Linux grew over the past 12 months. Eighty percent of respondents expect their company to increase Linux use over the next five years.

By contrast, only 27 percent of respondents stated that their company plans to increase usage of Windows over the next five years.

That’s roughly three times as many respondents expecting Linux growth as Windows growth in the enterprise over the next five years (80 percent versus 27 percent). While this is a great statistic for Linux proponents, it’s difficult to get too excited considering Windows’ substantially higher market share in enterprises.

What Linux fans can get excited about, and what should be worrisome to Microsoft, is where Linux deployments have been coming from.

Over the past two years, 39 percent of respondents said their company’s Linux deployments have been migrations from Windows. The comparable figure for migrations from Unix to Linux was 35 percent.

Microsoft has often made the claim, one which I’ve repeated, that the growth of Linux was coming at the expense of Unix much more so than Windows. However, when Windows’ expected growth rate is a third of Linux’s, and virtually equal shares of Linux growth are coming from Windows and Unix migrations, it’s difficult to ignore the impact of Linux on the Windows franchise.

To make matters worse for Microsoft, 69 percent of respondents said Linux would be used for mission-critical workloads over the next 12 months.

Management perceptions can be hard to change
When asked about the drivers for adopting Linux, there was a virtual tie between lower total cost of ownership, features/technical superiority and security, with each receiving over 60 percent of respondent selections.

With those results as a backdrop, I found it interesting that 40 percent of respondents claimed that management perception of Linux was impeding further growth of Linux at the company.

It would be great to know what these perception issues are, especially since functionality and security are often held up as areas of concern for open source products in general. And yet respondents ranked these as the number two and number three reasons for adopting Linux over alternatives.

When the Harvard Business Review writes, based on Gartner data, about open source software reaching a tipping point, it’s fair to conclude that the tipping point is well behind us. In the case of Linux, while the tipping point may be a distant memory, historical perceptions about Linux, and maybe open source in general, may yet remain barriers for years to come.

What about you? Is your management still holding on to old perceptions about Linux?


It’s only a matter of time until the consumerization of IT bleeds over from your non-IT employees into your IT department. While this may sound far-fetched, iPad-like systems, such as appliances and workload-optimized systems, are already finding a foothold in IT datacenters, and the trend isn’t about to stop.

Consumerization of IT is here to stay
As InfoWorld’s Galen Gruman explains, the consumerization of IT is in full force, with employees choosing hardware and software that best meet their needs without regard for corporate IT standards. The trend started well before Salesforce.com, iPhones, and iPads made their way into the enterprise, but these three technologies are important because they highlight the choices being made by employees. These choices are often markedly different from the choices an IT professional would make for corporate purchases.

All three technologies offered fewer choices and were less open than the alternatives already in use within a given IT department. Terms like “walled garden” and “lock-in” were often associated with Salesforce.com, the iPhone, and the iPad in their early days of enterprise usage. In many respects, these concerns still apply. And yet all three technologies have somehow found their way onto the corporate standard list. This doesn’t mean they are the preferred technologies in every case, but they have a role to play within today’s modern IT department.

The value versus control spectrum
Consumerization of IT hits close to home for me. I started typing this post on my iPad and later continued on my MacBook Air. I use both for work purposes at varying times, and both are my personal devices.

It occurred to me that in choosing an iPad and a MacBook Air, I made choices I’d never have expected to make even two years ago.

For the better part of 15 years, I’d purchased hardware and software that I could tinker with and had broad control over. However, the “it just works” nature of the iPad and the performance, portability, and, yes, aesthetics of the MacBook Air became important decision factors.

When I moved to a Mac after years on a PC, most of my applications, tools, and custom scripts stopped being useful. I have fewer application choices and much lower configurability on my MacBook Air and iPad. It wasn’t a painless transition. I still need to keep a Windows 7 and VMware Fusion license around, as my tax program of choice only supports Windows.

However, the value I perceived from a simpler-to-use and better-integrated system helped me get over my historical approach to IT systems and software. I highly doubt I’m alone in this progression along the spectrum from control and configurability toward integrated ease of use and performance.

Growing use of appliances and workload optimized systems in datacenters
The very same concerns I had when considering an iPad or MacBook Air are relevant for IT professionals tasked with doing more with less. The notion of giving up control and choice is often viewed in a negative light by IT professionals. But when the value of a workload-optimized system is considered, especially if it’s based on open standards, the attractiveness of these systems begins to outweigh the reduced control and configurability.

The very same professionals reading this blog and running countless IT departments are happily toting iPhones, iPads, Samsung Galaxy Tabs, or MacBook Airs. The ease of use and performance at certain tasks that these integrated systems provide is bound to affect how these professionals make standards decisions in their IT roles.

Think about all the time and effort spent building systems from piece parts and applying fixes and upgrades to individual pieces of the system. How much more valuable work could you do for your company if you didn’t spend hours or months on these tasks? How much time do you spend keeping your iPad up to date? Virtually no time at all.

This idea clicked for me a few months ago, and it’ll take hold with more and more IT professionals. Some will ignore the logical conclusions, while others will question whether their current approach to building, maintaining, and upgrading systems is optimal for every situation. Note, however, that there is no reason to think the growing use of workload-optimized systems means the end of the custom-built systems market. Both types of systems have a role to play in a modern datacenter.

For instance, appliances are already a growing part of the IT landscape. IT has long been comfortable with appliances for important, but non-differentiating layers of the IT stack, such as firewalls.

Customers are increasingly looking at appliances for higher-value IT capabilities like business analytics. Oracle’s Exadata and IBM’s Netezza TwinFin are two appliances that have been growing by narrowing choice and configurability while optimizing for a particular task. In fact, Oracle made a point of highlighting the growth of Exadata as a bright spot in an otherwise disappointing quarter.

While we’re likely decades away from replacing your systems of choice with a big fat tablet device, the consumerization of IT will increase the willingness of IT professionals to adopt appliances and appliance-like systems in enterprise datacenters. Is your IT department ready for this shift?


While a renewed search deal between Google and Mozilla is welcome news to millions of Firefox users, Mozilla has three big ideas for 2012 and beyond that will see it competing much more with Google, Facebook and Apple. Here’s why you should be cheering Mozilla on.

Biting the hand that feeds it?
As InfoWorld’s Woody Leonhard writes, it was in Google’s best interests to prevent Microsoft’s Bing from becoming the default search provider in Firefox. As much as Mozilla relies on Google for 80-plus percent of its revenue, so too does Google rely on the search traffic from millions of Firefox users. While Mozilla’s blog post about the recently signed deal espouses a mutually beneficial agreement, it’s difficult to believe that the relationship between Google and Mozilla is anything but strained.

However, that relationship is going to get a lot more tenuous if Mozilla is able to make progress on three key areas laid out by Mozilla’s David Ascher.

Mozilla and Firefox became household names through the browser wars, particularly against Microsoft’s Internet Explorer, but mainly as proponents of open standards and user rights on the web. Ascher writes: “In the case of the browser wars, the outcome has been pretty good for society, if slower than we’d have liked: standards have evolved, browsers got better and faster, and websites got more interesting.”

But now, Mozilla feels it’s time to look beyond the browser as the main front in its mission to safeguard the future of the web for the people. Mozilla is also investing in an open stack for hardware OEMs, user-centric identity on the web, and tools for building and running apps. These initiatives add to the value of Firefox from a user standpoint but are being developed in parallel. Additionally, the latter two initiatives are applicable to other browsers as well.

A truly open alternative to Android
The first initiative, named Boot to Gecko, aims to use open web technologies to deliver a runtime and underlying operating system for desktop and mobile applications. If this sounds like Android or Chrome OS, it should. Boot to Gecko uses some of the same lower-level building blocks as Android, such as the Linux kernel and libusb. The team explains this choice was made to reduce the burden on device makers and OEMs who will be faced with certifying Boot to Gecko on new hardware. While some building blocks are shared, Boot to Gecko is not based on Android and will not run Android applications.

If Mozilla can successfully execute on initiative number 3 below, Boot to Gecko will be difficult for OEMs to ignore. There is a lot more work for Mozilla to do before Boot to Gecko can attract the attention of Android device manufacturers. However, OEMs and users will benefit from serious open source competition to Android.

User controlled identity
The second initiative, currently named BrowserID (though Mozilla is looking for a different name), addresses the need for users to regain control over their identity and the sharing of personal information on the web.

BrowserID aims to become the open alternative to Facebook Connect and to the Google username that spans Google’s far-reaching web properties. With BrowserID, Mozilla has built a user-centric identity system that works in all modern browsers, and it will make the protocol available for other browser vendors to use. Ascher explains:

“For Mozilla devs, this is a bit shocking, as we’re not starting by putting a feature in Firefox first (although we sure hope that Firefox will implement BrowserID before the others!). While I love Firefox, this makes me happy, because in my mind, Mozilla is about making the internet work better for everyone, not just Firefox users, and in this case being browser-neutral is the right strategic play.”

The notion of making the web better for everyone, not just Firefox users, is one I hadn’t picked up on until now. But I completely agree with Ascher. Few would dispute that even Internet Explorer users have benefited from Firefox’s efforts and Microsoft’s response.

If Mozilla is successful with BrowserID, which is certainly possible as developers increasingly grow weary of their reliance on Facebook and Google, users will regain control over their identity and information without having to sacrifice a personalized web experience.
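
For developers curious about the mechanics, here’s a minimal sketch of how a relying site might verify a BrowserID assertion on the server side. It assumes the hosted verification endpoint Mozilla has documented (https://browserid.org/verify) and its JSON response format, both of which may well change as the project matures:

```python
# A minimal sketch of server-side BrowserID verification, assuming the
# hosted verifier endpoint and response fields Mozilla has documented;
# both may change as the project evolves.
import json
import urllib.parse
import urllib.request

VERIFIER_URL = "https://browserid.org/verify"  # assumed hosted verifier

def verify_assertion(assertion, audience):
    """Ask the verifier whether `assertion` is valid for our site.

    `assertion` is the opaque token the browser hands to the site;
    `audience` is the site's own origin, e.g. "https://example.com".
    Returns the verified email address, or None on failure.
    """
    data = urllib.parse.urlencode(
        {"assertion": assertion, "audience": audience}
    ).encode("ascii")
    with urllib.request.urlopen(VERIFIER_URL, data) as response:
        result = json.loads(response.read().decode("utf-8"))
    # A successful verification reports status "okay" plus the email
    # address the user proved control over; anything else is a failure.
    if result.get("status") == "okay":
        return result["email"]
    return None
```

The point to notice is the browser neutrality Ascher describes: nothing in that flow depends on Firefox; any browser that can produce an assertion works.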

Apps, if you can’t beat them, join them
Finally, Mozilla is addressing the “app-ification” of the web, not by fighting the trend, as might seem reasonable for a browser vendor, but by guiding how these apps are built, found, paid for, and installed.

Mozilla’s Apps initiative aims to make web technologies the basis for building applications that can run across devices. Mozilla also wants to introduce a standard for application purchasing and installation that would allow users to consume applications from multiple app stores without restrictions. This initiative undoubtedly goes after the Apple App Store and the Android Market.

It would be interesting if Mozilla were to partner with Microsoft on this initiative as Microsoft builds out its app store.

Success isn’t guaranteed, but Mozilla knows about tough fights
Whether Mozilla can execute against all three of these initiatives while maintaining its efforts in the still-important browser war is an open question. As users, we’ll all be better off even if only one of the three initiatives succeeds.

While Mozilla will face a lot of resistance on this front from the likes of Google, Facebook and Apple, fighting an uphill battle isn’t new territory for Mozilla.


With Microsoft’s Windows Azure striving for greater relevance and adoption, a relatively unknown vendor, Tier 3, is providing a cloud alternative for Microsoft .NET applications. Tier 3 is using VMware’s open source code as the basis of its offering, which opens the door for direct competition between VMware and Microsoft for .NET cloud workloads in the future.

Tier 3’s .NET play
Colleague J. Peter Bruzzese recently provided an update on new pricing, open source support, and a free trial of Windows Azure. Support for Node.js and Apache Hadoop on Azure is sure to attract developer attention. Whether the attention, and the free trial, will turn into paying users is an open question. That said, Azure remains the leading cloud destination for Microsoft development shops seeking a platform as a service offering. That’ll change if Tier 3, and maybe VMware, has a say.

Tier 3 recently open sourced Iron Foundry, a platform for cloud applications built using Microsoft’s .NET Framework. Iron Foundry is a fork of VMware’s Cloud Foundry open source platform as a service. According to Tier 3,

we’ve been big supporters of Cloud Foundry–the VMware-led, open-source PaaS framework–from the beginning. That said, we’re a .NET shop and many of our customers’ most critical applications are .NET-based.

Starting with the Cloud Foundry code and extending it to support .NET seems a natural choice. Tier 3 is continuing to align elements of the core Cloud Foundry code with Windows and .NET technologies in areas such as command-line support on Windows, which Cloud Foundry currently provides through a Ruby application. Tier 3 is also working with the Cloud Foundry community to contribute elements of Iron Foundry back into Cloud Foundry and into the Tier 3-led IronFoundry.org open source project.

Tier 3 offers users two routes to Iron Foundry. Open source-savvy users can download the Iron Foundry code from GitHub under the Apache 2.0 license and run it as they wish. Alternatively, users can use a test bed environment of Iron Foundry, hosted on Tier 3’s infrastructure, for 90 days at no charge. Pricing for the hosted offering has not been released, which should raise some concerns about committing to a platform before knowing what it will cost, as I’ve discussed before.

VMware’s path to .NET support
It’ll be interesting to see how Microsoft and VMware react to Iron Foundry over time. VMware appears to have the most to gain, and the least to lose, with Iron Foundry.

Since Iron Foundry is a fork of Cloud Foundry, there’s just enough of a relationship between the two that VMware can claim .NET support with Cloud Foundry. In fact, VMware can make that claim with very little direct development effort of its own, an obvious benefit of its open source and developer outreach strategy around Cloud Foundry.

VMware could, at a later time, take the open sourced Iron Foundry code and offer native .NET support within the base Cloud Foundry open source project and related commercial offerings from VMware. Considering that Microsoft is aggressively pushing Hyper-V into VMware ESX environments, there’s sure to be a desire within VMware to push into Microsoft’s turf.

On the other hand, Iron Foundry is a third-party offering over which VMware holds little say. If it falls flat against Windows Azure, VMware loses very little and didn’t have to divert development attention away from its Java-based offerings on Cloud Foundry.

Microsoft, on the other hand, faces the threat of Iron Foundry attracting developer attention away from Windows Azure. Until now, Microsoft has been able to expand Windows Azure into areas such as Tomcat, Node.js, and Hadoop support without having to worry about its bread-and-butter offering: support for .NET-based applications in the cloud. Having to compete for .NET application workloads will take resources away from efforts to grow platform support for non-Microsoft technologies on Windows Azure.

Request details from Tier 3 and VMware
As a user, the recommendation to understand pricing before devoting time and resources holds true for Tier 3’s offering. The added dynamic of an established vendor like VMware potentially taking up the torch from Tier 3, either through acquisition or a competing offering, could prove attractive to some .NET customers seeking an alternative to Windows Azure.


With the holiday season upon us and tablets at the top of many gift lists, it’s all but certain that millions of new users will be exposed to an open source-based Android tablet. By all accounts, Amazon’s Kindle Fire is expected to leapfrog into, at the least, the number two position in the tablet market. While this would appear to be good news for Android tablets and the Android OS, it may actually be exactly what Apple and Microsoft asked for this Christmas (or any other holiday these companies choose to celebrate).

Great price and Amazon content versus clunky user experience
I’m not going to do a blow-by-blow review of the Kindle Fire. Galen has a good review of the Kindle Fire versus the Apple iPad. I’d also recommend Instapaper developer Marco Arment’s review of the Kindle Fire from a user experience standpoint.

The first common thread across reviews is that the Kindle Fire’s price, at $199, can’t be beat. Some have referred to the Kindle Fire as the people’s tablet.

Second, reviews are virtually unanimous that the Kindle Fire is great when restricted to Amazon’s content, even if some magazines aren’t optimal for a 7-inch screen. The Kindle Fire becomes less attractive as users venture outside of Amazon’s content garden. Even the new Silk browser, touted to speed up on-device browsing, appears to be a letdown.

Finally, many reviews describe a less-than-delightful experience with the Kindle Fire’s operating system and user interface. The Kindle Fire OS is said to lag user input, sometimes forcing users to redo an action only to find that the first input was in fact registered.

The 7-inch form factor, while easier to hold than a 10-inch tablet, presents the added complication of smaller targets for users to press in order to carry out their intended tasks. One of Arment’s issues with the Kindle Fire interface: “Many touch targets throughout the interface are too small, and I miss a lot. It’s often hard to distinguish a miss from interface lag.”

Like it or not, the iPad is the Kindle Fire’s comparison
There are many older users who don’t need a laptop and could benefit from a small, moderately priced tablet for email, browsing, and reading. A Kindle Fire seems like a great solution, and it’s likely that many of this cohort will receive one from a well-meaning family member or friend. In fact, my wife suggested getting a Kindle Fire for several retired members of our family.

However, the usability issues that Arment brings up, especially interface lag and small touch targets, will undoubtedly have an impact on their desire to use the device rather than store it away with that interesting-looking tie received over the holidays.

It seems that a lack of comfort with new computing devices, fat thumbs, and poor eyesight, something we all have to look forward to, aren’t great ingredients for being delighted with the Kindle Fire.

Even younger users, many of whom own or have used an iPod touch or iPhone, are at risk of being annoyed by the lag and user interface roughness of the Kindle Fire.

Some have argued that you can’t compare a $499 iPad with a $199 Kindle Fire. That’s true, on paper. In practice, users are going to compare their Kindle Fire experience with an iPad. There isn’t a tablet market; there’s an iPad market. It’s the reason most Kindle Fire reviews compare it to the leading entry in the market, the iPad, and not to other Android or 7-inch tablets.

A poor Kindle Fire experience reflects on Android
When the Kindle Fire is perceived to deliver a less enjoyable experience than an iPad, the real risk is that the Android tablet market will be viewed in the same light as the Kindle Fire. That may not be fair, considering Amazon has forked the Android OS and Android continues to get better. However, since the Kindle Fire is expected to reach vastly more users than other Android tablets, and considering Amazon’s technical reach, don’t be surprised if typical users generalize their Kindle Fire experience to Android tablets.

Earlier this week Bloomberg BusinessWeek’s Ashlee Vance wrote on his Twitter feed: “Just opened up the old Kindle Fire. Android sure has a Windows 3.0 feel, dunnit?”

That is exactly the type of comment that should make Apple happy and give Microsoft faint hope for its tablet plans. If Amazon, with its great content and proven track record with Kindle devices, can’t pull off a device users prefer to an iPad, then what’s the likelihood that any Android vendor can?


While the DevOps movement is centered on reducing the friction between developers and operations teams and on enabling developers to deliver applications faster, neither outcome is possible without standardization. But are developers and operations teams ready to agree on standard environments?

Why developers and operations teams are focused on DevOps
A recent post from Matt Asay, now at cloud vendor Nodeable, links to a Puppet Labs survey of more than 750 respondents. While DevOps is often considered a developer-led movement, nearly four times as many operations staff and administrators responded to the survey as developers. Surprisingly, as Matt points out, operations teams appear to see the same potential benefits from DevOps as developers do.

In the survey, Puppet Labs finds that 55 percent of respondents ranked the automation of configuration and management tasks as the top benefit expected from the DevOps movement. Another 13 percent ranked it among their top three expected benefits.

You can’t automate what you can’t standardize
I’ve been spending time learning more about the interaction between developers and operations teams. One thing I’ve come to understand is that these two groups tend to think differently, even if they are using the same words and nodding in agreement.

It’s no surprise that developers want to adopt tools and processes that allow them to become more efficient in delivering net new applications and continuous updates to existing applications. Today, these two tasks are hindered, to a degree, by the operations teams who are responsible for production environments. As Matt points out in his post, developers seek automation, which, as an aside, is a reason enterprise developers have been very open to using public clouds.

Operations and administration teams, as the Puppet Labs survey shows, are also drawn to the automation of the manual tasks they must perform today, some of which cause the delays developers experience when waiting on the operations team to provision new resources or promote applications into production.

While both sides of the DevOps coin value automation, neither is explicitly calling out the fact that you can’t automate without standardization. Well, maybe “can’t” is too strong a word; in IT we can do pretty much anything with enough elbow grease.

But ask yourself: could you automate the provisioning of a development environment, or the deployment of a middleware stack, without standardizing what exactly is being automated? No, not really. Operations teams understand this; it’s why they promote enterprise-wide standards. In most cases, these enterprise-wide standards, whether for operating systems, hypervisors, databases, application servers, or the like, allow for some limited degree of variability, as the sketch below illustrates. For instance, many IT shops support Linux and Windows, but if you want Solaris, sorry, you’re outside of the corporate standard.
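
To make that concrete, here’s a minimal sketch of a provisioning gate. The component names, products, and versions are purely hypothetical, not drawn from the survey or any particular shop; the point is that automation becomes tractable precisely because the set of acceptable stacks is small and agreed upon:

```python
# Hypothetical corporate standard: each component allows limited,
# agreed-upon variability. Names and versions are illustrative only.
STANDARD_CATALOG = {
    "os": {"rhel6", "windows2008r2"},        # Solaris? Outside the standard.
    "app_server": {"tomcat7", "websphere8"},
    "database": {"mysql5.5", "db2-9.7"},
}

def provision(request):
    """Provision a stack only if every component is within the standard.

    `request` maps component -> desired choice, e.g.
    {"os": "rhel6", "app_server": "tomcat7", "database": "mysql5.5"}
    """
    for component, choice in request.items():
        allowed = STANDARD_CATALOG.get(component, set())
        if choice not in allowed:
            # The "tough luck, champ" moment developers know well.
            raise ValueError(
                f"{choice!r} is outside the corporate standard for "
                f"{component!r}; allowed: {sorted(allowed)}"
            )
    # Only standardized stacks reach the automation layer; because the
    # combinations are few and known, each one can be scripted reliably.
    print(f"Provisioning standardized stack: {request}")

provision({"os": "rhel6", "app_server": "tomcat7", "database": "mysql5.5"})
# provision({"os": "solaris10"})  # would raise: outside the corporate standard
```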

This is the environment developers work in today: you can select among the enterprise-wide standards. And yet this is exactly the environment that is frustrating developers. Sometimes the corporate standard doesn’t fit the developer’s skills or interests, or the project’s needs. Hearing “tough luck, champ” is what’s driving developers toward DevOps. However, developers automating IT stacks that don’t fit the corporate standard won’t help bridge the gap between development and operations.

What’s needed is a joint agreement, across development and operations, on the acceptable environments that make up the corporate standard, and which are, in turn, automated.

This isn’t a technical issue. It’s a cultural issue. Before your teams push forward a DevOps initiative, get them beyond the buzzwords and into the nitty-gritty around corporate standards, degrees of variability, and if or how to allow developers to experiment outside of the corporate standard. If you don’t, you’ll have developers and operations expecting different outcomes while using similar words.


Reading a recent interview with Eucalyptus CEO Marten Mickos, I’m beginning to reconsider my views on Eucalyptus versus OpenStack becoming the dominant open source cloud platform.

OpenStack’s rise
The vendor attention around OpenStack of late has been nothing short of amazing. OpenStack began as a project controlled by Rackspace; since then, vendors such as Dell, Citrix, and HP have joined the OpenStack open source community. Rackspace has given control of the project to the OpenStack foundation, apparently at the behest of large vendors contributing to the project.

However, as Mickos states, OpenStack is still a work in progress and not yet production-ready.

A tale of two open source projects
Like many, I’d assumed that the community around OpenStack gave it the critical mass required for OpenStack to become the leading open source cloud platform. I’m questioning that assumption now.

To explain why, let’s look back at two leading open source projects: Linux and the Apache HTTP Server. I use “Linux project” in the broadest sense, including the Linux kernel and all the various open source packages that round out a typical Linux distribution.

History has shown us that when an open source project is dealing with a valuable layer of the software stack, that project has tended to be controlled by a single vendor who can directly monetize the project. The term “value” is used to represent differentiation that can be monetized. While multiple implementations or distributions may result from the project, a single vendor becomes the dominant provider in the space. For Linux, that’s Red Hat with its Red Hat Enterprise Linux products. In the open source database space, MySQL would fit this model.

History also shows us that when an open source project is dealing with a commodity layer of the software stack, the project tends to be controlled by a foundation. In these cases, the project is used as a piece of a higher-value product that provides differentiated value and hence can be monetized. Said differently, the open source project itself is indirectly monetized through the higher-value product. The Apache HTTP Server, used within most commercial application server products, and Eclipse, used within many commercial application development products, are two examples.

Eucalyptus and OpenStack’s future
If history is to repeat itself, then we need to consider whether an open source cloud platform is a valuable and directly monetizable part of the software stack or not. If it is, then a single vendor controlled open source project has a higher potential of success than a foundation-controlled project.

Open source foundations are great and play a valuable role with various open source projects. However, the mixture of ten or one hundred vendor motivations makes it increasingly difficult to meet the needs of the project and the monetization goals of each vendor.

Keep in mind that this only applies if the project is a directly monetizable layer of the software stack. To an outsider looking in, this appears to be the fundamental difference between Eucalyptus and the overall OpenStack community.

The OpenStack community, especially vendors such as Dell, HP and Rackspace, view OpenStack as addressing a part of the software stack that isn’t directly monetizable. These vendors would rather use OpenStack to build a higher value product that can be monetized. For instance, Dell and HP would likely sell “Cloud Platform Ready” hardware systems, rather than selling an OpenStack software product itself.

Clearly Eucalyptus disagrees, and Mickos claims to be growing customers “at an amazing rate”. Eucalyptus has grown from 15 to 70 employees over the past year and added a new headquarters in London to grow in EMEA.

In the end, IT buyers will decide whether Eucalyptus or the OpenStack community made the right bet. I tend to agree with Eucalyptus that a cloud platform is indeed a valuable, and hence directly monetizable, layer of the software stack.

What do you think?


The Open Compute Project Foundation recently announced results from Facebook’s efforts to build an efficient data center at the lowest possible cost. The foundation claims to have reduced the cost of building a data center by 24 percent and improved ongoing efficiency by 38 percent versus state-of-the-art data centers.

Open Compute Project design specifications
The Open Compute Project Foundation released design specifications for servers and data center technology earlier this week.

The servers themselves fit into a chassis that is slightly taller than a standard 1.5U server chassis. The servers can use either an Intel or an AMD motherboard. The v2.0 Intel specification provides double the compute density of v1.0, using two next-generation Sandy Bridge-based Intel processors per board. The v2.0 AMD specification also doubles compute density, with support for two AMD socket G34 Magny-Cours or Interlagos processors per board.

Open Compute servers are racked into three adjoining 42U racks, dubbed Triplets. Each rack column contains 30 Open Compute Project servers, for a total of 90 servers per Triplet. Each rack column has two top-of-rack switches.

A battery pack rack cabinet sits between a pair of Triplets, providing DC power in the event of a loss of AC power.

Bringing deep data center engineering skills to the masses
By releasing the cost savings figures and, more importantly, the underlying hardware specifications for the motherboards, power supply, and chassis, the foundation hopes to bring efficient, lower-cost data centers to companies that don’t have the engineering depth of a Facebook, Google, or Amazon.

Facebook deserves kudos for its work on the project. Getting together a board of directors including Andy Bechtolsheim from Arista Networks, Don Duet from Goldman Sachs, Mark Roenigk from Rackspace, and Jason Waxman from Intel couldn’t have been easy. Then again, cost reduction and efficiency figures upwards of 20 percent must have attracted attention from prospective board members and the long list of hardware, software, and institutional partners, including the likes of Dell, Intel, Huawei, Red Hat, Netflix, and North Carolina State University, to name but a few.

Nothing to sell here? Ok, but where’s the certification?
At the Open Compute Project Summit this week, Andy Bechtolsheim was quoted as saying, “Open Compute Foundation is not a marketing org. There’s nothing to sell here.”

While the foundation has nothing to sell, it’s critical that hardware vendors quickly release Open Compute Project certified hardware. There isn’t a certification process for hardware as yet, but this is something the foundation needs to work on immediately.

As GigaOM reports, “when the effort launched in April Dell and Hewlett-Packard both showed off servers that incorporated some of the elements of Open Compute.” The phrase “some of the elements” should be worrisome to the Open Compute Project and to potential buyers. Without a certification process, “Open Compute Project-based” hardware will proliferate without any standard basis for comparison across vendor offerings, as vendors rush to take advantage of the Open Compute Project’s buzz by rebranding existing offerings.

Silicon Mechanics, a rack mount server manufacturer and member of the Open Compute foundation, announced an Open Compute Triplet based on the Open Compute Project specifications. A 90-node Triplet with entry-level processors, RAM, and disk, and without any operating system or software, starts at $287,755 (roughly $3,200 per compute node) and can grow to $2 million and above.

Good progress so far, more work to do
In a post at Opencompute.org, Frank Frankovsky, Director of Technical Operations at Facebook and Chairman/President of the Open Compute Project foundation wrote “… what began a few short months ago as an audacious idea — what if hardware were open? — is now a fully formed industry initiative, with a clear vision, a strong base to build from and significant momentum. We are officially on our way.”

Yes, the Open Compute Project foundation is officially on its way.

You’re encouraged to read through the design specifications and compare them to your current or future data center plans. However, until the Open Compute Project Foundation comes out with a certification process, buyers are urged to ask vendors which parts of a product align with the Open Compute Project specifications and which parts fall outside them. In some ways, it’s buyer beware when it comes to products claiming to be Open Compute Project-based, for now at least.


I’ve been on the road with clients and partners of late, and one thing I can attest to, other than the fact that trains are a much more civilized form of travel than planes, is that enterprise interest in cloud greatly outpaces actual cloud investments.

The second thing I can attest to is that, at the highest levels of companies, there’s a realization that today’s approach to IT is suboptimal. Cloud computing is supposed to help, but C-level folks aren’t convinced. Why? Because IT is stuck in the weeds and still isn’t thinking about what end users care about, and how to serve end users through cloud computing.

IT values infrastructure, while end users value applications
Applications have value to end users; all the storage, networking, compute, operating systems, hypervisors, and middleware that underpin these applications are, from an end user standpoint, irrelevant. We in IT find these piece parts incredibly relevant, sometimes even sexy. Many careers in IT are spent going deep on one of these piece parts, and many services hours are spent integrating products from each piece part into a platform to run the application, you know, the thing the end user cares about.

It pains us as IT professionals not to have control over each and every layer of the stack I mention above. We want not only control; we want to tinker with each layer of the stack. Vendors provide best practices for their layer of the stack and ask us to follow these guidelines. Sometimes we do, but most times we think our particular environment is so different from others that we need five additional configuration tweaks. We love the control.

Giving up a little control for a lot of benefit
I couldn’t fathom why any self-respecting IT professional would buy an iPhone. Sure, it was beautiful and easy to use, but could I install additional memory? Could I change the battery? Could I run any application I wanted? Simply put, would I have the same level of control over the device as I’d become accustomed to?

Some developers asked whether they had the same level of control and flexibility as they were accustomed to with Web and Windows applications when building an iOS application.

I couldn’t do any of those things, and developers had to live within the confines of iOS APIs.

And yet, just look at how much better life is for end users and iOS developers as a result of Apple saying “no” to the degree of control, configuration and tinkering we’re all so accustomed to within any IT organization.

Cloud vendors still stuck in IT weeds, but for how much longer?
Try applying lessons from the iPhone to today’s cloud offerings. To date, the most successful cloud provider, Amazon, enables IT to remain stuck in the weeds, with virtually all of the control and complexity they’re used to. Is it any wonder that C-level folks aren’t rushing to approve a “cloud project”?

OpenStack, the open source cloud computing platform, is firmly rooted in the infrastructure as a service layer of the cloud computing spectrum. For all its aspirations, OpenStack doesn’t remove the complexity of piecing together storage, networking, compute resources and hypervisors from varying vendors.

Nebula, an OpenStack-based startup that I’ve previously covered, tries to simplify the IT infrastructure piece through an appliance offering. But there’s still a lot of work to provision a platform for the thing your end users, and your C-level managers, care about: applications.

In announcing Oracle’s public cloud offerings, Larry Ellison called out Salesforce.com as the “Roach Motel” of cloud services. While true to a degree, what Larry neglected to mention is the immense value that Salesforce.com provides to developers, and ultimately end users, by offering a platform for applications. Sure, the applications have to fit within the APIs supported by Salesforce.com. But the fact that Salesforce.com’s platform as a service is not standards-based, as Ellison pointed out in a roundabout fashion, should not be applied to platform as a service cloud offerings in general.

Make no mistake that enterprise vendors, many of whom are bringing out enterprise cloud offerings, are going to take a page out of the Apple playbook. In fact, some already are. IBM talks about workload optimized systems. Oracle talks about hardware and software engineered together.

These offerings take away much of the time and challenge of building IT environments from piece parts, and they fast-track the delivery of applications to end users. Some IT departments will resist these pre-integrated products, especially in the cloud arena. As I mentioned, we IT folk like control. The fact that an order of magnitude too much control leads to complexity and gets in the way of providing applications to end users is often an afterthought. For how much longer?


Windows 8 demos at Microsoft’s BUILD developer and partner conference have been very compelling, inspiring even. But nothing about the UI will change the underlying challenges with Microsoft’s open ecosystem. Users will still have to deal with frustrating experiences, even if the blue screen of death is replaced with a blue frowny face.

Windows 8 looks promising
Our own Galen Gruman’s review of Windows 8 is quite glowing. Galen goes out on a limb and suggests that HP’s decision to jettison WebOS could have been due to Windows 8:

But if Windows 8 is nearly as good as the demos look, Microsoft could very well win the mobile wars, despite years of failures in Windows tablets and mediocre smartphone efforts. If Hewlett-Packard CEO Léo Apotheker had seen a preview of Windows 8 tablets, that would explain why he suddenly killed the WebOS-based TouchPad tablet last month.

Other reviews of Windows 8 have been cautiously optimistic that Microsoft may finally have an OS to combat Apple.

The only problem: software alone is not enough. The real test for Microsoft is how Windows 8 will perform on the hundreds or thousands of devices, PC and mobile, that will be “optimized” to run Windows 8. I stress “optimized” because every hardware vendor will play that card, when in fact no piece of software can be optimized for everything. That’s where marketing and reality depart.

Configurability versus design choices
John Gruber wrote a thought-provoking post about Apple’s long-term sustainable advantage residing not solely in its design but in its supply chain. The two points are related and will impact Microsoft’s Windows 8 strategy, especially as Microsoft grows beyond the desktop to tablets and mobile devices with a single operating system.

Gruber wrote:

Design is largely about making choices. The PC hardware market has historically focused on three factors: low prices, tech specs, and configurability. Configurability is another way of saying that you, the buyer, get a bigger say in the design of your computer. (Bright points out, for example, that Lenovo gives you the option of choosing which Wi-Fi adaptor goes into your laptop.) Apple offers far fewer configurations. Thus MacBooks are, to most minds, subjectively better-designed — but objectively, they’re more designed. Apple makes more of the choices than do PC makers.

I’ve been thinking about this more and more as part of my day job, and I can fully understand why making choices is hard for vendors. Clients tell us that they want to make choices, because a lack of choice can sometimes lead to vendor lock-in. But these same clients demonstrate higher satisfaction with products that have been, in Gruber’s words, more designed, and hence present fewer choices to buyers.

Microsoft’s issue with Windows is that its OEM partners offer a degree of configurability that, on the surface, is helpful but turns out to hurt user satisfaction with both Windows and the hardware OEM.

I hadn’t made this connection until I started to use Windows 7 in a VMware Fusion virtual machine on a new MacBook Air. Yes, I know, the horror. But I need to use Windows for work and will be traveling with both my work and personal machines; a virtual machine was easier than lugging around two physical ones.

Even with the overhead of a hypervisor and a relatively mediocre Intel Core i5 CPU, my Windows 7 work environment is a delight to use. I’ve had no issues with driver mismatches or blue screens of death. Windows startup, shutdown, and resume from sleep are speedy, thanks to the SSD drive. I actually like using Windows again. More importantly, my PC is no longer getting in the way of my productivity.

For once, a hardware provider that’s actually enhancing satisfaction with Windows. Unfortunately, Apple isn’t a Microsoft hardware partner.

What’s Microsoft to do?
It’s difficult to know how Microsoft will address this issue going forward.

Microsoft could get very, very restrictive about configurations and testing before allowing hardware OEMs to use Windows 8. This would require the same level of testing for fixes and upgrades to drivers used by the hardware configuration. However, considering the billion-odd users of Microsoft Windows, with vastly different amounts to spend on PCs, a very restrictive policy would be at odds with Microsoft’s business goals.

Increased restrictions could encourage Windows OEMs to build machines around Linux or, more likely, Google’s Chrome OS. Microsoft is in the difficult spot of being the undisputed market share leader, yet at risk of share loss to Apple at the high end and to Chrome OS and Linux at the low end. Until recently, the high-end and low-end competition was theoretical at best, but no longer.

It’ll be interesting to see what Microsoft and its partners will do if Apple uses its supply chain and lower configurability to offer a much lower entry price point for its desktops and laptops. In some respects, the iPad is doing just that as it eats into existing PC share.

Whether Windows 8 will be enough to stop the share loss is an open question. The real question, however, is how well Windows 8 will be configured and optimized for the hardware you’ll be asked to buy. Keep that in mind as you purchase new machines for your teams and employees.
