October 2011


The Open Compute Project Foundation recently announced results from Facebook’s effort to build an efficient data center at the lowest possible cost. The foundation claims to have reduced the cost of building a data center by 24 percent and improved ongoing operating efficiency by 38 percent compared with state-of-the-art data centers.

Open Compute Project design specifications
The Open Compute Project foundation released design specifications for servers and data center technology earlier this week.

The servers themselves fit into a 1.5U chassis, slightly taller than a standard 1U server chassis. Each server can use either an Intel or an AMD motherboard. The v2.0 Intel specification provides double the compute density of v1.0 by using two next-generation Sandy Bridge-based Intel processors per board. The v2.0 AMD specification also doubles the compute density, with support for two socket G34 AMD Magny-Cours or Interlagos processors per board.

Open Compute servers are racked into sets of three adjoining 42U racks, dubbed Triplets. Each rack column contains 30 Open Compute Project servers, for a total of 90 servers per Triplet, and each rack column has two top-of-rack switches.

A battery rack cabinet sits between each pair of Triplets, providing DC power in the event of a loss of AC power.
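As a quick sanity check on the layout described above, here is a minimal sketch in Python. The constants simply restate the figures from this post, not the official Open Compute Project documents, so treat them as illustrative.

```python
# Rough arithmetic for an Open Compute "Triplet", using the figures quoted above.
# Check the official Open Compute Project specifications for authoritative values.

RACKS_PER_TRIPLET = 3           # three adjoining 42U rack columns
SERVERS_PER_RACK = 30           # Open Compute servers per rack column
TOR_SWITCHES_PER_RACK = 2       # top-of-rack switches per rack column

servers_per_triplet = RACKS_PER_TRIPLET * SERVERS_PER_RACK        # 90 servers
switches_per_triplet = RACKS_PER_TRIPLET * TOR_SWITCHES_PER_RACK  # 6 switches

print(f"Servers per Triplet:  {servers_per_triplet}")
print(f"Switches per Triplet: {switches_per_triplet}")
```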

Bringing deep data center engineering skills to the masses
By releasing the cost savings figures and, more importantly, the underlying hardware specifications for the motherboards, power supplies and chassis, the foundation hopes to bring efficient, lower-cost data centers to companies that don’t have the engineering depth of a Facebook, Google, or Amazon.

Facebook deserves kudos for its work on the project. Getting together a board of directors including Andy Bechtolsheim from Arista Networks, Don Duet from Goldman Sachs, Mark Roenigk from Rackspace and Jason Waxman from Intel couldn’t have been easy. Then again, cost reduction and efficiency figures upwards of 20 percent must have attracted attention from prospective board members and the long list of hardware, software and institutional partners, which includes the likes of Dell, Intel, Huawei, Red Hat, Netflix, and North Carolina State University, to name but a few.

Nothing to sell here? Ok, but where’s the certification?
At the Open Compute Project Summit this week, Andy Bechtolsheim was quoted saying “Open Compute Foundation is not a marketing org. There’s nothing to sell here”.

While the foundation has nothing to sell, it’s critical that hardware vendors quickly release Open Compute Project-certified hardware. There isn’t yet a certification process for hardware, and that is something the foundation needs to work on immediately.

As GigaOM reports, “when the effort launched in April Dell and Hewlett-Packard both showed off servers that incorporated some of the elements of Open Compute.” The phrase “some of the elements” should worry the Open Compute Project and potential buyers alike. Without a certification process, “Open Compute Project based” hardware will proliferate with no standard way to compare vendor offerings, as vendors rush to take advantage of the project’s buzz by rebranding existing products.

Silicon Mechanics, a rack-mount server manufacturer and member of the Open Compute foundation, announced an Open Compute Triplet based on the Open Compute Project specifications. A 90-node Triplet with entry-level processors, RAM and disk, and without any operating system or software, starts at $287,755; higher-end configurations can exceed $2 million.
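For a rough sense of what that entry-level price means per node, here’s a back-of-the-envelope calculation in Python. It uses only the figures quoted above and ignores switches, battery cabinets, software, and facility costs.

```python
# Back-of-the-envelope starting cost per compute node for the entry-level
# Silicon Mechanics Triplet quoted above. Networking, batteries, software,
# and facility costs are deliberately excluded.

TRIPLET_STARTING_PRICE = 287_755   # USD, entry-level configuration
NODES_PER_TRIPLET = 90             # 90 compute nodes per Triplet

price_per_node = TRIPLET_STARTING_PRICE / NODES_PER_TRIPLET
print(f"Approximate starting price per node: ${price_per_node:,.0f}")  # ~$3,197
```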

Good progress so far, more work to do
In a post at Opencompute.org, Frank Frankovsky, Director of Technical Operations at Facebook and Chairman/President of the Open Compute Project foundation wrote “… what began a few short months ago as an audacious idea — what if hardware were open? — is now a fully formed industry initiative, with a clear vision, a strong base to build from and significant momentum. We are officially on our way.”

Yes, the Open Compute Project foundation is officially on its way.

You’re encouraged to read through the design specifications and compare them with your current or future data center plans. However, until the Open Compute Project foundation comes out with a certification process, buyers are urged to ask vendors which parts of a product align with the Open Compute Project specifications and which parts fall outside them. For now at least, it’s buyer beware when it comes to products claiming to be Open Compute Project-based.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

I’ve been on the road with clients and partners of late, and one thing I can attest to, other than the fact that trains are a much more civilized form of travel than planes, is that enterprise interest in cloud greatly outpaces actual cloud investment.

The second thing I can attest to is that, at the highest levels of companies, there’s a realization that today’s approach to IT is suboptimal. Cloud computing is supposed to help, but C-level folks aren’t convinced. Why? Because IT is stuck in the weeds and still isn’t thinking about what end users care about, or how to serve end users through cloud computing.

IT values infrastructure, while end users value applications
Applications have value to end users; all the storage, networking, compute, operating systems, hypervisors and middleware that underpin those applications are, from an end user’s standpoint, irrelevant. We in IT find these piece parts incredibly relevant, sometimes even sexy. Many IT careers are spent going deep on one of these piece parts, and many services hours are spent integrating products from each of them into a platform to run the application: you know, the thing the end user cares about.

It pains us as IT professionals not to have control over each and every layer of the stack I mention above. We want not only control; we want to tinker with each layer. Vendors provide best practices for their layer of the stack and ask us to follow those guidelines. Sometimes we do, but most times we think our particular environment is so different from everyone else’s that we need those five additional configuration tweaks. We love the control.

Giving up a little control for a lot of benefit
I couldn’t fathom why any self-respecting IT professional would buy an iPhone. Sure, it was beautiful and easy to use, but could I install additional memory? Could I change the battery? Could I run any application I wanted? Simply put, would I have the same level of control over the device as I’d become accustomed to?

Developers asked similar questions: when building an iOS application, would they have the same level of control and flexibility they were accustomed to with Web and Windows applications?

The answers were no: I couldn’t do any of those things, and developers had to live within the confines of the iOS APIs.

And yet, just look at how much better life is for end users and iOS developers as a result of Apple saying “no” to the degree of control, configuration and tinkering we’re all so accustomed to within any IT organization.

Cloud vendors still stuck in IT weeds, for how much longer?
Try applying lessons from the iPhone to today’s cloud offerings. To date, the most successful cloud provider, Amazon, enables IT to remain stuck in the weeds, with virtually all of the control and complexity they’re used to. Is it any wonder that C-level folks aren’t rushing to approve a “cloud project”?

OpenStack, the open source cloud computing platform, is firmly rooted in the infrastructure as a service layer of the cloud computing spectrum. For all its aspirations, OpenStack doesn’t remove the complexity of piecing together storage, networking, compute resources and hypervisors from varying vendors.

Nebula, an OpenStack-based startup that I’ve previously covered, tries to simplify the IT infrastructure piece through an appliance offering. But there’s still a lot of work required to provision a platform for the thing your end users, and your C-level managers, actually care about: applications.

In announcing Oracle’s public cloud offerings, Larry Ellison called out Salesforce.com as the “Roach Motel” of cloud services. While true to a degree, what Larry neglected to mention is the immense value that Salesforce.com provides to developers, and ultimately to end users, by offering a platform for applications. Sure, the applications have to fit within the APIs supported by Salesforce.com. But the fact that Salesforce.com’s platform as a service is not standards-based, as Ellison pointed out in a roundabout fashion, should not be held against platform-as-a-service cloud offerings in general.

Make no mistake: enterprise vendors, many of whom are bringing out enterprise cloud offerings, are going to take a page out of the Apple playbook. In fact, some already are. IBM talks about workload-optimized systems. Oracle talks about hardware and software engineered together.

These offerings remove much of the time and many of the challenges of building IT environments from piece parts, and they fast-track the delivery of applications to end users. Some IT departments will resist these pre-integrated products, especially in the cloud arena. As I mentioned, we IT folk like control. The fact that an order of magnitude too much control leads to complexity and gets in the way of delivering applications to end users is often an afterthought. For how much longer?

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”