With open source at its core, you'd be hard pressed to argue that the freely available code isn't one of the reasons for OpenStack's success. Arguably, it is also why other open cloud platforms haven't seen as much uptake: OpenStack is now one of the largest collaborative programming projects on the planet.

Hundreds of core developers contribute code daily, and thousands more make less regular contributions. The most recent OpenStack user survey showed the start of a major shift away from proof-of-concept and test environments into production. OpenStack now underpins huge corporate infrastructure networks, and the list of companies joining the foundation that governs the project continues to grow.

Early attempts to add value came from vendors who tried to create OpenStack appliances, and we still see a continuation of this in the reference architectures vendors release today. But this has always struck me as running completely contrary to the idea of the open cloud.

One of the key advantages is being able to choose exactly the networking, storage, compute and other components you need, and to construct a cloud infrastructure that works with your data in your own way. In the High Performance Computing (HPC) sphere, different users have very different requirements, and an open source cloud offers the ability to provide services that match these varying needs.

When it comes to which flavour of OpenStack to use, and whether that should be a commercial or a freely available distribution, the choice is usually financially driven. However, both routes have advantages as well as drawbacks, and these aren't always immediately considered.

OCF partners with Red Hat, who back the RDO project, a freely available, community-supported distribution of OpenStack that runs on Red Hat Enterprise Linux (RHEL), as well as providing Red Hat OpenStack Platform (OSP), a commercially supported distribution built on RHEL.

Coming back to HPC and research computing, we often find that cutting-edge hardware requires cutting-edge software. Because the latest version of RDO tends to be released only a matter of days after the upstream release itself, it may make more sense to test and deploy with this distribution than with the commercially supported one, but this relies on the user being willing to self-support.

Conversely, although Red Hat arrived relatively late to the OpenStack party, they are now a major contributor, and as partners we are able to draw on that resource to get issues resolved in short order. They are also working to close the stabilisation gap between upstream and OSP releases.

In RDO, the build process for compute or controller node images varies considerably, with options to build from development, tested or stable branches. This is one of the most challenging areas, as bugs fixed in one branch don't necessarily appear in the others. Some branches purely receive updates from upstream with no backporting, that is, without taking fixes from newer releases and porting them to older versions of the system. So the time spent backporting bugs and applying fixes, work generally carried out by Red Hat engineers for OSP, often has to happen during deployment instead, and inevitably more time is consumed.

Another fundamental aspect is the support coverage provided by Red Hat: core components such as Nova, Keystone and Neutron will be covered, but other, newer or more esoteric projects may not be. Currently, for example, support for Murano and Magnum doesn't exist in OSP.
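As a concrete illustration, a quick way to see which projects a particular cloud actually exposes is to query Keystone's service catalogue. The sketch below uses the openstacksdk Python library; the cloud name 'my-rdo-cloud' is a hypothetical clouds.yaml entry (listing services normally requires admin credentials), and the idea is simply to print each registered service so you can check whether, say, Magnum (container-infra) or Murano (application-catalog) is present.

    import openstack

    # Connect using a named cloud from clouds.yaml; 'my-rdo-cloud' is a
    # hypothetical entry, substitute your own cloud and credentials.
    conn = openstack.connect(cloud='my-rdo-cloud')

    # List every service registered in Keystone. Projects such as Magnum
    # (type 'container-infra') or Murano (type 'application-catalog') will
    # only appear here if they have actually been deployed.
    for service in conn.identity.services():
        print(f"{service.name:20} {service.type}")

Anything missing from that list will have to be deployed and supported in-house, whichever distribution sits underneath.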

If containers are an important aspect of deployment (and we're increasingly finding they are), then customers are more likely to be driven down the self-support route. Indeed, I see containers as the defining technology for OpenStack, and its continuing success will, in part, depend on its ability to provide a service for this popular tool.

If you're willing to accept a private cloud with the odd rough edge and you have the in-house capability to provide a measure of infrastructure support, RDO will be a good choice. If you need stability and the ability to call on many eyes to rapidly debug a problem, then OSP is the product to consider.

As a systems integrator, we take all this (and more) into account when designing our high performance clouds. Either route will bring users the joy of on-demand infrastructure and scalable computing. It's our job to ensure the outcome meets the design.

Christopher Brown is an OpenStack engineer at OCF, a high performance computing provider based in the UK.