New adopters are encouraged to use the most current supported version of OpenStack (release details). Several federation members run Red Hat's OpenStack distribution, but this is not a requirement: any OpenStack cloud can join the federation regardless of distribution.
Controller Cluster
For sites desiring high availability, a three-node controller cluster is recommended. All OpenStack components except Nova compute run on all three nodes of the controller cluster. Cinder (volume) runs in active/passive (A/P) mode, whereas all other components run in active/active (A/A) mode.
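One common way to expose the clustered controllers behind a single address is a floating virtual IP. The fragment below is a hypothetical keepalived sketch, not taken from any federation member's deployment; the interface name and addresses are placeholders.

```
# Hypothetical keepalived fragment: a virtual IP floats across the
# three controllers, so clients always reach whichever node holds it.
vrrp_instance controller_vip {
    state BACKUP
    interface eth0              # internal/management interface (placeholder)
    virtual_router_id 51
    priority 100                # set a different priority on each controller
    virtual_ipaddress {
        10.0.1.100/24           # placeholder VIP for the API endpoints
    }
}
```

Active/active services are then load-balanced behind this VIP, while an active/passive service such as Cinder volume runs on only one node at a time under cluster management.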
Each controller node needs two Ethernet interfaces: one for the internal/management network and one for the public/provider network. It also needs enough disk space for Horizon to stage uploaded images temporarily before they are uploaded to Glance.
Compute Elements
Compute nodes can have a broad range of specifications, but each node must have sufficient hardware to host the instance flavors the site will offer at the CPU/RAM oversubscription ratios it supports.
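The interaction between flavors and oversubscription can be sketched with a short calculation. The ratios below are illustrative only; in a real deployment they correspond to Nova's `cpu_allocation_ratio` and `ram_allocation_ratio` settings, which each site chooses for itself.

```python
# Sketch: effective capacity of a compute node under oversubscription.
# The node size, flavor, and ratios here are hypothetical examples.

def effective_capacity(physical_cores, ram_gb, cpu_ratio, ram_ratio):
    """Return schedulable vCPUs and RAM after applying oversubscription."""
    return physical_cores * cpu_ratio, ram_gb * ram_ratio

def max_instances(flavor_vcpus, flavor_ram_gb, vcpus, ram_gb):
    """How many instances of a flavor fit, limited by the scarcer resource."""
    return min(vcpus // flavor_vcpus, ram_gb // flavor_ram_gb)

# A 32-core, 256 GB node with 4x CPU and 1.5x RAM oversubscription:
vcpus, ram = effective_capacity(32, 256, cpu_ratio=4.0, ram_ratio=1.5)
print(vcpus, ram)                        # 128.0 384.0
print(max_instances(4, 8, vcpus, ram))   # CPU-bound: 32 instances fit
```

A node is sized correctly when the scarcer oversubscribed resource still accommodates the expected instance mix.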
Each compute node requires two Ethernet interfaces: one for the internal/management network and one for the public/provider network.
A Ceph cluster provides storage for Glance images, Cinder volumes, and VM boot volumes.
A minimum of 3 monitors, each on its own physical server, is needed per cluster. If the cluster has more than 400 OSDs, use 5 monitors. The monitors must establish a quorum to update cluster maps. Each monitor node should have:
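The reason for odd monitor counts can be sketched in a few lines: quorum is a strict majority of the monitors, so adding a fourth monitor buys no extra failure tolerance over three.

```python
# Sketch: Ceph monitor quorum is a strict majority of the monitor count,
# so an odd number of monitors maximizes tolerated failures.

def quorum_size(monitors: int) -> int:
    return monitors // 2 + 1

def tolerated_failures(monitors: int) -> int:
    return monitors - quorum_size(monitors)

for mons in (3, 4, 5):
    print(mons, quorum_size(mons), tolerated_failures(mons))
# 3 monitors tolerate 1 failure; 4 still tolerate only 1; 5 tolerate 2.
```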
A minimum of 3 OSD nodes is required for high availability and fault tolerance. Objects are distributed across OSDs according to CRUSH rules; adding OSD nodes increases both the aggregate bandwidth to data and the fault tolerance of the cluster.
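The key property of CRUSH-style placement is that it is deterministic and computed, not looked up: any client can derive an object's replica locations from the cluster map alone. The sketch below uses rendezvous hashing as a greatly simplified stand-in; it is not the real CRUSH algorithm, which is weighted and hierarchy-aware.

```python
# Greatly simplified illustration of deterministic replica placement:
# each object's replicas land on distinct nodes chosen by hashing, so
# any client computes the same locations without a central lookup table.
# This is NOT the actual CRUSH algorithm, only an analogy.
import hashlib

def place(object_name: str, nodes: list, replicas: int = 3) -> list:
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{object_name}:{n}".encode()).hexdigest(),
    )
    return ranked[:replicas]

nodes = ["osd-node-1", "osd-node-2", "osd-node-3", "osd-node-4"]
print(place("volume-0001", nodes))  # same input always yields the same 3 nodes
```

Because placement is pure computation, adding OSD nodes changes where objects land but never requires a directory service, which is why capacity and bandwidth scale together.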
For good block storage (volumes, images, VM boot volumes) performance, each OSD node should have at least:
The Ceph cluster at Cornell University Center for Advanced Computing has 12 of the following OSD nodes:
For maximum bandwidth, the server should have as many CPU cores/threads as possible. For high availability, install multiple RADOS Gateway servers behind a load balancer. The Swift and S3 APIs are RESTful.
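Because the gateway traffic is plain HTTP(S), spreading it across several RADOS Gateway instances is straightforward. The fragment below is a hypothetical HAProxy sketch; the addresses, port, certificate path, and health-check URL are placeholders to adapt to your deployment.

```
# Hypothetical HAProxy fragment: spread Swift/S3 requests across
# multiple RADOS Gateway instances. All values are placeholders.
frontend rgw_frontend
    bind *:443 ssl crt /etc/haproxy/rgw.pem
    default_backend rgw_backend

backend rgw_backend
    balance roundrobin
    option httpchk GET /
    server rgw1 10.0.1.21:7480 check
    server rgw2 10.0.1.22:7480 check
```

Since each RGW instance is stateless with respect to client requests, any instance can serve any request and failed instances are simply taken out of rotation by the health checks.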