
Overview

OpenStack is a free and open source software suite providing Infrastructure as a Service (IaaS) both for private clouds like the one we're implementing and for large public clouds like Rackspace. We are currently using the "Nova" compute service, the "Cinder" volume service, the "Glance" image management service, and the "Neutron" networking service. We are running the "Havana" release (aka 2013.2, released October 2013) on Ubuntu 12.04 LTS "Precise" with KVM as the virtualization layer.

We are running primarily on hardware donated by Quanta Computer in the summer of 2011, with a total of 768 physical cores using Intel Xeon L5640 2.27GHz chips (presenting as approx. 6,000 virtual cores) and over 3 terabytes of RAM. This is delivered in 64 dual-socket hex-core systems (12 physical cores presenting as 24 vCPUs), 56 of which have 48G of RAM and the remaining 8 of which have 96G each.

In Spring 2014 we added 10 Dell R420 servers. Each has dual-socket 8-core E5-2450L chips and 192G of RAM. This expansion added about 1,200 virtual cores at the current oversubscription ratio and nearly 2T of RAM (our current bottleneck).

-- JonProulx - 19 Mar 2014


Caveats

By default all VMs are ephemeral and their state is deleted on shutdown. Please read this documentation to find out how to snapshot your instances and how to create and attach persistent storage if you need to preserve local state.

Join the Openstack-Users email list! This is where announcements of potentially disruptive changes will go, and it is also a place where you can ask questions and suggest improvements to the system.

The cloud systems do not and will not have battery or generator backup; if we lose power, that's too bad. Individual physical nodes are single points of failure for the virtual instances running on them. At this point it is left as an exercise for the user to implement HA clustering across multiple VMs, or to monitor their services and restart them if they fail.

Getting a CSAIL OpenStack Account

If your group doesn't have an OpenStack project yet, email help@csail.mit.edu with the name you'd like, the name of a CSAIL PI (probably your supervisor), and ideally some description of how you plan to use OpenStack so we can make a reasonable guess at an initial quota. All work in OpenStack is done in a "project" (or "tenant" depending on which documents you're reading, but they are the same thing): quotas are assigned by group, operating system images are shared within groups, etc.

If your group already has a project (or after we create one for you), new members can create accounts at https://nimbus.csail.mit.edu/register/register.cgi (requires CSAIL Web Certificate). After you have an existing account, this same link can be used to reset your password. Since the API calls are not encrypted and this password flies around in plain text, we auto-generate a random string as your password and email it to you; this prevents people from reusing, in this insecure environment, passwords that protect secure and important things. Note that the current version of OpenStack does have support for encrypting all client connections; we expect to implement secure connections Fall 2013.

When a new user account is created through the web interface, all existing members get an email notifying them. This after-the-fact notification is the only insurance against random people joining your project (within the lab, not the whole internet), so do keep an eye on it.

Users can belong to multiple groups even though our web wrapper creates a one-to-one mapping. If you need to be in multiple groups, email help@csail.mit.edu; for existing groups, have a current member or sponsoring PI make the request for you so we know it's authorized.

-- JonProulx - 19 Mar 2014

Quick Start

  • Walk through launching an Ephemeral VM -- in progress 19 Mar 2014
  • Walk through launching a Persistent VM -- in progress 19 Mar 2014

Basic tools

A public login server named ubuntu-login.csail.mit.edu running CSAIL/Ubuntu 12.04 is available with all the latest OpenStack-related command line tools installed. This system is itself running on OpenStack and should be considered Beta in terms of stability; if you plan on regularly using the CLI it's best to install the tools on a group server or your workstation.

http://nimbus.csail.mit.edu provides a basic WebUI to our OpenStack cluster and, under "Access and Security" -> "API Access", links to download credentials for use with the command line tools.

OpenStack provides a native API client called nova, which is available for Ubuntu 12.04 and newer systems in the python-novaclient package.

To install the command line tools on Mac OS X, first install pip (sudo easy_install pip), then the nova CLI (sudo pip install python-novaclient).

The pip version provides access to the latest features, though sometimes these are not yet implemented in our deployment. Since OpenStack APIs are versioned, newer clients are backwards compatible with our install for all features that are available.
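
Once a client is installed, a typical first session looks something like the sketch below (the credentials file name is an assumption; use whatever the "API Access" page actually gives you):

    # install the client on Ubuntu 12.04 or newer
    sudo apt-get install python-novaclient

    # load the credentials downloaded from the WebUI; sourcing the file
    # prompts for the OpenStack password that was emailed to you
    source openrc.sh

    # sanity checks: list images, instance types, and your running instances
    nova image-list
    nova flavor-list
    nova list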

Full CLI docs are at http://docs.openstack.org/cli/quick-start/content/index.html. We use the "nova", "glance", "quantum" (in the process of being renamed to "neutron"), and "keystone" projects, but "nova" is the one you want to read about first.

It is also possible to use AWS EC2-compatible tools like the euca2ools package to interact with OpenStack. This covers most common actions, but only the intersection of OpenStack and EC2 features. It is convenient if you are using both our OpenStack and EC2, but if you are only using OpenStack the nova CLI is the most feature-complete tool.

Hybridfox is a Firefox browser add-on that provides a GUI interface to the cloud system. The HybridfoxAtCSAIL page has details on how to configure Hybridfox to talk to our cloud.

Your favorite programming language probably has a library or module for talking to OpenStack. Python users should look at Boto, which uses the Amazon EC2 compatibility API to talk to OpenStack. Ruby hackers can look at the Fog gem, which supports a variety of cloud APIs. If you do hack around with these or other programmatic interfaces, it would be great to create a wiki page describing your setup and experiences and link to it from here...

Stock Images

TIG provides a number of stock images. A full current listing can be found in the WebUI http://nimbus.csail.mit.edu/ under "Images & Snapshots". Currently all stock images are Linux based, as that is what people seem to want; it is possible to run other operating systems such as FreeBSD or Windows (it is against the terms of licensing to run MacOS in a non-Apple virtual host).

While the versions may change from time to time we generally provide the following classes of image:

  • CSAIL Ubuntu <version> - A 64-bit Ubuntu cloud image with CSAIL accounts, AFS, and configuration management
  • CSAIL Ubuntu <version> + autofs - As above but with access to CSAIL NFS storage via the automounter
  • ubuntu-<version>-amd64 - 64-bit Ubuntu cloud image as distributed by Canonical from http://cloud-images.ubuntu.com/
  • ubuntu-<version>-i386 - 32-bit version of the above

Custom Images

The easiest way to create a custom image is to boot one of the provided generic images, make the changes you need, then take a snapshot with either the web or CLI tools (cli instructions). You can then use that snapshot as the basis for launching new instances. Note that creating a snapshot suspends the running instance for a minute or two to ensure a consistent copy.
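
From the CLI the round trip looks roughly like this (the instance, image, and flavor names here are made-up examples):

    # snapshot a running instance (it will be paused briefly)
    nova image-create my-instance my-instance-snapshot

    # wait for the new image to become ACTIVE
    nova image-list

    # boot a fresh instance from the snapshot
    nova boot --image my-instance-snapshot --flavor m1.2core my-new-instance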

For documentation on image creation from scratch see http://docs.openstack.org/trunk/openstack-image/content/

Instance Types

OpenStack requires the use of predefined "Instance Types", also referred to as "Flavors". These define the virtual hardware, including the number of CPUs, memory size, root disk size, and optionally additional ephemeral disk space. We've defined instance types using the following scheme:

  • s1.<N>core - N cores, N x 512m RAM, 10G root disk
  • m1.<N>core - N cores, N x 1024m RAM, 16G root disk
  • lg.<N>core - N cores, N x 2048m RAM, 32G root disk
  • xl.<N>core - N cores, N x 4096m RAM, 64G root disk

These standard types run on cluster nodes without redundant components and without UPS power, and are best suited to tasks where the uptime of an individual component is not critical, such as compute nodes or worker nodes behind a load balancer (see the LBaaS section).

We also provide a selection of instance types that run on a (much) smaller pool of less performant but more redundant hardware (including UPS- and generator-backed power, redundant power supplies, and mirrored hard drives). This is the same configuration used by TIG for hosting our virtualized services on OpenStack, such as http://people.csail.mit.edu. These instance types all have 32G root disks and 2G swap drives, with the number of cores (c) and gigabytes of RAM (g) encoded in the name:

  • ups.1c1g
  • ups.1c2g
  • ups.2c2g
  • ups.2c4g
  • ups.4c4g
  • ups.4c8g

It is possible to create custom instance types for specific projects, so your project may have extra types available; ask someone in your group why that is and what they are for. If you need a custom size created, email help@csail.mit.edu.
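
To see exactly which types your project can use and what hardware each defines, something like the following should work (the flavor name below is one of ours, but check the list output for what you actually have):

    # list every instance type visible to your project
    nova flavor-list

    # show vCPUs, RAM, and disk for a single type
    nova flavor-show ups.2c4g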

Network

Currently there is only one publicly available network defined in our OpenStack world, called "inet". This network puts your instance directly on a publicly accessible IPv4 network. Unlike our previous configuration, which used private IP space and NAT, the IP your operating system sees is the same as its public IP.
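
You can confirm which networks your project sees with the neutron CLI (on our current install the command may still be named quantum, as noted above):

    # list the networks available to your project; expect to see "inet"
    neutron net-list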

Security Groups

By default all network access to instances is blocked.

Each project can edit its own default security group or create additional security groups to open up network access. Forgetting to assign the correct security group is one of the most common mistakes when starting instances, and results in them being unreachable. It is possible to add or remove security groups on running instances using the "Actions" drop-down menu on the "Instances" page of the web interface or the nova add-secgroup command.
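
As a sketch, opening up SSH in the default group and attaching a separate web group to a running instance might look like this (the group and instance names are made up):

    # allow inbound SSH from anywhere in the project's default group
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

    # or keep web rules in their own group
    nova secgroup-create web "allow http/https"
    nova secgroup-add-rule web tcp 80 80 0.0.0.0/0
    nova secgroup-add-rule web tcp 443 443 0.0.0.0/0

    # attach the group to an already-running instance
    nova add-secgroup my-instance web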

Using Fixed IP Addresses

Instances are dynamically assigned publicly routable IP addresses and this is sufficient in many cases.

If you need a fixed IP address one may be reserved using WebDNS and selecting "OpenStack public network (128.52.128.0/18)" as the subnet.

Once your fixed IP is registered in WebDNS, you should assign it to your instance in the WebUI. When launching an instance, fill in your fixed IP in the eth0 Fixed IP field under the Details tab. Currently the web interface for this only works if you are booting a single instance with a single connected network. That is by far the most common case, but if you need to specify a fixed IP and require multiple networks you will need to use the CLI.
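
For that multiple-network case the boot command looks roughly like this (the image, flavor, address, and UUIDs are placeholders; real network IDs come from neutron net-list):

    # boot with a specific fixed IP on "inet" plus a second private network
    nova boot my-server \
      --image my-image \
      --flavor m1.2core \
      --nic net-id=<inet-net-uuid>,v4-fixed-ip=<your-reserved-ip> \
      --nic net-id=<private-net-uuid>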

Load Balancer as a Service (LBaaS)

LBaaS is a relatively new feature; as such it is not well known or well tested in our environment yet. Feel free to try it out, and if you do please share your experiences and expand this documentation.

To use LBaaS you will need a Fixed IP for the load balancer pool's virtual IP; this will be the public address of your application (well, unless you put a reverse proxy in front of the load balancer, but let's not get that crazy just yet...). The instances that are balanced behind it can have dynamic IPs (though, dynamic or fixed, you need to manually add them to the pool) and may be dynamically added to and removed from the pool.

The basic steps to creating a load balancer pool are:

  1. Get IP and DNS as mentioned above
  2. Create the pool
  3. Add servers to the pool
  4. Create a healthmonitor & associate it with the pool to make sure the servers are up
  5. Create a virtual IP (vip) and associate it with the pool using the address obtained in step 1

http://docs.openstack.org/admin-guide-cloud/content/lbaas_workflow.html provides a very bare-bones walkthrough of the CLI workflow. The Dashboard also provides an interface for setting up load balancer pools, which should be better documented here... but knowing the steps above you can probably find them.
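
A sketch of that CLI workflow, numbered to match the steps above (all names, addresses, and UUIDs are placeholders; see the workflow doc for authoritative syntax):

    # 2. create the pool on the subnet your servers live on
    neutron lb-pool-create --name mypool --lb-method ROUND_ROBIN \
      --protocol HTTP --subnet-id <subnet-uuid>

    # 3. add each backend server to the pool
    neutron lb-member-create --address <server-ip> --protocol-port 80 mypool

    # 4. create a health monitor and associate it with the pool
    neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3
    neutron lb-healthmonitor-associate <monitor-uuid> mypool

    # 5. create the VIP on the pool using the fixed IP obtained in step 1
    neutron lb-vip-create --name myvip --protocol HTTP --protocol-port 80 \
      --subnet-id <subnet-uuid> --address <your-reserved-ip> mypool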

Defining your own networks

Quantum/Neutron makes it easy for us to allow groups to create their own private networks, so we did. There is no particular use case in mind, so if you have one let us know and we can help configure the best solution. Here is the current state of things if you want to play with this part.

Right now you can definitely define your own networks that are private to your project. These are implemented as GRE tunnels, which you needn't think about except that it means they are isolated and you don't have to worry about coordinating IP addressing with other projects. If you have defined your own networks it is possible to add any or all of them to a given instance.

You should be able to use OpenStack's built-in DHCP service to serve dynamic addresses from a range of your own choosing to hosts on that network. This is the intended case, though not well tested.
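
A minimal sketch of that, with example names and addressing (substitute your own):

    # create a project-private network and a subnet with a DHCP pool
    neutron net-create my-private-net
    neutron subnet-create my-private-net 10.10.0.0/24 --name my-private-subnet \
      --allocation-pool start=10.10.0.10,end=10.10.0.250

    # attach it (by UUID from neutron net-list) when booting an instance
    nova boot my-server --image my-image --flavor m1.2core \
      --nic net-id=<my-private-net-uuid>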

You might be able to use OpenStack's "router" implementation to build routers between multiple private networks. I don't know why you would want to and this is not at all tested but the documentation suggests it is possible.

You cannot use OpenStack's "router" implementation to provide NAT and connection to the public IP network; if you want this functionality you'd need to build an instance with multiple interfaces and configure that system manually to do routing and NAT. If you want to do this we might be able to configure things so it works, but there are a number of performance bottlenecks that make it seem like a bad idea.

Understanding Storage

What stays and what goes away

The root disk and ephemeral disk of an instance are ephemeral: when you shut down, they go away, and anything you want to save needs to be written either to a persistent volume or to network storage. They are, however, local disk on the physical system hosting your instance, and thus are moderately fast and don't contend for network bandwidth.

Persistent volumes, as currently implemented, are iSCSI volumes on a Dell EqualLogic SAN (though they appear as local storage when attached to VMs). While redundant, this storage is not backed up. We only have 10T of storage in this pool for everyone to share, so it is best for relatively small volumes such as bootable operating system images for persistent (rather than ephemeral) virtual machines. These volumes can also only be connected to one instance at a time; they are not shared storage. So if you need to share files among instances, or have larger data needs, TIG-hosted NFS Storage, which is backed up and supports concurrent access from multiple clients, is the better choice.
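
Creating and attaching a persistent volume from the CLI looks something like this (the sizes, names, and device paths are examples):

    # create a 20G persistent volume
    cinder create --display-name my-data 20

    # attach it to a running instance as a new block device
    nova volume-attach my-instance <volume-uuid> /dev/vdb

    # inside the instance, the first time only: make a filesystem and mount it
    sudo mkfs.ext4 /dev/vdb
    sudo mount /dev/vdb /mnt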

A note on snapshots

OpenStack allows you to snapshot your volumes, but no automated snapshots are taken. If you are about to make a potentially disruptive change, it's probably a good idea to take a snapshot. If you want periodic snapshots, that's also possible, but you will need to script it yourself. Snapshots are copy-on-write, so they don't consume any space until changes are made to the base volume. A 50% additional space allotment is automatically made for snapshots, so a 20G volume will actually reserve 30G of space on the storage server. Snapshots are not backups: when your snapshot space usage exceeds this reserve, the system automatically frees space by deleting the oldest snapshot(s). If you need backed-up storage, please look at TIG hosted network Storage options.
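
If you do script your own periodic snapshots, the relevant commands are roughly these (the names and IDs are placeholders; an attached volume may need --force True):

    # snapshot a volume, named by date for easy pruning
    cinder snapshot-create --display-name nightly-$(date +%F) <volume-uuid>

    # list and delete old snapshots yourself for predictable retention
    cinder snapshot-list
    cinder snapshot-delete <snapshot-uuid>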

Presentation Materials

  • CSAIL OpenStack Beta Announcement (Video) Nov 14, 2012.
  • CSAIL-Openstack-Beta.svg: Slides from the above presentation. These will render in your browser; use the left and right arrows to navigate with animated transitions, or the up and down arrows if the animations make you seasick... Not sure how helpful they are without the commentary, but people have asked for them.

Upstream Documentation

WTFM

Please help Write The Fine Manual... this is a wiki, so go for it: add use cases and command line examples, fix my spelling and grammar, have fun!

-- JonProulx - 28 Aug 2013