STATUS

2014-11-06 17:03: Issues with the OpenStack Web user interface have been fixed.

Overview

OpenStack is a free and open software suite for providing Infrastructure as a Service (IaaS), both for private clouds like the one we're implementing and for large public clouds like Rackspace. We are currently using the "Nova" compute service, "Keystone" identity service, "Horizon" web dashboard, "Cinder" volume service, "Glance" image management, "Neutron" networking service, and "Heat" orchestration service. We are currently using the "Icehouse" release (aka 2014.1, released April 2014) on Ubuntu 12.04 LTS "Precise" and 14.04 LTS "Trusty" with KVM as the virtualization layer.

We are running primarily on hardware donated by Quanta Computer in the summer of 2011, with a total of 768 physical cores using Intel Xeon L5640 2.27GHz chips (presenting as approximately 6,000 virtual cores) and over 3 terabytes of RAM. This is delivered in 64 dual-socket hex-core systems (12 physical cores presenting as 24 vCPUs), 56 of which have 48G of RAM with the remaining 8 having 96G of RAM each.

In spring 2014 we added 10 Dell R420 servers. Each has dual-socket, 8-core E5-2450L chips and 192G of RAM. This expansion added about 1,200 virtual cores at the current oversubscription ratio and nearly 2T of RAM (our current bottleneck).

-- JonProulx - 15 October 2014


Caveats

Join the Openstack-Users email list! This is where announcements of potentially disruptive changes will go, and it's a place where you can ask questions and suggest improvements to the system.

By default all VMs are ephemeral and their state is deleted on shutdown. Please read this documentation to find out how to snapshot your instances and how to create and attach persistent storage if you need to preserve local state.

By default all network access to instances is blocked, the Security Groups section below describes how to open up access to your systems.

Except for UPS instance types, the cloud systems do not and will not have battery or generator backup; if we lose power, that's too bad. Individual physical nodes are single points of failure for the virtual instances running on them; at this point it is left as an exercise for the user to implement HA clustering across multiple VMs or to monitor their services and restart them if they fail. If your system is a single VM and is not fault tolerant, be sure to use one of the UPS instance types.

Resizing instances does not work. Unfortunately, we can't disable the UI element that supposedly offers the resizing feature. Resizing requires migration between hypervisors, which isn't currently working in our environment. We are currently deploying a new storage backend for OpenStack which will make the migration piece possible. But it's not ready yet. Stay tuned!

Getting a CSAIL OpenStack Account

If your group doesn't have an OpenStack project yet, email help@csail.mit.edu with the name you'd like, the name of a CSAIL PI (probably your supervisor), and ideally some description of how you plan to use OpenStack so we can make a reasonable guess at an initial quota. All work in OpenStack is done in a "project" (or "tenant", depending on which documents you're reading, but they are the same thing); quotas are assigned by group, operating system images are shared within groups, etc.

If your group already has a project (or after we create one for you) new members can create accounts at https://nimbus.csail.mit.edu/register/register.cgi (requires CSAIL Web Certificate). After you have an existing account this same link can be used to reset your password.

When a new user account is created through the web interface, all existing members get an email notifying them. This after-the-fact gating is the only insurance against random people (within the lab, not the whole internet) joining your project, so do keep an eye on it.

Most projects are comfortable with this rather loose gating, since it allows self sign-up. Upon request we can create projects that do not allow self sign-up; this is more secure but does mean all new users will need to be manually created by TIG.

Users can belong to multiple groups even though our web wrapper creates a one-to-one mapping. If you need to be in multiple groups, email help@csail.mit.edu; for existing groups, have a current member or sponsoring PI make the request for you so we know it's authorized.

-- JonProulx - 15 October 2014

Quick Start

  • Walk through launching an Ephemeral VM -- in progress 19 Mar 2014
  • Walk through launching a Persistent VM -- in progress 19 Mar 2014

Logging into your instance

Once you've booted an instance as described above and the appropriate Security Group Rules have been applied, it can be accessed remotely via SSH. With the Ubuntu and CSAIL Ubuntu images we provide, you can log into the instance as the 'ubuntu' user, provided that you've associated an SSH key with the instance (see OpenStackSSHKey).

Logging in as the 'ubuntu' user is the only way to log into instances newly booted from the standard Ubuntu cloud image. You can access CSAIL Ubuntu images either via your SSH key and the 'ubuntu' user, or with your CSAIL Kerberos account. More information on the base images provided by TIG is available below.
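
For example, assuming you have associated a key pair with the instance and its address is my-vm.csail.mit.edu (a hypothetical name; use your instance's actual IP or DNS name):

  # log in as the 'ubuntu' user with the private key matching the key pair
  # you selected at boot time
  ssh -i ~/.ssh/id_rsa ubuntu@my-vm.csail.mit.edu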

Lastly, if you happen to have launched your instance using the Heat Orchestration service, you may need to log into your instance as the 'ec2_user' rather than the 'ubuntu' user.

Got Root?

For privileged access to Ubuntu cloud images, and the CSAIL images based on them, you must provide an ssh public key when you boot your instances and connect via ssh as the user "ubuntu". This user is configured with passwordless sudo access, so you can run commands with root privilege by prefixing them with 'sudo', for example:

ubuntu@my-vm:~$ sudo apt-get install mit-scheme

Details of how to setup and use public keys with OpenStack are on the OpenStackSSHKey wiki page.

Basic tools

The easiest way to interact with OpenStack is through the web dashboard at https://nimbus.csail.mit.edu. This provides the most common features, though it does not expose all functionality. To get at advanced functions it is sometimes necessary to use the command line or write your own code to drive the API.

Command line access

A public login server named ubuntu-login.csail.mit.edu running CSAIL/Ubuntu 14.04 is available with all the latest OpenStack related command line tools installed. This system is running on OpenStack and should be considered beta in terms of stability; if you plan on regularly using the CLI it's best to install the tools on a group server or your workstation.

http://nimbus.csail.mit.edu provides a basic WebUI to our OpenStack cluster and, under "Access and Security" -> "API Access", links to download credentials for use with command line tools.
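
For example, a minimal sketch of getting started with the downloaded credentials (the RC filename varies with your project name):

  # load credentials into the shell; this prompts for your OpenStack password
  source <project-name>-openrc.sh
  # list the instances in your project to confirm authentication works
  nova list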

OpenStack provides a native API client called nova, which is available for Ubuntu 12.04 and newer systems in the python-novaclient package.

To install the command line tools on Mac OS X, first install pip with "sudo easy_install pip", then the nova CLI with "sudo pip install python-novaclient".
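
Putting that together, a minimal install sketch for both platforms:

  # Ubuntu 12.04 or newer
  sudo apt-get install python-novaclient

  # Mac OS X
  sudo easy_install pip
  sudo pip install python-novaclient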

Full CLI docs are at http://docs.openstack.org/cli/quick-start/content/index.html. We use the "nova", "glance", "quantum" (in the process of being renamed to "neutron"), and "keystone" projects, but "nova" is the one you want to read about first.

It is also possible to use AWS EC2 compatible tools like the euca2ools package to interact with OpenStack. This covers most common actions, but only the intersection of OpenStack and EC2 features. This is convenient if you are using both our OpenStack and EC2, but if you are only using OpenStack the nova CLI is the most feature complete tool.

Using templated orchestration

The Heat Orchestration system provides a templated method for dealing with more complex sets of resources. This is documented separately (and still a bit sparsely) on the OpenStackTemplates page. It is analogous to Amazon's "CloudFormation" product and provides some compatibility with the Amazon template language.

API access

Complete API reference is available at http://docs.openstack.org/

Your favorite programming language probably has a library or module for talking to OpenStack. Python users should look at Boto, which uses the Amazon EC2 compatibility API to talk to OpenStack. Ruby hackers can look at the Fog gem, which supports a variety of cloud APIs. If you do hack around with these or other programmatic interfaces it would be great to create a wiki page describing your setup and experiences and link to it from here...

Stock Images

TIG provides a number of stock images. A full current listing can be found in the WebUI at http://nimbus.csail.mit.edu/ under "Images & Snapshots". Currently all stock images are Linux based, as that is what people seem to want; it is possible to run other operating systems such as FreeBSD or Windows (it is against the terms of licensing to run MacOS in a non-Apple virtual host).

While the versions may change from time to time we generally provide the following classes of image:

  • CSAIL-Ubuntu-<version> - A 64-bit Ubuntu cloud image with CSAIL accounts, AFS, and configuration management
  • CSAIL-Ubuntu-<version> + autofs - As above but with access to CSAIL NFS storage via the automounter
  • Ubuntu-<version>-amd64 - 64-bit Ubuntu cloud image as distributed by Canonical from http://cloud-images.ubuntu.com/
  • Ubuntu-<version>-i386 - 32-bit version of the above

Custom Images

Note that you will get the best performance from "raw" images. We are using Ceph as our storage back end; this allows VMs to use copy-on-write clones of the stored images for instant start up and space efficiency, but only if the image or snapshot is in "raw" format.

Image snapshots

The easiest way to create a custom image is to boot one of the provided generic images, make the changes you need, then take a snapshot with either the web or CLI tools (cli instructions). You can then use that snapshot as the basis for launching new instances.
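
A minimal CLI sketch of that workflow (the instance, snapshot, and key names here are examples):

  # snapshot a running instance; --poll waits for the snapshot to finish
  nova image-create --poll my-instance my-snapshot
  # boot a new instance from the snapshot
  nova boot --flavor m1.2core --image my-snapshot --key-name my-key my-new-vm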

Note that creating a snapshot suspends the running instance; this is required to create a consistent snapshot image. How long the instance is unreachable is directly proportional to the size of the root volume: it takes much longer to copy 32G than 2G. For this reason it is strongly recommended that you use the for.snapshot instance type when working on new images in this way. While it's usually a bit faster, you should plan on about 1-2 minutes of down time per gigabyte.

Custom images from scratch

For documentation on image creation from scratch see http://docs.openstack.org/trunk/openstack-image/content/

Essentially you make a KVM based VM on your local system and then, once you have it set to your liking, import the virtual disk using the glance CLI. Remember to use a "raw" type virtual disk rather than the default "qcow2" type.

Converting existing images to raw

As described above, storing your images in "raw" format is currently the best choice. Prior to about Oct 6th 2014 we were storing images differently and recommending "qcow2". Even if you uploaded images prior to Oct 6th in "raw" format, they will be in the old image store and will need to be downloaded and re-uploaded to move to the new Ceph storage (just skip the conversion step below).

The easiest thing is to just launch a new VM (to be sure it is Ceph backed) and take a Snapshot. This will put everything in the right place and format.

There is no particular advantage to manual conversion, but for the curious this is the long form process:

  1. Configure your CLI environment
  2. Download the existing image:
    glance image-download --progress --file <image-name>.qcow2 <image-name>
  3. Convert using the qemu-img command from qemu-utils package:
    qemu-img convert -O raw <image-name>.qcow2 <image-name>.raw
  4. Upload new image:
    glance image-create --disk-format raw --container-format bare \
       --progress --name <image-name> --file <image-name>.raw
  5. If you are using a shared system like ubuntu-login.csail.mit.edu be sure you
    rm <image-name>.raw
  6. Optional: go to https://nimbus.csail.mit.edu and remove or rename the old qcow2 version of the image

Instance Types

OpenStack requires the use of predefined "Instance Types", also referred to as "Flavors". These define the virtual hardware, including the number of CPUs, memory size, size of the root disk, and optionally additional ephemeral disk space. We've defined instance types using the following scheme:

  • s1.<N>core - N cores, N x 512m RAM, 10G root disk
  • m1.<N>core - N cores, N x 1024m RAM, 16G root disk
  • lg.<N>core - N cores, N x 2048m RAM, 32G root disk
  • xl.<N>core - N cores, N x 4096m RAM, 64G root disk

These standard types run on cluster nodes without redundant components and without UPS power, and are best suited to tasks where the uptime of an individual component is not critical, such as compute nodes or worker nodes behind a load balancer (see LBaaS section).
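
To see what is available and pick a type at boot time, something like the following works (the image and key names here are examples):

  # list all instance types visible to your project
  nova flavor-list
  # boot an instance using the lg.4core flavor
  nova boot --flavor lg.4core --image CSAIL-Ubuntu-14.04 --key-name my-key my-worker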

There's one special flavor for instances launched specifically for creating image snapshots:

  • for.snapshot - 2 cores, 4096M RAM, 0G root disk

Obviously it's not really 0G; that is magic for "whatever the minimum size of the base image is". This is ideal for making snapshots, as you want them to be as small as possible.

We also provide a selection of instance types that run on a (much) smaller pool of less performant but more redundant hardware (including UPS and generator backed power, redundant power supplies, and mirrored hard drives). This is the same configuration used by TIG for hosting our virtualized services on OpenStack, such as http://people.csail.mit.edu. These instance types all have 32G root disks, with the number of cores (c) and gigabytes of RAM (g) encoded in the name:

  • ups.1c1g
  • ups.1c2g
  • ups.2c2g
  • ups.2c4g
  • ups.4c4g
  • ups.4c8g

It is possible to create custom instance types for specific projects, so your project may have extra types available; ask someone in your group why that is and what they are for. If you need a custom size created, email help@csail.mit.edu.

Network

Currently there is only one publicly available network defined in our OpenStack world, called "inet". This network puts your instance directly on a publicly accessible IPv4 network. Unlike our previous configuration, which used private IP space and NAT, the IP your operating system sees is the same as its public IP.

Security Groups

By default all network access to instances is blocked

More specifically, outbound ("egress") and related inbound ("ingress") traffic is allowed; all other traffic is blocked. This means your instance will be able to run package updates, but you won't be able to ping it or ssh to it.

Each project can edit its own default security group or create additional security groups to open up network access. Forgetting to assign the correct security group is one of the most common mistakes when starting instances and results in them being unreachable. It is possible to add or remove security groups from running instances using the "Actions" drop down menu on the "Instances" page of the web interface or the nova add-secgroup command.

To define rules:

In the web interface, select the Access & Security tab; there should be a button on the right-hand side of the interface called "+ Create Security Group".

Click that, and give the group a name and description. Once it's in the list of security groups, click the "Manage Rules" button.

By default, groups that you create will automatically allow all outgoing (Egress) traffic to all ports and IPs. To set up a similar incoming (Ingress) rule, click the "+ Add Rule" button in the rules management interface.

There are a number of predefined rules available in the "Rules" drop-down. Most likely you will find what you need there, either a specific application such as "ssh" or an open-all of a given protocol ("all tcp", "all udp", or "all icmp", for example). It is possible to add multiple rules in a group and apply multiple groups to an instance.

It is also possible to add specific custom ports or port ranges. Since all egress traffic is allowed by default in a new rule set, you will most likely want to set "Ingress" rules.
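
The same can be done from the CLI; a sketch (the group name, ports, and CIDR are examples):

  # create a group, allow inbound ssh and http from anywhere, attach it to an instance
  nova secgroup-create my-web "web server access"
  nova secgroup-add-rule my-web tcp 22 22 0.0.0.0/0
  nova secgroup-add-rule my-web tcp 80 80 0.0.0.0/0
  nova add-secgroup my-instance my-web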

Using Fixed IP Addresses

Instances are dynamically assigned publicly routable IP addresses and this is sufficient in many cases.

If you need a fixed IP address one may be reserved using WebDNS and selecting "OpenStack public network (128.52.128.0/18)" as the subnet.

Once your fixed IP is registered in WebDNS, you should assign it to your instance in the WebUI. When launching an instance, fill in your fixed IP in the eth0 Fixed IP field under the Details tab. Currently the web interface for this only works if you are booting a single instance with a single connected network. That is by far the most common case but if you need to specify a fixed IP and require multiple networks you will need to use the CLI.
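
If you do need the CLI, booting with a fixed IP looks roughly like this (the flavor, image, and key names are examples; the network ID comes from neutron net-list):

  # find the ID of the "inet" network
  neutron net-list
  # boot with your reserved address on that network
  nova boot --flavor m1.2core --image CSAIL-Ubuntu-14.04 --key-name my-key \
      --nic net-id=<inet-net-id>,v4-fixed-ip=<your-reserved-ip> my-fixed-vm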

Load Balancer as a Service (LBaaS)

LBaaS is a relatively new feature and as such is not well known or tested in our environment yet; feel free to try it out, and if you do please share your experiences and expand this documentation.

To use LBaaS you will need a fixed IP for the load balancer pool's virtual IP; this will be the public address of your application (well, unless you put a reverse proxy in front of the load balancer, but let's not get that crazy just yet...). The instances that are balanced behind it can have dynamic IPs (though dynamic or fixed, you need to manually add them to the pool) and may be dynamically added and removed from the pool.

The basic steps to creating a load balancer pool are:

  1. Get IP and DNS as mentioned above
  2. Create the pool
  3. Add servers to the pool
  4. Create a healthmonitor & associate it with the pool to make sure the servers are up
  5. Create a virtual IP (vip) and associate it with the pool using the address obtained in step 1

http://docs.openstack.org/admin-guide-cloud/content/lbaas_workflow.html provides a very bare bones walk-through of the CLI work flow. The Dashboard also provides an interface for setting up load balancer pools, which should be better documented here... but knowing the steps above you can probably find them.
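
As a rough sketch, the CLI steps above look something like the following (names, ports, and IDs are examples; see the workflow document above for details):

  # 2. create the pool on the subnet your instances live on
  neutron lb-pool-create --name my-pool --lb-method ROUND_ROBIN \
      --protocol HTTP --subnet-id <subnet-id>
  # 3. add servers to the pool by address
  neutron lb-member-create --address <server-ip> --protocol-port 80 my-pool
  # 4. create a health monitor and associate it with the pool
  neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3
  neutron lb-healthmonitor-associate <monitor-id> my-pool
  # 5. create the virtual IP on the pool using your reserved fixed IP
  neutron lb-vip-create --name my-vip --protocol HTTP --protocol-port 80 \
      --subnet-id <subnet-id> --address <your-reserved-ip> my-pool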

Defining your own networks

Quantum/Neutron makes it easy for us to allow groups to create their own private networks, so we did. There is no particular use case in mind, so if you have one let us know and we can help configure the best solution. Here is the current state of things if you want to play with this part.

Right now you can definitely define your own networks that are private to your project. These are implemented as GRE tunnels, which you needn't think about except that it means they are isolated and you don't have to worry about coordinating IP addressing with other projects. If you have defined your own networks it is possible to add any or all of them to a given instance.

You should be able to use OpenStack's built-in DHCP service to serve dynamic addresses from a range of your own choosing to hosts on that network. This is the intended case, though it is not well tested.
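
A sketch of creating such a network with a DHCP range of your own choosing (the names and addressing are examples):

  # create the private network and a subnet with DHCP enabled
  neutron net-create my-private-net
  neutron subnet-create --name my-private-subnet --enable-dhcp \
      --allocation-pool start=10.10.0.10,end=10.10.0.250 \
      my-private-net 10.10.0.0/24
  # attach the network to an instance at boot time
  nova boot --flavor m1.2core --image CSAIL-Ubuntu-14.04 --key-name my-key \
      --nic net-id=<my-private-net-id> my-private-vm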

You might be able to use OpenStack's "router" implementation to build routers between multiple private networks. I don't know why you would want to, and this is not at all tested, but the documentation suggests it is possible.

You cannot use OpenStack's "router" implementation to provide NAT and a connection to the public IP network; if you want this functionality you'd need to build an instance with multiple interfaces and configure that system manually to do routing and NAT. If you want to do this we might be able to configure things so it works, but there are a number of performance bottlenecks that make it seem like a bad idea.

Understanding Storage

What stays and what goes away

The root disk and ephemeral disk of an instance are ephemeral: when you shut down, they go away. Anything you want to save needs to be written to either a persistent volume or to network Storage.

Persistent volumes, as currently implemented, are iSCSI volumes on a Dell EqualLogic SAN (though they appear as local storage when attached to VMs). While redundant, this storage is not backed up. We only have 10T of storage in this pool for everyone to share, so it is best for relatively small volumes such as bootable operating system images for persistent (rather than ephemeral) virtual machines. These volumes also can only be connected to one instance at a time; they are not shared storage. So if you need to share files among instances or have larger data needs, TIG hosted NFS Storage, which is backed up and supports concurrent access from multiple clients, is the better choice.
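
Creating and attaching a persistent volume from the CLI looks roughly like this (the sizes, names, and device path are examples):

  # create a 20G volume and attach it to a running instance
  cinder create --display-name my-data 20
  nova volume-attach my-instance <volume-id> /dev/vdb
  # inside the instance, on first use only: make a filesystem and mount it
  sudo mkfs.ext4 /dev/vdb
  sudo mount /dev/vdb /mnt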

A note on snapshots

OpenStack allows you to snapshot your volumes, but no automated snapshots are taken. If you are about to make a potentially disruptive change, it's probably a good idea to take a snapshot. If you want periodic snapshots, that's also possible, but you will need to script it yourself. Snapshots are copy-on-write, so they don't consume any space until changes are made to the base volume. A 50% additional space allotment is automatically made for snapshots, so a 20G volume will actually reserve 30G of space on the storage server. Snapshots are not backups: when your snapshot space usage exceeds this reserve, the system automatically frees space by deleting the oldest snapshot(s). If you need backed up storage, please look at TIG hosted network Storage options.
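
Taking a volume snapshot from the CLI is a one-liner; a sketch (the name is an example, and --force is needed if the volume is attached to a running instance):

  cinder snapshot-create --force True --display-name pre-upgrade <volume-id>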

Presentation Materials

  • CSAIL OpenStack Beta Announcement (Video) Nov 14, 2012.
  • CSAIL-Openstack-Beta.svg: Slides from the above presentation. These will render in your browser; use the left and right arrows to navigate with animated transitions, or the up and down arrows if the animations make you seasick... Not sure how helpful they are without the commentary, but people have asked for them.

Upstream Documentation

WTFM

Please help Write The Fine Manual... this is a wiki, so go for it: add use cases, command line examples, fix my spelling and grammar, have fun!

-- JonProulx - 28 Aug 2013