Shared Computing

General purpose shell access (more information) is available to all lab members. It should be used for computationally non-intensive tasks, such as reading email, editing text files, or compiling small source packages. It should not be used for long-running or computationally demanding processes.

Do not run MATLAB on login.csail.

OpenStack

TIG maintains an OpenStack based ‘cloud’ platform for all lab members.

OpenStack is a free and open-source software suite that provides Infrastructure as a Service (IaaS), both for private clouds like ours and for large public clouds. We currently run the "Nova" compute service, "Keystone" identity service, "Horizon" web dashboard, "Cinder" volume service, "Glance" image management service, "Neutron" networking service, and "Heat" orchestration service. We are on the "Mitaka" release (aka 2016.1, released May 2016), running on Ubuntu 16.04 LTS "Xenial" with KVM as the virtualization layer. We also offer an OpenStack "Swift" and AWS S3 compatible object store based on Ceph, with 168TB of available storage (505TB raw, less 3x replication).
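As a sketch of how the pieces above fit together, the standard OpenStack tooling authenticates against Keystone and then drives Nova, Cinder, and the other services. The endpoint URL, project, and region names below are hypothetical placeholders, not the lab's actual values; the real credentials come from the Horizon dashboard or the OpenStack page.

```yaml
# clouds.yaml -- the standard config file read by the openstack CLI and
# openstacksdk, usually placed in ~/.config/openstack/.  Every value
# below is a placeholder for illustration only.
clouds:
  csail:
    auth:
      auth_url: https://example.csail.mit.edu:5000/v3   # Keystone endpoint (placeholder)
      username: your-csail-username
      project_name: your-project
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
    interface: public
    identity_api_version: 3
```

With a file like this in place, `openstack --os-cloud csail server list` talks to Nova and `openstack --os-cloud csail volume list` to Cinder.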

As of June 2018, the platform comprises 40 physical nodes with 1720 cores and 11TB of RAM.

Detailed information on account self-signup and usage is available on the OpenStack page.

Public PCs and Macs

During Research Ramp-Up Phase 2, the doors to TIG will remain closed, and access to the public machines and poster printer is not permitted. Any on-site walk-in service from TIG will be by appointment only.

TIG maintains walk-up machines for general use in the 2nd floor Gates Tower area.

You must have a valid CSAIL account to log into any of these machines. All of these machines are set up to print to the plotter.

There is limited software on these machines, and users cannot install software themselves. If you need additional software installed on these machines and believe it would benefit the lab, please drop us a note.

SLURM Compute Cluster

TIG maintains a SLURM compute cluster for automated allocation of GPU and CPU compute resources. Usage information is available on the SLURM page.
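As an illustration of how jobs are typically submitted to a SLURM cluster, a minimal batch script might look like the following. The partition name and resource limits here are assumptions for the sketch; the cluster's actual partitions and policies are documented on the SLURM page.

```shell
#!/bin/bash
# Minimal SLURM batch script sketch.  The partition name and the
# resource limits below are hypothetical; consult the SLURM page
# for the real values before submitting.
#SBATCH --job-name=example
#SBATCH --partition=gpu        # hypothetical partition name
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=01:00:00
#SBATCH --output=%x-%j.out     # log file named after job name and id

# Commands below run on the allocated node once resources are granted.
srun python train.py
```

Submit the script with `sbatch job.sh` and monitor it with `squeue -u $USER`.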