Eligible: Instructors, Researchers
Hours: Monday - Friday, 8:30 a.m. - 6:30 p.m.
Research and Woodruff Health Sciences IT (R-WIT) supports High Performance Computing (HPC) on campus by maintaining some of Emory's largest HPC resources. The ELLIPSE cluster, a 768 CPU-core high performance supercomputing cluster, was acquired by Emory in the Summer of 2007 and is housed in one of Emory's data centers. It is managed by the High Performance Computing Group within R-WIT in collaboration with scientific centers, investigators, and service cores. Policies and rates for use of the ELLIPSE are established by the Executive Committee of the Emory High Performance Compute Cluster (EHPCC).
ELLIPSE is a high-performance supercomputing cluster with an estimated throughput of 3 teraflops. It is a loosely coupled cluster designed primarily for serial/batch processing. The cluster consists of AMD Opteron-based systems connected via a carrier-grade Gigabit Ethernet switch, with a 21 TB IBRIX parallel file system providing computational space. The Sun Grid Engine scheduler is used to manage jobs.
ELLIPSE was designed as a general-use cluster and has been tuned to perform well on jobs from a range of disciplines, with both large and small data sets. Access to the cluster is granted to Emory-affiliated PIs under a CPU-hour charge model. For detailed information on how to register for cluster use, please see the related HPC User Accounts document.
Cluster Usage Rates
Billing for use of the ELLIPSE cluster is based on the wall-clock time during which a particular job is associated with a cluster run queue, plus any disk space consumed by the end user in the global file system beyond the user's base (free) disk quota. Wall-clock time refers to the number of hours that a submitted job is active on a run queue (as opposed to the wait queue); it may therefore include not only CPU time on a system but also time spent on I/O and network delay. When users submit jobs for execution on the cluster, their work is placed on one of the queues described below. Each queue has different runtime priorities and is associated with its own rate. If no queue is specified, jobs are submitted to the all.q queue by default.
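For illustration only, the short Python sketch below shows how a job might be directed to a specific queue at submission time using Sun Grid Engine's qsub; the job script name my_job.sh and the job name are hypothetical, and the exact options used on ELLIPSE may differ. The individual queues and their rates follow.

    import subprocess

    # Hedged sketch: submit a hypothetical job script to a specific ELLIPSE queue
    # with Sun Grid Engine's qsub. If -q is omitted, the job lands in all.q by default.
    cmd = [
        "qsub",
        "-q", "bigmem.q",     # target queue: all.q, bigmem.q, express.q, or long.q
        "-N", "example_job",  # hypothetical job name
        "my_job.sh",          # hypothetical job script
    ]
    subprocess.run(cmd, check=True)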
The "all.q" cluster queue: (Rate: $.03/CPU hour)
This is the default or "general" queue; it spans all compute nodes and provides 4 job slots per standard compute node (only one slot is provided for the all.q on each of the high memory nodes with 32G of memory). All users can access this queue. Jobs in the all.q are limited to 12 hours. The queue instance will close/subordinate if there are 1 or more jobs running in the bigmem.q (see below).
The "bigmem.q" cluster queue: (Rate: $.035/CPU hour)
This queue is limited to the eight high memory nodes with 32GB of memory, so only eight jobs can run concurrently in the bigmem.q. Each job has exclusive access to the 32GB system. All users can access this queue, and there are no time limits on jobs. The bigmem.q is not subordinate to any other queue.
The "express.q" cluster queue: (Rate: $.05/CPU hour)
This queue spans all compute nodes, and provides 2 job slots per standard compute node (only one slot is provided for the express.q on each of the high memory nodes with 32G of memory). The express.q is currently only accessible by users on the expressUsers access list. Jobs in the express.q are limited to 2 hours. The queue instance will close/subordinate if there are 1 or more jobs running in the bigmem.q. Contact the HPC group (email@example.com) to request access to this queue.
The "long.q" cluster queue: (Rate: $.025/CPU hour)
This queue spans all compute nodes and provides 1 job slot per standard compute node. All users can access this queue, and there are no time limits on jobs. All jobs are run with a Unix NICE level of +15. The queue instance will close/subordinate if there are 1 or more jobs running in the express.q, if there are 1 or more jobs running in the bigmem.q, or if there are 2 or more jobs running in the all.q.
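As a rough illustration of the rate structure above, the Python sketch below estimates the charge for a single job. It assumes the bill scales with wall-clock hours on the run queue, the number of job slots occupied, and the per-queue rate; the rates are taken from this document, but the exact billing formula is an assumption rather than a statement of policy.

    # Hedged sketch: estimate the charge for a single ELLIPSE job.
    # Assumes cost = wall-clock hours on the run queue x job slots occupied x queue rate;
    # the actual billing formula is set by the EHPCC Executive Committee.

    RATES_PER_CPU_HOUR = {  # dollars per CPU hour, from the queue descriptions above
        "all.q": 0.03,
        "bigmem.q": 0.035,
        "express.q": 0.05,
        "long.q": 0.025,
    }

    def estimate_job_cost(queue: str, wall_clock_hours: float, slots: int = 1) -> float:
        """Return an estimated charge in dollars for one job."""
        return RATES_PER_CPU_HOUR[queue] * wall_clock_hours * slots

    # Example: a 10-hour job occupying 4 slots in the default all.q queue.
    print(f"${estimate_job_cost('all.q', 10, 4):.2f}")  # -> $1.20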
Upon creation, all new user accounts are established with a quota of 5 GB of disk space, provided to the end user at no charge. Beyond the initial 5 GB allotment, additional space is billed at $0.35/GB; credit toward storage is earned at the rate of 10 GB free for every 100 CPU hours consumed. Quota increases can be ordered at any time and remain in place for a minimum of one monthly billing cycle. Quota decreases are implemented at the beginning of the following billing cycle.
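For illustration, the Python sketch below estimates a storage charge for one billing cycle under the quota rules above. It assumes the 10 GB credit is granted per full 100 CPU hours consumed and stacks with the free 5 GB base quota; this is an interpretation of the rules, not a definitive billing calculation.

    # Hedged sketch: estimate the storage charge for one billing cycle.
    # Assumptions (an interpretation, not stated billing rules): the 10 GB credit
    # accrues per full 100 CPU hours consumed, and it stacks with the free 5 GB base quota.

    BASE_QUOTA_GB = 5             # free disk space for every new account
    RATE_PER_GB = 0.35            # dollars per GB beyond the free allotment
    CREDIT_GB_PER_100_HOURS = 10

    def estimate_storage_charge(used_gb: float, cpu_hours: float) -> float:
        """Return an estimated storage charge in dollars for one billing cycle."""
        credit_gb = CREDIT_GB_PER_100_HOURS * (cpu_hours // 100)
        billable_gb = max(0.0, used_gb - BASE_QUOTA_GB - credit_gb)
        return billable_gb * RATE_PER_GB

    # Example: 40 GB of disk in use after 250 CPU hours of compute.
    print(f"${estimate_storage_charge(40, 250):.2f}")  # 40 - 5 - 20 = 15 GB -> $5.25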
All billing rates are established by the EHPCC Executive Committee.
Hardware: 192 dual-socket, dual-core AMD 2.6 GHz compute nodes (768 concurrently usable cores); 8 GB of RAM per standard compute node, plus 8 high-memory nodes with 32 GB of RAM; Foundry BI-RX Gigabit Ethernet interconnect; Sun 6140 disk array; Nexsan SASBeast.
Software: CentOS (Linux) operating system; IBRIX parallel file system with 21 TB of usable space.
Scheduler: Sun Grid Engine
Applications: R, GSL, and others
Ongoing Cost: Subsidized; see above for CPU time and storage costs.