Project Updates

Blackboard Infrastructure Completely Overhauled

Blackboard's infrastructure had to be re-architected.

The recent upgrade of Blackboard (Bb) to version 9.1 was more than a simple software installation. Significant hardware and other infrastructure had to be developed and re-architected. It was a tremendous undertaking that showcased the efficiencies OIT can achieve.

Typically, a Bb software upgrade takes twelve months. This time, the Bb project team finished a multi-step software and hardware upgrade in only five months. The project does not formally end until August, when a service makeover is to be completed that will include ServiceNow training and the development of a new request fulfillment process. In the past, users sent their requests by email; now they will use ServiceNow.

Moving production and QA from physical application servers to virtual machines (VMs) was another significant achievement for this project. The Bb database was upgraded from Oracle 10g to 11g Real Application Clusters (RAC). Using RAC wasn't possible until the upgrade to Bb 9.1; as a result, the Bb databases are now clustered and the database administrators have more control. Performance has improved, and excellent scalability options are now available.

In terms of scalability, the Bb team has incorporated trigger VMs: in the event of a decrease in performance, they can easily bring additional standby VMs online, thanks to assistance from Steve Siegelman's infrastructure team. Another improvement was the upgrade from the version of the Apache server that was bundled with the older version of Bb to vendor-supported Apache 2.2, which aligns Bb with current UTS standards.

The upgrade itself took Bb from version 9.0, service pack 3, to version 9.1, service pack 8. This was not a one-step upgrade; it involved seven steps, each one tested as though it were the end of the upgrade process. It was like performing seven upgrades. Another difficulty involved add-on, vendor-supported applications called Building Blocks. Emory's instance of Bb uses roughly 20 Building Blocks, and the team had to review each of them to determine whether it required an upgrade or any tweaking. Six of them had to be fully upgraded. The team also reviewed the Bb-Opus integration through unit and end-to-end testing.

Bb users depend on near-constant uptime. Through careful planning, the team condensed the downtime for the service to less than six hours, when downtime for an upgrade like this is usually close to a week. They took a snapshot of the Bb 9.0 environment and returned that environment to users so they could continue working. Meanwhile, the team used the snapshot to perform the 9.1 upgrade on the new architecture.

The team did some innovative things to ensure efficiency:

  1. Tested different functionality in different environments, e.g., connectivity in DEV, user testing in QA, etc.
  2. Used user training and focus groups instead of pilot testing to get instructor/staff input
  3. Ran multiple trials in the PROD environment

Sandra Butler, the Blackboard Architect, said, "I've been involved in many upgrades. This was the shortest window ever. I am really proud of our team effort. We delivered an improved product with a new look and feel, with an eye on the future based on our new infrastructure. Response from users has been positive."

For a project of this magnitude, the team was not just performing an upgrade...they were reinventing Bb on totally new hardware and software models. And that's what goes into a simple upgrade! Many thanks to all who made it such a rousing success.

Surpassing 500 VMs

VMs offer much greener ways to manage computing power.

When the upgraded Blackboard went live this month, the Infrastructure team reached a milestone of more than 500 virtual machines (VMs) within our infrastructure (552, to be exact). This represents a huge cost savings in hardware and power. It is also greener because we run 20 physical servers instead of 552.

A virtual machine shares a physical server with other VMs. It looks like its own stand-alone server, but it shares the underlying hardware. In the old days, servers ran at about 10% of capacity. Now, with VMs, we make much better use of our machines at dramatically lower cost.
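As a rough illustration of the consolidation arithmetic described above: the VM and host counts (552 and 20) come from this article, while the per-server power draw is a hypothetical figure assumed only for the sketch.

```python
# Back-of-the-envelope server-consolidation math.
# vms and hosts are the figures reported in this article;
# watts_per_server is a hypothetical assumption for illustration.
vms = 552               # workloads that would each have needed a physical server
hosts = 20              # physical servers actually hosting those VMs
watts_per_server = 500  # assumed average draw per physical server (hypothetical)

servers_eliminated = vms - hosts
consolidation_ratio = vms / hosts
kw_avoided = servers_eliminated * watts_per_server / 1000

print(f"Servers eliminated: {servers_eliminated}")          # 532
print(f"Consolidation ratio: {consolidation_ratio:.1f}:1")  # 27.6:1
print(f"Assumed power avoided: {kw_avoided:.0f} kW")
```

Under these assumptions, consolidating 552 workloads onto 20 hosts eliminates 532 physical servers, which is where the hardware and power savings come from.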

The effort started four years ago and has really taken off since last year, when we moved to larger physical servers with more memory so that we could load the bigger applications, such as PeopleSoft HR, PeopleSoft Student, Blackboard, and, in two months, Kronos. As a result, our footprint in the data center continues to shrink. With each refresh, we check to see whether we can virtualize the application.

We are also seeing campus-hosted VMs. Customers on campus are now using this service, and if they need a new server, we have them purchase VMs (hosted on HP C-Class blades) instead.

The big thing is that we have total buy-in across the division because everything is going virtual these days. We are purchasing far fewer physical machines than in the past. Mike Lewis (Systems Engineering, Infrastructure), the main architect of the project, has done commendable work.

- Steve Siegelman, Manager, Infrastructure