by D. Soriano and Gianni De Fabritiis, PhD.
We started using GPU computing for molecular dynamics simulations in late 2007, shortly after NVIDIA introduced its first fully programmable graphics card. Adopting GPUs in our molecular dynamics program was a natural transition: we had spent the previous two years working with Sony’s Cell processor, the first widely available heterogeneous processor, and had obtained very promising results (see 2007 papers cited). While working towards ACEMD, our molecular dynamics software, we could not find a stable GPU cluster from any of the major vendors that combined high GPU density with a cost-effective specification. We were thus forced to build our own.
Development of the first GPU servers for MD simulations by Acellera
We started the hardware development work that ultimately led to Acellera’s Metrocubo by cherry-picking the appropriate components from a scarce pool. We knew we wanted to build cost-effective, high-density GPU computing systems, so we focused on researching and developing 4-GPU nodes built around consumer hardware. After evaluating the possibilities we found a single 1500 W power supply, and picked one of the very few motherboards that at the time supported four double-width GPUs. We then complemented this with CUDA GPUs from NVIDIA, and after deciding on the remaining components we had a system that allowed us to build a small GPU cluster, to which we tailored what would ultimately become ACEMD.
Development of a computer chassis specifically designed for GPU accelerated MD
Over the next two years, the range of consumer hardware available for GPU computing improved significantly. Nevertheless, we could not consistently source a commercial GPU server with a suitably designed chassis, and consequently some of our systems required frequent support. For the most part, available chassis did not offer the right combination of GPU cooling, low noise, and GPU density, and brand-name options came with custom power supplies and mainboards, which complicated upgrades. To address these issues, and to standardize hardware configurations so that we could confidently build robust, efficient machines optimized for GPU-accelerated molecular dynamics simulations, we decided to design a new GPU chassis. To date it remains the only chassis optimized for MD simulation work and computer-aided drug design.
First generation GPU chassis: design and prototyping
The first prototype of the GPU computer chassis, made of stainless steel, already included some of the key signature features of the final product. First, we placed the power supply at the front and fixed the dimensions of the box to support single-socket, 4-GPU motherboards exclusively, as multiple sockets increased cost and decreased performance by about 10% due to PCIe switch latency. The result was a versatile and compact design that permitted easy repurposing of workstations into rackmount solutions and offered the possibility of high GPU density. While troubleshooting or testing new configurations we frequently moved machines between the server room and the office, and therefore we preferred a machine that fit in both environments. Second, we placed two large-radius fans with high-quality bearings at the front, instead of the center or the back. Given the small volume of the box, we felt this would be enough to ensure proper cooling and keep noise at manageable levels. As one might expect, the first prototype required revision, but minor modifications, primarily to the front panel of our second-generation design, produced a GPU chassis that met all of our requirements.
Second generation GPU chassis: results and final touch ups
The results obtained with GPU nodes built on the second-generation chassis were most satisfactory. The cooling was much better than anything we had tested before, and the temperature of the GPUs never exceeded 78 °C, even after running the machine for extended periods with both actively and passively cooled GPUs. Furthermore, the noise produced by the machines was very acceptable and did not exceed the background noise of the lab’s AC. Some of Acellera’s customers now have six of these systems running satisfactorily at full blast in their offices. Finally, the compact size of the chassis – each unit 8U tall and less than one third of a 19 in rack wide – allowed fitting three units per rack shelf, on a tray, thus facilitating GPU cluster assembly.
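As an aside, temperature ceilings like the 78 °C above are easy to monitor in practice by logging readings from NVIDIA’s `nvidia-smi` tool (e.g. `nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader`). A minimal sketch of such a check follows; the sample readings are illustrative, not measured values from our tests:

```python
# Sketch: parse per-GPU temperature lines (one integer per line, degrees C),
# as produced by `nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader`,
# and report the hottest device. Sample data below is illustrative.

def max_gpu_temperature(csv_output: str) -> int:
    """Return the hottest GPU temperature (in C) from nvidia-smi-style output."""
    return max(int(line.strip()) for line in csv_output.splitlines() if line.strip())

sample = "71\n74\n78\n69\n"  # one reading per GPU, degrees Celsius
hottest = max_gpu_temperature(sample)
print(hottest)  # -> 78
assert hottest <= 78, "GPU exceeded the expected thermal ceiling"
```

Running this in a loop during a long simulation gives a simple burn-in log without any extra tooling.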
Next, we shifted our attention to the unit’s cosmetics. GPU clusters and workstations are ubiquitously characterized by dull colors and unimaginative designs, and we wanted something more attractive to see in the lab every day. We ordered ten prototypes in various colors; we liked the green, orange, and blue versions, but settled on the last, as it matched Acellera’s logo color. The feet for the workstations we made ourselves, 3D-printed in orange on our own printer.
GPU clusters and workstations built with Acellera’s chassis are marketed in the USA, Canada, and Europe
Since then, Acellera has shipped over a hundred Metrocubo units built on this chassis design. To retain the flexibility to adapt to hardware changes, only small batches are made, with modifications introduced as needed. Developing standardized hardware configurations allows Acellera to minimize hardware problems. Acellera sells the same standard configurations in Europe and the US thanks to partnerships with companies such as Silicon Mechanics and Azken. No Metrocubo machine ships until it has passed a battery of taxing stress tests, including running extensive ACEMD simulations. Our Metrocubo GPU workstations are plug-and-play devices that come with the OS (CentOS or Ubuntu) and molecular dynamics simulation software fully installed and ready for use. For GPU cluster configurations, software installation is at the customers’ discretion, but Acellera offers assistance should they prefer it done on-site. If you are interested in more photos of our GPU systems or details on a current sample configuration, visit our Google+ profile, see more information on Metrocubo, or contact us directly.
Finally, if you prefer to build your own, here is a sample spec that we currently use:
- Acellera Metrocubo Chassis
- Intel Xeon™ E3-1245 v3 quad-core, 3.4 GHz, 22 nm, 8 MB cache, 84 W, with integrated P4600 GPU
- ASUS Z87-WS motherboard
- 32 GB ECC RAM
- Silverstone 1500W power supply
- WD Red 4 TB hard disk
- 4 GPUs of choice (Tesla K40, GTX 780 Ti)
Note that E3 processors have a built-in GPU that can drive the display while all four NVIDIA GPUs are computing. Enjoy.
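After assembly, it is worth confirming that all four discrete GPUs are visible to CUDA, for example with `nvidia-smi -L` (the integrated Intel P4600 will not appear there, as it is not a CUDA device). A small sketch of parsing that listing; the sample lines are illustrative:

```python
# Sketch: count CUDA devices from `nvidia-smi -L`-style output to confirm
# that all four discrete GPUs in the node are visible. Sample listing is
# illustrative; real output includes the actual device UUIDs.

def count_cuda_devices(listing: str) -> int:
    """Count device lines in `nvidia-smi -L`-style output."""
    return sum(1 for line in listing.splitlines() if line.startswith("GPU "))

sample = (
    "GPU 0: GeForce GTX 780 Ti (UUID: GPU-xxxx)\n"
    "GPU 1: GeForce GTX 780 Ti (UUID: GPU-xxxx)\n"
    "GPU 2: GeForce GTX 780 Ti (UUID: GPU-xxxx)\n"
    "GPU 3: GeForce GTX 780 Ti (UUID: GPU-xxxx)\n"
)
print(count_cuda_devices(sample))  # -> 4
```

If the count comes back short, reseating the cards or checking PCIe slot configuration in the BIOS is the usual first step.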