History

The history of high performance computing at the University of Cambridge 

A Journey into High Performance: Research Computing at Cambridge

From a small, specialist facility to a national-scale powerhouse, Cambridge’s research computing story is one of bold ambition, constant reinvention, and world-class achievement.

 

1996–2005: The Foundations

In the late 1990s, the original High Performance Computing Facility (HPCF) quietly began supporting select researchers across the University. Working with proprietary systems, it laid the groundwork for what would become one of the UK’s most influential academic computing services.

In 2005, a defining moment arrived: Dr Paul Calleja joined full-time as Director of the Cambridge-Cranfield High Performance Computing Facility (CCHPCF) in the Department of Applied Mathematics and Theoretical Physics (DAMTP). A clear vision was set — to build a high performance computing service for the entire University.

 

2006: Darwin I and National Recognition

In 2006, the facility was renamed the High Performance Computing Service (HPCS) — and its doors opened to all University researchers.

That same year, the arrival of Darwin I transformed Cambridge’s computing capabilities. With 585 nodes and 2,340 Intel Woodcrest cores delivering 18.27 teraflops, Darwin I propelled Cambridge to:

  • 20th fastest HPC system in the world

  • Fastest academic HPC system in the UK

For the first time, Cambridge was competing on the global supercomputing stage.

 

2007–2009: Partnerships and Space Science

Momentum built quickly:

  • A strategic partnership with Dell created the Cambridge–Dell Solution Centre.

  • In 2009, HPCS moved from DAMTP to the School of Physical Sciences.

  • Cambridge researchers, powered by HPCS, supported the European Space Agency’s Planck satellite project, helping to unlock insights into the origins of the universe.

 

2010–2012: The GPU Revolution and DiRAC

In 2010, HPCS became part of the national DiRAC high performance computing facility. This marked Cambridge’s integration into a UK-wide infrastructure for advanced computational science.

Key milestones followed:

  • Darwin II expanded the original system with 128 additional nodes.

  • Cambridge installed its first GPU cluster, embracing parallel computing acceleration.

  • In 2012, Darwin III delivered a tenfold performance increase:

    • 600 nodes

    • 9,600 Intel Sandy Bridge cores

    • 183.38 teraflops

    • Ranked 93rd in the world (Top500, June 2012)

Darwin III also extended services nationally through DiRAC2 (funded by STFC) and Cambridge joined the Square Kilometre Array Science Data Processor consortium — preparing for one of the most data-intensive science projects ever attempted.

 

2013: Energy Efficiency and Wilkes I

Demand for computing power surged.

  • Darwin III was relocated to expand data centre capacity.

  • A new GPU cluster, Wilkes I, added 128 GPU nodes (256 NVIDIA K20c Tesla GPUs).

  • Overall performance doubled.

Wilkes achieved:

  • 2nd place globally on the Green500 list

  • The title of world’s most energy-efficient air-cooled cluster

Performance and sustainability were advancing hand in hand.

 

2014: A New Identity

In 2014, HPCS became part of the newly formed University Information Services (UIS). The service was renamed Research Computing Services (RCS) and established as a dedicated division led by Dr Calleja.

The mission evolved from providing infrastructure to delivering strategic, institution-wide research capability.

 

2015: Entering the Petascale Era

 

All research computing infrastructure was relocated to the newly built West Cambridge Data Centre (WCDC) — a £20 million, state-of-the-art facility designed for resilience, security, and scale.

With resilient power, advanced cooling, and high-level security, WCDC enabled Cambridge to:

  • Meet rapidly growing demand

  • Provide national-level research services

  • Secure its entry into the petascale supercomputing era

 

2016: Secure Computing for Clinical Discovery

RCS launched the Clinical Cloud service, enabling the Wolfson Brain Imaging Centre and Clinical School researchers to accelerate their work using secure, cutting-edge computing and storage platforms.

This marked a major expansion into sensitive, clinically governed research environments.

 

2017: Multi-Petaflop Scale and UK Leadership

2017 was transformative.

Research Storage Services Launched

A new petabyte-scale high performance storage platform became available to all Cambridge researchers, supporting the explosive growth of research data.

Cambridge Service for Data Driven Discovery (CSD3)

RCS reached multi-petaflop scale with the installation of CSD3 — combining CPU, GPU, many-core, and big data analytics capability.

Peta4 Breaks Records

Peta4 achieved 1,696.7 teraflops, ranking 75th in the November 2017 Top500 list.

It became the fastest academic high performance computing system in the UK.

 

From Specialist Facility to National Powerhouse

What began as a small facility serving select researchers evolved into a world-leading research computing service:

  • Supporting global space missions

  • Enabling world-class astrophysics and cosmology

  • Powering brain imaging and clinical research

  • Scaling from gigaflops to multi-petaflops

  • Expanding from local support to national infrastructure

Today, Research Computing Services stands as a cornerstone of data-driven discovery at Cambridge — enabling researchers to ask bigger questions, process larger datasets, and move faster than ever before.