ASPCS
 
Paper: HST in the Clouds: 25 Years of HST Processing
Volume: 512, Astronomical Data Analysis Software and Systems XXV
Page: 351
Authors: Durand, D.; Haase, J.; Swam, M.; Fabbro, S.; Goliath, S.
Abstract: The Hubble Space Telescope (HST) archive system at the CADC, ESAC, and STScI has evolved constantly since these centers began archiving HST data in 1990. After successive upgrades to the underlying storage (optical disks, CDs, DVDs, magnetic disks) and the implementation of multiple processing systems (On-the-Fly calibration, CACHE), the HST archive system at CADC now runs in a cloud-based processing system. After multiple hurdles, mostly caused by the way the HST calibration system was designed many years ago, we report a working system under the CANFAR cloud (Gaudet et al. 2009), designed and operated by CADC and hosted on Compute Canada cloud infrastructure. Although not very large, the HST collection requires constant recalibration to take advantage of new software and calibration files. Here we describe the unique challenges of bringing legacy pipeline software to run in a massive cloud computing system. The HST processing system can, in principle, be scaled easily: more than 200 cores are presently available to process HST images, and this could potentially grow to thousands of cores, allowing a very uniformly calibrated archive, since any perturbation to the system could be dealt with within a few hours. We discuss why this might not be possible and propose potential solutions.