Paper: Automated Image Quality Assessment for the GONG Project
Volume: 61, Astronomical Data Analysis Software and Systems III
Page: 312
Authors: Williams, W. E.; Goodrich, J.; Toussaint, R.
Abstract: The GONG (Global Oscillation Network Group) project will observe the sun nearly constantly for three years, from six sites placed around the globe. Approximately one terabyte of image data, an estimated four million images, will be acquired during the course of the project. This massive amount of data must be processed in a small fraction of the data acquisition time. A major obstacle to efficient data reduction is the presence of "bad" images in the data stream. These images must be identified and removed very early in the data processing. GONG data consist of velocity, modulation amplitude, and mean velocity images derived from three amplitude-modulated intensity observations of the sun obtained in the light of NiI at 677 nm. The relative Doppler velocity is directly proportional to the phase of the modulated signal. Since the helioseismic modes are studied using a small fraction of the mean Doppler velocity, both obviously and subtly bad data can have an equally undesirable impact on the results. Obviously "bad" data consist of partial images, misshapen images, images with gross systematic signal variations on the solar disk, images with anomalous spikes on the solar disk, and images with poor definition of the limb of the sun. Subtly "bad" data include images with slight systematic signal variations across the solar disk and anomalous signal levels on the solar disk. The modulation amplitude images have been found to be the most sensitive indicator of subtly bad data. The decision to remove data from further processing is made by the operator, and it must be made rapidly and accurately. To assist the operator in this decision, software has been developed to automate the identification of possibly bad data. The criteria used for automated identification of bad data are based on quantities computed for individual images. Some of these quantities must be compared to values for images adjacent in time to identify bad data.
Others can be used to identify bad data by finding inconsistencies within the images themselves. Use of the automated bad-image identification procedures developed over the course of the project has practically eliminated data reprocessing due to bad images. The time required for bad-data identification and removal has been reduced to less than ten percent of the data acquisition time. Increasingly sophisticated criteria for automated bad-data identification promise further reductions in the time required to identify and remove bad data.
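The temporal criterion described above, comparing a quantity computed for one image against the same quantity for images adjacent in time, can be sketched as follows. This is a minimal illustration only: the function name, the window size, and the deviation threshold are hypothetical assumptions, not the GONG pipeline's actual statistics or tolerances.

```python
# Hypothetical sketch of a temporal bad-image criterion: a per-image
# statistic (e.g., mean modulation amplitude over the solar disk) is
# compared to the median of the same statistic for temporally adjacent
# images, and the image is flagged when it deviates by more than a
# fractional threshold.  Window and threshold values are illustrative.

def flag_temporal_outliers(stats, window=2, threshold=0.10):
    """Return indices of images whose statistic deviates from the median
    of its +/- `window` temporal neighbours by more than `threshold`
    (expressed as a fraction of that median)."""
    flagged = []
    for i, value in enumerate(stats):
        # Gather neighbouring values, excluding the image itself.
        lo, hi = max(0, i - window), min(len(stats), i + window + 1)
        neighbours = sorted(stats[lo:i] + stats[i + 1:hi])
        median = neighbours[len(neighbours) // 2]
        if median != 0 and abs(value - median) / abs(median) > threshold:
            flagged.append(i)
    return flagged

# A spike at index 3 stands out against otherwise smooth neighbours.
series = [1.00, 1.01, 0.99, 1.50, 1.00, 1.02, 0.98]
print(flag_temporal_outliers(series))  # -> [3]
```

Internal-consistency criteria (the second class mentioned in the abstract) would instead examine a single image in isolation, for example by checking signal statistics across the solar disk against expected ranges.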