Oxford University Press (OUP), Monthly Notices of the Royal Astronomical Society, 493(4), pp. 6071–6078, 2020
ABSTRACT With ever-increasing data rates produced by modern radio telescopes such as LOFAR, and by future telescopes such as the SKA, many data-processing steps are overwhelmed by the volume of data that must be handled with limited compute resources. Calibration is one operation that dominates the overall computational cost of data processing; none the less, it is essential for reaching many science goals. Calibration algorithms exist that scale well with the number of stations in an array and with the number of directions being calibrated. The remaining bottleneck, however, is the raw data volume, which scales with the number of baselines and is therefore proportional to the square of the number of stations. We propose a ‘stochastic’ calibration strategy in which we read in only a mini-batch of data to obtain calibration solutions, as opposed to reading the full batch of data being calibrated. None the less, we obtain solutions that are valid for the full batch of data. Normally, data need to be averaged before calibration to fit them into size-limited compute memory. Stochastic calibration removes the need for averaging before any calibration can be performed and offers several advantages, including: enabling the mitigation of faint radio frequency interference; better removal of strong celestial sources from the data; and better detection and spatial localization of fast radio transients.
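To make the mini-batch idea concrete, the following is a minimal, self-contained sketch, not the paper's algorithm or implementation: it assumes a toy single-direction, per-station complex-gain model with a unit point-source sky, and at each iteration updates the gains using only a random subset of baselines, yet the resulting solutions fit the full set of baselines. The array size, mini-batch size, learning-rate schedule, and solver are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy interferometer: N stations and all N*(N-1)/2 baselines (assumed sizes).
N = 62
p, q = np.triu_indices(N, k=1)           # station indices per baseline

# "True" per-station complex gains and observed visibilities for a unit
# point source (sky coherency C_pq = 1), plus thermal-like noise.
g_true = (1.0 + 0.1 * rng.standard_normal(N)) * np.exp(1j * 0.3 * rng.standard_normal(N))
vis_obs = g_true[p] * np.conj(g_true[q]) + 0.05 * (
    rng.standard_normal(p.size) + 1j * rng.standard_normal(p.size)
)

def minibatch_step(g, idx, lr):
    """One stochastic update using only the baselines in mini-batch `idx`."""
    r = g[p[idx]] * np.conj(g[q[idx]]) - vis_obs[idx]      # model - data residuals
    grad = np.zeros(N, dtype=complex)
    cnt = np.zeros(N)
    # Wirtinger gradients of |r|^2: d/d(conj g_p) = r * g_q,  d/d(conj g_q) = conj(r) * g_p
    np.add.at(grad, p[idx], r * g[q[idx]])
    np.add.at(grad, q[idx], np.conj(r) * g[p[idx]])
    np.add.at(cnt, p[idx], 1.0)
    np.add.at(cnt, q[idx], 1.0)
    seen = cnt > 0                                          # only stations present in the batch
    g[seen] -= lr * grad[seen] / cnt[seen]
    return g

# Stochastic calibration loop: each step reads only a small mini-batch of
# baselines, but the estimated gains are meant to apply to all of them.
g_est = np.ones(N, dtype=complex)
batch = 64                                                  # mini-batch size (assumption)
for step in range(1000):
    idx = rng.choice(p.size, size=batch, replace=False)
    g_est = minibatch_step(g_est, idx, lr=0.3 / (1.0 + step / 200.0))

# Gains are only determined up to a global phase; align before comparing.
g_est *= np.exp(1j * np.angle(np.vdot(g_est, g_true)))
res = g_est[p] * np.conj(g_est[q]) - vis_obs
print(f"full-batch residual rms: {np.sqrt(np.mean(np.abs(res) ** 2)):.4f}")
print(f"max per-station gain error: {np.max(np.abs(g_est - g_true)):.4f}")
```

In this sketch the memory touched per iteration is set by the mini-batch size rather than the full baseline count, which is the property the abstract highlights: solutions valid for the full batch are obtained without ever loading, or pre-averaging, the full batch at once.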