Inspiral Group S2 Investigation: sim page 26 of 26


Hardware injections compared to software

To perform a more systematic study of the hardware injections, I decided to inject the standard set of ten 1.4-1.4 and 1.0-1.0 injections in software. They were injected with the same amplitudes and time separations as were used in the hardware injections, into every playground segment. This gives us a way of separating calibration issues from all other issues, since in software the injections are made and recovered with precisely the same calibration. However, issues such as the mismatch between template and injection (time-domain injection vs stationary-phase template), the effect of noise, etc., will be present in both hardware and software. Furthermore, we can mimic the effect of a mismatch between the injected and recovered calibration in software: we use the normal alpha averaged over 2048 seconds for the analysis, but the first alpha of the chunk to do the injections. Note that in this study all filtering was done only with templates whose masses exactly matched the injected masses. In all the graphs that follow, black + symbols are software injections recovered with the same calibration, blue + symbols are software injections made with a different alpha, and red x symbols are hardware injections.
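The alpha-mismatch procedure amounts to scaling the injection with one value of alpha and recovering it with another. The toy sketch below shows the resulting effective-distance error, treating the calibration as a single multiplicative gain proportional to alpha; this is an assumption for illustration only (the real response function is frequency dependent), and the alpha values here are made up.

```python
# Toy model of the injected-vs-recovered calibration mismatch.
# Assumption: the calibration acts as one multiplicative gain ~ alpha,
# which is NOT the actual S2 response function, just an illustration.
alpha_first = 0.95   # made-up alpha used to scale the injection
alpha_avg = 1.00     # made-up 2048-s averaged alpha used in recovery

d_true = 62.0        # injected effective distance, kpc

# The recovered amplitude scales as alpha_first/alpha_avg, and the
# effective distance goes as the inverse of the amplitude:
d_recovered = d_true * alpha_avg / alpha_first

frac_error = (d_recovered - d_true) / d_true
print(f"recovered distance {d_recovered:.2f} kpc, "
      f"fractional error {frac_error:+.3f}")
```

In this toy picture a 5% alpha mismatch translates directly into a ~5% effective-distance error, which is the kind of spread the blue points are meant to probe.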

We start by looking at the accuracy with which we can measure the effective distances:

[Figures: L1 distance errors; H1 distance errors; H2 distance errors]

It is clear that for the closest injections (at 15, 31 and 62 kpc) the distance errors in hardware are much greater than those in software with correct calibration. However, they appear to be consistent with the alpha varied software injections. For the more distant injections, noise is the dominant factor in the distance errors so there is little difference between hardware and software. More concretely, we can calculate the mean and standard deviation of the error in effective distance for the three interferometers for the near injections. We obtain the following:


          mean error in distance          standard deviation of error
          HW        SW       SW alpha     HW        SW       SW alpha
  L1    -1.0276   -0.2388   -0.3839      3.7051    0.4394    2.6068
  H1     3.6298    0.5263    1.2234      2.8753    0.6868    5.3511
  H2    11.4108    1.1724    1.0168      6.3240    1.5366    6.5200

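The per-interferometer means and standard deviations above can be computed with a short sketch like the following; the error values in the array are made up for illustration (in the real study they come from the trigger files for the near injections, not from this code).

```python
import numpy as np

# Hypothetical fractional distance errors for the near (15, 31, 62 kpc)
# injections; these numbers are invented for illustration only.
errors = {
    "L1": np.array([-4.1, 0.8, -1.5, 2.0, -2.3]),
    "H1": np.array([1.2, 5.0, 3.9, 6.1, 2.0]),
}

# Mean and (sample) standard deviation per interferometer:
for ifo, e in errors.items():
    print(f"{ifo}: mean {e.mean():+.4f}, std {e.std(ddof=1):.4f}")
```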
We see that there is a systematic error in the distance recorded for the hardware injections, particularly for H2. However, at least part of this is due to the fact that the actuation function used to create the injections differed from the one used to recover them (in particular, some of the point calibrations in our injection file differed from those appearing in the final S2 calibration). Additionally, the standard deviation of the distance error in the hardware injections is much greater than in software, but it is more consistent with what we obtain by varying the value of alpha between injection and recovery in software. Doing this gives very good agreement in H2; however, for H1 we get a greater standard deviation in software than in hardware, while in L1 it is the other way round. An explanation for the effect in H1 may be that the variation in alpha is dominated by measurement errors rather than by true changes in the calibration. Our greater errors in L1 may be due to the calibration varying during an injection.

Next, we turn to end time and coalescence phase:

[Figure: L1 phase vs end time]

First, we notice that both the hardware and software triggers are found at the same times (note that the end time of the injection is .429, so there is a difference of a couple of sample points, probably due to the stationary phase approximation). For L1, the variation of the end time is only 2 sample points, or 0.50 msec. For H1 and H2, this spread is somewhat larger, up to 1 msec, most likely because H1 and H2 were less sensitive during S2. Furthermore, we see that the phase varies linearly with end time. This change matches well with what is found by simply time-offsetting the chirp slightly relative to the template. Lastly, note that the hardware injections have a phase which is systematically 0.2 radians less than in software. This is most likely because we did not use the full actuation function for the injections, but only an approximation to the pendulum part.
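The size of the linear phase-vs-end-time slope can be estimated with a back-of-the-envelope sketch: offsetting the chirp by dt relative to the template shifts the best-fit phase by roughly 2*pi*f_char*dt, where f_char is an assumed characteristic frequency near the end of the template (the exact slope depends on the waveform; the value used here is a guess for illustration).

```python
import math

# Assumed characteristic frequency near the end of the template (Hz);
# this value is illustrative, not taken from the analysis.
f_char = 200.0

# Two sample points at an assumed 4096 Hz sample rate (which matches
# the 0.50 msec spread quoted for L1):
dt = 2.0 / 4096.0

# Approximate phase shift induced by the time offset:
dphi = 2.0 * math.pi * f_char * dt
print(f"dt = {dt * 1e3:.2f} ms -> dphi ~ {dphi:.2f} rad")
```

So offsets of a couple of sample points plausibly produce phase shifts of a few tenths of a radian, consistent with a visible linear trend in the scatter plot.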

Lastly, we turn our attention to the chi squared. We would expect that as the signals get louder, the noise becomes insignificant and the major contribution to the chi squared comes from a mismatch between the injection and the template. If this is the case, the chi squared will grow linearly with SNR squared. It is clear from the graphs below that this happens:

[Figures: L1 chisq vs snrsq; L1 chisq/snrsq]
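The expected linear growth of chi squared with SNR squared can be checked with a simple straight-line fit. The sketch below generates toy data of the form chisq ~ n_dof + mu * rho^2, where mu stands in for the injection/template mismatch; the numbers are illustrative, not fitted to the S2 data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: chisq ~ n_dof + mu * rho**2 plus noise.  n_dof and mu
# are made-up values for illustration, not the analysis settings.
n_dof = 14
mu = 0.02
rho2 = np.linspace(50.0, 5000.0, 40)
chisq = n_dof + mu * rho2 + rng.normal(0.0, 1.0, rho2.size)

# A straight-line fit to chisq vs rho**2 recovers the mismatch slope:
slope, intercept = np.polyfit(rho2, chisq, 1)
print(f"fitted slope {slope:.4f} (true {mu}), intercept {intercept:.2f}")
```

A fit like this is one way to quantify the mismatch contribution seen in the plots: the slope measures the mismatch, while the intercept should sit near the noise-only expectation.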

Notice that the values of the chi squared for the hardware injections are somewhat greater than for the software injections. This is to be expected since in software we use the exact calibration. However, even when we vary the software alpha, the values of chi squared are still smaller than in hardware. This is in contrast to what we see for H1 where varying alpha in software gives chisq values comparable to hardware injections:

[Figure: H1 chisq over snrsq]

At present, it is not clear why the L1 hardware injection chi squared values are slightly high. It could be that our naive variation of alpha does not adequately capture the variation of the calibration. Or, part of the effect could be due to the fact that in L1 the calibration was varying during the chirp. Before worrying too much about this, however, we must point out that the contribution to the chi squared due to mismatch between waveform and template (when using a template bank) will be of the same magnitude or larger.