Key to reading the table of fit parameters
Using the baseline analysis (no porosity) case as an example
At the top of the main page, to the right of the thumbnail image of the data and best fit, we have:
powerlaw continuum, n=2; norm=1.91e-3
taustar=1.97 +/- (1.63:2.35)
uo=0.655 +/- (0.605:0.725)
norm=5.24e-4 +/- (5.04e-4:5.51e-4)
rejection probability = 19% (C=95.11; N=102)
General comments: parameters listed with "+/-" uncertainties were allowed to be free in the fit; the values shown are the derived best-fit values. Parameters (like, in this case, q) that can in principle be free, but that have no uncertainties listed, were held fixed at the listed value during the fit. Sometimes, when we compare two related fits, we highlight (boldface) some of the parameters. These are usually the important free parameters and/or the one or more relevant parameters (free or fixed) that differ between the two fits being compared. The highlighting doesn't indicate any special treatment of the parameters in question; it is just for emphasis.
Notes on each of the lines in the above table:
- [14.87:15.13] The wavelength range over which the data were fit. Generally (but not always) this will correspond to the range of data shown in the plot. Unless otherwise noted, we are simultaneously fitting the -1 and +1 order data in the MEG spectrum, without coadding the two orders. But for the purpose of data display, we do coadd the data and models. Note that on the main page, we link to a subpage that explores the sensitivity of the fits to the chosen wavelength range. The fits are generally quite insensitive to it.
- vinf=2250 The wind terminal velocity, in km/s. Note that the red and blue shifts associated with this velocity are indicated by the dotted vertical lines on the data plots showing the best-fit models. The laboratory rest wavelength is indicated by the dashed vertical line. Note that on the main page, we link to a subpage that explores the sensitivity of the fits (and especially the derived taustar values) to the choice of terminal velocity.
- β=1 The standard wind velocity law parameter. We generally fix this at β=1. Note that on the main page, we link to a subpage that explores the sensitivity of the fits to the value of β. There's a fair amount of sensitivity, in fact, as the velocity law affects both the density (and column density) of the wind and the line-of-sight velocity as a function of radius. Note also that non-integer values of β require an extra numerical integration.
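For concreteness, the β velocity law referred to above, v(r) = vinf (1 - Rstar/r)^β, can be sketched as follows. This is a minimal illustration: the default vinf echoes the 2250 km/s quoted on this page, but the radial grid is arbitrary.

```python
import numpy as np

# The standard beta velocity law: v(r) = vinf * (1 - Rstar/r)**beta,
# with r in units of the stellar radius and vinf in km/s.
def wind_velocity(r, vinf=2250.0, beta=1.0):
    r = np.asarray(r, dtype=float)
    return vinf * (1.0 - 1.0 / r) ** beta

# v rises from 0 at the photosphere toward vinf far from the star
r = np.array([1.5, 2.0, 5.0, 50.0])
print(wind_velocity(r, beta=1.0))  # approaches vinf (=2250) at large r
```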
- power-law continuum, n=2; norm=1.91e-3 We include a flat (n=2 in ν vs. Fν space) continuum whose normalization we fit prior to fitting the wind profile model. We fit a nearby, line-free portion of the continuum separately for each line. Once we determine the continuum level in this manner, we generally fix it at that value while fitting the line itself (over a narrow wavelength range). Note that on the main page, we link to a subpage that explores the sensitivity of the fits to the assumed continuum level. They are generally insensitive to reasonable variations in the continuum level.
- q The standard Owocki & Cohen (2001) parameter that describes the radial dependence of the filling factor. A value of zero means that the filling factor has no radial dependence. We tend to fix q at zero rather than allow it to be a free parameter, both to keep the number of free model parameters manageable once we start to include porosity, and because fits of non-porous models to several different stars and many individual lines generally prefer values close to zero (Kramer et al. 2003; Cohen et al. 2006). However, if you see formal uncertainties listed after a non-zero value, then you know that it was treated as a free parameter. We do explore the effects of allowing q to be a free parameter on this subpage and find, as expected, that values near q=0 are preferred. However, this doesn't provide much of a test of how a free q would affect the other parameters in cases where non-zero values are preferred (or even just acceptable).
- hinf The terminal porosity length (assuming that the porosity length has a radial dependence given by h = hinf (1 - Rstar/r)^β). If a value of hinf = 0 is shown without a formal uncertainty, that indicates a non-porous model, where we did not even allow porosity to be included in the profile model. For some fits, hinf = 0 will be listed along with an uncertainty. In these cases, porosity was allowed in the model, but the best-fit value of the porosity length was found to be zero (i.e. a non-porous wind is preferred by the data over a porous model). Of course, we extensively explore the effects of allowing the porosity length to be a free parameter, on many different subpages.
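For reference, the radial dependence of the porosity length quoted above can be written out directly; it has the same functional form as the velocity law. The hinf values and radial grid here are illustrative, not fitted ones.

```python
import numpy as np

# Porosity length law: h(r) = hinf * (1 - Rstar/r)**beta, with r in units
# of the stellar radius -- the same functional form as the velocity law.
def porosity_length(r, hinf, beta=1.0):
    r = np.asarray(r, dtype=float)
    return hinf * (1.0 - 1.0 / r) ** beta

r = np.array([1.5, 2.0, 10.0])
print(porosity_length(r, hinf=1.0))  # rises from 0 at r=Rstar toward hinf
print(porosity_length(r, hinf=0.0))  # hinf = 0 recovers the non-porous model
```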
- taustar The usual optical depth parameterization from Owocki & Cohen (2001). Opacity due to bound-free transitions in the bulk, cold wind gives rise to this term. This is the term that's sensitive to the mass-loss rate (and the opacity of the wind).
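The mass-loss-rate sensitivity enters through the Owocki & Cohen (2001) definition taustar = kappa * Mdot / (4 pi Rstar vinf). A quick numerical illustration follows; the opacity, mass-loss rate, and stellar radius below are round, hypothetical choices (only the terminal velocity echoes the 2250 km/s quoted on this page), not the values derived from these fits.

```python
import math

# taustar = kappa * Mdot / (4 pi Rstar vinf), evaluated in cgs units.
# All wind/stellar numbers below are round, illustrative choices.
Msun = 1.989e33            # g
Rsun = 6.957e10            # cm
yr = 3.156e7               # s

kappa = 30.0               # cm^2/g -- assumed bound-free opacity of the cold wind
Mdot = 3e-6 * Msun / yr    # mass-loss rate in g/s
Rstar = 15.0 * Rsun        # stellar radius in cm
vinf = 2250.0e5            # terminal velocity in cm/s

taustar = kappa * Mdot / (4.0 * math.pi * Rstar * vinf)
print(f"taustar = {taustar:.2f}")  # ~1.9 for these particular inputs
```

Note how taustar scales linearly with the mass-loss rate (at fixed opacity), which is what makes it the mass-loss-sensitive parameter of the fits.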
- uo The reciprocal of the minimum radius of X-ray emission from the standard Owocki & Cohen (2001) model. It is expressed in units of inverse stellar radii: uo = Rstar/Ro, where Ro is the minimum radius of X-ray emission - and so 0 < uo < 1.
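To translate a fitted uo into a physical onset radius, one simply inverts it; here using the example best-fit value from the table at the top of this page.

```python
# Ro = Rstar / uo: the minimum radius of X-ray emission, in stellar radii.
uo = 0.655                 # example best-fit value from the table above
Ro = 1.0 / uo              # in units of Rstar
print(f"Ro = {Ro:.2f} Rstar")  # the X-ray emission onsets near 1.53 Rstar
```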
- norm The normalization of the line profile model. The units are photons/cm^2/s.
- rejection probability This is the goodness-of-fit indicator. Low percentages imply good fits. It is determined (within XSPEC) by generating a large ensemble of Monte Carlo simulated datasets from the best-fit model, with the same noise properties as the actual data, and then fitting each of these fake datasets with the same model that was fit to the data. A goodness-of-fit statistic is calculated for each fake dataset, and the value of the fit statistic from the actual data is compared to this Monte-Carlo-generated distribution. The percentage listed here is the percentage of all MC fake datasets that give better statistic values than the value given by the fit to the real data. If a very high percentage (say, 99%) of the MC datasets give better fit statistic values than does the actual data, then the actual data were unlikely (in this example, at 99% confidence) to have been produced by the best-fit model. Note: Some of the fits we show also list the value of the C statistic and N, the number of bins in the data. These aren't too meaningful in and of themselves, but the difference in the C value between two comparable fits (e.g. the data have the same range for both fits, same type of model, same number of free model parameters (more or less)...) can be interpreted as a confidence level for preferring one fit over the other. Finally, we note that even egregiously bad-looking fits don't have huge formal rejection probabilities. In some sense, this is because summed fit statistics (like C or chi-square) don't take the rank order of data-model deviations into account: e.g. four bins in a row in which the model underpredicts the data have a different implication than four bins with the same data-model deviations scattered randomly across the profile, yet they contribute identically to the statistic.
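The Monte Carlo goodness-of-fit procedure described above can be sketched schematically. This is a toy version, not the XSPEC implementation: it uses a made-up Gaussian-line-plus-flat-continuum shape whose single normalization can be fit analytically under the Cash C statistic, Poisson noise for the fake datasets, and N = 102 bins to echo the example fit at the top of the page.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binned "best-fit" model: a Gaussian line on a flat continuum
# (a stand-in for the wind-profile model; all values are illustrative).
x = np.linspace(-1.0, 1.0, 102)            # N = 102 bins, as in the example
shape = 0.5 + 4.0 * np.exp(-0.5 * (x / 0.3) ** 2)

def cash(data, model):
    """Cash (C) statistic for Poisson-distributed data: 2*sum(m - d*ln m)."""
    return 2.0 * np.sum(model - data * np.log(model))

def fit_norm(data, shape):
    """ML normalization for model = A*shape (analytic under the C statistic)."""
    return data.sum() / shape.sum()

# "Real" data: one Poisson realization standing in for the observation
data = rng.poisson(shape)
best = fit_norm(data, shape) * shape       # best-fit model for the real data
c_real = cash(data, best)

# Monte Carlo: draw fake datasets from the best-fit model, refit each the
# same way, and record the fit statistic for each fake
c_fake = []
for _ in range(1000):
    fake = rng.poisson(best)
    c_fake.append(cash(fake, fit_norm(fake, shape) * shape))

# Rejection probability: fraction of fakes that fit *better* (lower C)
rejection = np.mean(np.array(c_fake) < c_real)
print(f"C = {c_real:.2f}; rejection probability = {100 * rejection:.0f}%")
```

A rejection probability near 100% would mean almost every simulated dataset is fit better than the real one, i.e. the data are unlikely to have come from the best-fit model.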
Back to main page.
last modified: 29 April 2008