
Statistics at Square One, Eleventh edition. M J Campbell, Professor of Medical Statistics, Medical Statistics Group, School of Health and Related Research.

NB: readers occasionally point out errors in this book and remind us that there have been several revised editions since this one, to which we would refer them. The popular self-testing exercises at the end of every chapter are strengthened by the addition of new sections on reading and reporting statistics and formulae.

Multiple linear regression

For simple linear regression we have one continuous input variable; multiple regression extends this to several input variables. We will discuss the use of dummy or indicator variables to model categories, and investigate the sensitivity of models to individual data points using concepts such as leverage and influence. Multiple regression is a generalisation of the analysis of variance and the analysis of covariance.

The modelling techniques used here will be useful for the subsequent chapters. In terms of the model structure described in Chapter 1, the link is a linear one and the error term is Normal.

Here yi is the output for unit or subject i and there are p input variables Xi1, Xi2, …, Xip. Often yi is termed the dependent variable and the input variables Xi1, Xi2, …, Xip are termed the independent variables; the latter can be continuous or nominal, and they are sometimes called the explanatory or predictor variables. The coefficients attached to the input variables, together with the intercept, are the model parameters. The model is fitted by choosing estimates b0, b1, …, bp which minimise the sum of squares (SS) of the prediction errors; these estimates are termed ordinary least squares estimates.
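
The model equation that this passage refers to does not survive in this extract; in the notation above it takes the following standard form (a reconstruction, not a quotation from the book):

```latex
% Multiple linear regression model for subject i with p input variables
y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \cdots + \beta_p X_{ip} + \varepsilon_i ,
\qquad \varepsilon_i \sim \mathrm{N}(0,\sigma^2).
% Ordinary least squares chooses b_0,\dots,b_p to minimise the residual sum of squares
\mathrm{SS} = \sum_{i=1}^{n} \bigl( y_i - b_0 - b_1 X_{i1} - \cdots - b_p X_{ip} \bigr)^{2}.
```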

Here it is clear that the residuals estimate the error term. Further details are given in Draper and Smith.2 One use of multiple regression is to adjust the effect of one input variable for the effects of others: for example, to investigate the effect of diet on weight allowing for smoking habits.

Another use arises in clinical trials, where the dependent variable is the outcome of the trial. The multiple regression model allows one to compare the outcome between groups, having adjusted for differences in, say, baseline weight and smoking habit. This is also known as analysis of covariance. An alternative technique is the analysis of variance, but the same results can be achieved using multiple regression. When a model includes two independent variables there are three possibilities: both may be continuous, one continuous and one categorical, or both categorical. We will anchor the examples in some real data.

The data are given in Table 2. Here we might ask, is there a different relationship between deadspace and height for asthmatics than for non-asthmatics?

Suppose the two independent variables are height and asthma status. There are a number of possible models. The simplest is the simple linear regression of deadspace on height described in Swinscow and Campbell. In the next model a dummy variable for asthma status is added, and its coefficient is the difference in deadspace between asthmatics and non-asthmatics at a given height. [Figure 2: deadspace (ml) plotted against height (cm), with separate symbols for asthmatics and non-asthmatics.] Thus, if we thought that the only reason that the asthmatics and non-asthmatics in our sample differed in deadspace was because of a difference in height, this is the sort of model we would fit.

This type of model is termed an analysis of covariance. It is very common in the medical literature. An important assumption is that the slope is the same for the two groups.

A further model adds an extra variable x3, formed as the product of the asthma dummy and height: thus x3 is the same as height when the subject is asthmatic and is 0 otherwise. The variable x3 measures the interaction between asthma status and height.

It measures by how much the slope between deadspace and height is affected by being an asthmatic; the coefficient of height on its own is now the slope of the expected line for non-asthmatics. Another possibility is a model with two continuous covariates, such as age and height. To interpret the coefficient of height we have to imagine that we have a whole variety of subjects all of the same age, but of different heights.


We also have to imagine a group of subjects, all of the same height, but different ages. The nice feature of this model is that we can estimate these coefficients reasonably even if none of the subjects has exactly the same age or height.
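
As an illustration, the candidate models described above could be fitted with statsmodels formulas along the following lines. The data frame is a synthetic stand-in (the column names deadspace, height, age and asthma, and the values, are hypothetical rather than the book's Table 2), and statsmodels/pandas are simply one convenient choice of tools:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the deadspace data: one row per child.
rng = np.random.default_rng(0)
n = 15
height = rng.uniform(110, 165, n)                        # cm
age = 4 + (height - 110) / 6 + rng.normal(0, 1, n)       # years, correlated with height
asthma = rng.integers(0, 2, n)                           # 0 = non-asthmatic, 1 = asthmatic
deadspace = height - 15 * asthma - 60 + rng.normal(0, 8, n)   # ml

df = pd.DataFrame(dict(deadspace=deadspace, height=height, age=age, asthma=asthma))

m1 = smf.ols("deadspace ~ height", data=df).fit()           # simple linear regression
m2 = smf.ols("deadspace ~ height + asthma", data=df).fit()  # parallel lines (analysis of covariance)
m3 = smf.ols("deadspace ~ height * asthma", data=df).fit()  # separate slopes (interaction)
m4 = smf.ols("deadspace ~ height + age", data=df).fit()     # two continuous covariates

print(m3.summary())   # coefficients and SEs, overall F-test, R-squared
```

The summary() output contains the same elements discussed in the text: the overall fit of the model, R2, and the individual coefficients with their standard errors.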

This model is commonly used in prediction, as described in Section 2. When the categorical variable has two levels, asthmatic and non-asthmatic, there is just one dummy variable, the coefficient of which measures the difference in the y variable between asthmatics and normals. For inference it does not matter which group is coded 0 and which is coded 1: the only effect is to change the sign of the coefficient, and the P-value will remain the same. However, suppose, as in Table 2, there are three groups: asthmatic, bronchitic and normal. We now have three possible contrasts, but knowing two of the contrasts we can deduce the third (if you are not asthmatic or bronchitic, then you must be normal!).

Thus we need to choose two of the three contrasts, and hence two dummy variables, to include in the regression. If we included all three variables, most regression programs would inform us politely that x1, x2 and x3 were aliased (i.e. linearly dependent, so that one of them is redundant). The dummy variable that is omitted from the regression is the one with which the coefficients of the other variables are contrasted, and it is known as the baseline variable.

Thus if x3 is omitted from the regression that includes x1 and x2 in Table 2, the group coded by x3 becomes the baseline against which the coefficients of x1 and x2 are compared. Another way of looking at it is that the coefficient associated with the baseline group is constrained to be 0. Most statistical packages produce an output similar to this one. The models are fitted using the principle of least squares, as explained in Appendix 2, which is equivalent to maximum likelihood when the error distribution is Normal.
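
A sketch of how the dummy-variable coding and the choice of baseline group might look in practice; the group labels and the synthetic values are hypothetical, and C()/Treatment() is one package's way of creating the dummies described here:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic three-group example: asthmatic / bronchitic / normal.
rng = np.random.default_rng(1)
group = rng.choice(["asthmatic", "bronchitic", "normal"], size=30)
deadspace = 60 + np.select([group == "asthmatic", group == "bronchitic"], [-15, -5], 0) \
            + rng.normal(0, 10, 30)
dat = pd.DataFrame(dict(deadspace=deadspace, group=group))

# C(...) creates the dummy variables; the Treatment reference is the omitted (baseline) group.
m_normal_base = smf.ols("deadspace ~ C(group, Treatment(reference='normal'))", data=dat).fit()
m_bronch_base = smf.ols("deadspace ~ C(group, Treatment(reference='bronchitic'))", data=dat).fit()

# The overall F-statistic is the same whichever baseline is chosen;
# only the individual contrasts change.
print(m_normal_base.fvalue, m_bronch_base.fvalue)
print(m_normal_base.params)
print(m_bronch_base.params)
```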

The estimate of the standard error (SE) is more sensitive to the Normality assumption than the estimates of the coefficients. There are two options available which do not require this assumption: the bootstrap and the robust standard error. Many computer packages have options for these procedures; they are described in Appendix 3.
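
A minimal sketch of the two assumption-free options mentioned here, robust standard errors and a non-parametric bootstrap. The data are synthetic, and the particular robust estimator (HC1) is one common choice rather than necessarily the one used in the book's appendix:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 40
x = rng.uniform(0, 10, n)
y = 2 + 0.5 * x + rng.standard_t(df=3, size=n)       # deliberately non-Normal errors
dat = pd.DataFrame(dict(y=y, x=x))

# Robust (heteroscedasticity-consistent) standard errors:
robust_fit = smf.ols("y ~ x", data=dat).fit(cov_type="HC1")
print(robust_fit.bse)

# Simple non-parametric bootstrap of the slope: resample rows with replacement.
boot_slopes = []
for _ in range(2000):
    sample = dat.sample(n=n, replace=True)
    boot_slopes.append(smf.ols("y ~ x", data=sample).fit().params["x"])
print(np.percentile(boot_slopes, [2.5, 97.5]))        # 95% bootstrap CI for the slope
```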

The computer program gives two sections of output. The first part refers to the fit of the overall model. An important statistic is the value R2, which is the proportion of the variance of the original data explained by the model.

It is the ratio of the sum of squares (SS) due to the model to the total SS. For models with only one independent variable, as in this case, it is simply the square of the correlation coefficient described in Swinscow and Campbell. The second part examines the coefficients in the model: the coefficient of height is the slope of the line in Figure 2, and the intercept is the predicted value of deadspace for someone with no height, which is clearly a nonsense value.

However, the parameter is necessary for correct interpretation of the model. Adding asthma status as a second independent variable gives model 2. In the top part of the output the F-statistic now has 2 and 12 degrees of freedom (d.f.), and the small P-value means that fitting both variables simultaneously gives a highly significant fit; it does not tell us about the individual variables.

The coefficient of height is now the slope of each of the parallel lines in Figure 2. It can be seen that, because non-asthmatics have a higher deadspace, forcing a single line through all the data gives a greater slope. As we coded asthma as 1 and non-asthma as 0, the coefficient of the asthma term is the difference in deadspace between asthmatics and non-asthmatics at a given height. To allow the slopes to differ we fit three independent variables, Height, Asthma and the interaction AsthmaHt, on Deadspace; the results of fitting these variables using a computer program are given in Table 2.

This is equivalent to model 2. The Root MSE has the value 8. There are no terms to drop from the model. Note that even if one of the main terms, asthma or height, were not significant, we would not drop it from the model if the interaction were significant, since the interaction cannot be interpreted in the absence of the main effects, which in this case are asthma and height.

The two lines of best fit can be read off from the coefficients in Table 2, and this is the best-fitting model for these data. It is important, when considering which model is best, to look at the adjusted R2 as well as the P-values. Sometimes a term can be added that gives a significant P-value but only a marginal improvement in the adjusted R2, and for the sake of simplicity it may not be included in the best model.
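
A small illustration of comparing candidate models by adjusted R2 alongside P-values; the data and variable names are synthetic and hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 15
height = rng.uniform(110, 165, n)
asthma = rng.integers(0, 2, n)
deadspace = -60 + height - 15 * asthma + rng.normal(0, 8, n)
dat = pd.DataFrame(dict(deadspace=deadspace, height=height, asthma=asthma))

models = {
    "height only":     smf.ols("deadspace ~ height", data=dat).fit(),
    "height + asthma": smf.ols("deadspace ~ height + asthma", data=dat).fit(),
    "height * asthma": smf.ols("deadspace ~ height * asthma", data=dat).fit(),
}
# A term with a "significant" P-value may add little to the adjusted R-squared,
# so both should be inspected when choosing the simplest adequate model.
for name, m in models.items():
    print(f"{name:16s} adj R2 = {m.rsquared_adj:.3f}  overall F P-value = {m.f_pvalue:.4f}")
```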

Now consider a model with the two continuous variables, age and height. The analysis is given in Table 2 and the interpretation of this model is described in Section 2. Note a peculiar feature of this output: although the overall model fits well, neither variable on its own appears significant. This occurs because age and height are strongly correlated, and it highlights the importance of looking at the overall fit of a model. Dropping either variable will leave the other as a significant predictor in the model. Returning to the three-group data, the analysis is given in the first half of Table 2.

Here the two independent variables are the dummy variables x1 and x2 (see Table 2). As we noted before, an important point is that, in general, one should check that the overall model is significant before looking at the individual contrasts.

Each contrast in the output has an associated SE. If we wished to contrast asthmatics and bronchitics, we would need to make one of them the baseline; thus we make x1 and x3 the independent variables, so that bronchitics become the baseline, and the output is shown in the second half of Table 2.

Thus the only significant difference is between asthmatics and normals. This method of analysis is also known as one-way analysis of variance; it is a generalisation of the t-test referred to in Swinscow and Campbell.
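
One-way analysis of variance expressed as a regression on a categorical factor could be run along these lines; the groups and values are hypothetical, and anova_lm is one package's way of producing the overall F-test described next:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
group = np.repeat(["asthmatic", "bronchitic", "normal"], 10)
y = np.where(group == "asthmatic", 45, np.where(group == "bronchitic", 55, 60)) \
    + rng.normal(0, 10, group.size)
dat = pd.DataFrame(dict(deadspace=y, group=group))

fit = smf.ols("deadspace ~ C(group)", data=dat).fit()
print(anova_lm(fit))      # one overall F-test for the three-group comparison
print(fit.summary())      # individual contrasts against the baseline group
```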

In fact, the analysis of variance accomplishes two extra refinements. Firstly, the overall P-value controls for the problem of multiple testing referred to in Swinscow and Campbell: with three groups there are three possible pairwise comparisons, and testing each separately inflates the chance of a spurious significant result. The overall P-value from the F-test allows for this, and since it is significant we know that some of the contrasts must be significant. The second improvement is that in order to calculate a t-test we must find the pooled SE.

In the t-test this pooled SE is calculated from two groups, whereas in the analysis of variance it is calculated from all three, so it is based on more subjects and is therefore more precise. To see the application of these methods in a clinical trial, consider the results of Llewellyn-Jones et al.

This study was a randomised-controlled trial of the effectiveness of a shared care intervention for depression in subjects over the age of 65 years. Depression was measured using the Geriatric Depression Scale, taken at baseline and after 9.


The figure that helps the interpretation is Figure 2. Here y is the depression scale at follow-up. Thus the interpretation of the standardised regression coefficient is the amount by which y changes for a 1 standard deviation increase in x.

One can see that the baseline values are highly correlated with the follow-up values of the score. The intervention resulted, on average, in patients scoring lower than controls, having adjusted for baseline. This analysis assumes that the treatment effect is the same for all subjects and is not related to the values of their baseline scores; this possibility could be checked by the methods discussed earlier. When two groups are balanced with respect to the baseline value, one might assume that including the baseline value in the analysis will not affect the comparison of treatment groups.

However, it is often worthwhile including because it can improve the precision of the estimate of the treatment effect; that is, the SEs of the treatment effects may be smaller when the baseline covariate is included.
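
A sketch of the analysis of covariance for a two-arm trial with a baseline measurement, together with the equivalence noted later in this chapter between adjusting the follow-up score and adjusting the change score. The data are synthetic and the variable names hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic two-arm trial: depression score at baseline and follow-up.
rng = np.random.default_rng(5)
n = 100
treat = rng.integers(0, 2, n)                           # 0 = control, 1 = intervention
baseline = rng.normal(20, 5, n)
followup = 5 + 0.7 * baseline - 2.0 * treat + rng.normal(0, 4, n)
dat = pd.DataFrame(dict(followup=followup, baseline=baseline, treat=treat))

# Analysis of covariance: follow-up score adjusted for baseline.
ancova = smf.ols("followup ~ baseline + treat", data=dat).fit()

# Change-score analysis with baseline kept as a covariate gives the same
# estimated treatment effect (only the baseline coefficient differs).
dat["change"] = dat["followup"] - dat["baseline"]
change_fit = smf.ols("change ~ baseline + treat", data=dat).fit()

print(ancova.params["treat"], change_fit.params["treat"])   # identical treatment effects
```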

Another example comes from a study relating birth weight to body mass index (BMI) in adulthood. In a multiple linear regression the authors found an association between birth weight (coded in units of g) and BMI, allowing for confounders. Thus, for every unit increase in birth weight, the BMI increases on average by the amount of the regression coefficient.

The authors suggest that in utero factors that affect birth weight continue to have an effect even into adulthood, even allowing for factors such as gestational age. Turning to the assumptions underlying multiple regression: the most fundamental assumption is that the model is linear.

This means that each increase of one unit in an x variable is associated with a fixed change in the y variable, irrespective of the starting value of the x variable. There are a number of ways of checking this when x is continuous. One is to try transformations of the x variable (for example a logarithm or a square root) and refit the model: there is not a simple significance test for one transformation against another, but a good guide would be whether the R2 value gets larger.

A second method is to fit a quadratic model, in which we include two continuous variables, x and x2; a significant coefficient for x2 indicates a lack of linearity. A third method is to divide x into five equally sized (quintile) groups, fit separate dummy variables for the four largest quintile groups, and examine the coefficients.

For a linear relationship, the coefficients themselves will increase linearly. Another fundamental assumption is that the error terms are independent of each other. An example of where this is unlikely is when the data form a time series. A simple check for sequential data is whether successive residuals are correlated; a test known as the Durbin-Watson test is available in many packages.
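
The linearity checks and the Durbin-Watson test described above might be carried out along these lines; the data are synthetic, and the quadratic term and quintile grouping are simply the devices mentioned in the text:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(6)
n = 100
x = rng.uniform(1, 10, n)
y = 3 + 2 * np.log(x) + rng.normal(0, 0.5, n)        # a genuinely non-linear relationship
dat = pd.DataFrame(dict(y=y, x=x))

# 1. Quadratic term: a significant x-squared coefficient suggests non-linearity.
quad = smf.ols("y ~ x + I(x ** 2)", data=dat).fit()
print(quad.pvalues)

# 2. Quintile dummies: coefficients should rise roughly linearly if the relation is linear.
dat["x_quintile"] = pd.qcut(dat["x"], 5, labels=False)
quint = smf.ols("y ~ C(x_quintile)", data=dat).fit()
print(quint.params)

# 3. Durbin-Watson statistic for serial correlation of residuals (values near 2 suggest independence).
print(durbin_watson(quad.resid))
```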

Further details are given in Chapter 6, on time series analysis. A further example of lack of independence is where the main unit of measurement is the individual, but several observations are made on each individual, and these are treated as if they came from different individuals. This is the problem of repeated measures. A similar type of problem occurs when groups of patients are randomised, rather than individual patients.

These are discussed in Chapter 5, on repeated measures. The model also assumes that the error terms are independent of the x variables and that the variance of the error term is constant (departure from constant variance goes under the more complicated term heteroscedasticity).

A common departure is for the error to increase as one of the x variables increases, so one way of checking this assumption would be to plot the residuals, ei, against each of the independent variables and also against the fitted values. If the model were correct one would expect to see the scatter of residuals evenly spread about the horizontal axis and not showing any pattern.

A common departure from this is when the residuals fan out; that is, the scatter gets larger as the x variable gets larger. This is often associated with non-linearity as well, and so attempts at transforming the x variable may resolve the issue. The final assumption is that the error term is Normally distributed. The assumption of Normality is important mainly so that we can use normal theory to estimate confidence intervals (CIs) around the coefficients, but luckily, with reasonably large sample sizes, the estimation method is robust to departures from Normality.

Thus moderate departures from Normality are allowable. If one was concerned, then one could also use bootstrap methods and the robust standard error described in Appendix 3.
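
A minimal sketch of the standard diagnostic plots: residuals against fitted values, plus a Normal quantile-quantile plot of the residuals. The data are synthetic, generated with deliberately non-constant error variance so that the fan shape described above is visible:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 80
x = rng.uniform(0, 10, n)
y = 1 + 2 * x + rng.normal(0, 1 + 0.3 * x, n)         # error variance grows with x
dat = pd.DataFrame(dict(y=y, x=x))
fit = smf.ols("y ~ x", data=dat).fit()

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].scatter(fit.fittedvalues, fit.resid)           # fan shape suggests heteroscedasticity
axes[0].axhline(0, linestyle="--")
axes[0].set_xlabel("fitted values")
axes[0].set_ylabel("residuals")
sm.qqplot(fit.resid, line="45", fit=True, ax=axes[1])  # check Normality of residuals
plt.tight_layout()
plt.show()
```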

It is important to remember that the main purpose of this analysis is to assess a relationship, not to test assumptions, so often we can come to a useful conclusion even when the assumptions are not perfectly satisfied. Suppose we had fitted a simple regression model; we may then wish to know whether the conclusions are unduly sensitive to particular individuals in the data. This is important because we like to think that the model applies generally, and we do not wish to find that we should have different models for different subgroups of patients.

The residuals are the differences between the observed and fitted values; a point with a large residual is called an outlier. In general, we are interested in outliers because they may influence the estimates, but it is possible to have a large outlier which is not influential. Another way that a point can be an outlier is if its x values are a long way from the mass of the x values; for a single variable, this means that xi is a long way from the mean x̄. Imagine a scatter plot of y against x, with a mass of points in the bottom left-hand corner and a single point in the top right.

It is possible that this individual has unique characteristics that relate to both the x and y variables. A regression line fitted to the data will go close to, or even through, the isolated point. This isolated point will not have a large residual, yet if it were deleted the regression coefficient might change dramatically. Such a point is said to have high leverage, and this can be measured by a number often denoted hi; large values of hi indicate high leverage.

An influential point is one that has a large effect on an estimate. Effectively one fits the model with and without that point and finds the effect on the regression coefficient. One might look for points that have a large effect on b0, on b1 or on other estimates such as SE(b1). The usual output is the difference in the regression coefficient for a particular variable when the point is included or excluded, scaled by the estimated SE of the coefficient.

The problem is that different parameters may have different influential points. Most computer packages now produce residuals, leverages and influence statistics as a matter of routine. It is the task of the analyst to examine these and to identify important cases.
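
Residuals, leverages and influence measures of the kind described here are available from statsmodels; a sketch with synthetic data (the dfbeta column is the scaled change in the slope when a point is removed, and Cook's distance is an overall influence measure):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 20
x = rng.uniform(100, 160, n)
y = 0.8 * x - 50 + rng.normal(0, 8, n)
dat = pd.DataFrame(dict(y=y, x=x))

fit = smf.ols("y ~ x", data=dat).fit()
infl = fit.get_influence()

diagnostics = pd.DataFrame({
    "residual": fit.resid,                  # observed minus fitted
    "leverage": infl.hat_matrix_diag,       # h_i: distance of x_i from the mass of x values
    "dfbeta_x": infl.dfbetas[:, 1],         # scaled change in the slope when the point is removed
    "cooks_d":  infl.cooks_distance[0],     # overall influence of each point
})
print(diagnostics.sort_values("cooks_d", ascending=False).head())
```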

However, just because a point is influential or has a large residual it does not follow that it should be deleted, although the data should be examined carefully for possible measurement or transcription errors. A proper analysis of such data would report such sensitivities to individual points. Figure 2 shows the plot for deadspace and height; we could plot a similar graph for deadspace and age.

The standard diagnostic plot is a plot of the residuals against the fitted values; for the model fitted in Table 2 there is no apparent pattern, which gives us reassurance that the error term is relatively constant and further reassurance about the linearity of the model.

The diagnostic statistics are shown in Table 2. As one might expect, the children with the highest leverages are the youngest, who is also the shortest, and the oldest, who is also the tallest. Note that the largest residuals are associated with small leverages; this is because points with large leverage will tend to force the line close to them. The child with the most influence on the age coefficient is also the oldest, and removal of that child would change the standardised regression coefficient more than removal of any other child.

The child with the most influence on height is the shortest child. However, there is no strong reason to remove either child from the analysis; a strong reason might be if it were discovered that the child had some relevant disease, such as cystic fibrosis. A separate question is which of the available variables should be included in a model. To answer this, one may use stepwise regression, which is available in a number of packages.

Step-down or backwards regression starts by fitting all available variables and then discarding sequentially those that are not significant.

Step-up or forwards regression starts by fitting an overall mean, and then selects variables to add to the model according to their significance. Stepwise regression is a mixture of the two, where one can specify a P-value for a variable to be entered into the model and a P-value for a variable to be discarded. Usually one chooses a larger P-value for entry than for removal.

This also favours step-down regression. As an example consider an outcome variable being the amount a person limps.

The length of the left or the right leg on its own is not predictive, but the difference in lengths is highly predictive, and an automatic procedure that considers the variables one at a time can miss this. Stepwise regression is best used in the exploratory phase of an analysis (see Chapter 1), to identify a few predictors in a mass of data, the associations of which can then be verified by further data collection. A problem is that the variables selected may be specific to the particular data set; one way of trying to counter this is to split a large data set into two and run the stepwise procedure on each half separately. Choose the variables that are common to both halves, and fit these to the combined data set as the final model.
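
A minimal sketch of step-down (backwards) selection by P-value; this illustrates the idea rather than reproducing any package's built-in stepwise routine, and the data, the variable names and the removal threshold are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def backward_eliminate(dat, outcome, candidates, p_remove=0.05):
    """Step-down regression: start with all candidate variables, then repeatedly
    drop the least significant one until every remaining P-value is below p_remove."""
    selected = list(candidates)
    while selected:
        fit = smf.ols(f"{outcome} ~ " + " + ".join(selected), data=dat).fit()
        pvals = fit.pvalues.drop("Intercept")
        worst = pvals.idxmax()
        if pvals[worst] <= p_remove:
            return fit
        selected.remove(worst)
    return smf.ols(f"{outcome} ~ 1", data=dat).fit()   # intercept-only model

# Synthetic illustration: only x1 and x2 really matter.
rng = np.random.default_rng(9)
n = 200
dat = pd.DataFrame(rng.normal(size=(n, 4)), columns=["x1", "x2", "x3", "x4"])
dat["y"] = 2 * dat["x1"] - dat["x2"] + rng.normal(0, 1, n)

final = backward_eliminate(dat, "y", ["x1", "x2", "x3", "x4"])
print(final.params)
print(final.pvalues)
```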

With stepwise regression, usually only the subjects who have no missing values on any of the variables under consideration are chosen.


The final model may contain only a few variables, but if one refits the model the parameters change, because now the model is being fitted to those subjects who have no missing values on just the few chosen variables, which may be a considerably larger data set than the one used during selection.

Thus, if we fitted only the selected variables, such as x1 and x2 from Table 2, to the subjects with complete data on those variables, the estimates could differ from those obtained during selection. Stepwise regression is therefore useful in the exploratory phase of an analysis, but not in the confirmatory one. When reading or reporting a regression analysis, consider whether the assumptions were checked; in particular, is linearity plausible? For a stepwise regression, report all the variables that could have entered the model. With a large study the coefficients in the model can be highly significant but explain only a low proportion of the variability of the outcome variable; thus they may be of no use for prediction.

Are there any boundaries which may cause the slope to flatten? Is this plausible, and has it been tested? If you have only one binary variable, then coding the dummy variable 0 and 1 is the most convenient; coding it 1 and 2 is common in questionnaires, but it will make no difference to the coefficient estimate or the P-value, only to the intercept. If you have a categorical variable with, say, three groups, then this will be coded with two dummy variables. As shown earlier, the overall F-statistic will be unchanged no matter which two groups are chosen to be represented by dummies, but the coefficient of group 2, say, will depend on whether group 1 or group 3 is the omitted (baseline) variable.

Most packages assume that the predictor variable, X, in a regression model is either continuous or binary.

Thus one has a number of options. The first is to treat the ordered categories as a numerical score and fit X as if it were continuous; this incorporates into the model the fact that the categories are ordered, but also assumes that equal changes in X mean equal changes in y.

The second option is to fit a separate dummy variable for each category; this loses the fact that the predictor is ordinal, but makes no assumption about linearity. A third option is to dichotomise X at a cut-point; the cut-point should be chosen on external grounds and not because it gives the best fit to the data. Which of these options you choose depends on a number of factors. With a large amount of data, the loss of information from ignoring the ordinality in option (ii) is not critical, especially if the X variable is a confounder and not of prime interest.

For example, if X is age grouped in year intervals, it might be better to fit dummy variables than to assume a linear relation with the y variable. Often the assumptions underlying multiple regression are not checked, partly because the investigator is confident that they hold true and partly because mild departures are unlikely to invalidate an analysis.

However, lack of independence may be obvious on empirical grounds (the data form repeated measures or a time series), and so the analysis should accommodate this from the outset.

Linearity is important for inference and so may be checked by fitting transformations of the independent variables. Lack of homogeneity of variance and lack of Normality may affect the SEs and often indicate the need for a transformation of the dependent variable.

The most common departure from Normality is when outliers are identified, and these should be carefully checked, particularly those with high leverage. A common question is: a variable is not significant, so should I include it in the analysis? There are certain variables, such as age or sex, for which one might have strong grounds for believing that they could be confounders, but which in any particular analysis might emerge as not significant.

These should be retained in the analysis because, even if not significantly related to the outcome themselves, they may modify the effect of the prime independent variable.

When the dependent variable is 0 or 1, the coefficients from a linear regression are proportional to what is known as the linear discriminant function. This can be useful for discriminating between groups, even if the assumption about Normality of the residuals is violated. However, discrimination is normally carried out now using logistic regression (Chapter 3).

Another common question concerns trials with a baseline measurement of the outcome: should one analyse the change from baseline? Analysing change does not properly control for baseline imbalance, because of what is known as regression to the mean: baseline values are negatively correlated with change, and subjects with low scores at baseline will tend to increase more than those with high values.

Note that if the change score is the dependent variable and baseline is included as an independent variable, then the results will be the same as an analysis of covariance. Partial results from Melchart et al. are given in the accompanying table, which shows the difference between groups after treatment both unadjusted and from an analysis of covariance adjusting for the baseline value.

References

Draper NR, Smith H. Applied Regression Analysis, 3rd edn. New York: Wiley.
Llewellyn-Jones et al. Multifaceted shared care intervention for late life depression in residential care. Br Med J.
Relation between weight and length at birth and body mass index in young adulthood.
Melchart et al. Acupuncture in patients with tension-type headache.

Chapter 3 Logistic regression

Summary: The Chi-squared test is used for testing the association between two binary variables. Logistic regression extends this to situations where there are several explanatory variables, which may be binary, categorical or continuous. Logistic regression is also useful for analysing case-control studies.

Matched case-control studies require a particular analysis known as conditional logistic regression. Logistic regression is used when the outcome is the occurrence or non-occurrence of an event: thus, an event might be the presence of a disease in a survey or cure from disease in a clinical trial. We wish to examine factors associated with the event. Since we can rarely predict exactly whether an event will happen or not, what we in fact look for are factors associated with the probability of the event happening.

There are two situations to be considered. In the first, the data can be grouped, with each subject falling into one of a limited number of categories defined by the predictor variables; as a consequence one can calculate the proportion of subjects for whom the event happens in each category. For example, one might wish to examine the presence or absence of a disease by gender (two categories) and social class (five categories); thus one could form a table with the 10 social class-by-gender categories and examine the proportion of subjects with disease in each grouping.


In the second situation each individual may have a unique set of predictor values, and we may not wish to group them. If the data are in the form of tables, most computer packages will provide a separate set of commands to carry out the analysis. Preserving the individual cases leads to the same regression estimates and allows a more flexible analysis; this is discussed further in Section 3. The purpose of statistical analysis is to take samples to estimate population parameters.

In logistic regression the model works with the odds of an event rather than its probability: if the probability of the event is p, the odds are p/(1 - p). For a fair coin the probability of a head is 0.5, so the odds of a head to a tail are 1 to 1. The term on the left-hand side of the model equation is the log odds of success, and is often called the logistic or logit transform.

Why is the model a sensible one? Suppose we had only one covariate X, which was binary and simply takes the values 0 or 1; the coefficient of X is then the log of the odds ratio (OR) comparing subjects with X = 1 to those with X = 0. The main justification for the logit transformation is that the OR is a natural parameter to use for binary outcomes, and the logit transformation relates it to the independent variables in a convenient manner.
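
The model equation referred to in this chapter does not survive in this extract; in standard notation it is as follows (a reconstruction, not a quotation from the book), together with its inverse giving the fitted probability:

```latex
% Logistic regression model for the probability p_i of an event for subject i
\log\!\left(\frac{p_i}{1-p_i}\right)
  = \beta_0 + \beta_1 X_{i1} + \cdots + \beta_p X_{ip},
\qquad\text{equivalently}\qquad
p_i = \frac{\exp(\beta_0 + \beta_1 X_{i1} + \cdots + \beta_p X_{ip})}
           {1 + \exp(\beta_0 + \beta_1 X_{i1} + \cdots + \beta_p X_{ip})}.
```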

It can also be justified as follows. The right-hand side of the equation is a linear combination of the covariates and so can take any value from minus infinity to plus infinity. On the left-hand side, a probability must lie between 0 and 1, and an odds must lie between 0 and infinity, but a log odds, or logit, is unbounded and so has the same potential range as the right-hand side of the equation. Note that at this stage the observed values of the dependent variable are not in the equation.

They are linked to the model by the Binomial distribution described in Appendix 2. The parameters in the model are estimated by maximum likelihood, also discussed in Appendix 2. One could, in fact, use the observed proportions and fit the model by least squares as in multiple regression, but this misses out the second part of the model, the error distribution, which links the data to the model. In cases where the pi are not close to 0 or 1, least squares will often do well, although the interpretation of the model is different from that of equation 3.

With modern computers the method of maximum likelihood is easy and is also to be preferred. When the data are held as individual 0/1 outcomes the observed proportions are all 0 or 1, for which the logit is not defined; this may lead some people to believe that logistic regression is impossible in these circumstances, but maximum likelihood estimation does not require observed logits. Having fitted the model, we may wish to calculate the probability of an event: suppose we have estimated the coefficients in equation 3.

Then the model equation can be inverted to give the estimated probability of the event for each subject; these are the predicted or fitted values for yi. Further details of logistic regression are given in Collett2 and in Hosmer and Lemeshow. There are two main uses. The first is as a form of multiple regression: we would use logistic regression to investigate the relationship between a causal variable and a binary output variable, allowing for confounding variables, which can be categorical or continuous.

The second use is as a discriminant analysis, to try to find factors that discriminate between two groups; here the outcome would be a binary variable indicating membership of a group. For example, one might want to discriminate men and women on psychological test results. It is usually easier to store data on an individual basis, since they can then be used for a variety of purposes, although in general it is easier to examine the goodness of fit of the model in the grouped case. As an example, consider the proportion of mothers breastfeeding by occupation; the data are given in Table 3.

We can rewrite Table 3 in a different form, from which both a relative risk and an OR can be calculated. The reason for the discrepancy between them is that when the outcome, in this case breastfeeding, is common, the relative risk and the OR tend to differ.

When the outcome is rare, the two are close. For a discussion of the relative merits of ORs and relative risks see Swinscow and Campbell. The output for a logistic regression of these data, in the second form, is shown in Table 3.

The first section of the table gives the fit for a constant term only, and the second the fit when the term occupation is added. The output also gives the log-likelihood values, which are derived in Appendix 2 and can be thought of as a sort of residual sum of squares. The change in the log-likelihood between the two models can be interpreted, after multiplying by minus two, as a Chi-squared statistic with 1 degree of freedom (d.f.).

This is further described in Appendix 2. It can be seen that the likelihood ratio (LR) Chi-squared statistic for this model has 1 d.f. The output also gives a Pseudo R2, which is analogous to the R2 term in linear regression giving the proportion of variance accounted for by the model; this is less easy to interpret in the binary case, and it is suggested that one considers only the rough magnitude of the Pseudo R2.

The square of the z-statistic for the occupation coefficient gives the Wald Chi-squared statistic. The conventional Chi-squared statistic, described in Swinscow and Campbell,1 is neither the Wald nor the LR statistic and is in fact the third of the statistics derived from likelihood theory, the score statistic (see Appendix 2). The coefficient in the model is the log OR; exponentiating the coefficient and its confidence limits gives the OR and a CI for it. This is sometimes known as a Wald confidence interval (see Section 3), and it is asymmetric about the OR, in contrast to the CIs in linear regression.
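
A sketch of fitting a logistic regression to individual 0/1 data and converting the coefficient to an OR with its asymmetric Wald CI; the data are synthetic and the variable names (event, exposed) are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic individual-level data: binary outcome by a binary exposure.
rng = np.random.default_rng(10)
n = 300
exposed = rng.integers(0, 2, n)
p = np.where(exposed == 1, 0.55, 0.40)          # event more common in the exposed group
event = rng.binomial(1, p)
dat = pd.DataFrame(dict(event=event, exposed=exposed))

fit = smf.logit("event ~ exposed", data=dat).fit()

log_or = fit.params["exposed"]                  # coefficient on the log-odds scale
or_ci = np.exp(fit.conf_int().loc["exposed"])   # Wald CI, asymmetric about the OR
print(np.exp(log_or), or_ci.values)
print(fit.llr, fit.llr_pvalue)                  # likelihood ratio Chi-squared and its P-value
```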

For example, from Table 3 the CI for the OR and the P-value for the coefficient lead to the same conclusion about statistical significance.


In general, this will hold true, but there can be slight discrepancies with the significance test, especially if the OR is large, because the test of significance may be based on the LR test or the score test, whereas the CI is usually based on the Wald test.

A well-known example of confounding concerns mortality in insulin-dependent and non-insulin-dependent diabetics: the crude comparison between the groups suggests one conclusion, but the comparison changes markedly once age is allowed for. The explanation is that age is a confounding factor, since non-insulin-dependent diabetes is predominantly a disease of older age, and of course old people are more likely to die than young people. In this case the confounding is so strong that it reverses the apparent association.

To analyse this we code the data for a grouped analysis. In the first part of the analysis the OR associated with group is below 1; when age is included as a factor, this changes to an OR above 1. It should be stressed that this in no way proves that starting to take insulin for diabetes causes a higher mortality; other confounding factors, as yet unmeasured, may also be important.

Here the covariate, age, is binary, but it could be included as a continuous covariate; Julious and Mullee4 show how including age as a continuous variable changes the estimated relative risk.


As usual one has to be aware of assumptions. The main one here is that the OR for insulin dependence is the same in the younger and older groups; there are not enough data to test that here. Another assumption is that the cut-off point at age 40 years was chosen on clinical grounds and not by looking at the data to find the best possible result for the investigator. One would need some reassurance of this in the text!

As a further example, consider a study of sleep apnoea. The investigators developed an apnoea severity index and related it to the presence or absence of hypertension, allowing for age, sex and body mass index (BMI). They wished to answer two questions: is the apnoea index associated with hypertension, and, if so, is the association explained by the other variables? The results are given in Table 3. The coefficient associated with the dummy variable sex corresponds to an OR whose CI includes 1, as we would expect if there were no clear difference between the sexes. We interpret the age coefficient by saying that, if we had two people of the same sex whose BMI and apnoea index were also the same, but one subject was 10 years older than the other, then we would predict the odds of hypertension for the older subject to be higher by the factor given by the OR for age in Table 3.

The reason for the choice of 10 years is that this is how age was scaled. Note that factors that are additive on the log scale are multiplicative on the odds scale: thus, for a man who is 10 years older than a woman, the predicted odds of hypertension are multiplied by the OR for sex and again by the OR for a 10-year difference in age.
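
In symbols, using hypothetical coefficients b_sex and b_age for illustration, additivity on the log-odds scale corresponds to multiplication on the odds scale:

```latex
% If the linear predictor increases by b_sex + b_age, the odds are multiplied accordingly:
\log(\mathrm{odds}) \;\to\; \log(\mathrm{odds}) + b_{\mathrm{sex}} + b_{\mathrm{age}}
\quad\Longleftrightarrow\quad
\mathrm{odds} \;\to\; \mathrm{odds} \times e^{b_{\mathrm{sex}}} \times e^{b_{\mathrm{age}}}.
```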

Statistics at Square Two: Understanding Modern Statistical Applications in Medicine

Table of contents of Statistics at Square One, 11th edition (partial): Preface. Chapter 1 Data display and summary. Chapter 2 Summary statistics for quantitative data. Chapter 3 Summary statistics for binary data. Chapter 4 Populations and samples. Chapter 5 Statements of probability and confidence intervals. Chapter 6 P-values, power, type I and type II errors. Chapter 7 The t tests. Chapter 9 Diagnostic tests. Chapter 10 Rank score tests.