Chapter 4 Logistic Regression Models

If you want to predict a binary categorical variable (one with only 2 possible outcomes), standard linear regression models don’t apply. If you code the two possible outcomes as \(Y=0\) and \(Y=1\), the relationship between \(Y\) and any explanatory variable \(X\) will never be a straight line, as the sketch below illustrates.
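As a minimal sketch of the problem (a hypothetical illustration, not from the text: the predictor `x`, the simulated outcomes `y`, and the true success-chance curve are all assumptions), the code below fits an ordinary least-squares line to simulated 0/1 outcomes and shows that the line predicts values below 0 and above 1, which are impossible for a chance of success.

```python
import numpy as np

# Hypothetical illustration: fit an ordinary least-squares line to
# simulated binary outcomes and inspect its predictions.
rng = np.random.default_rng(0)

x = np.linspace(0, 10, 500)              # a numeric predictor
p_true = 1 / (1 + np.exp(-(x - 5)))      # assumed true chance of success at each x
y = rng.binomial(1, p_true)              # observed outcomes, each 0 or 1

# Least-squares line: predicted y = b0 + b1 * x
b1, b0 = np.polyfit(x, y, 1)

# Predictions at small, middle, and large x values
print(np.round(b0 + b1 * np.array([0.0, 5.0, 10.0]), 2))
# The fitted line dips below 0 for small x and rises above 1 for large x,
# so it cannot represent a probability of success.
```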

Throughout this section, we will refer to one outcome as “success” (denoted \(Y=1\)) and the other as “failure” (denoted \(Y=0\)). Depending on the context of the data, a “success” could be a negative event such as a heart attack or 20-year mortality, or a positive one such as passing a course.

We will let \(p\) denote the chance of success and \(1-p\) the chance of failure. You can think of \(p\) as the expected outcome, \(E[Y]\). For example, if we flip a coin with 2 equally likely sides, we’d expect one side (say heads) 50% of the time, which is the chance of getting heads on any single flip. Note that \(Y\) itself can never equal 0.5 because in this context \(Y=0\) or \(Y=1\); rather, \(E[Y]\) gives the relative frequency of successes in the long run.
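To see why \(p\) is the expected outcome, compute the expectation of \(Y\) directly from its two possible values:

\[
E[Y] = 1 \cdot P(Y = 1) + 0 \cdot P(Y = 0) = p.
\]

For the fair coin, \(E[Y] = 1 \cdot 0.5 + 0 \cdot 0.5 = 0.5\), matching the 50% chance of heads.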

We want to build a model that explains why the chance of one outcome (success) may be higher for one group of people than for another.