

<?xml version="1.0"?>

<!DOCTYPE TEI.2 SYSTEM "base.dtd">





<publicationStmt><distributor>BASE and Oxford Text Archive</distributor>


<availability><p>The British Academic Spoken English (BASE) corpus was developed at the

Universities of Warwick and Reading, under the directorship of Hilary Nesi

(Centre for English Language Teacher Education, Warwick) and Paul Thompson

(Department of Applied Linguistics, Reading), with funding from BALEAP,

EURALEX, the British Academy and the Arts and Humanities Research Board. The

original recordings are held at the Universities of Warwick and Reading, and

at the Oxford Text Archive and may be consulted by bona fide researchers

upon written application to any of the holding bodies.

The BASE corpus is freely available to researchers who agree to the

following conditions:</p>

<p>1. The recordings and transcriptions should not be modified in any way</p>


<p>2. The recordings and transcriptions should be used for research purposes

only; they should not be reproduced in teaching materials</p>

<p>3. The recordings and transcriptions should not be reproduced in full for

a wider audience/readership, although researchers are free to quote short

passages of text (up to 200 running words from any given speech event)</p>

<p>4. The corpus developers should be informed of all presentations or

publications arising from analysis of the corpus</p><p>

Researchers should acknowledge their use of the corpus using the following

form of words:

The recordings and transcriptions used in this study come from the British

Academic Spoken English (BASE) corpus, which was developed at the

Universities of Warwick and Reading under the directorship of Hilary Nesi

(Warwick) and Paul Thompson (Reading). Corpus development was assisted by

funding from the Universities of Warwick and Reading, BALEAP, EURALEX, the

British Academy and the Arts and Humanities Research Board. </p></availability></publicationStmt>




<recording dur="00:44:49" n="7412"/>


<respStmt><name>BASE team</name></respStmt>



<langUsage><language id="en">English</language></langUsage>



<person id="nm0929" role="main speaker" n="n" sex="m"><p>nm0929, main speaker, non-student, male</p></person>

<personGrp id="ss" role="audience" size="l"><p>ss, audience, large group </p></personGrp>

<personGrp id="sl" role="all" size="l"><p>sl, all, large group</p></personGrp>

<personGrp role="speakers" size="3"><p>number of speakers: 3</p></personGrp>





<item n="speechevent">Lecture</item>

<item n="acaddept">Economics</item>

<item n="acaddiv">ps</item>

<item n="partlevel">UG1</item>

<item n="module">Quantitative Economics</item>





<u who="nm0929"> right shall we make a start them please <pause dur="0.6"/> am i switched on yeah <pause dur="0.3"/> good <pause dur="1.2"/> # <pause dur="0.3"/> just a reminder as always <pause dur="0.5"/> # i have my office hours people are beginning to come and see me which is good i get the feeling that the assessed exercise is focusing minds which is great <pause dur="0.4"/> # <pause dur="0.2"/> notice when these hours are i <trunc>sh</trunc> i'm <pause dur="0.2"/> i'm available then <pause dur="0.5"/> with a caveat from yesterday unfortunately Murphy's law <pause dur="0.2"/> but these are times when you can come anyway <pause dur="0.3"/> you don't need to ask me first can i come and see you in your office hours just turn up <pause dur="0.9"/> and the same is true of other lecturers and class tutors <pause dur="0.4"/> # as well <pause dur="0.2"/> so do use the <pause dur="0.2"/> office hours if you need to <pause dur="0.6"/> and you will of course find as you start revising for your examinations that you need <pause dur="0.4"/> you need to so get used to it earlier <pause dur="0.3"/> rather than later <pause dur="0.8"/> where are we in the things that we have to discuss <pause dur="0.4"/> well we've got three more <trunc>lect</trunc> <pause dur="0.2"/> three more weeks of lectures this week <pause dur="0.6"/> # is which is seven <pause dur="0.2"/> eight and nine we've got two lectures each week in those weeks there are no lectures after

week nine <pause dur="0.6"/> in this course <pause dur="0.7"/> and we're going to be finishing off with probability distributions today <pause dur="0.3"/> and we're going to move on to the applications <pause dur="0.5"/> # in in the five remaining <pause dur="0.3"/> lectures <pause dur="1.2"/> so this is where we are at the moment <pause dur="7.8"/> right when we finished # last week <pause dur="0.3"/> we were talking about the topic of <pause dur="0.2"/> # covariance joint distributions for <pause dur="0.2"/> random variables and talking about working out the relationship between variables <pause dur="0.7"/> and this was the example that i was using it's all in your notes but i've written it out again <pause dur="0.4"/> to illustrate <pause dur="1.0"/> and what we have here <pause dur="0.6"/> is <pause dur="0.4"/> a joint distribution if you look at the <pause dur="0.2"/> if you look at the table we have two variables <pause dur="0.3"/> one which was related to the level of demand <pause dur="0.3"/> and one which related to <pause dur="0.2"/> the number of advertisements and the story was <pause dur="0.2"/> that a company was placing weekly advertisements <pause dur="0.3"/> and it was generating demand as a result of those <pause dur="0.4"/> and we had <pause dur="0.3"/> an experimental probability distribution <pause dur="0.2"/> where we've got probabilities in blue <pause dur="0.2"/> attached to each pair of possible

values for the random variable <pause dur="1.8"/> and then we said <pause dur="0.2"/> well given that you've got these blue probabilities which we call joint probabilities <pause dur="0.4"/> you could also work out <pause dur="0.4"/> marginal probabilities if you know <pause dur="0.4"/> all the probabilities associated with X-equals-nought <pause dur="0.2"/> you can add them up <pause dur="0.3"/> to find the overall probability that X-equals-zero <pause dur="0.3"/> and that was a column sum <pause dur="1.4"/> and we've moved on to thinking <pause dur="0.2"/> not just about things relating to marginal distribution such as expected values or variances <pause dur="0.4"/> but on to the relationships between variables <pause dur="0.4"/> where we needed to look at <pause dur="0.2"/> these joint probabilities <pause dur="0.9"/> so this was our <pause dur="0.4"/> basic data basic example <pause dur="0.2"/> that's driving the <pause dur="0.7"/> subject of the lecture <pause dur="2.1"/> and we finished last week <pause dur="0.9"/> by <pause dur="0.3"/> looking at the <pause dur="1.1"/> formula for the covariance <pause dur="0.5"/> between random variables <pause dur="1.1"/> just cover that up for the moment <pause dur="1.1"/> and <pause dur="0.2"/> it looked fairly messy but we dealt with some motivation for it <pause dur="0.7"/> and <pause dur="0.2"/> remember that this is <pause dur="0.2"/> covariance for random variables there's a distinction between what goes on for random variables

which is a theoretical kind of concept <pause dur="0.3"/> and what goes on for real data <pause dur="0.9"/> but the story here is <pause dur="0.3"/> we are looking at <pause dur="0.6"/> effectively average deviations of each variable from its mean <pause dur="0.4"/> but <pause dur="0.2"/> particularly <pause dur="0.2"/> how they do that together whether when X is below its mean Y tends to be or not <pause dur="0.3"/> we're looking at the <pause dur="0.2"/> average of these things in the sense that we're looking at how important these joint deviations are <pause dur="0.4"/> as measured by how likely those occurrences are <pause dur="0.3"/> corresponding occurrences are <pause dur="1.2"/> so we take a weighted average of these paired deviations <pause dur="0.4"/> and this double sum that you see here <pause dur="0.2"/> simply means <pause dur="0.3"/> that you have to add across all possible pairs of values of the two random variables <pause dur="0.5"/> so this was the story <pause dur="1.5"/> so the double sum there <pause dur="0.2"/> across X and across Y simply means that what you're doing is looking at all possible pairs of values of the random variable<pause dur="1.2"/>s <pause dur="0.4"/> concerned <pause dur="1.3"/> and what we've got is a weighted average <pause dur="0.2"/> of these paired deviations from the mean <pause dur="0.7"/> the weighted average is obtained by multiplying

each paired deviation <pause dur="0.4"/> by the probability of getting that deviation <pause dur="0.6"/> and all those probabilities lie between nought and one so we call it a weighted average <pause dur="1.7"/> and that's covariance for <pause dur="0.2"/> random variables <pause dur="0.8"/> and of course if you want to calculate it as we're about to see <pause dur="0.3"/> it can be rather tedious <pause dur="0.2"/> but the main thing is to get in your ideas the motivations for what we're doing we're looking at <pause dur="0.3"/> the relationship between variables <pause dur="0.3"/> by looking at <pause dur="0.2"/> how likely <pause dur="0.2"/> the two variables seem to be <pause dur="0.3"/> both below or above their means together <pause dur="0.2"/> or opposite sides of their means together <pause dur="0.5"/> the E <pause dur="0.2"/> is the expected value that we discussed last week that's the weighted average <pause dur="0.2"/> of possible values of the random variable <pause dur="0.3"/> using probabilities as the weights <pause dur="2.2"/> so that's <pause dur="0.2"/> covariance <pause dur="0.7"/> for random variables if you want to compare it with what goes on with real data <pause dur="0.3"/> in the assessed exercise you've got to calculate some correlations <pause dur="0.9"/> # for which you'll need covariances <pause dur="0.3"/> just to <pause dur="0.6"/> compare <pause dur="0.2"/> if you were looking at covariance

for real data <pause dur="0.2"/> here's what you would do <pause dur="0.3"/> and it's possible just to identify the different points in the formulae <pause dur="0.5"/> # so that you can see where the <pause dur="0.4"/> points of comparison occur <pause dur="0.5"/> so <pause dur="0.2"/> first of all we're not using random variables we're using real data <pause dur="0.3"/> and our real data when we look at covariance occurs in pairs <pause dur="1.0"/> pairs of values for two <pause dur="0.2"/> variables <pause dur="0.6"/> and <pause dur="0.2"/> when we've got the real data formula <pause dur="0.2"/> again we've got deviations of # the X variable from its sample mean <pause dur="0.2"/> and the Y variable from its sample mean instead of expected value <pause dur="2.2"/> and instead of a probability weighting <pause dur="0.2"/> the weighted average we've got a real average <pause dur="0.2"/> although use N-minus-one instead of N <pause dur="0.9"/> so we're averaging <pause dur="0.4"/> the <pause dur="0.9"/> cross-product of the deviations from the mean just as we're doing with the random variable case <pause dur="0.6"/> you only see one sum here <pause dur="1.5"/> because this is enough <pause dur="0.3"/> to do the summing across all pairs of values because the random variables come in the data come in pairs a value for the X a value for the Y <pause dur="0.9"/> but you can see it in

the same way <pause dur="0.2"/> we're summing across all possible pairs <pause dur="0.2"/> and these are actually the weights <pause dur="0.2"/> every time we put in something into here we give it a weight of one over N-minus-one <pause dur="0.3"/> it's the same sort of story <pause dur="0.6"/> but it's for real data <pause dur="0.3"/> rather than for <pause dur="0.2"/> random variables <pause dur="5.1"/> so <pause dur="2.6"/> covariance for random variables then <pause dur="0.7"/> is a rather messy looking thing but it's doing basically the same thing <pause dur="0.7"/> summing across all possible pairs <pause dur="0.2"/> comparing deviations from the means of the two random variables <pause dur="0.2"/> and weighting the lot <pause dur="0.6"/> taking an average <pause dur="0.2"/> so it's the same story <pause dur="1.3"/> but as with real data <pause dur="1.0"/> the problem with the covariance measure to capture the relationship between random variables <pause dur="0.7"/> is that the units of the thing <pause dur="0.5"/> <trunc>a</trunc> whether it's big or small <pause dur="0.2"/> will essentially depend on whether the <trunc>n</trunc> units of measurement of X and Y are big or small <pause dur="0.4"/> so <pause dur="0.2"/> it won't do for comparing the strength of relationships between <pause dur="0.3"/> different pairs of random variables <pause dur="0.4"/> because its size <pause dur="0.2"/> will change with the size of the random variables it's

not a standardized measure <pause dur="0.3"/> and we know what we do with real data <pause dur="0.5"/> we take the covariance <pause dur="0.5"/> and we divide by the <pause dur="0.4"/> standard deviations of the two variables involved <pause dur="0.6"/> that's what we do with sample data <pause dur="0.2"/> and it's exactly what we do <pause dur="0.3"/> with random variables <pause dur="0.3"/> so <pause dur="0.2"/> we have covariance which doesn't necessarily work for us in the sense that it's <pause dur="0.2"/> not properly scaled <pause dur="0.6"/> and instead of covariance <pause dur="0.5"/> we use <pause dur="0.7"/> correlation <pause dur="1.0"/> and the correlation story <pause dur="0.2"/> is related to the covariance story <pause dur="0.3"/> in just the same way as it is for <pause dur="0.7"/> real data <pause dur="0.2"/> that's to say <pause dur="0.4"/> the correlation <pause dur="0.5"/> is the covariance divided by the two <pause dur="0.5"/> can see a mistake in the slide <pause dur="0.5"/> divided by the two <pause dur="0.3"/> standard deviations <pause dur="0.2"/> that should be the standard deviation of <pause dur="0.9"/> Y <pause dur="5.6"/> and a standard deviation definition of course is the standard deviation definition that we use for random variables <pause dur="0.3"/> it's the <pause dur="0.3"/> weighted average of the deviation of the random variable of its mean squared <pause dur="0.2"/> where the weights to the probabilities for that random variable alone they're the

marginal probabilities <pause dur="1.0"/> if you don't take the square root it's the variance to get the standard deviation you must take the square root <pause dur="0.5"/> but the relationship is the same as before correlation <pause dur="0.4"/> is covariance <pause dur="0.3"/> divided by the two standard deviations <pause dur="0.9"/> and that gives us a measure <pause dur="0.6"/> that must lie between minus-one and plus-one and just with the product moment correlation coefficient <pause dur="0.6"/> minus-one or plus-one indicates <pause dur="0.5"/> a perfect linear relationship between the random variables <pause dur="0.4"/> something in between is weaker <pause dur="0.3"/> the closer it is to zero the weaker is the relationship <pause dur="1.4"/> so what we want to do in # in this lecture <pause dur="0.3"/> is just run through some calculations <pause dur="0.5"/> of covariance and hence of correlation <pause dur="0.6"/> and finish off with some <pause dur="0.4"/> more general stories about <pause dur="0.5"/> # expected value and variances <pause dur="1.1"/> so <pause dur="0.4"/> we're now moving into section four-point-five of the notes <pause dur="0.2"/> if you look in section four-point-five and thereabouts you'll see the details of the calculations that i'm going to go through with you now <pause dur="3.3"/> so it's a <pause dur="0.2"/> sensible place to be looking <pause dur="17.7"/>
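The definitions the lecture has just walked through, restated compactly (p(x, y) is a joint probability, p(x) is a marginal, and the sample version runs over n paired observations):

```latex
% Covariance of two discrete random variables: a probability-weighted
% average of paired deviations from the means, over all pairs (x, y).
\operatorname{Cov}(X,Y) = \sum_{x}\sum_{y}\bigl(x - E[X]\bigr)\bigl(y - E[Y]\bigr)\,p(x,y)

% Sample analogue for n paired observations (note the n - 1 divisor).
s_{XY} = \frac{1}{n-1}\sum_{i=1}^{n}\bigl(x_i - \bar{x}\bigr)\bigl(y_i - \bar{y}\bigr)

% Correlation: the covariance rescaled by both standard deviations,
% which forces the result into the interval [-1, +1].
\rho_{XY} = \frac{\operatorname{Cov}(X,Y)}{\sigma_X\,\sigma_Y},
\qquad
\sigma_X = \sqrt{\textstyle\sum_{x}\bigl(x - E[X]\bigr)^{2}\,p(x)}
```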

so <pause dur="0.4"/> first of all the <pause dur="0.9"/> numbers that we're actually going to be feeding into this calculation <pause dur="0.3"/> are the numbers from this joint distribution for demand and advertising <pause dur="0.6"/> so <pause dur="0.7"/> all the information that we can possibly have about these two random variables is encapsulated in the joint distribution <pause dur="0.3"/> we're then going to summarize this joint distribution by looking at the covariance and the correlation between these random variables <pause dur="0.7"/> in order to do that <pause dur="0.3"/> we are going to have to look at the <trunc>indi</trunc> the properties individually of X and Y <pause dur="0.2"/> 'cause we're going to need want to know what their expected values are <pause dur="0.4"/> and we're going to need to know what the variances are so we can get the standard deviations <pause dur="0.4"/> so we are going to need <pause dur="0.3"/> the so-called marginal distributions the probabilities associated with specific values of each variable <pause dur="0.4"/> alone <pause dur="0.8"/> and that's what you've got <pause dur="0.3"/> in the column and the row sums which are <pause dur="0.2"/> illustrated in the notes <pause dur="0.3"/> so we're going to need the joint probabilities to

work out the covariance and hence the correlation but on the way <pause dur="0.3"/> we're going to need <pause dur="0.5"/> the <trunc>m</trunc> so-called marginal probabilities the probabilities associated with <pause dur="0.4"/> values of the <trunc>r</trunc> each random variable taken <pause dur="0.4"/> individually <pause dur="0.6"/> 'cause we're going to look at deviations from the mean of each random variable <pause dur="0.9"/> so this is the basic information that we're going to use <pause dur="13.5"/> so to get at the covariance here's the covariance formula to get at the covariances we're going to need <pause dur="0.3"/> the expected value of X <pause dur="0.2"/> and the expected value of Y <pause dur="0.5"/> and then to get at the correlations we're also going to need <pause dur="0.4"/> the standard deviation of X and the standard deviation of Y <pause dur="0.4"/> so <pause dur="0.2"/> we <trunc>o</trunc> to get those individual quantities the expected value and the standard deviation <pause dur="0.3"/> we need the individual distribution <pause dur="1.1"/> the <trunc>margin</trunc> so-called marginal distributions <pause dur="2.5"/> so here's the <trunc>s</trunc> here's the story of the calculations made out in <pause dur="0.3"/> tabular form <pause dur="10.1"/> so this is all to do with our <pause dur="0.4"/> advertising demand example <pause dur="0.5"/> so here's the

story <pause dur="0.4"/> on the marginal distribution of X just looking at X alone there are two things we want to know about X alone <pause dur="0.4"/> we want to know what its expected value is <pause dur="0.3"/> and we want to know what its variance is <pause dur="0.2"/> so we can take its square root to get its standard deviation to use in the correlation <pause dur="0.9"/> so <pause dur="0.3"/> here we've got <pause dur="0.2"/> the value possible values of the random variable capital-X which we denote by little-X <pause dur="0.4"/> and we get nought one and two <pause dur="0.9"/> from the joint distribution we were able to calculate the marginal probabilities associated with each of those values of X <pause dur="0.3"/> and there they are <pause dur="1.6"/> to calculate the expected value <pause dur="0.2"/> it is the weighted average of the <pause dur="0.2"/> possible values of the random variable where the weights are the probabilities that is to say <pause dur="0.5"/> it's each X times its probability <pause dur="0.5"/> multiplied together <pause dur="0.4"/> and then <pause dur="0.2"/> those individual products summed <pause dur="0.3"/> that will be the expected value <pause dur="0.6"/> so the expected value of X here <pause dur="0.3"/> is one-point-one-one <pause dur="3.6"/> and that's for the <pause dur="0.2"/> fine that gives us one thing that we need for the <pause dur="0.4"/> about

the marginal distribution about the individual X distribution <pause dur="0.9"/> the other thing we need <pause dur="0.4"/> is the variance <pause dur="0.4"/> and the contributions to the variance <pause dur="0.5"/> are <pause dur="1.5"/> the deviations that of the random variable from its expected value how far away is does it <pause dur="0.4"/> get from its expected value how spread out is the distribution <pause dur="0.5"/> and in the end what is the average of those sorts of that sort of spread <pause dur="0.4"/> so what we look at <pause dur="0.3"/> is the difference between each individual value <pause dur="0.5"/> and the measure of central tendency the expectation <pause dur="0.8"/> we square that <pause dur="0.5"/> and we take the weighted average <pause dur="0.3"/> of these numbers <pause dur="0.4"/> where the weights again are the probabilities <pause dur="0.3"/> and <pause dur="0.2"/> this <pause dur="0.5"/> squared deviation <pause dur="0.4"/> is much more likely because it has a probability of point-five-one <pause dur="0.4"/> than this squared deviation which only has a probability of point-one-nine <pause dur="1.9"/> so again <pause dur="0.4"/> we've got to <pause dur="0.2"/> multiply <pause dur="0.6"/> each element in this column by its corresponding element in this column <pause dur="0.3"/> and add the things together <pause dur="1.9"/> which is what we see <pause dur="0.4"/> there <pause dur="0.4"/> in the final column except you

don't see it <pause dur="2.7"/> okay so <pause dur="0.3"/> in here <pause dur="0.2"/> point-two-three-four-one <pause dur="0.4"/> is <pause dur="0.2"/> this number <pause dur="0.3"/> multiplied by this one <pause dur="0.5"/> and then the variance is just the sum of those things <pause dur="2.9"/> so as long as there aren't too many numbers it's # <pause dur="0.3"/> tedious but not to the point of exhaustion <pause dur="3.1"/> and that's the variance that's the weighted average in terms of the random variable the way that the random variable deviates around its mean <pause dur="0.3"/> how spread out the distribution of the random variable is <pause dur="1.7"/> just remember <pause dur="0.3"/> that when we finally get to feeding this into the formula for correlation <pause dur="0.5"/> we'll want the square root of this quantity <pause dur="1.0"/> so <pause dur="0.2"/> although the variance is about point-four-eight <pause dur="0.3"/> the standard deviation is going to be bigger than that the square root of the number that's less than one <pause dur="0.5"/> is bigger than the original number <pause dur="0.6"/> so it's about point-seven <pause dur="4.3"/> and you have to repeat these this sequence of calculations <pause dur="0.8"/> for the Y variable <pause dur="0.9"/> so you want the same sorts of ranges of columns <pause dur="0.3"/> for the Y variable <pause dur="0.8"/> and <pause dur="0.4"/> to complete the calculations <pause dur="0.2"/>

so <pause dur="0.3"/> you need to know the values that the random variable can take the probabilities that it can take that value <pause dur="0.7"/> that eventually gives you the expected value <pause dur="0.3"/> hence you can calculate the square of the deviation of the values <pause dur="0.3"/> from the expected value <pause dur="1.4"/> weight those by the probabilities again <pause dur="0.4"/> add them up <pause dur="0.3"/> that gives you the variance <pause dur="0.3"/> the square root of which <pause dur="0.4"/> is the standard deviation <pause dur="0.8"/> so <pause dur="0.3"/> in black there <pause dur="0.2"/> you have the key quantities that you need to obtain <pause dur="0.5"/> from the <pause dur="0.3"/> marginal distributions they themselves of course are coming from the joint distributions everything is in the joint distribution <pause dur="0.3"/> the joint distribution is everything what we're doing is calculating some <pause dur="0.2"/> summary descriptors <pause dur="0.4"/> of that distribution <pause dur="0.3"/> both individually <pause dur="0.5"/> expected <pause dur="0.4"/> value and variance and eventually <pause dur="0.3"/> in terms of the relationship between the variables <pause dur="2.6"/> so those are the numbers that we need <pause dur="1.6"/> these calculations are laid out <pause dur="0.5"/> but <pause dur="0.2"/> we've just been through one to make it clear <pause dur="0.8"/> it's going <pause dur="3.3"/> so <pause dur="0.2"/> we now need to

turn back to the joint distribution <pause dur="0.6"/> to think about <pause dur="0.6"/> these <pause dur="0.3"/> the way that the variables <pause dur="1.1"/> change together <pause dur="0.2"/> rather than individually <pause dur="11.3"/> okay <pause dur="1.2"/> this table is also in your notes it comes out rather <pause dur="0.2"/> small here <pause dur="0.6"/> but <pause dur="0.3"/> what we need to do <pause dur="0.4"/> is to start thinking about <pause dur="0.3"/> the relationship between <pause dur="0.5"/> the random variables so we've got <pause dur="0.3"/> to <pause dur="0.2"/> to move on now to the joint distribution <pause dur="0.9"/> so <pause dur="0.4"/> the first thing <pause dur="0.2"/> to think about is <pause dur="0.2"/> how to organize the pairs of values that can occur <pause dur="0.6"/> well if you just look at the first and the third column here you can see how i've organized it i've tried to organize it systematically <pause dur="0.5"/> i've said <pause dur="0.3"/> fix the Y value at one <pause dur="0.5"/> work through the possible X values <pause dur="0.2"/> fix the <trunc>vy</trunc> Y value at two <pause dur="0.3"/> work through the <trunc>corr</trunc> possible X values <pause dur="0.2"/> fix the Y value at three <pause dur="0.3"/> work through the possible X values just do it systematically you could lay it out differently <pause dur="0.4"/> but we have got to cover <pause dur="0.5"/> all possible pairs <pause dur="0.5"/> of values <pause dur="0.7"/> so <pause dur="0.8"/> every row in this table corresponds to a pair of values of X and Y <pause dur="0.3"/> so when we're trying

to sum across all possible pairs what we're doing again <pause dur="0.3"/> is summing the column <pause dur="3.1"/> so in the middle <pause dur="0.2"/> column there two <pause dur="0.3"/> i've just <trunc>u</trunc> i've put <pause dur="0.2"/> the deviation of X from its mean why do i need that <pause dur="0.2"/> because that term feeds in <pause dur="0.3"/> to my covariance formula <pause dur="0.7"/> not on its own <pause dur="0.4"/> but it's still in there <pause dur="0.6"/> so <pause dur="0.2"/> here i've got all my deviations i'd already calculated these numbers for the purposes of calculating the variance of X <pause dur="0.2"/> so it's not although i've written them in again <pause dur="0.6"/> <trunc>w</trunc> you would have already calculated them <pause dur="0.9"/> and of course <pause dur="0.2"/> it repeats so <pause dur="0.3"/> as an an as X-equals-nought the deviation is always minus-one-point-one-one <pause dur="0.5"/> at X-equals-two the deviation is always nought-point-eight-nine <pause dur="0.8"/> so it reappears <pause dur="2.0"/> and the same will be true <pause dur="0.7"/> of the way <pause dur="0.2"/> Y deviates it from its mean only because of the way i've got it organized <pause dur="0.2"/> i've got three <pause dur="0.2"/> Y-equals-one values so i've got three deviations the same <pause dur="0.7"/> three two values together <pause dur="0.3"/> so i'll get three values the same there <pause dur="0.5"/> but again i i would have already

calculated these deviations to get at the variance <pause dur="0.3"/> i'm going to use them again to get at the covariance <pause dur="0.4"/> but to get at the covariance <pause dur="0.5"/> i'm going to <pause dur="0.2"/> multiply these <pause dur="0.2"/> elements from these two columns together <pause dur="0.2"/> i'm interested in the <pause dur="0.4"/> relationship between the deviation of X from its mean <pause dur="0.4"/> and Y from its mean <pause dur="0.5"/> that was what the formula said <pause dur="2.4"/>
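A minimal sketch of that bookkeeping in code. The joint probabilities below are illustrative placeholders, not the actual table from the notes (the transcript quotes only a few of its values, such as E[X] = 1.11); the structure of the calculation is the one described above.

```python
# Illustrative joint distribution for (X = adverts, Y = demand level).
# These nine probabilities are placeholders; the real table is in the notes.
joint = {
    (0, 1): 0.10, (1, 1): 0.10, (2, 1): 0.05,
    (0, 2): 0.10, (1, 2): 0.25, (2, 2): 0.10,
    (0, 3): 0.05, (1, 3): 0.15, (2, 3): 0.10,
}

# Marginal probabilities: column sums for X, row sums for Y.
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

# Expected values: each value times its marginal probability, summed.
ex = sum(x * p for x, p in px.items())
ey = sum(y * p for y, p in py.items())

# Variances: probability-weighted averages of squared deviations; the
# square roots are the standard deviations the correlation will need.
var_x = sum((x - ex) ** 2 * p for x, p in px.items())
var_y = sum((y - ey) ** 2 * p for y, p in py.items())
```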

so <pause dur="0.3"/> i need <pause dur="0.2"/> a fifth column which would be a brand new column <pause dur="0.4"/> which is the product <pause dur="0.2"/> row by row <pause dur="0.3"/> of what's in the second column and what's in the fourth column <pause dur="0.4"/> so my one-point-o-four here <pause dur="0.3"/> is minus-one-point-one-one <pause dur="0.6"/> multiplied by minus-nought-point-nine-four <pause dur="1.0"/> both numbers <pause dur="0.4"/> tend to be <pause dur="0.2"/> # both numbers there are below their mean <pause dur="1.0"/> next time <pause dur="0.2"/> we're multiplying <pause dur="0.3"/> minus-point-one-one <pause dur="0.2"/> by minus-point-nine-four again <pause dur="0.3"/> both numbers tend to be below their mean that's evidence for positive covariance <pause dur="1.0"/> they vary together <pause dur="0.2"/> however the next number <pause dur="0.4"/> point-eight-nine <pause dur="0.4"/> times minus-point-nine-four <pause dur="0.4"/> X is above its mean but Y is below <pause dur="0.5"/><vocal desc="sniff" iterated="n"/><pause dur="0.9"/> how important is that <pause dur="0.5"/> well we it depends

on the probability of getting that pair of values <pause dur="0.4"/> so we've got two <pause dur="0.3"/> pairs of values that are both below their mean <pause dur="0.3"/> and they both have positive probabilities <pause dur="0.3"/> one where one's above one's below <pause dur="0.2"/> and it has a positive probability but it's relatively small <pause dur="0.6"/> so on balance when we look at that <pause dur="0.3"/> it would appear that we've got <pause dur="0.4"/> # favour for a positive <pause dur="1.3"/> covariance or evidence for positive covariance <pause dur="0.2"/> and then we work our way down <pause dur="0.4"/> that's below that's above multiply them together <pause dur="0.3"/> get a positive number <pause dur="0.4"/> again <pause dur="0.6"/> that's actually sorry we get a <trunc>n</trunc> we get the negative <pause dur="0.3"/> number here i've moved this across to the probability column now <pause dur="0.2"/> we're getting one below <pause dur="0.2"/> one above they're opposite <pause dur="0.6"/> opposite locations with respect to the mean <pause dur="0.3"/> and so on through <pause dur="1.2"/> so that's this column here <pause dur="0.2"/> this column here <pause dur="0.2"/> is a deviation of X from its mean <pause dur="0.3"/> times the deviation of Y from its mean for each pair <pause dur="0.3"/> of values <pause dur="2.4"/> we want to find out what the average <pause dur="0.6"/> <trunc>de</trunc> <pause dur="0.2"/> average combination of these <pause dur="0.3"/> deviations is when multiplied

together <pause dur="0.2"/> so we're going to multiply each product by its probability <pause dur="0.2"/> of occurring <pause dur="0.6"/> that's the joint probability of getting the underlying values of X and Y <pause dur="0.6"/> that will then give us <pause dur="0.4"/> eventually when we add all those things together it'll give us a weighted average <pause dur="0.3"/> so <pause dur="0.3"/> what we want to do <pause dur="0.3"/> is multiply <pause dur="0.4"/> this column <pause dur="0.2"/> which itself is the product of these two <pause dur="1.0"/> by the probability <pause dur="1.2"/> and that's the final column <pause dur="0.5"/> that we've got in that table <pause dur="1.1"/> so the final column there <pause dur="0.4"/> is <pause dur="1.2"/> the element in column two <pause dur="0.3"/> times the element in column four <pause dur="0.4"/> times the element in column six <pause dur="2.6"/> but that's for each so we've got one row <pause dur="0.3"/> corresponds to one pair of values of the random variables <pause dur="0.3"/> we want to sum across all pairs <pause dur="0.3"/> we were expressed that as a double sum but all you have to think of is summing across all possible combinations <pause dur="0.3"/> so what we have to do <pause dur="0.2"/> is sum up this column of <trunc>num</trunc> numbers here <pause dur="0.8"/> and find out what we get <pause dur="0.3"/> and that will be the covariance <pause dur="0.4"/> between these two <pause dur="0.3"/> random variables <pause dur="1.3"/> now the <pause dur="0.5"/> that's <pause dur="0.7"/> just a matter

of hitting the buttons on a calculator then it's about nought-point-one <pause dur="2.1"/> we can't read anything at all <pause dur="0.5"/> into the numerical value <pause dur="0.6"/> of that number <pause dur="0.5"/> because the number that you get out of that <pause dur="0.2"/> is clearly going to depend <pause dur="0.3"/> on the sizes of the numbers that went in in the first place and if these sizes are large <pause dur="0.4"/> then that number is going to tend to be large <pause dur="2.2"/> what we can do however is take something from the sign the sign is positive so <pause dur="0.3"/> whatever kind of relationship there is here <pause dur="0.2"/> or linear relationship of course specifically we are talking about <pause dur="0.5"/> is positive whatever there is <pause dur="0.6"/> it would represent <pause dur="0.3"/> a situation where <pause dur="0.5"/> if X is above its mean <pause dur="0.2"/> Y would tend to be above its mean <pause dur="0.3"/> if <trunc>w</trunc> X is below its mean <pause dur="0.2"/> Y would tend to be <pause dur="0.2"/> below its mean <pause dur="0.3"/> whether they're below or above or not <pause dur="0.3"/> they're paired <pause dur="0.3"/> in that sense <pause dur="3.1"/> so <pause dur="0.3"/> what we've done then <pause dur="0.2"/> is to compute the formula at the bottom of the screen <pause dur="2.1"/> we've taken <pause dur="0.3"/> all the possible pairs of X-minus-E-of-X Y-minus-E-of-Y multiplied them together multiplied

them by their probabilities <pause dur="0.2"/> and added the lot together <pause dur="0.3"/> and that's the covariance <pause dur="2.7"/> however <pause dur="0.2"/> we want the correlation we want this scaled version that's going to tell us <pause dur="0.2"/> is that <pause dur="0.4"/> # <pause dur="0.5"/> a very strong relationship or isn't it we've got to remove the <pause dur="0.4"/> size effect <pause dur="0.4"/> from the <pause dur="1.3"/> # measure <pause dur="0.2"/> and we do that <pause dur="0.5"/> by dividing by the standard deviations <pause dur="13.5"/> we've worked out what the standard deviations are <pause dur="2.0"/> on the last slide but one <pause dur="1.0"/> so <pause dur="0.8"/> all we have to do <pause dur="0.2"/> is to substitute them into the formula all the bits of the formula are now calculated all we got to do is substitute them <pause dur="0.4"/> so the correlation is the number we worked out by the covariance <pause dur="0.5"/> divided by both the standard deviation of X <pause dur="0.4"/> and the standard deviation of Y <pause dur="0.6"/> and that will give us a standardized measure that must lie between minus-one and plus-one <pause dur="0.5"/> the closer it is to zero the weaker is the relationship <pause dur="0.4"/> the closer it is to minus-one the closer it is to a perfect downward sloping relationship <pause dur="0.3"/> the closer it is to plus-one <pause dur="0.5"/> the closer it

is to a perfect upward sloping <pause dur="0.5"/> relationship <pause dur="1.0"/> so <pause dur="0.2"/> there are the numbers that we've calculated on the previous slides <pause dur="0.4"/> they simply have to be substituted <pause dur="0.3"/> into the formula <pause dur="0.9"/> you can multiply these two together first <pause dur="0.4"/> before dividing them into that one if you prefer or can do it sequentially <pause dur="0.5"/> divide covariance by standard deviation of X <pause dur="0.4"/> and then divide that number by the standard deviation of Y it's the same result <pause dur="0.2"/> will <pause dur="0.7"/> occur <pause dur="1.4"/> and <pause dur="0.3"/> the bottom line <pause dur="0.6"/> for the correlation between these two random variables <pause dur="0.8"/> is <pause dur="0.4"/> that <pause dur="0.7"/> we get a number that's about nought-point-two <pause dur="1.7"/> that's rounded to four decimal places <pause dur="0.3"/> the number that you see up there <pause dur="0.5"/> so the correlation is positive but it's weak <pause dur="1.1"/> # there doesn't seem to be a strong <pause dur="0.3"/> relationship between <pause dur="0.6"/> # the <pause dur="0.2"/> advertising <pause dur="0.6"/> and the demand levels <pause dur="0.7"/> over the period that this empirical probability distribution has been constructed <pause dur="3.2"/> we shouldn't perhaps be <pause dur="0.2"/> too surprised about that <pause dur="0.6"/> but this number for some distributions this number will be <pause dur="0.4"/> a lot bigger <pause dur="0.9"/>
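Continuing the sketch above, the covariance is the double sum across all pairs and the correlation is that number rescaled by the two standard deviations; with the placeholder table the results happen to land near the lecture's quoted values (covariance about 0.1, correlation about 0.2).

```python
# Covariance: deviation of x times deviation of y, weighted by the
# joint probability of that pair, summed across all possible pairs.
cov = sum((x - ex) * (y - ey) * p for (x, y), p in joint.items())

# Correlation: divide through by both standard deviations to strip the
# units; the result must lie between -1 and +1.
corr = cov / (var_x ** 0.5 * var_y ** 0.5)
print(f"Cov(X,Y) = {cov:.4f}, Corr(X,Y) = {corr:.4f}")
```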

but we've got <pause dur="0.7"/> now a number of summary statistics for our joint distribution <pause dur="0.8"/> we've got <pause dur="0.7"/> <trunc>b</trunc> in particular we got individually for the Ys and the Xs <pause dur="0.6"/> their expected value which is a measure of the <trunc>s</trunc> <pause dur="0.2"/> of the # central tendency or the general location of the distribution of each individually <pause dur="0.6"/> we've got the variance which is a measure of their spread <pause dur="0.3"/> about that measure of central tendency how spread out are they about that standard <pause dur="0.2"/> that # <pause dur="1.6"/> overall measure of location of the distribution <pause dur="0.7"/> and then we've got two measures <pause dur="0.4"/> of the relationship between the variables we've got the <pause dur="0.2"/> covariance <pause dur="0.6"/> and then we've got the correlation <pause dur="2.6"/> so the numbers are are # # somewhat tedious but what i'd really like you to take away from this is <pause dur="0.3"/> # <pause dur="0.3"/> you are you do have for the purposes of examinations to be able to perform such calculations <pause dur="0.4"/> but in the longer term <pause dur="0.3"/> what you need to develop is some intuition about what's going on with these definitions what it what is really

being delivered to you when you make these calculations <pause dur="1.0"/> # because in practice if you do upgrade any much more statistics of course <pause dur="0.4"/> you won't be handling the individual calculations <pause dur="0.2"/> you'll leave that to a machine <pause dur="0.3"/> but if you don't know what the formula is doing <pause dur="0.4"/> you really can't interpret the answers very reliably <pause dur="2.7"/> so <pause dur="0.3"/> that's joint distributions then <pause dur="0.2"/> and summary statistics for distributions of random variables <pause dur="0.2"/> and each one of those <pause dur="0.2"/> has an analogue with real data <pause dur="0.4"/> we start off with real data get an understanding of what we're doing with real data <pause dur="0.2"/> and then to develop more statistical techniques we develop tools <pause dur="0.3"/> random variables and probability distributions <pause dur="0.3"/> which allow us to <pause dur="0.3"/> become more sophisticated in our analysis of real data there's a feedback effect <pause dur="2.0"/>

the last # few things that i want to talk about in this section of the lectures <pause dur="0.6"/> # <pause dur="0.2"/> harp back <pause dur="0.4"/> to <pause dur="0.2"/> looking at individual variables looking at individual random variables <pause dur="3.4"/> and i want to talk then again about expected value and variance <pause dur="0.6"/> but i want to <pause dur="0.4"/> point out to you a very important feature of these <pause dur="0.2"/> # <pause dur="0.3"/> calculations <pause dur="0.8"/> very often <pause dur="0.5"/> you'll have some information about some variable <pause dur="0.8"/> that's not itself directly of interest <pause dur="0.7"/> the thing that you want to think about <pause dur="0.5"/> is is related to the <pause dur="0.6"/> # that random variable about which you have information <pause dur="0.3"/> but is not the same as <pause dur="0.3"/> that <pause dur="0.6"/> random variable <pause dur="2.2"/> so you might have a probability distribution about <trunc>s</trunc> <pause dur="0.2"/> about in this case it's going to be a <pause dur="0.4"/> about sales in this

illustrative example i'm going to introduce <pause dur="0.2"/> but you may not want to know about sales you may not want to know what the expected value of sales is you may not want to know what the variance of sales is <pause dur="0.8"/> what you may want to know <pause dur="0.4"/> is <pause dur="0.5"/> what happens to profits what's the properties of profits not what are the properties of sales <pause dur="1.0"/> but <trunc>i</trunc> what you know about is sales how can you move from one to the other <pause dur="1.5"/> so here's a story <pause dur="0.6"/> that is <pause dur="0.5"/> very useful and this story also applies although we're going to introduce it for random variables and talk about expected values and variances <pause dur="0.9"/> exactly the same rules apply for real data <pause dur="0.3"/> in other words to averages <pause dur="0.3"/> and to sample variances <pause dur="1.2"/> so supposing we've got <pause dur="0.7"/> two <pause dur="0.3"/> we've got a random variable we've got information on sales <pause dur="0.4"/> but what we want to know <pause dur="0.5"/> is profits <pause dur="0.4"/> we want to know about <pause dur="0.3"/> profits <pause dur="1.8"/> so <pause dur="0.2"/> here's the relationship <pause dur="0.5"/> that has been found to exist <pause dur="0.4"/> between <pause dur="0.3"/> profits <pause dur="0.3"/> capital-P <pause dur="0.3"/> and sales capital-X i'm using capital letters because i want you at this stage to

think about these things as <pause dur="0.3"/> random variables in general <pause dur="0.5"/> they will have probability distributions <pause dur="0.5"/> we won't <pause dur="0.4"/> we will be able to get <pause dur="0.8"/> realizations or particular values in practice <pause dur="0.2"/> but when we're thinking about them as random variables we want to think about them in the abstract <pause dur="0.5"/> so this says <pause dur="0.3"/> that whatever value of X you happen to have from your distribution <pause dur="0.2"/> <trunc>i</trunc> you've the corresponding value of P can be calculated <pause dur="0.3"/> and so it can <pause dur="0.2"/> be stated as a general rule as the relationship between <pause dur="0.5"/> random variables <pause dur="0.5"/> the units here <pause dur="0.5"/> are <pause dur="0.2"/> that # everything's been made in # is measured in thousands of pounds <pause dur="0.7"/> # <pause dur="0.2"/> and in some cases thousands of pounds <pause dur="0.2"/> per day <pause dur="0.9"/> and you can interpret the <trunc>u</trunc> the terms that you see here it's a linear equation it's simple <pause dur="0.2"/> and the rules that i'm going to deal with <pause dur="0.3"/> are specific to linear equations linear equations are ones where <pause dur="0.3"/> one variable <pause dur="0.3"/> is a constant multiplied by the other <pause dur="0.2"/> with another constant added or subtracted you've seen that before <pause dur="0.3"/> with what <gap reason="name" extent="2 words"/> did

for elasticity and so on <pause dur="1.1"/> and you can interpret these coefficients as i have done there <pause dur="0.3"/> the three <pause dur="0.3"/> is giving you some measure <pause dur="0.2"/> of the profit per car <pause dur="0.4"/> in thousands of pounds <pause dur="0.5"/> the minus-two <pause dur="0.4"/> is <pause dur="0.4"/> some fixed costs per day <pause dur="0.4"/> in thousands of pounds so what you've got here <pause dur="0.4"/> are <pause dur="0.3"/> the amount of <pause dur="0.6"/> profits that you're going to get <pause dur="0.2"/> per car <pause dur="0.3"/> minus the amount of money that you're going to lose anyway as a result of say keeping your showroom going or something of that sort <pause dur="2.9"/> so this relationship is in terms of random variables so R-Vs here means i'm using capitals i'm talking about random variables in general <pause dur="0.4"/> but i can of course use the rule <pause dur="0.5"/> for specific values of the random variables so #<pause dur="0.2"/> if we know <pause dur="0.5"/> or somebody wants asked wants to ask wants # us to consider <pause dur="0.3"/> what happens when the random variable X takes the value specifically one <pause dur="1.6"/> then we can work out <pause dur="0.2"/> what the corresponding profits would be <pause dur="0.8"/> by simply feeding that one <pause dur="0.3"/> into the formula <pause dur="0.4"/> and in this case we'd get profits of one unit <pause dur="0.8"/> thousand pounds per

day <pause dur="4.3"/> we've got <pause dur="0.8"/> <trunc>i</trunc> either because we <pause dur="0.2"/> <trunc>s</trunc> <pause dur="0.2"/> # developed a statistical model or we've got an experimental probability distribution <pause dur="0.4"/> we've got information <pause dur="0.3"/> about <pause dur="0.2"/> X <pause dur="2.7"/> but what we want to know <pause dur="0.7"/> about is not X <pause dur="0.3"/> it's profits this is what matters or at least for some reason this is what we're asked to investigate <pause dur="0.7"/> so how do we get a story about profits from the story about sales and how that in terms of probability distributions and their summary statistics <pause dur="2.0"/> well when you look at the <pause dur="0.5"/> equation <pause dur="1.1"/> you can see <pause dur="0.4"/> that <pause dur="1.2"/> any value of X <pause dur="0.2"/> generates a particular value of P <pause dur="0.8"/> so <pause dur="0.4"/> if we change the value of X we'll necessarily change the value of P <pause dur="0.6"/> and if we never come back to the same value of X we'll never come back to the same value of P <pause dur="0.6"/> a particular value of profits of # sales X <pause dur="0.2"/> is associated uniquely with a particular value of profits <pause dur="0.4"/> P <pause dur="0.7"/> so <pause dur="0.2"/> if somebody tells us what the probability of getting some particular value of sales is <pause dur="1.1"/> all we have to do <pause dur="0.6"/> is to <trunc>u</trunc> to describe the probability of getting the

corresponding value of profits is saying it's the same <pause dur="0.7"/> so <pause dur="0.2"/> the probability of getting <pause dur="0.3"/> profits of unit one <pause dur="0.3"/> is the same <trunc>s</trunc> <pause dur="0.2"/> as the probability of getting sales of unit one the probabilities are going to be the same <pause dur="0.7"/> it's called a one to one relationship you've got no overlapping at all <pause dur="3.2"/> so what this tells us <pause dur="0.3"/> is we <pause dur="0.2"/> can <trunc>e</trunc> if we know what the probability distribution of the <pause dur="0.2"/> Xs is <pause dur="0.7"/> we know what the probability distribution of the Ps are <pause dur="0.8"/> we can just read off what the probability of the corresponding X value is and we'll have a set of pairs of values of profits <pause dur="0.4"/> with their probabilities <pause dur="0.4"/> so we'll know what the <pause dur="0.4"/> probability distribution of profits is <pause dur="0.3"/> from which <pause dur="0.4"/> we can calculate <pause dur="0.3"/> any of the summary statistics for the distribution <pause dur="0.7"/> # that we want <pause dur="0.2"/> expected value or variance using the formulae we've got <pause dur="1.3"/> so we can regenerate a new probability distribution <pause dur="0.4"/> and we can calculate expected values and variances <pause dur="0.6"/> if we want to know the probability distribution

specifically <pause dur="0.4"/> we're going to have to go through this process <pause dur="0.5"/> but it turns out that if you want to know just these summary statistics <pause dur="0.7"/> you don't have to go through the irksome process <pause dur="0.3"/> of calculating all the probabilities <pause dur="0.2"/> and then running through the formulae there's a <trunc>sh</trunc> a very important <pause dur="0.3"/> short cut <pause dur="10.9"/> so just to be clear <pause dur="0.2"/> supposing <pause dur="0.3"/> you wanted to work out <pause dur="1.0"/> what the <pause dur="1.0"/> # <pause dur="0.5"/> expected value of profits was if you do it longhand what have you got to do <pause dur="0.8"/> well for each possible value of X <pause dur="1.1"/> you have to work out the corresponding value of profits <pause dur="1.1"/> and you have to know what the probability of getting that value of X is <pause dur="0.2"/> it'll be the same as the probability of getting that value of profits <pause dur="0.7"/> but you have to do this calculation <pause dur="0.4"/> for every single <pause dur="0.9"/> value of X you have to work out a new value of P <pause dur="1.0"/> to get the probability and then you're going to have to multiply <pause dur="0.3"/> the value of P by its probability <pause dur="0.4"/> for each possible value <pause dur="0.6"/> and add them all up <pause dur="0.3"/> to get the expected value <pause dur="0.5"/> # there isn't an

answer here i haven't written it out that's a tedious calculation <pause dur="0.6"/> and if there were very many possible values for the random variable <pause dur="1.3"/> not just six maybe sixty <pause dur="0.2"/> it would be a real pain <pause dur="1.7"/> but this is how you'd work out the expected value of P if you had to <pause dur="0.5"/> it's minus-two times point-one-eight plus one times point-three-nine plus <pause dur="0.2"/> four times point-two-four <pause dur="0.4"/> all the way to the end <pause dur="0.3"/> thirteen times point-o-one <pause dur="2.6"/> and # <pause dur="0.3"/> the variance would be even worse you'd have to work out all the <trunc>s</trunc> deviations square them multiply the probabilities add them <pause dur="0.4"/> a real <pause dur="0.3"/> pain <pause dur="2.1"/> so you don't do that <pause dur="0.4"/> when the relationship between the random variables <pause dur="0.4"/> is linear <pause dur="0.4"/> when the relationship between the random variables is linear <pause dur="0.5"/> you've got a very simple <pause dur="0.4"/> alternative <pause dur="0.2"/> route to follow <pause dur="3.1"/>
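A sketch of both routes for P = 3X - 2. The probabilities 0.18, 0.39, 0.24 and 0.01 are read out in the lecture; the two middle values are not, so the 0.14 and 0.04 below are assumptions, chosen to be consistent with the E[X] = 1.5 and Var(X) = 1.25 quoted shortly.

```python
# Longhand vs. shortcut for the linear relationship P = 3X - 2.
# 0.14 and 0.04 are assumed (not read out in the lecture); they match
# the quoted E[X] = 1.5 and Var(X) = 1.25.
pmf_x = {0: 0.18, 1: 0.39, 2: 0.24, 3: 0.14, 4: 0.04, 5: 0.01}

# Longhand: transform every value, keep its probability, then average.
e_p_long = sum((3 * x - 2) * p for x, p in pmf_x.items())

# Shortcut for P = bX + a: E[P] = b*E[X] + a and Var(P) = b**2 * Var(X).
e_x = sum(x * p for x, p in pmf_x.items())
var_x = sum((x - e_x) ** 2 * p for x, p in pmf_x.items())
e_p_short = 3 * e_x - 2   # = 3 * 1.5 - 2 = 2.5
var_p = 3 ** 2 * var_x    # = 9 * 1.25 = 11.25; the -2 drops out entirely

print(e_p_long, e_p_short, var_p)  # both routes give E[P] = 2.5
```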

so <pause dur="0.8"/> we just have <pause dur="0.2"/> a general statement and we say here's a linear relationship between a random variable X and a random variable Y <pause dur="0.5"/> we multiply by B <pause dur="0.2"/> add A <pause dur="1.1"/> so we've got all the information we need about Y we've perhaps got its probability distribution <pause dur="0.5"/> and we want to find out what the expected value of X is how do we go about it <pause dur="1.4"/> well all we have to know <pause dur="0.5"/> is that <pause dur="0.3"/> the relationship between the expected values <pause dur="0.3"/> is exactly the same as the relationship between the random variables themselves <pause dur="0.5"/> so <pause dur="0.3"/> if in order to get any value of Y you must multiply X by B and add A <pause dur="0.4"/> it's true that if you want the expected value <pause dur="0.3"/> of Y <pause dur="0.2"/> you simply multiply by B <pause dur="0.3"/> and add A <pause dur="4.8"/> so you certainly don't go through all the rigmarole of working out all the <pause dur="0.3"/> probabilities and expected values and the <pause dur="0.2"/> and so on <pause dur="0.3"/> all you need to know <pause dur="0.4"/> is

what the expected value of X is so if somebody's told you what the expected value of X is or you've got that information from somewhere <pause dur="0.5"/> it's very easy to work out the expected value of Y <pause dur="1.6"/> the same is true of the variance <pause dur="0.3"/> but don't forget with the variance you're always looking at <pause dur="0.8"/> deviations from the mean <pause dur="1.4"/> and you're squaring them <pause dur="1.5"/> so <pause dur="0.2"/> when we look at the mean of Y or the <trunc>ad</trunc> expected value of Y <pause dur="0.4"/> no matter which expected value no matter what the actual value of Y is <pause dur="0.3"/> the <pause dur="0.3"/> the specific value of Y is <pause dur="0.2"/> the expected value is going to <pause dur="0.2"/> involve the term A <pause dur="0.8"/> so when we look at the difference between the average and the actual both the average and the actual <pause dur="0.3"/> will include a term in A <pause dur="0.2"/> here's the A coming in from the average <pause dur="0.3"/> here's the A coming in from the actual if you like <pause dur="0.6"/> so when we take the difference <pause dur="1.0"/> the A is going to disappear <pause dur="0.8"/> so when we look at deviations of Y about its mean <pause dur="0.5"/> clearly the A bit's not going to play a role <pause dur="0.5"/> what is going to play a role is the B bit <pause dur="1.0"/> but that's going to

get squared up because variance we look at the square of the deviation of the mean <pause dur="0.4"/> so when we look at say this minus this <pause dur="0.2"/> the A will disappear but we'll have B in there <pause dur="0.9"/> when we look at variance we square that <pause dur="0.3"/> so we're going to end up with B-squared <pause dur="0.6"/> and in fact the relationship between the variances <pause dur="0.8"/> is this <pause dur="0.6"/> the variance of Y <pause dur="0.4"/> is <pause dur="0.7"/> B-squared times the variance of X <pause dur="0.6"/> you have to the A is irrelevant it disappears when taking the difference <pause dur="0.3"/> the B sticks around <pause dur="0.6"/> but it has to be squared up because variance is the squaring operation <pause dur="1.7"/> and these relationships would hold as well for sample data <pause dur="0.5"/> if you knew what this <trunc>samp</trunc> if you got some data and you knew what the average value of X was <pause dur="0.3"/> and you knew that X was related to Y in this way <pause dur="0.4"/> then the average value of Y could be worked out <pause dur="0.3"/> using this formula but plugging in the average value of X <pause dur="0.3"/> similarly <pause dur="0.3"/> if you knew what the sample variance of X was <pause dur="0.4"/> then the sample variance of Y <pause dur="0.3"/> can be worked out like this <pause dur="0.3"/> it's true both of the <pause dur="0.7"/>

random variable case which we sometimes refer to as the population case <pause dur="0.4"/> and <pause dur="0.5"/> real data <pause dur="1.4"/> it's also the case <pause dur="0.3"/> that these relationships hold not just for discrete random variables we've been pushing the discrete random variable story for reasons of simplicity <pause dur="0.4"/> these relationships <pause dur="0.3"/> also hold for continuous random variables although we haven't defined exactly what we mean by expected value and variance <pause dur="0.4"/> you should be developing some conceptual ideas of what we mean <pause dur="0.4"/> and we are able to define these things for continuous random variables <pause dur="0.3"/> in which case <pause dur="0.3"/> these relationships continue to hold it's very general result <pause dur="0.5"/> when the basic relationship between the random variables is linear <pause dur="3.3"/> so <pause dur="0.3"/> here is <pause dur="0.3"/> # a very simple <pause dur="0.3"/> # illustration then <pause dur="1.0"/> we had that profits was <pause dur="0.2"/> three times sales minus two <pause dur="5.2"/> # once you know that the expected value of sales is one-point-five you can easily calculate the expected value of profits <pause dur="0.6"/> by feeding it into the formula <pause dur="0.8"/> you don't have to go through all the

rigmarole of working out a probability distribution <pause dur="0.4"/> and # going through <trunc>y</trunc> every step in the calculation of expectation <pause dur="0.9"/> similarly <pause dur="0.5"/> with variance <pause dur="0.5"/> the A bit is our minus-two here <pause dur="1.2"/> it's the constant that you add on in this case it's a subtraction so it's minus-two <pause dur="0.2"/> but it's irrelevant <pause dur="2.1"/> and the only thing that's important is the multiplicative factor of three <pause dur="0.8"/> and that <pause dur="0.4"/> of course has to be squared <pause dur="0.3"/> so we get the variance of P is three-squared times the variance of X <pause dur="0.4"/> variance of X one-point-two-five <pause dur="0.4"/> and so we get the variance of P <pause dur="0.8"/> which is large of course 'cause the multiplicative <pause dur="0.3"/> term is larger than one <pause dur="1.6"/> we'd have got variance reduction <pause dur="0.4"/> if this number instead of three was a number less than one <pause dur="0.8"/> just depends on what's # being fed in <pause dur="2.1"/> so <pause dur="0.3"/> when there's a linear relationship between your random variables don't recalculate <pause dur="0.6"/> expected mean and variance use these formulae <pause dur="16.6"/> we make far less play of the following results but nonetheless they're interesting <pause dur="0.2"/>

they're important to know about <pause dur="0.2"/> at least to be aware of <pause dur="0.4"/> there are all kinds of other relationships like this that exist <pause dur="0.6"/> if we want to if we've got to <pause dur="0.6"/> add <pause dur="0.4"/> random variables together <pause dur="0.2"/> possibly multiplied by their own constants <pause dur="0.4"/> there are all kinds of simple rules for working out the new expected values <pause dur="0.3"/> and the new variances from the old expected values and variances <pause dur="0.9"/> so a very simple case is <pause dur="0.4"/> if you want the expected value of the sum of two random variables <pause dur="0.3"/> it's just <pause dur="0.2"/> the sum of the expected values <pause dur="0.6"/> very straightforward indeed <pause dur="2.1"/> variance is more interesting <pause dur="0.2"/> when you come to think about sums <pause dur="0.4"/> because if you're dealing with a sum <pause dur="0.6"/> you don't just have to consider what's happening <pause dur="0.4"/> to each variable <pause dur="0.5"/> individually <pause dur="0.2"/> you have to think about what's happening to them <pause dur="0.3"/> together <pause dur="0.5"/> so <pause dur="0.3"/> if you're faced with a position of needing to work out the variance of a sum <pause dur="1.4"/> you have to take account of the covariance the way that they <pause dur="0.3"/> vary together <pause dur="1.0"/> so <pause dur="2.3"/> you'll notice we've got there's coefficients

being squared up here <pause dur="0.5"/> but <pause dur="0.2"/> let's look at the really simple case to make the point if you wanted the variance of a sum <pause dur="0.4"/> the variance of a sum would be the sum of the variances that's fine <pause dur="1.2"/> but <pause dur="0.2"/> you also have to take account of the covariance <pause dur="0.2"/> the extent to which they vary together <pause dur="0.8"/> and if the covariance is negative <pause dur="0.4"/> you can see that this that will <pause dur="0.3"/> reduce the variance compared with simply this sum of the individual variances the covariance is helping to <pause dur="0.6"/> have a reducing effect <pause dur="0.4"/> on the aggregate variance in there <pause dur="2.1"/> if that # if the covariance is negative <pause dur="1.0"/> notice also then <pause dur="0.3"/> that <pause dur="1.0"/> in the special case where there is no covariance where there's no relationship at all between the random variables <pause dur="0.3"/> then indeed <pause dur="1.2"/> if we want the variance of the sum it's the sum of the variances <pause dur="0.8"/> that should remind you a little bit of the story about independence and probabilities if you want the probability of a joint event <pause dur="0.6"/> we said <pause dur="0.3"/> sometimes you can just multiply the two probabilities together <pause dur="0.7"/>

but only if the two <pause dur="0.5"/> events are independent <pause dur="0.4"/> so the story here is <pause dur="0.4"/> if you want the variance of the sum <pause dur="0.3"/> you can take the sum of the variances <pause dur="0.5"/> but only if <pause dur="0.5"/> there's no relationship between the variables the covariance is zero <pause dur="2.7"/> so you then might ask well <pause dur="1.7"/> is it true that # <pause dur="1.1"/> zero covariance means that the <pause dur="0.2"/> random variables are <pause dur="0.4"/> independent can i say that if there's no covariance between the data they're really independent remember we defined independence quite precisely earlier on <pause dur="1.3"/> well <pause dur="0.6"/> unfortunately that's not the case <pause dur="1.3"/> the definition of independence is very precise and it only involves probabilities <pause dur="0.4"/> when we looked at covariance <pause dur="0.3"/> we weren't just interested in <pause dur="0.3"/> probabilities we were also interested in values that the random variables can take <pause dur="2.2"/> and it turns out <pause dur="0.2"/> that if you start off with random variables that really are independent <pause dur="0.5"/> then it's certainly true <pause dur="0.7"/> that there won't be any covariance between them the <trunc>co</trunc> rather to be <pause dur="0.3"/> <trunc>c</trunc> more careful the covariance will be zero <pause dur="0.2"/> between

them <pause dur="1.0"/> that's that's okay <pause dur="0.5"/> but <pause dur="0.3"/> it doesn't go the other way <pause dur="0.2"/> that is to say <pause dur="0.7"/> just because <pause dur="0.6"/> when you feed everything into the covariance formula everything cancels out to give you zero covariance <pause dur="0.4"/> it doesn't mean strictly speaking <pause dur="0.4"/> that the random variables are independent and the reason is <pause dur="0.3"/> that <pause dur="0.2"/> what's going into the covariance formula <pause dur="0.2"/> is not just the probabilities joint probabilities <pause dur="0.2"/> and that <pause dur="0.7"/> it's values of the random variable too <pause dur="0.6"/> when we look at independence <pause dur="0.3"/> the only issues <pause dur="0.4"/> are the marginal probabilities the individual probabilities <pause dur="2.9"/> okay <pause dur="0.2"/> that's it for the lecture can i remind you please # there's the # assessed exercise which is due in this time <pause dur="0.2"/> next week <pause dur="0.3"/> if you've got questions about it come and speak to me or your class tutor <pause dur="0.8"/> # <pause dur="0.6"/> and the problem sets for discussion this week <pause dur="0.3"/> are the ones on expected value variance and covariance <pause dur="0.4"/> okay thanks very much that's it
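Two closing checks, sketched under the lecture's own definitions: the variance-of-a-sum rule, and the standard counterexample (Y = X squared, with X symmetric about zero) showing that zero covariance does not imply independence.

```python
# Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y): verify on a toy joint
# distribution, then show zero covariance without independence.

def moments(joint):
    """Expected values, variances and covariance of a joint pmf
    given as {(x, y): probability}."""
    ex = sum(x * p for (x, y), p in joint.items())
    ey = sum(y * p for (x, y), p in joint.items())
    vx = sum((x - ex) ** 2 * p for (x, y), p in joint.items())
    vy = sum((y - ey) ** 2 * p for (x, y), p in joint.items())
    cov = sum((x - ex) * (y - ey) * p for (x, y), p in joint.items())
    return ex, ey, vx, vy, cov

# X uniform on {-1, 0, 1} and Y = X**2: Y is a function of X, yet the
# paired deviations cancel exactly and the covariance is zero.
joint = {(-1, 1): 1 / 3, (0, 0): 1 / 3, (1, 1): 1 / 3}
ex, ey, vx, vy, cov = moments(joint)

# Variance of the sum, computed directly, matches the rule above.
var_sum = sum(((x + y) - (ex + ey)) ** 2 * p for (x, y), p in joint.items())
assert abs(var_sum - (vx + vy + 2 * cov)) < 1e-12

print(cov)  # 0.0: zero covariance ...
# ... but not independent: P(X=0, Y=0) = 1/3 while P(X=0)*P(Y=0) = 1/9.
```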