

<?xml version="1.0"?>

<!DOCTYPE TEI.2 SYSTEM "base.dtd">





<publicationStmt><distributor>BASE and Oxford Text Archive</distributor>


<availability><p>The British Academic Spoken English (BASE) corpus was developed at the

Universities of Warwick and Reading, under the directorship of Hilary Nesi

(Centre for English Language Teacher Education, Warwick) and Paul Thompson

(Department of Applied Linguistics, Reading), with funding from BALEAP,

EURALEX, the British Academy and the Arts and Humanities Research Board. The

original recordings are held at the Universities of Warwick and Reading, and

at the Oxford Text Archive and may be consulted by bona fide researchers

upon written application to any of the holding bodies.

The BASE corpus is freely available to researchers who agree to the

following conditions:</p>

<p>1. The recordings and transcriptions should not be modified in any way</p>


<p>2. The recordings and transcriptions should be used for research purposes

only; they should not be reproduced in teaching materials</p>

<p>3. The recordings and transcriptions should not be reproduced in full for

a wider audience/readership, although researchers are free to quote short

passages of text (up to 200 running words from any given speech event)</p>

<p>4. The corpus developers should be informed of all presentations or

publications arising from analysis of the corpus</p><p>

Researchers should acknowledge their use of the corpus using the following

form of words:

The recordings and transcriptions used in this study come from the British

Academic Spoken English (BASE) corpus, which was developed at the

Universities of Warwick and Reading under the directorship of Hilary Nesi

(Warwick) and Paul Thompson (Reading). Corpus development was assisted by

funding from the Universities of Warwick and Reading, BALEAP, EURALEX, the

British Academy and the Arts and Humanities Research Board. </p></availability>




<recording dur="00:43:47" n="5901">


<respStmt><name>BASE team</name>



<langUsage><language id="en">English</language>

<language id="de">German</language>

<language id="fr">French</language>

<language id="it">Italian</language>



<person id="nm0919" role="main speaker" n="n" sex="m"><p>nm0919, main speaker, non-student, male</p></person>

<person id="sm0920" role="participant" n="s" sex="m"><p>sm0920, participant, student, male</p></person>

<person id="sf0921" role="participant" n="s" sex="f"><p>sf0921, participant, student, female</p></person>

<person id="sf0922" role="participant" n="s" sex="f"><p>sf0922, participant, student, female</p></person>

<person id="sf0923" role="participant" n="s" sex="f"><p>sf0923, participant, student, female</p></person>

<person id="sm0924" role="participant" n="s" sex="m"><p>sm0924, participant, student, male</p></person>

<personGrp id="ss" role="audience" size="m"><p>ss, audience, medium group </p></personGrp>

<personGrp id="sl" role="all" size="m"><p>sl, all, medium group</p></personGrp>

<personGrp role="speakers" size="8"><p>number of speakers: 8</p></personGrp>





<item n="speechevent">Lecture</item>

<item n="acaddept">Meteorology</item>

<item n="acaddiv">ps</item>

<item n="partlevel">UG1</item>

<item n="module">Measuring the atmosphere</item>




<u who="nm0919"> well good morning everybody # just to start off <pause dur="0.7"/> just like to remind you of the misprint <pause dur="1.5"/> we had in the # <pause dur="0.2"/> on the notes on bottom of page eight <pause dur="0.6"/> equation four-seven <pause dur="1.2"/> when we're doing the fit to the exponential-Y-A exponential-K-X <pause dur="0.8"/><kinesic desc="writes on board" iterated="y" dur="1:26"/> what we do is we transform it by taking natural logs of each side <pause dur="0.6"/> which is log-Y is log-A <pause dur="1.0"/> and i put <trunc>l</trunc> Ls so that L-Ns of E to remind ourselves it's logs to the base E <pause dur="1.7"/> log-A plus K-X i think there was a log in there by mistake <pause dur="0.7"/> and therefore if we plot # of the individual data points Y-I and X-I those <pause dur="0.7"/> pairs if we plot the log of Y-I against X-I <pause dur="0.6"/> then if this is a reasonable <pause dur="0.7"/> law <pause dur="0.5"/> we would expect it to be something like a straight line the gradient would be K <pause dur="0.4"/> and the intercept would give us an idea of <pause dur="0.2"/> # <pause dur="0.2"/> log-A <pause dur="0.8"/> in which case we can take the <trunc>l</trunc> <pause dur="0.5"/> antilog and get <pause dur="0.2"/> A <pause dur="0.9"/> # <pause dur="0.6"/> just <pause dur="0.9"/> realize <pause dur="0.2"/> what might have been the confusion last year if we had a power law which could also be a <pause dur="0.3"/> quite a reasonable behaviour <pause dur="0.4"/> for some transducer <pause dur="0.4"/> say Y equals A-X-to-the-K now instead of

E-to-the-<pause dur="0.3"/>X <pause dur="0.8"/> in this case <pause dur="0.4"/> obviously when we take logs we get log-Y <pause dur="0.4"/> is log-A and this time it is of course plus K-log-X <pause dur="0.5"/> and in this case one # would indeed plot log-Y <pause dur="0.2"/> against log-<pause dur="0.5"/>X-I <pause dur="1.7"/> that should be X-I i suppose for each individual data point <pause dur="0.5"/> K would be now the gradient <pause dur="0.3"/> which exponent the the power law here <pause dur="0.4"/> and log-A <pause dur="0.5"/> the intercept so hopefully <pause dur="0.5"/> if you could correct your <pause dur="0.4"/> written copy there <pause dur="0.3"/> from that misprint <pause dur="1.7"/> # <pause dur="0.2"/> next week is # <pause dur="0.5"/> week four <pause dur="1.1"/> or is it week five </u><pause dur="0.2"/> <u who="ss" trans="pause"> week five </u><u who="nm0919" trans="overlap"> week five yes it's week four this week isn't it yes <pause dur="0.3"/> next week is week five thank you <pause dur="0.7"/> and # <pause dur="1.8"/> this is week four <pause dur="0.8"/> so there's no lecture next Thursday <pause dur="0.6"/> i'm actually <pause dur="0.6"/> not in Canada next Thursday i actually arrive <pause dur="0.4"/> back from Canada at about <pause dur="0.2"/> seven-thirty A-M <pause dur="1.0"/> so # <pause dur="0.3"/> i <trunc>m</trunc> thought i might not be tremendously alert <pause dur="0.3"/> so we won't have a lecture next Thursday <pause dur="0.5"/> but what will occur next week since the lab is going to start in week six <pause dur="0.3"/> will be for the three groups <pause dur="0.2"/> who you know you are <pause dur="0.5"/> you will have your lab briefing <pause dur="1.1"/> # next

week before the week in week five before the labs start proper week six seven eight nine ten okay five weeks <pause dur="0.4"/> Tuesday ten A-M Thursday ten A-M <pause dur="0.5"/> and Friday for the easy people i'm not quite sure <pause dur="0.9"/> presume you know what time it is yeah <pause dur="1.2"/> right <pause dur="0.4"/> and then so next week i will be # recovering from my jet lag in the morning <pause dur="0.5"/> and we'll then have some lectures on <pause dur="0.3"/> twenty-second of Feb <pause dur="0.6"/> and the first of March okay <pause dur="1.6"/> right let me now we'll go through we've got quite a few of these problems to go through so let's go through those <pause dur="0.2"/> # <pause dur="0.4"/> take a <trunc>c</trunc> <pause dur="0.9"/> couple of minutes now to hand these out i'm afraid <trunc>lo</trunc> <gap reason="name" extent="2 words"/> <pause dur="0.8"/> sorry <trunc>tho</trunc> those of you who are wondering what's going on # <pause dur="0.6"/> if you want some <pause dur="0.2"/> cut on the fee you'll have to see my agent okay <pause dur="0.7"/> for this # <pause dur="0.2"/> sorry the last few minutes hasn't been terribly interesting television has it <pause dur="0.5"/> <gap reason="name" extent="2 words"/> <gap reason="name" extent="2 words"/> anybody <pause dur="0.5"/> <gap reason="name" extent="1 word"/> <pause dur="0.4"/><gap reason="inaudible" extent="1 sec"/> no okay <pause dur="0.8"/> let me also while we're doing that hand out this attendance sheet <pause dur="1.4"/> okay so let's just quickly have a look at these # particular aspects and # <pause dur="0.5"/> if you remember the first part of this question <pause dur="0.6"/> is to do with # <pause dur="1.5"/>

if i can find the # <pause dur="0.2"/> particular sheet i'm looking for <pause dur="0.9"/> is to do with # <pause dur="0.2"/> yes # <pause dur="1.6"/> we have # <pause dur="3.7"/><kinesic desc="writes on board" iterated="y" dur="1:30"/> twenty-five measurements we have T-bar <pause dur="1.3"/> # plus-or-minus sigma <pause dur="0.8"/> and we're told that that is five-point-five <pause dur="0.2"/> degrees centigrade <pause dur="0.6"/> and the standard deviation <pause dur="1.8"/> is equal to one degree centigrade <pause dur="0.4"/> so that means that our distribution looks something like this <pause dur="0.4"/> this is five degrees <pause dur="0.5"/> and sixty-eight per cent <pause dur="0.5"/> of our values are within one degree <pause dur="0.4"/> and so the standard error <pause dur="1.9"/> which is how accurately we can work out this <pause dur="0.2"/> bit at the top there <pause dur="0.5"/> is obviously equal to sigma <pause dur="0.7"/> which is the standard deviation divided by root-N <pause dur="0.4"/> and we have N is twenty-five <pause dur="0.6"/> 'cause there are actually twenty-five readings in this frequency diagram against the temperature <pause dur="1.5"/> that's T-bar <pause dur="1.3"/> and so it's obviously not a smooth curve it's a bit of a histogram <pause dur="0.3"/> and therefore that is # root of twenty-five is five <pause dur="0.4"/> so the standard <pause dur="0.2"/> deviation standard error i should say <pause dur="0.3"/> is plus-or-minus # <pause dur="0.3"/> <trunc>fi</trunc> # <pause dur="0.2"/> one <pause dur="0.4"/> over five-<pause dur="0.4"/>point-two degrees

centigrade <pause dur="0.5"/> so we could say that the value of T <pause dur="0.7"/> is <pause dur="0.2"/> is equal to five <pause dur="0.4"/> plus-or-minus nought-point-two degrees centigrade with N equals twenty-five <pause dur="1.8"/> and we would expect sixty-eight per cent <pause dur="0.2"/> with to be within one standard <pause dur="0.3"/> deviation of these twenty-five values <pause dur="0.4"/> and what are the assumptions we've made <pause dur="0.3"/> well one or two people sort of said things like oh well <pause dur="0.4"/> sort of <pause dur="1.4"/> things are pretty accurate and people have been rather careful that's not the the <trunc>as</trunc> <pause dur="0.2"/> i mean if you were not accurate and you were not careful <pause dur="0.4"/> then this <trunc>sord</trunc> it'd just be a broader wouldn't it you'd have a bigger standard deviation <pause dur="0.4"/> so the assumptions that you've made <pause dur="0.3"/> are two basically <pause dur="0.3"/> one is that there are no systematic errors <pause dur="0.6"/> it's all random <pause dur="3.5"/><kinesic desc="writes on board" iterated="y" dur="21"/> in other words <pause dur="0.5"/> it's not as if the the mean is up here and they're all shifted there's no systematic errors <pause dur="1.2"/> and <trunc>th</trunc> that the errors are normally distributed <pause dur="1.9"/> in other words they obey this Gaussian curve those are the two <pause dur="1.0"/> # normal distribution of errors <pause dur="0.5"/> okay <pause dur="0.7"/>
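[Editorial illustration, not part of the original lecture: the standard-error arithmetic in this worked example (N = 25 readings, sigma = 1 degree C, mean 5 degrees C) can be sketched in Python as follows.]

```python
import math

# Worked example from the lecture: 25 temperature readings with
# mean T-bar = 5 degrees C and standard deviation sigma = 1 degree C.
n = 25
sigma = 1.0          # standard deviation (spread of the individual readings)
t_bar = 5.0          # sample mean

# The standard error of the mean is sigma / sqrt(N): how accurately
# the peak of the distribution (the mean) is pinned down.
standard_error = sigma / math.sqrt(n)

print(f"T = {t_bar} +/- {standard_error:.1f} degrees C (N = {n})")
# prints: T = 5.0 +/- 0.2 degrees C (N = 25)
```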

those are the two assumptions <pause dur="0.4"/> and all this business about that everyone's been a bit accurate and you haven't been too careful <pause dur="0.5"/> that's all in here if you weren't very accurate and there was <trunc>m</trunc> <pause dur="0.5"/> more spread that would just come through on the standard deviation wouldn't it <pause dur="0.5"/> so let's just go through this a little bit i've been doing a little bit of <trunc>re</trunc> research as i indicated i would over the weekend <pause dur="0.8"/> so here we have our situation <pause dur="1.9"/> frequency of <pause dur="1.1"/> time <trunc>w</trunc> of of # numbers of # values of X <pause dur="0.5"/> this is our mean <pause dur="0.8"/> and they're spread in this bell-shape curve sixty-eight per cent of them <pause dur="0.4"/> should be in within sigma what we call the standard deviation <pause dur="0.5"/> and of course as we get more and more and this gets smoother and smoother <pause dur="0.4"/> we can tell where the peak is better and better <pause dur="0.5"/> that's my <pause dur="0.3"/> standard error in the estimate S-M and that's sigma <pause dur="0.4"/> divided by root-N <pause dur="0.9"/> so the standard deviation doesn't <pause dur="0.4"/> matter really how many <pause dur="0.2"/> readings you have <pause dur="1.1"/> # but as you get more and more you should be <trunc>s</trunc> define the peak

better and better so anyway i've done then a little bit of research here 'cause as i was indicating <pause dur="1.4"/> last week <pause dur="1.1"/> # <pause dur="1.3"/> there's a slight problem in my opinion anyway in the # nomenclature here when you look at these two terms they're not self-explanatory <pause dur="0.6"/> and i promised you i'd do a bit of research on this so i've asked one or two i was at an international meeting <pause dur="1.0"/> # on <trunc>m</trunc> in London on Monday so i asked the Italian people what the Italian terms were to see if they were <pause dur="0.5"/> any more self-explanatory and this was absolutely hopeless <pause dur="0.8"/> because <pause dur="0.5"/> the Italian for standard deviation is <distinct lang="it">deviazione standard</distinct> <pause dur="0.4"/> and the Italian for standard error is <distinct lang="it">errore standard</distinct> so those are not very much better <pause dur="0.7"/> however the French term is much better <pause dur="1.5"/> the French term for mean is <distinct lang="fr">moyenne</distinct> <pause dur="0.6"/> and for standard deviation is <distinct lang="fr">écart type</distinct> <pause dur="0.4"/> so <distinct lang="fr">écart</distinct> actually means a sort of spreading out typical so it's a typical spreading out <pause dur="0.8"/> so that's quite a <pause dur="0.8"/> i'm sure you'll agree that's a very

logical term <pause dur="0.5"/> and nobody as yet i've raised this with several people has been able to tell me <pause dur="0.3"/> got a few e-mails on the go <pause dur="0.7"/> what the word for standard error is and have you have you managed to <pause dur="0.2"/> find this out </u><u who="sm0920" trans="latching"> no i haven't </u><pause dur="0.7"/> <u who="nm0919" trans="pause"> i asked a few people <pause dur="0.2"/> when i was <pause dur="0.2"/> at a <pause dur="0.4"/> dinner in Paris and they <pause dur="0.2"/> gave me some funny looks <pause dur="0.3"/> when i raised it and started talking about something else <pause dur="1.6"/> right <pause dur="0.3"/> what about the German term <pause dur="1.0"/> now the German for average is quite good i think it's <distinct lang="de">durchschnitt</distinct> <pause dur="1.2"/> <distinct lang="de">durch</distinct> <pause dur="0.3"/> means # <pause dur="0.2"/> through # cut through <pause dur="0.7"/> right <pause dur="0.6"/> so it's a cut through the middle <pause dur="1.1"/> and then the next term is <trunc>s</trunc> <pause dur="0.2"/> standard <pause dur="0.2"/> <distinct lang="de">abweichung</distinct> <pause dur="0.9"/> okay <pause dur="0.8"/> which means again deviation or divergence i'm afraid doesn't it and then i asked these people that at were <pause dur="0.2"/> big statisticians and all they could come up with for standard error was <pause dur="0.4"/> <distinct lang="de">normalisiert abweichung</distinct> well i don't think that's right actually <pause dur="0.6"/> 'cause i think the <distinct lang="de">normalisiert abweichung</distinct> is sometimes you express a standard <pause dur="0.4"/> a fractional standard deviation <pause dur="0.5"/> in

other words what's the fraction of sigma-<pause dur="0.4"/>over-X you know if X was ten centimetres <pause dur="0.5"/> and this is standard deviation is one you could say the fractional standard deviation <pause dur="0.9"/> <trunc>i</trunc> is is ten per cent <pause dur="0.4"/> which i think would be this <distinct lang="de">normalisiert</distinct> one <pause dur="1.1"/> and nobody could tell me this one either so obviously # <pause dur="0.2"/> it's an international problem <pause dur="0.7"/> what to call this thing <pause dur="1.9"/>
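[Editorial aside, not in the original lecture: the distinction the lecturer is drawing — the standard deviation stays roughly fixed as readings accumulate, while the standard error sigma/sqrt(N) shrinks — can be seen numerically with simulated (hypothetical) readings.]

```python
import random
import statistics

random.seed(1)  # reproducible hypothetical data

true_mean, sigma = 5.0, 1.0   # the "true" temperature and its spread

# As N grows, the sample standard deviation keeps estimating sigma
# (it does not shrink), but the standard error of the mean does.
for n in (25, 100, 400):
    readings = [random.gauss(true_mean, sigma) for _ in range(n)]
    s = statistics.stdev(readings)   # stays near sigma = 1
    se = s / n ** 0.5                # shrinks like 1/sqrt(N)
    print(f"N = {n:3d}: stdev = {s:.2f}, standard error = {se:.3f}")
```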

right let's go on to the # next one <pause dur="0.7"/> okay <pause dur="0.9"/> and in the next one number two here <pause dur="2.4"/> we have that the area <pause dur="1.5"/> equals <trunc>pi-R-squa</trunc> one person actually managed to get that wrong <pause dur="0.5"/> pi-R-squared <pause dur="1.1"/> and R D or equals to <pause dur="0.4"/> pi-D-squared over four if you like and D <pause dur="0.5"/> apparently is equal to twenty centimetres <pause dur="0.7"/> plus-or-minus point-one centimetre <pause dur="2.2"/> # <pause dur="1.6"/> right <pause dur="0.9"/> that's what it says here which is one part <pause dur="0.3"/> one in two-hundred <pause dur="2.3"/><kinesic desc="writes on board" iterated="y" dur="20"/> that error <pause dur="4.8"/> right so # <pause dur="0.8"/> well we know what the error is it's # <pause dur="1.5"/> a hundred-pi <pause dur="0.5"/> that's twenty metres squared <pause dur="0.4"/> sorry we know what the area itself is <pause dur="2.4"/> which is three-hundred-and-fourteen <pause dur="0.3"/> <trunc>s</trunc> square centimetres so the question

now is what's the # error in that <pause dur="2.0"/> and at this stage # we realize here we've got a formula of the form Z <pause dur="0.6"/> it's a product isn't it here <pause dur="1.1"/> in that # # my R is <pause dur="0.2"/> squared <pause dur="0.2"/> so the problem is if i've got an error in R and i square it what happens to the error in R-squared <pause dur="1.2"/> and # <pause dur="0.2"/> basically <pause dur="1.1"/> well <pause dur="0.5"/> what i'm going to do now actually <pause dur="0.5"/> i presented these formula before <pause dur="0.5"/> # for combination of errors when you had a product or a square or a power as you've got here <pause dur="0.8"/> just # as # <pause dur="2.0"/> out of the hat so to speak <pause dur="0.4"/> so let's actually just do <pause dur="0.3"/> these things properly <pause dur="0.4"/> which just involves a little bit of calculus <pause dur="0.5"/> and is also what is required <pause dur="0.7"/> for question three <pause dur="0.7"/> of the long problems that i'll hand out in a few minutes okay <pause dur="0.4"/> so <pause dur="1.9"/> right let's just <pause dur="0.6"/> go through these errors a little bit more formally <pause dur="1.4"/><kinesic desc="writes on board" iterated="y" dur="2"/> and those <pause dur="0.5"/> formula that are just <pause dur="0.2"/> quoted on the sheet let's just see where they come from <pause dur="0.7"/> and let's just have the formula Z <pause dur="0.3"/><kinesic desc="writes on board" iterated="y" dur="8"/> i'm going to derive <pause dur="0.4"/> from A a constant time some<pause dur="0.9"/>thing i measure called X <pause dur="0.5"/> and something

i measure called Y which could also be multiplied by B <pause dur="0.7"/> and i'm interested in let's suppose there's an <pause dur="0.5"/> so what's <trunc>i</trunc> the first thing is what's the effect <pause dur="1.2"/><kinesic desc="writes on board" iterated="y" dur="1:12"/> on Z <pause dur="0.6"/> of an error <pause dur="1.3"/> D-X <pause dur="0.7"/> okay <pause dur="0.3"/> well i can work that out by differentiating if i just differentiate this with respect to X and i get D-Z D-partial <pause dur="0.5"/> D-X <pause dur="0.5"/> which means that means differentiate keeping <pause dur="1.6"/> keep Y constant when you do this <pause dur="0.9"/> okay <pause dur="0.4"/> because i'm only interested in the effect of the error on X <pause dur="0.5"/> so this is a constant <pause dur="0.4"/> and that is of course equal to # A <pause dur="1.3"/> and similarly <pause dur="1.5"/> the effect <pause dur="1.3"/> well i could do the same thing here <pause dur="0.5"/> on Z <pause dur="0.7"/> of an error <pause dur="1.6"/> D-Y <pause dur="0.4"/> if X is constant <pause dur="0.9"/> right <pause dur="0.6"/> and that of course is D-Z <pause dur="0.5"/> D-Y is lo and behold equal to B <pause dur="0.5"/> right <pause dur="2.6"/> in other words i've got my error <pause dur="0.3"/> one <pause dur="0.6"/> shall we call it D-Z here <pause dur="0.5"/> is equal to A-D-X <pause dur="0.6"/> and the error two <pause dur="1.3"/> D-Z <pause dur="0.5"/> is equal to B-D-Y <pause dur="0.7"/> okay now we make the # assumption <pause dur="1.0"/> that # supposing # <pause dur="0.3"/> if it's true <pause dur="0.2"/> that the errors in X and Y are independent <pause dur="0.8"/> then if one is high the other can be low et cetera et cetera <pause dur="0.4"/> then

these two <pause dur="0.3"/> error terms don't add up algebraically <pause dur="0.4"/> they add up as the sums of the squares don't they <pause dur="0.2"/> okay <pause dur="1.9"/> so # <pause dur="1.1"/> whoops that's not too good <pause dur="0.8"/> the # <pause dur="0.9"/> so we're now going to have these two error terms <pause dur="3.8"/> that's not much better either <pause dur="1.7"/> error one and error two <pause dur="3.3"/> ah <pause dur="1.8"/> we'll go back to this one i think for the moment <pause dur="1.8"/><kinesic desc="writes on board" iterated="y" dur="36"/> error <pause dur="0.6"/> one <pause dur="0.2"/> and error two <pause dur="0.9"/> are independent <pause dur="4.9"/> therefore total-error-squared <pause dur="2.3"/> equals error-one-squared <pause dur="2.3"/> plus <trunc>error-two-squa</trunc> you can see what's coming here can't you <pause dur="1.0"/> two-squared and therefore <pause dur="0.8"/> D-Z-<pause dur="0.8"/>squared <pause dur="0.3"/> equals A-<pause dur="1.8"/>D-X-squared <pause dur="0.8"/> plus B-<pause dur="0.8"/>D-Y-squared okay <pause dur="2.6"/> that's the first formula <pause dur="0.7"/> right <pause dur="0.9"/> now we can now go on to the second one <pause dur="0.3"/> which i'll do under here <pause dur="2.8"/> and this is a slightly more # complicated one <pause dur="0.2"/> # let's write up here <pause dur="0.6"/><kinesic desc="writes on board" iterated="y" dur="9"/> Z <pause dur="0.6"/> equals # A <pause dur="0.4"/> which A is a constant <pause dur="0.7"/> X-<pause dur="0.2"/>to-the-alpha <pause dur="0.2"/> Y-<pause dur="0.5"/>to-the-beta <pause dur="1.1"/> so this is the general term now where we have a product <pause dur="0.4"/> of two terms there could be an X-variable and a Y-variable <pause dur="0.9"/> raised perhaps to some term alpha and beta <pause dur="0.5"/> or there could be just one of

those which is the situation we've got there isn't it right <pause dur="0.4"/> where alpha is two <pause dur="0.8"/> and now we're going to do exactly the same thing <pause dur="0.6"/> right <pause dur="1.3"/> we're going to do D-Z D-X we're going to find the error due to the error <pause dur="0.2"/> error in Z due to X <pause dur="0.5"/> and error in X <pause dur="1.3"/> and the error due to an error in Y okay <pause dur="0.5"/> so we do the partials again here D-Z <pause dur="1.3"/><kinesic desc="writes on board" iterated="y" dur="55"/> how does <trunc>e</trunc> Z change if there's a change in X <pause dur="0.2"/> keeping Y constant <pause dur="0.6"/> well that's A and then X-to-the-alpha goes as <trunc>al</trunc> X-<pause dur="0.6"/>alpha-minus-one <pause dur="2.2"/> Y-<pause dur="0.9"/>B <pause dur="0.9"/> and let's write that as <pause dur="0.2"/> A-<pause dur="0.2"/>alpha-X <pause dur="0.2"/> over X i haven't done very much there Y-<pause dur="0.5"/>to-the-B <pause dur="1.0"/> except i now realize i'm back where i started with X-A <pause dur="0.2"/> X-S <pause dur="0.4"/> this thing here <pause dur="1.5"/> X <pause dur="0.2"/> whoops X-to-the-alpha <pause dur="0.9"/> Y-to-the-beta <pause dur="0.4"/> so what i've got there therefore <pause dur="0.3"/> is alpha-<pause dur="0.2"/>Z <pause dur="0.7"/> over <pause dur="0.2"/> X <pause dur="0.5"/> okay <pause dur="0.7"/> in other words D-Z-one <pause dur="1.5"/> error one <pause dur="1.0"/> is equal to # <pause dur="0.5"/> alpha-Z-<pause dur="2.3"/>D-X <pause dur="0.3"/> over X <pause dur="0.6"/> there's my fractional error coming in can you see <pause dur="0.5"/> that's why it's a fractional error in this term <pause dur="0.4"/> when these two are multiplied okay <pause dur="0.8"/> and i've actually # so that's my

error there straight away <pause dur="0.3"/> and in fact this is the one i want <pause dur="0.2"/> over here isn't it i could do this one now <pause dur="0.4"/> straight away <pause dur="1.9"/> if there's only an error in X term here <pause dur="0.2"/> then i've got that the error D <pause dur="0.3"/> # error in A the fractional error in A <pause dur="0.6"/><kinesic desc="writes on board" iterated="y" dur="38"/> is equal to twice <pause dur="0.4"/> the fractional error in D shall we say or # <pause dur="0.9"/> B-B over D <pause dur="0.2"/> okay <pause dur="2.3"/> so <pause dur="0.4"/> D-D the error <pause dur="0.3"/> in D <pause dur="1.9"/> is one part in two-hundred <pause dur="0.8"/> so the error in A is worse <pause dur="1.2"/> it's twice as bad 'cause we've squared it okay <pause dur="0.3"/> so it's one in one-hundred <pause dur="0.9"/> so i can say my area <pause dur="1.1"/> is equal to three-hundred-and-fourteen <pause dur="0.2"/> square centimetres <pause dur="0.4"/> plus-or-minus three-point-<pause dur="0.6"/>one <pause dur="0.2"/> square <pause dur="0.6"/> centimetres <pause dur="1.8"/> okay <pause dur="4.4"/> # <pause dur="1.2"/> the point about it as you'll see when you get to question three on the other sheet <pause dur="0.7"/> is that i gave you some special formulae these two here <pause dur="0.2"/> we haven't quite finished this one yet <pause dur="0.8"/> but in question three on the second sheet we have a slightly different arrangement don't we <pause dur="0.5"/> for the # <pause dur="0.5"/> platinum resistance thermometer <pause dur="0.7"/> so if you understand what we're doing here no let's so that's <pause dur="0.3"/>

this is the <pause dur="0.2"/> now let's next one oh good this one works <pause dur="0.4"/> D-Z D-Y we'll do now <pause dur="0.9"/> what's the error change in Z <pause dur="0.4"/> for a change in Y <pause dur="0.5"/> okay <pause dur="0.6"/> and that is # A-X-to-the-alpha 'cause that's constant for the moment <pause dur="0.4"/> beta-<pause dur="0.3"/>Y-to-the-beta-minus-one <pause dur="1.3"/> okay <pause dur="1.1"/> # if i can <pause dur="0.7"/> i can now write this as A-X-to-the-alpha-Y # <pause dur="0.8"/> we'll have a beta out here <pause dur="0.5"/> Y-to-the-beta <pause dur="0.5"/> over Y and i realize this thing here is what i started with Z <pause dur="0.5"/> so the <pause dur="0.4"/> D-Z error two if you like <pause dur="0.6"/> the second error term due to a change in Y <pause dur="0.6"/> 'cause X and Y can have errors on <pause dur="0.4"/> is now beta <pause dur="1.4"/> # <pause dur="3.6"/> and i'll i'll bring the Y over here Z <pause dur="0.2"/> D-Y <pause dur="0.3"/> over Y 'cause there's a Y under there 'cause i've had to put a Y back in because i lost a Y when i differentiated the beta-minus-one you see that's where the Y underneath comes from <pause dur="1.9"/> okay so now <pause dur="0.2"/> if these are independent <pause dur="3.9"/>

in other words <pause dur="0.5"/> if <pause dur="2.0"/> usual thing if Y goes high with an error or <pause dur="0.3"/> high there's no reason for X to have a high error <pause dur="0.3"/> they're totally independent <pause dur="1.7"/> then i can say that the total error in Z-squared <pause dur="5.3"/><kinesic desc="writes on board" iterated="y" dur="56"/> is those squared is alpha-<pause dur="0.2"/>Z <pause dur="1.0"/> D-X over X <pause dur="0.8"/> all squared plus <pause dur="0.5"/> beta-<pause dur="0.2"/>Z <pause dur="0.6"/> D-Y over Y <pause dur="0.4"/> all squared <pause dur="0.6"/> and therefore i could write this bring i'll take the Zs out here <pause dur="0.3"/> and therefore D-Z over Z <pause dur="1.2"/> all squared is equal to alpha-<pause dur="0.4"/>D-X over X all squared that's the <pause dur="0.2"/> term we were left with there when there was only X <pause dur="0.6"/> and of course if two things squared are equal to each other we can <pause dur="1.7"/> then the things themselves are equal if A-squared equals B-squared A equals B plus <pause dur="0.5"/> we've got the other one <pause dur="0.3"/> beta <pause dur="0.2"/> if that's the power <pause dur="0.5"/> D-Y over Y <pause dur="0.6"/> all squared <pause dur="1.6"/> and the we notice here <pause dur="1.2"/> as we said before when we get these product terms involved or power law <pause dur="0.4"/> it's the fractional error that's important can you see it here <pause dur="0.5"/> D-Z over Z the

fraction D-X over X D-Y over Y <pause dur="10.7"/> okay so # rather than producing them out of the hat as before <pause dur="1.0"/> it's little bit of calculus a nice little exercise in calculus okay <pause dur="1.0"/> just <pause dur="0.7"/> differentiating <pause dur="1.3"/> X-to-the-N is N-X minus X-to-the <pause dur="0.2"/> N-X-to-the-N-minus-one isn't it <pause dur="3.7"/> right <pause dur="4.2"/> everybody happy everybody read that yeah <pause dur="0.3"/> okay <pause dur="1.0"/> no <pause dur="1.7"/> yes no <pause dur="10.1"/> sorry <pause dur="7.1"/> shall we do number three </u><pause dur="6.3"/> <u who="sf0921" trans="pause"> excuse me </u><pause dur="0.2"/> <u who="nm0919" trans="pause"> yes </u><pause dur="0.2"/> <u who="sf0921" trans="pause"> how did you get <pause dur="0.2"/> error in A as one in one-thousand </u><pause dur="1.7"/> <u who="nm0919" trans="pause"> sorry </u><u who="sf0921" trans="latching"> <gap reason="inaudible" extent="1 sec"/></u><u who="sf0922" trans="overlap"> i got the answer as twenty-two </u><pause dur="1.2"/> <u who="sf0921" trans="pause"> i got the error in B as one in two-hundred </u><pause dur="0.5"/> <u who="nm0919" trans="pause"> okay <trunc>i</trunc> what <kinesic desc="writes on board" iterated="y" dur="33"/></u><u who="sf0921" trans="overlap"> and error in A is one in one-thousand </u><u who="nm0919" trans="latching"> oh sorry it was one in a hundred then sorry that's okay A equals pi-R-squared <pause dur="0.8"/> twice over four <pause dur="1.3"/> D-squared <pause dur="0.6"/> so D-A <pause dur="0.7"/> over A equals twice <pause dur="0.2"/> D-D <pause dur="0.6"/> over D <pause dur="0.2"/> okay <pause dur="2.4"/> this one here was one in <pause dur="0.6"/> two-hundred <pause dur="0.8"/> so this is one <pause dur="0.5"/> in one-hundred <pause dur="0.7"/> so A <pause dur="0.5"/> equals three-hundred-and-fourteen plus-or-minus three <pause dur="0.4"/> did i write one in a thousand there </u><pause dur="0.7"/> <u who="sf0923" 
trans="pause"> yeah </u><pause dur="0.2"/> <u who="nm0919" trans="pause"> okay so once i've done this right <pause dur="9.0"/> this is half a per cent <pause dur="1.5"/><kinesic desc="writes on board" iterated="y" dur="4"/> and twice it is <pause dur="0.2"/> one per cent <pause dur="4.0"/> so the other <pause dur="0.2"/> this is the inverse isn't it of the one we did <pause dur="0.4"/> last week <pause dur="0.6"/> where we had the <trunc>t</trunc> <pause dur="0.2"/> the the <pause dur="0.2"/> period

of a pendulum <pause dur="0.4"/> was proportional to L to the <trunc>h</trunc> the length to the half <pause dur="0.7"/> in which case the period to the fractional error in the period was <pause dur="0.3"/> half <pause dur="0.2"/> the error in the length <pause dur="0.6"/> here since it's length's <pause dur="0.9"/> squared it's twice <pause dur="1.0"/> just goes as the power law doesn't it <pause dur="5.4"/> well i think most people who got there got number three correct <pause dur="1.4"/> got that far <pause dur="0.6"/> it's V <pause dur="0.7"/><kinesic desc="writes on board" iterated="y" dur="3"/> equals K-U-minus-C <pause dur="0.5"/> and K here this is a velocity <pause dur="0.7"/> in metres per second <pause dur="2.2"/> # K is # <pause dur="0.8"/> three-point-two volts per metre per second <pause dur="0.6"/> so no volts <pause dur="0.2"/> sorry that's a voltage in volts <pause dur="1.3"/> and C <pause dur="0.4"/> was equal to six-point-four volts <pause dur="3.4"/> and if you think about that <pause dur="0.6"/> you <trunc>ou</trunc> you've got a <pause dur="1.4"/> that means there's a a characteristic looking something like this <pause dur="0.5"/> if this is V <pause dur="2.2"/> then # it's a straight line <pause dur="0.4"/> # if we put # <pause dur="1.1"/> # <pause dur="5.9"/> so we can substitute our value in here we're also given so that # <pause dur="0.3"/> yeah this # if if U is equal this is U <pause dur="0.3"/> U equals to nought that's minus-six-point-four <pause dur="0.7"/> down there comes up here <pause dur="0.4"/> and we're interested in a point we're given the point U equals what is it <pause dur="0.6"/>

U <trunc>e</trunc> U-I equals # <pause dur="0.3"/> ten metres per second <pause dur="0.6"/><kinesic desc="writes on board" iterated="y" dur="1:10"/> and we measure a voltage I equal to twenty-six volts <pause dur="0.8"/> and if you substitute that point in there <pause dur="1.1"/> what value do you get <pause dur="0.2"/> in the formula <pause dur="1.7"/> and then whereabouts is my actual data point does it lie on that line or not <pause dur="0.4"/> and that's the residual i'm interested in <pause dur="0.4"/> so if i substitute <pause dur="3.1"/> U equals ten <pause dur="0.2"/> metres per second i get V <pause dur="0.7"/> that's the one on the line <pause dur="0.4"/> is equal to thirty-two <pause dur="0.6"/> minus six-point-four <pause dur="0.4"/> is twenty-five-point-six volts <pause dur="1.0"/> so that's the # formula <pause dur="2.9"/> which is this point here <pause dur="0.8"/> twenty-five-point-six <pause dur="0.4"/> and the actual data point is up here <pause dur="1.1"/> at twenty-six volts so the residual <pause dur="0.8"/> which is how far my data point is off the line <pause dur="0.4"/> is obviously equal to <pause dur="3.4"/> equals data point <pause dur="1.3"/> minus # <pause dur="1.1"/> # <pause dur="1.0"/> fit <pause dur="1.4"/> and that's obviously equal to twenty-six minus twenty-five-point-six equals nought-point-four volts <pause dur="0.5"/> i think everybody actually got that one right apart from the odd person who got an arithmetic # <pause dur="0.6"/> substitution # <pause dur="0.5"/> in error <pause dur="1.9"/> right so next time let's go now we've got quite a few of these

to go through this time <pause dur="1.0"/> some more problem sheets here these are the longer ones # <pause dur="6.3"/> well i think # <pause dur="0.4"/> everybody who did question one <pause dur="0.8"/> which is really # <pause dur="2.4"/> # i don't know if i really need to go through this in all its gory detail because i think most people did get this one <pause dur="0.2"/> correct <pause dur="10.7"/> i think everybody who tackled this clearly could # <pause dur="0.5"/> work out we had these <pause dur="0.6"/> ten <trunc>val</trunc> <pause dur="1.5"/> what did i do with my # <pause dur="0.2"/> good one <pause dur="2.0"/> we had our ten values of # <pause dur="0.2"/> T <pause dur="4.8"/><kinesic desc="writes on board" iterated="y" dur="20"/>

and people most people got that T-bar was equal to ten-point-two-four <pause dur="1.2"/> degrees centigrade <pause dur="0.9"/> and that they got the standard deviation was equal to what did i get # <pause dur="0.8"/> plus-<pause dur="0.3"/>nought-point-one-five degrees centigrade <pause dur="1.1"/> okay <pause dur="1.0"/> one or two people made an error but i think i've identified that when they got the # <pause dur="0.2"/> arithmetic wrong <pause dur="2.0"/> and then # the <trunc>stan</trunc> the # standard then we also found that seven <pause dur="0.5"/> out of ten <pause dur="1.6"/><kinesic desc="writes on board" iterated="y" dur="6"/> were within <pause dur="1.8"/> sigma of <pause dur="0.3"/> T-bar <pause dur="1.8"/> and that seems it's fairly reasonable and we're supposed to mention there <pause dur="0.5"/> that when we had this distribution <pause dur="0.5"/> we would

normally expect if you had ten <pause dur="1.4"/> readings here you'd expect sixty-eight per cent you'd expect seven within that so six you know <pause dur="0.3"/> seven <pause dur="0.3"/> you'd expect sixty-eight per cent so seventy is pretty good <pause dur="2.2"/> and then the next thing you had was that # the standard error <pause dur="0.9"/><kinesic desc="writes on board" iterated="y" dur="20"/> was equal to nought-point-one-five over root-<pause dur="0.2"/>ten <pause dur="0.7"/> which is something like nought-<pause dur="0.3"/>point-o-five degrees centigrade <pause dur="0.6"/> so you could then say that your final error <pause dur="0.4"/> if you like <pause dur="0.4"/> is ten-point-two-four <pause dur="2.8"/> plus-or-minus nought-point-nought-five <pause dur="0.3"/> that's the standard error <pause dur="1.4"/> that's how accurately you think you've got <pause dur="0.5"/> that mean <pause dur="1.1"/> even though quite a lot most of the <pause dur="0.5"/> points are <pause dur="0.2"/> further away from that <pause dur="0.6"/> what is the assumption you've been making here <pause dur="5.1"/><kinesic desc="writes on board" iterated="y" dur="2"/> and again the assumption # is not that people are being careful with the thermometer et cetera et cetera hopefully that's all in <pause dur="0.5"/> these standard errors and standard deviations <pause dur="0.3"/> the <trunc>sys</trunc> <pause dur="0.3"/> the # <sic corr="assumption">assumetion</sic> is that it's a normal distribution <pause dur="4.8"/><kinesic desc="writes on board" iterated="y" dur="11"/> and that there's no

systematic error <pause dur="3.8"/> if i was using the thermometer that was one degree out because there was a break in the # mercury <pause dur="0.5"/> then of course this would all be invalid <pause dur="0.4"/> so these are the two <pause dur="0.9"/> that they're distributed <pause dur="0.6"/> according to this bell-shape curve they're random <pause dur="0.5"/> like the property of noise <pause dur="0.7"/> and then the last question was if <pause dur="0.4"/> if you have <pause dur="0.4"/> you have an eleventh reading T is eleven-point-<pause dur="0.6"/>two degrees <pause dur="0.6"/> what do you do with it <pause dur="1.6"/> and some people sort of wrote and said whoa <pause dur="0.4"/> looks a bit far out <pause dur="1.6"/> you know this looks a bit dodgy that one it's a bit far from there <pause dur="0.5"/> i'd probably throw it <shift feature="voice" new="laugh"/>away <shift feature="voice" new="normal"/>that's not correct <pause dur="0.5"/> the correct version here is to say <pause dur="0.7"/> what is the difference of that from that and the answer it's one degree centigrade <pause dur="0.7"/> different <pause dur="1.1"/> the standard deviation is point-one-five so this is six standard deviations outside there <pause dur="0.7"/> so this is one standard deviation six is right out here <pause dur="0.7"/> we know there's a ninety-nine-point-nine-and-a-half per cent chance of everything being within

three standard deviations <pause dur="0.6"/> so you must say if this is six standard deviations away from the mean <pause dur="0.4"/> therefore i will reject it <pause dur="0.5"/> that's correct just to see ooh it looks a bit dodgy it's a bit <pause dur="0.3"/> different from all the others <pause dur="0.2"/> is not correct <pause dur="1.3"/> the whole point is we can do this analytically <pause dur="0.4"/> okay <pause dur="2.8"/> that's what we're learning hopefully here <pause dur="1.8"/> right now we get to the rather amusing little bit number <pause dur="0.2"/> two <pause dur="0.3"/> number one <pause dur="1.6"/> most people <pause dur="0.4"/> didn't have too much bother with that one okay <pause dur="4.2"/> and i'm afraid no it wouldn't <pause dur="0.7"/> i suppose the <pause dur="0.3"/> advantage of number <pause dur="0.2"/> two <pause dur="0.8"/> is if you struggled over it and got it wrong <pause dur="1.4"/> and i <pause dur="0.3"/> and i explain or i've explained <pause dur="0.7"/> on the sheet where you got it wrong <pause dur="0.8"/> <trunc>y</trunc> you'll certainly get the message as to what the difficulty is so basically <pause dur="1.4"/> we have <pause dur="0.6"/> R equals <pause dur="0.2"/><kinesic desc="writes on board" iterated="y" dur="4"/> A-<pause dur="1.0"/>exponential-<pause dur="0.2"/>B-<pause dur="0.2"/>T <pause dur="1.3"/> and what we have in this particular situation is the resistance <pause dur="0.6"/> of a <pause dur="0.5"/> thermistor yes </u><pause dur="0.2"/> <u who="sm0924" trans="pause"> would it be over T <unclear>too</unclear></u><pause dur="1.0"/> <u who="nm0919" trans="pause"> it is indeed yes thank you <pause dur="3.2"/> right <pause dur="3.7"/> okay so <pause dur="0.3"/> we're we're trying to measure temperature and we've got

something in resistance and it's not linear <pause dur="0.7"/> okay <pause dur="1.2"/> in fact # if we plot R <pause dur="0.9"/> against T <pause dur="0.4"/> it looks something like # <pause dur="0.4"/> this <pause dur="0.9"/> if it was <pause dur="0.3"/> B-T it would look something like <pause dur="0.4"/> that <pause dur="0.4"/> so thanks very much for pointing out i've got it i've written i copied it down wrong <pause dur="2.0"/> right so the question is <pause dur="0.2"/> what is the sensitivity of the device <pause dur="3.2"/><kinesic desc="writes on board" iterated="y" dur="38"/> what do we mean by sensitivity <pause dur="0.7"/> we mean what's the change in resistance <pause dur="5.8"/> for a change in temperature <pause dur="3.0"/> in other words it's going to be in ohms per K shall we say it's in <pause dur="0.2"/> K is absolute temperature there T big-T is the absolute temperature right <pause dur="1.2"/> now everybody realizes that's what we mean <pause dur="0.9"/> but the problem is that most people <pause dur="2.7"/> have # remember <pause dur="0.9"/> a little example we did before where we had V against T <pause dur="0.5"/><kinesic desc="writes on board" iterated="y" dur="30"/> and V was equal to B-T <pause dur="0.4"/> and that was a straight line through the origin <pause dur="0.6"/> so they then said the sensitivity <pause dur="2.8"/> is actually # the change in voltage here <pause dur="0.4"/> over the change in temperature <pause dur="1.1"/> and they then said that's V-over-T well

in fact what you're interested in here clearly for a given temperature <pause dur="0.7"/> is the change in temperature <pause dur="1.1"/> resulting in a change in voltage which is the gradient <pause dur="2.0"/> okay <pause dur="0.7"/> and so you're interested in that gradient a little change in V over a little change in T <pause dur="0.4"/> and it so happens that this is a straight line through the origin <pause dur="0.4"/> so <pause dur="0.3"/> # if you actually also <pause dur="0.3"/> work out # <pause dur="0.4"/> V-<pause dur="0.4"/><kinesic desc="writes on board" iterated="y" dur="35"/>-over-T <pause dur="0.3"/> you will also <pause dur="0.2"/> get the right answer <pause dur="0.2"/> for the gradient <pause dur="2.7"/> so something like # half the class here <pause dur="0.4"/> said the sensitivity is <trunc>gr</trunc> equal equal to R-<pause dur="0.4"/>over-T <pause dur="0.5"/> and as we take a point here <pause dur="0.8"/> what we're actually interested in <pause dur="0.3"/> is a change delta-T here <pause dur="1.5"/> what does that give <pause dur="0.2"/> as a change delta-R <pause dur="0.2"/> that's what we really want isn't it <pause dur="0.5"/> is D-R <pause dur="0.2"/> D-T <pause dur="1.6"/> okay <pause dur="0.5"/> now if it's linear that's constant and it's the straight line through the origin <pause dur="0.6"/> but if you then suddenly decide oh well let's say that's R-over-T <pause dur="0.8"/> what you're actually calculating the sensitivity i think everybody can see is this local gradient <pause dur="0.7"/> what most if you calculate R-over-T

you're actually calculating that gradient <pause dur="5.0"/><kinesic desc="writes on board" iterated="y" dur="2"/> which is <pause dur="0.5"/> not the same thing <pause dur="1.2"/> so if you look back at the notes when we talked about that <pause dur="0.6"/> i think we did actually say that if it's a non-linear relationship <pause dur="1.7"/> then it's <trunc>n</trunc> <pause dur="0.7"/> then this gradient you have to work out the gradient and the gradient as you can see here <pause dur="0.6"/> is changing that's because it's non-linear isn't it <pause dur="1.5"/> so as we <pause dur="0.4"/> we got different gradients here <pause dur="0.4"/> and different gradients there <pause dur="2.0"/> right so basically <pause dur="2.3"/> hopefully those who ploughed through this using the wrong values <pause dur="0.5"/> when you see that diagram you can see what you've done wrong okay <pause dur="5.7"/> right so what we actually have to do now <pause dur="1.0"/> is to differentiate this thing here <pause dur="0.9"/> in other words we have to work out D-R D-T <pause dur="2.2"/><kinesic desc="writes on board" iterated="y" dur="2"/> okay and we realize that this is # <pause dur="0.6"/> we have to use the chain rule here <pause dur="1.5"/> if it was D-D-X of exponential-X <pause dur="0.2"/> we get exponential-X <pause dur="1.1"/> okay but it's exponential-B-over-T so we first of all differentiate the exponential and get what we started with <pause dur="0.7"/><kinesic desc="writes on board" iterated="y" dur="5"/> which is A-<pause dur="0.5"/>exponential-<pause dur="0.6"/>B-over-T <pause dur="0.2"/>

and then we have to differentiate what's in the exponential okay <pause dur="0.8"/> and so the D-D-T <pause dur="0.4"/> of B-over-T <pause dur="4.3"/><kinesic desc="writes on board" iterated="y" dur="7"/> is equal to minus-B over T-squared <pause dur="1.4"/> so this is minus-B <pause dur="0.3"/> over T-squared and i could if i wanted write that back as minus-B over T-squared and this is <pause dur="0.5"/> back what i started with times R <pause dur="0.5"/> probably makes <pause dur="0.5"/> some people left it like that but this <pause dur="0.6"/> is a bit easier to see <pause dur="1.8"/> this looks correct doesn't it because the gradient is in fact negative okay <pause dur="0.3"/> when D # <pause dur="0.2"/> when D-T goes up <pause dur="0.2"/> okay <pause dur="0.9"/> and we increase # <pause dur="1.3"/> # <pause dur="0.4"/> T when we decrease R <pause dur="1.2"/> that is negative <pause dur="3.4"/> right <pause dur="0.2"/> hopefully people <pause dur="0.2"/> if you can't follow the maths there you'd better have a word with your maths people <pause dur="0.3"/> integrating differentiating a function of a function <pause dur="1.7"/> right so at this stage now # <pause dur="0.5"/> we're asked what is the sensitivity <pause dur="0.5"/> so we have to work out <pause dur="0.2"/> D-R D-T at two temperatures <pause dur="4.9"/> so D-R <pause dur="0.8"/> D-T the first one we need to do is at # <pause dur="2.4"/> two-hundred-and-seventy-three <pause dur="0.6"/> and then we also need D-R <pause dur="0.5"/> D-T <pause dur="0.6"/> at # <kinesic desc="writes on board" iterated="y" dur="2"/> three-hundred-and-seventy-three okay so we're

interested in the gradient here at two-seven-three <pause dur="1.7"/><kinesic desc="writes on board" iterated="y" dur="6"/> and the one down here which we think is probably a bit less at three-seven-three <pause dur="1.7"/> what i need to do also is to work out what the value of R is and if i substitute in there <pause dur="0.9"/><kinesic desc="writes on board" iterated="y" dur="8"/> # # the values i get two-thousand <pause dur="0.5"/> twenty-three-thousand-<pause dur="0.4"/>and-eighty-two ohms <pause dur="1.0"/> is the value here <pause dur="1.3"/> right <pause dur="0.4"/> at <trunc>two-s</trunc> # whereas at R <trunc>th</trunc> <kinesic desc="writes on board" iterated="y" dur="2"/> just putting A and B in with the values <pause dur="0.2"/> on the sheet <pause dur="0.8"/> leave you to do that <pause dur="0.6"/> R three-seven-three <pause dur="0.4"/> is quite a lot less it's only four-hundred-and-fifty-four ohms okay <pause dur="2.7"/> so we could put those in in <pause dur="0.8"/> green i suppose here <pause dur="0.5"/> this one down here is actually four-hundred-and-<pause dur="0.8"/>fifty-four <pause dur="0.6"/> this one up here <pause dur="0.7"/> is twenty-three-thousand <pause dur="1.6"/> so it's a big change isn't it <pause dur="1.2"/> and the question is now <pause dur="0.7"/> so by changing a hundred degrees i seem to have changed the resistance by about <pause dur="0.3"/> twenty-three-thousand ohms <pause dur="0.6"/> so it looks as if i'm going to get some rather large changes in # <pause dur="1.1"/> for a degree temperature change i'm going to get a lot of ohms change so it looks potentially as if i've got rather a good # sensitivity <pause dur="1.8"/>
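The two resistances and the two gradients that follow can be checked numerically. B = 4000 K is stated in the lecture; A is on the problem sheet but not read out, so the value 0.01 ohms used here is an assumption, chosen because it reproduces the quoted resistances:

```python
import math

B = 4000.0  # K, as given in the lecture
A = 0.01    # ohms -- assumed value; the sheet's A is not read out,
            # but 0.01 reproduces the quoted resistances

def resistance(T):
    # thermistor characteristic R = A * exp(B / T)
    return A * math.exp(B / T)

def sensitivity(T):
    # chain rule: dR/dT = -R * B / T**2 (negative: R falls as T rises)
    return -resistance(T) * B / T ** 2

print(resistance(273.0))   # about 23,000 ohms
print(resistance(373.0))   # about 454 ohms
print(sensitivity(273.0))  # about -1,240 ohms per K
print(sensitivity(373.0))  # about -13 ohms per K
```

Because the relationship is non-linear, the sensitivity must come from the local gradient dR/dT, not from R/T — the distinction made on the board above.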

so this is equal to # <pause dur="0.3"/> B <pause dur="0.7"/> is equal to # <pause dur="0.4"/> what is B equal to four-thousand <pause dur="2.0"/><kinesic desc="writes on board" iterated="y" dur="2"/> and T is equal to in this one two-seven-three <pause dur="0.5"/> so the one the value here <pause dur="0.4"/> is equal to # <pause dur="0.6"/> let's do this # <pause dur="0.2"/> this one here R which is two-three <pause dur="0.8"/> this is two-seventy-three so that's <pause dur="0.2"/> the value of the resistance twenty-three-thousand-and-eighty-two <pause dur="0.6"/> then it's B four-thousand <pause dur="0.9"/> over two-seven-three <pause dur="0.2"/> two-seven-three <pause dur="0.4"/> minus <pause dur="0.5"/> and that comes out to equal minus # <pause dur="3.5"/> one-two-three-eight <pause dur="0.3"/> ohms per K <pause dur="0.4"/> okay <pause dur="5.2"/> this is <pause dur="0.2"/> <trunc>four-thou</trunc> that's about <trunc>twen</trunc> yeah that's about a twentieth of that number right <pause dur="0.3"/> this one here <pause dur="0.6"/> what i'm doing is i'm substituting here obviously an R <pause dur="0.7"/> minus R-<pause dur="0.4"/>B over T-squared so in this case <pause dur="0.3"/> the resistance is a lot less <pause dur="0.4"/> four-five-four <pause dur="0.5"/> so i think my gradient's also going to be a <trunc>bo</trunc> lot less <pause dur="0.3"/> it's not quite in the ratio <pause dur="0.4"/> because my T is now three-seven-three-<pause dur="0.4"/>squared <pause dur="0.4"/> instead of two-seven-three <pause dur="0.8"/> and if i put those numbers you can see that one there cancels with that <pause dur="0.5"/> that goes into that it's going to be

about ten isn't it in fact it's <pause dur="0.4"/> minus-<pause dur="0.2"/>thirteen <kinesic desc="writes on board" iterated="y" dur="2"/> ohms per K <pause dur="4.4"/> right now the next thing i get here <pause dur="0.5"/> is what happens if <pause dur="0.2"/> A <pause dur="0.6"/> changes by one degree <pause dur="3.4"/> okay <pause dur="0.7"/> so let's have a look at this first one here A changed by one degree <pause dur="1.0"/> then this means <pause dur="0.7"/><kinesic desc="writes on board" iterated="y" dur="2"/> A changes by one per cent <pause dur="0.3"/> right <pause dur="0.5"/> so what's going to happen here if A changes by one per cent <pause dur="0.5"/> is that my value of R is going to change by one per cent <pause dur="8.3"/><kinesic desc="writes on board" iterated="y" dur="15"/> so R changes <pause dur="1.6"/> by <pause dur="0.3"/> one per cent <pause dur="0.2"/> of <pause dur="0.5"/> twenty-three-thousand ohms <pause dur="2.4"/> and that is going to equal therefore two-hundred-and-thirty ohms <pause dur="6.9"/><kinesic desc="writes on board" iterated="y" dur="4"/> and what is that equivalent to in temperature <pause dur="0.5"/> well twelve-thirty-eight ohms <pause dur="1.5"/><kinesic desc="writes on board" iterated="y" dur="16"/> is equivalent to one K <pause dur="0.5"/> so two-hundred-and-thirty ohms <pause dur="2.7"/> is equivalent to # <pause dur="0.4"/> two-hundred-and-thirty over twelve-thirty-eight <pause dur="0.6"/> which i got to be nought-point-one-eight degrees centigrade <pause dur="0.2"/> okay <pause dur="4.1"/> now you <trunc>c</trunc> you can you see that it's the A changing the R that's the big effect we've also changed D-R D-T <pause dur="0.4"/> by one part in a <pause dur="0.2"/> hundred <pause dur="0.4"/> in other words this number <pause dur="0.4"/> here <pause dur="0.5"/> # <pause dur="0.3"/> twelve-thirty-eight has changed

by one per cent but that's rather a small change <pause dur="0.6"/> changing twelve-thirty-eight by one per cent <pause dur="0.9"/> the main thing is the change in the value of R <pause dur="1.0"/> # <pause dur="0.5"/> giving a drift <pause dur="0.4"/> which looks like a change of temperature of point-one-eight degrees centigrade <pause dur="2.7"/> and then the last one here <pause dur="1.2"/> let's have a look at this one <pause dur="2.3"/> now R changes by <pause dur="1.9"/><kinesic desc="writes on board" iterated="y" dur="30"/> one per cent <pause dur="0.2"/> of a <pause dur="0.2"/> four-hundred-and-fifty-four <pause dur="1.4"/> and that to my reckoning is four-point-five ohms <pause dur="1.7"/> and we've got thirteen ohms is <trunc>e</trunc> equivalent to one K <pause dur="0.4"/> so four-point-five ohms here <pause dur="0.6"/> is equal to # <pause dur="1.0"/> # <pause dur="0.5"/> four-point-five over thirteen <pause dur="0.4"/> and that's actually almost <pause dur="0.2"/> it's a it's a little bit worse isn't it it's about a <trunc>quar</trunc> a third that <pause dur="4.4"/> okay so basically <pause dur="0.3"/> this region here <pause dur="1.2"/> at the lower temperature with the higher resistance and the steeper slope it's more sensitive <pause dur="0.8"/> and less sensitive at the higher <pause dur="0.2"/> temperatures <pause dur="0.8"/> i think you can see here that allegedly if you # <pause dur="0.7"/> you know if you had an ohmmeter <pause dur="0.9"/> that read to an ohm here <pause dur="0.6"/> then you might think you'd be able to measure <pause dur="0.3"/> this

temperature that's a resolution problem isn't it <pause dur="0.4"/> suppose i could <trunc>reme</trunc> measure a change of one ohm <pause dur="0.7"/> i might say to myself oh i can measure the temperature to a thousandth of a degree K <pause dur="0.8"/> 'cause that's the resolution of the instrument <pause dur="0.5"/> but in fact suppose A due to ageing and this is what happens it changes by one per cent <pause dur="0.8"/> and the fact i can resolve these ohms to one ohm one part in a thousand <pause dur="0.6"/> doesn't really tell me <pause dur="0.4"/> it's just like having a voltmeter and putting a few more extra decimal places at the end <pause dur="0.6"/> that doesn't mean to say i might have got a more accurate instrument <pause dur="0.4"/> might have more resolution <pause dur="0.6"/> but supposing that A is changing as the thing ages from month to month <pause dur="0.6"/> then that's my limit there <pause dur="0.5"/> rather than just saying i can measure ohms incredibly accurately <pause dur="1.1"/> it's a difference between resolution <pause dur="0.5"/> and accuracy <pause dur="3.0"/> right let's take a little bit of a break then <pause dur="0.4"/> now <pause dur="0.8"/> hopefully # <pause dur="0.6"/> having tried these and struggled with them going through them <pause dur="0.5"/> is helping you to follow a little bit what's going on
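The drift-versus-resolution point can be checked numerically: a one per cent ageing drift in A shifts R by one per cent, and dividing by the local gradient |dR/dT| = R·B/T² makes R cancel, leaving an apparent temperature error of 0.01·T²/B. A sketch, assuming B = 4000 K as in the lecture:

```python
B = 4000.0  # K, from the lecture

def drift_error(T, fractional_drift=0.01):
    # a fractional drift d in A gives dR = d * R; dividing by the
    # sensitivity |dR/dT| = R * B / T**2 cancels R, leaving d * T**2 / B
    return fractional_drift * T ** 2 / B

print(round(drift_error(273.0), 2))  # 0.19 K (the lecture's 0.18, via rounded steps)
print(round(drift_error(373.0), 2))  # 0.35 K, roughly a third of a degree
```

Note that the meter's one-ohm resolution never enters this calculation: the drift term dominates, which is exactly the accuracy-versus-resolution distinction being made above.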