The Fourier Transform and Its Applications - Lecture 04

Instructor (Brad Osgood):We’re on the air. Okay. A few quick announcements. First of all, the second problem set is posted on the website. Actually, I posted it last evening. So for those of you that are very eager and check on the website all the time, it was there. And secondly, the TAs are beginning their office hours this week, today in fact; is that right?

Okay, so if you have any questions for them, they will be available to help. All right, anything on anybody’s mind, any questions about anything?

Student:Um.

Instructor (Brad Osgood):Yeah.

Student:[Inaudible].

Instructor (Brad Osgood):Anybody else have any issues with the online lectures? I don’t know, I haven’t – I'm afraid to look at myself, so I don’t know what they’re like.

Student:I was [Inaudible].

Student:Nothing happens.

Instructor (Brad Osgood):Nothing happens when you click on it?

Student:[Inaudible].

Instructor (Brad Osgood):It’s a little trick we like to play on people.

Student:[Inaudible] which browser you are using, so in the Mac [inaudible], should be [inaudible].

Instructor (Brad Osgood):So the question may be which browser you’re using. I honestly don’t know; I’ve never tried to do it before [inaudible].

Student:If you’re using a Mac, you have to use Safari; it doesn’t work on anything else.

Instructor (Brad Osgood):It doesn’t work on anything else, except – use the Mac, the word from over there is you have to use Safari, which is the one that comes with it. And I don’t know about other ones. Anybody else have issues with this? I can find out and I can post an announcement, I suppose.

But I haven’t heard, actually I haven’t tried it, so I don’t know that the – how do they look, the lectures?

Student:[Inaudible].

Instructor (Brad Osgood):Great. Thank you; that was the right answer. Anything else? All right, so I’ll check into it, but try that on the Mac, try Safari or try other browsers. Any problem with PCs?

Student:They work fine.

Instructor (Brad Osgood):PCs work fine; okay. Don’t, don’t – I don’t want to see. All right, anything else? All right, so I have two things in mind today. I want to wrap up our discussion of some of the theoretical aspects of Fourier series. We’re skimming the surface on this a little bit, and it really, you know, kind of kills me because it’s such wonderful material and it really is important in its own way.

But as I’ve said before and now you’ll hear me say again, the subject is so rich and so diverse that sometimes you just have to, you can’t go into any – if you went into any one topic, you could easily spend most of the quarter on it and it would be worthwhile, but that would mean we wouldn’t do other things which are equally worthwhile.

And so it’s always a constant trade-off. It’s always a question of which choices to make. So again, there are more details in the notes than I’ve been able to do in class, and will be able to do in class, but I do want to say a few more things about it today. That’s one thing.

And the second thing is I want to talk about an application to heat flow that’s a very important application historically, certainly, and it also points the way to other things that we will be talking about quite a bit as the course progresses. All right, so let me wrap up, again, some of the sort of theoretical side of things.

And I’ll remind you what the issue is that we’re studying, and so this is our Fourier series, fine, all right? Last time we talked about the problem of trying to make sense out of infinite sums, infinite Fourier series, and the important thing to realize is that that’s by no means the exception, all right?

We want to make sense of infinite sums of complex exponentials, the sum from k equals minus infinity to infinity of c_k e^(2πikt). I’m thinking of these things as Fourier coefficients, but the problem is general. How do you make sense of such an infinite sum? And the tricky thing about it is that if you think in terms of sines and cosines, these functions are oscillating.

All right, everything here in sight is a complex number and complex functions, but think in terms of the real functions, sines and cosines, where they’re oscillating between positive and negative, so for this thing to converge, there’s got to be some sort of conspiracy of cancellations that makes it work.

Of course, the size of the coefficients is going to play a role, as it always does when you study issues of convergence. But it’s more than that, because the function is bopping around from positive to negative, see, all right, and that makes it trickier to do. That makes it trickier to study.

And again, realize that this is by no means the exception, and so in particular if F of T again is periodic, period 1, we want to write with some confidence that it’s equal to its Fourier series.

We want to write with some confidence, at least we want to know what we’re talking about, that f(t), say, is equal to its Fourier series, the sum from k going from minus infinity to infinity of c_k e^(2πikt). And again, if you want to deal with any degree of generality, it’s going to be the rule rather than the exception that you’ll have an infinite sum, because any small lack of smoothness in the function or in any of its derivatives is gonna force an infinite number of terms.

A finite number of terms, a finite trigonometric sum, will be infinitely smooth. The function and all its derivatives will be infinitely differentiable, so if there’s any discontinuity in any derivative, you can’t have a finite sum.

So any lack of smoothness forces an infinite sum. Again, because the method is trumpeted as being so general, you have to face the fact that you’re dealing with an infinite number of terms here, all right?

Now, by the way, I don’t mean to say that all the terms are necessarily non-zero, that all the coefficients are necessarily non-zero. That’s not true. Some of the terms may be zero.

For example, when you have certain symmetries, the even coefficients may be zero or the odd coefficients may be zero and in special cases, or a finite number may be zero or a block of them may be zero. You don’t know exactly what’s gonna happen.

But all I'm saying is you can’t resort to only a finite sum if there’s any lack of smoothness in there. All right, so again, that’s the issue. Yeah.

Student:[Inaudible].

Instructor (Brad Osgood):Does f-hat of k – that actually is a k, although it looks like a t; it’s the kth Fourier coefficient. I’ll remind you what the definition is since we’re gonna use it. So f-hat of k is the integral from 0 to 1 of e^(-2πikt) f(t) dt. Is somebody’s phone ringing?

All right, now, last time we dealt with, at least in statement, the cases where that function was smooth, or is smooth, and you get all the nice sort of convergence that you want. All right, so if f(t) is continuous, smooth, or even if you have a jump discontinuity, then you get the sort of convergence that you want. You get satisfactory convergence.

I’ll just leave it at that, because the precise statement is what we talked about last time, and it’s also in the notes; you get satisfactory convergence results. So that’s fine; again, that gives you a certain amount of confidence that when you write the series down, you can manipulate it and plug into it and things like that, and nothing terrible is gonna happen.

But to deal with more general functions, the more general signals that arise, really requires a different point of view, all right; greater generality requires a different point of view, and that’s where we finished up last time.

Now, it’s not just for fun, even for mathematical fun. That is, this point of view turns out to have far-reaching consequences and really does frame a lot of the understanding and discussion, not only for Fourier series, but other subjects that are very similar and also in sort of everyday use in a lot of fields of signal processing.

So greater generality requires a different point of view, different terminology, different language, and a whole sort of re-orientation. All right, and again, I set that up last time, and I’m gonna remind you where we finished up, and I wanna put in one more important aspect of it today, and that’s all we’re gonna do, sad to say, all right?

So again, the condition is integrability, of all things. Instead of smoothness, instead of differentiability, the condition that turns out to be important is integrability of the function. All right, the important condition, and a relatively easy one to verify: integrability, all right?

So you say that a function, say f(t), is square integrable, or briefly, you say that f is in L2 of the interval from 0 to 1 – I’m only working on the interval from 0 to 1 here. The 2 stands for square and the L stands for Lebesgue, and I’ll say a little bit more about that in a second – if the integral of the square is finite, the integral from 0 to 1 of |f(t)|² dt is less than infinity.

I want to allow complex-valued functions here, although many of the applications are real, but I want to allow complex-valued functions, so I put the absolute value of f(t) squared there, all right?

That’s an easy condition to satisfy. Physically one encounters this condition in the context of this integral representing energy and so one also says that this signal has finite energy. That’s another way of saying it, and you see that terminology.

All right? So if you have a periodic function which is square integrable, like so – so again, if f(t) is periodic and square integrable – then you form the Fourier coefficients, and they exist; actually, the square integrability is enough to imply that the Fourier coefficients exist.

Then you form f-hat of k, as before, the integral from 0 to 1 of e^(-2πikt) f(t) dt, and the sum converges; the infinite Fourier series is equal to the function in the sense of mean-square convergence.

And you have, and this is the fundamental result, that the integral from 0 to 1 of the square of the difference between the function and the finite partial sum, that is, the integral from 0 to 1 of |f(t) - sum from k = -N to N of f-hat(k) e^(2πikt)|² dt, tends to 0 as N tends to infinity. Takes up the entire board and it deserves to. All right, it’s an important result, okay?
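As a concrete illustration (my own sketch, not something from the lecture), here is a short numerical check of mean-square convergence for a discontinuous but square integrable signal; the square wave, the sample grid, and the truncation levels N are all arbitrary choices.

```python
# A minimal sketch: approximate a square wave on [0, 1) by partial Fourier sums
# and watch the mean-square error shrink as N grows.
import numpy as np

t = np.linspace(0, 1, 2000, endpoint=False)
f = np.where(t < 0.5, 1.0, -1.0)          # a discontinuous, square-integrable signal

def c_hat(k):
    # k-th Fourier coefficient: integral_0^1 e^{-2 pi i k t} f(t) dt, done numerically
    return np.mean(f * np.exp(-2j * np.pi * k * t))

def partial_sum(N):
    # S_N(t) = sum_{k=-N}^{N} c_hat(k) e^{2 pi i k t}
    return sum(c_hat(k) * np.exp(2j * np.pi * k * t) for k in range(-N, N + 1))

for N in (1, 5, 25, 125):
    err = np.mean(np.abs(f - partial_sum(N)) ** 2)   # approximates the integral of |f - S_N|^2
    print(N, err)                                    # the mean-square error tends to 0
```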

Okay, now I really feel like I have to say a little bit more here, but only a little bit more. The fact is, you only get these wonderful convergence results if you not only generalize your point of view toward convergence, but you also generalize the integral.

All right, this actually only holds – the whole circle of ideas really only holds – for a generalization of the Riemann integral. Do you have to worry about this? No. All right. But the fact is, you only get these convergence results, such convergence results; you can only really prove them in a slightly more general context of integration.

That is, you only get the convergence results if you use a generalization of the integral, which is a whole other subject, due to Lebesgue, and that’s what the L stands for in L2. Lebesgue was a French mathematician at the turn of the 20th century who, in the context of these sorts of applications – trying to extend integration, trying to extend limiting processes to more general circumstances where the results couldn’t be proved classically, and mathematicians were worried about this – came up with a more general definition of the integral.

And with that more general, more flexible definition of the integral, the limiting processes are easier to handle. You can do more of what you’d like to do but somehow don’t feel justified in doing, and in that context – this is something you don’t have to know about, all right – he generalized the integral only, and by doing so, it was perfectly suited to solving these sorts of problems.

It was really a quite compelling case; it was really a beautiful theory. But, you know, there’s a famous quote. The usual integral that you studied when you studied integrals in calculus is called the Riemann integral. All right, and that suffices for just about every application, but there are more general integrals.

On the other hand, John Tukey, who was a famous applied mathematician, was quoted as saying, “I certainly don’t want to fly in an airplane whose design depended on whether a function is Riemann integrable or Lebesgue integrable.” That’s not the point.

All right, the point really is a theoretical one, not a practical one. But nevertheless, somehow honesty compels me to say what’s involved here. All right, so it’s in that sense that you have to talk about convergence, and it’s not an unreasonable condition, all right; it is also called, as I said last time, convergence in energy, convergence in the mean, or mean-square convergence.

And what that means is that on average the sum is converging to the function – on average in the sense that if you look at the difference between the function and the finite approximation, square that, and integrate it, and integrating is sort of taking an average, then that tends to zero as N tends to infinity.

It’s approximating over the entire interval on average, rather than concentrating its efforts on approximating it at a single point. And again, if you look at the literature, and I’m talking about the engineering literature, you’ll see these terms all the time. You’ll see L2 all the time.

As a matter of fact, some of you have probably had courses in quantum mechanics, and if you take at all an advanced course in quantum mechanics, not that I ever have, but if you do, it’s also framed very much in the context of L2 spaces and things like that, Hilbert spaces and L2 spaces and so on.

It’s not, I'm not making this up. It really is – it has become the sort of framework for a lot of the discussion. If you want to be sure that you have some certain amount of confidence in applying mathematical formulas, you need the right kind of general framework to put them in and this is for many problems, exactly that.

Now, there is one further aspect of it, and that’s as far as I’m gonna go, that brings back that fundamental property of the complex exponentials that we used to solve for the Fourier coefficients. So I want to highlight that now.

So remember, in solving for the Fourier coefficients way back when – I mentioned this last time and I wanted to bring it back now in a more general context – in solving for the Fourier coefficients we used a simple integration fact, and it was actually everything. It was essential that the integral from 0 to 1 of e^(2πint) times e^(-2πimt) dt – so we can combine those complex exponentials, that’s e^(2πi(n-m)t) – is equal to zero if m is different from n. If m is equal to n, then it’s equal to 1, all right?

That simple fact, that simple calculus fact, turns out to be the cornerstone for understanding these spaces of square integrable functions – for introducing geometry into those spaces. So this simple observation, simple fact, is a cornerstone for introducing, if I can say it that way, “geometry” – I put it in quotes because it’s geometry like you don’t even think of geometry, but the features are there by analogy.

Geometry into the space of square integrable functions, L2(0, 1). And when I say introduce geometry, again I’m reasoning by analogy here, although it’s a very powerful analogy. And the thing that makes geometry geometry, as far as Euclid is concerned, is the notion of perpendicularity.

All right, that’s one of the most important notions of geometry and that’s exactly what gets carried over here. It allows you, it allows one to define orthogonality or perpendicularity, same word, same thing, all right, via inner product or dot product.

All right, so let me give you the definition. I’m not gonna justify it because there’s justification in the book actually, in the notes. But it looks like this: so again, if f and g are square integrable functions – and I’m gonna assume they’re complex; there’s a little distinction here between what happens in the real case and what happens in the complex case – square integrable on the interval from 0 to 1, as always, that’s the basic assumption we make.

Then you define their inner product, which is a generalization of the usual dot product for vectors, by an integral formula, all right. It’s written (f, g), and, you’re right, there are various notations for it, but the common notation is just to put them in a pair, or sometimes people will write them with angled brackets, or sometimes people do all sorts of things.

Sometimes physicists call them, you know, bra vectors and ket vectors and all sorts of bizarre, unnatural things. But I’ll take the simplest notation: the integral from zero to one of f(t) times g(t)-bar, the complex conjugate, dt. That’s because I’m allowing complex-valued functions.

If g(t) were a real-valued function, then I would just have the integral of f times g. Because it’s complex, for reasons which again are explained a little bit better in the notes, you put the complex conjugate in there.

Now, what you have to believe is that this is sort of a continuous, infinite-dimensional generalization of the dot product of two vectors. How do you take the dot product of two vectors? You multiply the corresponding components together and add, all right?

So it’s like you’re multiplying the values together, although the function times the conjugate of the other function, those are the values, and you’re adding, but continuously in the sense of taking the integral instead of the sum, all right? That’s sort of where it comes from.

Now, that’s fine, but the real benefit is it allows you to define if you ever want to do that, when two vectors, when two functions are perpendicular. So you say that F and G are orthogonal, you could say they’re perpendicular, but it sounds ever so more mathematical if you say they’re orthogonal, all right.

Orthogonal – and if you write it neatly, orthogonal, much neater. They’re orthogonal if their inner product is 0, if (f, g) is equal to 0, period. That’s a definition, all right. That’s a definition. Now, where does the definition come from? Again, you know, because I can’t say everything as much as I’d like to, let me refer to the notes, because it’s actually not so unreasonable. This definition comes from exactly wanting the Pythagorean theorem to hold for inner products.

Now, the calculation we did with the complex exponentials shows exactly that the different complex exponentials are orthogonal. One more thing, actually: let me introduce the length of a function, or the norm of a function, in terms of the inner product, or in terms of the integral, same thing.

So the norm of f, the norm of a function f, is defined by: the square of the norm is just the inner product of the function with itself, just like the square of the length of a vector is the inner product of the vector with itself. So the inner product of f with itself is the norm of f squared, and that’s the notation you use. And what is that? The norm of f squared is exactly the integral from 0 to 1 of |f(t)|² dt.

And let me just tell you what the Pythagorean theorem is, then. All right, the Pythagorean theorem – I’ll do it over here, because that’s where this comes from; that’s exactly where this definition comes from and that’s exactly where the property comes from. The Pythagorean theorem is exactly this: f is orthogonal to g if and only if the norm of f plus g squared is equal to the norm of f squared plus the norm of g squared, and that is if and only if the inner product is 0.

Why is that the Pythagorean theorem? Because of vector addition, all right? If I draw vectors u and v, then this is the vector u plus v, all right, and the Pythagorean theorem says, “The square of the hypotenuse is the sum of the squares of the sides for a right triangle and only for a right triangle.” That characterizes the right triangle, all right.

So that says the norm of u plus v squared, the square of the length of the hypotenuse, is the sum of the squares of the other two sides, norm of u squared plus norm of v squared, and that holds exactly when the two vectors are perpendicular, and that’s where the definition of the inner product comes from, and everything else.
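To see why the inner-product condition is exactly the Pythagorean condition, here is the one-line expansion the instructor is alluding to (my own write-up, not what was on the board):

```latex
\|f+g\|^2 = (f+g,\, f+g) = \|f\|^2 + (f,g) + (g,f) + \|g\|^2
          = \|f\|^2 + 2\,\mathrm{Re}\,(f,g) + \|g\|^2 .
```

So the relation "norm of f plus g squared equals norm of f squared plus norm of g squared" holds exactly when Re (f, g) = 0; for real-valued functions that is the same as (f, g) = 0, and for complex-valued functions the definition of orthogonality asks for the full inner product to vanish.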

Beautiful – it’s beautiful, all right. So the trick, if you like this sort of thing, is in extending that from vectors to functions and reasoning by analogy, all right. Now let me say what you should think about and what you shouldn’t think about. The analogy is very strong. In many ways your geometric intuition for what happens for vectors you can draw carries over, at least algebraically, to this more general situation; however – so let me say one more thing, then.

These complex exponentials are exactly orthogonal functions of length one. That calculation that I’ve just erased with complex exponentials says this: e^(2πint) in inner product with e^(2πimt), all right, that’s the integral of this times the conjugate of this. The conjugate of e^(2πimt) is e^(-2πimt); that’s where the minus sign comes from, all right.

This is equal to zero if m is different from n, and it’s equal to one if m is equal to n. All right, these are orthonormal vectors with respect to this inner product. They’re orthogonal, their inner product is zero, when they’re distinct, and they have length one, their norm is equal to 1, when they’re equal.
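Here is a minimal numerical check of that orthonormality, using the inner product just defined (my own illustration; the sample grid is an arbitrary choice).

```python
# Check that e^{2 pi i n t} are orthonormal for (f, g) = integral_0^1 f(t) g(t)-bar dt.
import numpy as np

t = np.linspace(0, 1, 4000, endpoint=False)

def inner(f_vals, g_vals):
    # numerical version of the inner product (f, g)
    return np.mean(f_vals * np.conj(g_vals))

e = lambda n: np.exp(2j * np.pi * n * t)

print(abs(inner(e(3), e(3))))   # ~ 1: each exponential has length one
print(abs(inner(e(3), e(5))))   # ~ 0: distinct exponentials are orthogonal
print(abs(inner(np.sin(2 * np.pi * t), np.cos(2 * np.pi * t))))   # ~ 0: sine and cosine, too
```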

Okay, now, as I said, you reason by analogy. You can visualize what it means for vectors to be perpendicular. You can draw this picture, all right. So you might say to yourself, I should be able to visualize this. I should be able to sit in a quiet room, turn the lights off and visualize what it means for complex exponentials to be orthogonal. Yes, yes, I see it.

No, you don’t. Let me relieve you of this burden. There is no reason in hell that you should be able to visualize when two functions, let alone complex exponentials are orthogonal. Don’t beat yourself up trying to do that and don’t say to yourself you’re less of a person if you cannot visualize when two functions are perpendicular, not only complex exponentials, but sines and cosines, all sort of innocent-looking functions out there that you have worked with for all of your life turn out to have this sort of orthogonality relationship.

But you might say to yourself, like sines and cosines, so sine is orthogonal to cosine – sin(2πt) is orthogonal to cos(2πt) – and all sorts of interesting results like that, but you might say to yourself, “Gee, if I look at the graph, I should be able to visualize this.” Don’t bother.

All right, there’s no point. There’s no point. It’s reasoning by analogy, all right. The fact is, you establish these formulas; you establish that they’re orthogonal, and then you apply your intuition for orthogonal functions, for orthogonal vectors, where you can visualize it, to situations where you can’t visualize it, all right, and that’s the real power of this line of reasoning, because you can apply your intuition to places where it should have no business applying somehow, all right.

All right, now, almost, almost, almost, almost, almost, the final thing then: the final piece of this approach to Fourier series is to realize that the Fourier coefficients are projections of the function onto these complex exponentials, all right. So again, I want to remind you that one of the ways you use an inner product is to define projections, to define orthogonal projections in particular, so you use the inner product for vectors to define and compute projections.

All right, if u and v are two vectors, unit vectors, say, norm of u equals 1 and norm of v equals 1, then what is the projection of v onto u? The projection of v onto u is just the inner product v·u. That’s how much u knows v, or v knows u, all right. The projection here is the inner product of v with u, okay. And how much u knows v is that same inner product, same thing.

All right, so that’s the length of the projection, all right, and then the vector that goes in the direction of u and has this length is this times u, so the vector projection is the inner product of v with u, times u.

All right, u is the unit vector, so you go in that direction, this length, and that’s how you project, and it shouldn’t be shocking. It should be somewhere in your background – you’ve certainly had classes in linear algebra – that decomposing a vector into its projections onto given vectors can be a very useful thing.

It’s breaking a vector down into its component parts, all right. Now, the situation for functions is exactly analogous. I don’t want to say it’s exactly the same because you can’t draw the picture, but you can write down the formulas, and the formulas are a good guide. The formulas are a good guide.

What is the Fourier coefficient? The Fourier coefficient is exactly a projection, all right. If I compute the inner product of the function f with the complex exponential e^(2πint), that’s exactly the integral from zero to one of f(t) times e^(2πint)-bar dt. That is the integral from zero to one of f(t) e^(-2πint) dt, and that is the nth Fourier coefficient. The nth Fourier coefficient is exactly the projection of the function onto the nth complex exponential.

All right, cool. Way cool. Infinitely cool. So cool. And what is writing the Fourier series? What is writing the Fourier series? To write f(t) equal to the sum from k going from minus infinity to infinity of f-hat(k) e^(2πikt) is to write f(t) as the sum from k going from minus infinity to infinity of the inner product of f with the kth complex exponential, times the kth complex exponential.

This is a number, all right; that’s the Fourier coefficient. That’s the length of the projection of f onto its kth component, and there’s the kth component. All right, that’s exactly what it is, and it’s this point of view that is so ubiquitous, all right, so ubiquitous, not only in Fourier series, but in other versions of essentially the same ideas.
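To make the vector-to-function analogy concrete, here is a small sketch (my own illustration, not the instructor's) that does the same project-then-recombine computation once for an ordinary vector in R^3 and once for a function on [0, 1); the particular test function is an arbitrary choice.

```python
# Decomposing a vector over an orthonormal basis and decomposing a function over
# the complex exponentials use the same recipe: project, then recombine.
import numpy as np

# Finite-dimensional case: v = sum_k (v . u_k) u_k for an orthonormal basis {u_k}.
v = np.array([2.0, -1.0, 0.5])
basis = np.eye(3)                              # standard orthonormal basis of R^3
recombined = sum((v @ u) * u for u in basis)
print(np.allclose(v, recombined))              # True

# Function case: the k-th "component" of f is the inner product (f, e^{2 pi i k t}).
t = np.linspace(0, 1, 2000, endpoint=False)
f = t * (1 - t)                                # a continuous periodic test signal
coeff = lambda k: np.mean(f * np.exp(-2j * np.pi * k * t))   # projection onto the k-th exponential
S = sum(coeff(k) * np.exp(2j * np.pi * k * t) for k in range(-40, 41))
print(np.max(np.abs(f - S.real)))              # small: the projections rebuild the function
```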

And you see this all the time in signal processing. I’ll mention wavelets just very briefly, because wavelets are such a hot topic. It’s the same sort of thing. You’re trying to decompose the function into its simpler components, and in this case the simpler components are the complex exponentials e^(2πikt), all right.

So to write an expression like this down, to be able to say this with the appropriate notions of convergence, is to say – I’ll do it over here – is to say that the complex exponentials form an orthonormal basis for the square integrable functions, all right.

To be able to write this statement and understand what it means in terms of convergence and all the rest of that jazz is to say that the complex exponentials, that is, all of these, e^(2πikt) for k going from minus infinity to infinity, form an orthonormal basis for these periodic functions, the square integrable periodic functions, all right.

And then sometimes the game is to take a different orthonormal basis. Wavelets are nothing but – well, I’m not going to say nothing but, because they have their own fascination – they’re another orthonormal basis for square integrable functions.

The complex exponentials are not the only orthonormal basis, just like any vector space doesn’t have just one; it has lots of different orthonormal bases. These are particularly useful ones.

So to write this synthesis formula is to express f in terms of its components. What components? The components in terms of these elementary building blocks, all right. And what are the coefficients? The coefficients, like they are for any orthonormal basis, are the projections of the vector in those directions, all right.

It’s very satisfying and you should try to put this in your head, yeah.

Student:[Inaudible] in the sense that you can’t get one from the other in the rotation [inaudible].

Instructor (Brad Osgood):Well, it’s more complicated. So the question is, are the bases, like in finite-dimensional spaces, related to each other by essentially a rotation, a unitary or orthogonal matrix? And in this space, yes, you have unitary transformations, linear operators that are unitary, but the definitions are a little bit more complicated.

But you have similar sorts of things. All right, you can get from one orthonormal basis to another orthonormal basis. Okay. All right, this is, like I say – of course, one could say much more about this. I don’t want to. All right, well, I do want to, but I can’t, all right.

All right, it’s this point of view that is important for you to carry forward, all right. That’s all I’m saying, because you will see it, you will see it, all right. And again, the idea is you reason by analogy, all right. You gotta deal with this formula, all right, that’s something new. Writing down an inner product in terms of an integral, all right, I gotta deal with that. That’s something hard, and I can’t visualize it, all right.

But the words you use in the case where you can visualize things are almost identical to the words you use in the situation where you can’t visualize things. All right, and you can carry that intuition over from the one case to the other case and it’s extremely important and I’ll give you one, I keep saying one final thing.

So this time for sure: an application of this is what’s also called Rayleigh’s identity, which is nothing more than to say the length of a vector can be obtained in terms of the sum of the squares of its components, all right.

You know how to find the length of a vector in R^n. You add up the squares of the components, right? You do the same thing here. Rayleigh’s identity – and I will not derive it for you, but it is derived in the book, and I even say something like, “Do not go to bed until you understand – do not go to sleep until you understand every step in the derivation.”

It says the integral from 0 to 1 of |f(t)|² dt – you write f in terms of its Fourier coefficients; I didn’t write that down – is the sum of the squares of the Fourier coefficients: the sum from minus infinity to infinity of |f-hat(k)|², all right. That’s Rayleigh’s identity.

It says the length of a vector is the sum of the squares of its components, all right; the components of the function are its Fourier coefficients, all right. This is the square of the length of the function; it is the inner product of the function with itself. It is the sum of the squares of its components. That’s all this says. And it follows algebraically in exactly the same way as you would prove this using inner products for vectors, exactly, exactly.
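Here is a quick numerical check of Rayleigh's identity (again my own illustration; the square wave and the truncation |k| ≤ 400 are arbitrary choices).

```python
# Rayleigh's identity: integral_0^1 |f(t)|^2 dt = sum_k |f-hat(k)|^2.
import numpy as np

t = np.linspace(0, 1, 4000, endpoint=False)
f = np.where(t < 0.5, 1.0, -1.0)                       # the square wave again

energy_time = np.mean(np.abs(f) ** 2)                  # left side: energy in the "time domain"
coeffs = [np.mean(f * np.exp(-2j * np.pi * k * t)) for k in range(-400, 401)]
energy_freq = sum(abs(c) ** 2 for c in coeffs)         # right side, truncated at |k| <= 400

print(energy_time, energy_freq)   # both close to 1; the neglected tail |k| > 400 accounts for the gap
```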

All right, now, this was known before any of this stuff was put in place, all right. This identity was known before the general framework of orthogonal functions, square integrable functions, all the rest of that jazz, was put in place, and it was viewed as an identity for energy, all right: this is the energy of the function, and these are somehow, you know, the individual components of it.

And that’s why one often says you can compute the energy in the time domain or in the frequency domain; we’re gonna find an analogue of this for Fourier transforms, all right. But it really says nothing other than that the length of the vector is the sum of the squares of the components.

So let me write down: here’s f(t) as the sum – here’s the Fourier series – the sum of f-hat(k) e^(2πikt), all right. All right, how much energy does f have as a signal? It has this much. All right, how much energy does each one of its components have?

Well, the energy of each one of the complex exponentials is one, because they’re of length one. So how much energy does each one of the components contribute? It’s the magnitude of this thing squared; it’s the multiplier. It’s the projection out front.

|f-hat(k)|² times the energy, which is 1, of the complex exponential. So what is the total energy here, what is the square of the total energy here? It’s the sum of the squares, the contributions of energy from the individual components. Each individual component contributes an amount |f-hat(k)|², the absolute value squared, to the whole sum, the whole number, okay – pretty cool; pretty cool.

All right, here ends the sermon. Don’t leave without putting something into the collection plate. All right. All right, here ends the sermon on inner products, square integrable functions and so on, okay? All I can say is trust me; you’ll run into it again.

Now, what I want to do – and I probably won’t finish it today; I’ll finish it up on Wednesday, although it goes pretty fast – is a classic application of Fourier series to the study of a classic physical problem, in fact, the problem that launched the whole subject. So I want to do an application to heat flow.

All right, this is a very important part of your intellectual heritage, all right. I’m serious. That is, if you’re going to be practicing scientists and engineers, you know, you want to know something about where the subject came from, because again, you know, you somehow wind up re-visiting a lot of these ideas in different contexts, but often in similar contexts.

And this was the problem that really started it all. This was the problem of studying how heat, how the temperature, varies in time when there’s some initial temperature distribution.

All right, so you have a region in space with an initial temperature distribution, an initial distribution f(x) of temperature. I say f(x) here just to indicate that x is some sort of spatial variable, all right, so some region in space; you know, the dimension of x is the dimension of the region, all right. And the question is how does the temperature vary, all right.

You have an initial distribution of temperature, that’s what happens at T equals zero, then as T increases, the temperature changes. All right, the heat flows from one part of the body to the other part of the body and you want to know how is that governed, all right.

How does the temperature change both in position – I should say vary, well, change is fine – how does the temperature change in position and time? All right, this was an important problem, still is an important problem actually, and we’re only gonna handle one very special case of it, the original case that was handled by Fourier, the problem that Fourier studied.

Where periodicity comes into the problem naturally because of the periodicity in space. So to study this problem means to say first of all, what the region is and then to say what the initial distribution of temperature is or at least say that there is given an initial distribution of temperature, all right.

So we look at a heated ring. Sound – that sound. Okay. All right, something like this, all right. Given an initial temperature, given an initial temperature say F of X, F of X – X is a position on the circle; X is a point on the circle, okay? And we let U of XT be the temperature at X at time T – at position X at time T. All right, and the question is can we say something about it. That’s the function we want to find.

We want to study u(x, t), all right. Now, the fact is, periodicity enters into this problem, periodicity in space, because on the circle the temperature here is the same as the temperature at the same place if I increase the position by 2π – if I’m going around a circle – so obviously the temperature is periodic in the position.

Okay, so the temperature is periodic as a function of x. All right, and let’s normalize things so we assume the period is 1, so the circle has whatever radius that works out to, or another way of looking at it is to just imagine the circle is the interval from 0 to 1 with the end points identified, all right. So let’s suppose – just because we’ve been dealing with functions of period 1 – let’s suppose period 1.

Period 1, all right. So the initial temperature distribution f(x) is periodic of period 1, and so is u in x – not in t; it’s not periodic as a function of time, but it’s periodic as a function of the spatial variable. u(x + 1, t) for any value of t is the same thing as u(x, t), okay?

That’s fundamental, and that’s how Fourier got into this; that’s why he introduced those ideas. He considered this problem where there’s periodicity in space, the symmetry of the object that you’re heating up, and that has consequences. All right, so now, with a certain amount of confidence, with a certain amount of bravado, we write down the Fourier series.

So we write the Fourier series: u(x, t) is the sum from k going from minus infinity to infinity of c sub k – I’ll write it like this – c_k e^(2πikx). It’s periodic in the spatial variable, so the variable in the complex exponential is x. Where is the time dependence? The time dependence is in the coefficient, c. That’s where the time dependence has to be. k is just a constant; k is just an integer.

The time dependence is in the c_k. That is, a better and more accurate way of writing it is like so: u(x, t) is the sum from k going from minus infinity to infinity of c_k(t). I know what it is; it’s the Fourier coefficient, and I’ll bring that in later, but let me just call it c right now.

c_k(t) times e^(2πikx), all right: periodic in x, varying in t. How does it vary in t? That’s what we want to know, all right. The mystery here is the coefficients. We could solve the problem if we could find the coefficients in terms of the initial temperature distribution. So what are the c_k? That’s the question.

All right, now, we’re gonna be able to attack this problem because, independent of periodicity, independent of anything else, heat flow is governed by a partial differential equation. All right, the flow of heat on a circle or on other regions – all right, this is in itself a big subject, but it’s one of the basic equations, the partial differential equations of mathematical physics, which you have probably seen somewhere in your life and, again, will see again: heat flow. I’ll just do it in one dimension, all right.

I'm talking about one-dimensional problems here, but there are also ways of analyzing this for flow over a two-dimensional or a three-dimensional region. We have the heat equation, which says that the time derivative of the temperature is proportional to the second x derivative – and maybe I should call the constant something other than k, because I’ve already used k – so u_t = a u_xx. That’s the one-dimensional heat equation.

All right, I'm not gonna derive that. Actually, I have a derivation of it in the book that sort of follows Bracewell’s discussion of the heat equation, but it’s a fundamental equation of mathematical physics. The constant a here depends on the physics, and this is one of those great dodges of all time.

The constant a depends on the physical characteristics of the region, which no one wants to talk much about, but they affect the size of a, all right. I should say, more generally, this equation governs not just the flow of heat; in general it’s called the diffusion equation.

It governs how things diffuse. What things? Like charge along a wire, which is studied by this equation; holes through a semiconductor are governed by a higher-dimensional version of the equation, but the same idea, all right. So in general this is also called the diffusion equation, and it governs phenomena that are associated with diffusion.

It’s a general term, but it’s a term you’ll probably run across. All right, now, I’m not gonna get too far today, but I’m gonna get a little way along, so I want to apply this equation to study this function. All right, I’m gonna use this for the ring. All right, and just to simplify my calculation, although it does not make any substantive difference, I’m gonna choose the constant so that a is equal to ½.

That’s a standard choice, certainly for the mathematical analysis, but it doesn’t matter. You could have the constant tagging along through the whole thing and it wouldn’t affect the analysis. So I’m gonna choose a equals ½, so the equation looks like u_t = ½ u_xx, all right.

Now, I'm gonna short-cut the discussion a little bit. There’s one way of doing it in the book, in the notes, which is a little bit more rigorous than what I’m gonna do, although both can be justified very easily. I’m going to plug that formula for u into the equation – we have a formula for u in some sense; not just in some sense, we have a formula for u in this sense.

And I'm gonna plug this into the equation. Plug u(x, t) = sum over k of c_k(t) e^(2πikx) into the equation, into u_t = ½ u_xx. What happens?

What happens? Well, u sub t – I’m sorry, I should have said this, but I assumed everybody knew: u sub t is the partial derivative with respect to t of that function over there – so what is that? Well, the only thing that depends on t here is the coefficients, c sub k. So that is the sum over k of c_k′(t) times the complex exponential, which stays alone: e^(2πikx). What is u_xx of this expression?

Well, here the derivatives fall on the complex exponential, and I differentiate twice. Differentiating a complex exponential is like differentiating an ordinary exponential: the constant comes down when I differentiate. So I get c_k(t) – that’s left alone because I’m differentiating with respect to x – and then if I differentiate e^(2πikx) twice with respect to x, I get (2πik)² times e^(2πikx).

Nothing up my sleeve, no tricks, no deceptions. Okay, one more step: that’s the sum over k of c_k(t) times minus 4π²k², because i² is minus 1, times the complex exponential e^(2πikx). All right, now to equate the two.

Then we gotta go – damn – equate the two sides; use the heat equation. Plug into u_t = ½ u_xx, and I get: the sum over k of c_k′(t) e^(2πikx) is equal to ½ times that thing, so it’s gonna be the sum over k of minus 2π²k² times c_k(t) times the complex exponential e^(2πikx). Not hard; that’s not a hard step.
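Here is a compact restatement of the substitution just carried out, written out symbolically (same content as what was just said, with the normalization a = ½):

```latex
u(x,t) = \sum_{k=-\infty}^{\infty} c_k(t)\, e^{2\pi i k x}
\;\Longrightarrow\;
u_t = \sum_k c_k'(t)\, e^{2\pi i k x},
\qquad
u_{xx} = \sum_k (2\pi i k)^2\, c_k(t)\, e^{2\pi i k x}
       = \sum_k \bigl(-4\pi^2 k^2\bigr)\, c_k(t)\, e^{2\pi i k x} ,
```

so setting u_t equal to ½ u_xx and matching the coefficients of e^(2πikx) gives the ordinary differential equation in the next step.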

Okay, all right, how do we do it? Equate like terms; equate the coefficients. If I equate the coefficients, something great happens; the consequences I’ll do next time. If I equate the coefficients – and when I say equate the coefficients, I mean the coefficients of the complex exponential e^(2πikx) – I get c_k′(t) on this side, and what’s the corresponding coefficient over here? So c_k′(t) is equal to minus 2π²k² times c_k(t).

But, my friends, you know, that is a simple equation. That is an ordinary differential equation for c_k. I can solve that: I get c_k(t) is c_k(0), the initial condition, times e^(-2π²k²t). Blow me down, all right.
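To see what this solution does, here is a minimal numerical sketch (my own illustration; it anticipates next lecture by taking c_k(0) to be the kth Fourier coefficient of the initial temperature, and the hot-spot initial condition and truncation K are arbitrary choices).

```python
# Heat flow on a ring: each Fourier coefficient decays as c_k(t) = c_k(0) * exp(-2 pi^2 k^2 t).
import numpy as np

x = np.linspace(0, 1, 1000, endpoint=False)
f = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)        # hypothetical initial hot spot on the ring

K = 50
c0 = {k: np.mean(f * np.exp(-2j * np.pi * k * x)) for k in range(-K, K + 1)}   # c_k(0)

def u(t):
    # u(x, t) = sum_k c_k(0) e^{-2 pi^2 k^2 t} e^{2 pi i k x}, truncated at |k| <= K
    return sum(c0[k] * np.exp(-2 * np.pi**2 * k**2 * t) * np.exp(2j * np.pi * k * x)
               for k in range(-K, K + 1)).real

for t_val in (0.0, 0.001, 0.01, 0.1):
    print(t_val, u(t_val).max(), u(t_val).mean())    # the hot spot flattens out; the mean is conserved
```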

I’ve found my coefficients – pretty cool. All right, extremely cool. Very cool. And next time what I haven’t done is I haven’t brought back in the initial distribution of temperature and I want to manipulate this solution a little bit more and something absolutely magical happens. Wait for it on Wednesday. Okay.

[End of Audio]

Duration: 54 minutes