The Fourier Transform and Its Applications - Lecture 18

Instructor (Brad Osgood):And then – all right. We still friends?

Student:[Inaudible].

Instructor (Brad Osgood):Actually I have not – to be honest – I haven't started grading the exam yet, so I don't know how things are gonna look. So you'll get it back some time next week. All right. And I'll also post the solutions. But I have nothing more to say right now. Okay. It's a tough week for everybody. I realize that. I mean there are a lot of exams – 263 had the exam the previous weekend. We had our exam – was it 216 that also had an exam, or something like that – or one other course, and then 278's having its exam – when's that – when is that, next week?

Student:Tuesday.

Instructor (Brad Osgood):Tuesday.

Student:Should be.

Instructor (Brad Osgood):It's brutal. What can I say? Better you than me. I don't know – sorry. It's pretty [Audio breaks up] yeah.

All right. I want to spend a little more time today [Audio breaks up] theorem – sampling, and interpolation, and some of the phenomena that are associated with it. I'll remind you what the setup was from last time. And I'll almost carry out the proof again – or at least I'll give you the setup for the proof again because, as I said many times last time, and as I will say again today, for me the sampling [Audio breaks up] the derivation of the sampling theorem – the formula is identical with the proof of the sampling formula. I think the two are so closely related that to understand one you really have to think in terms of the other. All right.

So here's the setup. We look at a signal which is bandlimited. That is to say, its Fourier transform is identically zero from a certain point on. All right. So suppose – say F of T is a signal. F of T is a bandlimited signal. So I'll remind you again what that means. This was the key definition really that got the whole thing rolling. So it means that the Fourier transform is a condition on the [Audio breaks up] signal. The Fourier transform is identically zero, say, for S bigger than or equal to some P over two [Audio breaks up] – I'm writing it. All right.

I'm writing as [Audio breaks up] with respect to the or [Audio breaks up] and there's [Audio breaks up] such P is called the bandwidth. There may be many P's for which that is true, but the smallest one is called the band [Audio breaks up] the smallest such P is called the bandwidth. Bandwidth is sort of a ubiquitous [Audio breaks up] in signal processing. I don't know if this is the first definition of it, but I think this is the definition that's most commonly accepted. You're assuming the Fourier transform is identically zero from some point on, and the smallest number for which that's true is called the bandwidth.

So the picture is this. Here's zero. Here's minus P over two. Here's plus P over two. And from there on out, going up to plus infinity/minus infinity, the Fourier transform is identically zero. Now, remember you can't really plot a Fourier transform because the Fourier transform is complex, so – and the Fourier transform, if it's a real signal, is symmetric, and all the rest of that jazz. So this is not to be taken literally as a picture, but it's something [Audio breaks up] mind. That's the general picture you should keep in [Audio breaks up] next step [Audio breaks up] what is [Audio breaks up] the clever key idea.

You periodize the Fourier transform of F [Audio breaks up] by convolving with the Shah function [Audio breaks up] with that spacing. All right. That is, you look at this: the Fourier transform of F convolved with the Shah function. And what that does is, because the Fourier transform is identically zero outside the interval from minus P over two to P over two, this just shifts it and adds it up, and you get a bunch of copies of the original signal – non-overlapping copies of the original signal. So here's zero. Here is P. Here is minus P. Here is, again, P over two – I'll get out of the way in just a second. Here is minus P over two, and so on, and so on. All right. So this is a picture of the periodized version of – hello? Is something wrong?

Student:This one's cutting out.

Instructor (Brad Osgood):Oh, it's cutting out? I wonder if it's because I dropped it. Okay. Back to live action.

And then the main trick is to cut off. You cut off by multiplying by the rectangle function of width P, and that gives you back the original Fourier transform, because nothing is overlapping there – because you periodized, and [inaudible] cut off.

So again, the picture's like this, roughly. Here's zero. Here's minus P over two. Here's plus P over two. You multiply by the rectangle function of width P. Remember, that's one between minus P over two and P over two, so the multiplication doesn't do anything in that interval, and it sets everything to zero outside that interval. So the result is: the Fourier transform of F is equal to the rectangle function times the Fourier transform of F convolved with the Shah function. This is the basic equation. Everything depends on this.
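To see the basic equation in action numerically, here is a minimal numpy sketch. Everything in it is an assumed illustration, not something from the lecture itself: it uses the triangle function tri(s) = max(0, 1 − |s|) as a stand-in for a bandlimited Fourier transform (it's the transform of sinc², zero for |s| ≥ 1, so P = 2), periodizes it by summing shifted copies, and then cuts off with the rectangle of width P.

```python
import numpy as np

# Sketch of the basic equation Ff = rect_p * (Ff conv Shah_p), assuming a
# concrete bandlimited example: the triangle tri(s) = max(0, 1 - |s|),
# which is zero for |s| >= 1, so the bandwidth here is p = 2.

def tri(s):
    return np.maximum(0.0, 1.0 - np.abs(s))

p = 2.0
s = np.linspace(-4, 4, 2001)          # frequency grid

# Periodize: sum shifted copies Ff(s - n*p) over enough shifts to cover the grid.
periodized = sum(tri(s - n * p) for n in range(-5, 6))

# Cut off: multiply by the rect of width p (one for |s| < p/2, zero outside).
rect = (np.abs(s) < p / 2).astype(float)
recovered = rect * periodized

# Because the shifted copies don't overlap, we recover the original transform.
print(np.max(np.abs(recovered - tri(s))))  # essentially zero
```

The point of the sketch is exactly the lecture's point: periodizing and then cutting off looks like doing something and undoing it, and for a correctly chosen P it is.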

It looks like – as I say – you've done something and then undone it, so how could it possibly lead to anything interesting, much less nontrivial? But it's the whole ballgame, because now what you do is you take the inverse Fourier transform, and that swaps multiplication and convolution. And you have an instance of each here. It's multiplication here, so the inverse Fourier transform is gonna turn that multiplication into convolution. The convolution in here – when you take the inverse Fourier transform, it's gonna turn that into multiplication. Sampling comes from this part, and the sinc function comes from this part, and you turn the crank, and this is what you get.

So you take the inverse Fourier transform, and you turn the crank. The inverse Fourier transform on the left-hand side is just the function again. The inverse Fourier transform on the right-hand side leads to the amazing formula. You get: F of T is the sum from K equals minus infinity to infinity of F of K over P times sinc of P times the quantity T minus K over P. Okay. I managed to fit it all on the board. Little squeeze there – sorry. All right.

The points K over P are the sample points. They're in the time domain, of course, and they're spaced one over P apart. All right. And what that formula does is it interpolates all values of the function in terms of the sample values. All right. So you think of these as the values that you measure, say. And the question is, "Can you figure out what the values are in between in terms of the measured values?" And under the assumptions, and by virtue of the derivation, yes, you can do that. All right.
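The interpolation formula can be sketched numerically too. This is only an illustration under assumptions not in the lecture: the signal f(t) = sinc(t)² (np.sinc is the normalized sinc, sin(πx)/(πx), the same convention used here), whose bandwidth is P = 2, and a truncated version of the infinite sum.

```python
import numpy as np

# Minimal sketch of the sampling/interpolation formula, assuming the concrete
# bandlimited signal f(t) = sinc(t)^2, whose Fourier transform is the triangle
# supported on [-1, 1], so the bandwidth is p = 2.

def f(t):
    return np.sinc(t) ** 2

p = 2.0
k = np.arange(-500, 501)              # truncated range of sample indices
samples = f(k / p)                    # measured values at the sample points k/p

def interpolate(t):
    # f(t) ~ sum_k f(k/p) sinc(p(t - k/p)); note p(t - k/p) = p*t - k
    return np.sum(samples * np.sinc(p * t - k))

t0 = 0.3                              # a point strictly between sample points
print(interpolate(t0), f(t0))         # agree up to a tiny truncation error
```

At a sample point the sum collapses to the measured value exactly; in between, the shifted sincs fill in the curve.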

And the shifted sinc functions – sinc of P times the quantity T minus K over P – sometimes you might call those the interpolating functions, something like that. They're independent of the function F. All right. The only time F enters into the formula is through its sample values. So it interpolates all values of the function in terms of the sample values. It's a remarkable formula. It's an amazing formula. And it all depends on this. Okay.

That's why I say – and I will continue to say – that for me the sampling formula is identical with the derivation of the sampling formula. The reason why I say that, we'll see in a little bit, is because there are occasions when it goes wrong or it's misapplied or applied differently. And to understand that situation, you have to understand the derivation. Then – if you do, it's really quite straightforward. Okay.

Now, there's some further terminology that's associated with this formula, and with this whole circle of ideas. You also call the number P – so I'm calling it the bandwidth; that's an accepted term – when it's associated with a sampling formula like this, you call it the sampling rate. And you speak in terms of number of samples per second, because P has the units of frequency here – it's in Hertz, something per second. So you usually speak in terms of samples per second. All right.

And again, if it's however many samples per second, that means the samples are spaced one over P – or one over whatever the rate is – apart. And you also refer to the lowest sampling rate for which the theorem is true as the Nyquist rate. It's named after Harry Nyquist, God of Sampling, who was one of the first to write this formula down – in the context of telegraph signaling, I think, actually. So you also call P the Nyquist rate. All right. At least that's what I would call P. All right. But we really understand it, again, in terms of the bandwidth, and we understand it in terms of this formula. That's the most fundamental thing. All right.

Now, of course you can always sample at a rate higher than the bandwidth. All right. That's okay. Your formula is still true. You can sample at a rate higher than P. All right.

What does that mean in terms of your formula? Or what does that mean in terms of the derivation? It's still true. So again, if here's the original Fourier transform – all right – going from minus P over two to P over two, and if you sample at some rate – I don't know – P prime bigger than P, then you are just periodizing with a period P prime bigger than P. You can still cut off, and you're getting the original signal back. If you periodize by something like that, then you're getting something that has even less overlap, so to speak. Looks like that. All right.

And it's still true that if you cut off – if you cut off by the corresponding rectangle function – you get the signal back again, and you can turn the crank. That's fine. You're doing more work than you have to. The sample points are spaced closer together, so it's still true – go over here – all is well with the rest of the derivation. You're getting more sample points than you need. The sampling rate is higher, meaning the sample points are spaced closer together, and you're getting more of them, so to speak, so you're working harder than you need to.
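Oversampling can be sketched with the same assumed example as before (f(t) = sinc(t)², true bandwidth P = 2 — an illustration, not anything from the lecture): reconstructing at a higher rate P′ > P still works, it just uses more closely spaced samples.

```python
import numpy as np

# Sketch of oversampling, assuming f(t) = sinc(t)^2 with true bandwidth p = 2:
# sampling at any rate >= p still reconstructs f; a higher rate just uses
# more (more closely spaced) sample points than strictly necessary.

def f(t):
    return np.sinc(t) ** 2

def reconstruct(t, rate, n_terms=500):
    k = np.arange(-n_terms, n_terms + 1)
    # truncated sampling formula at the given rate
    return np.sum(f(k / rate) * np.sinc(rate * t - k))

t0 = 0.7
print(reconstruct(t0, rate=2.0), reconstruct(t0, rate=3.0), f(t0))
# both rates recover f(t0); rate 3 is just working harder than it needs to
```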

Now, the real question is – and I'll come back to it in just a minute – what happens if you sample at too low a rate? Oversampling is not an interesting question, so to speak – nothing goes wrong, everything's okay. The interesting question is what happens if you sample at too low a rate. All right. So we'll get back to this today.

Before doing that though I have a couple of other comments – couple other general comments about the formula. So let me write it down again. Actually it's written up there, but let me write it again.

Because there are some other interesting, sort of natural phenomena that are associated with this formula – so I've got to erase it, but let me write it again. So F of T is – assuming that everything is all right, you've done the derivation, you know the bandwidth – you're expressing the function in terms of the sample points this way. Now, we started off in a very physical context. We said, "Suppose you have a set of measurements, and you want to fit a curve to the measurements. You want to pass through the points, and so on."

You will notice the sampling formula involves an infinite number of points here. Right. To interpolate the value of F by this formula involves the sum from minus infinity to infinity. They're a discrete set of points – a discrete set of measurements – but there are an infinite number of them. Interpolation depends on an infinite number of sample points. All right. So you might be quite disappointed in this, but that's the way it is. All right. So certainly for any practical application you can only use a finite sum. And that will introduce an error. All right. That's another subject that we're really not gonna get into, but it's something to be aware of.

So a finite sum – so you need a finite approximation. And there are results that estimate the error, and things like that, but again, those are for particular applications, in particular contexts, and we're not gonna talk about them. It's just something to be aware of. In any real application you'll need a finite sum. All right. So that introduces error. It becomes an approximation instead of an equality, and that's something you have to be aware of. That's one thing. Okay.
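The truncation error can be sketched numerically, again under assumptions not in the lecture (the example signal f(t) = sinc(t)² with P = 2): keeping only |k| ≤ N terms of the interpolation sum gives an approximation whose error shrinks as N grows.

```python
import numpy as np

# Sketch of truncation error in the sampling formula, assuming the example
# f(t) = sinc(t)^2 with bandwidth p = 2: the finite sum over |k| <= N is only
# an approximation, and the error decreases as more terms are kept.

def f(t):
    return np.sinc(t) ** 2

p, t0 = 2.0, 0.25

def truncated(t, N):
    k = np.arange(-N, N + 1)
    return np.sum(f(k / p) * np.sinc(p * t - k))

errors = {N: abs(truncated(t0, N) - f(t0)) for N in (5, 50, 500)}
print(errors)   # the error shrinks as N grows
```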

But actually there's something deeper going on here. There's a deeper phenomenon that's related to the fact that you have an infinite number of samples here. And that is – well, let me state it as follows: a signal cannot be both time limited and bandlimited. I'll write it, and then let's discuss it a little bit.

So the phenomenon is: a signal cannot be both limited in time and limited in frequency. All right. Now, what I mean by this is that there are actually two parts here. All right. If the Fourier transform is identically zero for, say, S greater than or equal to P over two – so in the situation we've been dealing with – then F of T does not have compact support. That's the mathematical way of saying it: it is not identically zero from some point on. It's not identically zero for T sufficiently large. All right.

So if it's limited in frequency, it cannot be limited in time. I'll give you an argument for this in just a second, but let me just state the facts now. So once again, a signal cannot be limited in frequency and limited in time. If it's limited in frequency, it cannot be limited in time. And conversely, the same thing holds switching the two domains. That is, if it's limited in time, it cannot be limited in frequency.

Likewise, if F of T is identically zero, say, for T bigger than Q over two – something like that, outside some range – then the Fourier transform of F of S is not identically zero from any point on. It does not have compact support. It can't be identically zero for S sufficiently large. All right. You can go back and forth between the time domain and frequency domain by the Fourier transform. You have the same property – or the related properties – in either domain. All right.

So in the context over here, because we're assuming the signal is limited in frequency, it can't be limited in time. So you shouldn't expect to be able to interpolate with only a finite number of sample points. All right. A finite number of sample points would only go out so far on the graph, but the graph goes on forever. All right. You can't expect to interpolate values way out on the graph by only sticking to part of the graph. Yeah?

Student:[Inaudible] as opposed to just bounded?

Instructor (Brad Osgood):Sorry?

Student:For the Fourier transform – why does everything say compact support, and not just bounded support?

Instructor (Brad Osgood):Compact is sort of the technical term for closed and bounded. So it means it's zero outside some closed interval. That's just the mathematical term. I find myself using that terminology because it's fairly standard. It's certainly standard in math, and you see it more and more in the engineering literature. But don't sweat it so much.

Compact means closed and bounded. So outside a compact interval means outside some closed and bounded interval. Okay. Closed meaning it includes the endpoints. There are technical reasons why you have to include the endpoints when you talk about things like that. All right.

So once again, to emphasize why this has to be the way it is – why it involves an infinite number of sample points: if it has compact support in the frequency domain – if it's bandlimited – that means it cannot be time limited, so the signal has to go on forever, and you can't expect to interpolate way out in the tails of the signal with only a finite number of points. All right. So it should involve an infinite number of sample points. That's just the way it is – all right – given these results.

Now, why is this true? Well, I'll give you a sort of somewhat bogus argument. There's a more precise – more rigorous – argument given in the notes. So let me prove this: that if the Fourier transform of F of S is identically zero [inaudible] S bigger than or equal to P over two, then F of T goes on forever. All right.

The reason why this is true is – I don't know – it's sort of analogous to what we did in actually deriving the sampling formula, although with one exception. Since the Fourier transform of the signal is zero after a certain point – again, here's minus P over two to P over two, here's the Fourier transform – then it's equal to its cut-off.

The Fourier transform of F – if I cut off by multiplying by a rectangle function of that width, I haven't changed anything, because the rectangle function, again, is one between minus P over two and P over two, and zero outside. So I haven't changed anything. Now, take the inverse Fourier transform on both sides. I haven't periodized here. All right. This is a different argument. I'm not deriving the sampling formula; I'm proving this statement: if the Fourier transform is identically zero from a certain point on, then F of T goes on forever.

Why is that? Well, write this statement out, and then take the inverse Fourier transform. So again, it looks like I haven't done anything here, because I've just written the Fourier transform in terms of the Fourier transform. But now if you take the inverse Fourier transform, the miracle that happens is it swaps convolution and multiplication. All right. So if I take the inverse Fourier transform, you get: F of T is equal to the inverse Fourier transform of this, which is a scaled sinc function – sinc of P T – convolved with F. All right. Sort of a striking statement, actually. All right.

It says that for these functions the sinc function is in some sense functioning like the delta function – functioning like an identity for convolution. All right. The sinc function won't do that for all signals, but it will do it for the bandlimited signals. All right. So F is the sinc convolved with F if the Fourier transform of F is zero beyond a certain point.

Now, here's where I'm just gonna wave my hands. What this means – and I think you can probably make this precise, although there are actually better arguments for the whole circle of ideas – is that the sinc function goes on forever. All right. So convolving F of T with the sinc function is gonna smear out F so that it goes on forever. It's never identically zero beyond a certain point. And so F is never identically zero beyond a certain point.

It's a strange-looking equation, I understand. But the conclusion that you draw is this: you're convolving F with a sinc function – just look at the right-hand side. The function on the right-hand side, you would say, goes on forever; it is not identically zero beyond a certain point. And what do you know, it's equal to F. Okay. So F goes on forever. All right.
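The convolution identity itself can be checked numerically. This is a rough sketch under assumptions not in the lecture: the example f(t) = sinc(t)² with P = 2, a finite grid, and a Riemann-sum approximation of the convolution integral, so the agreement is only approximate.

```python
import numpy as np

# Rough numerical sketch of f = (p sinc(p t)) * f for a bandlimited f, assuming
# f(t) = sinc(t)^2 with p = 2, and approximating the convolution integral by a
# Riemann sum on a finite grid (so the check is only approximate).

dt = 0.01
t = np.arange(-40, 40 + dt, dt)
f = np.sinc(t) ** 2
kernel = 2.0 * np.sinc(2.0 * t)       # p sinc(p t): inverse transform of the rect of width p

conv = np.convolve(f, kernel, mode="same") * dt   # discrete approximation of (kernel * f)(t)
i0 = len(t) // 2                      # index of t = 0
print(conv[i0], f[i0])                # close to f(0) = 1: sinc acts like an identity here
```

For a signal that isn't bandlimited to within the rect, the same convolution would smooth the signal rather than return it unchanged.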

I don't know if you can actually make this argument completely rigorous. It's the one that's always given in electrical engineering classes. I don't like it myself, but it sort of gives you some reason for why this is true. And as I said, there's a more rigorous argument given in the notes, but I won't go through that. Okay. But nevertheless it's an important principle. All right.

Now, think about this a little bit. This is an example of where mathematics meets the real world, and they disagree. All right. We think of mathematics as giving a pretty accurate description of the real world, and it's kind of spooky sometimes how well it works. But in the real world, signals really don't go on forever. Real signals really don't go on forever. And in the real world, signals really are limited in frequency. The frequencies don't go on forever. All right.

This says that you can't have both of those. All right. You cannot have a signal which is both limited in frequency and limited in time. It can't be. If it's limited in frequency, it's unlimited in time. If it's limited in time, it's unlimited in frequency. All right. But any real signal is limited in time and limited in frequency. So there's a clash. All right. There's a clash between the mathematical theorem and what you actually encounter. Okay. That's something to take note of. All right.

On the other hand – so that's just to say mathematical models are imperfect, or the real world is imperfect, however you want to look at it. Okay. On the other hand, you can also look at this in some way as a qualitative version of the uncertainty principle. And we've seen situations somewhat like this before. That is, we've seen that concentrated in the time domain means spread out in the frequency domain, and concentrated in the frequency domain means spread out in the time domain. We've seen various instances of that, and that's a very important principle in Fourier analysis. It's very important for your intuition to realize that if something is concentrated in one domain, it's spread out in the other domain.

So here, concentration is in terms of compact support. If it's identically zero beyond a certain point in one domain, it has to go on forever in the other domain. All right. The uncertainty principle in quantum mechanics is a statement that you can't simultaneously localize two physical quantities: momentum and position, say. All right. And that was also sort of a shock to people's conception of the physical world, and how mathematics modeled the physical world. I mean, the idea was that you should be able to localize with arbitrary precision both where something is and how fast it's going. All right. But from the point of view of quantum mechanics, that's not true. You can't do that.

And this, in some sense, can maybe be viewed as related to that. All right. So maybe it's not a mathematical imperfection in the way the real world is described. Maybe you have to change your mind about the way the real world is. Maybe it's the imperfect statement to say that a real signal has to be both limited in time and limited in frequency. Maybe that's really also an approximation. All right. So think about that tonight as you're lying in bed staring up into the abyss. Okay.

By the way – we didn't do it, but there is in the notes also a precise proof of the uncertainty principle. There are various ways of formulating it in terms of the support of a function and the support of its Fourier transform – where the Fourier transform is non-zero, where the function is non-zero – and relating those two in a more quantitative way. This is a qualitative statement: it can't be both concentrated in time and concentrated in frequency, or it can't be both time limited and bandlimited. But there are ways of making these statements more quantitative. And they're interesting. Okay.

Now, let me make sure I didn't miss anything – any more profound statements here. Okay. The final thing I want to talk about – I think I told you – I love this material. I think the sampling theorem and all the material surrounding it is really fascinating, and if I had my way I'd just talk about it more and more. But I really only want to talk in a little more detail about one other phenomenon that's associated with sampling and interpolation, and that's the phenomenon of aliasing.

That's what you sort of think of as the natural phenomenon associated with a misapplication of the sampling theorem – or a misapplication of the proof. So aliasing and interpolation is the other circle of ideas that I want to talk about. All right.

So one more time, the fundamental setup is: F of T is bandlimited. The Fourier transform of F of S is identically zero for S greater than or equal to P over two. And you have the equation that the Fourier transform is the cut-off of its periodized version. All right. Everything depends on that equation. Okay.

Now, here's a common problem for signals that you're sampling and might want to interpolate. If you know the bandwidth – if you have a pretty good idea of P – you can always take P too large. All right. That's not a problem. You're working harder than you have to, but at least as far as this formula goes, it's okay if you take a P beyond where the Fourier transform is identically zero. The problem is if you don't know the bandwidth so well, and you take a P that's too small.

So what happens if you try this for a P – let's call it P prime – that's too small? Okay. So here's the picture. It's bandlimited, all right, so there's a real value for the bandwidth, but you take something that's a little bit too small. Here's minus P prime over two. Here's plus P prime over two. And the Fourier transform actually bleeds out a little bit over that. Okay. All right.

You can form the periodization. You can derive the sampling formula just like you always derive the sampling formula. So you can form the Fourier transform of F convolved with the Shah function with spacing P prime, and you can cut it off. What do you get if you form that? Well, again, here's the Fourier transform with its real bandwidth shown. Now I periodize with too small a period – minus P prime over two to P prime over two.

Then what's gonna happen is, if I shift it by P prime, it's not gonna be shifted off itself. It's gonna overlap. All right. It's gonna overlap and add up. So this new function – the Fourier transform of F convolved with the Shah function, whose spikes are spaced a little too close together – is gonna give you something that looks like this, and so on, and so on. Now, you should convince yourself of that. All right. It's not hard to see pictorially what's going on, and that's exactly what the issue is. But anyway, you can do it. That's fine.

And you can also cut off. All right. You persevere here. You think you're doing everything right. All right. You think you're doing everything fine. So you cut off by that. You form Pi P prime – the rectangle function of width P prime – times the Fourier transform of F convolved with the Shah function of spacing P prime. All right.

Now, if you cut off by the corresponding rectangle function – let me see if I can draw the picture here again. It's roughly like this. Here's P prime. Here's the new curve, from the sum – it goes like that. All right. If you cut off by the rectangle function, then the curve you get is this. Right? That's not the original Fourier transform. You don't get back the Fourier transform of F. Okay. You don't. But you persevere.
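The overlap-and-cut-off failure can be sketched numerically with the same assumed triangle spectrum as before (an illustration, not from the lecture): periodize with too small a period P′ = 1.5 < 2, and the cut-off no longer returns the original transform.

```python
import numpy as np

# Sketch of undersampling in the frequency domain, assuming the triangle
# spectrum tri(s) = max(0, 1 - |s|) with true bandwidth p = 2, periodized with
# too small a period p' = 1.5: the shifted copies overlap and add, so cutting
# off by the rect of width p' does NOT return the original transform.

def tri(s):
    return np.maximum(0.0, 1.0 - np.abs(s))

p_prime = 1.5
s = np.linspace(-3, 3, 1201)
periodized = sum(tri(s - n * p_prime) for n in range(-5, 6))
cutoff = (np.abs(s) < p_prime / 2) * periodized

print(np.max(np.abs(cutoff - tri(s))))   # not zero: the copies overlapped
```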

You don't know that. Right? You are a Stanford engineer. No mistake will stop you. Right? All right. You persevere. What do I mean by persevere? You take the inverse Fourier transform. You derive the sampling formula – all right – because the sampling formula is identical with the derivation of the sampling formula. All right.

So you take the inverse Fourier transform of this: Pi P prime times the Fourier transform of F convolved with the Shah function of spacing P prime – just as you always did. All right. And what do you get? You get a sampling formula. You get the same formula it looked like you had before. You get the sum from K equals minus infinity to infinity of the value of F at the sample points K over P prime times the shifted sinc function – sinc of P prime times the quantity T minus K over P prime. That's what you get. You take the inverse Fourier transform of this expression over here, you turn the crank, and that's exactly what happens.

But you don't get F. You can't get F. All right. You can't get F because this is not the Fourier transform of F. You get something, but you don't get F. You do not get F of T, because the sampling formula is identical with the derivation of the sampling formula, and you don't have this: the Fourier transform of F is not equal to the cut-off of its periodized version. Because the Fourier transform is not recovered this way, that formula can't be recovering F – can't be. All right. You don't get F, but you get something.

What do you get? Okay. What is the relationship between F and what you get? Well – and everybody got this? This is a very important point. All right. You get something. It's a fine signal, so let's call it G. So let's say G of T is this sum: K equals minus infinity to infinity of F of K over P prime times sinc of P prime times the quantity T minus K over P prime. That's it. Okay.

What does G have to do with F? Anything? Yes. It has the same sample values as F. F and G agree – are equal – at the sample values. So F of T and G of T are equal at the sample values K over P prime. Why? Because – let's take one. So G of M over P prime. All right. That's the value at a sample point, for some integer M. All right. So what is that? That's the sum, K going from minus infinity to infinity, of F of K over P prime times sinc of P prime times the quantity M over P prime minus K over P prime. Right?

Now, look at what's happening in the sinc function there: sinc of P prime times the quantity M over P prime minus K over P prime. You'll have to remember some properties of the sinc function here. Multiply the P prime through. That P prime cancels the denominator in both cases, so that's sinc of M minus K. Right? This is the sinc function of an integer – M and K are integers here. All right.

It's one when M is equal to K, and zero otherwise. All right. Sinc of M minus K is equal to one if M is equal to K, and zero if M is different from K. The sinc function has zero crossings – if you want to use that terminology – at the integers, all right, except at the origin, where it's equal to one. All right. So all these terms drop out except when K is equal to M.

In other words, to write it down one more time: G of M over P prime is equal to the sum, K going from minus infinity to infinity, of F of K over P prime times sinc of P prime times the quantity M over P prime minus K over P prime. All the terms drop out except when K is equal to M, and that then gives you F of M over P prime. All right. They agree at the sample points, but not elsewhere.
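The sinc property this computation rests on is a one-liner to check. A quick sketch (using np.sinc, which is the normalized sin(πx)/(πx) convention used here):

```python
import numpy as np

# Quick check of the property used above: sinc is 1 at 0 and (numerically)
# 0 at every other integer, which is what collapses the sum to one term.

n = np.arange(-10, 11)
vals = np.sinc(n)
print(vals[n == 0], np.max(np.abs(vals[n != 0])))  # 1 at the origin, ~0 elsewhere
```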

There may be some other accidental agreement. Who knows? Who cares? But the one thing you do know is they agree at the sample points. All right. Because of this property you say that G is an alias of F, or that F and G are aliases of each other. So you say that G of T is an alias of F of T. It's not equal to F of T, but it has the same sample values as F of T.
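Here is a small sketch of that agreement, assuming NumPy and a truncated version of the interpolation sum, with the guessed rate P prime taken to be one so the samples sit at the integers. The agreement at the sample points is exact even for the truncated sum, because every term but one vanishes there:

```python
import numpy as np

def f(t):
    # the "true" signal; its actual bandwidth doesn't matter for this check
    return np.cos(9 * np.pi * t / 2)

def g(t, n_terms=50):
    # truncated interpolation sum with guessed rate p' = 1:
    # g(t) = sum over k of f(k/p') * sinc(p' * (t - k/p'))
    k = np.arange(-n_terms, n_terms + 1)
    return np.sum(f(k) * np.sinc(t - k))

# g agrees with f at every sample point m / p' = m
for m in range(-5, 6):
    assert abs(g(m) - f(m)) < 1e-9
```

Away from the integers, f and g are different functions; only the samples coincide.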

Now, what that means is, among other things, if you know only the sample values – if that's all you know about F – you cannot distinguish F from G. All right. You know some things about F – say you know the sample values – and you make a guess about the bandwidth. All right. If you guess wrong, and you reconstruct G according to the derivation of the sampling formula – because the sampling formula is identical with the derivation of the sampling formula – then you are constructing not F, but an alias of F, and you don't know that. All right. You don't know that because you don't know anything about F other than its sample values. So you cannot distinguish the two. That's why I use the term alias. It's a perfect masquerade. All right. It's G masquerading as F because they have the same sample values. Okay.

And it arises – I wouldn't say from the misapplication of the sampling formula. I don't think that's fair. It comes from deriving the sampling formula as you ordinarily would when it doesn't apply – or rather, when that fundamental equation relating the Fourier transform to the cut-off of the periodized Fourier transform is not satisfied. You get something – you can turn the crank, you get a formula – but you're not reproducing the original function.

Now, you've probably studied aliasing and things like that – not today, but next time I hope to actually bring in a demo of this. And you may have talked about various things folding higher frequencies into lower frequencies, and all the rest of that stuff. That's fine, but for me, again, I never think of the sampling formula without thinking of the derivation of the sampling formula.

And if I want to understand aliasing, and what's going on in aliasing, I think about exactly this argument. What's gone wrong? What are you doing? The derivation was fine. Everything we did was fine, but what's not true is this fundamental equation: the Fourier transform is not equal to the cut-off of the periodized Fourier transform – periodized at spacing P prime here. All right.

If the Fourier transform is not equal to the cut-off of the periodized Fourier transform, then what you're getting back by turning the crank – by writing down the sampling formula – is not reproducing F. It's reproducing something else. It's reproducing an alias of F. Okay.

Now, I want to show you one explicit example. This is a fairly simple example, but a nice one. And this is done in the notes actually. All right. So I won't go through all the details, but I want you to see how it works in one particular case.

So again, I want to stress that in aliasing it's not that you've done something wrong. You have reproduced a signal that had the right sample values, but it's not the signal that you hoped to reproduce, because of this. All right. It's not that you made a mistake somehow – or maybe you have made a mistake, but you haven't sinned against the forces of good. It's just that you're deriving the sampling formula, pursuing the argument the natural way, but you're not getting back the original function because this equation is not satisfied. Okay.

So I want to do an example. So again, this is in the notes. Example of aliasing. All right. So I'm gonna take just a cosine, so the spectrum is just two Delta functions. F of T is cosine of nine Pi over two times T. All right. So this says the frequency is nine fourths Hertz. Now, I know what the Fourier transform looks like. The Fourier transform of F looks like two Delta functions: one half Delta of S plus nine fourths, plus one half Delta of S minus nine fourths. All right.
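You can see that spectrum numerically, too. This is a sketch assuming NumPy; the 16 Hertz sampling rate and 8-second window are arbitrary choices that happen to put nine fourths Hertz exactly on an FFT bin:

```python
import numpy as np

fs, T = 16, 8                    # 16 Hz sampling, 8-second window
t = np.arange(fs * T) / fs       # 128 sample times
x = np.cos(9 * np.pi * t / 2)    # f(t) = cos(9 pi t / 2), frequency 9/4 Hz

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

# the energy is concentrated at 9/4 = 2.25 Hz
assert freqs[np.argmax(spectrum)] == 2.25
```

With this oversampled grid the two spikes of the continuous-time spectrum show up cleanly; the aliasing below happens only when the sampling rate is pushed too low.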

So what's the picture? The picture is this. Here's the Fourier transform. Here's zero. There's a Delta at nine fourths. There's a Delta at minus nine fourths. Now, it's a little tricky here – this is something that, again, is discussed in the notes, and I'm gonna sort of skate over it now. If you want to reproduce this function by a sampling formula, what do you do? You periodize this, you cut off, and so on.

So how would you periodize this? What is the appropriate P to take? Well, the appropriate P should have P over two strictly bigger than nine fourths, because you want to shift this completely off itself. The problem here is that there's a spike right at the endpoint – the spectrum drops off sharply right there. All right. So there's a little bit of an issue here at the endpoints, but let me just sort of skate by that, and you can look at the notes for a little bit more detail.

So the proper sampling rate would be anything greater than nine halves – not nine fourths, but twice that, the whole bandwidth – because that's what shifts the spectrum completely off itself when you convolve with a Shah function of that spacing. All right.

So you would want to convolve – so P over two here is nine fourths, so P is equal to nine halves – but again, you want to take P anything bigger than that to shift the spectrum off itself when convolving with the Shah function of spacing P. All right. And again, there's a little bit more to say here about the endpoints, but never mind that right now.

The phenomenon that I want to talk about is: what if you can't do that? What if your sampler is stuck in low, and you can only sample at too low a rate? All right. So what if we can't do this? What if the sampler is stuck in low? All right.

So, for example, say we sample at rate P equal to one. Okay. So in order to reproduce this thing you'd want to take samples at a rate greater than nine halves Hertz, but you can't always do that. You can only take samples at a rate of one per second. All right. Play along. Play along. The sampling formula is identical with the derivation of the sampling formula.

What do I have to do? To sample at rate one means to convolve with Shah sub one. Sorry – I probably should've said that a little more forcefully when I was talking about the original derivation. To sample at rate P means to convolve the Fourier transform with the Shah function of spacing P. To sample at rate one means to convolve with the Shah function of spacing one. Right? To sample at P equals one means to form the periodization and the cut-off: to form Pi one of the Fourier transform of F convolved with the Shah function of spacing one. Very good. Right? That's what it means, because the sampling formula is identical with the derivation of the sampling formula.

Now – all right. You can easily form that convolution, and I won't do all of it: one half Delta of S plus nine quarters plus one half Delta of S minus nine quarters, convolved with the Shah function, the sum from K equals minus infinity to infinity of Delta of S minus K – spacing one. All right. What you use is – I'll remind you of the formula for convolving shifted Delta functions – Delta sub A convolved with Delta sub B is Delta sub A plus B. Okay. All right. And you find if you do that you get a bunch of arrows. All right.

You get a bunch of arrows – remember there are two parts to this derivation of the sampling formula: there's periodizing, and there's cutting off. All right. So you get a bunch of arrows, but then you have to ask yourself, "This is Pi one. Which arrows are within the interval from minus a half to a half?" Only two Deltas lie within minus a half less than S less than a half. What you find – I want to be sure I got this right – is that they sit at a quarter and at minus a quarter: only Delta of S minus a quarter and Delta of S plus a quarter.

That is to say, if I cut off by a rectangle function of width one the Fourier transform of F convolved with the Shah function of spacing one, the only thing that remains are these two Delta functions. That is, it's one half Delta of S plus one quarter plus one half Delta of S minus one quarter. All the other Deltas are outside the range. Okay. You can just check that out algebraically by convolving using this formula. Okay. All right. Play along.
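The bookkeeping with the arrows can be sketched in a few lines, using nothing but the Delta sub A convolved with Delta sub B equals Delta sub A plus B rule. Periodizing puts a delta at plus or minus nine quarters plus K for every integer K; cutting off with Pi one keeps only the ones inside the interval from minus a half to a half. This uses Python's exact fractions so the comparison is exact:

```python
from fractions import Fraction

# positions of the deltas in the Fourier transform of f: +-9/4
base = [Fraction(9, 4), Fraction(-9, 4)]

# convolving with Shah of spacing 1 shifts each delta by every integer k
# (a finite range of k is enough to cover the cut-off interval)
shifted = {a + k for a in base for k in range(-10, 11)}

# cutting off with Pi_1 keeps only deltas inside (-1/2, 1/2)
survivors = sorted(s for s in shifted if -Fraction(1, 2) < s < Fraction(1, 2))

assert survivors == [Fraction(-1, 4), Fraction(1, 4)]
```

Only the deltas at plus and minus one quarter survive the cut-off, exactly as in the lecture.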

What does the derivation of the sampling formula say? The derivation of the sampling formula says: take this formula and take its inverse Fourier transform. The derivation says to take the inverse Fourier transform of Pi one times the quantity Fourier transform of F convolved with Shah sub one. That'll give you the function – that'll give you the so-called interpolating function.

So what do you get? You get the inverse Fourier transform of that. That's easy. That's a cosine – another cosine. The inverse Fourier transform of one half Delta of S plus a quarter plus one half Delta of S minus a quarter is cosine of Pi over two times T. All right. Deriving the sampling formula at the sampling rate one, which is too small, has led to the function cosine of Pi over two times T, not to the function cosine of nine Pi over two times T. All right. Deriving the sampling formula, once again, leads to the function G of T equals cosine of Pi over two times T, not to F of T equals cosine of nine Pi over two times T. All right.

Not that you've done something wrong – you haven't made any mistakes along the way, other than the mistake of taking too low a sampling rate. You've turned the crank and you found a function, but you didn't find the function you wanted. All right. But you did find a function which agrees with the function that you wanted at all the sample values.

And you find that G of T equals F of T at the sample values – which sample values? The sample values corresponding to taking samples every second: at zero, plus or minus one, plus or minus two, and so on. Okay. You have found an alias of the function, not the function. You've done that by deriving the sampling formula with a sampling rate that's too small. All right. In the sense that the fundamental equation – the cut-off of the periodized Fourier transform does not give you the Fourier transform back – that fundamental equation is not satisfied. All right. And so what you've come up with is something different. Okay.

That's the main point. Aliasing is, so to speak, the natural phenomenon that's associated with the sampling theorem. All right. But it is nothing other than doing the right work in the wrong cause, so to speak – all right – by taking the wrong sampling rate – by taking a sampling rate such that the fundamental equation's not satisfied. Okay. All right.

That's about all I wanted to say on this, actually, as much as I love this topic. So next time we're gonna use it to give a derivation of the discrete Fourier transform, and we're gonna move into the modern world, from the analog to the digital. All right. Next time.

[End of Audio]

Duration: 52 minutes