If we can directly observe a function at a value (like x=0, or x growing infinitely), we don’t need a prediction. The limit wonders, “If you can see everything except a single value, what do you think is there?” When our prediction is consistent and improves the closer we look, we feel confident in it. And if the function behaves smoothly, like most real-world functions do, the limit is where the missing point must be.

Imagine watching a ball roll across the field during a game. Unfortunately, the connection is choppy: ack! We missed what happened at 4:00. Even so, what’s your prediction for the ball’s position?

Easy: just grab the neighboring instants (3:59 and 4:01) and predict the ball to be somewhere in-between. And… it works! Real-world objects don’t teleport; they move through intermediate positions along their path from A to B. Our prediction is “At 4:00, the ball was between its position at 3:59 and 4:01”. With a slow-motion camera, we might even say “At 4:00, the ball was between its positions at 3:59.999 and 4:00.001”.

Why does this prediction feel solid? The predictions agree at increasing zoom levels. Suppose instead that zooming in had widened our estimate: uh oh! Zooming should narrow our estimate, not make it worse. Not every zoom level needs to be accurate (imagine seeing the game every 5 minutes), but to feel confident, there must be some threshold where subsequent zooms only strengthen our range estimate.

Now imagine that at 3:59 the ball was at 10 meters, rolling right, and at 4:01 it was at 50 meters, rolling left. What happened? We had a sudden jump (a camera change?) and now we can’t pin down the ball’s position. Which one had the ball at 4:00? This ambiguity shatters our ability to make a confident prediction.

With these requirements in place, we might say “At 4:00, the ball was at 10 meters”.

Our prediction, the limit, isn’t required to match reality. But for most natural phenomena, it sure seems to. Why do we need limits?
Math has “black hole” scenarios (dividing by zero, going to infinity), and limits give us an estimate when we can’t compute a result directly. To summarize:

- What is a limit? Our best prediction of a point we didn’t observe.
- How do we make a prediction? Zoom into the neighboring points. If our prediction is always in-between the neighboring points, no matter how much we zoom, that’s our estimate.

No wonder limits, the foundations of calculus, seem so artificial and weasely at first: “Let x approach 0, but not get there, yet we’ll act like it’s there…” Ugh.

This intuition helps with a common question about continuity: “I have an intuitive understanding of continuity, but I’m struggling to understand the rigorous definition. I understand that a function is continuous when you can get infinitely close to each point of it, until you arrive at it; basically, you can draw the whole thing without lifting your pencil. I’m struggling to see, however, how this is reflected in the rigorous definition $\lim_{x \to c} f(x) = f(c)$. It seems to me that the rigorous definition is just saying that as $x$ gets closer to $c$, the limit of $f(x)$ will be $f(c)$. Or rather, as we sub in values of $x$ that are infinitely close to $c$, the value of the function becomes infinitely close to the value of $f(c)$. But this seems true even when the function isn’t continuous! For instance, it seems true for a function with a point discontinuity. So can someone explain what the formal definition means, how it is only true for continuous functions, and how it is compatible with the two intuitive definitions I wrote above? Can you also not make the explanation too rigorous? I’m just learning Khan Academy Calculus, and still haven’t touched on things like epsilon-delta proofs yet.”

One response: try a function with what you are describing as a point discontinuity, say $f(0)=1$ with $f(x)=0$ everywhere else. What happens at the origin? As we “sub in values” on the right or left, which are all $0$, are they actually all that close to $f(0)=1$? To be fully rigorous, one would need to introduce $\delta$’s and $\epsilon$’s; after all, that’s how we really make sense of the symbols. The symbols and logical statements may seem daunting, but they can avoid some of the early confusions around the notation, rendering them well worthwhile to learn, not to mention essential for higher math. As an illustration, it is easy to see that the function above, and any function with a jump discontinuity like it, is not continuous using the following definition:

$f$ is continuous at $c$ if for any level of precision $\epsilon>0$ we can find a corresponding $\delta>0$ such that $|x-c|<\delta$ implies $|f(x)-f(c)|<\epsilon$. The idea is that $\epsilon>0$ encodes how close we need the values of $f$ at all these sample points to be.

Well, let’s pick a level of precision $\epsilon=1/2$ for our example. No matter how small a $\delta$ we try, the interval $|x|<\delta$ contains points $x \neq 0$ where $f(x)=0$, so $|f(x)-f(0)|=1$, which is not within $\epsilon=1/2$. No $\delta$ works, so $f$ is not continuous at $0$.
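The $\epsilon$–$\delta$ test discussed above can be probed numerically. The sketch below is my own illustration, not from the original text: the function `g`, the helper `looks_continuous_at`, and all parameter choices are assumptions. It searches for a $\delta$ that keeps sampled values of $f$ within $\epsilon$ of $f(c)$, and fails to find one for a point discontinuity.

```python
# My own numerical sketch of the epsilon-delta test (all names illustrative).
# g has a point discontinuity at 0: g(0) = 1 but g(x) = 0 everywhere else.
def g(x):
    return 1.0 if x == 0 else 0.0

def looks_continuous_at(f, c, eps, deltas=(1e-1, 1e-3, 1e-6), samples=1000):
    """Search for a delta such that |x - c| < delta keeps |f(x) - f(c)| < eps.

    Sampling can't prove continuity, but a failed search for a jump like g's
    is the epsilon = 1/2 argument from the text, made concrete.
    """
    for delta in deltas:
        # Sample points strictly inside (c - delta, c + delta).
        xs = [c + delta * k / samples for k in range(-samples + 1, samples)]
        if all(abs(f(x) - f(c)) < eps for x in xs):
            return True  # this delta worked at every sample point
    return False

print(looks_continuous_at(g, 0.0, eps=0.5))    # False: nearby values sit a full 1 away from g(0)
print(looks_continuous_at(abs, 0.0, eps=0.5))  # True: |x| stays within 1/2 of 0 once delta <= 0.1
```

Note the asymmetry: a failed search is strong evidence of a discontinuity, but a successful one only checks finitely many points; the real definition quantifies over every $x$ with $|x-c|<\delta$.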
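The earlier “zoom into the neighboring points” recipe can also be sketched in code. Everything here is my own illustrative sketch, not part of the original: the choice of $\sin(x)/x$ (undefined at $x=0$, a classic “black hole” value) and the helper names are assumptions. We never evaluate the function at the missing point, only at its neighbors, and the predicted range tightens as we zoom.

```python
import math

def f(x):
    # sin(x)/x is undefined at x = 0 itself -- the point we must predict.
    return math.sin(x) / x

def bracket(f, c, h, samples=100):
    """Range of f over neighbors of c (excluding c itself) within distance h."""
    xs = [c + h * k / samples for k in range(-samples, samples + 1) if k != 0]
    ys = [f(x) for x in xs]
    return min(ys), max(ys)

# Zoom in: each smaller window gives a tighter range for the missing value f(0).
for h in (0.1, 0.01, 0.001):
    lo, hi = bracket(f, 0.0, h)
    print(f"zoom h={h}: the missing value should be between {lo:.6f} and {hi:.6f}")
```

Each printed range hugs 1 more tightly, which is why the limit of $\sin(x)/x$ at $0$ is taken to be $1$ even though the function is never evaluated there.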