Administrative stuff -- most importantly, sign up for your section on Coursework! Everything is on the website math42.stanford.edu. Three main topics in the course: Taylor series, applications of integration, differential equations. Be warned: Taylor series will be tough! Also discussed what to try to get out of the course -- many applications of calculus are "embedded" inside other topics, like statistics.
Refresher on integration: definition of the definite integral, and why areas are trickier than you might think. Indefinite integral, fundamental theorem of calculus. Integration by substitution, example of x*cos(x^2). Integration by parts, example of ln(x). How to approximate 100 factorial by taking ln and approximating the resulting sum by a definite integral (the last was done very quickly, probably hard to follow).
Corresponds to 5.1--5.6.
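The quick ln(100!) estimate from the end of lecture can be sketched numerically: compare the sum of ln(k) with the definite integral of ln(x) from 1 to 100, using the antiderivative x*ln(x) - x.

```python
import math

# ln(100!) = ln(1) + ln(2) + ... + ln(100), approximated by the
# definite integral of ln(x) from 1 to 100.
log_sum = sum(math.log(k) for k in range(1, 101))

# Antiderivative of ln(x) is x*ln(x) - x, so the integral is
# (100*ln(100) - 100) - (1*ln(1) - 1).
log_integral = (100 * math.log(100) - 100) - (1 * math.log(1) - 1)

print(log_sum)       # about 363.7
print(log_integral)  # about 361.5 -- close on this scale
```

The two agree to within a few units out of more than 360, which is why the integral gives a usable estimate of 100!.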
Improper integrals: what does it mean (and why do we care) to compute the area under a graph that is infinite in the x- or y-direction? Definition by means of limits for basic Type 1 and Type 2 integrals. WARNING: I should have specified "which direction" to take the limit in the Type 2 case but didn't -- I will discuss this in next class. Example of 1/x^2 integrated from 0 to 1 (divergent, i.e. limit doesn't exist) and from 1 to infinity (converges and equals 1). Example of escape velocity, to explain why improper integral can measure something physically relevant. Integral of cos(x) from 0 to infinity is divergent, but for different reasons: area doesn't go to infinity but just oscillates! Interactive question and discussion of (hard!) question, integrating sin(x)/x from 1 to infinity converges. General improper integral: break up into pieces, where each piece encounters just one type of infinity.
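A quick numerical sketch of the 1/x^2 examples (the endpoints below are illustrative choices): on [1, t] the approximations settle down as t grows, while on [eps, 1] they blow up as eps shrinks.

```python
def integral_inv_square(a, b, n=100000):
    """Midpoint-rule approximation of the integral of 1/x^2 on [a, b]."""
    h = (b - a) / n
    return sum(h / (a + (k + 0.5) * h) ** 2 for k in range(n))

# Type 1: from 1 out to larger and larger t -- values approach 1.
for t in [10, 100, 1000]:
    print(t, integral_inv_square(1, t))

# Type 2: from smaller and smaller eps up to 1 -- values blow up.
for eps in [0.1, 0.01, 0.001]:
    print(eps, integral_inv_square(eps, 1))
```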
Quick review of type 1, type 2 improper integrals. [11am lecture only: warm up example of 1/sqrt(x) integrated from x=0 to x=1, and compared this with 1/x integrated from x=0 to x=1.] Comparison test tells you convergence/divergence (without evaluating) by comparing with an easy-to-integrate function. Integral of 1/(1+x^4) from 1 to infinity: we can see this converges because it is less than 1/x^4, whose integral converges. Formalized by the comparison test.
More difficult example: integral of 1/sin(x) from x=0 to x=Pi/2. For this one, we notice (by computing values or by looking at the tangent line) that sin(x) is roughly x for x small, and so we'd expect it to be similar to the integral of 1/x, which diverges on the same range. To make this more formal, note that sin(x) <= x, so 1/sin(x) >= 1/x, and use the comparison test to conclude that the integral diverges.
Numerical integration (5.10): The basic method is to cut up into rectangles. This can be improved greatly! Study the simple case of the area of a room with three straight sides and one curved side. The first approximation is to approximate the room by a trapezoid. [10am lecture: For a better approximation, we add a parabolic "cap" to the trapezoid, but everyone found this confusing. ] 11am lecture: For a better approximation, we use a parabola ax^2+bx+c to approximate the graph of the room, and then integrate the parabola. Next time: we'll apply these lessons from the "room area" computation back to numerical integration.
Numerical integration ctd. (5.10) -- recap of the room with one curved side. First approximation based on a trapezoid; better approximation by using a parabolic approximation to the curved side. (This was just stated, not derived, but we worked out a simple example to see it really works.) These approximations lead to the trapezoid rule and Simpson's rule.
Simpson's rule works very well in practice! -- example of integrating sin(x) from x=0 to x=Pi/2, where Simpson's rule far outperforms the trapezoid rule. Only about 20 intervals are needed to get six digits of accuracy. Interactive question involved intervals of different sizes, where one could mix Simpson's and trapezoid rules to get a good answer.
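The sin(x) comparison from lecture can be reproduced in a few lines; with 20 intervals the trapezoid rule is off in the fourth digit while Simpson's rule is good to about six digits.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n intervals."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + k * h) for k in range(1, n)) + f(b) / 2)

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + k * h) for k in range(1, n, 2))
    s += 2 * sum(f(a + k * h) for k in range(2, n, 2))
    return s * h / 3

# The integral of sin(x) on [0, pi/2] is exactly 1, so we can see
# the true errors with n = 20 intervals.
n = 20
print(abs(trapezoid(math.sin, 0, math.pi / 2, n) - 1))  # roughly 5e-4
print(abs(simpson(math.sin, 0, math.pi / 2, n) - 1))    # roughly 2e-7
```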
Error bounds. (I didn't quite complete the discussion of error bounds for the 11am class; we did that on Wednesday instead.) On the exam you will be given all the formulas for the integration rules as well as the error bounds.
We need to first discuss how to make sense of infinite sums. Definition of infinite series: it converges if, when you add up more and more terms, it gets closer and closer to something. Rigorous definition in terms of limits of the sequence of partial sums (8.2).
This definition uses limits of sequences, so we went over this in 8.1: basic definition, and some methods of computation (Theorem 2 of 8.1 and the squeeze theorem from 8.1).
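The partial-sum definition can be seen concretely with the geometric series 1/2 + 1/4 + 1/8 + ..., whose partial sums approach 1 (the series chosen here is an illustrative example, not necessarily the one from lecture).

```python
# Partial sums s_N = 1/2 + 1/4 + ... + 1/2^N of a geometric series.
# The sequence of partial sums converges to 1, so the series sums to 1.
s = 0.0
for n in range(1, 21):
    s += 1 / 2**n
    if n % 5 == 0:
        print(n, s)  # gets closer and closer to 1
```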
Limit comparison test. Example of 1/(n^3-1), and use LCT with 1/n^3. This was only briefly discussed.
Start of 8.4: alternating series. Example of 1-1+1-1+1-1+... and 1-1/2+1/3-1/4+... : the first one diverges, the second converges. Alternating series test and alternating series estimation theorem: when approximating the series by a partial sum, the error is at most the first missed term.
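The estimation theorem is easy to check numerically for 1 - 1/2 + 1/3 - 1/4 + ..., which converges to ln(2) (a standard fact; the specific term counts below are illustrative choices).

```python
import math

def partial(N):
    """Partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - ..."""
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

# The series converges to ln(2). By the alternating series estimation
# theorem, the error after N terms is at most the first omitted term,
# 1/(N+1).
for N in [10, 100, 1000]:
    err = abs(partial(N) - math.log(2))
    print(N, err, 1 / (N + 1))  # error stays below the first omitted term
```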
8.4 continued: absolute convergence. Discussion of Theorem on page 588 (which I called "Important Fact" in class). Example of the sum of cos(n)/n^2.
Brief discussion of absolute convergence. Absolute convergence implies convergence, but not the other way around.
Ratio test, example of geometric series. Mainly useful for power series.
Start of 8.5: a power series is just an infinite polynomial. Very useful because most functions can be described by a power series, and power series are easy to handle -- easy to differentiate, integrate and evaluate.
Some sample power series, including Taylor series of ln(1-x), e^x and sin(x) (slightly different examples in the two lectures). Computation of where they converge by means of the ratio test.
Radius of convergence.
8.6: We started with 1/(1-x) = 1+x+x^2+... and derived some other power series representations of functions by differentiating and integrating. In particular, we obtained power series for 1/(1-x)^2 and -ln(1-x).
These power series can be used to compute values! We plugged x=0.5 and x=0.8 into the power series for -ln(1-x). The first one (x=0.5) converged much more rapidly than x=0.8. Moral of the story: even when they converge, power series behave worse and worse as x approaches the radius of convergence.
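A quick sketch of this convergence comparison (the term count of 10 is an illustrative choice, not the one used in lecture):

```python
import math

def minus_log_series(x, n_terms):
    """Partial sum of -ln(1-x) = x + x^2/2 + x^3/3 + ..."""
    return sum(x**n / n for n in range(1, n_terms + 1))

# Compare 10-term partial sums against the true value of -ln(1-x)
# at x = 0.5 and x = 0.8.
err_half = abs(minus_log_series(0.5, 10) - (-math.log(0.5)))
err_eight = abs(minus_log_series(0.8, 10) - (-math.log(0.2)))
print(err_half, err_eight)  # the x=0.8 error is hundreds of times larger
```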
10am lecture only: I used power series to integrate 1/(1+x^5) from 0 to 0.5, but I think it was very confusing. If you were confused, don't worry -- we'll cover it again.
We derived the power series expansion of e^x and sin(x) by differentiating successively and plugging in x=0, as in the reasoning on page 605 of the book. Then did the same for a general function f(x), arriving at formula (7) on page 606 for the Taylor series of f(x) around x=0.
The Taylor series for e^x, sin(x), cos(x) all look similar, and this leads to Euler's formula e^(ix) = cos(x) + i*sin(x) relating all three (not needed for the course).
Some examples of using this formula to numerically evaluate e^x or sin(x). In general, one needs more and more terms as x gets larger.
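To see the "more terms for larger x" phenomenon concretely, one can count how many terms of the Taylor series of e^x are needed for six-digit relative accuracy (the x values below are illustrative choices):

```python
import math

def exp_taylor(x, n_terms):
    """Partial sum of e^x = sum of x^n / n! over n = 0, 1, 2, ..."""
    return sum(x**n / math.factorial(n) for n in range(n_terms))

# For each x, find the smallest number of terms giving six-digit
# relative accuracy.
needed = []
for x in [1.0, 5.0, 10.0]:
    n = 1
    while abs(exp_taylor(x, n) - math.exp(x)) > 1e-6 * math.exp(x):
        n += 1
    needed.append(n)
print(needed)  # the required count grows as x gets larger
```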
In the 10am class only: Taylor series around x=a, i.e. formula 6 of the text.
Brief recap of series we already covered, in particular e^x, sin(x), and the general form of the Taylor series.
A bit of motivation for the idea of Taylor series in general: the first two terms of the Taylor series give the linear approximation, i.e. the equation of the tangent line. To improve on a linear approximation, one might try to approximate by quadratic functions, as we actually already did for Simpson's rule. Continuing in this way leads to the idea of a Taylor series.
Remainder estimate, Taylor's inequality. This was worked out in full for the case of f(x)=sqrt(x), Taylor series around a=2, and estimating the error at 2.1.
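The sqrt(x) example checks out numerically: the actual error of the degree-1 Taylor polynomial at x=2.1 really does sit below the bound from Taylor's inequality (this sketch uses the first-degree polynomial; the class computation may have gone further).

```python
import math

a, x = 2.0, 2.1
# Degree-1 Taylor polynomial of sqrt around a = 2:
# T1(x) = sqrt(2) + (x - 2) / (2*sqrt(2))
t1 = math.sqrt(a) + (x - a) / (2 * math.sqrt(a))
actual_err = abs(math.sqrt(x) - t1)

# Taylor's inequality: |error| <= M/2 * |x - a|^2, where M bounds
# |f''| on [2, 2.1]. Here f''(t) = -1/(4*t^(3/2)), largest in absolute
# value at t = 2.
M = 1 / (4 * a**1.5)
bound = M / 2 * (x - a) ** 2
print(actual_err, bound)  # actual error is below the bound
```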
Using Taylor series to do numerical integration - just a bit, we started on the example of e^(x^2), integrated from 0 to 1/2.
Went over the recap in class. In the 11am class there were very many questions on the "book form" of Taylor's inequality, in particular what is going on with "d". After pathetic attempts to improve the situation, I proposed using alternate form without the d (see the notes above).
Numerical integration by means of Taylor series.
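For the e^(x^2) example started last time: substitute x^2 into the series for e^x and integrate term by term, so each term x^(2n)/n! contributes x^(2n+1)/((2n+1)*n!). A sketch, with a fine Simpson's-rule evaluation as an independent sanity check:

```python
import math

def series_integral(b, n_terms):
    """Integral of e^(x^2) from 0 to b via term-by-term integration
    of the Taylor series: sum of b^(2n+1) / ((2n+1) * n!)."""
    return sum(b**(2 * n + 1) / ((2 * n + 1) * math.factorial(n))
               for n in range(n_terms))

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + k * h) for k in range(1, n, 2))
                      + 2 * sum(f(a + k * h) for k in range(2, n, 2)))

series_val = series_integral(0.5, 8)
simpson_val = simpson(lambda x: math.exp(x * x), 0, 0.5, 200)
print(series_val, simpson_val)  # the two methods agree closely
```

Only a handful of series terms are needed here because x stays in [0, 1/2], where the powers x^(2n) shrink fast.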
I discussed integration by partial fractions. I did ONLY the case of linear factors (including the case where roots are repeated) -- no discussion of quadratic factors, and these will not be on any exam.
10am class only: started discussing trig substitution.
Trig substitution: example of the integral of sqrt(1-x^2). Integral of cos^2(x) and other powers of sine and cosine.
Areas: areas between two curves in the plane. Example of area between y=x and y=x^2: can be done by slicing either way (vertically or horizontally). Leads to different integrals but the same answer.
Volumes: volume of a sphere, by slicing! Volume of a cone (start).
Differential equations: what is a DE? Basic examples, Newton's law of cooling.
Differential equation examples -- Newton's law of cooling and population growth. Why the naive model of population growth is no good (exponential growth), logistic model.
How to solve differential equations: Euler's method (worked out in an example for Newton's law of cooling), separation of variables (cooling example).
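A sketch of Euler's method for the cooling example dT/dt = -k*(T - T_env); the constants below are made-up illustrative values, not the ones used in class.

```python
import math

# Newton's law of cooling: dT/dt = -k*(T - T_env), with exact solution
# T(t) = T_env + (T0 - T_env)*exp(-k*t) from separation of variables.
# (k, T_env, T0 are made-up illustrative values.)
k, T_env, T0 = 0.5, 20.0, 90.0

def euler(t_end, n_steps):
    """Euler's method: repeatedly step T by h times the slope."""
    h = t_end / n_steps
    T = T0
    for _ in range(n_steps):
        T += h * (-k * (T - T_env))
    return T

exact = T_env + (T0 - T_env) * math.exp(-k * 4.0)
for n in [10, 100, 1000]:
    print(n, euler(4.0, n), exact)  # approximations approach the exact value
```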
Equation of motion of a pendulum, as example of second order DE.
More on Euler's method and separation of variables: the equations y'=x+y and y'=y^2. The second has the interesting property that y goes to infinity at a finite x; this showed up in Euler's method not converging as we took more and more intervals.
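The blow-up of y'=y^2 is easy to see in code: with y(0)=1 the exact solution is y = 1/(1-x), which goes to infinity at x=1, so Euler approximations up to x=1 never settle down no matter how many steps we take.

```python
# y' = y^2 with y(0) = 1 has exact solution y = 1/(1 - x), which
# blows up at x = 1. Euler's method run up to x = 1 therefore fails
# to converge as the number of steps grows.
def euler_y2(x_end, n_steps):
    h = x_end / n_steps
    y = 1.0
    for _ in range(n_steps):
        y += h * y * y
    return y

for n in [10, 100, 1000]:
    print(n, euler_y2(1.0, n))  # values keep growing as n increases
```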
I also discussed how to solve y'=x+y by power series. This isn't discussed in the book.
Population modelling: just started.
Logistic model: dP/dt = cP(1-P/M). What do c and M represent? Then we did the interactive question and discussed it.
Exact solution to the logistic model, using separation of variables.
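Separation of variables gives P(t) = M / (1 + A*exp(-c*t)) with A = (M - P0)/P0; a quick numerical check that this really satisfies the logistic equation (the constants below are made-up illustrative values):

```python
import math

# Verify that P(t) = M / (1 + A*exp(-c*t)), with A = (M - P0)/P0,
# satisfies dP/dt = c*P*(1 - P/M). (c, M, P0 are illustrative values.)
c, M, P0 = 0.3, 1000.0, 50.0
A = (M - P0) / P0

def P(t):
    return M / (1 + A * math.exp(-c * t))

t, h = 5.0, 1e-5
lhs = (P(t + h) - P(t - h)) / (2 * h)  # numerical derivative dP/dt
rhs = c * P(t) * (1 - P(t) / M)        # right side of the logistic DE
print(lhs, rhs)  # the two sides agree
```

Note also that P(t) tends to M as t grows, matching M's role as the carrying capacity.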
Infectious disease modelling (SIR model) and how it leads to a logistic equation when R=0.
I discussed the system dx/dt=y, dy/dt=-x and how it led to oscillating solutions. The pendulum equation, although second order, can be approximated by f'' = - f, which reduces to this system.
Lotka-Volterra: I just started drawing the direction field.