t        y
0.000000 0.500000
0.200000 0.830627
0.400000 1.219705
0.600000 1.662463
0.800000 2.153075
1.000000 2.684424
1.200000 3.247815
1.400000 3.832632
1.600000 4.425906
1.800000 5.011798
2.000000 5.570960

cumulative error for h= 0.005000: 0.638096  255.876437
cumulative error for h= 0.010000: 0.719357  144.590754
cumulative error for h= 0.020000: 0.762486   77.011071
cumulative error for h= 0.040000: 0.789045   40.241273
cumulative error for h= 0.200000: 0.857770    9.435465
1b. The comments in the program explain how it works. The values stored in the array 'N' are chosen to produce the requested time steps for the interval from t=0 to t=2. The cumulative error is the sum of the absolute values of the differences between the current solution and the reference solution (h=0.001) at the corresponding time points.
To measure the convergence order, the output for this part contains an extra column: the cumulative error for that value of h multiplied by the corresponding value of N. Since N is proportional to 1/h, this is proportional to the number one would obtain by dividing by h, so for our purposes the effect is equivalent. The values of the cumulative error (which is a global error) are nearly constant, so the global error is almost zeroth order in h! (That is, the global error decays like h^0 = 1 as h decreases; the error essentially does not decrease with h.) Since the second column is proportional to the global error divided by h, and it grows rapidly with decreasing h, the convergence order must be well below first order. I in fact performed a more detailed calculation, based on how the differences between successive cumulative errors above grow with N, and estimated the convergence order to be around 0.1.
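The order estimate can be reproduced directly from the tabulated cumulative errors. A minimal sketch (Python here, not the assignment's own program): for a method of order p, E(h) ~ C*h^p, so each pair of step sizes gives an estimate p = log(E(h1)/E(h2)) / log(h1/h2).

```python
import math

# Cumulative (global) errors copied from the output above, largest h first.
hs = [0.200000, 0.040000, 0.020000, 0.010000, 0.005000]
Es = [0.857770, 0.789045, 0.762486, 0.719357, 0.638096]

# If E(h) ~ C*h^p, then comparing two step sizes gives
# p = log(E(h1)/E(h2)) / log(h1/h2).
orders = [math.log(E1 / E2) / math.log(h1 / h2)
          for (h1, E1), (h2, E2) in zip(zip(hs, Es), zip(hs[1:], Es[1:]))]
print(orders)                      # every estimate is far below 1
print(sum(orders) / len(orders))   # roughly 0.1, consistent with the text
```

Each pairwise estimate comes out between about 0.05 and 0.17, averaging near 0.1, which matches the estimate quoted above.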
Since the Runge-Kutta method is claimed to have fourth-order global error, it would appear that something is wrong. To determine exactly what, I tried a number of things.
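For reference, the classical fourth-order Runge-Kutta step being tested has the standard form below. This is a sketch in Python, not the assignment's program, and since the ODE is not shown in this excerpt, y' = y is used as a stand-in test problem:

```python
import math

def rk4_step(f, t, y, h):
    # Classical fourth-order Runge-Kutta: four stage slopes, weighted 1-2-2-1.
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# Sanity check on y' = y, y(0) = 1: a single step of size h should match
# exp(h) with local error O(h^5) if the step is implemented correctly.
f = lambda t, y: y
err = abs(rk4_step(f, 0.0, 1.0, 0.2) - math.exp(0.2))
print(err)  # about 3e-6
```

When a supposedly fourth-order step shows order near 0.1 globally, the usual suspects are a bug in one of the stage evaluations (e.g. using t instead of t + h/2) or an error in the stepping loop, rather than in the method itself.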
t        y        y ref.   difference
0.000000 0.500000 0.500000 0.000000
0.200000 0.830627 0.829299 0.001328
0.400000 1.219705 1.214088 0.005617
0.600000 1.662463 1.648941 0.013523
0.800000 2.143972 2.127230 0.016742
1.000000 2.661189 2.640859 0.020330
1.200000 3.204792 3.179942 0.024850
1.400000 3.762755 3.732400 0.030355
1.600000 4.320556 4.283484 0.037072
1.800000 4.860453 4.815176 0.045277
2.000000 5.360770 5.305472 0.055298

cumulative error for h= 0.005000: 0.000235 37.622188
cumulative error for h= 0.010000: 0.000937 37.489285
cumulative error for h= 0.020000: 0.003673 36.733170
cumulative error for h= 0.040000: 0.014083 35.207219
cumulative error for h= 0.200000: 0.250392 25.039242

2b. Note that here the extra error column is multiplied by N^2, so this method is a little worse than order h^2. (It may converge at order h^2 in the limit as h --> 0.) Since this method uses the Runge-Kutta method for just a portion of the steps, it is reasonable to expect it to perform better. One would also expect the process of predicting and correcting to lead to faster convergence to the true solution. My results confirm this: the Adams method appears to be two orders better than the Runge-Kutta method here. Unfortunately, the same problem as in question 1 seems to be present, as I am not getting the claimed fourth-order convergence.
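The predictor-corrector scheme described above can be sketched as follows. This is a hypothetical Python sketch of a standard fourth-order Adams-Bashforth-Moulton method with RK4 startup, not the assignment's code, and y' = y stands in for the unstated ODE; a correct implementation of this pairing should exhibit fourth-order global error:

```python
import math

def rk4_step(f, t, y, h):
    # Classical RK4 step, used only to generate the three startup values.
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def abm4(f, t0, y0, h, n):
    """Fourth-order Adams-Bashforth-Moulton predictor-corrector over n steps,
    bootstrapped with three RK4 steps."""
    t = [t0 + i*h for i in range(n + 1)]
    y = [y0]
    for i in range(3):                       # RK4 startup values y1, y2, y3
        y.append(rk4_step(f, t[i], y[i], h))
    fs = [f(t[i], y[i]) for i in range(4)]   # stored derivative evaluations
    for i in range(3, n):
        # Adams-Bashforth 4-step predictor
        yp = y[i] + h/24 * (55*fs[i] - 59*fs[i-1] + 37*fs[i-2] - 9*fs[i-3])
        # Adams-Moulton corrector, evaluated once at the predicted value
        yc = y[i] + h/24 * (9*f(t[i+1], yp) + 19*fs[i] - 5*fs[i-1] + fs[i-2])
        y.append(yc)
        fs.append(f(t[i+1], yc))
    return t, y

# Test problem y' = y, y(0) = 1 on [0, 2]; exact solution is exp(t).
t, y = abm4(lambda t, y: y, 0.0, 1.0, 0.1, 20)
print(abs(y[-1] - math.exp(2.0)))  # small: consistent with fourth order
```

If an implementation of this structure shows roughly second-order behavior, as in the output above, a common cause is a startup or bookkeeping error (e.g. a lower-order startup method, or an off-by-one in the stored derivative history) rather than a flaw in the Adams formulas themselves.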