(a) The following expression relates s_{i,j} to its neighbors s_{i,j-1}, s_{i-1,j}, and s_{i-1,j-1}:

    s_{i,j} = 0                                             if a_{i,j} = 0
    s_{i,j} = 1 + min(s_{i,j-1}, s_{i-1,j}, s_{i-1,j-1})    otherwise.

(b) Let t(i,j) be the time required to compute s_{i,j} using a direct recursive implementation of the above definition, and consider the computation of s_{n,n}. Then t(n,n) = t(n,n-1) + t(n-1,n) + t(n-1,n-1) + Theta(1). Since t(n,n-1) >= t(n-1,n-1) and t(n-1,n) >= t(n-1,n-1), we have t(n,n) >= 3*t(n-1,n-1) + Theta(1), and with t(1,1) = 1 this yields t(n,n) >= 3^{n-1}, i.e., t(n,n) = Omega(3^n). Therefore the time taken by the direct recursive algorithm is exponential in n. For the dynamic-programming solution we use an array L to store the s_{i,j} values. Note that s_{i,j} = a_{i,j} if i = 1 or j = 1. So, after initializing these boundary values, we compute the s_{i,j} values row by row starting from row 2, using the expression in (a).
-------
For each j, finding the least-weight edge from j to some k takes O(n) time. This is repeated n times to obtain a cycle C(j), so obtaining one cycle C(j) takes O(n^2) time, and finding a cycle for every j therefore takes O(n^3) time. Thus, in O(n^3) time we obtain the set C. Finally, finding the minimum-weight cycle in the set C takes an additional O(n) time, so the overall complexity of the algorithm is O(n^3). The suggested algorithm is a polynomial-time algorithm, but it does not always give the optimal solution, as the following counterexample shows. Consider the graph whose weight matrix is

    ( oo   5   8   8 )
    ( 10  oo   5   4 )
    (  8   7  oo   5 )
    (  5  10   4  oo )

The cycles obtained using the described algorithm are (1,2,4,3,1) with total weight 21, (1,3,4,2,1) with total weight 33, and (1,4,3,2,1) with total weight 29, whereas the optimal cycle is (1,2,3,4,1) with total weight 20.
-------
The following procedure ought to work:

    for i := 2 to n
        for j := i+1 to n-i+1
            for l := j-i+1
                for k := i+1 to i+l
                    if (w[i,j] [...]

[...] = distance(P_j to Reno). This implies that the optimum algorithm cannot get by with fewer stops than the greedy algorithm. Since this is a contradiction, we conclude that the above strategy yields an optimal solution.
-------
Here is the code for parts (a) and (b).

    Change(n)
    {
        Quarters = floor(n/25);  n = n - 25*Quarters;
        Dimes    = floor(n/10);  n = n - 10*Dimes;
        Nickels  = floor(n/5);   Pennies = n - 5*Nickels;
        return Quarters, Dimes, Nickels, Pennies
    }

    Change'(c, k, n)
    {
        for i <- k downto 0 do
            m[i] <- floor(n / c^i)
            n <- n - m[i]*c^i
        return m
    }

Proof of optimality for parts (a) and (b): We show the optimality of the greedy algorithm for (b) only; the result for (a) follows in a similar manner. Let n = a_k*c^k + a_{k-1}*c^{k-1} + ... + a_0*c^0 be the greedy representation, and assume without loss of generality that a_k > 0. This representation means that the greedy algorithm uses (a_k + a_{k-1} + ... + a_0) coins. Suppose n = b_k*c^k + ... + b_0*c^0 is the optimum solution. The claim is that (b_k + b_{k-1} + ... + b_0) <= (a_k + a_{k-1} + ... + a_0). If the two solutions are equal, there is nothing to prove. Hence assume that (b_k, b_{k-1}, ..., b_0) differs from (a_k, a_{k-1}, ..., a_0); then they must differ in some position, and suppose, without loss of generality, that b_k != a_k. Then either b_k > a_k or b_k < a_k. Because the greedy algorithm uses the maximum allowed number of coins of this denomination, b_k > a_k is not possible. So the alternative must hold, i.e., b_k < a_k. Then the coins of denominations c^0, ..., c^{k-1} in the optimum solution account for a value of n - b_k*c^k >= (a_k - b_k)*c^k >= c^k (since the greedy choice gives n >= a_k*c^k), which forces b_i >= c for some i < k: if every such b_i were at most c-1, those coins would be worth at most c^k - 1. Since the greedy solution has a_i <= c-1 for every i < k (the remainder at each greedy step is less than c^{i+1}), in particular b_i > a_i. But a solution with b_i >= c can be improved by replacing c coins of denomination c^i with a single coin of denomination c^{i+1}, contradicting the optimality of (b_k, ..., b_0).

(c) Consider 30-cent, 25-cent, and 10-cent coins and n = 50. The greedy algorithm gives a solution of one 30-cent coin and two 10-cent coins, but clearly two 25-cent coins will do.
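As a quick check of the change-making parts (a)-(c) above, here is a small Python sketch of the greedy change-maker, generalized to an arbitrary list of denominations; the function and variable names are ours and not part of the original pseudocode.

    def greedy_change(n, denominations):
        # Repeatedly take as many coins as possible of the largest remaining
        # denomination; returns ({denomination: count}, leftover amount).
        counts = {}
        for c in sorted(denominations, reverse=True):
            counts[c], n = divmod(n, c)
        return counts, n

    # Part (a): US coins -- greedy is optimal, e.g. 67 = 2*25 + 1*10 + 1*5 + 2*1.
    print(greedy_change(67, [25, 10, 5, 1]))
    # Part (b): powers of c, e.g. c = 5, k = 3, so denominations 1, 5, 25, 125.
    print(greedy_change(137, [5**i for i in range(4)]))
    # Part (c): greedy fails for {30, 25, 10} and n = 50: it returns one 30-cent
    # coin and two 10-cent coins (3 coins), while two 25-cent coins suffice.
    print(greedy_change(50, [30, 25, 10]))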
-------
Assume the warehouse V has an inventory of x widgets to be used to satisfy the demand at destinations D_1, ..., D_j. Suppose x_j widgets are sent to destination D_j from warehouse V; then the rest of the demand at D_j, namely d_j - x_j, must be supplied from warehouse W. So the total cost of satisfying the demand at destination D_j is v_j*x_j + w_j*(d_j - x_j). After sending x_j widgets from warehouse V, the inventory at V drops to x - x_j, and this amount is used to satisfy the demand at destinations D_1, ..., D_{j-1}. Thus the following recurrence is evident:

    g_j(x) = min over 0 <= x_j <= d_j of { g_{j-1}(x - x_j) + v_j*x_j + w_j*(d_j - x_j) },

where g_1(x) = v_1*x + w_1*(d_1 - x) if x <= d_1, and g_1(x) = oo otherwise. Note that g_j(x) is the optimal cost of supplying D_1, ..., D_j when V has x widgets. The inventory of W is implied by the equation r_V + r_W = sum_{i=1}^{n} d_i; in view of this condition, no widgets remain at V after all destinations are supplied, so the answer is g_n(r_V). The algorithm fills out an n x (r_V + 1) table using the recurrence: we start with the first row of the table using the boundary condition, then complete the rows one by one. There are at most d_j + 1 values of x_j to check for each entry in the jth row, so the running time of the algorithm is O(n * r_V * max_{1<=j<=n} d_j). (A short sketch of this computation appears at the end of these notes.)
-------
P is the set of all decision problems solvable by deterministic algorithms in polynomial time. NP is the set of all decision problems solvable by nondeterministic algorithms in polynomial time. Let L1 and L2 be problems. Problem L1 reduces to L2 iff there is a way to solve L1 by a deterministic polynomial-time algorithm using a deterministic algorithm that solves L2 in polynomial time. A problem L is NP-hard iff satisfiability reduces to L. A problem L is NP-complete iff L is NP-hard and L belongs to NP. The satisfiability problem is to determine whether a propositional formula is true for some assignment of truth values to its variables.
-------
The essential difference between the greedy method and dynamic programming is that in the greedy method only one decision sequence is ever generated, whereas in dynamic programming many decision sequences may be generated.
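For concreteness, here is a minimal Python sketch of the warehouse recurrence g_j(x) given earlier. The function name and the example data are ours, and we assume the per-unit costs v_j, w_j and integer demands d_j are supplied as lists; this is a sketch in the spirit of the table-filling algorithm, not the original code.

    def min_supply_cost(d, v, w, rV):
        # g[x] holds g_j(x): the optimal cost of supplying D_1..D_j when
        # V ships x widgets to them in total.  Start with the boundary row g_1.
        INF = float("inf")
        g = [v[0]*x + w[0]*(d[0] - x) if x <= d[0] else INF for x in range(rV + 1)]
        for j in range(1, len(d)):
            new_g = [INF] * (rV + 1)
            for x in range(rV + 1):
                # Try every split x_j: x_j widgets from V, d[j] - x_j from W.
                for xj in range(min(x, d[j]) + 1):
                    cost = g[x - xj] + v[j]*xj + w[j]*(d[j] - xj)
                    if cost < new_g[x]:
                        new_g[x] = cost
            g = new_g
        return g[rV]   # rV + rW = sum(d), so V's inventory is used up exactly

    # Example (made-up data): demands 2 and 3, V holds 3 of the 5 widgets demanded.
    # Prints 10: D_1 is supplied entirely from V, D_2 gets 1 from V and 2 from W.
    print(min_supply_cost(d=[2, 3], v=[1, 4], w=[3, 2], rV=3))

Only the previous row of the table is kept, so the space used is O(r_V), while the running time matches the O(n * r_V * max_j d_j) bound derived above.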