
CTK Exchange

Subject: "calculus of variations"     Previous Topic | Next Topic
The CTK Exchange > College math > Topic #564
jim
guest
Mar-21-06, 07:25 PM (EST)
 
"calculus of variations"
 
Can anyone suggest a book for problems in the calculus of variations? Specifically, I would like to solve a problem of the following structure.

Maximize E{(g(A)+g(B))^2} over the domain of all functions 'g' that satisfy
E{(g(A))^2}=Constant,


where E stands for the EXPECTATION over all the random variables.
A and B are independent and identically distributed.
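A minimal Monte Carlo sketch of the two quantities involved, for readers who want to experiment; the distribution of A and B and the trial function g below are arbitrary illustrative choices, not part of the problem.

```python
# Monte Carlo estimates of the objective and the constraint in Jim's problem.
# The normal distribution and g = tanh are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
A = rng.normal(size=n)   # samples of A
B = rng.normal(size=n)   # independent samples with the same distribution

g = np.tanh              # an arbitrary trial function

objective = np.mean((g(A) + g(B)) ** 2)   # E{(g(A)+g(B))^2}
constraint = np.mean(g(A) ** 2)           # E{(g(A))^2}, to be held constant
print(objective, constraint)
```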


jim
guest
Mar-22-06, 00:58 AM (EST)
 
1. "RE: calculus of variations"
In response to message #0
 
I started on the above problem with the help of Lagrange multipliers:

L(g, v) = E{(g(A)+g(B))^2} + v(E{(g(A))^2} - Constant).

I have trouble differentiating L with respect to g.

Can anybody help me solve this? Also, please suggest a good book for it. I think Alex could be of help.
Thanks.

Jim
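One way to carry out the differentiation Jim asks about, sketched under the assumption that A has a density p; this is a standard first-variation computation, not the only route:

```latex
% First variation of L(g,v) = E{(g(A)+g(B))^2} + v(E{(g(A))^2} - C),
% written as integrals against the density p and perturbed by g -> g + eps*h:
\frac{d}{d\varepsilon}\,L(g+\varepsilon h,\,v)\Big|_{\varepsilon=0}
  = 4\int h(a)\,p(a)\,\bigl(g(a) + E[g(B)]\bigr)\,da
  \;+\; 2v\int h(a)\,p(a)\,g(a)\,da ,
% using the symmetry of the double integral in a and b. Requiring this to
% vanish for every h forces 4(g(a) + E[g(B)]) + 2v g(a) = 0 for almost every a,
% so g is constant on the support of p -- consistent with the answer below.
```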


mr_homm
Member since May-22-05
Mar-23-06, 12:30 PM (EST)
2. "RE: calculus of variations"
In response to message #1
 
A reasonably good book is Weinstock's "Calculus of Variations," available from Dover as an inexpensive paperback. I have not found a really satisfactory book on the calculus of variations.

I am not sure that you NEED calculus of variations to solve this problem, however. Here is another idea:

For independent random variables, we know that the means and variances add. Also, since A and B are independent and identically distributed ("iid"), so are g(A) and g(B). (Identical distribution is obvious, I think, and independence is clear because for independent variables, knowledge of A provides no information about B. Since g(A) depends only on A, knowledge of g(A) provides no more (and less, if g is not invertible) information than knowledge of A itself. Hence g(A) provides no information about the value of g(B).)

Since Var(X) = E(X^2) - (E(X))^2, the value you are trying to maximize is Var(g(A)+g(B)) + (E(g(A)+g(B)))^2. But this is

Var(g(A)) + Var(g(B)) + (E(g(A)) + E(g(B)))^2 =
Var(g(A)) + (E(g(A)))^2 + Var(g(B)) + (E(g(B)))^2 + 2E(g(A))E(g(B)) =
E(g(A)^2) + E(g(B)^2) + 2E(g(A))E(g(B)) =
C + C + 2E(g(A))E(g(B)).

Since the first two terms are constant by assumption, you must maximize only the last term. Since A and B are iid, this last term is really 2(E(g(A)))^2. Therefore, what you need to maximize is |E(g(A))| while holding E(g(A)^2) = C = constant. This is a much simpler problem.

Now since E(g(A)^2) = Var(g(A)) + (E(g(A)))^2, the solution is to minimize the variance Var(g(A)). This is easily done, since variance is always nonnegative: just choose g(A) = sqrt(C) = constant, which obviously makes the variance zero, hence an absolute minimum.

Therefore the solution to your problem is to make g = sqrt(C). This may be a trivial-seeming solution, but it should be clear that if g(A) has any variance at all, then the value you seek to maximize can still be improved. Hence there are no other solutions.

Hope this helps! If you see any mistakes, please let me know.

--Stuart Anderson
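A quick numeric sanity check of this argument (not from the thread): rescale each trial g so the constraint E(g(A)^2) = C holds, then compare the objective for a constant g against a non-constant one. The exponential distribution and the trial functions below are arbitrary choices.

```python
# With E{g(A)^2} pinned at C, the constant g = sqrt(C) should give the largest
# value of E{(g(A)+g(B))^2}, namely 4C; any non-constant g should fall short.
import numpy as np

rng = np.random.default_rng(1)
C = 1.0
A = rng.exponential(size=1_000_000)   # arbitrary distribution
B = rng.exponential(size=1_000_000)   # iid copy

def objective(g):
    # Rescale g so that E{g(A)^2} = C, then evaluate E{(g(A)+g(B))^2}.
    s = np.sqrt(C / np.mean(g(A) ** 2))
    return np.mean((s * g(A) + s * g(B)) ** 2)

print(objective(lambda x: np.ones_like(x)))  # constant g: about 4.0 = 4C
print(objective(np.sin))                     # non-constant g: strictly smaller
```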


Jim
guest
Mar-25-06, 08:33 AM (EST)
 
3. "RE: calculus of variations"
In response to message #2
 
Stuart, thanks for your quick and careful reply. Your solution is absolutely perfect. The problems I wish to solve are, however, very complex, and I guess they will require the calculus of variations. For example, I have slightly changed the previous problem.

Minimize E{(g(A+B)+g(A+C))^2} over the domain of all functions 'g' that satisfy E{(g(A+B))^2}=Constant,
A,B,C are i.i.d.


Expanding the square, and using the fact that A+B and A+C are identically distributed, this problem will turn out to be
Minimize E{g(A+B)g(A+C)} s.t. E{(g(A+B))^2} = Constant.

For this problem, I think you need to use differentiation. It would be interesting if you could come up with a solution again without using calculus. Thanks for your help.

--Jim
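A numeric check of this reduction, under arbitrary illustrative choices of distribution and g (the identity rests on A+B and A+C having the same distribution, so each squared term contributes the same constant):

```python
# Check: E{(g(A+B)+g(A+C))^2} = 2*E{g(A+B)^2} + 2*E{g(A+B)g(A+C)}.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
A, B, C_ = (rng.normal(size=n) for _ in range(3))  # A, B, C iid (C_ avoids a name clash)
S, T = A + B, A + C_   # identically distributed but not independent

g = np.tanh
lhs = np.mean((g(S) + g(T)) ** 2)
rhs = 2 * np.mean(g(S) ** 2) + 2 * np.mean(g(S) * g(T))
print(lhs, rhs)        # agree up to Monte Carlo error
```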


Jim
guest
Mar-27-06, 09:34 PM (EST)
 
4. "RE: calculus of variations"
In response to message #3
 
If we construct the Lagrangian as a function of g and v, we have
L = E{g(A+B)g(A+C)} - v(E{(g(A+B))^2} - Constant).

I am stuck at this stage, when differentiating with respect to g.
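A sketch of where that differentiation leads, assuming the sums have densities and writing S = A+B, T = A+C (an exchangeable pair); this is only one way to organize the first variation:

```latex
% Perturbing g -> g + eps*h in L = E{g(S)g(T)} - v(E{g(S)^2} - Constant):
\frac{d}{d\varepsilon}\,L\Big|_{\varepsilon=0}
  = 2\,E\bigl[h(S)\,g(T)\bigr] - 2v\,E\bigl[h(S)\,g(S)\bigr]
  = 2\int h(s)\,p_S(s)\,\Bigl(E\bigl[g(T)\mid S=s\bigr] - v\,g(s)\Bigr)\,ds ,
% where exchangeability of (S,T) merges the two cross terms. Setting this to
% zero for all h gives the stationarity condition
%   E[ g(T) | S = s ] = v g(s),
% an eigenfunction equation for the conditional-expectation operator; any
% constant g solves it with v = 1.
```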


mr_homm
Member since May-22-05
Mar-29-06, 07:03 AM (EST)
5. "RE: calculus of variations"
In response to message #4
 
   Hi Jim,

Well, I tried my trick on your more general function and it didn't work directly. Then today I asked my brother (who has more training in probability theory than I do) and he suggested a functional analysis approach that once again manages to answer the question without calculus of variations. I'll give it first, and then try to see how the calculus of variations approach might work.

Going back to the original formulation of the problem (before simplification), you have:

max(E((g(A+C) + g(A+B))^2)) s.t. E((g(A+B))^2) = K^2 = constant.

Think about what the E computation looks like. It is basically an integral over all values a,b,c of the variables A,B,C, weighted by the joint probability p(A=a & B=b & C=c), which (since A, B and C are independent) is p(A=a)·p(B=b)·p(C=c). Now given that this is a triple integral, we can separate off the integration in variable "a" and think of it as an expectation within an expectation:

max(E(E((g(A+C) + g(A+B))^2 | A=a))) s.t. E(E((g(A+B))^2 | A=a)) = K^2,

where the inner E is over variables b,c and the outer E is over variable a.

Now comes the tricky part: we want the overall expectation to be maximized, but let's look at the formula for each particular value of A. If we rename the function g_a(X) = g(a+X) and suppose E((g(A+B))^2 | A=a) = (K(a))^2, then we have for each value A=a the problem

max(E((g_a(C) + g_a(B))^2)) s.t. E((g_a(B))^2) = (K(a))^2,

which is identical in form to the simple problem that I solved in my first posting! Therefore, g_a = K(a) is a constant for each value A=a, and so the value E((g_a(C) + g_a(B))^2) = (2K(a))^2. Note that this can be thought of as maximizing the ratio of E((g_a(C) + g_a(B))^2) to (K(a))^2, and finding that the maximum ratio is 4.

Next, note that since g_a(B) = g(a+B) is a constant for each value A=a, it must actually be a single global constant independent of a. This is because A and B are iid, so B takes on the same values as A. Hence, for any a1 and a2 in the range of A, we can write g_a1(B)|(B=a2) = K(a1) and g_a2(B)|(B=a1) = K(a2). But by definition, g_a1(B)|(B=a2) = g_a2(B)|(B=a1) = g(a1+a2). Hence K(a1) = K(a2) for any a1 and a2, so K(a) is really just a constant K, and hence at any value A=a, g(A+B)|(A=a) = g_a(B) = K, so g(A+B) = K globally.

Is this really the unique maximum? Yes it is, because if any function h had a larger value of E((h(A+C) + h(A+B))^2) for the same value of E((h(A+B))^2), the ratio of overall expectations would exceed 4, and since expectations are really just weighted averages, the ratio must then exceed 4 at some particular value A=a, which we know is impossible because we just proved that 4 is the maximum at each A=a. Therefore no other function has an expectation exceeding that of g, and in fact no non-constant function can actually reach a ratio of 4, because my earlier proof in the simple case showed that only the constant function reached the maximum value.

Therefore, once again, g is a constant, g = K. As I said, this rather tricky method is due to my brother Steve. I can't claim credit for thinking of it, and in fact he had to wait around while I convinced myself that it was true. Although we didn't discuss it, it seems that this method would work for many variables. For example, E((g(A+B+C+D) + g(A+B+F+G))^2) should work the same way: just treat the shared variables like we treated A above, and treat the unshared variables like we treated B and C above. It looks like the reasoning should still work.

However, I am out of time to write about the calculus of variations approach (which I am still not having much success with myself as yet), and so I'll defer that to a later post, tomorrow I hope. I realized that these are just examples, and that you may have more complicated functions in mind that truly do require variations, so I will continue to think about it.

Hope this helps!

--Stuart Anderson
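A quick Monte Carlo illustration of the ratio-of-4 claim above (not from the thread; the distribution and trial functions are arbitrary choices):

```python
# The ratio E{(g(A+B)+g(A+C))^2} / E{(g(A+B))^2} is 4 for constant g
# and should come out strictly below 4 for non-constant g.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
A, B, C_ = (rng.normal(size=n) for _ in range(3))  # A, B, C iid
S, T = A + B, A + C_

def ratio(g):
    return np.mean((g(S) + g(T)) ** 2) / np.mean(g(S) ** 2)

print(ratio(lambda x: np.ones_like(x)))  # constant g: exactly 4
print(ratio(np.sin))                     # non-constant g: below 4
print(ratio(np.tanh))                    # another non-constant example
```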




Copyright © 1996-2018 Alexander Bogomolny
