Well, I tried my trick on your more general function and it didn't work directly. Then today I asked my brother (who has more training in probability theory than I do) and he suggested a functional analysis approach that once again manages to answer the question without calculus of variations. I'll give it first, and then try to see how the calculus of variations approach might work.
Going back to the original formulation of the problem (before simplification), you have:
max(E((g(A+C) + g(A+B))^2)) s.t. E((g(A+B))^2) = K^2 = constant.
Think about what the E computation looks like. It is basically an integral over all values a,b,c of the variables A,B,C, weighted by the joint probability p(A=a & B=b & C=c), which (since A, B and C are independent) is p(A=a)·p(B=b)·p(C=c). Now given that this is a triple integral, we can separate off the integration in variable "a" and think of it as an expectation within an expectation:
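This "expectation within an expectation" separation is easy to check numerically. Here is a minimal Monte Carlo sketch, assuming (purely for illustration — none of this is fixed by the problem) that A, B, C are iid uniform on [0,1] and g(x) = x^2:

```python
import random

random.seed(0)

def g(x):
    # arbitrary placeholder function, just for the numerical check
    return x * x

# Direct Monte Carlo estimate of E[(g(A+C) + g(A+B))^2].
N = 200_000
direct = 0.0
for _ in range(N):
    a, b, c = random.random(), random.random(), random.random()
    direct += (g(a + c) + g(a + b)) ** 2
direct /= N

# Same quantity computed as an expectation within an expectation:
# for each fixed value A = a (a grid over [0,1] stands in for the
# outer integral), estimate the inner E[... | A=a] over fresh draws
# of (B, C), then average the inner estimates over a.
M, L = 400, 500
nested = 0.0
for i in range(M):
    a = (i + 0.5) / M
    inner = sum((g(a + random.random()) + g(a + random.random())) ** 2
                for _ in range(L)) / L
    nested += inner
nested /= M

print(direct, nested)  # the two estimates agree up to Monte Carlo noise
```

The two numbers coincide up to sampling error, which is exactly the Fubini-style separation used above.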
max(E(E((g(A+C) + g(A+B))^2 | A=a))) s.t. E(E((g(A+B))^2 | A=a)) = K^2,
where the inner E is over variables b,c and the outer E is over variable a.
Now comes the tricky part: we want the overall expectation to be maximized, but let's look at the formula for each particular value of A. If we define the function g_a(X) = g(a+X) and suppose E((g(A+B))^2 | A=a) = (K(a))^2, then we have for each value A=a the problem
max(E((g_a(C) + g_a(B))^2)) s.t. E((g_a(B))^2) = (K(a))^2,
which is identical in form to the simple problem that I solved in my first posting! Therefore, g_a(X) = K(a) is constant for each value A=a, and the maximized value is E((g_a(C) + g_a(B))^2) = (2K(a))^2. Note that this can be thought of as maximizing the ratio of E((g_a(C) + g_a(B))^2) to (K(a))^2, and finding that the maximum ratio is 4.
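The "maximum ratio is 4" claim can be sanity-checked by simulation. A minimal sketch, assuming for illustration that A, B, C are iid uniform on [0,1] (the actual distributions don't matter for the claim):

```python
import random

random.seed(1)
N = 100_000
# iid samples of (A, B, C), uniform on [0, 1] as a stand-in distribution
trials = [(random.random(), random.random(), random.random()) for _ in range(N)]

def ratio(g):
    """Monte Carlo estimate of E[(g(A+C)+g(A+B))^2] / E[(g(A+B))^2]."""
    num = sum((g(a + c) + g(a + b)) ** 2 for a, b, c in trials)
    den = sum(g(a + b) ** 2 for a, b, c in trials)
    return num / den

print(ratio(lambda x: 1.0))  # constant g: the ratio is exactly 4
print(ratio(lambda x: x))    # non-constant g: the ratio falls short of 4
```

For constant g the ratio is exactly 4 (every term in the numerator is (2K)^2), while any non-constant g comes out strictly below 4, consistent with the argument above.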
Next, note that since g_a(B) = g(a+B) is constant for each value A=a, it must actually be a single global constant independent of a. This is because A and B are iid, so B ranges over the same values as A. Take any a1 and a2 in the range of A. Since g_a1 is identically equal to K(a1), we have g_a1(B)|(B=a2) = K(a1), and likewise g_a2(B)|(B=a1) = K(a2). But by definition, g_a1(B)|(B=a2) = g_a2(B)|(B=a1) = g(a1+a2). Hence K(a1) = K(a2) for any a1 and a2, so K(a) is really just a single constant K, and at any value A=a we have g(A+B)|(A=a) = g_a(B) = K, so g(A+B) = K globally.
Is this really the unique maximum? Yes. If some function h had a larger value of E((h(A+C) + h(A+B))^2) for the same value of E((h(A+B))^2), the ratio of the overall expectations would exceed 4. Since expectations are just weighted averages, the ratio would then have to exceed 4 at some particular value A=a, which is impossible: we just proved that 4 is the maximum at each A=a. Therefore no other function beats g, and in fact no non-constant function can even reach a ratio of 4, because my earlier proof in the simple case showed that only the constant function attains the maximum.
Therefore, once again, g is a constant, g=K. As I said, this rather tricky method is due to my brother Steve. I can't claim credit for thinking of it, and in fact he had to wait around while I convinced myself that it was true. Although we didn't discuss it, it seems that this method would work for many variables. For example, E((g(A+B+C+D) + g(A+B+F+G))^2) should work the same way: treat the shared variables A and B like we treated A above, and treat the unshared variables like we treated B and C above. The reasoning should still go through.
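As a quick check that the many-variable version behaves the same way, here is the analogous simulation, again assuming for illustration that all six variables are iid uniform on [0,1]:

```python
import random

random.seed(2)
N = 100_000
# iid samples of (A, B, C, D, F, G), uniform on [0, 1] as a stand-in
draws = [tuple(random.random() for _ in range(6)) for _ in range(N)]

def ratio6(g):
    """Estimate E[(g(A+B+C+D)+g(A+B+F+G))^2] / E[(g(A+B+C+D))^2]."""
    num = sum((g(a + b + c + d) + g(a + b + f + gg)) ** 2
              for a, b, c, d, f, gg in draws)
    den = sum(g(a + b + c + d) ** 2 for a, b, c, d, f, gg in draws)
    return num / den

print(ratio6(lambda x: 1.0))  # constant g: exactly 4
print(ratio6(lambda x: x))    # non-constant g: below 4
```

The shared block A+B plays the role of A, the unshared blocks C+D and F+G play the roles of B and C, and the ratio again tops out at 4 only for constant g.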
However, I am out of time to write about the calculus of variations approach (with which I am still not having much success myself), so I'll defer that to a later post, tomorrow I hope. I realize that these are just examples, and that you may have more complicated functions in mind that truly do require calculus of variations, so I will continue to think about it.
Hope this helps!