Changes between Version 1 and Version 2 of Ticket #234


Timestamp: 2011-06-24T15:24:18Z
Author: Ian Culverwell
Comment:

  • Ticket #234 – Description

    v1 v2  
    55We should find out why minROPP and !LevMarq differ.
    66
    7 Here are some preliminary thoughts:
     7== First thoughts ==
    88
    9 Plots of the convergence of both schemes (in terms of J and dx, the normalised change in the state from one iteration to the next) are listed below, for the 8 profiles in IT-1DVAR-03, for reference with Huw's earlier work in GSR-03 and GSR-06
     9Plots of the convergence of both schemes (in terms of J and dx, the normalised change in the state from one iteration to the next) are listed below, for the 8 profiles in IT-1DVAR-03, for reference with Huw's earlier work in GSR-03 and GSR-06.
    1010
    11 == Refrac, J ==
     11==== Refrac, J ====
    1212[[Image(Test4a_refrac_J.gif)]]
    1313
    14 == Refrac, |dx| ==
     14==== Refrac, |dx| ====
    1515[[Image(Test4a_refrac_dx.gif)]]
    1616
    17 == Bangle, J ==
     17==== Bangle, J ====
    1818[[Image(Test4a_bangle_J.gif)]]
    1919
    20 == Bangle, |dx| ==
     20==== Bangle, |dx| ====
    2121[[Image(Test4a_bangle_dx.gif)]]
    2222
     23These don't really make sense to me. The change in the state is about the same for the last two steps of !LevMarq, yet it has almost converged by the first of these. Is it going ''around'' the minimum or ''over'' it? (This is with the Linf (= max) norm; what does L2 say?) minROPP, on the other hand, seems to spiral in to the min.
     24
     25All the CPU times (sec) for the 8 profiles in IT-1DVAR-03 are 3-4 times quicker than Huw found (Linux upgrade?), but the story is the same: !LevMarq converges monotonically in a few costly iterations, while minROPP goes all over the place before converging - quickly.
     26
     27So, although minROPP converges much more slowly than !LevMarq, typically needing ~5 times as many iterations, each iteration is so much cheaper that minROPP is around 3-5 times quicker overall.
     28
     29Here are some plots of '''x''' and '''H'''('''x''') for each iteration of the convergence of profile 3:
     30
     31==== Temp - bkgr ====
     32[[Image(Test4_multi_1.gif)]]
     33
     34==== Hum - bkgr ====
     35[[Image(Test4_multi_2.gif)]]
     36
     37==== Pres - bkgr ====
     38[[Image(Test4_multi_3.gif)]]
     39
     40==== Forward modelled Bangle - obs ====
     41[[Image(Test4_multi_4.gif)]]
     42
     43Certainly it ''looks'' as though !LevMarq is doing a better job, but can we prove it?  Given its extra cost, the question needs answering. I think we could try using a profile with an analytical solution (eg exponentially decaying T & q, on diff scales perhaps), roughed up with some random noise (of known statistics). Then we could see whether minROPP or !LevMarq is better.