I have a large Fortran code that computes the evolution of a system through time. The code is subdivided into modules, each of which tackles a specific physical process.
The last module to be compiled, i.e. the one that depends on all the others, contains the main loop over time. At a given time step, a set of equations is solved; if we are happy with the quality of the solutions, time is advanced by an amount dt, otherwise dt is halved and the loop starts over. Schematically, it looks like this:
subroutine sub_main(args)
  use mod1, only: sub11, sub12, ...
  use mod2, only: sub21, sub22, ...
  ...
  main_loop: do
    ! solve equations:
    call sub11( ... )
    call sub22( ... )
    ...
    if (problem) then
      dt = dt / 2.0
      cycle main_loop
    else
      t = t + dt
    end if
  end do main_loop
end subroutine sub_main
Usually, convergence issues or other problems are detected only one level below this subroutine, for instance directly in sub11 or sub22. In that case, catching the kind of issue that occurred through a flag is easy. Moreover, sub11 and sub22 are only ever called within the scope of main_loop.
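To illustrate, the one-level case currently looks roughly like this (ierr is just a placeholder name for whatever status flag the solver returns):

  call sub11( ..., ierr )
  if (ierr /= 0) then     ! sub11 reported, e.g., a convergence failure
    dt = dt / 2.0
    cycle main_loop
  end if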
However, problems sometimes occur in much deeper subroutines: for instance in sub_abc, which is called from sub_def, itself called from sub_ghi, itself called from sub21. Moreover, this sub_abc, where the problem occurs, is also sometimes called outside of main_loop, and in that case we do not want to decrease the time step when a problem occurs.
My question is then: is there a way to either (1) decrease the time step and cycle main_loop directly from the inner subroutine sub_abc, or (2) exit sub_abc, sub_def, sub_ghi and sub21 automatically, return to subroutine sub_main where main_loop is defined, and decrease dt there? (Or any other solution to my problem.)
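To make option (2) concrete, here is a minimal sketch of what I have in mind, assuming an integer status code ierr threaded through the call chain with an early return at each level (all names and the ierr convention are placeholders, not my actual code):

  subroutine sub_abc(x, ierr)
    real,    intent(inout) :: x
    integer, intent(out)   :: ierr
    ierr = 0
    ! ... actual work ...
    if (x /= x) ierr = 1   ! e.g. a NaN was produced: flag the failure
  end subroutine sub_abc

  subroutine sub_def(x, ierr)
    real,    intent(inout) :: x
    integer, intent(out)   :: ierr
    call sub_abc(x, ierr)
    if (ierr /= 0) return  ! unwind: hand the failure up unchanged
    ! ... rest of sub_def ...
  end subroutine sub_def

  ! sub_ghi and sub21 would forward ierr in the same way, so that
  ! main_loop can finally react:
  !   call sub21( ..., ierr )
  !   if (ierr /= 0) then
  !     dt = dt / 2.0
  !     cycle main_loop
  !   end if

A caller outside main_loop would simply interpret ierr differently (report, abort, ...), which takes care of the constraint above. But this means adding an argument to every routine on the path, which is why I am asking whether something less intrusive exists.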
As a reminder, sub_main is part of a module that depends on mod1, mod2, etc.
Thank you for your ideas! (I am open to any Fortran standard >= 90.)