Error in coupling CROCO - WW3

Dear friends,

I am working on coupling CROCO with WW3. To do so, I am first trying to run the Benguela coupled test case, following the methodology presented in the last CROCO summer course >> http://mosa.dgeo.udec.cl/CROCO2024/CursoAvanzado/Day3_morning.pdf

I have successfully run each model separately. For the coupling, I have configured my switch file as follows:
F90 NOGRB TRKNC NC4 DIST MPI PR3 UQ FLX0 LN1 ST4 STAB0 NL1 BT4 DB1 MLIM TR0 BS0 IC0 IS0 REF1 XX0 WNT0 WNX1 RWND CRT0 CRX1 COU OASIS OASOCM O0 O1 O2 O2a O2b O2c O3 O4 O5 O6 O7; also in cppdefs.h I have defined OW_COUPLING and MRL_WCI.

I have checked the dates of the input files for both CROCO and WW3: although they use different time formats (hours vs. days) and different start and end dates, they all cover January 2005.
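This is roughly how I checked them (the file and variable names below are examples from my setup and may differ):

# Print the last values of each time axis:
ncdump -v sms_time croco_frc.nc | tail -n 5   # CROCO forcing time, in days
ncdump -v time wind.nc | tail -n 5            # WW3 wind input time, in hours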
When running the model on the cluster, I get an error.

You can access the run files I am using at the link below. The error is saved in the file “job_1226.out”. Does anyone have an idea of what might be causing it?

Please let me know if additional information is needed.

Thank you for your support and kind attention.

Regards,
Cesar.

Hi Cesar, it seems from some log files that the coupling is not really activated in both models, e.g.:

  • in nout.000000: I see only wwatch as a coupled model at the end of the file: (oasis_init_comp) COUPLED models 1 wwatch T
  • in log.ww: it seems that coupling is not activated, as I do not see any section about coupling fields after the section on output fields. Normally there is a section named “Coupling output fields :” right after the “Gridded output fields” section (see the quick checks below).
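You can verify this quickly with something like the following (log names as above; the expected component names are the ones declared in your namcouple, e.g. wwatch and crocox):

# OASIS should report two coupled components, not one:
grep "COUPLED models" nout.000000
# WW3 should declare its coupling fields when COU/OASIS is active:
grep -A 5 "Coupling output fields" log.ww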

But it also seems that you have done several different runs in this same directory (several job_XXX.out files), so maybe the logs are not all consistent with each other, which makes it difficult to assess what the problem really is. I suggest creating a new, clean run directory and putting only your coupled-run trial there, as sketched below. Please also add the cppdefs.h and param.h from the CROCO compilation, as well as the switch file for WW3.
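Something along these lines (all paths here are hypothetical, adapt them to your installation):

# Create a clean directory holding only the coupled trial:
mkdir -p $HOME/CONFIGS/BENGUELA_CPL_clean
cd $HOME/CONFIGS/BENGUELA_CPL_clean
# Collect the compilation settings for review:
cp $HOME/croco/OCEAN/cppdefs.h $HOME/croco/OCEAN/param.h .
cp $HOME/WW3/model/bin/switch .
# ... then copy only the inputs of the coupled case and rerun it here.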

Hi Swen. Thank you for your reply.

Finally, I was able to run the Benguela coupled case. Apparently there was an error in the WW3 auxiliary programs; by recompiling WW3 with OASIS, I regenerated the auxiliary programs and the WW3 input files, and with these I was able to run the case.
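For reference, the recompilation was roughly as follows (classic make build; the paths and switch file name are from my setup and may differ):

# Rebuild WW3 with the OASIS coupling switches (COU OASIS OASOCM ...):
cd $WW3_DIR/model/bin
cp $HOME/switch_OASOCM switch          # the switch file quoted in my first message
./w3_clean                             # clean the previous (uncoupled) build
./w3_make ww3_grid ww3_prnc ww3_shel   # auxiliary programs + coupled model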

Regards,
César.

Hi Swen,

Now I am trying to run the coupled models using the CROCO SCRIPTS_TOOLBOX. I have carefully checked the paths to each of the executable programs. In “rundir” I can see that the provided scripts compile CROCO and fill in the *.base files of both CROCO and WW3, but they do not run the WW3 auxiliary programs (e.g. ww3_grid, ww3_prnc, …).

Checking “wav_getrst.txt” I find the following error:

************* get WAVE RESTART files *****************

WW3 pre-processing before run:
mpirun -n 1 ww3_grid &> grid.out
ERROR when running ww3_grid, mod_def.ww3 does not exist
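If I understand the log correctly, this step boils down to something like the following (my reading, not the actual toolbox code), so “ww3_grid.inp” must already be in the run directory and “mod_def.ww3” should appear on success:

cd $RUNDIR                        # run directory created by the scripts
ls ww3_grid.inp                   # pre-processor input must be present here
mpirun -n 1 ww3_grid &> grid.out  # the command quoted above
ls mod_def.ww3                    # produced on success; missing in my case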

I think that in “myjob.sh” I am not correctly defining:

MPI_LAUNCH_CMD=$MPI_LAUNCH (default)
export SERIAL_LAUNCH_WAV="$MPI_LAUNCH -n 1 "

My machine runs Linux, so I have adapted “myenv.Linux” in the SCRIPTS_TOOLBOX/MACHINE directory according to the compiler I use (ifort); consequently, I have tried:

MPI_LAUNCH_CMD=$mpirun
export SERIAL_LAUNCH_WAV="$mpirun -n 1 "

but when doing this, the program does not even start: after “./submitjob.sh” I get the message “./mynamelist.tmp: line 357: mpirun: unbound variable”.
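I suppose this happens because “$mpirun” refers to a shell variable that is never defined, something like:

# Minimal reproduction of the failure (my understanding, not the toolbox code):
set -u                    # the generated scripts abort on any unset variable
MPI_LAUNCH_CMD=$mpirun    # "mpirun" is used here as a variable, not a command
# -> line 357: mpirun: unbound variable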

In my case, how should I define “MPI_LAUNCH_CMD” and “export SERIAL_LAUNCH_WAV”?

Thank you very much for your attention and help.

Regards,
César.

Hi,
Is this problem solved? Did changing -n to -np help?
Let me know; I am free and would like to look at this closely.
We can work on it together!

Hi Smaishal,

Thank you for your support.
I have seen the suggestions you sent me by mail.
I had already thought of changing -n to -np in “launch_Linux.sh”. I made those changes and ran the model from the run directory (rundir), but it didn’t work. I then wanted to run the model again via “./submitjob.sh”, but as I mentioned before, the model does not start.
In “myjob.sh”, what should I define for:
MPI_LAUNCH_CMD=
export SERIAL_LAUNCH_WAV

Can you send me your CPL problem by mail :) ?
Please send your CPL setup together with some logs.
If you want, I can try to reproduce your problem on my system; I would really like to do that in a VirtualBox machine.

Thank you for sending your files.
Your error is the following:
************* get WAVE RESTART files *****************

WW3 pre-processing before run:
mpirun -np 1 ww3_grid &> grid.out
ERROR when running ww3_grid, mod_def.ww3 does not exist

Hi,
Thank you for your response.

I have followed your suggestion to make some small changes in “w3_getrst.sh”.

In a previous answer you told me “before reading your massage just ./”; could you explain in more detail where exactly I should write “./”?
Based on your suggestion, I thought that in “w3_getrst.sh” I should write “./” only before:
“${io_getfile} ${WAV_NAM_DIR}/ww3_prnc.inp.${forcww3[$k]} ./ww3_prnc.inp”

I don’t know if this is what you meant.

In any case, with this change “mod_def.ww3” and “ww3_prnc.inp” are now created correctly, but I still get an error when the script runs “ww3_prnc”. About this I should say that, as mentioned above, I have already run the coupled models with this data and, consequently, with these same executables (e.g. ww3_grid, ww3_prnc, …), so I don’t know what causes this error. It looks as if WW3 were not properly compiled, as if the programs in “exe_ow” were wrong, but that should not be the problem, because I have already run this coupling with those same programs following the “by hand” methodology. Perhaps it is worth checking which libraries the executables are linked against, as sketched below.
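For instance (hypothetical check, run from the directory containing the WW3 executables):

# Compare the NetCDF/HDF5 libraries found at run time with those used at compilation:
ldd ./ww3_prnc | grep -iE "netcdf|hdf5"
echo $LD_LIBRARY_PATH
nc-config --libs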

The error I am getting in “prnc.wind.out” is:

*** WAVEWATCH III Input pre-processing ***
===============================================

Grid name : benguela_CPL_ow

Comment character is ‘$’

Description of inputs

   Input type        : winds
   Format type       : long.-lat. grid
      Field conserves velocity.

       File name         : wind.nc
       Dimension along x : longitude
       Dimension along y : latitude
       Field component 1 : u10
       Field component 2 : v10

*** WAVEWATCH III ERROR IN PRNC :
NETCDF ERROR MESSAGE:
NetCDF: HDF error

w3servmd MPI_ABORT, IEXIT= 59
w3servmd UNIT missing
w3servmd MSG missing
w3servmd FILE missing
w3servmd LINE missing
w3servmd COMM missing
Abort(59) on node 0 (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0

Thanks for your attention and support,
César.

Hi,

Thanks.
I am referring your problem to sjullien and andres; I hope they will be able to help you better. Thanks.

Hi Cesar,
I haven’t followed all your exchanges above, but from what I read, the issue you are facing is that the WW3 pre-processing steps (e.g. ww3_grid here) are not working. These steps have to be run in serial (not MPI), but depending on your environment and on how you compiled WW3, you may have to launch them either like usual serial programs, i.e. with export SERIAL_LAUNCH_WAV="./", or through an MPI launch on 1 CPU, e.g. if you are using mpirun: export SERIAL_LAUNCH_WAV="mpirun -np 1 ". Note that you also need to have your NetCDF libraries correctly set up in your environment (the libraries that you used to compile WW3); the “NetCDF: HDF error” above often points to such a library mismatch.
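Concretely, the relevant lines in “myjob.sh” would look like this (using the default names quoted earlier; keep only one of the two SERIAL_LAUNCH_WAV forms):

MPI_LAUNCH_CMD=$MPI_LAUNCH                      # $MPI_LAUNCH is set in myenv.Linux, e.g. mpirun
export SERIAL_LAUNCH_WAV="./"                   # WW3 tools run as plain serial programs
# export SERIAL_LAUNCH_WAV="$MPI_LAUNCH -np 1 " # or: serial run through the MPI launcher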