Error code -57 without parallelisation

Hi,
I am a new user trying to find my way around croco.
I followed the documentation and got to a stage where I was ready to compile and run the model. However, when I run the model (./croco croco.in) I get:
NF_FREAD ERROR: nf_get_vara netCDF error code = -57
GET_GRID - error while reading variable : h in grid netCDF file: CROCO_FILES/croco.grd.
Searching the forum, I came across a
previous similar post; however, that one was related to MPI / domain decomposition.
The tutorial pages also mention this error in connection with MPI / domain decomposition. However, I am not (at least not knowingly) setting up a parallel run. The console output also says Number of threads: 1, blocking 1 x 1, so I doubt this is domain-decomposition related (please correct me if I am wrong).
I am trying to run the tutorial case BENGUELA LR. I have not made any major changes to the setup except for the following:

  1. The .m files in the tools folder do not execute on Octave without changing over the commands (netcdf → netcdf.create etc.; see the sketch after this list).
  2. add_topo uses indices (j,i), which differs from the indexing in the topo file (it returns an index-out-of-bounds error), so I have switched that to (i,j).
    e.g. %topo=cat(2,-topo_data(j,i1),topo); (original)
    topo=cat(2,-topo_data(i1,j),topo); (modified)
    In the end h=interp2(y,x,topo,lat,lon,'cubic');
    Alternatively h=interp2(x,y,topo',lon,lat,'cubic');
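
For item 1, this is roughly the kind of change I mean. The commented lines show the old toolbox-style call as I read it in the shipped scripts, and the file name is just an example, so treat this as a sketch rather than the actual croco_tools code:

grdname = 'CROCO_FILES/croco_grd.nc';   % example path
% Old style (fails in plain Octave):
% nc = netcdf(grdname, 'nowrite');
% h  = nc{'h'}(:);
% close(nc);
% New style with the built-in netcdf.* functions:
nc   = netcdf.open(grdname, 'NC_NOWRITE');
id_h = netcdf.inqVarID(nc, 'h');
h    = netcdf.getVar(nc, id_h);
netcdf.close(nc);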

ncdump shows that the grid file has the variable h. The dimensions are as defined (eta_rho, xi_rho) and the values also print out fine through ncdump.

So I am at a loss as to what could be causing the problem. Does anyone have any hints?

Need your cppdefs.h and param.h files.

Thanks for the offer! I see that Octave reads and writes the variables in reverse order. I looked at the ncdump of the grid file for Benguela and compared it to the grid file I generate, and the eta and xi dimensions are switched. I am switching them in the preprocessing files; hopefully this solves the problem.
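
For anyone hitting the same thing, this is roughly what the switch looks like when writing the grid file. The sizes and the flat bathymetry are dummy values just to make the sketch self-contained; it is an illustration, not the actual croco_tools code:

Lp = 10; Mp = 8;                        % dummy grid size
h  = 5000 * ones(Lp, Mp);               % bathymetry, (xi_rho, eta_rho) in memory
nc      = netcdf.create('croco_grd_test.nc', 'NC_CLOBBER');
dim_xi  = netcdf.defDim(nc, 'xi_rho',  Lp);
dim_eta = netcdf.defDim(nc, 'eta_rho', Mp);
% Listing the dimensions fastest-varying first makes ncdump show
% h(eta_rho, xi_rho), which is what GET_GRID expects.
id_h    = netcdf.defVar(nc, 'h', 'NC_DOUBLE', [dim_xi dim_eta]);
netcdf.endDef(nc);
netcdf.putVar(nc, id_h, h);
netcdf.close(nc);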

Just wanted to report back that the problem with that error code is resolved. The model starts running, but now I get:
Program received signal SIGFPE: Floating-point exception - erroneous arithmetic operation.

I made no changes to param.h and cppdefs.h from the downloaded files, other than writing in my path.
Seems like a problem with the initialisation?
The same CROCO executable works just fine with the input files (.grd, blk, bry, bryZ, ini) that I downloaded from the website for the Benguela case, so my guess is that this has to do with the preprocessing.

Comparing the contents of the generated CROCO files with the example files available in the tutorial, the only difference I see in the variables is that the files I generate have time stored as float64, whereas the tutorial input files have timedelta64. Octave doesn't generate this format. However, I am also wondering whether this is actually the cause.

Looks like there is a difference between "double" and "NC_DOUBLE". Defining the variable as NC_DOUBLE sorts out this problem.
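
For the record, this is roughly how I declare the time variable now. The file and variable names ('croco_bry_test.nc', 'bry_time') and the three dummy records are just examples, not necessarily what croco_tools uses:

nc       = netcdf.create('croco_bry_test.nc', 'NC_CLOBBER');
dim_bryt = netcdf.defDim(nc, 'bry_time', netcdf.getConstant('NC_UNLIMITED'));
id_time  = netcdf.defVar(nc, 'bry_time', 'NC_DOUBLE', dim_bryt);
netcdf.endDef(nc);
% Write three records along the unlimited dimension: (start, count, data).
netcdf.putVar(nc, id_time, 0, 3, [0 15 30]);
netcdf.close(nc);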

However, the floating-point error I get from bulk.f remains.

I have now solved the problem I faced. I thought I would document my progress here for anyone who faces a similar problem in the future. Summarizing:

  1. The .m files that ship with the download in croco_tools have netcdf calls that Octave (and also MATLAB) does not consider correct syntax (as in my post above). They need to be switched to the syntax in the documentation: netcdf.create, netcdf.open, netcdf.putVar etc.
  2. Octave (I haven't tried MATLAB here) reads and writes the variables with the dimensions in reverse order. For example:

nc = netcdf.open(grdname, 'NC_NOWRITE');
id_lonr = netcdf.inqVarID(nc, 'lon_rho');
nclonr = netcdf.getVar(nc, id_lonr);

returns an array of size (xi_rho, eta_rho) even though it is stored as (eta_rho, xi_rho) in the netCDF file. So, a reversal.

  3. The writing is also reversed (see the first sketch after this list). For example:

var_u_s = netcdf.defVar(nc, 'u_south', 'NC_DOUBLE', [dim_xi_u dim_s_rho dim_v3dt]);

is the declaration that creates a variable which appears in the file with its dimensions in the order [dim_v3dt, dim_s_rho, dim_xi_u], i.e. the reverse of the order given to netcdf.defVar.
And the array passed when writing values into it also has to be ordered accordingly (the reverse of what the downloaded files assume).

  4. Specific to Benguela: the external data files do not have the attributes add_offset and scale_factor, and the code defaults them to NaN if the value is not found. This returns a NaN array.
    Additionally, a missing_value attribute is declared (say -1e9), but the data contains other garbage values, for example -5e04. This needs to be cleaned up (see the second sketch after this list).
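
To make item 3 concrete, here is a minimal, self-contained sketch of the write order that works for me. The sizes, the zero data, and the dimension name 'v3d_time' are dummy placeholders, not the real Benguela setup:

Lu = 10; Ns = 32; Nt = 4;               % dummy sizes
nc        = netcdf.create('test_bry.nc', 'NC_CLOBBER');
dim_xi_u  = netcdf.defDim(nc, 'xi_u',     Lu);
dim_s_rho = netcdf.defDim(nc, 's_rho',    Ns);
dim_v3dt  = netcdf.defDim(nc, 'v3d_time', Nt);
% Dimension IDs listed fastest-varying first; ncdump then shows
% u_south(v3d_time, s_rho, xi_u).
var_u_s   = netcdf.defVar(nc, 'u_south', 'NC_DOUBLE', [dim_xi_u dim_s_rho dim_v3dt]);
netcdf.endDef(nc);
% The in-memory array therefore has to be sized (xi_u, s_rho, time),
% i.e. the reverse of what the downloaded scripts assume.
u_south = zeros(Lu, Ns, Nt);
netcdf.putVar(nc, var_u_s, u_south);
netcdf.close(nc);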
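And for item 4, this is the kind of attribute handling and clean-up I ended up doing when reading the external data. The file name, the variable name ('temperature'), the fallback values, and the -1e4 threshold are all specific to my case, so adapt as needed:

datafile = 'external_data.nc';          % example file
nc       = netcdf.open(datafile, 'NC_NOWRITE');
id_var   = netcdf.inqVarID(nc, 'temperature');
raw      = double(netcdf.getVar(nc, id_var));
% Fall back to neutral values when scale_factor / add_offset are absent,
% instead of letting them end up as NaN.
try
  scale = netcdf.getAtt(nc, id_var, 'scale_factor');
catch
  scale = 1;
end
try
  offset = netcdf.getAtt(nc, id_var, 'add_offset');
catch
  offset = 0;
end
data = raw * scale + offset;
% Mask the declared missing_value and any other obvious garbage.
try
  missval = netcdf.getAtt(nc, id_var, 'missing_value');
catch
  missval = -1e9;
end
data(raw == missval | data < -1e4) = NaN;   % threshold is case-specific
netcdf.close(nc);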

Once these steps are carried out, the case works. I am still a little confused/surprised that this has not been discussed or pointed out previously on the forum, which makes me wonder whether I am doing something seriously wrong.

PS: I drew these conclusions after comparing the preproc files I generated with the preproc files available for download for the Benguela case. Thanks to the developers for making those available :slight_smile:

I hope this will help someone, or that I get feedback if I am doing something seriously wrong :smiley: