
Problem parsing negative longitude values in Lambert Azimuthal Equal Area grid specs #427

@JohnHalleyGotway

Description

I'm writing to inform you of a potential bug in the grib2c library, revealed by testing with the wgrib2 utility.

Here's the GRIB2 file to demonstrate the issue:
ukv_agl_temperature_1.5_12.grib2.gz

Running wgrib2 -V prints the specification of the Lambert Azimuthal Equal Area grid on which this data resides.

wgrib2 -V ukv_agl_temperature_1.5_12.grib2 
	Lambert Azimuthal Equal Area grid: (1042 x 970) input WE:SN output WE:SN res 48
	Lat1 44.517153 Lon1 2164.600777 Cen Lon -2.500000 Std Par 54.900000
	Dx 2000.000000 m Dy 2000.000000 m mode 48

The problem is this Lon1 value of 2164.600777. It should actually be -17.117129!

This can be seen by running the following wgrib2 command:

wgrib2 ukv_agl_temperature_1.5_12.grib2 -get_byte 3 43 4 -get_int 3 43 1

1:0:3-43=129,5,47,201:3-43=-17117129

The GRIB2 Template 3.140 (for LAEA grids) indicates that "Lo1" is stored in bytes 43 to 46. The -get_int option for bytes 43 to 46 correctly returns a value of -17117129 in micro-degrees, or -17.117129 in degrees.
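In fact, those four bytes explain exactly where the bogus 2164.600777 comes from. GRIB2 encodes negative integers in sign-and-magnitude form: the high bit of the first octet is the sign and the remaining 31 bits are the magnitude. Reading all 32 bits as magnitude gives 0x81052FC9 = 2164600777 micro-degrees, while honoring the sign bit gives -17117129. Here's a small standalone sketch (my own illustration, not library code) that decodes the dumped octets both ways:

#include <stdio.h>

int main(void) {
   unsigned char b[4] = {129, 5, 47, 201};   /* octets 43-46 of Section 3, per -get_byte above */

   /* Naive read: treat all 32 bits as magnitude -> the bogus Lon1 value. */
   unsigned long raw = ((unsigned long) b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3];

   /* Sign-and-magnitude read: strip the sign bit, negate if it was set. */
   long val = ((long) (b[0] & 0x7f) << 24) | (b[1] << 16) | (b[2] << 8) | b[3];
   if(b[0] & 0x80) val = -val;

   printf("naive read:  %lu micro-degrees = %f degrees\n", raw, raw / 1e6);   /* 2164600777 */
   printf("signed read: %ld micro-degrees = %f degrees\n", val, val / 1e6);   /* -17117129  */

   return 0;
}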

Note that I also confirmed this by running the grib_dump utility from ECMWF:

grib_dump ukv_agl_temperature_1.5_12.grib2

longitudeOfFirstGridPointInDegrees = -17.1171;

The issue seems to be that the grib2c library isn't correctly parsing the leading negative sign when populating the igdstmpl array. The same issue may also exist for the central longitude value of -2.5: while wgrib2 does report its negative value correctly, the MET code I work on did not. You can see a comment about this issue in this line of the MET code.

FYI, my current workaround in the MET code is to call the following parse_int4() utility, which reprocesses the bytes and checks for the leading sign bit.

#include "grib2.h"   /* defines the g2int type */

/* Reinterpret a 4-byte GRIB2 value as sign-and-magnitude: the high bit of
   the first byte is the sign, the remaining 31 bits are the magnitude. */
int parse_int4(g2int i) {
   unsigned char c[4];
   unsigned long n = i;

   //  split the value back into its original 4 bytes
   c[0] = (n >> 24) & 0xFF;
   c[1] = (n >> 16) & 0xFF;
   c[2] = (n >> 8) & 0xFF;
   c[3] = n & 0xFF;

   //  convert unsigned char to signed integer
   int i_val;
   if(c[0] & 0x80) {
      i_val = -(((c[0] & 0x7f) << 24) + (c[1] << 16) + (c[2] << 8) + c[3]);
   }
   else {
      i_val = (c[0] << 24) + (c[1] << 16) + (c[2] << 8) + c[3];
   }

   return(i_val);
}
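
For context, here's roughly how that workaround gets applied after unpacking a field with g2_getfld(). Note this is only a sketch: the function name print_lon1() is mine, and the assumption that Lo1 lands in igdstmpl[10] for Template 3.140 is my reading of the octet layout (octets 43-46 map to the eleventh template entry), not something I've verified against the library source.

#include <stdio.h>
#include "grib2.h"   /* g2_getfld(), g2_free(), gribfield, g2int */

int parse_int4(g2int i);   /* the workaround shown above */

/* Sketch only: print Lo1 for the first field of a GRIB2 message held in
   cgrib, assuming a Template 3.140 grid and that Lo1 is igdstmpl[10]. */
void print_lon1(unsigned char *cgrib) {
   gribfield *gfld = 0;

   /* Unpack the metadata only; don't expand the data values. */
   if(g2_getfld(cgrib, 1, 0, 0, &gfld) != 0) return;

   if(gfld->igdtnum == 140) {
      g2int raw = gfld->igdstmpl[10];
      printf("Lo1 as stored by grib2c: %lld\n", (long long) raw);   /* bogus positive value today */
      printf("Lo1 after parse_int4():  %d\n", parse_int4(raw));     /* -17117129 */
   }

   g2_free(gfld);
}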
