Memory allocation failing during elevation and PBF reading

hi there,

Recently we have been having problems building global graphs: during the PBF reading phase we get memory allocation errors. Our code is based on the GH 0.10 version, and everything worked fine until recently, when we updated the URL for the CGIAR data (our code base was still pointing at the old URL). That is the only elevation-related difference I can see between our code and the base GH 0.10 version.

We are building single profiles using the multi elevation provider option on a 128 GB machine, with Xmx and Xms set to 100g. Below is the error message that comes out of it. Is there something we are just missing that could cause this? We have managed to get it working a couple of times by forcing it to download the elevation data again, but that adds a lot of time to the build process.
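For reference, the elevation-related part of our configuration looks roughly like this (a sketch from memory; the exact key names may differ slightly in 0.10, and the cache path is just illustrative):

```properties
# select the multi-source elevation provider (CGIAR + GMTED)
graph.elevation.provider=multi
# where the downloaded tiles are cached
graph.elevation.cache_dir=/data/elevation/
# MMAP keeps the tiles memory-mapped instead of on the Java heap
graph.elevation.dataaccess=MMAP
```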

I can try to provide more info if that will help, but I am running around in circles at the minute, so I am not sure what I should provide…

Thanks.

Caused by: java.lang.RuntimeException: Couldn't map buffer 218 of 264 for 70s090e_20101117_gmted_mea075.gh at position 228589668 for 276480000 bytes with offset 100, new fileLength:276824164
	at com.graphhopper.storage.MMapDataAccess.mapIt(MMapDataAccess.java:141)
	at com.graphhopper.storage.MMapDataAccess.ensureCapacity(MMapDataAccess.java:98)
	at com.graphhopper.storage.MMapDataAccess.create(MMapDataAccess.java:82)
	at com.graphhopper.storage.MMapDataAccess.create(MMapDataAccess.java:51)
	at com.graphhopper.reader.dem.AbstractTiffElevationProvider.getEle(AbstractTiffElevationProvider.java:131)
	at com.graphhopper.reader.dem.MultiSourceElevationProvider.getEle(MultiSourceElevationProvider.java:54)
	at com.graphhopper.reader.osm.OSMReader.getElevation(OSMReader.java:782)
	at com.graphhopper.reader.osm.OSMReader.addNode(OSMReader.java:755)
	at com.graphhopper.reader.osm.OSMReader.processNode(OSMReader.java:723)
	at com.graphhopper.reader.osm.OSMReader.writeOsm2Graph(OSMReader.java:373)
	... 15 more
Caused by: java.io.IOException: Map failed
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:938)
	at com.graphhopper.storage.MMapDataAccess.newByteBuffer(MMapDataAccess.java:156)
	at com.graphhopper.storage.MMapDataAccess.mapIt(MMapDataAccess.java:134)
	... 24 more
Caused by: java.lang.OutOfMemoryError: Map failed
	at sun.nio.ch.FileChannelImpl.map0(Native Method)
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:935)
	... 26 more
mmap failed for CEN and END part of zip file
(the line above was repeated 13 times in total)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32744 bytes for ChunkPool::allocate
# An error report file with more information is saved as:
# /tmp/hs_err_pid14154.log

Try lowering the Xmx setting when you use the memory-mapped storage functions: memory mapping requires enough memory outside the JVM heap, and Xmx only defines the heap size for the JVM itself.
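To illustrate the point, here is a minimal sketch (the class and method names are mine, not GraphHopper's) showing that `FileChannel.map` hands back a buffer backed by native, off-heap memory — the same mechanism `MMapDataAccess` uses for the elevation tiles:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    // Maps `sizeBytes` of a temp file, writes one int and reads it back.
    // The mapped region lives in native (off-heap) memory, so its size is
    // limited by the OS and available native memory, not by -Xmx.
    static int mapWriteRead(long sizeBytes) throws IOException {
        Path tmp = Files.createTempFile("mmap-demo", ".bin");
        try (FileChannel ch = FileChannel.open(tmp,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, sizeBytes);
            buf.putInt(0, 42);
            return buf.getInt(0);
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        // 64 MiB mapped without consuming Java heap
        System.out.println("read back: " + mapWriteRead(64L << 20));
    }
}
```

Because the mapped tiles sit outside the heap, setting Xmx near the machine's physical RAM leaves too little native memory for the mappings, which matches the "Map failed" OutOfMemoryError above.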

If this does not help, or for next time, please include your config.yml, operating system version, and JVM version, and try with the latest stable GH version.

OK, thanks, we will give it a try on a different server as well to see if it is some weird memory issue there. It's a bit strange, as it all used to work without problems, and switching off the GMTED elevation then allows the build to complete.

We can't really reduce the Xmx value, though, as we have extra things in our code that add quite a bit of memory consumption, so we will try on a bigger server as well to see if that helps.

Strangely, I just ran it on a 200 GB machine with Xmx and Xms set to 100g and got the same error… I am trying a pure GH build as well, so I will see how that one goes.

# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid1574.log