OOM on planet-wide foot profile import

I’m using GraphHopper 4.0 and I’m hitting an OOM error on a planet-wide import. My setup is as follows (a rough sketch of my config.yml follows the list):

  • Latest OSM planet data
  • JVM running with -Xmx112G
  • foot profile only, but I’m storing some custom weights as well (4 bytes per edge)
  • LM and CH disabled via the config (profiles_ch: [] and profiles_lm: [])
  • Elevation enabled, using the multi provider with smoothing
  • The data access defaults: graph.dataaccess=RAM_STORE for the graph and graph.elevation.dataaccess=MMAP for the elevation data
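
For reference, this is roughly the graphhopper section of my config.yml. It's a sketch from memory rather than a verbatim paste (the planet file name is a placeholder, and the elevation provider/smoothing key names are as I remember them), so double-check against config-example.yml:

# sketch of the relevant config.yml section, not a verbatim copy
graphhopper:
  datareader.file: planet-latest.osm.pbf   # placeholder name for the planet file
  graph.flag_encoders: foot
  profiles:
    - name: foot
      vehicle: foot
      weighting: fastest
  profiles_ch: []
  profiles_lm: []
  graph.dataaccess: RAM_STORE
  graph.elevation.provider: multi
  graph.elevation.dataaccess: MMAP
  graph.elevation.smoothing: true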

The OOM always seems to occur when processing the elevation data.

Note: I do not hit the OOM error when using the cgiar elevation provider alone. Of course, that means I miss elevation data in some parts of the world.

I’m going to re-run it now with graph.dataaccess=MMAP, and will report back later.

But I thought I’d ask here in case there’s something silly I’m missing.

As an aside: I don’t say it often enough, but thanks again for such a wonderful open source product!

Edit: stacktrace below, in case it helps:

Jan 10 17:32:55 osm4 java[136339]: Caused by: java.lang.RuntimeException: Couldn't map buffer 64 of 69 with 1048576 for srtm_21_22.gh at position 67108964 for 72000000 bytes with offset 100, new fileLength:72351844, totalMB:114688, usedMB:39372
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.storage.MMapDataAccess.mapIt(MMapDataAccess.java:229)
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.storage.MMapDataAccess.ensureCapacity(MMapDataAccess.java:184)
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.storage.MMapDataAccess.create(MMapDataAccess.java:178)
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.storage.MMapDataAccess.create(MMapDataAccess.java:58)
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.reader.dem.AbstractTiffElevationProvider.getEle(AbstractTiffElevationProvider.java:144)
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.reader.dem.MultiSourceElevationProvider.getEle(MultiSourceElevationProvider.java:53)
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.reader.dem.ElevationProvider.getEle(ElevationProvider.java:52)
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.reader.osm.OSMReader.addNode(OSMReader.java:530)
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.reader.osm.OSMReader.processNode(OSMReader.java:511)
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.reader.osm.OSMReader.writeOsmToGraph(OSMReader.java:269)
Jan 10 17:32:55 osm4 java[136339]:         ... 19 common frames omitted
Jan 10 17:32:55 osm4 java[136339]: Caused by: java.io.IOException: Map failed
Jan 10 17:32:55 osm4 java[136339]:         at java.base/sun.nio.ch.FileChannelImpl.mapInternal(FileChannelImpl.java:1103)
Jan 10 17:32:55 osm4 java[136339]:         at java.base/sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:1008)
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.storage.MMapDataAccess.newByteBuffer(MMapDataAccess.java:242)
Jan 10 17:32:55 osm4 java[136339]:         at com.graphhopper.storage.MMapDataAccess.mapIt(MMapDataAccess.java:220)
Jan 10 17:32:55 osm4 java[136339]:         ... 28 common frames omitted
Jan 10 17:32:55 osm4 java[136339]: Caused by: java.lang.OutOfMemoryError: Map failed
Jan 10 17:32:55 osm4 java[136339]:         at java.base/sun.nio.ch.FileChannelImpl.map0(Native Method)
Jan 10 17:32:55 osm4 java[136339]:         at java.base/sun.nio.ch.FileChannelImpl.mapInternal(FileChannelImpl.java:1100)
Jan 10 17:32:55 osm4 java[136339]:         ... 31 common frames omitted
Jan 10 17:32:55 osm4 java[136339]: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007faa24720000, 65536, 0) failed; error='Not enough space' (errno=12)
Jan 10 17:32:55 osm4 java[136339]: #
Jan 10 17:32:55 osm4 java[136339]: # There is insufficient memory for the Java Runtime Environment to continue.
Jan 10 17:32:55 osm4 java[136339]: # Native memory allocation (mmap) failed to map 65536 bytes for committing reserved memory.

How much RAM does the machine have? Make sure that you do not use all of it for the heap (i.e. decrease -Xmx); the JVM and the compiler need RAM too. And when you use MMAP, the memory mapped files need memory outside the heap as well.
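
For example something like this (the numbers are only to illustrate the idea; adjust them to your machine and use the real name of your jar):

# leave headroom for the JVM itself, the JIT compiler and the memory mapped files,
# i.e. do not give (nearly) all of the physical RAM to the heap
java -Xmx100g -Xms100g -jar graphhopper-web-4.0.jar server config.yml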

Also make sure that you have the system properly configured. See https://github.com/graphhopper/graphhopper/blob/master/docs/core/deploy.md#worldwide-setup

Thanks for the quick reply. I’ve already followed those steps. The system has 256GB RAM, so there’s plenty of room. But I think I see the likely issue…

There’s a per-process limit on the number of memory mappings, which defaults to ~65k on Linux, and I can see that I’m running very close to it while my import is running right now:

# Current mmap count for GH
root@osm4 ~ # wc -l /proc/$(pidof java)/maps
59926 /proc/137642/maps

# This is the limit
root@osm4 ~ # sysctl vm.max_map_count
vm.max_map_count = 65530
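
To keep an eye on this while the import runs I’m just polling that file (this assumes the import is the only java process on the box):

# poll the mapping count once a minute during the import
watch -n 60 'wc -l /proc/$(pidof java)/maps'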

So I’m going to bump up vm.max_map_count and retry. This isn’t without precedent: Elasticsearch requires the same tweak (see the “Virtual memory” page in the Elasticsearch Guide).
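
Concretely, I’m planning something like the following. 262144 is simply the value the Elasticsearch docs recommend; anything comfortably above the observed peak should do, and the sysctl.d file name is just my own choice:

# raise the limit on the running system
sysctl -w vm.max_map_count=262144
# persist it across reboots
echo 'vm.max_map_count=262144' > /etc/sysctl.d/99-graphhopper.conf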

I’ll report back here and will submit a PR for the docs if this turns out to be correct.


Confirmed: increasing vm.max_map_count resolved the issue. The system peaked at 66,004 mmap entries, i.e. ~500 over the default limit. I don’t know whether the extra storage for my custom weightings tipped it over the edge, or if it’s something else.

I was about to submit a PR for the docs change, but I see you’ve already done it. Thanks!

