Out of Memory when loading US map when Turn Costs enabled

When loading a large PBF file (us-latest.osm.pbf) I am unable to get GraphHopper to process the file and create the “-gh” directory when turn costs are enabled. Instead I receive an Out Of Memory error after about 50 minutes, citing Java heap space. This happens no matter how much memory I throw at it (up to 30G). If I disable turn costs, the processing succeeds with a modest amount of memory (<10G).

Some details:
GraphHopper Version: 2.1
Java Options: -Xmx30G
Data Access: MMAP_STORE
CH: Enabled for “car”
LM: Disabled
Error Message:

Exception in thread "main" java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
    	at com.graphhopper.routing.ch.CHPreparationHandler.prepare(CHPreparationHandler.java:207)
    	at com.graphhopper.GraphHopper.prepareCH(GraphHopper.java:1081)
    	at com.graphhopper.GraphHopper.postProcessing(GraphHopper.java:952)
    	at com.graphhopper.GraphHopper.process(GraphHopper.java:665)
    	at com.graphhopper.GraphHopper.importOrLoad(GraphHopper.java:628)
    Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
    	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    	at com.graphhopper.routing.ch.CHPreparationHandler.prepare(CHPreparationHandler.java:203)
    	... 7 more
    Caused by: java.lang.OutOfMemoryError: Java heap space
    	at com.graphhopper.routing.ch.EdgeBasedWitnessPathSearcher.initStorage(EdgeBasedWitnessPathSearcher.java:329)
    	at com.graphhopper.routing.ch.EdgeBasedWitnessPathSearcher.<init>(EdgeBasedWitnessPathSearcher.java:112)
    	at com.graphhopper.routing.ch.EdgeBasedNodeContractor.initFromGraph(EdgeBasedNodeContractor.java:100)
    	at com.graphhopper.routing.ch.PrepareContractionHierarchies.initFromGraph(PrepareContractionHierarchies.java:186)
    	at com.graphhopper.routing.ch.PrepareContractionHierarchies.doSpecificWork(PrepareContractionHierarchies.java:128)
    	at com.graphhopper.routing.util.AbstractAlgoPreparation.doWork(AbstractAlgoPreparation.java:30)
    	at com.graphhopper.routing.ch.CHPreparationHandler.lambda$prepare$0(CHPreparationHandler.java:191)
    	at com.graphhopper.routing.ch.CHPreparationHandler$$Lambda$36/0x0000000840103040.run(Unknown Source)

I guess my questions are (1) Is this expected, and (2) is there a way to work around this high memory requirement during the initial processing of a PBF file with turn costs (and CH) enabled?

I have been battling with this for a while now so any help would be appreciated.

(1) Yes, this is to be expected. Not just because turn costs are enabled, but because turn costs are enabled together with CH. This requires edge-based CH, which has much higher memory requirements.

(2) You are already doing the right thing by using MMAP instead of RAM. I’m a bit surprised it still does not work. Did you verify in your logs that MMAP is actually being used? How many nodes/edges does your graph have? You should be able to see this in your import log before the CH preparation starts. Another thing you could try is using the import command rather than the server command (or, if you are using the Java API, GraphHopper#importAndClose rather than GraphHopper#importOrLoad). Maybe also try another GC, e.g. -XX:+UseParallelGC.
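For reference, the suggestions above map onto a config.yml roughly like the following. This is only a sketch: the key names follow the GraphHopper 2.x config format, but you should verify them against the config-example.yml shipped with your version.

```yaml
# Sketch of a GraphHopper 2.x config.yml for this setup (verify key names
# against your version's config-example.yml).
graphhopper:
  datareader.file: us-latest.osm.pbf
  graph.location: us-gh
  graph.dataaccess: MMAP_STORE   # memory-map the graph instead of keeping it on-heap
  profiles:
    - name: car
      vehicle: car
      weighting: fastest
      turn_costs: true           # turn costs + CH forces edge-based CH preparation
  profiles_ch:
    - profile: car
```

With a config like this, running `java -Xmx30g -XX:+UseParallelGC -jar graphhopper-web-2.1.jar import config.yml` performs only the import and then exits, which corresponds to `GraphHopper#importAndClose` in the Java API and avoids keeping the server’s resources alive during the preparation.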

Thanks easbar.

From the beginning of the logs it appears that MMAP is actually being used:

[main] INFO com.graphhopper.reader.osm.GraphHopperOSM - using CH|car|MMAP_STORE|2D|turn_cost|,,,,, memory:totalMB:500, usedMB:8

Here is what the logs say about the nodes/edges:

[main] INFO com.graphhopper.reader.osm.GraphHopperOSM - nodes: 46 006 891, edges: 58 535 847

I have retried the process with importAndClose and the different GC as well, but the result is the same.

Here are some more log entries prior to it running out of memory. Maybe something in here will help.

> [main] INFO com.graphhopper.storage.index.LocationIndexTree - location index created in 90.852776s, size:65 700 323, leafs:16 773 476, precision:500, depth:8, checksum:46006891, entries:[16, 16, 16, 16, 16, 16, 4, 4], entriesPerLeaf:3.9169176
> [main] INFO com.graphhopper.routing.ch.CHPreparationHandler - Creating CH preparations, totalMB:7934, usedMB:3097
> [main] INFO com.graphhopper.routing.ch.CHPreparationHandler - 1/1 calling CH prepare.doWork for profile 'car' EDGE_BASED ... (totalMB:7934, usedMB:3098)
> [car] INFO com.graphhopper.routing.ch.PrepareContractionHierarchies - Creating CH prepare graph, totalMB:7934, usedMB:3099
> [car] INFO com.graphhopper.routing.ch.PrepareContractionHierarchies - Building CH prepare graph, totalMB:7934, usedMB:1381
> [car] INFO com.graphhopper.routing.ch.PrepareContractionHierarchies - Finished building CH prepare graph, took: 41.38786s, totalMB:7934, usedMB:7596
> Exception in thread "main" java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space

Keen to know if there is something else I can do. In the meantime I am going to seek out a different snapshot of the US maps to see if the results are the same.

I tried this myself using -Xmx40g -Xms40g and the import finished after about six hours. I am not sure if it is possible to further restrict the heap space. If you are definitely limited to -Xmx30g, I don’t think there is an easy option to make this possible. You could try setting graph.dataaccess.segment_size: 16777216, but whether or not this helps will depend on how far your memory limit (30g) is from the amount of memory that is actually required. You could also try GraphHopper 1.0, which was released before the memory requirements for CH preparation increased in https://github.com/graphhopper/graphhopper/pull/2132.
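The segment size suggestion would go into the same config file, along these lines (a sketch; the value is in bytes, 16 MiB here):

```yaml
# Sketch: tune the DataAccess segment size during import (value in bytes).
graphhopper:
  graph.dataaccess: MMAP_STORE
  graph.dataaccess.segment_size: 16777216   # 16 MiB per segment, as suggested above
```

Whether this helps is workload-dependent, as noted above; it only changes how the graph storage is chunked, not the total amount of data the CH preparation needs.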

There is still potential to reduce the memory requirements. If you really want to dig into this, you should first produce a heap dump to find out what the memory is used for. It should be mostly due to CHPreparationGraph. This is kept in memory, and memory usage could probably be reduced drastically if we implemented this (or parts of it) on top of DataAccess, so that MMAP could be used for it as well. But I’m pretty sure this won’t be an easy task (and very likely not justified, considering that you would have no problem if you could just use a machine with more memory).

By the way the resulting graph folder for the US import (using the “car” profile with CH and turn costs) is just 9.1G in size. So after running the import (on a machine with sufficient memory) you will be able to start the server using Xmx30g even without using MMAP.

Cheers for the info. I was hoping to be able to import this map locally but my machine is clearly not spec’ed high enough for this. So I have switched to doing the import process on a remote VM with more memory and then exporting the GH artifacts. (This is probably the best long term option for my situation anyway.)

As you say, after doing the import the memory requirements are much lower so there should not be any issues running the server.

Thanks again for your help.

@easbar you wrote

> the resulting graph folder for the US import (using the “car” profile with CH and turn costs) is just 9.1G in size. So after running the import (on a machine with sufficient memory) you will be able to start the server using Xmx30g even without using MMAP

For reference: what is the ratio between the size of the imported data (-gh folder) and the RAM required with RAM_STORE to just run it? 1:1? 2:1?


Using RAM_STORE, the data of the -gh folder will be loaded into memory 1:1, but you need some additional memory for running the server and answering routing queries. The memory overhead for CH queries is very small (basically negligible). It can be higher for Dijkstra/AStar queries (ch/lm.disable=true), where lots of nodes are explored, or for /spt and /isochrone queries, and of course if your server answers very many requests in parallel. That said, I would assume the ratio you are asking about is much closer to 1:1 than to 1:2.
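Concretely, once the graph has been built on the bigger machine, a serving-only setup along these lines (again a sketch against the 2.x config format) loads the -gh folder roughly 1:1 into the heap:

```yaml
# Sketch: serve a pre-built -gh folder. Size the heap above the folder size
# (~9.1G for this US import) plus some headroom for queries.
graphhopper:
  graph.location: us-gh
  graph.dataaccess: RAM_STORE
  # profiles must match the ones used at import time, e.g. the 'car'
  # turn-cost profile from the import config.
```

Pointing the server at this with -Xmx30g should then be comfortable, per the numbers discussed above.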


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.