Creating graph stops without error code

Hi everyone!

I have a question related to building the graph:

I recently made changes to the handleWayTags function in MotorcycleFlagEncoder.java. I first built the graph locally with the netherlands-latest.pbf map, which was successful.

After that, I also wanted to build the graph on a bigger server with the planet.pbf.

Unfortunately, building the graph with the planet.pbf fails.
I can't find any error message in the logs, so it's unclear to me why the graph cannot be built.

Below is the log:
graphhopper-2021-12-16.log (52.8 KB)

Server specs:

  • 8 vCPU
  • 64 GiB RAM

I’ve never had any problems with this before; I’ve built the graph with the planet.pbf several times. But now I’m running into a problem where I don’t know the cause.

I’d love to hear your ideas.

Greetings,
cat

Hi,
I guess your server ran out of memory. You should look into the system log (where you will likely find that the process was killed). Processing the planet with 64 GiB of RAM was never possible for me.

Cheers,
Olaf

Hi Olaf,

How much RAM did you use to process the planet?

btw: the import is definitely possible with 64 GB of RAM, but how much RAM is used depends on the config:
how many profiles, whether LM or CH is enabled (and whether it is edge-based), whether MMAP is used, …

Hi,

See below my config:

graphhopper:
  datareader.file: planet.osm.pbf
  graph.location: graph-gh
  profiles:
    - name: motorcycle
      vehicle: motorcycle
      weighting: custom
      custom_model_file: empty
    - name: bike
      vehicle: bike
      weighting: custom
      custom_model_file: empty
    - name: foot
      vehicle: foot
      weighting: custom
      custom_model_file: empty
  graph.flag_encoders: foot,bike,motorcycle|turn_costs=true
  graph.encoded_values: toll, surface
  graph.elevation.provider: srtm
# PRODUCTION
  graph.elevation.cache_dir: /data/srtm/
  graph.dataaccess: RAM_STORE

server:
  application_connectors:
    - type: http
      port: 8989
      # DEVELOPMENT
#      bind_host: localhost
  admin_connectors:
    - type: http
      port: 8990
      # DEVELOPMENT
#      bind_host: localhost

# PRODUCTION
logging:
  appenders:
    - type: file
      time_zone: UTC+1
      current_log_filename: /data/logs/graphhopper.log
      log_format: "%d{YYYY-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
      archive: true
      archived_log_filename_pattern: /data/logs/graphhopper-%d.log.gz
      archived_file_count: 7
      never_block: true
    - type: console
      time_zone: UTC+1
      log_format: "%d{YYYY-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"

I don’t get any “OutOfMemoryError” exception. I used to get that error when I used too little RAM, but with 64 GiB there is no error message at all; the process just stops by itself.

So you both think the problem is related to my RAM? How much RAM do you advise me to use, given my config?

Sorry if I’m asking simple questions; I’m new to using GraphHopper 🙂 but I’m here to learn!

There are quite a few factors affecting RAM usage (notably whether you are using CH, and the number of encoded values).
For instance, to get my import jobs done I need around 500 GB.
The version of the JVM is important as well, since newer JVMs implement better memory handling/GC strategies. I would advise using at least JDK 11, built from OpenJDK sources with the HotSpot engine.
And don’t forget to supply an -Xmx that fits the available RAM (for instance, if you have 64 GB you could use -Xmx63g on the command line; see the sketch below).
I am pretty sure you have some kind of diagnostics, but you haven’t found the right spot where it is logged 🙂
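For reference, the launch line could then look something like this (just a sketch; the jar name and config file are placeholders for whatever your setup uses):

# assuming OpenJDK 11+ and the GraphHopper web jar; file names are placeholders
java -Xmx63g -Xms63g -jar graphhopper-web-X.Y.jar server config.yml

Setting -Xms equal to -Xmx avoids heap resizing during the import.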

It seems you are not using CH, so IMO this import should even be doable with less than 32 GB of RAM. Did you specify -Xmx as suggested by @OlafFlebbeBosch?

If you enable CH (especially with turn_costs=true) you might need more than 64 GB of RAM for 3 profiles. In general you can reduce RAM usage via:

  graph.dataaccess: MMAP

but this might make the import slower. (And if MMAP is also used later for serving requests it makes the requests slower too, but for serving you could switch back to RAM_STORE.)

it just stops by itself.

See “How to Find Which Process Was Killed by Linux OOM Killer” (Baeldung on Linux) to check whether this is the case.
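For example (assuming a standard Linux setup with root access; the grep patterns are approximate), the kernel log usually records such a kill:

# look for OOM killer activity in the kernel log
dmesg -T | grep -i -E 'killed process|out of memory'
# on systemd-based systems the kernel journal works too:
journalctl -k | grep -i 'oom'

If the import’s java process shows up there, the machine ran out of memory rather than the JVM heap.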

This depends on the JVM and GC you are using. Some GCs tend to use several GB on top of the heap, so you need to leave room for this, otherwise the Linux OOM killer might decide to pick your app 🙂
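If you want to see how large that non-heap part actually is, the JVM’s native memory tracking can help (a sketch: -Xmx58g is just an example value that leaves headroom on a 64 GiB machine, the jar name is a placeholder, and <pid> stands for the GraphHopper process id):

# start with native memory tracking enabled (standard HotSpot flag, small overhead)
java -XX:NativeMemoryTracking=summary -Xmx58g -jar graphhopper-web-X.Y.jar server config.yml
# then, while the import runs, compare heap vs. off-heap usage:
jcmd <pid> VM.native_memory summary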

@karussell You are right, I was only hinting that one should leave some extra space for non-heap usage.

Thank you both! I will try it out. 😄

Hi @karussell and @OlafFlebbeBosch ,

Your feedback really did help, thank you again. The process has now been running for 24+ hours instead of stopping after half an hour.

Yesterday the process stopped. Unfortunately I ran into another error that I have never seen before. See the log below:
graphhopper (1).log (9.8 KB)

What does this error message mean and how can I best solve it?

Can anyone help with the error message? 🥺

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.