Deployed Graphhopper Instance Crashes Every Few Days

I expect this is an easy fix, and a novice question…

So I deployed GraphHopper to an Ubuntu 20.04 x64 server and it runs well, but it seems to crash every few days. I followed the deployment documentation, but I'm new to server-side development and am likely overlooking something elementary. What could be causing the crashes, and is there a way to run the startup command automatically after a crash? The log files don't seem to contain any detail on the crashes; they only log the route requests, which all look normal.


  • Input region is an 11 MB, city-sized .osm.pbf file
  • Using GraphHopper 1.0
  • Graphs were built on my local computer and uploaded to the server
  • My server specs:
    • Ubuntu 20.04 x64
    • 1vCore
    • RAM: 1024MB
    • Storage: 25GB
    • Bandwidth: usually less than ~1 GB out of 1000 GB per month

When starting GraphHopper I run:

export JAVA_OPTS="-server -Xconcurrentio -Xmx512m -Xms512m"
./ -a web -i ./data/TucsonMetro.osm.pbf -o ./data/TucsonMetro-gh -d --port 8989

My config.yml file:


  # OpenStreetMap input file PBF or XML, can be changed via command line -Ddw.graphhopper.datareader.file=some.pbf
  datareader.file: ""
  # Local folder used by graphhopper to store its data
  graph.location: graph-cache

  ##### Vehicles #####

  # More options: foot,hike,bike,bike2,mtb,racingbike,motorcycle,car4wd,wheelchair (comma separated)
  # bike2 takes elevation data into account (like up-hill is slower than down-hill) and requires enabling graph.elevation.provider below.
  graph.flag_encoders: mtb|turn_costs=true, car|turn_costs=true #,  bike|turn_costs=true

  # Enable turn restrictions for car or motorcycle.
  # graph.flag_encoders: car|turn_costs=true

  # Add additional information to every edge. Used for path details (#1548), better instructions (#1844) and tunnel/bridge interpolation (#798).
  # Default values are: road_class,road_class_link,road_environment,max_speed,road_access (since #1805)
  # More are: surface,max_width,max_height,max_weight,max_axle_load,max_length,hazmat,hazmat_tunnel,hazmat_water,toll,track_type
  # graph.encoded_values: surface,toll,track_type

  ##### Routing Profiles #####

  # Routing can be done for the following list of profiles. Note that it is required to specify all the profiles you
  # would like to use here. The fields of each profile are as follows:
  # - name (required): a unique string identifier for the profile
  # - vehicle (required): refers to the `graph.flag_encoders` used for this profile
  # - weighting (required): the weighting used for this profile, e.g. fastest,shortest or short_fastest
  # - turn_costs (true/false, default: false): whether or not turn restrictions should be applied for this profile.
  #   this will only work if the `graph.flag_encoders` for the given `vehicle` is configured with `|turn_costs=true`.
  # Depending on the above fields there are other properties that can be used, e.g.
  # - distance_factor: 0.1 (can be used to fine tune the time/distance trade-off of short_fastest weighting)
  # - u_turn_costs: 60 (time-penalty for doing a u-turn in seconds (only possible when `turn_costs: true`)).
  #   Note that since the u-turn costs are given in seconds the weighting you use should also calculate the weight
  #   in seconds, so for example it does not work with shortest weighting.
  # - custom_model_file: when you specified "weighting: custom" you need to set a yaml file that defines the custom_model.
  #   If you want an empty model you can also set "custom_model_file: empty".
  #   For more information about profiles and especially custom profiles have a look into the documentation
  #   at docs/core/ or the examples under web/src/test/resources/com/graphhopper/http/resources/ or
  #   the CustomWeighting class for the raw details.
  # To prevent long running routing queries you should usually enable either speed or hybrid mode for all the given
  # profiles (see below). Otherwise you should at least limit the number of `routing.max_visited_nodes`.
  profiles:
    - name: pathways1
      vehicle: mtb
      weighting: custom
      custom_model_file: pathways1.yml

    - name: pathways2
      vehicle: mtb
      weighting: custom
      custom_model_file: pathways2.yml

    - name: pathways3
      vehicle: mtb
      weighting: custom
      custom_model_file: pathways3.yml

    - name: pathways4
      vehicle: mtb
      weighting: custom
      custom_model_file: pathways4.yml

    - name: pathways5
      vehicle: mtb
      weighting: custom
      custom_model_file: pathways5.yml

    - name: pathways6
      vehicle: mtb
      weighting: shortest

    - name: co2car
      vehicle: car
      weighting: short_fastest

  #  - name: car_with_turn_costs
  #    vehicle: car
  #    weighting: short_fastest
  #    distance_factor: 0.1
  #    turn_costs: true
  #    u_turn_costs: 60

  # Speed mode:
  # It's possible to speed up routing by doing a special graph preparation (Contraction Hierarchies, CH). This requires
  # more RAM/disk space for holding the prepared graph but also means less memory usage per request. Using the following
  # list you can define for which of the above routing profiles such preparation shall be performed. Note that to support
  # profiles with `turn_costs: true` a more elaborate preparation is required (longer preparation time and more memory
  # usage) and the routing will also be slower than with `turn_costs: false`.
  profiles_ch:
    - profile: pathways1
    - profile: pathways2
    - profile: pathways3
    - profile: pathways4
    - profile: pathways5
    - profile: pathways6
    - profile: co2car
  #   - profile: car_with_turn_costs

  # Hybrid mode:
  # Similar to speed mode, the hybrid mode (Landmarks, LM) also speeds up routing by calculating auxiliary data
  # in advance. It's not as fast as speed mode, but more flexible.
  # Advanced usage: It is possible to use the same preparation for multiple profiles which saves memory and preparation
  # time. To do this use e.g. `preparation_profile: my_other_profile` where `my_other_profile` is the name of another
  # profile for which an LM profile exists. Important: This only will give correct routing results if the weights
  # calculated for the profile are equal or larger (for every edge) than those calculated for the profile that was used
  # for the preparation (`my_other_profile`)
  profiles_lm: []

  ##### Elevation #####

  # To populate your graph with elevation data use SRTM, default is noop (no elevation). Read more about it in docs/core/
  # graph.elevation.provider: srtm

  # default location for cache is /tmp/srtm
  # graph.elevation.cache_dir: ./srtmprovider/

  # If you have a slow disk or plenty of RAM change the default MMAP to:
  # graph.elevation.dataaccess: RAM_STORE

  # To enable bilinear interpolation when sampling elevation at points (default uses nearest neighbor):
  # graph.elevation.interpolate: bilinear

  # To increase elevation profile resolution, use the following two parameters to tune the extra resolution you need
  # against the additional storage space used for edge geometries. You should enable bilinear interpolation when using
  # these features (see #1953 for details).
  # - first, set the distance (in meters) at which elevation samples should be taken on long edges
  # graph.elevation.long_edge_sampling_distance: 60
  # - second, set the elevation tolerance (in meters) to use when simplifying polylines since the default ignores
  #   elevation and will remove the extra points that long edge sampling added
  # graph.elevation.way_point_max_distance: 10

  #### Speed, hybrid and flexible mode ####

  # To make CH preparation faster for multiple profiles you can increase the default threads if you have enough RAM.
  # Change this setting only if you know what you are doing and if the default worked for you.
  # prepare.ch.threads: 1

  # To tune the performance vs. memory usage for the hybrid mode use
  # prepare.lm.landmarks: 16

  # Make landmark preparation parallel if you have enough RAM. Change this only if you know what you are doing and if
  # the default worked for you.
  # prepare.lm.threads: 1

  # In many cases the road network consists of independent components without any routes going in between. In
  # the most simple case you can imagine an island without a bridge or ferry connection. The following parameter
  # allows setting a minimum size (number of nodes) for such detached components. This can be used to reduce the number
  # of cases where a connection between locations might not be found.
  prepare.min_network_size: 200

  ##### Routing #####

  # You can define the maximum visited nodes when routing. This may result in not found connections if there is no
  # connection between two points within the given visited nodes. The default is Integer.MAX_VALUE. Useful for flexibility mode
  # routing.max_visited_nodes: 1000000

  # If enabled, allows a user to run flexibility requests even if speed mode is enabled. Every request then has to include a hint ch.disable=true.
  # Attention, non-CH route calculations take way more time and resources, compared to CH routing.
  # A possible attacker might exploit this to slow down your service. Only enable it if you need it, and combine it with routing.max_visited_nodes.
  # routing.ch.disabling_allowed: true

  # If enabled, allows a user to run flexible mode requests even if the hybrid mode is enabled. Every such request then has to include a hint routing.lm.disable=true.
  # routing.lm.disabling_allowed: true

  # Control how many active landmarks are picked per default, this can improve query performance
  # routing.lm.active_landmarks: 4

  # You can limit the max distance between two consecutive waypoints of flexible routing requests to be less or equal
  # the given distance in meter. Default is set to 1000km.
  routing.non_ch.max_waypoint_distance: 1000000

  ##### Storage #####

  # configure the memory access, use RAM_STORE for well equipped servers (default and recommended)
  graph.dataaccess: RAM_STORE

  # will write way names in the preferred language (language code as defined in ISO 639-1 or ISO 639-2):
  # datareader.preferred_language: en

  # Sort the graph after import to make requests roughly ~10% faster. Note that this requires significantly more RAM on import.
  # graph.do_sort: true

  ##### Spatial Rules #####
  # Spatial Rules require some configuration and only work with the DataFlagEncoder.

  # Spatial Rules require you to provide Polygons in which the rules are enforced
  # The line below contains the default location for the files which define these borders
  # spatial_rules.borders_directory: core/files/spatialrules

  # You can define the maximum BBox for which spatial rules are loaded.
  # You might want to do this if you are only importing a small area and don't need rules for other countries.
  # Having less rules, might result in a smaller graph. The line below contains the world-wide bounding box, uncomment and adapt to your need.
  # spatial_rules.max_bbox: -180,180,-90,90

# Uncomment the following to point /maps to the source directory in the filesystem instead of
# the Java resource path. Helpful for development of the web client.
# Assumes that the web module is the working directory.
# assets:
#  overrides:
#    /maps: web/target/classes/assets/

# Dropwizard server configuration
server:
  application_connectors:
    - type: http
      port: 8989
      # for security reasons bind to localhost
#      bind_host: localhost
  request_log:
    appenders: []
  admin_connectors:
    - type: http
      port: 8990
#      bind_host: localhost
# See
logging:
  appenders:
    - type: file
      time_zone: UTC
      current_log_filename: logs/graphhopper.log
      log_format: "%d{YYYY-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
      archive: true
      archived_log_filename_pattern: ./logs/graphhopper-%d.log.gz
      archived_file_count: 30
      never_block: true
    - type: console
      time_zone: UTC
      log_format: "%d{YYYY-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"

Thanks in advance!

Is the server/OS/Ubuntu crashing or just the JVM (Java Virtual Machine)?

Just the Java Virtual Machine

JVM crashes are often due to insufficient memory. You can try increasing your Xmx setting and also make sure the JVM creates a heap dump (using -XX:+HeapDumpOnOutOfMemoryError) that you can use to analyze what took all the memory at the time it crashed. You can probably find much better resources about how to analyze JVM crashes online. Right now I do not think this problem is specific for GraphHopper.
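For example, the flags could be combined with the JAVA_OPTS from the start command like this (a sketch: the Xmx value and the ./dumps path are just placeholders to adjust):

```shell
# Sketch: keep the heap bounded and write a heap dump when an OutOfMemoryError occurs.
# ./dumps is a hypothetical directory; create it first or pick your own path.
mkdir -p ./dumps
export JAVA_OPTS="-server -Xmx512m -Xms512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./dumps"
echo "$JAVA_OPTS"
```

The resulting `.hprof` file can then be opened in a heap-analysis tool such as Eclipse MAT or VisualVM.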

Do you have any output that you can post here? “it crashes” is pretty broad.

I know, I'm sorry; I don't know where to look. GraphHopper should be serving my web app right now. I wasn't able to find anything informative in the server's graphhopper/logs/graphhopper.log file. However, when I logged into my hosting provider's site (where my Linux server running the GraphHopper instance lives) and opened their in-browser console terminal, I saw what looks like a list of out-of-memory crashes (screenshot attached below). I don't normally see this information when I ssh into the server from my computer's command line, but it showed things such as:

Pathways-Routing login: [ 1990.048046] Out of Memory: Killed process 1204 (java) 
total-vm:2888576kB anon-rss:443048kB, file-rss:0kB, 
shmem-rss:0kB, UID:0 pgtables:1136kB oom_score_adj:0
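(For reference, messages like this come from the kernel's OOM killer and can also be retrieved over ssh from the kernel log; a sketch, where `|| true` simply keeps the commands from failing when nothing matches:)

```shell
# Sketch: search the kernel ring buffer and the systemd journal for OOM-killer events.
dmesg -T 2>/dev/null | grep -i "killed process" || true
journalctl -k --since "7 days ago" 2>/dev/null | grep -i "out of memory" || true
```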

After reading through this old issue thread on GitHub, I'm going to see if I can figure out how to install monit on my server and use it to automatically restart GraphHopper after a crash.

If you have any thoughts on this I’d appreciate it – I’m self-teaching my way through all of this! Ugh!
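As an alternative to monit, a systemd unit can restart the process automatically after a crash. A sketch follows; the /home/gh directory and the graphhopper.sh script name are assumptions to adjust to your setup. It writes the unit file to the current directory so it can be reviewed before copying it to /etc/systemd/system:

```shell
# Hypothetical systemd unit: every path and the script name below are assumptions.
cat > graphhopper.service <<'EOF'
[Unit]
Description=GraphHopper routing server
After=network.target

[Service]
WorkingDirectory=/home/gh
ExecStart=/home/gh/graphhopper.sh -a web -i ./data/TucsonMetro.osm.pbf -o ./data/TucsonMetro-gh --port 8989
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Then (not run here):
#   sudo cp graphhopper.service /etc/systemd/system/
#   sudo systemctl daemon-reload && sudo systemctl enable --now graphhopper
```

`Restart=on-failure` is what gives the automatic restart after an OOM kill.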

I guess 1024 MB is just not a whole lot of memory for a Linux server running GraphHopper. You should check the memory usage of the server before GraphHopper is started. The memory GraphHopper is able to use is limited by the Xmx flag of the java command that starts it, and the larger the map, the more memory GraphHopper will use. Do you use a map for all of Arizona or just Tucson? You can see roughly how much memory GraphHopper will need for the map by running the import and then checking the size of the graph-cache folder (or arizona-gh or whatever it is called) on disk. Still, your easiest option might be to increase the size of the server and see if the problem goes away.

Also, for such tiny instances make sure you use a special GC like ParallelGC or SerialGC that does not use too much (native) memory. The default for JDK 11 is G1, which can use more memory than the others. Please note that this change might slow down your request speed (a lot).
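For instance, the serial collector can be selected via JAVA_OPTS (a sketch; the Xmx value is only an illustration for a 1 GB machine):

```shell
# Sketch: use the serial collector to reduce the JVM's native-memory overhead.
export JAVA_OPTS="-server -Xmx400m -Xms400m -XX:+UseSerialGC"
echo "$JAVA_OPTS"
```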

Input region was 11MB city-sized osm.pbf file

You can try using a very small Xmx setting as @easbar proposed (-Xmx100M or similar)

Also be aware of non-CH routing requests: any heavier request could take down your server if you do not limit the visited nodes, with something like:

routing.max_visited_nodes: 500000

Oh, I missed that you already said it was only an 11 MB map file. But you have quite a lot of profiles, and each profile uses memory. You should really check the size of the graph-cache folder to see how much memory is needed for the graph and the CH profiles.
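Checking the folder size is a one-liner (the TucsonMetro-gh name is taken from the start command above; adjust the path if yours differs):

```shell
# Sketch: the on-disk size of the graph folder approximates what GraphHopper
# loads into RAM with graph.dataaccess: RAM_STORE.
du -sh ./data/TucsonMetro-gh 2>/dev/null || echo "adjust the path to your graph folder"
```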

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.
