Sanity checking planet.osm build times

Hey all,

Thanks again to the super responsive folks on this board.

I’m building planet.osm GH indexes with car|turn_costs, bike2 and foot. On an EC2 m5d.8xlarge (32 vCPUs, 128 GiB RAM, 2 × 600 GB NVMe SSDs), I give the GH build an 80 GB heap and run everything off the SSDs, and a full world import takes about 24 hours. Is that a reasonable build time? Is there anything else I could be tuning to speed it up?
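For reference, the build is kicked off along these lines — a sketch only, with the jar name, file paths, and the Dropwizard `-Ddw.` override mechanism being my simplifications/assumptions rather than the exact command:

```shell
# Sketch of the import invocation (paths and jar name are placeholders):
# fixed 80 GB heap on the 128 GB box, data and graph folder on the NVMe mount.
JAVA_OPTS="-Xmx80g -Xms80g"
GH_CMD="java $JAVA_OPTS \
  -Ddw.graphhopper.datareader.file=/mnt/nvme/planet-latest.osm.pbf \
  -Ddw.graphhopper.graph.location=/mnt/nvme/graph-cache \
  -jar graphhopper-web.jar server config.yml"
echo "$GH_CMD"
```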

I tried to set “” but that appeared to wedge the build in an out-of-memory state. That’s the only major optimization I’ve tried. Experimenting with these variables is a bit painful because it takes so long to find out whether they’re going to work.

I get that building three routing graphs for the entire planet is a ton of work. If 24 hours is what it takes, so be it. Just wondering if it can get any faster.

Here’s my config.yml:

  # OpenStreetMap input file
  # datareader.file: some.pbf

  ##### Vehicles #####

  # More options: foot,bike,bike2,mtb,racingbike,motorcycle (comma separated)
  # bike2 takes elevation data into account (like up-hill is slower than down-hill) and requires enabling graph.elevation.provider below.
  graph.flag_encoders: car|turn_costs=true,foot,bike2

  # Enable turn restrictions for car or motorcycle.
  # graph.flag_encoders: car|turn_costs=true

  # Add additional information to every edge. Used for path details.
  # If road_environment is added and elevation is enabled then also a tunnel and bridge interpolation is done, see #798.
  # More options are: surface,max_width,max_height,max_weight,max_axle_load,max_length,hazmat,hazmat_tunnel,hazmat_water,toll,track_type
  graph.encoded_values: road_class,road_class_link,road_environment,max_speed,road_access

  ##### Elevation #####

  # To populate your graph with elevation data use SRTM, default is noop (no elevation)
  graph.elevation.provider: srtm

  # default location for cache is /tmp/srtm
  graph.elevation.cache_dir: ./srtmprovider/

  # If you have a slow disk or plenty of RAM change the default MMAP to:
  # graph.elevation.dataaccess: RAM_STORE

  #### Speed, hybrid and flexible mode ####

  # By default the speed mode with the 'fastest' weighting is used. Internally a graph preparation via
  # contraction hierarchies (CH) is done to speed routing up. This requires more RAM/disc space for holding the
  # graph but less for every request. You can also setup multiple weightings, by providing a comma separated list.
  # To enable finite u-turn costs use something like fastest|u_turn_costs=30, where 30 are the u-turn costs in seconds
  # (given as an integer). Note that since the u-turn costs are given in seconds, the weighting you use should also
  # calculate the weight in seconds. The u-turn costs will only be applied for edge_based, see below.
  prepare.ch.weightings: fastest

  # To enable turn-costs in speed mode (contraction hierarchies) edge-based graph traversal and a more elaborate
  # pre-processing is required. Using this option you can either turn off the edge-based pre-processing (choose 'off'),
  # use edge-based pre-processing for all encoders/vehicles with turn_costs=true (choose 'edge_or_node') or use node-based
  # pre-processing for all encoders/vehicles and additional edge-based pre-processing for all encoders/vehicles with
  # turn_costs=true (choose 'edge_and_node').
  prepare.ch.edge_based: edge_or_node

  # Disable the speed mode. Should be used only with routing.max_visited_nodes or when the hybrid mode is enabled instead
  # prepare.ch.weightings: no

  # To make CH preparation faster for multiple flagEncoders you can increase the default threads if you have enough RAM.
  # Change this setting only if you know what you are doing and if the default worked for you.
  # prepare.ch.threads: 1

  # The hybrid mode can be enabled with
  # prepare.lm.weightings: fastest

  # To tune the performance vs. memory usage for the hybrid mode use
  # prepare.lm.landmarks: 16

  # Make landmark preparation parallel if you have enough RAM. Change this only if you know what you are doing and if the default worked for you.
  # prepare.lm.threads: 1

  # avoid being stuck in a (oneway) subnetwork
  prepare.min_network_size: 200
  prepare.min_one_way_network_size: 200

  ##### Routing #####

  # You can define the maximum visited nodes when routing. This may result in not found connections if there is no
  # connection between two points within the given visited nodes. The default is Integer.MAX_VALUE. Useful for flexibility mode
  # routing.max_visited_nodes: 1000000

  # If enabled, allows a user to run flexibility requests even if speed mode is enabled. Every request then has to include a hint ch.disable=true.
  # Attention, non-CH route calculations take way more time and resources, compared to CH routing.
  # A possible attacker might exploit this to slow down your service. Only enable it if you need it and with routing.maxVisitedNodes
  # routing.ch.disabling_allowed: true

  # If enabled, allows a user to run flexible mode requests even if the hybrid mode is enabled. Every such request then has to include a hint routing.lm.disable=true.
  # routing.lm.disabling_allowed: true

  # Control how many active landmarks are picked per default, this can improve query performance
  # routing.lm.active_landmarks: 4

  # You can limit the max distance between two consecutive waypoints of flexible routing requests to be less than or
  # equal to the given distance in meters. The default is 1000 km.
  routing.non_ch.max_waypoint_distance: 1000000

  ##### Storage #####

  # configure the memory access, use RAM_STORE for well equipped servers (default and recommended)
  graph.dataaccess: RAM_STORE

  # will write way names in the preferred language (language code as defined in ISO 639-1 or ISO 639-2):
  # datareader.preferred_language: en

  # Sort the graph after import to make requests roughly ~10% faster. Note that this requires significantly more RAM on import.
  # graph.do_sort: true

  ##### Spatial Rules #####
  # Spatial Rules require some configuration and only work with the DataFlagEncoder.

  # Spatial Rules require you to provide Polygons in which the rules are enforced
  # The line below contains the default location for these rules
  # spatial_rules.location: core/files/spatialrules/countries.geo.json

  # You can define the maximum BBox for which spatial rules are loaded.
  # You might want to do this if you are only importing a small area and don't need rules for other countries.
  # Having less rules, might result in a smaller graph. The line below contains the world-wide bounding box, uncomment and adapt to your need.
  # spatial_rules.max_bbox: -180,180,-90,90

# Uncomment the following to point /maps to the source directory in the filesystem instead of
# the Java resource path. Helpful for development of the web client.
# Assumes that the web module is the working directory.
# assets:
#  overrides:
#    /maps: web/src/main/resources/assets/

# Dropwizard server configuration
server:
  application_connectors:
  - type: http
    port: 8989
    # for security reasons bind to localhost
    bind_host: localhost
  request_log:
      appenders: []
  admin_connectors:
  - type: http
    port: 8990

logging:
  level: INFO
  loggers:
    "com.graphhopper.resources": WARN

I was doing pretty well with ~24-hour build times, but adding car|turn_costs + edge_or_node to this config means I now have a build that’s been running for 36+ hours and still isn’t done.

Is there any way to tell how far along this build is? Would throwing more memory at it help?

INFO [2020-01-27 22:32:10,247] nodes: 46 936 118, shortcuts: 629 530 907, updates: 0, checked-nodes: 356 949 221, t(total): 117187.15, t(period): 10187.20, t(lazy): 85105.88, t(neighbor): 0.00, t(contr): 21404.39, t(other) : 489.66, dijkstra-ratio: 83.18%, sc-handler-count: time: 90783.06s, nodes-handled: 587 239 046, loopsAvoided: 0, sc-handler-contract: time: 21386.71s, nodes-handled: 183 353 707, loopsAvoided: 0, last batch: limit-exhaustion: 28.7 %, avg-settled: 43.5, avg-max-settled: 151.8, avg-polled-edges: 48.3 total: limit-exhaustion: 19.4 %, avg-settled: 20.7, avg-max-settled: 106.7, avg-polled-edges: 22.7, totalMB:81920, usedMB:70642

Yes, this is expected. Unfortunately, edge-based preparation is still much slower than node-based.

The first entry in the log lines (nodes: 46 936 118) is counting down to zero, so you can roughly gauge progress by how much it dropped since the last log line. However, this is not linear: it gets slower towards the end of the preparation, so it’s hard to give a real estimate.
You can take a look at this issue for some benchmarks for Europe, which should give you an idea how much longer edge-based preparation takes compared to node-based.
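If you want a rough number anyway, you can extrapolate from two log lines. A small sketch (the sample lines and the 10 000 s spacing are made up, and since the contraction rate degrades towards the end, treat the result as a lower bound):

```python
import re

# Back-of-envelope ETA from two CH-preparation log lines taken 10 000 s apart.
# Sample lines and numbers below are invented for illustration.
def nodes_remaining(log_line):
    # "nodes: 46 936 118, ..." -> 46936118 (digit groups separated by spaces)
    match = re.search(r"nodes:\s*([\d ]+?),", log_line)
    return int(match.group(1).replace(" ", ""))

earlier = "INFO nodes: 50 000 000, shortcuts: 600 000 000, ..."
later = "INFO nodes: 46 936 118, shortcuts: 629 530 907, ..."
elapsed_s = 10_000  # wall-clock seconds between the two lines (assumed)

rate = (nodes_remaining(earlier) - nodes_remaining(later)) / elapsed_s
eta_hours = nodes_remaining(later) / rate / 3600
print(f"optimistic estimate: {eta_hours:.1f} h remaining")
# prints: optimistic estimate: 42.6 h remaining
```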

As long as you are not swapping to disk, I do not think so. However, processor speed can make a difference (maybe a factor of two). If you are experimenting here, you probably want to do it with a smaller map (a single continent or so) first. It should be possible to roughly extrapolate the expected preparation time from a continent to a planet-wide import (the preparation time should be about proportional to the number of nodes or edges of the road network).
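The extrapolation is simple proportionality; as a sketch, with all three input numbers being placeholders rather than measured benchmarks:

```python
# Preparation time scales roughly with road-network size, so scale a measured
# continent-sized preparation up to the planet. Replace all three inputs with
# your own measurements; these values are placeholders.
europe_edges = 100_000_000       # from your continent test import (assumed)
planet_edges = 250_000_000       # from the planet import log (assumed)
europe_prep_hours = 12.0         # measured on the same machine (assumed)

planet_prep_hours = europe_prep_hours * planet_edges / europe_edges
print(f"expected planet-wide preparation: ~{planet_prep_hours:.0f} h")
# prints: expected planet-wide preparation: ~30 h
```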
