I self-host GraphHopper for a use case that typically involves shorter routes where users (runners, walkers, and bikers) want to see all of the bumps along the way and know how much they’ve climbed/descended. Currently, I sample the 2D route and look up the elevation at each point along it to build elevation profiles. I’m interested in adding elevation into the GraphHopper data to save on latency and bandwidth, leverage the bridge/tunnel interpolation, and get more accurate total ascent/descent over long routes. I’ve noticed a few issues, though, that make the GraphHopper elevation profiles not quite usable for this purpose:
Elevation data can be “blocky” when going up/down climbs that I know are steady, because the nearest elevation sample is used at each point without interpolating along the slope (for example, this gradual climb/descent looks like this in GraphHopper)
To fix this, I’m thinking of submitting the following PRs, which I’ve prototyped and which appear to solve the problems, but I want to get some feedback on the overall plan first:
Implement bilinear interpolation in HeightTile.java to approximate elevations better between elevation data samples
Post-process all way geometries to insert “sampled” points between any adjacent points that are further apart than a configurable threshold. So if you set the sample distance to 30m and 2 consecutive points are 25m apart it does nothing, but if they are 60m apart, one point would be inserted halfway between them.
Make the Douglas-Peucker polyline simplification implementation elevation-aware so that it still throws out points along a long, straight road in the desert, but keeps the important points defining the elevation profile of a long, straight road over a mountain.
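To make the first proposal concrete, here is a minimal sketch of bilinear interpolation over one grid cell. The class and method names are mine, not GraphHopper’s actual HeightTile API, and it assumes the four surrounding samples and the fractional position within the cell have already been looked up:

```java
// Hypothetical sketch: bilinear interpolation between the four elevation
// samples surrounding a query point. Not GraphHopper's HeightTile API.
public class BilinearDemo {

    /**
     * Height at fractional position (x, y) inside a grid cell, 0 <= x,y <= 1.
     * h00 = sample at (0,0), h10 = (1,0), h01 = (0,1), h11 = (1,1).
     */
    static double interpolate(double h00, double h10, double h01, double h11,
                              double x, double y) {
        // Interpolate along x on the bottom and top edges, then blend along y.
        double bottom = h00 * (1 - x) + h10 * x;
        double top    = h01 * (1 - x) + h11 * x;
        return bottom * (1 - y) + top * y;
    }

    public static void main(String[] args) {
        // Midpoint of a cell with corner heights 100, 110, 120, 130 -> 115.0
        System.out.println(interpolate(100, 110, 120, 130, 0.5, 0.5));
    }
}
```

Nearest-neighbor lookup effectively returns one of the four corner values everywhere in the cell, which is what produces the staircase profiles on steady climbs; blending the corners removes the steps.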
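The second proposal (inserting sampled points into long gaps) can be sketched roughly like this. For simplicity the sketch treats coordinates as planar meters; real code would use great-circle distance on lat/lon, and all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed edge-sampling post-processing step.
// Points are {x, y} pairs in planar meters; this is not GraphHopper code.
public class EdgeSamplingDemo {

    static List<double[]> densify(List<double[]> pts, double maxSpacing) {
        List<double[]> out = new ArrayList<>();
        out.add(pts.get(0));
        for (int i = 1; i < pts.size(); i++) {
            double[] a = pts.get(i - 1), b = pts.get(i);
            double dist = Math.hypot(b[0] - a[0], b[1] - a[1]);
            // Number of extra points so that no gap exceeds maxSpacing.
            int extra = (int) Math.ceil(dist / maxSpacing) - 1;
            for (int k = 1; k <= extra; k++) {
                double t = (double) k / (extra + 1);
                out.add(new double[]{a[0] + (b[0] - a[0]) * t,
                                     a[1] + (b[1] - a[1]) * t});
            }
            out.add(b);
        }
        return out;
    }

    public static void main(String[] args) {
        List<double[]> pts = new ArrayList<>();
        pts.add(new double[]{0, 0});
        pts.add(new double[]{25, 0});  // 25 m gap: left untouched
        pts.add(new double[]{85, 0});  // 60 m gap: one midpoint inserted
        System.out.println(densify(pts, 30).size()); // 4 points total
    }
}
```

This matches the behavior described above: with a 30 m sample distance, a 25 m gap is left alone, while a 60 m gap gets one point inserted halfway (so each sub-gap is 30 m).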
Let me know if you think these proposed changes make sense, what you think should be enabled by default vs. opt-in config, or if you’d rather move this discussion into a github issue.
Yes, I opened that PR; sorry, my GitHub handle and my username here don’t match up! My site uses Mapzen data by downloading the elevation-encoded PNGs and decoding them in the browser (more details here). When I prototyped these proposals with Mapzen SRTM data in GraphHopper, the elevation profiles looked almost exactly the same as what the site shows now.
What does this mean exactly? Will it remove the point only if the elevation delta is less than e.g. 1m? As we apply the Douglas-Peucker on import and on query: is this change only required for the import?
2D Douglas-Peucker removes a point if the line moves less than some threshold (e.g. 1m) away from that point without it; 3D Douglas-Peucker does the same thing, it just does the distance computation in 3D space (see https://github.com/mourner/simplify-js/blob/3d/simplify.js as an example implementation). We could probably have a configurable weight on the elevation parameter; for example, Douglas-Peucker could have a 1m threshold in lat/lon but 5m in elevation, so you can tune increased size vs. resolution. I think we’d want both import and query to use this when elevation is enabled.
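A minimal sketch of what I mean, under the same planar-meters simplification as before: scale the elevation axis by (xyTolerance / eleTolerance) so a single tolerance covers both, i.e. with a 1m lat/lon threshold and a 5m elevation threshold, 5m of elevation deviation counts like 1m of horizontal deviation. All names are illustrative, not GraphHopper’s API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of elevation-aware Douglas-Peucker. Points are
// {x, y, elevation} in meters; the elevation axis is pre-scaled by
// xyTol / eleTol so one squared tolerance applies to the 3D distance.
public class DouglasPeucker3DDemo {

    // Squared distance from point p to segment (a, b) in 3D.
    static double segDistSq(double[] p, double[] a, double[] b) {
        double dx = b[0] - a[0], dy = b[1] - a[1], dz = b[2] - a[2];
        double len2 = dx * dx + dy * dy + dz * dz;
        double t = 0;
        if (len2 > 0) {
            t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy + (p[2] - a[2]) * dz) / len2;
            t = Math.max(0, Math.min(1, t));
        }
        double ex = a[0] + t * dx - p[0];
        double ey = a[1] + t * dy - p[1];
        double ez = a[2] + t * dz - p[2];
        return ex * ex + ey * ey + ez * ez;
    }

    // Classic recursive step: keep the farthest point if it exceeds tolerance.
    static void simplify(double[][] pts, int first, int last, double tolSq, boolean[] keep) {
        double maxSq = -1;
        int index = -1;
        for (int i = first + 1; i < last; i++) {
            double d = segDistSq(pts[i], pts[first], pts[last]);
            if (d > maxSq) { maxSq = d; index = i; }
        }
        if (maxSq > tolSq) {
            keep[index] = true;
            simplify(pts, first, index, tolSq, keep);
            simplify(pts, index, last, tolSq, keep);
        }
    }

    static List<double[]> run(double[][] pts, double xyTol, double eleTol) {
        double w = xyTol / eleTol; // elevation weight
        double[][] scaled = new double[pts.length][];
        for (int i = 0; i < pts.length; i++)
            scaled[i] = new double[]{pts[i][0], pts[i][1], pts[i][2] * w};
        boolean[] keep = new boolean[pts.length];
        keep[0] = keep[pts.length - 1] = true;
        simplify(scaled, 0, pts.length - 1, xyTol * xyTol, keep);
        List<double[]> out = new ArrayList<>();
        for (int i = 0; i < pts.length; i++)
            if (keep[i]) out.add(pts[i]);
        return out;
    }

    public static void main(String[] args) {
        // A straight 200 m road with a 20 m summit at the middle point:
        // 2D simplification would drop the middle point; 3D keeps it.
        double[][] pts = {{0, 0, 0}, {100, 0, 20}, {200, 0, 0}};
        System.out.println(run(pts, 1, 5).size()); // 3: the summit survives
    }
}
```

With the same tolerances, a perfectly flat straight road still collapses to its two endpoints, which is the desert-road behavior we want to preserve.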
Do you know what this means in terms of storage increase for e.g. Germany (or Europe or world wide)?
I remember the edge geometries were bigger when I prototyped this before, but I don’t recall how much. I imagine that would vary quite a bit based on the values chosen for the distance threshold and the 3D simplification factor. Maybe I’ll open an experimental PR to get these both working and post some stats for different parameters so we can decide how/if to proceed?
Alright, I opened https://github.com/graphhopper/graphhopper/pull/1942 for bilinear interpolation and re-prototyped the 3D polyline simplification/edge sampling. It looks like, depending on the threshold you choose for sampling long edges and the weight given to elevation during simplification, the worst-case edge geometry file size increase is 30%, but there are reasonable combinations that keep the increase under 10-15%. I also found that there is a separate improvement we can make to reduce edge geometry file size by 15% across the board, which helps offset this increase.
This sounds really good. And yes, it would be nice to have smaller PRs; then review can be faster (in theory), and certain bigger changes, e.g. those that cause the size increase, could be tested longer before merging.