Hiyas, I always seem to be trying to solve large problems with many stops. Creating a symmetric distance matrix is always a pain point and I’m trying to think up ways to minimize it.
I’m talking problems with more than 1000 points.
Because there are lots of points, more than likely they are relatively clumped together. I was thinking of doing an initial pass to skip calculating any distance greater than, say, 300m.
So if I assess points 1 through 1000 in an initial pass and find that (in the unlikely event) p1 to p600 and p1 to p700 are both greater than 300m, then I don’t need to work out p600 to p1 or p700 to p1 either.
That single pass removes two distance checks from GH (you would just set those distances to 9999, meaning jsprit won’t route them due to the high cost).
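One cheap way to do such an initial pass without calling the router at all would be a straight-line (haversine) check: the road distance can never be shorter than the great-circle distance, so any pair further than 300m as the crow flies can be rejected safely, and only the survivors need a real routing call. A minimal sketch of that idea (the point set and cutoff here are just illustrative, not from a real data set):

```java
import java.util.ArrayList;
import java.util.List;

public class HaversineFilter {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Great-circle distance in metres between two lat/lon points.
    static double haversineM(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // Returns the index pairs (i, j) with i < j that survive the cutoff.
    // Road distance >= straight-line distance, so rejection here is always safe.
    static List<int[]> candidatePairs(double[][] latLon, double cutoffM) {
        List<int[]> keep = new ArrayList<>();
        for (int i = 0; i < latLon.length; i++)
            for (int j = i + 1; j < latLon.length; j++)
                if (haversineM(latLon[i][0], latLon[i][1], latLon[j][0], latLon[j][1]) <= cutoffM)
                    keep.add(new int[]{i, j});
        return keep;
    }

    public static void main(String[] args) {
        double[][] points = {
            {-41.22179883, 174.806107},
            {-41.22195814, 174.8058141},
            {-41.30000000, 174.9000000}  // well over 300 m from the others
        };
        System.out.println(candidatePairs(points, 300).size()); // prints 1
    }
}
```

The nice property is that the pre-pass is pure arithmetic, so it costs microseconds per pair instead of a Dijkstra run.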
I have assessed this in the past; for 1021 delivery points the results were as follows:

Total connections: 1021 x 1021 = 1,042,441
Execution time: 443.59 seconds
Used < 150m: 37,686
Used >= 150m, < 200m: 19,014
Used >= 200m, < 250m: 24,022
Used >= 250m, < 300m: 28,406
Rejected >= 300m: 933,313
It would reject quite a few point-to-point calculations.
The initial pass would only need to work out distances in one direction, because it is unlikely that a pair over 300m in one direction would be significantly shorter in the other, though that assumption would need checking.
Just more thinking out loud, but wondered if it makes sense? Or is there a better way?
Is costMatrixBuilder.addTransportDistance(i, n, distance) thread safe?
I find that it gets slower and slower when called from multiple threads at once.
If it isn’t, is the workaround just to populate a separate Collections.synchronizedList(new ArrayList<>()) from the worker threads and build the matrix at the end?
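If it does turn out not to be thread safe, one workaround along those lines is to have each worker drop its results into a concurrent collection and then feed the builder single-threaded at the end. A self-contained sketch of that pattern, with the routing call and the jsprit builder both stubbed out (MatrixEntry and fakeDistance are placeholders, not real API):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MatrixCollector {
    // Placeholder for one (from, to, distance) result; the real code would later
    // call costMatrixBuilder.addTransportDistance(from, to, distance) with these.
    record MatrixEntry(int from, int to, double distance) {}

    // Stand-in for a GraphHopper routing call.
    static double fakeDistance(int a, int b) { return Math.abs(a - b) * 10.0; }

    static int collect(int n) throws InterruptedException {
        ConcurrentLinkedQueue<MatrixEntry> results = new ConcurrentLinkedQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Workers compute distances in parallel and only ever touch the queue.
        for (int i = 0; i < n; i++) {
            final int from = i;
            pool.submit(() -> {
                for (int to = from + 1; to < n; to++)
                    results.add(new MatrixEntry(from, to, fakeDistance(from, to)));
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        // Single-threaded phase: this is where the (possibly non-thread-safe)
        // matrix builder would consume the queue.
        return results.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(collect(50)); // prints 1225, i.e. n*(n-1)/2 pairs
    }
}
```

The key point is that only the cheap queue append is contended; the builder itself never sees more than one thread.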
Hi Greg,
Did you have any progress in calculating large distance matrices?
One way to go would be segmenting the points using knowledge of the domain and only calculating distances within the same segment. So if you have 5 segments, then for n points the number of connections is 5 x (n/5)^2 instead of n^2. (For 1021 points the connection count is 208,488, which is 20% of the original size.)
Also, execution time drops with this approach, since you are guiding the optimisation to solve several smaller problems at the same time.
I don’t have a solution as yet, but I feel the answer is to use the low-level API.
I have not had the time to test and implement.
Your suggestion makes sense, but I do not have a way to segment, as all the points are quite clustered together. This means it isn’t a fault in GraphHopper taking time to calculate long paths between points; it is purely a volume issue.
Open Door Logistics implements a solution for calculating matrices of 1000+ points quite well, but its developer has much more knowledge in this area than I do.
Hiya, so here is where I’m currently at - This processes 896 relations in 3 seconds using a single thread.
Not a perfect implementation, I suppose, but it is a quick way to do an initial assessment: if, say, p1 -> p442 is less than 300m then use it in the matrix; if not, exclude it.
This means it takes ~4 seconds (a guesstimate) to go from 1,042,441 pairs to 109,128 pairs.
Note that createLargePointSet is cut down in size to reduce post size.
import com.graphhopper.routing.Dijkstra;
import com.graphhopper.routing.Path;
import com.graphhopper.routing.querygraph.QueryGraph;
import com.graphhopper.routing.util.EdgeFilter;
import com.graphhopper.routing.util.EncodingManager;
import com.graphhopper.routing.util.FlagEncoder;
import com.graphhopper.routing.util.FlagEncoderFactory;
import com.graphhopper.routing.weighting.Weighting;
import com.graphhopper.storage.GraphBuilder;
import com.graphhopper.storage.GraphHopperStorage;
import static com.graphhopper.routing.util.TraversalMode.EDGE_BASED;
import com.graphhopper.routing.weighting.ShortestWeighting;
import com.graphhopper.storage.index.LocationIndexTree;
import com.graphhopper.storage.index.QueryResult;
import com.graphhopper.util.shapes.GHPoint;
public class Tester {

    private static FlagEncoder carEncoder;
    private static EncodingManager em;
    private static Weighting defaultWeighting;

    public static void main(String[] args) throws Exception {
        testGraph();
    }

    public static void testGraph() {
        GHPoint[] points = createLargePointSet();
        em = EncodingManager.create(FlagEncoderFactory.FOOT + "|block_private=false," + FlagEncoderFactory.BIKE + "," + FlagEncoderFactory.BIKE2 + "," + FlagEncoderFactory.CAR);
        GraphHopperStorage graph = createStorage(em);
        graph.loadExisting();

        LocationIndexTree index = new LocationIndexTree(graph, graph.getDirectory());
        index.prepareIndex();

        carEncoder = em.getEncoder("car");
        defaultWeighting = new ShortestWeighting(carEncoder);
        graph.freeze();
        Weighting w = defaultWeighting;

        long startMillis = System.currentTimeMillis();
        System.out.println(points.length + " Points");
        for (int i = 0; i < points.length - 1; i++) {
            QueryResult fromSnap = index.findClosest(points[i].getLat(), points[i].getLon(), EdgeFilter.ALL_EDGES);
            QueryResult toSnap = index.findClosest(points[i + 1].getLat(), points[i + 1].getLon(), EdgeFilter.ALL_EDGES);
            QueryGraph queryGraph = QueryGraph.create(graph, fromSnap, toSnap);
            Path path = new Dijkstra(queryGraph, w, EDGE_BASED).calcPath(fromSnap.getClosestNode(), toSnap.getClosestNode());
            // Keep the pair only when the routed distance is under the 300m cutoff;
            // rejected pairs would get the 9999 sentinel in the final matrix.
            if (path.isFound() && path.getDistance() < 300) {
                // add (i, i + 1, path.getDistance()) to the candidate list here
            }
        }
        long endMillis = System.currentTimeMillis();
        System.out.println("Done in " + (endMillis - startMillis) / 1000 + " seconds");
    }

    private static GraphHopperStorage createStorage(EncodingManager em) {
        return new GraphBuilder(em).setRAM("C:\\Program Files\\ETL Data Load Processes\\nz_latest_gh\\", true).set3D(true).build();
    }

    public static GHPoint[] createLargePointSet() {
        return new GHPoint[] {
            new GHPoint(-41.22179883, 174.806107),
            new GHPoint(-41.22195814, 174.8058141),
            new GHPoint(-41.22193542, 174.805821),
            new GHPoint(-41.22190838, 174.8058292)
        };
    }
}
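Once the pre-pass has decided which pairs to keep, the matrix itself can be filled with the 9999 sentinel for the rejected pairs, so jsprit never routes them. A self-contained sketch using a plain 2D array (a stand-in for the real costMatrixBuilder.addTransportDistance calls; the measured values below are made up):

```java
public class SentinelMatrix {
    static final double REJECTED = 9999.0; // high cost so the solver avoids these legs

    // Fill a symmetric matrix: measured distance if under the cutoff, sentinel otherwise.
    static double[][] build(double[][] measured, double cutoff) {
        int n = measured.length;
        double[][] m = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                m[i][j] = (i == j) ? 0.0
                        : (measured[i][j] < cutoff ? measured[i][j] : REJECTED);
        return m;
    }

    public static void main(String[] args) {
        double[][] measured = {
            {0, 120, 950},
            {120, 0, 430},
            {950, 430, 0}
        };
        double[][] m = build(measured, 300);
        System.out.println(m[0][1] + " " + m[0][2]); // prints 120.0 9999.0
    }
}
```

Because the matrix is symmetric, the real code would only need to compute each pair once and write it in both directions.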