
Computation time increases drastically if there are duplicate location ids


I constructed the same problem, with the same number of vehicles and jobs, using two methods:

1: Each job has a unique pickup location id (possibly with the same lat and lng), i.e. there are multiple location ids with the same latLng.

2: Each latLng has a unique location id, so there are multiple jobs with the same pickup location id.

Method 1 takes close to 10 times longer to solve than method 2.

What could be the reason for this?
Shouldn't computation time depend only on the number of jobs and vehicles?
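To make the two construction methods concrete, here is a minimal, self-contained sketch in plain Java (not the solver's API; the job count matches the 120-job experiment below, but the number of distinct coordinates is a made-up assumption). It shows that the number of distinct location ids, and therefore the size of a full cost matrix, differs between the two methods even though the number of jobs is identical:

```java
import java.util.HashSet;
import java.util.Set;

public class LocationIdDemo {

    // Size of a full cost matrix for a given number of distinct location ids.
    static int matrixEntries(int distinctIds) {
        return distinctIds * distinctIds;
    }

    public static void main(String[] args) {
        int jobs = 120;          // as in the 120-job experiment
        int distinctCoords = 30; // assumed number of distinct latLngs

        // Method 1: every job gets its own location id, even when coordinates repeat.
        Set<String> method1Ids = new HashSet<>();
        for (int i = 0; i < jobs; i++) {
            method1Ids.add("loc_" + i);
        }

        // Method 2: one location id per distinct latLng, shared between jobs.
        Set<String> method2Ids = new HashSet<>();
        for (int i = 0; i < jobs; i++) {
            int c = i % distinctCoords;
            method2Ids.add(c + "," + c); // id derived from the coordinate
        }

        // The cost matrix has |ids|^2 entries, so method 1's matrix is far larger.
        System.out.println("method 1 entries: " + matrixEntries(method1Ids.size())); // 14400
        System.out.println("method 2 entries: " + matrixEntries(method2Ids.size())); // 900
    }
}
```

With these assumed numbers, method 1 produces a 120x120 matrix while method 2 only needs 30x30, which is one plausible reason the two formulations are not equivalent in cost.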


Shouldn't computation time depend only on the number of jobs and vehicles?

Yes, exactly. What kind of cost matrix do you use?


Hi @stefan,

I'm using vehicleRoutingCostMatrix. I have created test code that builds two VRPs with the methods described above. For 120 jobs, method 1 takes 253 seconds, while method 2 takes only 115 seconds.




This is surprising. I tried your code with 600 jobs and the difference is drastic: 1284 seconds for method 1 and 209 seconds for method 2.




I think the reason is not the repetition, because even when I set repeatRatio = 0 there is still a huge difference in computation time.

So maybe this is caused by how the location id is set.


Maybe try setting an index on the locations and using the fast cost matrix?


I also suggest what @jie31best proposed: use FastVehicleRoutingCostMatrix, which is array-based, instead of VehicleRoutingCostMatrix, which is map-based. I assume the performance difference is caused by the Map and its size, but I am not sure. Would you mind conducting your experiments with FastVehicleRoutingCostMatrix?
BTW: I don't use the map-based matrix anymore, since its performance is significantly worse than FastVehicleRoutingCostMatrix.
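The map-based vs. array-based difference can be sketched in plain Java (this is an illustration, not the actual library API; the `from + "_" + to` key format and the lookup counts are assumptions). A map lookup pays for string concatenation and hashing on every query, while an array lookup is a direct index, and a solver issues these queries millions of times:

```java
import java.util.HashMap;
import java.util.Map;

public class MatrixLookupDemo {

    // Map-based matrix: keyed by string location ids (hypothetical key format).
    static double mapCost(Map<String, Double> m, String from, String to) {
        return m.get(from + "_" + to);
    }

    // Array-based matrix: indexed by integer location indices.
    static double arrayCost(double[][] m, int from, int to) {
        return m[from][to];
    }

    public static void main(String[] args) {
        int n = 600; // as in the 600-job experiment
        Map<String, Double> mapMatrix = new HashMap<>();
        double[][] arrayMatrix = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double cost = Math.abs(i - j); // dummy cost
                mapMatrix.put("loc_" + i + "_loc_" + j, cost);
                arrayMatrix[i][j] = cost;
            }
        }

        // Time one million lookups of each kind.
        long t0 = System.nanoTime();
        double sum1 = 0;
        for (int k = 0; k < 1_000_000; k++) {
            sum1 += mapCost(mapMatrix, "loc_" + (k % n), "loc_" + ((k * 7) % n));
        }
        long t1 = System.nanoTime();
        double sum2 = 0;
        for (int k = 0; k < 1_000_000; k++) {
            sum2 += arrayCost(arrayMatrix, k % n, (k * 7) % n);
        }
        long t2 = System.nanoTime();

        System.out.println("map lookups:   " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("array lookups: " + (t2 - t1) / 1_000_000 + " ms");
    }
}
```

This would also explain why method 1 is slower with the map-based matrix: more distinct location ids mean a larger map, and larger maps are generally more expensive to fill and query.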