Max iterations best trade-off

Hello community,

I’m looking for a strategy to determine the optimal max iterations value “vra.setMaxIterations(maxIterations)”: a trade-off between the best final results and the lowest jsprit processing time.

I guess it depends on the algorithm used and the number of visited nodes, am I correct?

For instance, it is not necessary to use a high max iterations value for a simple problem.

Thank you,
Martin

No answer, never mind, max iterations is an algorithm parameter…

By the way, on jsprit 1.6.2, to build a problem:

final VehicleRoutingProblem vrp = vrpBuilder.build();
final VehicleRoutingAlgorithmBuilder vraBuilder = new VehicleRoutingAlgorithmBuilder(vrp, xml_file);

On jsprit 1.7.0, VehicleRoutingAlgorithmBuilder doesn’t exist anymore. Did you remove the XML configuration way? Is the appropriate way now to configure the algorithm in code?

Yes, configure it in code like this: jsprit/jsprit-core/src/main/java/com/graphhopper/jsprit/core/algorithm/box/SchrimpfFactory.java at master · graphhopper/jsprit · GitHub
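
For example, roughly like this (just a sketch from memory, assuming the 1.7.x com.graphhopper.jsprit packages):

import com.graphhopper.jsprit.core.algorithm.VehicleRoutingAlgorithm;
import com.graphhopper.jsprit.core.algorithm.box.Jsprit;
import com.graphhopper.jsprit.core.algorithm.box.SchrimpfFactory;
import com.graphhopper.jsprit.core.problem.VehicleRoutingProblem;

final VehicleRoutingProblem vrp = vrpBuilder.build();

// Schrimpf et al. configuration, as set up in the linked factory class
final VehicleRoutingAlgorithm vra = new SchrimpfFactory().createAlgorithm(vrp);
vra.setMaxIterations(2000);

// alternatively, the default out-of-the-box configuration
final VehicleRoutingAlgorithm defaultVra = Jsprit.createAlgorithm(vrp);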

Unfortunately, you need to find out yourself. Just conduct a sensitivity study on your problems with varying max iterations. For very simple problems, you don’t need many iterations. However, you can also use a number of termination strategies, which you can find here: jsprit/jsprit-core/src/main/java/com/graphhopper/jsprit/core/algorithm/termination at master · graphhopper/jsprit · GitHub. For example, VariationCoefficientTermination terminates the algorithm if the variation of the costs of the found solutions falls below a threshold.
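
For example (a sketch; if I remember the API correctly, VariationCoefficientTermination takes a window of iterations and a threshold, and needs to be registered as a listener as well):

import com.graphhopper.jsprit.core.algorithm.termination.VariationCoefficientTermination;

// upper bound on the number of iterations
vra.setMaxIterations(10000);

// stop early once the coefficient of variation of the solution costs
// over the last 200 iterations falls below 0.01
VariationCoefficientTermination termination = new VariationCoefficientTermination(200, 0.01);
vra.addTerminationCriterion(termination);
vra.addListener(termination);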


BTW: you can still use the XML config by using the jsprit-io module. It has just been removed from the core.
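
If you go that route, it looks roughly like this (a sketch, assuming the jsprit-io dependency is on the classpath; class name taken from the io module):

import com.graphhopper.jsprit.core.algorithm.VehicleRoutingAlgorithm;
import com.graphhopper.jsprit.io.algorithm.VehicleRoutingAlgorithms;

// reads the algorithm configuration from an xml file, as before
final VehicleRoutingAlgorithm vra = VehicleRoutingAlgorithms.readAndCreateAlgorithm(vrp, "algorithmConfig.xml");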

I played around with this setting for some time. In my case, where the number of jobs and vehicles is dynamic, I concluded the following to be the best value for maxIterations:

// scale the iteration count with the problem size...
int iterations = vrpBuilder.getLocationMap().size() * vrpBuilder.getAddedJobs().size() * vrpBuilder.getAddedVehicles().size();
// ...and clamp it to the range [10000, 50000]
iterations = Math.max(10000, iterations);
iterations = Math.min(50000, iterations);
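
and then I simply pass it on to the algorithm:

vra.setMaxIterations(iterations);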

You should consider more than just the problem properties. Computation time is also affected by hardware: thread count, CPU speed, I/O speed. A fixed iteration count can be far too expensive for very big problems, so for big data you should use a time-based termination. You can also land on the best solution in the search space early by chance, so you should consider using a variation-based termination criterion as well.
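
For instance, a time-based termination could look roughly like this (a sketch; as far as I know TimeTermination takes a threshold in milliseconds and also has to be registered as a listener):

import com.graphhopper.jsprit.core.algorithm.termination.TimeTermination;

// stop after at most 30 seconds, regardless of the iteration count
TimeTermination timeTermination = new TimeTermination(30000);
vra.addTerminationCriterion(timeTermination);
vra.addListener(timeTermination);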

You can look here:

and here is a proposal for a termination algorithm:

Hope this helps you understand this better.