Folder of GTFS zip files in config file

Hi Team,

I have more than 2000 zip files downloaded from an open-source website, each an individual GTFS feed, together covering much of the planet. I know we can give multiple zip file names separated by commas in the config file:

  # for multiple files you can use: gtfs.file: file1.zip,file2.zip,file3.zip
  gtfs.file: gtfs-vbb.zip

A quick check: could you please confirm whether there is a way to give a folder path where all 2000+ zip files are kept, or a path with a wildcard like data/*.zip?

If no such provision is available, can someone suggest the best way to handle this, or is updating the code to loop over all zip files the only solution?
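
For example, I could generate the comma-separated value myself with a small helper along the lines of the sketch below (purely illustrative, not part of GraphHopper), but with 2000+ feeds that single line gets unwieldy, which is why a folder or wildcard option would be much nicer:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    // Hypothetical helper (not part of GraphHopper): print a gtfs.file line
    // listing every *.zip found in a folder, ready to paste into the config.
    public class GtfsFileListBuilder {
        public static void main(String[] args) throws IOException {
            Path folder = Paths.get(args.length > 0 ? args[0] : "data");
            try (Stream<Path> paths = Files.list(folder)) {
                String value = paths
                        .filter(p -> p.getFileName().toString().endsWith(".zip"))
                        .sorted()
                        .map(Path::toString)
                        .collect(Collectors.joining(","));
                System.out.println("gtfs.file: " + value);
            }
        }
    }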

Thank you very much!

-Prem

Currently there is no other way than to specify individual files. But I doubt it will work memory-wise with so many zip files. Try with a few files first and then increase the number of files and the memory…

Thanks for the reply @karussell. I have plenty of RAM, around 250 GB.

I changed the code to handle a folder path and loop through all 2000+ zip files (roughly as sketched below). The PT network was built fine, but the import then broke while interpolating transfers with the exception shown after the sketch.
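
For reference, the change essentially resolves the configured value into a list of zip paths, accepting a directory in addition to the usual comma-separated list, and then feeds each zip into the import as before. The sketch below is only an approximation of that local patch; the class and method names are mine and not part of GraphHopper:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    // Sketch of the local patch (not upstream GraphHopper code): resolve the
    // configured gtfs.file value into concrete zip paths, accepting either the
    // usual comma-separated list or a directory full of *.zip feeds.
    public class GtfsInputResolver {

        public static List<Path> resolve(String gtfsFileValue) throws IOException {
            Path maybeDir = Paths.get(gtfsFileValue);
            if (Files.isDirectory(maybeDir)) {
                // Folder mode: pick up every zip inside, in a stable order.
                try (Stream<Path> files = Files.list(maybeDir)) {
                    return files
                            .filter(p -> p.getFileName().toString().endsWith(".zip"))
                            .sorted()
                            .collect(Collectors.toList());
                }
            }
            // Existing behaviour: a comma-separated list of individual zip files.
            List<Path> result = new ArrayList<>();
            for (String name : gtfsFileValue.split(",")) {
                String trimmed = name.trim();
                if (!trimmed.isEmpty())
                    result.add(Paths.get(trimmed));
            }
            return result;
        }
    }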

Caused by: java.lang.IllegalStateException: Maximum edge count exceeded: 2147483647
	at com.graphhopper.gtfs.PtGraph.addEdge(PtGraph.java:122)
	at com.graphhopper.gtfs.PtGraph.createEdge(PtGraph.java:283)
	at com.graphhopper.gtfs.GtfsReader.insertTransferEdges(GtfsReader.java:219)
	at com.graphhopper.gtfs.GtfsReader.insertTransferEdges(GtfsReader.java:209)
	at com.graphhopper.gtfs.GraphHopperGtfs.insertInterpolatedTransfer(GraphHopperGtfs.java:185)
	at com.graphhopper.gtfs.GraphHopperGtfs.lambda$interpolateTransfers$6(GraphHopperGtfs.java:172)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
	at java.base/java.util.stream.DistinctOps$1$2.accept(DistinctOps.java:174)
	at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
	at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1939)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
	at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
	at com.graphhopper.gtfs.GraphHopperGtfs.interpolateTransfers(GraphHopperGtfs.java:153)
	at com.graphhopper.gtfs.GraphHopperGtfs.importPublicTransit(GraphHopperGtfs.java:134)

Can someone please confirm whether GraphHopper supports building GTFS routing from such a huge number of GTFS zip files covering the whole planet? If it doesn't, is there any alternative way to host planet-wide GTFS routing?