The memory usage and CPU time will most likely be dominated by the LM+custom profile requests. For some custom models and large distances such queries can potentially explore the entire graph. You can set a limit for the number of visited nodes per request, which also gives you an upper bound on the memory usage (and, roughly, the calculation time) per request.
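As a sketch, assuming a GraphHopper-style YAML configuration, such a limit could be set via the `routing.max_visited_nodes` option (the value below is only an illustration and should be tuned for your graph size and hardware):

```yaml
graphhopper:
  # Hypothetical value: abort a query after visiting this many nodes,
  # which bounds the memory and CPU cost of a single request.
  routing.max_visited_nodes: 1000000
```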
These are a bit trickier to configure. The higher prepare.lm.landmarks is, the more memory you will need just to start the server, without necessarily seeing positive effects on query performance. routing.lm.active_landmarks can improve the A* heuristic, but calculating the heuristic also comes with an overhead, so you'll need to experiment with this value to find out what works best for you. Note that LM+custom_model requests in particular might not benefit from many active landmarks at all, and depending on the model they could even be counterproductive. When you experiment with these requests you should always compare against lm.disable=true, but the results will depend on the custom model.
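A minimal sketch of the two settings discussed above, assuming a GraphHopper-style YAML configuration (the numbers are hypothetical starting points, not recommendations; benchmark them against requests sent with lm.disable=true):

```yaml
graphhopper:
  # Hypothetical starting values for experimentation.
  # More landmarks means more memory needed just to start the server.
  prepare.lm.landmarks: 16
  # More active landmarks may sharpen the A* heuristic but adds per-query
  # overhead; for LM+custom_model requests fewer can work better.
  routing.lm.active_landmarks: 8
```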