Accelerate inference with torch.compile caching
torch.compile caching cuts model startup times in PyTorch by 2-3x.
02.04.2026
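As a minimal sketch of the idea behind the headline: PyTorch's Inductor backend can persist `torch.compile` artifacts to disk so that a restarted process reuses them instead of recompiling from scratch. The environment variable names below (`TORCHINDUCTOR_FX_GRAPH_CACHE`, `TORCHINDUCTOR_CACHE_DIR`) are assumptions based on PyTorch's caching configuration, not details stated in this article, and must be set before `torch` is imported.

```python
import os

# Assumed PyTorch Inductor cache settings (not from the article):
# enable the persistent FX graph cache and point it at a directory
# that survives process restarts. Set these BEFORE `import torch`.
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"
os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/tmp/torchinductor_cache"

# From here, `import torch` and `torch.compile(model)` as usual:
# the first run pays the compilation cost and populates the cache,
# and subsequent process starts boot faster by reusing the artifacts.
print(os.environ["TORCHINDUCTOR_CACHE_DIR"])
```

In practice the speedup comes from the second and later launches, so the cache directory should live on storage shared across restarts (or be prewarmed in a build step).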