
2 Seconds, Not 2 Minutes: Guardrail Runtime Performance Tuning

Changelog · Pro · Enterprise

If there's one theme with Zenable tooling, it's that speed matters. Whether it's 200x faster guardrails via local sync, 3.4x faster AI reviews, or rewriting our CLI to bundle all dependencies and cut startup time by 10x, we don't ship slow tools.

This time, we performance-optimized our guardrail checks, finding the right balance between batch size and the fixed startup overhead of each guardrail engine. The result: a sample 281-file evaluation dropped from over two minutes to just 2.4 seconds.

Here's what we found when profiling across 281 files:

| Batch Size | Time |
| --- | --- |
| 1 file | 667s |
| 5 files (previous default) | 135s |
| 10 files | 70s |
| 25 files | 29s |
| 50 files | 15s |
| 100 files | 7.7s |
| All at once | 2.4s |

We found that a typical guardrail engine invocation takes between 2 and 3 seconds just to start up, while batch size has little effect on how long each invocation runs. With the old 5-file default, that meant dozens of engine startups adding up to over two minutes. Larger batches did increase peak memory usage, but only by tens of MBs in our testing. By tuning batch size, we brought the total down to 2.4 seconds, a 58x improvement. Batch size is now variable and tuned per guardrail engine, so each one runs at its optimal throughput.
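The measurements above are consistent with a simple fixed-startup-cost model: total runtime is roughly the number of engine invocations times the startup cost. Here is a minimal sketch of that model, assuming a 2.5s startup (the midpoint of the observed 2-3s range); the predictions land close to the measured table:

```python
import math

STARTUP_S = 2.5  # assumed midpoint of the observed 2-3s engine startup cost
FILES = 281      # size of the sample evaluation from the benchmark

def predicted_runtime(batch_size: int) -> float:
    """Predict total runtime when startup cost dominates.

    Batch size barely changes how long one invocation runs, so total
    time is approximately (number of invocations) * (startup cost).
    """
    invocations = math.ceil(FILES / batch_size)
    return invocations * STARTUP_S

for batch in (1, 5, 25, 281):
    print(batch, predicted_runtime(batch))
# batch 1   -> 702.5s predicted (667s measured)
# batch 5   -> 142.5s predicted (135s measured)
# batch 25  ->  30.0s predicted ( 29s measured)
# batch 281 ->   2.5s predicted (2.4s measured)
```

The close fit is why one large batch per engine wins: it pays the startup cost exactly once.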

You'll also notice a new progress bar so you can see exactly what's happening during longer runs. Want the details? Pass --profiling to get a full performance breakdown of your command.

We also added stdin support. Pipe anything that outputs file paths into zenable check for targeted scans:

```shell
git status --short | zenable check
find src -name '*.py' | zenable check
zenable check --branch
```

Guardrails are now fast enough, and flexible enough, to fit anywhere in your workflow.

Still not using our IDE integrations? Use our one-command installer and get set up in 30 seconds.