--adapt[=min=#,max=#]
zstd will dynamically adapt compression level to perceived I/O conditions. Compression level adaptation can be observed live by using command -v. Adaptation can be constrained between supplied
min and max levels. The feature works when combined with multi-threading and --long mode. It does not work with --single-thread. It sets window size to 8 MB by default (can be changed manually, see wlog). Due to the chaotic nature of dynamic adaptation, compressed result is not reproducible.
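For reference, a minimal invocation combining the pieces the man page mentions might look like this (the file name and level bounds below are just placeholders, not recommendations):

```shell
# Adaptive compression constrained to levels 3..15, multi-threaded,
# with --long mode; -v shows the level changing live.
zstd --adapt=min=3,max=15 -T0 --long -v backup.tar -o backup.tar.zst
```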
I really should have read the documentation! That feature looks awesome, but in a quick test it could only use about 50% of the available output bandwidth. My upload speed is 50 Mbps, but zstd could only send about 25 Mbps.
Similarly, on a local speed test (SSD -> SSD), using a fixed compression level was much faster than --adapt.
"" note : at the time of this writing, --adapt can remain stuck at low speed when combined with multiple worker threads (>=2). ""
There are some tunables under ADVANCED COMPRESSION OPTIONS (the --zstd= syntax) that might help:
Leave wlog alone unless you're willing to store the value out of band and pass it in again during decompression.
hashLog: a bigger value uses more memory to compress, but is often faster.
chainLog: a smaller value compresses faster, but with a worse ratio.
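If it helps, these are set via the comma-separated --zstd= form (the values and file names below are illustrative only):

```shell
# Compress with explicit hash and chain logs (hlog, clog) at a fixed level.
zstd -19 --zstd=hlog=22,clog=23 -T0 data.bin -o data.bin.zst

# If you raise wlog past the decoder's default limit of 27 (128 MiB),
# the decompressor must be told to accept the larger window -- this is
# the "store the value out of band" caveat from above:
zstd --zstd=wlog=30 data.bin -o data.big.zst
zstd -d --long=30 data.big.zst -o data.out
```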
In your use case, monitoring general system utilization to identify bottlenecks might also help. My gut instinct is that you may already have hit a memory bandwidth limit for the platform, at which point REDUCING the hashLog until the tables fit within your intended performance budget might yield better bandwidth results. Reducing the chainLog value might have the same effect.
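As a rough sanity check on the memory side: each of those tables has 2^log entries, and (an assumption on my part, based on zstd using 32-bit indices internally) each entry costs 4 bytes, so the sizes grow as powers of two:

```shell
# Approximate per-table memory cost: 4-byte entries, 2^log entries
# (assumption: zstd's hash/chain tables use 32-bit indices).
for log in 20 22 24; do
  echo "log=$log -> $((4 * (1 << log) / 1024 / 1024)) MiB per table"
done
```

So dropping hashLog from 24 to 22 shrinks the hash table from 64 MiB to 16 MiB, which is the kind of reduction that can matter once you are memory-bandwidth bound.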
If you're running your test over the internet (fluctuating latency, some packet loss), try enabling the BBR [1] TCP congestion control algorithm on the sender side to utilize the available bandwidth more efficiently.
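On Linux, switching the sender to BBR is a two-step change (requires root; the tcp_bbr module has shipped with mainline kernels since 4.9):

```shell
# Load the BBR module and make it the active congestion control algorithm.
modprobe tcp_bbr
sysctl -w net.ipv4.tcp_congestion_control=bbr
# Verify the change took effect:
sysctl net.ipv4.tcp_congestion_control
```

Note this only persists until reboot; drop the setting into /etc/sysctl.d/ to make it permanent.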