I mostly stopped minifying my assets; as these benchmarks show, by far the biggest size reduction comes from gzip compression. For example, lodash goes from 543k to 97k just by gzipping it. That can be reduced further to ~25k by minifying, but it's quite a small saving compared to the compression alone, and even on very slow connections 50k doesn't take too long to load.
The reason I don't really like doing it is that it obfuscates the code. Happened to discover a bug in production (which may not be easy to repro)? It's a lot easier to examine if your files aren't minified. It also allows users to examine the scripts if they encounter bugs, which I've done on a number of occasions. And it simplifies the entire build process.
I know about source maps, but many people don't ship them and it all seems quite complicated for very little savings.
When I do minify, I usually just have a little script to remove the comments, which is often the biggest win with the smallest obfuscation.
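For illustration, a comment-stripping "little script" can be as short as two regexes. This is my sketch, not the author's actual script, and the caveat matters: a regex approach will mangle code where `//` or `/*` appears inside a string or regex literal; a real tool tokenizes first.

```javascript
// Naive comment stripper: removes block comments, then line comments.
// Breaks on comment-like sequences inside strings or regex literals.
function stripComments(js) {
  return js
    .replace(/\/\*[\s\S]*?\*\//g, '')  // block comments, non-greedy
    .replace(/\/\/[^\n]*/g, '');       // line comments, up to newline
}

console.log(stripComments('/* header */\nconst x = 1; // counter\n'));
```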
I suppose that if you have a lot of JavaScript (>1M) it has more value, but in a lot of cases I see people minifying 100k scripts for little benefit "because that's just what you do".
My application ships a small 3K script people can add to their website which isn't minified on purpose (imagine being able to inspect what 3rd-party scripts do exactly!); IIRC minifying it would save 500 bytes or something, but I've had quite literally dozens of people ask me "why isn't it minified?!"
You can always serve source maps for debugging purposes. They only get loaded once you actually open up devtools and start poking around. You can also configure stack traces to use the source maps, but I do understand the complexity increase in doing all this.
It's not just about reducing the download time. Minified JS will parse faster - especially if using a minifier like Closure Compiler that has various heuristics to remove/simplify code.
I couldn't find any decent benchmarks to say exactly how much though. Of course for your 3kb script it's irrelevant, but a large web app may load a few mb of JS in which case it could make a big difference, especially on mobile devices.
You don't need minification for that as such; you can add it to your test step (and probably also run a more advanced static analysis tool while you're at it).
I've also had minification introduce bugs; and while it's uncommon, I'd rather not take the risk, as these things are not easy to figure out.
What I discovered there is that the best minification is one where you just produce an empty file -- it's 0 bytes of output and extremely fast.
And if you say "wait, I have the additional requirement that you didn't break the code in the original file, obviously!", you then must define what exactly "break" means in that requirement. For example, it could be that some user requires that someFunction().toString().charAt(10) == 'a', in which case even removing whitespace counts as breakage.
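That example is easy to observe directly, since Function.prototype.toString returns the function's source text:

```javascript
// The exact source text, whitespace included, is visible at runtime,
// so by the strictest definition any reformatting is observable.
function someFunction() { return 1; }

console.log(someFunction.toString());            // prints the original spacing verbatim
console.log(someFunction.toString().charAt(10)); // 'o' (the second 'o' in "someFunction")
```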
You might think the above is just being overly pedantic or pathological, but it demonstrates a principle that actually ends up mattering: you can't evaluate a minifier unless you define what a successful minification even means, particularly in the case of more advanced transformations.
You're not wrong, but if your code breaks when you apply the simplest refactorings, you probably aren't thinking about minifying your code. And while JavaScript does let you put yourself in that position, I've never seen it in production code. AngularJS dependency injection is a notable exception, and it has a backup mechanism.
Closure Compiler's advanced mode, besides having the best minification (it is actually a full compiler with many different optimization passes) and being slow, has a feature that often goes untested externally (but is used heavily inside Google): cross-module code motion.
So if you split your code into modules, some of which are loaded later, the compiler can figure out which code in the initial download is referenced but not actually used yet, and move it into the late-loaded modules. Ordinarily that code gets retained in the initial load because it looks like it is referenced.
A long time ago Malte Ubl demonstrated this with the Splittable add on for Babel.
This is available in Rollup (and webpack 5? 6?), and the new generation tools like Snowpack / Vite benefit from native imports, no bundling needed to start with.
How do they implement it? Closure has two passes: Cross Module Method Motion and Cross Module Code Motion. Cross Module Method Motion can move prototype methods; it leaves behind a stub, and a late load overwrites the stub. Cross Module Code Motion can move top-level global-scope code (e.g. functions). The latter in particular is sensitive to declaration-order effects, so movement is tricky and requires global analysis.
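As I read that description, the stub-and-overwrite idea works roughly like this (an illustrative sketch, not actual Closure Compiler output):

```javascript
// Initial chunk: only a stub for the method ships up front.
function Widget() {}
Widget.prototype.render = function () {
  throw new Error('late-loaded module not yet applied');
};

// Late-loaded chunk: overwrites the stub with the real method,
// so callers after this point get the moved implementation.
Widget.prototype.render = function () {
  return '<div>widget</div>';
};

console.log(new Widget().render()); // "<div>widget</div>"
```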
Rollup’s tree shaking should have the same result - the way I understand it, it puts all module exports in the same scope, renaming variables to avoid conflicts, and removing duplicates. Then while “shaking” the dependency tree, a function that is only used by another imported module will end up in one of the dynamically loaded parts, i.e. not the main bundle. Where exactly it was first imported is irrelevant for its location in the output files.
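A hand-written sketch of that flattening (the `$1` suffix mirrors Rollup's renaming convention, but this is not real Rollup output for any actual input):

```javascript
// Two modules each export a `label`:
//   a.js: export const label = 'a';
//   b.js: export const label = 'b';
// After scope hoisting, both live in one shared scope, and the
// second binding is renamed to avoid the collision:
const label = 'a';    // from a.js
const label$1 = 'b';  // from b.js, renamed
console.log(label, label$1);
```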
For snowpack, it’s just a natural consequence of runtime imports.
Wow, esbuild comes away as the clear winner overall. It’s always the fastest — often at least 10x as fast as many of the others – and its compression ratio was within 2% or so of the best in all cases.
If you're mostly worried about how quickly you can minify when you do minify, then you're worrying about the wrong thing. You want the smallest output that executes the fastest; how long minification takes doesn't really matter, as you only minify on delivery to end-users, not every time you make a change locally.
So instead it seems Google Closure is the best, in the cases the author got it to work. Otherwise it's UglifyJS/Terser, depending on your needs.
It might be a bit of a red herring but build (and thus deploy) times absolutely do matter.
Going from a 30 second deploy to a 2 minute deploy to a 30 minute deploy has severe impact on your workflow, how atomic your changes can be and how immediate your feedback loop is.
I think (without data, but from my own experience) it's one of the most massive overlooked productivity sinks, especially in web.
True, and I agree with you: slow deploys are one of the most massively overlooked productivity sinks. Minification tends to be the slowest step in a frontend pipeline (at least in my experience), but it's closer to a two-minute step with Google Closure Compiler on a small codebase (20K+ LoC) than 30 minutes, and the size difference is more important (for us) than a difference of seconds when deploying.
I disagree only because a large e2e test suite can be very long. I often see 20-30 minute e2e suites for large applications. Applications which compile in a few minutes.
Totally agree re: deploy being very important to optimize.
You would also develop and deploy against Closure using simple optimizations for your dev workflow and only run advanced mode more infrequently. Advanced mode also gives deeper type checking. I have not worked with Closure Compiler in years, but it is a different and incredibly powerful beast, easily extensible with your own compiler passes as well.
The point is that the two are competing priorities. You still care about minimizing end user download size, but you also care about your own developer experience (and concomitant velocity).
Different people may weight those things differently, but it is unlikely that someone would assign a weight of zero to one or the other (so people are unlikely to just throw up their hands and say "no minification").
> If you're mostly worried about how quickly you can minify when you do minify, then you're worrying about the wrong thing.
You don't need to be 'mostly' worried about how quickly you can minify for esbuild to be the top choice: if you're only, say, 10% worried about build times and still 90% ('mostly') worried about size and execution speed, esbuild still comes out on top in these benchmarks.
Or, put another way: if you're 10% worried about getting your production deploys done 10% faster and 90% worried about how fast you can deliver the code to your users when they load your site (and/or bandwidth costs that increase with each user), then esbuild might make more sense for you.
For the rest of us who are 100% focused on the best experience for our users, we stick with the tools that do the best minification while being a bit slower, and throw more hardware at it if needed for the deploys.
If you can do 10x the builds in a day, you can end up catching more issues before your users do. Meanwhile the difference for them is in single-digit percents. An interesting next test would be how performant the compiled versions are.
> Or, put another way: if you're 10% worried about getting your production deploys done 10% faster
Correction: 10x faster, not 10%. The 60-minute deploy is shortened to 6 minutes, not 54 minutes. This is a significant difference in impact on a deployment workflow.
> we stick with the tools that do the best minification while being a bit slower
Correction: not 'a bit slower', _10 times_ slower, which is significant.
Closure compiler can produce truly impressive output, but it comes at a cost. There are additional rules for how one's code must be written – primarily that you must not use reflection/metaprogramming techniques. If any code, including in dependencies, violates these rules, there's often no warning or error. Instead the output JS will just be wrong. Usually it's immediately and unambiguously wrong, with a runtime exception as the application starts, but every once in a while it's wrong in a very subtle way that requires careful debugging.
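A classic instance of that failure mode is mixing dotted and string-keyed access to the same property: the compiler renames the dotted access but can't see into the string. This is a hand-written before/after for illustration, not actual compiler output:

```javascript
// Before compilation: the property is reached both ways.
const settings = { verboseLogging: true };
const isEnabled = (flag) => settings[flag];
console.log(settings.verboseLogging, isEnabled('verboseLogging')); // true true

// After advanced-mode renaming (sketch): `verboseLogging` became `a`,
// but the string literal was left alone, so the lookup silently misses.
const s = { a: true };
const on = (flag) => s[flag];
console.log(s.a, on('verboseLogging')); // true undefined
```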
For various reasons, I end up debugging a disproportionate number of these cases, and I gotta say that the combination of unsafe optimizations plus long slow compile times can make for some unfun times.
IMO the best option is to try very hard not to have a closure-compiler shaped problem. Keep your client-side code small. If you can't do that, compose it out of small, largely independent, lazily loadable modules. And if you can't do _that_, come to terms with it as early as possible and start using Closure Compiler from the beginning.
Because as much trouble as Closure Compiler can be, there's a scale of application where nothing else comes close. It has top tier dead code elimination (not just tree shaking, but proper DCE), can split an application into lazily loadable chunks, will move code between modules, and will perform pervasive type-aware optimizations that AFAIK no other minifier comes close to.
Of course developers prefer faster build times. But what do the end-users who you write code for prefer? (Especially in a typical CI/CD environment where developers often don't need to monitor and wait for builds to finish)
2% sounds small. And it is, if your traffic is small. It's not small when you have millions of users.
It's tiny, and usually negligible in context with all the other data that needs to be transferred, even with millions of users. Millions of users might be the situation where I'd start thinking of integrating slower builds as an alternative, provided they can seamlessly live side-by-side with the fast build tools.
Why would end users care that a million other people are also getting a slightly larger download? Accounts payable might care, but they aren't end users.
2% file size reduction is probably an over optimization for a new startup searching for their first users.
But for an established product with substantial traffic, swapping out a js minifier for one that achieves even single digit % compression improvement seems worthwhile to me - if the only downside is adding a few extra seconds to build time.
End users prefer that we ship the features and squash the bugs they care about, which can be done faster with shorter build cycles. Our webpack prod build, which we have to run as sometimes minifying breaks things, takes 6 minutes. It’s the longest build step we have.
Working on codebases where the compilation times can often be 30-60s, a 10x improvement makes a massive difference in development flow and is definitely worth a 2% regression in size. It's the difference between being multi-tasking while compiling and not.
Of course you could use esbuild for dev and terser for prod, but maintaining two toolchains may be a support headache.
A 2% difference is pretty small, though. Small enough that the top four or so minifiers appeared to have very similar performance; which one won just depended on which codebase you were testing. If you're picking a minifier, I'd definitely go with esbuild given that it falls in that 'top of the pack' group and is vastly faster.
If someone were only interested in the smallest possible minified size, the appropriate solution would be to use all of these minifiers every time and choose the smallest result.
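The driver for that is trivial to sketch. The minifier function used here is a toy stand-in; real entries would be terser, esbuild, and so on, invoked through their own APIs:

```javascript
// Run every available minifier and keep whichever output is smallest.
function pickSmallest(source, minifiers) {
  let best = source;
  for (const minify of minifiers) {
    try {
      const out = minify(source);
      if (out.length < best.length) best = out;
    } catch (err) {
      // One minifier choking on valid syntax shouldn't abort the build.
    }
  }
  return best;
}

// Toy stand-in for a real minifier: collapse whitespace runs.
const collapse = (src) => src.replace(/\s+/g, ' ').trim();
console.log(pickSmallest('const  x =  1 ;', [collapse]));
```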
I'm very bullish on esbuild. But in testing with my own app, it produces output 8-12% bigger than the same files processed with terser. I would bet that it's a small number of optimizations that are missing that benefit codebases like mine (JSX-heavy).
My biggest frustration is that I am not nearly as skilled in Go as I am in other languages, and don't have the confidence jumping into a large existing Go codebase than I would with JS/TS/etc. I have the same issue with Flow, which is written in OCaml.
I’ll warn that the two big perf claimants (esbuild and SWC) have absolutely barfed on perfectly legal syntax in a project I contribute to. SWC didn’t even say what the problem was, esbuild complained about spreading props into a JSX expression even where other props were being spread into other JSX expressions in the same module. I’m not disparaging either effort but they have some rough edges.
I found esbuild a few months ago and... I'm never going to use a different one when doing things manually; the speed difference is enormous. It's two orders of magnitude faster than the alternatives, as shown in these benchmarks.
On the other hand, with Node.js supporting native import/export, I also find myself bundling and minifying the code less and less. I just wish Create-React-App started using it, since I can't be bothered to dig in enough to set it up manually (the whole point of my using CRA is to avoid manual config).
Not that it makes minification fast (which I don't care that much about; I minify only on deploy, and it's a small % of the time that takes), but it does allow for a very fast workflow in dev mode: auto reload seems almost instant even with heavy transpiling.
I only mention it because I assume people look at these articles and think about their dev-process speed as well.
If you haven't, give it a try: it's from the Vue author, so it's really clean, but works with react as well.
I just tried esbuild on a project that was using webpack. Had one bundle go up from 107kb to 111kb, then if I run it through terser, it'll go back down to 109kb. The speed was phenomenal - even when running through the separate terser step, it's so much faster.
It seems Google Closure (advanced) fails in almost half of the tests but otherwise wins when it succeeds. It seems too early to claim any "winner" here before someone manages to fix those tests.
A comment reads "google-closure-compiler is no longer maintained" but that's completely misleading. Maybe that's why the author didn't bother to fix their config for Google Closure?
Google Closure's advanced optimizations require annotations throughout your codebase; in many codebases it may take significant effort to get this working. When you need that extra optimization it's great, but it can be a lot of effort otherwise.
I'm most excited about esbuild though, and I'm curious how it'll fit into a larger codebase and ecosystem. The author is aggressive about keeping it simple (for good reason!); the focus on speed is awesome.
> it may require significant effort to get this working
You're right, the author probably didn't provide externs and/or exports, so the tests fail. It's not really that much of an effort, especially if you're supposed to be testing the tool itself. Considering that the author is already testing libraries that have externs/exports written for them, it just shows the lack of care in the tests.
Actually no, you rarely need to annotate anything. The type checking is independent from the optimization if you turn off the type-based optimizations; you still get good results. That doesn't get you away from the requirements for advanced optimizations, though (the property-use restrictions in particular).
I was confused by this comment as well. I think what it's saying is that google-closure-compiler-js (note -js) is no longer maintained, and that to explain why it's using the (still maintained) google-closure-compiler (note no -js).
Awesome benchmarks. It would be interesting to see how much minification affects actual client and server performance.
Also, as I understand it, all these tools use a lexing and AST-parsing stage. It would be interesting if you could build a very simple minifier (at least a whitespace one) that skips that step by working directly on the input buffer.
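A single-pass whitespace collapser that never builds an AST is easy to sketch; the only state it really needs is whether it's currently inside a string literal. This toy ignores template literals, regex literals, and comments:

```javascript
// Walk the buffer once, collapsing runs of whitespace outside string
// literals into a single space, and copying string contents verbatim.
function collapseWhitespace(src) {
  let out = '';
  let quote = null;          // current string delimiter, or null
  let pendingSpace = false;  // saw whitespace outside a string
  for (let i = 0; i < src.length; i++) {
    const ch = src[i];
    if (quote) {
      out += ch;
      if (ch === quote && src[i - 1] !== '\\') quote = null;
    } else if (ch === '"' || ch === "'") {
      if (pendingSpace) { out += ' '; pendingSpace = false; }
      quote = ch;
      out += ch;
    } else if (/\s/.test(ch)) {
      pendingSpace = true;   // defer: emit at most one space later
    } else {
      if (pendingSpace) { out += ' '; pendingSpace = false; }
      out += ch;
    }
  }
  return out;
}

console.log(collapseWhitespace('const msg  =\n  "a  b";'));
```

Note this keeps one space between tokens rather than removing whitespace entirely, since `return x` must not become `returnx`; deciding which spaces are truly removable is exactly where the lexer starts earning its keep.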
There is also the fact that more code means longer minification time. You can praise esbuild for building a CRA project in 2 seconds, but there are workflows/frameworks that compile in under a second by using less runtime code.
I wonder why SWC is so far behind the rest in terms of minified size. Kinda makes me not want to use it, to be honest. Maybe there's some low-hanging fruit to tackle to make it better?
Because it comes from the word minify, which I believe was coined by Douglas Crockford. To my knowledge he created the first minification tool, JSMin, in 2001, and described what it did as minifying.
My parse of "minimizing code" is that it's an approach to writing code, with the goal of writing less code. I would expect it to include things like terse idioms, DRY, preferring data over code, preferring configuration over code, preferring libraries over rolling your own, managing scope, etc. All things about writing different code, which solves the problem but is not always semantically equivalent.
By contrast, minification is a set of mechanical, semantics-preserving syntax transformations applied to a completed program after it is written.
I hear you. I always applied the term "minimalism/minimal code" for that style of programming, which I'm a follower of myself (except the lib/diy route, which tends to be more minimal if you diy).