The first step in compilation is lexing -- converting a character stream into a stream of semantic "tokens", where a token might be "a number literal" or "the 'while' keyword" or a single-character token like "{". This process is usually done via regexes.
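For concreteness, here's a toy sketch of that regex-driven style in Rust. The token set, the patterns, and the panicking error handling are invented for illustration, and it assumes the `regex` crate is available:

```rust
use regex::Regex;

#[derive(Debug)]
enum Token {
    Number(String),
    Keyword(String),
    LBrace,
    RBrace,
}

fn lex(src: &str) -> Vec<Token> {
    // One rule per token kind; each pattern is anchored at the current position.
    let rules: Vec<(Regex, fn(&str) -> Token)> = vec![
        (Regex::new(r"^[0-9]+").unwrap(), |s| Token::Number(s.to_string())),
        (Regex::new(r"^while\b").unwrap(), |s| Token::Keyword(s.to_string())),
        (Regex::new(r"^\{").unwrap(), |_| Token::LBrace),
        (Regex::new(r"^\}").unwrap(), |_| Token::RBrace),
    ];
    let mut tokens = Vec::new();
    let mut pos = 0;
    while pos < src.len() {
        let rest = &src[pos..];
        // Skip whitespace between tokens.
        match rest.find(|c: char| !c.is_whitespace()) {
            None => break,
            Some(ws) if ws > 0 => { pos += ws; continue; }
            _ => {}
        }
        // First matching rule wins; a real lexer would also apply longest-match.
        let matched = rules.iter().find_map(|(re, make)| {
            re.find(rest).map(|m| (m.end(), make(m.as_str())))
        });
        match matched {
            Some((len, tok)) => { tokens.push(tok); pos += len; }
            None => panic!("unexpected character at byte offset {}", pos),
        }
    }
    tokens
}

fn main() {
    println!("{:?}", lex("while { 42 }"));
}
```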
It is rare for lexers to use actual regex engines, in my experience: it's impractical if you want to do proper error reporting unless you write your own regex engine, and at that point people tend to opt for hand-writing the lexer rules instead.
It's not unheard of, and I have the impression it's getting more common, but still only a small proportion of the compilers I've seen over the years explicitly use regexps.
Most tools like that reduce any regexes to DFAs or similar, but most production compilers I've seen don't use such tools because doing proper error reporting with them is a pain.
Yea, we're not talking about compilers, we're talking about lexers. Regex doesn't even make a little bit of sense in a compiler world. What lexer are you working with?
There are others, certainly, like parsing various configuration formats etc., but those tend to use either tool-generated lexers (which usually compile down to DFAs) or handwritten lexers that may or may not use regexes, and there too it's fairly uncommon to see much regex use.
I custom write all of my lexers for exactly the reasons I outlined above.
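For what it's worth, the hand-written style usually ends up looking something like the following minimal Rust sketch (the token set and error type are made up); the point is that line/column tracking for error reporting falls out naturally:

```rust
#[derive(Debug)]
enum Token { Number(i64), LBrace, RBrace }

#[derive(Debug)]
struct LexError { line: usize, col: usize, msg: String }

fn lex(src: &str) -> Result<Vec<Token>, LexError> {
    let mut tokens = Vec::new();
    let (mut line, mut col) = (1, 1);
    let mut chars = src.chars().peekable();
    while let Some(&c) = chars.peek() {
        match c {
            '\n' => { chars.next(); line += 1; col = 1; }
            c if c.is_whitespace() => { chars.next(); col += 1; }
            '{' => { chars.next(); col += 1; tokens.push(Token::LBrace); }
            '}' => { chars.next(); col += 1; tokens.push(Token::RBrace); }
            '0'..='9' => {
                // Consume a run of digits and build the number as we go.
                let mut n = 0i64;
                while let Some(&d) = chars.peek() {
                    match d.to_digit(10) {
                        Some(v) => { n = n * 10 + v as i64; chars.next(); col += 1; }
                        None => break,
                    }
                }
                tokens.push(Token::Number(n));
            }
            other => {
                // The precise position is trivially available for the error report.
                return Err(LexError { line, col, msg: format!("unexpected '{}'", other) });
            }
        }
    }
    Ok(tokens)
}

fn main() {
    println!("{:?}", lex("{ 42 }\n{ ? }"));
}
```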
According to textbooks, lexers are the bottleneck because they are the only part that has to literally look at every byte of the source (I am including hashing as part of the lexing process).
I am not sure if this is true any more. It probably depends on the language.
Even as CPUs grow faster, code bases grow bigger, so the number-of-bytes argument is still important. On the other hand, heavy optimisations and difficult languages like C++ will shift the bottlenecks to later stages.
I think I read that too, but I am not sure that this is actually true. Please correct me if I am wrong, but according to the Rust developers, code generation seems to be the bottleneck. That's why they are working on incremental compilation to improve compile times. They detect changes in the input files by comparing the AST to the old version, so parsing is always necessary but later stages can be cached. That seems to confirm that parsing is not the bottleneck, at least not for Rust.
OTOH that probably also depends on your use case; JS, for example, needs to be parsed on every page load. Some JS VMs only parse the source code lazily: at first the parser only detects function boundaries, and a function only gets parsed completely if it is actually executed.
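Something like this is the rough shape of that lazy strategy, sketched in Rust with made-up names (a real pre-parser still has to lex strings and comments so the brace counting doesn't get fooled):

```rust
// The pre-parse pass only records where each function body starts and ends
// (by brace counting); the body is parsed for real the first time it's called.
struct LazyFunction {
    body_span: std::ops::Range<usize>, // byte range of `{ ... }` in the source
    parsed: Option<Vec<String>>,       // stand-in for a real AST
}

fn pre_parse(src: &str) -> Vec<LazyFunction> {
    let mut funcs = Vec::new();
    let bytes = src.as_bytes();
    let mut i = 0;
    while let Some(pos) = src[i..].find("function") {
        // Find the opening brace of the body, then skip to its matching '}'.
        let start = i + pos;
        let open = start + src[start..].find('{').unwrap();
        let mut depth = 0usize;
        let mut end = open;
        for (j, &b) in bytes[open..].iter().enumerate() {
            match b {
                b'{' => depth += 1,
                b'}' => {
                    depth -= 1;
                    if depth == 0 { end = open + j + 1; break; }
                }
                _ => {}
            }
        }
        funcs.push(LazyFunction { body_span: open..end, parsed: None });
        i = end;
    }
    funcs
}

fn call(src: &str, f: &mut LazyFunction) {
    // Full parse happens here, on first call only.
    if f.parsed.is_none() {
        let body = &src[f.body_span.clone()];
        f.parsed = Some(body.split_whitespace().map(str::to_string).collect());
    }
    // ...interpret f.parsed...
}

fn main() {
    let src = "function a() { return 1; } function b() { return 2; }";
    let mut funcs = pre_parse(src);
    call(src, &mut funcs[0]); // only a's body ever gets fully "parsed"
    println!("{} functions, first body: {:?}", funcs.len(), funcs[0].parsed);
}
```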
> I think I read that too, but I am not sure that this is actually true. Please correct me if I am wrong, but according to the Rust developers, code generation seems to be the bottleneck.
You're both right. The problem with C++ is how '#include' works: it just pastes in the content of all included files, so there's more overhead on the lexing side.
Rust's "equivalent" of '#include' is 'use', which doesn't have this problem, because it doesn't concatenates files.
You can make codegen faster by doing less work and producing less sophisticated code. You can't not look at every byte. It's the limiting factor on the fastest possible compilation, not what takes the most time in most production compilers.
I suspect that modern languages try to fix some of this by making the syntax unambiguous. That means you only need to tokenize each file exactly once. Compare with older languages where, if you change a header/module/etc., you need to reparse the whole shebang.
Possibly with Rust that moves a bunch of the grunt work into the code generator. I.e., whereas in C/C++ by the time you're generating code all your type information is set in stone, in Rust a bunch of stuff possibly isn't resolved yet.
Rust uses LLVM, a heavyweight backend with fearsomely thorough optimisation.
It might be that for non-optimised output, the parser becomes the bottleneck again. Or it might be that LLVM just imposes overhead even for the unoptimised case, that pays for itself in the optimised case.
Depending on how source programs are split up into files, parsing can be easily parallelized (think one thread per file), while other compilation tasks are harder due to interdependencies. E.g. semantic analysis requires building type representations, global namespaces, etc. Code generation is (usually) parallelizable as well, but there are a couple very serial steps in the middle, too.
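A minimal Rust sketch of that shape, using std::thread::scope; parse_file and the "semantic analysis" step are stand-ins, not anything from a real compiler:

```rust
use std::thread;

// Stand-in for real parsing: pretend each file yields a list of top-level items.
fn parse_file(path: &str) -> Vec<String> {
    vec![format!("item from {}", path)]
}

fn main() {
    let files = ["a.src", "b.src", "c.src"];

    // Parsing is embarrassingly parallel: one unit of work per file.
    let asts: Vec<Vec<String>> = thread::scope(|s| {
        let handles: Vec<_> = files
            .iter()
            .map(|f| s.spawn(move || parse_file(f)))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });

    // Semantic analysis needs the global view (namespaces, types), so it is
    // the serial step in the middle here.
    let global_namespace: Vec<&String> = asts.iter().flatten().collect();
    println!("{} top-level items", global_namespace.len());
}
```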
My experience is that parsers for source languages can reach into the 1-10 MB/s range, and depending on how complex the IRs and transformations are after that, code generation is usually around 0.5-5 MB/s. The stuff in the middle (dealing with IRs) is harder to measure in terms of bytes.
Perhaps in an ancient one-pass C compiler. Modern compilers build many trees, usually one for each pass. Later stages do expensive operations on sets (even without optimization one has to do basic register allocation), so I'd say the lexing stage is entirely negligible.
C++ compilers (at least on most current code bases that don't use modules) still need to lex and parse headers though -- so actual executable code is a pretty small fraction of what gets processed. Most compilations are also debug builds which don't optimize (though creating debug information can also be slow; the really slow step is usually linking).
> But I can't imagine a lexer would ever be the performance bottleneck in a compiler.
one rather trivial way to observe the effects of avoiding building a source file is to use ccache. ccache avoids the recompilation (even if you do a make clean, or some such), and it is not uncommon to observe speedups of a factor of 5 or so.
however, once you have crossed that barrier, you hit the linking wall, which is where you end up spending a large portion of the time. gold (https://en.wikipedia.org/wiki/Gold_(linker)) optimizes that, i.e. it supports incremental linking, but unfortunately i haven't had any experience with large code-bases where it is being used.
Lexer generators typically allow describing rules with regular expressions, but through Thompson's construction, subset construction, and DFA minimization, you can end up with a blazing fast/optimized DFA that processes your input at maximum speed.
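To make that concrete, the end product is basically a table-driven loop. Here's a hand-written Rust sketch of the kind of DFA a generator would derive from the regexes [a-zA-Z_][a-zA-Z0-9_]* and [0-9]+ (the table here is written by hand for illustration, not actually generated):

```rust
const REJECT: usize = 0;
const START: usize = 1;
const IN_IDENT: usize = 2;
const IN_NUMBER: usize = 3;

// transition[state][byte_class]: class 0 = other, 1 = letter/underscore, 2 = digit
const TRANSITION: [[usize; 3]; 4] = [
    [REJECT, REJECT, REJECT],       // REJECT is a trap state
    [REJECT, IN_IDENT, IN_NUMBER],  // START
    [REJECT, IN_IDENT, IN_IDENT],   // IN_IDENT: digits allowed after the first char
    [REJECT, REJECT, IN_NUMBER],    // IN_NUMBER
];

fn byte_class(b: u8) -> usize {
    match b {
        b'a'..=b'z' | b'A'..=b'Z' | b'_' => 1,
        b'0'..=b'9' => 2,
        _ => 0,
    }
}

// Returns the length of the longest identifier/number starting at input[0],
// plus the state it ended in -- the tight inner loop a generated lexer runs.
fn longest_match(input: &[u8]) -> (usize, usize) {
    let mut state = START;
    let mut last_accept = (0, REJECT);
    for (i, &b) in input.iter().enumerate() {
        state = TRANSITION[state][byte_class(b)];
        if state == REJECT {
            break;
        }
        last_accept = (i + 1, state);
    }
    last_accept
}

fn main() {
    assert_eq!(longest_match(b"foo42+1"), (5, IN_IDENT));
    assert_eq!(longest_match(b"123abc"), (3, IN_NUMBER));
    println!("ok");
}
```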