So searching by the terms you give, the actual spec lists two possibilities: a lui/jalr pair and an auipc/jalr pair. And they're not presented as 'these are the blessed instruction pairs high performance implementations should seek to fuse' but more as 'you can do this if you want'. You could maybe count the return address stack hints, but they're not about instruction fusion, just a common branch predictor optimisation.
An unsourced table on WikiChip (added in 2019 and not updated since) barely counts as a proposal, in the sense of something standing a good chance of becoming part of a ratified RISC-V standard.
No doubt many high performance implementations will choose to use fusion, and no doubt they'll all go for different combinations, with different edge cases. Yes, there will likely be significant overlap, but it could become a bit of a nightmare for compiler writers. A thorough standardized list of instruction pairs to fuse would definitely help here, but we don't have one.
Just one example of the weirdness from that table:

slli rd, rs1, {1,2,3}
add  rd, rd, rs2

...fused into a "load effective address".
...this is so baffling. I genuinely cannot understand how anyone thought this was okay and good design. If this gets "fused" into an lea, why not just define an actual opcode for lea? I'm completely at a loss as to how messed up that is.
Honestly, I'd like to meet the person who thinks this is okay.
The Bitmanip "LEA" instructions were added primarily for sh1add.uw, sh2add.uw, and sh3add.uw, which not only shift and add but also zero-extend the lower 32 bits of the rs1 register to 64 bits before shifting and adding them.
Thus they are replacing not two instructions but three. This addition was motivated by the number of critical loops in legacy software that, against C recommendations, try to "optimise" code by using "unsigned" for loop counters and array indexes instead of the natural types int, long, size_t, or ptrdiff_t. This can indeed be an optimisation on amd64 and arm64, but it is a pessimisation on RISC-V, MIPS, Alpha, and PowerPC.
One codebase that uses "unsigned" in this way is CoreMark, and they explicitly prohibit fixing the variable type. But it's also common in SPEC and in much code optimised for x86 and ARM in general, where using "int" pessimises the code. If they used long, unsigned long, or the size_t or ptrdiff_t typedefs, the code would run well everywhere.
While the .uw instructions were being added, it was very low cost to add the versions using all the bits of rs1 at the same time.
So, in the context of this discussion, having 32-bit operations sign-extend their results to 64 bits is the unusual choice. More ISAs sign-extend than zero-extend, but the ones most common in the market (amd64 and arm64) zero-extend. Note that at the time RISC-V was designed, arm64 had not yet been announced, so only amd64 did zero-extension.