Of all the complaints I have about email, encoding handling is not really one of them. Everybody decided long ago to ignore binarybody, and just send everything base64 encoded. I really think the result is much simpler than HTTP.
The "encoding" here is, I think, charset encoding, not MIME transfer encoding. Determining the charset is basically a game of roulette; the last time I looked, only around 40% of messages actually adhered to the charset they declared. A classic (and simple) error case is people who write Windows-1252 and then label it ISO-8859-1, not realizing that there actually is a difference between the two.
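The difference is confined to one byte range, which is exactly why the mislabeling goes unnoticed until a "smart quote" shows up. A minimal sketch (the sample bytes are just illustrative):

```python
# Windows-1252 and ISO-8859-1 agree on every byte except 0x80-0x9F:
# 1252 assigns printable characters there (curly quotes, em dash, euro),
# while 8859-1 has invisible C1 control codes. So text typed with curly
# quotes on Windows but labeled ISO-8859-1 decodes to control characters.
raw = b"\x93quoted\x94"  # curly quotes as a Windows editor would emit them

as_1252 = raw.decode("windows-1252")
as_8859 = raw.decode("iso-8859-1")

print(as_1252)        # “quoted”  -- what the sender meant
print(repr(as_8859))  # '\x93quoted\x94' -- C1 controls, not quotes
```

Everything outside 0x80–0x9F decodes identically, which is why the two labels are so often treated as interchangeable.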
(Email could use a better binary encoding for attachments than base64, though, since transports are basically 100% 8-bit safe, even if not binary safe. Usenet went with yEnc, which the IETF balked at, a case of the perfect being the enemy of the good.)
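The cost being complained about here is concrete: base64 emits 4 output bytes for every 3 input bytes, a fixed ~33% inflation regardless of content, whereas yEnc's overhead on typical binaries is only a few percent. A quick demonstration of the base64 side:

```python
import base64

# Arbitrary binary payload: all 256 byte values, repeated.
payload = bytes(range(256)) * 40  # 10,240 bytes

encoded = base64.b64encode(payload)
overhead = len(encoded) / len(payload) - 1

# 4 output bytes per 3 input bytes => ~33% larger on the wire.
print(f"{len(payload)} bytes -> {len(encoded)} bytes ({overhead:.0%} overhead)")
```

An 8-bit-safe (but not binary-safe) transport only needs a handful of bytes escaped, which is the niche yEnc fills.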
Email was actually the first network protocol to deal with this problem successfully, by creating the standard that every other protocol uses today to declare an encoding. But broken clients will send broken messages, as they do in any protocol.
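That standard is the MIME Content-Type header with its charset parameter, the same `text/plain; charset=utf-8` shape HTTP later borrowed. A small sketch using Python's stdlib email package:

```python
from email.message import EmailMessage

# Build a message with non-ASCII text; the library emits the MIME
# Content-Type header with an explicit charset declaration.
msg = EmailMessage()
msg.set_content("Ol\u00e1, mundo!")

print(msg["Content-Type"])        # e.g. text/plain; charset="utf-8"
print(msg.get_content_charset())  # utf-8
```

The declaration mechanism works fine; the 40% adherence figure above is about senders putting the wrong value in it.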
I have seen plenty of web sites broken by it too, and it's a problem when moving files between Linux and Windows.
(And yes, things would be better if people standardized on binarybody instead of 7bitmime. Unfortunately, the Microsoft server announces its support everywhere, but its implementation is broken, so nobody can rely on that one extension. (As a nested aside: there is a work-around that works everywhere, but it goes against the standard.))
Charset encoding is definitely a pain everywhere. Email's specific problem is that the charset label is essentially mandatory, but often incorrect. The light at the end of the tunnel is that there is general agreement that the future of charsets is "use UTF-8 everywhere," so it's just a matter of waiting a century for all the legacy stuff to die off.
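Because the label can't be trusted, mail readers end up decoding defensively. A minimal sketch of one such heuristic (the function name and fallback order are my own invention, not from any standard): try the declared charset strictly, then UTF-8, then give up gracefully with Windows-1252.

```python
def decode_lenient(raw: bytes, declared: str = "us-ascii") -> str:
    """Decode message bytes while distrusting the declared charset.

    Hypothetical heuristic: try the declared charset strictly, then
    UTF-8 (strict decoding rejects most non-UTF-8 data, so a success
    is a strong signal), then fall back to Windows-1252 with
    replacement characters so decoding never raises.
    """
    for codec in (declared, "utf-8"):
        try:
            return raw.decode(codec)
        except (UnicodeDecodeError, LookupError):
            pass
    return raw.decode("windows-1252", errors="replace")

print(decode_lenient(b"\xe2\x9c\x93 done", declared="us-ascii"))  # survives a wrong label
print(decode_lenient(b"\x93hi\x94", declared="utf-8"))            # falls back to 1252
```

The weak spot is inherent to the problem: ISO-8859-1 decoding never fails on any byte sequence, so a wrong 8859-1 label can only be caught by fuzzier statistical guessing, not by strict decoding.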