Yep. I think about this stuff a lot and I still mess this up all the time. I’ve come to realise that string.length in javascript is a footgun. Basically every time I use it, my code turns out to be wrong as soon as I test with non-ASCII characters.
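For example (a quick sketch - the strings here are just stand-ins, but this runs in any modern engine):

    // .length counts UTF-16 code units, not characters
    const face = '😄';             // one character, one astral code point
    console.log(face.length);      // 2 - a surrogate pair

    const flag = '🇳🇿';             // one visible "flag", two code points
    console.log(flag.length);      // 4 - four code units

    // counting code points instead gets closer to what you usually want
    console.log([...face].length); // 1
    console.log([...flag].length); // 2 - still not 1 grapheme, but closer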
I’ve spent the last decade writing javascript and I’ve never once actually needed to know what half the UTF-16 byte length of a string is - which is what .length really gives you, the UTF-16 code unit count.
The only legitimate uses are internal to javascript (since other parts of JS make that quantity meaningful), like passing it to slice() or using it to iterate - though for the latter we now have Unicode string iteration anyway.
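A rough sketch of what I mean (my own example, nothing official): slice() and .length speak the same code-unit indices, so they’re at least consistent with each other, but neither respects character boundaries - whereas for...of walks code points:

    const s = 'a😄b';
    console.log(s.length);       // 4 - 1 + 2 + 1 code units
    console.log(s.slice(0, 3));  // 'a😄' - slice indices are code units too
    console.log(s.slice(0, 2));  // 'a\ud83d' - a lone surrogate, not a character

    // Unicode string iteration walks code points, not code units
    for (const ch of s) console.log(ch); // 'a', '😄', 'b'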