Going from O(n) to O(1) on an operation where n is 10 could be a performance downgrade. That doesn't mean big-O analysis is useless; it just means there's more to performance than the big O.
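A minimal sketch of how you might check this for yourself on CPython, using `timeit` and a hypothetical 10-element membership test; which side wins depends on the runtime and the data, which is exactly the point: at n = 10 the asymptotics don't decide it for you.

```python
import timeit

# Micro-benchmark sketch: membership test on 10 items, comparing an
# O(n) linear scan over a list with an O(1) expected hash lookup in a
# set. At this size, constant factors dominate the asymptotic class.
items = list(range(10))
as_list = items            # O(n) membership test
as_set = set(items)        # O(1) expected membership test

for name, container in [("list", as_list), ("set", as_set)]:
    t = timeit.timeit(lambda: 9 in container, number=1_000_000)
    print(f"{name}: {t:.3f} s for 1,000,000 lookups")
```

On CPython the set often wins even at this size because hashing a small int is cheap, while in lower-level languages a linear scan over a small contiguous array frequently comes out ahead of a hash table; either way, the only way to know is to measure.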
Asymptotic complexity is about how things scale up. You should care about it when you're working with things that scale. That doesn't mean it's useless if your data is static; it just means you need to understand when to reach for the solution that scales and when not to.
① you're writing a real-time control program such as a motor controller and you care about not just the asymptotic complexity of your algorithms but their worst-case execution time, in microseconds; or
② you should just do the computation by hand with pencil and paper instead of writing and debugging a program to do it for you.
at the point that the input data becomes too big for ② to be an appealing option, n is at least 100, which means you probably care a lot about whether your program is O(n) or O(n⁴), at least if you wrote it in something slow like python
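To put rough numbers on that, here's a back-of-envelope sketch; the figure of 10^7 simple operations per second for interpreted Python is a loose assumption, not a measurement.

```python
# Back-of-envelope: at n = 100, an O(n) algorithm does ~100 basic steps,
# while an O(n^4) one does 100**4 = 100,000,000. Assuming (loosely) that
# a pure-Python loop manages ~10**7 simple operations per second:
n = 100
ops_per_second = 10**7          # rough assumption for interpreted Python

for name, steps in [("O(n)", n), ("O(n^4)", n**4)]:
    print(f"{name}: {steps:,} steps ≈ {steps / ops_per_second:.6f} s")
```

That works out to roughly "instant" versus around ten seconds, which is why at n ≈ 100 you already care a lot which class your program is in.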
maybe the time to start worrying about it is after you get the program debugged and running for the first time
It can be useful in general, but it is hard to provide the kind of tolerances, expressed in physical units, that you'd expect from an engineered solution when all the constants are ignored.
I'm just curious what you're talking about. Tolerances and physical units are not usually mentioned in relation to time/space complexity. I'm open to the idea that you may know something I don't; feel free to enlighten me.
Then again, big-O is useless in many cases because real computers have too many arbitrary performance thresholds (cache sizes, page boundaries, branch predictors) that it ignores.
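One such threshold is visible even from Python; the sketch below assumes numpy is available and simply measures the effective bandwidth of summing arrays of growing size. On many machines throughput drops once the array outgrows the CPU caches, and where exactly that happens varies from machine to machine, none of which appears in the big-O analysis.

```python
import time
import numpy as np

# Rough sketch: effective bandwidth of summing a float64 array at
# several sizes. On many machines, throughput falls off once the array
# no longer fits in the L2/L3 caches; the exact thresholds vary by CPU.
for n in [10_000, 100_000, 1_000_000, 10_000_000, 50_000_000]:
    a = np.ones(n, dtype=np.float64)
    a.sum()                                  # warm-up / page-in
    reps = max(1, 10_000_000 // n)           # keep each run measurable
    start = time.perf_counter()
    for _ in range(reps):
        a.sum()
    elapsed = time.perf_counter() - start
    gb_per_s = a.nbytes * reps / elapsed / 1e9
    print(f"n = {n:>11,}: {gb_per_s:6.1f} GB/s")
```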
I suspect software engineering is impossible, or at least, nobody has made the model required to do it.