Most of the time, big O is not relevant if you're not writing new algorithms. For the usual CRUD app? Not so much. Heck, even for more involved work you often won't touch it yourself, because your solution will call, or be called by, some external tool that already handles that part.
The good news is that, while the knowledge is still valuable, you won't have to waste time reinventing a broken wheel. The bad news is that this part of CompSci is now mostly fundamental research.
And then you get needlessly inefficient CRUD apps that take minutes or even hours to generate a simple report because they're doing accidentally quadratic or even cubic work somewhere.
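To make that concrete, here's a minimal sketch of what that usually looks like (the customer/order record shapes and field names are made up for illustration): the "report" written as a nested scan is quadratic, while building an index over the orders first makes it linear.

```python
def report_quadratic(customers, orders):
    # For every customer, scan the whole order list:
    # O(len(customers) * len(orders)) -- accidentally quadratic.
    lines = []
    for c in customers:
        total = sum(o["amount"] for o in orders if o["customer_id"] == c["id"])
        lines.append((c["name"], total))
    return lines

def report_linear(customers, orders):
    # Build an index once, then do O(1) lookups per customer:
    # O(len(customers) + len(orders)).
    totals = {}
    for o in orders:
        totals[o["customer_id"]] = totals.get(o["customer_id"], 0) + o["amount"]
    return [(c["name"], totals.get(c["id"], 0)) for c in customers]
```

Same output, same "simple report", but only one of them still finishes quickly once there are a few hundred thousand orders.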
This is just wrong. Understanding the difference between O(1), O(n), etc. is essential for literally everyone who writes code. Every single programmer is better with this understanding than without it. You should know the complexity of the code you write - and most of the time that doesn't even require actively thinking about it. If you know the basics of complexity analysis, it becomes intuitive.
Sure, you don't strictly need it to write working code. But you will very quickly run into situations where you're writing unnecessarily slow code because you don't know what you're doing.
Understanding algorithm costs can be extremely important when building a simple CRUD app, because it helps you reason about how things will scale with the size of the data you'll see in the real world.
It's easy to write something accidentally quadratic that's fine until your big customer has 10 times the data, and then it suddenly isn't fine anymore (see the sketch below).
That doesn't mean you need to optimise everything, but it also doesn't mean you shouldn't think about this stuff at all.
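Here's a rough illustration of the "10 times the data" point (the deduplication task and the sizes are made up, not taken from any real app): the list-based version is O(n^2), the set-based version is O(n), and only one of them gets roughly 100x slower when the input grows 10x.

```python
import time

def dedupe_quadratic(items):
    seen = []
    for x in items:
        if x not in seen:      # 'in' on a list is a linear scan, done n times -> O(n^2)
            seen.append(x)
    return seen

def dedupe_linear(items):
    seen = set()
    for x in items:
        seen.add(x)            # set insert/membership is O(1) on average -> O(n)
    return list(seen)

for n in (2_000, 20_000):      # "your big customer has 10 times the data"
    data = list(range(n))
    t0 = time.perf_counter()
    dedupe_quadratic(data)
    t1 = time.perf_counter()
    dedupe_linear(data)
    t2 = time.perf_counter()
    # Expect the quadratic time to grow roughly 100x between the two runs,
    # while the linear time grows roughly 10x.
    print(f"n={n}: quadratic {t1 - t0:.3f}s, linear {t2 - t1:.3f}s")
```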