One of the things that people aim for in writing tests is orthogonality - different tests should not break for the same reason. This promotes the ability to refactor and change your code. But I have also seen massive codebases with masses of tests that were rarely run, and which effectively concreted the code and stopped it from changing.
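As a rough illustration (the function and tests here are invented, pytest-style), orthogonal tests each pin down one distinct behaviour, so each one breaks for its own reason:

```python
# Hypothetical example: each test covers one distinct behaviour of a
# made-up parse_price() helper, so each test fails for a different reason.

def parse_price(text):
    """Parse a string like '$12.50' into a number of cents."""
    return round(float(text.strip().lstrip("$")) * 100)

def test_parses_dollars_and_cents():
    assert parse_price("$12.50") == 1250

def test_ignores_surrounding_whitespace():
    assert parse_price("  $3.00 ") == 300
```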
But doesn't test redundancy reduce the risk that a test isn't testing the thing you thought it was?
If the same situation is tested in two different ways, a bug in one test might cause that test to miss some cases, but the second test might still catch them.
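For instance (a hypothetical sketch), the same behaviour checked by two differently-written tests, so a mistake in one test's own logic does not silently hide a defect:

```python
# Hypothetical sketch: the same rounding behaviour is checked two ways.
# If the loop in the first test had a bug (say, an off-by-one range),
# the explicit boundary cases in the second test could still catch a defect.

def round_to_nearest_ten(n):
    return int(round(n / 10.0)) * 10

def test_rounding_over_a_range():
    for n in range(0, 100):
        assert round_to_nearest_ten(n) % 10 == 0

def test_rounding_at_specific_boundaries():
    assert round_to_nearest_ten(14) == 10
    assert round_to_nearest_ten(15) == 20
    assert round_to_nearest_ten(16) == 20
```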
Maybe auto-generation of test cases would be a better technology?
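One existing technology in that direction is property-based testing, which generates the cases for you. A minimal sketch using the Hypothesis library (the function under test is the same made-up one as above):

```python
from hypothesis import given, strategies as st

def round_to_nearest_ten(n):
    return int(round(n / 10.0)) * 10

@given(st.integers(min_value=-10_000, max_value=10_000))
def test_result_is_a_multiple_of_ten(n):
    # Hypothesis generates many integers; we assert a property that must
    # hold for all of them, rather than hand-picking examples.
    assert round_to_nearest_ten(n) % 10 == 0

@given(st.integers(min_value=-10_000, max_value=10_000))
def test_result_is_within_five_of_the_input(n):
    assert abs(round_to_nearest_ten(n) - n) <= 5
```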
Typically tests can provide fault detection and fault isolation, but to different degrees.
A feature/e2e test typically provides the most effective kind of fault detection, because it combines all the components of the real system with no stubs or mocks. But if it shows a fault, then typically that fault is hard to identify, because it wasn't previously driven out by a more detailed integration or unit test.
Contrariwise, integration or unit tests will typically be best suited to isolating faults, but those faults are limited to the class of things you included in your tests.
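As a rough sketch of that trade-off (all names here are invented): the unit test stubs a collaborator, so a failure points straight at one component, while the feature-style test wires the real parts together, detecting more but localising less:

```python
# Hypothetical sketch. PriceService depends on a rate source. The unit test
# stubs the rate source, so a failure points straight at PriceService's own
# logic. The feature-style test uses the real collaborator, so it can detect
# faults in either part, but a failure doesn't say which part is to blame.

class FixedRates:
    def rate(self, currency):
        return {"GBP": 0.8, "EUR": 0.9}[currency]

class PriceService:
    def __init__(self, rates):
        self.rates = rates

    def convert(self, usd_amount, currency):
        return round(usd_amount * self.rates.rate(currency), 2)

class StubRates:
    def rate(self, currency):
        return 2.0  # fixed fake rate: isolates PriceService's own logic

def test_unit_conversion_logic_in_isolation():
    assert PriceService(StubRates()).convert(10, "GBP") == 20.0

def test_feature_conversion_with_real_collaborators():
    assert PriceService(FixedRates()).convert(10, "GBP") == 8.0
```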
This is why we have "test pyramids": a handful of slow, brittle feature tests at the top, then an increasing volume of faster, less-brittle integration and unit tests.
TDD is almost orthogonal to the testing pyramid, with one difference from non-TDD tests: in TDD, each line of code is, ideally, driven out by a test or tests. Fault detection is increased, because each line was written in response to a manufactured (test-first) fault.
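A minimal illustration of that rhythm (the names are hypothetical): the test is written first and fails, and the implementation below exists only because the test demanded it:

```python
# Hypothetical TDD step, written in this order:
# 1. test_leap_year_rules() was written first and failed (leap_year didn't exist).
# 2. leap_year() was then written with just enough logic to make it pass.
# Every line of the implementation answers to a specific assertion above it.

def test_leap_year_rules():
    assert leap_year(2024) is True    # divisible by 4
    assert leap_year(1900) is False   # centuries are not leap years...
    assert leap_year(2000) is True    # ...unless divisible by 400

def leap_year(year):
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0
```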