With the right basis functions, yes, you would end up approximating the target function well. The problem is: how do you determine the right basis functions? In SVMs, for example, there is still no one good way to pick the right kernel. This is where ANNs have an advantage - the universal approximation theorem guarantees that a single hidden layer with enough units can approximate any continuous function, so there's no separate basis-function choice to get right. Sure, there are hyperparameters like the number of layers, but those are something you need to account for in other techniques too.
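To make the point concrete, here's a small sketch of the classic constructive intuition behind the theorem (not anything from the comment itself): pairs of steep sigmoids form indicator-like "bumps", and a single hidden layer of such units, with hand-picked rather than learned weights, already approximates sin(x) to within a few percent. All names and parameter values here are illustrative.

```python
import math

def sigmoid(z):
    # Clamp to avoid overflow in exp() for very steep units.
    if z < -60:
        return 0.0
    if z > 60:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def bump_net(x, f, n_bumps=200, lo=0.0, hi=2 * math.pi, steepness=500.0):
    """A one-hidden-layer net with 2*n_bumps sigmoid units and fixed weights.

    Each pair of sigmoids (one shifted left, one right) forms a near-indicator
    bump over one bin; scaling each bump by f at the bin centre yields a
    piecewise-constant approximation of f on [lo, hi].
    """
    width = (hi - lo) / n_bumps
    total = 0.0
    for i in range(n_bumps):
        centre = lo + (i + 0.5) * width
        left = sigmoid(steepness * (x - (centre - width / 2)))
        right = sigmoid(steepness * (x - (centre + width / 2)))
        total += f(centre) * (left - right)
    return total

# Evaluate away from the interval boundary, where the construction is valid.
grid = [0.1 + j * (2 * math.pi - 0.2) / 999 for j in range(1000)]
max_err = max(abs(bump_net(x, math.sin) - math.sin(x)) for x in grid)
print(max_err)  # small: the hand-built net tracks sin(x) closely
```

The error shrinks as you add more bumps, which is exactly the theorem's trade-off: approximation quality is bought with hidden-layer width, not with a cleverer choice of basis.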
I'm not saying ANNs always give the best solution, or even that they're the best first model to try on every problem; I'm only pointing out that there is a convincing case to be made for them, and the universal approximation theorem has a lot to do with it.
Regarding your other point, Turing completeness was proved separately - see the paper "Turing computability with neural nets" by Siegelmann and Sontag. I'm not sure the parent comment intended to conflate the two results.