Yes, I think you're confused about regularisation. A regulariser is (usually) a component of the overall loss whose goal is to simplify the model or prevent overfitting, as opposed to the main component, whose goal is to fit the data. It's not another term, or a formal term, for penalty.
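To make the "component of the overall loss" idea concrete, here's a minimal sketch (function and parameter names are just illustrative) of a regularised loss: a data-fitting term (mean squared error) plus an L2 regulariser weighted by a hypothetical coefficient `lam`:

```python
import numpy as np

def total_loss(w, X, y, lam=0.1):
    fit = np.mean((X @ w - y) ** 2)   # main component: fit the data (MSE)
    reg = lam * np.sum(w ** 2)        # regulariser: discourages large weights
    return fit + reg
```

With `lam = 0`, you're back to plain fitting; increasing `lam` trades some fit for a simpler (smaller-weight) model.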
Thanks for explaining that. It's unfortunate that "penalty" is already taken. One of the interesting things about ML is that you can study it for a year and a half and still uncover things you didn't know, which I love.
I guess I'll call loss "punishment." It matches how it feels to make progress in ML anyway.