You've made one huge mistake: Davinci's $0.02 is billed per 1k tokens processed, not just per 1k tokens generated; context tokens count too. So if you generate 50 tokens per request with 1k tokens of context, the effective price is roughly 20 times higher, about $0.40 per 1k tokens generated. That's much less palatable, costing about 3 times as much as the cloud-hosted version of this.
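To make the arithmetic concrete, here is a quick sketch of the effective cost per generated token under token-based billing. The rate and the request shape (1k context, 50 generated) are the assumed numbers from above, not measured values:

```python
# Effective cost per 1k generated tokens when billing counts
# both context and generated tokens (assumed Davinci-style pricing).
price_per_1k_billed = 0.02   # $ per 1k billed tokens (assumed rate)
context_tokens = 1000        # prompt/context tokens per request
generated_tokens = 50        # output tokens per request

billed_tokens = context_tokens + generated_tokens
cost_per_request = price_per_1k_billed * billed_tokens / 1000
cost_per_1k_generated = cost_per_request / generated_tokens * 1000
print(f"${cost_per_1k_generated:.2f} per 1k generated tokens")
```

This works out to about $0.42 per 1k generated tokens, i.e. roughly the 20x multiplier mentioned above.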
And that's not even taking into account the gigantic markup cloud services have.
Most of the computational cost of producing an output token is spent on consuming input tokens (including previous output tokens that are fed back in); the only work you could skip is the final unembedding-matrix multiplication for context positions whose output logits you don't need.
So it's not correct to adjust only OpenAI's prices for the ratio of context tokens to output tokens: both costs get multiplied by roughly 20 (if that's what your ratio is).