Aside: do you still need to use a cache service when using Postgres? Our Django web app forgoes any caching because “Postgres is fast enough” and “has its own cache”.
Assuming you mean an application-level cache, you can't really generalize. In general you want to avoid caching anything if you can, because cache invalidation is a really hard problem.
I read a comment like this and just desperately want to look at the query log to see what's actually being run, then EXPLAIN a few of the offenders to see what the planner is working with. You can get really deep into optimising your DB, but often there's a lot of low-hanging fruit you can make "good enough" in no time at all.
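For anyone who wants to actually do that, here's a rough sketch of where I'd start, assuming Postgres behind Django (the Order model is made up):

    # postgresql.conf (or ALTER SYSTEM): log every statement slower than 250 ms
    #   log_min_duration_statement = 250
    # Then EXPLAIN the offenders. With Django you can do it straight from the ORM:
    from myapp.models import Order  # hypothetical model

    slow_qs = Order.objects.filter(status="open").select_related("customer")
    print(slow_qs.explain(analyze=True))  # runs EXPLAIN ANALYZE on the underlying SQL

More often than not the plan points straight at a missing index, which is exactly the kind of low-hanging fruit the parent is talking about.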
That really depends on what you're caching. Sometimes, for example, you might want local caches on the application servers, not because the database would be too slow or couldn't take the load, but because the roundtrip time to the database makes page loads too slow if you hit the DB every time.
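In Django terms that can be as simple as a per-process in-memory cache - a sketch assuming the built-in LocMemCache backend is good enough (with many app servers Redis/Memcached may still win on hit rate, but then you're back to a network hop):

    # settings.py: per-process, in-memory, zero network roundtrips
    CACHES = {
        "default": {
            "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
            "TIMEOUT": 60,  # seconds; "stale but instant" is the tradeoff being made
        }
    }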
Or you have some complex ranking/relationship/aggregation calculations - however hard people have worked on the database's efficiency, it might be computationally infeasible to run such a query on every request...
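That's the case where a cache really earns its keep. A rough sketch with Django's cache API - the Article model and its "likes" relation are made up:

    from django.core.cache import cache
    from django.db.models import Count
    from myapp.models import Article  # hypothetical model

    def compute_trending():
        # heavy aggregation the database shouldn't redo on every request
        return list(Article.objects.annotate(n=Count("likes")).order_by("-n")[:20])

    # recompute at most once every 5 minutes; everyone else gets the cached ranking
    trending = cache.get_or_set("trending:articles", compute_trending, timeout=300)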
Depends on what you're using the cache for. Postgres is definitely fast enough for K/V reads at a scale your master node can support for most use cases, but there are cases you can't serve with Postgres alone at scale, e.g.:
1. Needing a consistent data source (meaning reading off replicas is not fresh enough) for reads and writes that happen on every request to a very high-traffic site. Think sessions, API rate limiting.