Hacker News

Supabase has the pgvector extension, and that's enough for my limited RAG use cases. I don't really need anything beyond Postgres. On the other hand, enterprises might find it easier/cheaper to buy a second db than to migrate their existing db to whatever the latest version is. I don't think it's as simple.
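(To make the pgvector point concrete: its `<=>` operator ranks rows by cosine distance to a query embedding. Here's a minimal brute-force Python sketch of the same lookup; the documents, embeddings, and query are all invented for illustration.)

```python
# Brute-force version of the nearest-neighbor lookup pgvector performs
# with its "<=>" (cosine distance) operator. Toy data for illustration.
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

docs = {
    "doc1": [0.9, 0.1, 0.0],
    "doc2": [0.0, 1.0, 0.0],
    "doc3": [0.7, 0.7, 0.1],
}
query = [1.0, 0.0, 0.0]

# Roughly equivalent in spirit to:
#   SELECT id FROM docs ORDER BY embedding <=> %s LIMIT 2;
top2 = sorted(docs, key=lambda d: cosine_distance(docs[d], query))[:2]
print(top2)  # → ['doc1', 'doc3']
```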


Exactly. We use Supabase too, but we're at a scale where it made more sense to use a second, dedicated vector db (Pinecone) than to bloat our Postgres db, which has a completely different workload.


Bloat your DB... or pull in an entirely new vendor and bloat your entire operational outlay.

I'd really love to know what kind of insane scale justifies that tradeoff...


That's a very exaggerated way to look at things lol. Nothing got bloated at all in this process; we are just using the right tools for the job. I'm a solo founder and the only backend developer. I can assure you this decision only made my life easier by choosing the correct tech from the get-go.


Nothing exaggerated; your comment implied scale was your justification. In that case there'd better have been some crazy high load that brought the tool you already had to its knees, to justify paying for an additional closed-source platform and manually piping data to it in addition to your main data store.

Of course if I sounded incredulous it's because I didn't think you had that scale, and it sounds like I was correct?


No, you're not correct. We have over $2M ARR and with the amount of data we are storing it would be downright stupid to use Supabase.

We also don't "pipe" our data to Supabase; we use a couple of different data stores depending on the best use case. For example, we also use R2 and Durable Objects.

Just because you have a hammer doesn't mean everything is a nail.


Well maybe we have different definitions of scale: I think my team spends about $2M a month on compute, so we don't pride ourselves on randomly pulling in new vendors.


You are incredibly dense


When you're playing checkers people playing by the rules of chess might seem dense.


It's pretty incredible how you know more about our needs, usage, and infrastructure than we do :D

Also don't you think it's funny calling out somebody else's tech choices when you have zero insight into it and when it's worked out perfectly for us?

By the way, how many TBs of vector data are you storing in Postgres and needing to retrieve with minimal latency?


I work on autonomous vehicles: we generate more data every day than you likely generate in 10 lifetimes of your CRUD app escapades.


Bragging about generating lots of data? Wow, cool buddy

Literally irrelevant lol


Don't forget your latency!


They are separate systems. We don't touch Postgres for the same code that needs to access Pinecone.


How do you deal with security and access control across Postgres and Pinecone?


We use Cloudflare Workers for our API and handle auth by checking the JWTs with Supabase and caching the result. So we already had the necessary auth setup to do this.

For basic CRUD we use the Supabase endpoints directly but none of that involves querying a vector db :P


Recently ditched Supabase for Weaviate. I was tired of the Python bindings not keeping up, the lack of hybrid search, and slower search algorithms. Also, Supabase has a lot of features that I just don't need.
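(For readers unfamiliar with the "hybrid search" mentioned above: it merges a keyword ranking and a vector ranking into one result list. Reciprocal rank fusion is one common way to do that; here's a toy sketch with invented document IDs and rankings.)

```python
# Reciprocal rank fusion (RRF): one common way to combine a keyword-based
# ranking with a vector-based ranking in hybrid search. Toy data only.
def rrf(rankings, k=60):
    """Fuse several ranked lists; higher fused score ranks earlier."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["d3", "d1", "d7"]   # e.g. from BM25 / full-text search
vector_hits  = ["d1", "d9", "d3"]   # e.g. from ANN similarity search
print(rrf([keyword_hits, vector_hits]))  # → ['d1', 'd3', 'd9', 'd7']
```

Documents that appear high in both lists float to the top, which is why hybrid search often beats either ranking alone.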



