PouchDB, the JavaScript Database That Syncs (pouchdb.com)
321 points by _qc3o on Dec 4, 2016 | 94 comments



I've been using PouchDB in a React Native app for about 6 months with SQLite as its backing store so that we can use more than 50MB of storage on the device. It has been working pretty well to persist data into an offline cache and then sync to a CouchDB 2.0 database hosted on DigitalOcean.
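Roughly, the setup looks like the sketch below. This is not our actual code: the adapter plugin and URLs here are assumptions (e.g. the community pouchdb-adapter-react-native-sqlite package), so treat it as an illustration only.

    import PouchDB from 'pouchdb';
    import SQLiteAdapter from 'pouchdb-adapter-react-native-sqlite'; // assumed community adapter

    PouchDB.plugin(SQLiteAdapter);

    // Local cache backed by SQLite instead of AsyncStorage, so no ~50MB cap
    const local = new PouchDB('offline_cache', { adapter: 'react-native-sqlite' });

    // Continuous two-way sync with the remote CouchDB (placeholder URL)
    local.sync('https://couch.example.com/offline_cache', { live: true, retry: true })
      .on('error', err => console.warn('sync error', err));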

Getting it to work inside React Native was initially very challenging. Keeping our shim up to date with the recent changes to PouchDB has also been challenging. We are currently using PouchDB 5.4.5 because there was a breaking change in 6.x and I haven't had a chance to dive into it to figure out what is going wrong.

The PouchDB community (especially Nolan Lawson) does a great job of showing examples, answering questions, and responding to feedback.


+1 for the community. Everyone who's worked with Pouch/Couch seems to become an evangelist eventually, which speaks volumes for how well designed the platforms are (flaws and all). People genuinely seem to enjoy working on them, and they solve real problems in very neat and interesting ways.


Did you also consider Couchbase Lite? I'd be interested in your decision making.


I've used pouch on a few projects and also attempted to use cb lite. I found pouch dramatically easier to get up and running with.

CB Lite was more complicated since it needs a native plugin rather than being just JS, though that part wasn't too bad.

The thing that put the nail in it for me was all the setup on the server side: sync gateways, multiple databases that then sync with one another, and so on.


CBL has worked OK so far for us with just plain CouchDB server-side, no sync gateways. But I can see where you're coming from; if your stack is centered around JS, I think PouchDB makes more sense. We have potentially large DBs (depends on the use case) where the browser context isn't enough and/or provides a bad user experience.


Oh so you're using CBL with a couchdb backend? I didn't even realise that was possible. I thought it only synced with Couchbase which is where all the complexity came in.


Yes, it's possible. There's the occasional quirk, but so far nothing that has set us back too much. I'm not a fan of Couchbase's Sync Gateway either; I'm not sure the problem it solves is a very common one.


sync_gateway is basically a couchdb protocol proxy, for couchbase-lite to use, that sits in front of couchbase.

couchbase-lite apparently also works with couchdb (protocol compliant) and cloudant.

I am currently working at a gig where the use of couchbase-lite with sync gateway and couchbase are required for a project.


Couchbase Lite was not something that we considered. Once we got PouchDB up and running it seemed to fit our needs well enough that we stopped considering alternatives.

I've upvoted davidbanham's comment because that seems like a more insightful answer to your question.


Thanks for responding! These kinds of discussions are exactly why I like HN so much.


I explored using Couchbase Lite in an app and realized that PouchDB was still the best client library for it. At least in my testing of Couchbase Lite's Cordova plugin I hit a bunch of bugs (the Windows version of Couchbase Lite still isn't included in the Cordova plugin; the plugin is still called a PhoneGap plugin and seems to lag several major Cordova versions behind, last I tried to install it). And I didn't see much of a performance benefit, even on iOS/Android, over PouchDB's performance with IndexedDB and/or SQLite.


+1 for a great community with CBL, too! Whenever we've had some trouble or a question, we've had helpful, friendly advice. I'm a big fan of CouchDb, Pouch, and CBL, all of which my team uses. Note that we use CBL with our native mobile apps.


Is CBL able to sync with CouchDB 2.0? Couldn't tell from looking quickly at their website.


How many databases out there work with React Native? I think http://lokijs.org does with Forage, and (I work on) https://github.com/amark/gun people have been getting it to work with adapters. The comment below seems to suggest Couchbase Lite does too? What made you guys decide on Pouch for React Native apps?


Sorry, I haven't yet tried CBL together with React, so I don't know how mature that solution would be - CBL works with native ObjC/Java code, so I guess some adapters would be needed for React to work with it. This is at the same time one of my problems with React Native - it's expensive in programmer time to access resources / native libraries that aren't yet provided by the core. Just to compare with a very small project, python-for-android: there you can directly access any Java function available on the device runtime; it automatically binds to all of it. React Native, on the other hand, seems very frameworky and boxed in, but it has great portable UI definitions and a nice build/prototyping workflow.

I hope one day soon, after already 10 years of iPhone, we finally get a sane development environment again - with stable APIs, responsive UIs, portable code, easy networking and persistence, but at the same time fast to prototype and hook up. All the solutions I know are around 70%-80% there; there's always something that hurts badly.



"not an official couchbase plugin and is not supported in any way by couchbase"

That's what I meant with cost.

Edit: Btw. thanks for the link.


It may not be supported, but they wrote a blog post about it a while ago. "Our product can be used on RN! (but don't ask us for support.)"


Do you mind me asking why you chose to use PouchDB in this way instead of Realm for React Native?


Realm looks pretty cool. I'll consider using it in future projects. At the time of starting the project (April 2016), here are some thoughts:

1) Realm works only in React Native and not in React web. There is a regular web component to my project as well and the thought was we could reuse our knowledge of PouchDB across mobile and web. This has paid off in the prototype mode because it allowed us to build some simple reports in our web product. However, this is probably not the right architecture for reporting and we will likely change it soon.

2) Realm was very new and I felt we were already using too many new frameworks. So it was added risk on a tight timeline to get a prototype out the door.

3) On the server side, CouchDB has Futon to simplify our learning curve and give us a basic GUI to have a sanity check when our code wasn't functioning as expected. Not sure what Realm's server side looks like.

I'm sure I can come up with other reasons. But definitely want to check out Realm sometime. Looks like it is shaping up quite nicely.


Realm looks very promising, but the lack of a version that can run in the browser is also why I had to cross it off my list of potential DBs for the project I'm working on. Is a browser port on the roadmap by any chance?


Similar time-frame and platform. We opted to use Realm instead because it offers encrypted storage exposed to React-Native. Would have loved to use *ouchDB but this was a missing critical piece.


1. Realm on the server/sync side is in closed beta at the moment. It's not ready.

2. Realm's design does not gel well if you use a Redux architecture.


That's amazing. Did you open-source your react-native shim yet? I'd love to see it


We've been running PouchDB in production for ~15 months now. We chose it because it was a greenfield project and it gave us two things: easy offline support and real-time syncing that makes it easy to create collaboration à la Google Docs. Because the entire thing is a web app with an app cache manifest, deploying new versions is very little hassle.

In terms of architecture we have about 250 tenants with separate Couch databases per each. We're still running Couch 1.6. We have yet to evaluate Couch 2.0.

It's been a mostly smooth ride, but this being a very unusual architecture, we had to tackle a few interesting problems along the way.

1. Load times. Once you get over a certain db size, the initial load from a clean slate takes ages because PouchDB is super chatty - I'm talking 15-30 minutes to do the initial sync of a 20-30 MB database. We had to resort to pouch-dump to produce dump files periodically, which helped a lot. I think this issue has been rectified with Couch 2.0 and the sync protocol update.

2. Browser limits. Once we hit the inherent capacity of some browsers (namely Safari on iOS, 50 MB) we had to get creative. Now we're running two CouchDB databases for each tenant, where one has the full data and the other only contains the last 7-8 days; Pouch syncs to the latter. We run filtered replications between the full db and the reduced db and do periodic purging. On the client side, if a customer tries to go back more than 7 days we just use Pouch in online-only mode, where it acts as a client library to the remote Couch and doesn't sync locally.

3. Dealing with conflicts. This might or might not matter depending on the domain, but you have to be aware of data conflicts. Because CouchDB/PouchDB is an eventually consistent multi-master setup, you will get data conflicts where people update the same entity based on the same source revision. PouchDB has nice hooks to let you deal with this, but you have to architect for it (see the sketch after this list).

4. Custom back-end logic. Because Pouch talks directly to Couch, you can't easily execute custom back-end logic. We had to introduce a REST back-channel so our back-end can run extra logic when needed.

5. We had some nasty one-off surprises. The last one was an object that had 1700 or so revisions in Couch; once it synced to PouchDB it would crash the Chrome tab in a matter of seconds. Due to the way PouchDB stores the revision tree (lots of nested arrays), Chrome would choke on the JSON.parse() call and eat up memory until it crashed. We resolved this one by reducing the revision history limit that is kept.
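To make point 3 concrete, here's a minimal sketch of the kind of conflict handling PouchDB exposes (assuming db is your PouchDB instance; the doc id and the "delete the losers" strategy are just illustrative - a real app would merge according to its domain rules):

    // Fetch a doc along with any conflicting revisions
    db.get('invoice:42', { conflicts: true }).then(function (doc) {
      var losingRevs = doc._conflicts || [];
      // Resolve by removing the losing revisions; a real resolver would
      // merge the conflicting versions field by field instead.
      return Promise.all(losingRevs.map(function (rev) {
        return db.remove(doc._id, rev);
      }));
    });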


That chattiness is what has driven me away from Pouch, sadly. It's a flaw in the Couch replication protocol design that won't be fixed until the spec is changed.


The chattiness is mostly addressed with _bulk_get in CouchDB 2.0 - Pouch will automatically use it if the server supports it. Another option is to stick an HTTP/2 proxy in front of your CouchDB instance - the chatter to the db is ultimately still there, but it significantly reduces the latency cost to the PouchDB client. There are plans to add first-class HTTP/2 support to Couch, but for remote client architectures just adding a proxy should be a significant improvement. Projects like https://github.com/cloudant-labs/envoy take this a step further and provide an extensible proxy (e.g. you can do sub-database access control, etc).


> We had some nasty one-off surprises. The last one was an object that had 1700 or so revisions in Couch; once it synced to PouchDB it would crash the Chrome tab in a matter of seconds. Due to the way PouchDB stores the revision tree (lots of nested arrays), Chrome would choke on the JSON.parse() call and eat up memory until it crashed. We resolved this one by reducing the revision history limit that is kept.

I think I remember this issue (I was formerly a heavy contributor to PouchDB). I think Nolan ended up writing a non-recursive JSON parser to deal with it, and there was some debate about whether it made sense to use, as it was significantly slower (though it could handle deeply nested structures).


Yup, exactly. We use JSON.parse inside of a try/catch and then fall back to vuvuzela (https://github.com/nolanlawson/vuvuzela) which is a non-recursive JSON parser in cases of stack overflows (here's the code: https://github.com/pouchdb/pouchdb/blob/62be5fed959bbdf91758...).

Unfortunately the only way to resolve this without vuvuzela would have been to change the structure of the stored documents which would have required a large migration, so I'm glad to hear that the vuvuzela solution was the right way to go.
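For the curious, the fallback pattern is roughly this (a simplified sketch, not the actual PouchDB source linked above):

    var vuvuzela = require('vuvuzela');

    function safeJsonParse(str) {
      try {
        // fast path: the native parser
        return JSON.parse(str);
      } catch (e) {
        // deeply nested rev trees can overflow the native parser's stack,
        // so fall back to the slower, non-recursive parser
        return vuvuzela.parse(str);
      }
    }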


This is very interesting to read. I'm currently working on an Electron app that uses PouchDB and that has a lot to do with revisions - one of the big reasons I chose PouchDB.

Regarding your 3rd point on conflicts, could you shed some more light on:

>PouchDB has nice hooks to let you deal with this, but you have to architect for it



Good report! On #4, have you considered a client-db-server approach, where your server just listens to changes in the db and acts accordingly? Is there something in your specific case that prevents this approach?


Surprised to see this on HN, but I am one of the maintainers of this (I expect Nolan will end up finding this too) so happy to answer any questions about it.


Funny coincidence, I was actually reading the docs for it last night, and I am struggling to understand the lifetime of a local database.

From how I understand it, a local database will persist until the browser clears its cache. What happens in the situation where the cache clearing takes place while you are using the pouchdb database? Can that condition be handled?

I thought pouch might be a good fit for a web app you could use offline - you'd need connectivity to login or whatever, and sync the database initially. Then you'd modify the local one and sync it when it's all done, saving round trips to the server.

But that also raises the question - on the server side, is one database per user feasible? IIRC Couch can only handle 100 or so different databases on one instance. And you can't do views across them.


(PouchDB contributor here.)

There are actually several layers of "cache" inside of a browser, including the traditional HTTP cache (which is global) as well as the site storage, which includes stuff like IndexedDB, WebSQL, LocalStorage, AppCache, and window.caches (all of which are per-origin).

This site storage _can_ be cleared by the browser (it's "temporary" per the spec), but in practice it isn't very frequently cleared unless the machine is running low on space. E.g. Chrome only does it if a site exceeds 20% of total per-origin browser storage (https://developer.chrome.com/apps/offline_storage) whereas Edge is extremely conservative with clearing IDB because it's considered user data (e.g. email drafts in Outlook). In any case, when the browser does clear this storage, it clears everything at once for that origin, so the user essentially has the experience of visiting the site for the first time. This is why it's a good practice to periodically sync your PouchDB data to CouchDB because it can be lost in rare cases.

Also there is a new Storage spec that allows site authors to designate certain buckets of per-origin storage to be persistent, but this typically requires a user permission and isn't widely supported yet: https://storage.spec.whatwg.org/
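For reference, the new API looks roughly like this (a sketch; behavior and permission prompts vary by browser):

    // Ask the browser to treat this origin's storage as persistent
    if (navigator.storage && navigator.storage.persist) {
      navigator.storage.persist().then(function (granted) {
        console.log(granted
          ? 'storage will not be cleared automatically'
          : 'storage may still be cleared under storage pressure');
      });
    }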


I've been wondering the same. What would the alternative be? Sync between two pouchdb instances (one client and one server) and then sync the server-side pouchdb to couchdb?


Client-side Pouch happily syncs directly with server-side Couch.


Right, but how do you scale that out on the server-side while protecting data on a per-user basis? Sorry if I'm missing something obvious.


Usually you create a separate database for every user.

This sounds _nuts_ from a traditional database mindset, but works great in Couch.

You can then create another database that replicates from all the user databases in order to perform your aggregate queries on the back end.
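As an illustration only (the database names and URLs are made up), the aggregation side can be as simple as pointing a continuous replication from each user database at one rollup database:

    var PouchDB = require('pouchdb');

    ['alice', 'bob', 'carol'].forEach(function (name) {
      // one-way, continuous replication: per-user db -> aggregate db
      PouchDB.replicate(
        'https://couch.example.com/userdb-' + name,
        'https://couch.example.com/aggregate',
        { live: true, retry: true }
      );
    });

In practice you'd more likely let CouchDB itself run these via _replicator documents rather than keeping a Node process around for it.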


Seconded. This is the standard CouchDB approach: one database per user.

Sounds weird coming from my Postgres and MySQL background, but works great in practice, depending on your use case and if you clear out old revisions and unused docs, which for our use case can number into the thousands fairly rapidly.


> You can then create another database that replicates from all the user databases in order to perform your aggregate queries on the back end.

That sounds horribly space inefficient.


Yep.

Everything in software is a trade-off. This trades space efficiency for multi master replication with first class offline app experiences.

For many cases, that's a fine trade. Disks are cheap. If that isn't a fine trade for a particularly large dataset, use a different technology that's better at space efficiency and worse at other stuff.


I let the user decide whether to log out and destroy the local db, or just log out if it's a trusted device - but obviously the local db could then be accessed. Most users don't log out, and I set a long cookie expiration.

You could also encrypt data.


Is the project sponsored by Couchbase? If not, have you thought about publishing a storage API that the replication protocol calls, so that it could be adapted to work with any backend db? What about P2P sync (e.g. serverless) - has anyone thought of making PouchDB do that?


Not sponsored by Couchbase at all (I worked at Couchbase during its formation, this started near the end of my time there).

For pluggable storage engines our node.js adapter uses leveldown, so any *down backend can be plugged in. https://pouchdb.com/adapters.html has some more information about this.
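For example, swapping the Node backend for an in-memory store looks roughly like this (a sketch; memdown is just one of the available *down stores):

    var PouchDB = require('pouchdb');
    var memdown = require('memdown');

    // any leveldown-compatible ("*down") store can back the Node adapter
    var db = new PouchDB('scratch', { db: memdown });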

For p2p, yup, it's something quite a few people have been using PouchDB for; one of the more notable examples has been http://thaliproject.org/. The core storage format is entirely compatible with p2p (inherited from CouchDB).


AFAIK, Couchbase is a CouchDB as a service tuned for speed.

PouchDB is a CouchDB implementation in the browser. I don't think syncing with another db would be easy, and I believe there is no such desire on the project roadmap.


No, Couchbase is more or less a merge of Membase and Couchdb. With Couchdb it only shares its lineage and its creator; apart from that they are entirely different beasts.


It also shares more than half its name - the recognisable bit. The other bit is the generic and largely interchangeable "base" / "db". I don't know if it's intentional, but it's no wonder people get them confused.


The idea of making pouch sync with other databases is covered in their faq:

https://pouchdb.com/faq.html#sync_non_couchdb

Basically, your other database probably handles conflicts in a different way to Couch, which would make syncing this way nonsensical.

If you're happy to handle that yourself, the Couch replication protocol is well documented and there are plenty of libraries written for it.


Surprised? A post about PouchDB appears on HN approximately every 90 days, and most hit the front page. Great project, but it's more popular than you may realize.


I've been running PouchDB + CouchDB 2.0 in production for a while now (financier.io). It's worked great, and I really recommend you check it out. A couple of things:

1. CouchDB 2.0 is still rough around the edges, particularly with its new Fauxton interface (e.g. it's completely broken when proxied into a subfolder).

2. CouchDB 2.0 brings the _bulk_get API which has improved sync by an order of magnitude(s).

3. I do have custom logic for logging in, overriding the _session API in order to do rate limiting. (I proxy with nginx for IP rate limiting and node.js for rate limiting failed password attempts.) I also have custom logic for provisioning a new CouchDB database per user and setting up permissions (a rough sketch of that follows this list).

4. I host on Digital Ocean but I use their new block storage solution so that a growing db does not become unwieldy/expensive.

5. My SaaS subscription system is kind of unique: When your subscription expires you'll simply lose write access to the server (CouchDB), but you can still pull down your data to PouchDB.
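The provisioning mentioned in point 3 boils down to two calls against the CouchDB HTTP API - a sketch with placeholder names and credentials, assuming a fetch() implementation is available:

    var auth = 'Basic ' + Buffer.from('admin:secret').toString('base64');

    // 1. Create the per-user database
    fetch('https://couch.example.com/userdb-alice', {
      method: 'PUT',
      headers: { Authorization: auth }
    }).then(function () {
      // 2. Restrict access to that user via the _security document
      return fetch('https://couch.example.com/userdb-alice/_security', {
        method: 'PUT',
        headers: { Authorization: auth, 'Content-Type': 'application/json' },
        body: JSON.stringify({
          admins:  { names: [], roles: [] },
          members: { names: ['alice'], roles: [] }
        })
      });
    });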


For one of the best uses of PouchDB that blew my mind when I read about it, read this http://www.pocketjavascript.com/blog/2015/11/23/introducing-...


The article I linked has been discussed on here before https://news.ycombinator.com/item?id=10619933


I'm interested in PouchDB to make my JavaScript app easily sync to the server, but I don't want to switch my server's database from Postgres to CouchDB. Surely I'm not the only one in this situation?


You aren't, it's mentioned in our FAQ - https://pouchdb.com/faq.html#sync_non_couchdb.

A lot of people would like their current data to just be able to sync, but it almost always needs changes in the way data is stored and complementary changes to the application code


Any experience doing it the hard way? I'm in the same boat. Really don't want to throw away what postgres provides.


We've been using Pouch in a progressive web app designed to be used in the field in remote locations, and while there was a learning curve in understanding how the replication protocol works - and, as highlighted in another comment, the way Chrome stores data for a web app - we couldn't be happier with Pouch/Couch.

Additionally, moving out of Cloudant and into CouchDB with an openresty based reverse proxy has made things even better, and really fun. This is one of those stacks that feels easy and simple at the same time. (Ref:https://www.infoq.com/presentations/Simple-Made-Easy).


Any guidance on moving from Cloudant to CouchDB? Are you hosting it yourself? If so, has the amount of maintenance been more than you expected, or was it mostly setup time and then forget about it?


Yup, hosting it ourselves. It's a peach. There are a few things that it doesn't come with out of the box - clustering, full-text search, geo-indexing, chained map/reduce, auto-compaction, automatic index updating. Once that's sorted, if anything it was more "set it and forget it" than Cloudant, which bills on requests/throughput. This can catch you out, because continuous replications between databases on the same Cloudant account are also counted as requests and billed as such - and continuous replication is very chatty. So if you have a particularly creative multi-master setup, like a per-user db -> master db kind of thing going, this can eat up your throughput / push up your bills with no practical benefit.

It's really OpenResty + Couch that does it for me. The idea of writing security / validations / routing etc. right into nginx combines beautifully with the CouchDB way of thinking.


Ah, yeah, you weren't the only one bitten by that. We actually went and changed the Cloudant metering model recently so that you're billed on provisioned throughput rather than total request volume. You get dramatically more predictable billing, with the tradeoff that clients need to handle a 429 Too Many Requests response if the Cloudant instance is under-sized. More here:

https://www.ibm.com/blogs/bluemix/2016/09/new-cloudant-lite-...


We (Cloudant) recently changed the pricing model to help with this. You can now take a fixed-cost plan that charges based on reserved throughput capacity instead of metered use. This should help with the replication scenario. See

https://www.ibm.com/blogs/bluemix/2016/09/new-cloudant-lite-...

Stefan Kruger, IBM Cloudant Offering Manager


Anyone use this system in production? Care to share your experiences?


We have been using it in an Ember app and it was very easy to set up. There is an Ember Data adapter that works well.

The main advantage is having an app that works offline, with all sync happening under the hood.

If you are going to give it a try, I highly recommend watching Nolan Lawson's videos on YouTube.


The current project I am working on has been developing an offline-first messaging/calling app that started as a web app, but it was later decided to also have native apps for Android/iOS. Because the web app came first, the web-centered architecture was kept, and the JavaScript developers chose PouchDB/CouchDB as the single data storage/sync mechanism.

I was peripherally involved in the part of the services that dealt with user data - basically, massaging data from the address book and call history, dealing with the backend.

This is a collection of things I believe I can say after working on this for almost two years:

- PouchDB + CouchDB works well, as long as you are already used to modeling your application in terms of CouchApps. No SQL, no E-R representation of your data, etc. If you are not comfortable with the CouchDB model and don't know your way around views, you are not going to have a good time.

- This is not an issue with PouchDB, but if you are planning on having native apps, I'd strongly advise against it. All of our native app developers struggled with the change in mindset and the not-so-mature state of Couchbase for Android/iOS. And because they could just use SQLite, forcing them to adopt Couchbase was nothing but pain for our team.

- You need to figure out security and authentication: if you take the direct approach of using one CouchDB database per user and just replicating that, you need to figure out on your own how to secure access. CouchDB only supports OAuth 1 out of the box. We are using OAuth 2 for our services, and I basically had to implement a proxy server that checks OAuth tokens before passing requests to our CouchDB server.

- CouchDB does not allow cross-database views, so if you take the "one-database-per-user" approach and need to check anything that spans more than one database, you are on your own to create more databases and replicate/index/aggregate the data.

- The solution for sync helps, and continuous replication makes for a good demo, but it is not magic. Imagine you have already accumulated a good amount of data in your database: the moment you start your application in a different browser, PouchDB will desperately start syncing everything you have. You need to know how filtered replication works.

- It is very resource intensive, more so if you keep continuous replication on. The constant CPU usage and network I/O can make your app feel sluggish.


(PouchDB contributor here.) Just a quick note that slow replication is one of the things that is largely fixed in CouchDB 2.0 thanks to the new _bulk_get API (https://issues.apache.org/jira/browse/COUCHDB-2310) and is set to get even better in CouchDB 3.0 once it has HTTP 2.

As for your app feeling sluggish, my hunch is that you're seeing this in a Chromium or Gecko browser (e.g. Android WebView), in which case IndexedDB does quite a lot of heavy operations on the UI thread (http://nolanlawson.com/2015/09/29/indexeddb-websql-localstor...), which can be mitigated by moving it to a Web Worker (as Pokedex.org does) or a Service Worker (as HospitalRun.io does).


Check out Cloudant Envoy to get around the "one-database-per-user" anti-pattern: https://github.com/cloudant-labs/envoy


Have been using it for a long time inside a Chrome extension. The use case was to keep the user's preferences and data/history stored on both client and server and easily in sync with each other, and PouchDB accomplished that task superbly. It was a pleasure to use, with no issues that I can remember. Would use it again for similar use cases.


I'm using it in a Chrome extension (which is technically in production but only has one user, me). It's excellent. It took a bit of work to get my head around what _rev means, and I'm not using the sync functionality yet (but I will, hence using it now) but I've not found it to be at all problematic. I'd definitely recommend using it if it fits your project.


that isn't production :(


My bad if the answer is obvious but (besides backward compatibility/shims + abstraction), what's the benefit of using PouchDB as opposed to vanilla localStorage functionality?


As someone else answered, the main use case for PouchDB over plain browser storage is its ability to sync data.

However, just wanted to clarify that PouchDB doesn't natively use localStorage for storage; it's primarily IndexedDB or WebSQL in the browser (LevelDB in Node).

We do actually use localStorage for cross-tab messaging, but that's mostly a hack due to the lack of IDB event listeners (which are coming in v2).


It can sync data between CouchDB servers (e.g. IBM Cloudant) and the browser.


Honestly I didn't know that this is a feature people want (no criticism intended). So, you have a database object in your browser and it takes care of getting the data to the server by itself? Otherwise every website syncs data between its browser instances and its database, right? That's how we get state.


The sync happens automatically, without the user or the application programmer having to care about the state of the connection.

Your PouchDB application works locally on your device, whether the connection is up or down, and the data is synced with the remote database whenever there is a connection.

The alternatives to this PouchDB-to-CouchDB syncing mechanism would have to be either:

- the user checks whether the connection is up, and manually manages the sync, or

- the application programmer saves the user the trouble by adding code that checks whether the connection is up, and automatically manages the sync


Using a synced data model provides a few things for users: data access is efficient and far, far faster (disk/memory is faster than the network), and it allows offline usage (plus failure tolerance - if the server goes down, the client still works). It can also enable p2p use cases and a bunch of other things, but those two are the main drivers.


The localStorage quota is 5 MB (per domain, IIRC). PouchDB uses IndexedDB or WebSQL internally, which gives you a lot more quota to work with, and I believe each DB has its own quota.

localStorage is not indexed; you have to implement some sort of lookup yourself. If you go that way, a tip is to store values as objects with ids as keys - that way you get a sort of hash-map-as-index thing going, which alleviates things a bit. PouchDB is indexed and provides a way to query your documents.
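A rough comparison of the two approaches (keys and fields are made up):

    // localStorage: one JSON blob, manually keyed by id
    var todos = JSON.parse(localStorage.getItem('todos') || '{}');
    todos['todo:001'] = { title: 'buy milk' };
    localStorage.setItem('todos', JSON.stringify(todos));

    // PouchDB: each doc stored (and indexed) individually
    var db = new PouchDB('todos');
    db.put({ _id: 'todo:001', title: 'buy milk' }).then(function () {
      // range query over the primary index by id prefix
      return db.allDocs({ startkey: 'todo:', endkey: 'todo:\ufff0', include_docs: true });
    });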


Computed views.


Been experimenting with PouchDB for a while and really liking the simplicity of the project. Looking to implement it in a mobile hybrid app to allow users to take data 'offline' then sync up later.

The only downside I've found so far is that the PouchDB Inspector in my Chrome browser tends to go rogue from time to time, sucking up >50% of the CPU, and has to be shut down manually.


Could I in theory host my own PouchDB instance to keep using applications that are no longer supported by their vendors? Imagine an IoT vendor that uses PouchDB shuts down: my devices don't work anymore, but I can host my own PouchDB instance. A JavaScript-based database is a bit heavy, but it's an acceptable tradeoff to me.


In principle yes any CouchDB/PouchDB database can sync with one another because they all share the same protocol, but OTOH I've seen very few examples of apps that offer users a simple "export to your own CouchDB/PouchDB" feature.


Datomic-ClojureScript in the browser backed by PouchDB would be killer.

Datomic already has a CouchDB-compatible datastore via its support for Couchbase. That would mean Datomic-ClojureScript on PouchDB could run in the browser and sync with Datomic-Clojure backed by CouchDB/Couchbase on the server - that would be killer and give Datomic massive reach.

Has the Datomic team considered PouchDB as a possible datastore for Datomic-ClojureScript in the browser?

NB: Just posted the question to the Datomic discussion group: https://groups.google.com/d/msg/datomic/uqBQE4QlnzI/VuZ14pqO...


Since when did arguments become functions?

    db.replicate.to('http://example.com/mydb');
IMHO, should be

    db.replicate(to: 'http://example.com/mydb');


I'm personally really a fan of this syntax mostly because I like how it reads when you chain them together.

    db.replicate
      .from('http://example.com/mydb1')
      .to('http://example.com/mydb2')
      .on('complete', function() {
          
      })
      .on('error', function() {
         
      })
      .then(function() {
        
      });


What happens when you do:

  db.replicate
      .from('http://example.com/mydb1')
or

  db.replicate
      .from('http://example.com/mydb1')
      .from('http://example.com/mydb2')
      .to('http://example.com/mydb3')
or

  db.replicate
      .from('http://example.com/mydb1')
      .to('http://example.com/mydb2')
      .to('http://example.com/mydb3')
? serious question


1: replicates from mydb1 into the pouchdb object represented by 'db'

2 & 3: I'm pretty sure chaining replications doesn't work in that way, although that's a pretty interesting thought.

If you have to chain replications and achieve the structure in 2, you'd have to define the pouch objects as:

    db = new PouchDB('localDB');
    mydb1 = new PouchDB('http://example.com/mydb1');
    // etc.

and then do:

    mydb2.replicate.to(mydb3, {live: true});
    mydb2.replicate.to(mydb1, {live: true});
    mydb1.replicate.to(db, {live: true});

the 'live' flag, as you'd imagine, makes the replication live/continuous, as opposed to one-time.


It's not clear which 'arguments' are required when it's done this way. What's stopping you from calling 'from' without 'to'?


I think it would just not do anything. I like this type of syntax, but only when the required parameters are given up front. Then all the chainable calls are optional modifiers/events. This is how jQuery works and how I modeled msngr.js.

Unless Pouch is missing something required, I think it's fine, though maybe a little unintuitive (like: can you do multiple froms? Multiple tos? etc.)


By chaining calls I can just drop in anything that has the correct interface. For example, if I wrote a "to_stdout" function, with the replicate(to: ...) style I'd need to modify replicate() in order to use it. With chaining I can just do:

    db.replicate.to_stdout('http://example.com/mydb');


PouchDB's replication capability is interesting, but is there a way to make it lazy load to the local DB instead of doing everything up front? I hesitate to use it for a web project with 10+ MB of docs where it would otherwise be ideal.


You can provide a server-side filter function to replication and progressively filter partial replications until eventually everything gets replicated. At that point it becomes a question of how your documents are architected: how much needs to replicate before a user can be productive?
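A sketch of that progressive approach (the filter name and parameters are hypothetical; the filter itself would live in a design document on the server):

    var local = new PouchDB('projectdb');
    var remote = new PouchDB('https://couch.example.com/projectdb');

    // Phase 1: only pull what the user needs to be productive right away
    local.replicate.from(remote, {
      filter: 'app/recent_by_owner',               // hypothetical design-doc filter
      query_params: { owner: 'alice', days: 30 }
    }).on('complete', function () {
      // Phase 2: backfill the rest in the background, unfiltered
      local.replicate.from(remote);
    });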

You can also explore pouchdb-replication-stream to build bundles that PouchDB can bootstrap from a little bit faster than a chatty replication.

That said, I've found that the initial replication of large databases (one I've worked with this week is a 25+ MB CouchDB database full of photos) is quick enough (and mostly bandwidth constrained) that I haven't had much in the way of concern over it.


Cool library! I'm a huge fan of Meteor.js, which provides a similar isomorphic database. Cool to see this implemented in such a lightweight package!


I really love the project, but I never used it in production due to the large file size :(


Your solution has arrived: https://pouchdb.com/custom.html


I just now learned about this DB and am intrigued, but I haven't researched much about the cons yet, so I'm curious to hear your thoughts about the file size.



