So far this is probably the best "intro to CRDTs for a developer" I've read. I built a product around CRDTs, essentially, and my god was it painful trying to engage with the existing material. Showing actual code, explaining that `merge` is the fundamental operation, etc., is really all a developer needs to know IMO.
Also, the fact that we always use text editing as the de-facto solution is so weird to me since that problem is both niche and extremely complex. IMO a better example would be something like "Can this person drink alcohol?". Age moves in one direction so it has a simple merge function:
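A minimal sketch of what that merge function could look like (TypeScript; `mergeAge` is a made-up name, not from any real system). Since age only increases, merging two observations is just taking the maximum, which is commutative, associative, and idempotent: exactly the properties a CRDT merge needs.

```typescript
// Hypothetical sketch: age never decreases, so the merge of two
// observed values is simply the larger one. max() is commutative,
// associative, and idempotent, so replicas converge in any order.
function mergeAge(a: number, b: number): number {
  return Math.max(a, b);
}
```

Any replica can apply observations in any order, any number of times, and still converge on the same value.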
A property of this is that if I query your age and if you're 21 I can cache that age forever. You'll only ever be >= 21, after all. If I add new queries that care about you being 25 (for a hotel) I can satisfy the "drinking age" queries from a stale cache and then retrieve the true value (<25) when I need to check if you can book a hotel.
This means you can have distributed caches without invalidation logic. A pretty amazing property since cache invalidation is a hugely complex problem and has seriously negative performance/ storage implications.
It also means you can drop writes. If my system gets information that a person was 18, but that information is out of date, I can drop that write, and I can do so by examining the cache and viewing stale information, only checking the real value if the cache value is < 18.
This whole thing lets you push computation to the edge, drop expensive writes, ignore any cache invalidation logic, cache values forever, potentially answer queries from stale cache values, etc.
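The caching and write-dropping behavior described above can be sketched roughly like this (hypothetical `MonotoneAgeCache`, made up for illustration): because the cached value is a monotone lower bound, stale writes are safely dropped and threshold queries that are already satisfied never need to hit the real store.

```typescript
// Hypothetical sketch of a monotone cache: because age never decreases,
// a cached lower bound can answer ">= threshold" queries forever once
// it crosses the threshold, with no invalidation logic.
class MonotoneAgeCache {
  private lowerBound = 0;

  observe(age: number): void {
    // Stale (smaller) writes are dropped for free by taking the max.
    this.lowerBound = Math.max(this.lowerBound, age);
  }

  // true: satisfied from cache forever; null: must fetch the real value.
  atLeast(threshold: number): boolean | null {
    return this.lowerBound >= threshold ? true : null;
  }
}
```

Once `atLeast(21)` returns true, that answer is valid forever; only queries above the cached bound (e.g. the hotel's 25 check) ever require a round trip.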
Anyway, kudos for the writeup. I skimmed the second half but the first half was great and the second half looked legit.
> the fact that we always use text editing as the de-facto solution is so weird to me since that problem is both niche and extremely complex
The reason we use that is because it is complex enough to show the problems that CRDTs solve. I would argue that this painting example is too simple. The core merge loop is:
if (pixel.created_at < newPixel.created_at) {
    pixel = newPixel;
}
This is maybe good as a first step, but I don't think it is enough to even really be called an "Intro". A last-write-wins register is trivial.
Simple text inserts with a simple "insert after" CRDT are not much more complicated, but involve things like generating unique IDs without communication and resolving conflicts with some sort of globally consistent ordering.
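The "unique IDs without communication" part can be sketched under the common assumption that each peer pairs a local counter with its own peer ID (names here are made up for illustration):

```typescript
// Hypothetical sketch: each peer generates globally unique IDs without
// coordination by combining a per-peer counter with its own peer ID.
// No two peers share an ID because the peerId suffix differs; no single
// peer repeats an ID because its counter only increases.
function makeIdGenerator(peerId: string): () => string {
  let counter = 0;
  return () => `${++counter}@${peerId}`;
}
```

The `(counter, peerId)` pair also gives you a deterministic total order for tie-breaking, which is where the "globally consistent ordering" comes from.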
I think the problem is that CRDTs don't solve the text editing problem as far as I know. They can solve a constrained version of it and they have to be sorta mixed up with other algorithms and approaches. It's like a worst case scenario for distributed systems that requires tons of complex solutions.
It's something to build up to maybe, not to start with.
I agree that it's simple, but that's exactly what makes it so powerful — it's easy to understand and yet you can do a ton with it. I've been working on a vector editor as well that's also built with just registers and maps. That one is a bit long for a blog post (although I might do a high level overview of some techniques like fractional indexing). But the point I'm trying to drive at here is that you can get really far just by combining simple CRDTs.
I realize this is a straw man argument/example - but it feels hairy to me. So much fuss about age and cache invalidation ... age should not be persisted anywhere. When you make age a calculated property from birthday it is never inaccurate or stale or wrong. "set age" should not be a possible operation in any system imho.
Even if you persist a timestamp like 'birth date' into a database it doesn't matter - you can still cache the resulting 'age' calculations as CRDTs. But yes, it is a made up example. Another might be 'first/last observed time' for an IP address.
I don't think we need to design the entire "can you legally drink" SaaS, hopefully my example is clear enough and people can leverage the concept for more reasonable circumstances.
Given that I keep getting ads on Reddit about some API to predict age and gender from a given dataset of users, and the OP of that ad keeps saying that it's hugely successful, this business is practically guaranteed to succeed.
Seriously, want to make some easy money? Build this.
Please elaborate what is “fun” about that fact? The drinking age is 19 in most of Canada, 18 is the exception. And the drinking age varies all over the world.
It used to be 21 in most of the US until the voting age was lowered to 18 in 1971. It was then either 18 or 19 in most states (generally the more liberal ones) until 1984, when the national minimum was passed. 19 really is sort of a sweet spot for socially liberal North America.
> text editing as the de-facto solution is so weird to me since that problem is both niche and extremely complex.
My first foray into collaborative editing was for my text editor. Indeed, things get superlinearly harder as you add basic editor functionality such as deleting and replacing, especially when those span multiple lines.
Instead, I reached for Fraser's differential synchronization: https://neil.fraser.name/writing/sync/. There's a lot of ambiguity and nuance across the various versions of the prose and the white paper that I could never really iron out.
I think what anyone attempting to relay a collaborative editing algorithm needs to do is start with the simplest scenario: append-only / monotonically increasing data.
I regularly deal with people older than my country's identity documents. Their "official" dates of birth are frequently off by several years and have to be manually corrected in all databases and systems.
Yeah, as I’m reading this more thoroughly I see that there wouldn’t be many states to express, and the merging itself isn’t something you’d express in states.
I initially thought more of the inner workings could be managed this way, but it seems better implemented as it is in the article.
I’ve built a successful business from a TTRPG campaign manager (LegendKeeper) using CRDTs, specifically the Yjs kind. It’s been great, and the UX of CRDT-powered stuff is excellent. Zero latency, eventually consistent; overall users love the performance and offline capability.
That said, there are a lot of trade-offs. Some things that are easy in a traditional server-client model become difficult in the local-first context that CRDTs provide. Role-based authorization is hard, data model changes must be made additively (never mutatively), and knowing what state a client is in when debugging is tough too, without a lot of "full-surveillance"-level tooling. Also, with the automatic, bidirectional syncing a lot of CRDT architectures afford you, a bug in production that corrupts data can virally propagate and cause a huge headache.
Investor-funded services like Liveblocks are starting to pop up that promise to make this stuff easier, but as an indie I find them expensive; I’m sure they’re a great value for big corps or funded teams though. Rolling my own infrastructure for Yjs has been taxing, but I’ve learned a lot, and have been able to tailor it exactly to my needs.
You mention the investor-funded services that pop up to make this stuff easier -- our goal with Y-Sweet is to build the same type of DX you’d get from those services, but build it on a fully open-source (MIT) platform with Yjs at the core: https://github.com/drifting-in-space/y-sweet
A couple of questions about y-sweet, based on the experiences I had with CRDTs:
1) Does the server keep the "active" documents in memory? In other words, does the server need to open a document and keep it in RAM while clients are connected to it (I assume there's a websocket connection somewhere in the client that keeps it hot)? Or is the server stateless, just connecting to the store when needed? I found the latter very hard to do.
2) Does the client persist entries using indexeddb? If yes, does opening many tabs cause redundant writes as they all sync with the server? If not, does the client need to fully re-sync with the server anytime it wakes up?
3) Is it possible to observe updates on the client as they come in? One of the major use cases of CRDTs is to index data on the client - then you can have a dumb server that just syncs data between clients and a smart client that does search, graphs, visualizations etc. on the data it receives. To do that, the client needs to observe updates one by one and process them to create secondary indexes. Is that possible with y-sweet without forking its source code? I remember observing updates in Yjs being quite inefficient, as you need to replay them all or something similar, but that was a couple of years ago.
The server keeps the documents in memory while they are open, but it is horizontally scalable when hosted on Cloudflare Durable Objects. (We also plan to support Plane.dev, but that's not built out yet.)
> 2
The client is based on Yjs, so it's compatible with Yjs' y-indexeddb provider to store in IndexedDB. Tabs synchronize state between each other using a local broadcast channel. The client only synchronizes unsynced state with the server, so if one tab has already pushed the local offline edits to the server, the other tabs can discover that and avoid pushing them. That said, I'm not 100% sure if Yjs deals with the race condition where two tabs wake up at the same time so the server has not yet received offline edits from either, I'd have to check on that.
Thanks! Re: 1, the docs are not really deep on that, but from your answer it seems it's possible to self-host y-sweet on Cloudflare Workers (I guess) with Durable Objects as storage?
Also, if you guys are going to have a paid plan, how do you see the pricing going? Comparable to, say, Supabase per user, less, or more?
Sorry about that, I’ll clarify the docs. You can self-host on Cloudflare, but the storage is R2/S3/S3-compatible blob storage.
Our tentative pricing is $25/month + $10/10k minutes of “open connection” time (per-document, not per-connection, so multiple users with the same doc open are not double-counted). Storage is free if you bring your own R2/S3 bucket, or a nominal fee if you use ours.
Unlike Supabase we don’t do any of the relational stuff, but for Figma-like apps where a lot of documents are never touched, I think our hot/cold storage model can be significantly cheaper at scale than a hosted Postgres database like Supabase.
Yep, though coming from mobile development this was somewhat familiar. A lot of mobile apps are local-first, but I don't think it was called that back when I was doing mobile. Most mobile platforms expect this and provide tools to ease migrations, like Room on Android. Since CRDT approaches are still fresh, I imagine most people are rolling their own adhoc migration strategies. Ink & Switch is working on this: https://www.inkandswitch.com/cambria/
Is LegendKeeper built on top of the Websocket Yjs provider? If so, do you run the Websocket server yourself? If not, do you use WebRTC and have you had any STUN/TURN issues with that?
LegendKeeper looks really awesome btw, I might bring this up for my own campaign use. I've been thinking of using Yjs to build some character sheet builders myself which is why I'm asking.
I built a custom solution based on y-websocket that handles multiplexing and syncing multitudes of Ydocs at once; a single Ydoc is not really enough for complex apps. I run the server myself; originally on GKE because I was learning it for work, but once I went FT on LegendKeeper, k8s became super-overkill minus the learning context. I finally switched to Render after tiring of fighting weird k8s internal DNS issues.
How does it scale? What happens if you have many users or a bunch of users with huge history on their data? If you haven't hit these limitations, what do you plan to do when (if) you hit them?
I don't know how well the original y-websocket provider scales, as it holds ydocs in memory. I imagine lots of folks are using it just fine in production, though. I wrote a less-stateful version of y-websocket that uses the Yjs Differential Updates API to save and serve updates without loading the docs into memory.
As long as you have garbage collection turned on for your Ydocs, they stay pretty small, especially if you are avoiding using YMaps. (The strings that serve as YMap keys can't be GC'd, from what I understand. YMaps are great for bounded domain objects, but not so great for storing collections, dictionary-style. Y-KeyValue solves this problem.)
I eventually added an X MB document size limit on the backend, but only after doing a statistical analysis on existing documents. I found a size threshold that was a strong indicator of abnormal/buggy behavior, and set a limit under that. Without the limit, I occasionally had huge Ydocs, usually created by a bug or weird user behavior, clogging up database resources. Now I block those Ydocs on the backend and send a message to the user with some mitigation/recovery tips. I plan to add automatic document repair, but just haven't gotten to it yet. As LK matures and I get better with Yjs, these bugs become much rarer.
In practice, most apps will only need last-writer-wins registers and not the more complicated sequence CRDTs that you find in Yjs and Automerge.
We've built an auto-syncing database that uses CRDTs under the hood but never exposes them through the API. So if you want all of the benefits of CRDTs, e.g. an offline-first user experience, check out our project, Triplit!
Yep. I figured this out the hard way. Spent a year or so investigating CRDTs before I came across Figma's blog post and realized they fit my use case exactly, and I really didn't need to bang my head against the desk for so long because the solution actually isn't so bad for tree-based editing.
Dumb question time - why didn't/don't you build Triplit on cr-sqlite? I'm guessing cr-sqlite wasn't on your radar when you started the company, but now that it exists... it would give you joins and access to the whole SQLite ecosystem.
Not a dumb question at all! We're aware of cr-sqlite and I've talked to the author, Matt, a few times. Short answer: SQLite has serious shortcomings when it comes to reactivity, and we think we can be as fast as SQLite for the application-type queries we aim to support. The long answer would be about supporting all of the features we don't need in SQLite and all of the quirks that come with it, like having null as a primary key[1].
Something important to mention when discussing CRDTs is that they are particularly suited for scenarios where clients may go offline often and where it makes sense to resolve conflicts automatically. Not every kind of data lends itself well to automatic conflict resolution as the merged state may not be desirable when all parts are constructed independently without real-time collaborative feedback.
For example, if I have a field which is "color" and one person writes red and the other writes blue, there is no way to automatically resolve that conflict when they both become reconnected. It's physically impossible since the intent cannot be established without the ability to read the minds of both participants. You can't just merge the letters into the word "reblued" nor can you allow one to completely overwrite the other while letting both participants believe that their change was settled when in fact, only one made it through. Often, it's desirable that both participants be online, and better to show one an error message if they're not, so that they are not misled into thinking that they're actually changing the system state when in fact their change hasn't been persisted.
I've worked on realtime systems which don't rely on CRDTs. This was a suitable approach in my case since accuracy of the data was paramount, each section of the data was well isolated from the others, and offline editing was not required.
> For example, if I have a field which is "color" and one person writes red and the other writes blue, there is no way to automatically resolve that conflict when they both become reconnected. It's physically impossible since the intent cannot be established without the ability to read the minds of both participants. You can't just merge the letters into the word "reblued" nor can you allow one to completely overwrite the other while letting both participants believe that their change was settled when in fact, only one made it through.
This, for me, is the crux of the issue that I cannot understand - a general CRDT library simply cannot work, as the changes only make sense in the context of what is being edited.
IOW, I cannot think of a situation where conflicts can be resolved automatically. I think it might be best for the application (which does have context) to display the conflicted state (like the way git does), marking it as a conflict and requiring manual intervention to resolve.
In this example, perhaps the application can display the field? If the field is displayed as text, then display "Conflict: {[joe:~blue~][bob:~red~]}". If the field is being displayed as a colored element in an image, the conflict must be displayed with (for example) an overlay on the conflicted part as a red-outlined box, with the snippet of both changes to the image displayed on mouse-over, or on click (or similar).
It makes no sense, to me, to approach CRDTs as a general mechanism - it'll be a CRDT for text, a different mechanism for rasterised images, another one for vector graphics, another for video, for sound, etc.
I swear, HN somehow tracks what I am doing. The last few days I also looked into CRDTs, Automerge, etc, and here we go. Happens so often, it is uncanny.
To me it seems that while state-based CRDTs are easy to understand, operation-based CRDTs are what is actually used in practice. Furthermore, it seems to me the difference between operation-based Automerge and operational transformation (OT) is actually not that big.
We used CRDTs to build Pennant notebooks (think Jupyter Notebooks with collaborative Google Docs features https://pennant-notebook.github.io/). Getting Yjs to behave for a multi-editor environment took some doing. I highly recommend building your own interface/library for interacting with Yjs and never touching Yjs directly in React itself. The state management and event handler cascade can be incredibly fussy if you don't have a good handle on the whole system.
We've found most multi-user apps running over websocket experience significant degradation in performance once concurrent users reach the high teens and low twenties. Beyond that, we were able to update nested CRDTs and all presence/user data over one connection with the backend.
TipTap has a great backend called HocusPocus with a well-documented API. The y-websocket backend is already quite good, but the support for user tokens isn't there natively. We were actually able to remain backend-provider agnostic well into the project. It's a fun ecosystem.
Once you start adding enough complexity, there will arise cases where the primitives are an awkward place for the merging to happen. There will arise cases where user expectations and the merge function behavior don't agree. There will arise cases where the server can do a better job than the client at applying the change. There will arise cases where you need to undo, but the undo function violates the merge function. And as the author freely states, there will arise cases where sending the whole state is prohibitively slow.
Those are really only issues with state-based CRDTs. The fundamental concepts behind operation-based CRDTs vs operational transforms vs bespoke hybrid approaches aren't really different. It's all about determining an unambiguous order, then getting everyone to update their state as if it had been applied in that order. Much less democratic but much more practical.
Figma's interesting because it's not strictly speaking a CRDT. It borrows heavily from some of the CRDT ideas, but it's really an editable tree where most changes are atomic and thus use a last-writer-wins approach. That, and re-ordering tree nodes uses fractional indexing.
It’s a client-server architecture with a bit of CRDT-inspired algorithm sprinkled on for offline mode. The name of the game remains consensus, and CRDTs' convoluted approach is there to serve a niche in the spectrum of distributed consensus. It is slower, more complex, and less transparent. I wouldn't really use it outside of long-lived and erratic P2P nodes - that is the problem CRDTs solve and what they are really designed for: partition-prone, long-lived, distributed, peer-to-peer, collaborative global state changes.
Right, there are quite a few collaborative applications for which a hybrid approach is useful. We're building a collaborative editor (https://thymer.com) for example, where the underlying data structure is also a tree (as the text documents also support outliner-like features, so a flat list of characters/lines isn't enough). To avoid tree conflicts, insert and move operations look more like OT than CRDT, while other updates can use a simple CRDT mutation.
Automerge is interesting as it's an op-based CRDT system vs a state-based one. This should make use cases involving a central authority easier to work with, but their docs lack any detail useful for taking advantage of this, haha.
Notion doesn't use OT. Most things are last-write-wins, but we have operations that merge like list re-ordering or permission changes. Today our text is last-write-wins, but we're developing a CRDT solution – if that sounds like something you'd like to work on, shoot me an email jake@makenotion.com or apply https://boards.greenhouse.io/notion/jobs/5602426003
Well, isn't the point of CRDTs that you don't see them? I mean, traditionally users are asked what the system should do once it finds a conflict, but with CRDTs they should never hit one, and therefore the user isn't bothered.
> In Pijul, there are two kinds of conflicts inside a file:
> When two different authors add lines at the same position in a file, and it is impossible to tell which comes first in the file.
> When one author adds a line in a block of text or code, while another author deletes that block.
I don't think this is true. Two different authors can modify the same line in different ways, which is a conflict that's different than either of these categories.
> It is important to note that conflicts in Pijul always happen between changes, for example we might say that “change A conflicts with change B”.
I also don't think this is true. Conflicts can occur in a history (lineage, sequence, etc.) of concurrent changes, which are different than the delta between any two independent changes.
Notion doesn't use OT or CRDT in production. Most things are last-write-wins, but we have operations that merge like list re-ordering or permission changes. Today our text is last-write-wins, but we're developing a CRDT solution – if that sounds like something you'd like to work on, shoot me an email jake@makenotion.com or apply https://boards.greenhouse.io/notion/jobs/5602426003
Isn't last-write-wins technically a CRDT? It's just not a very good one. For many use-cases though a per-column last-writer-wins CRDT is perfectly adequate.
A CRDT needs two properties:
- Edits can be made on any node at any time independently and without coordinating with other nodes.
- All nodes eventually converge to the same state.
A CRDT can have last-write-wins semantics (as in the article above), but LWW doesn’t fully describe a CRDT because it doesn’t specify a way to determine which write is actually “last”. CRDTs don’t assume that there is a fully-ordered stream of updates, so there is no “last update” per se.
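One common way to pin down "last" is to pair each write's logical timestamp with a peer ID as a deterministic tiebreaker. A hypothetical sketch (names made up for illustration):

```typescript
// A minimal last-writer-wins register. The (timestamp, peerId) pair
// gives a total order over writes: higher timestamp wins, and the
// peerId breaks ties deterministically so all replicas agree.
type LWW<T> = { value: T; timestamp: number; peerId: string };

function mergeLWW<T>(a: LWW<T>, b: LWW<T>): LWW<T> {
  if (a.timestamp !== b.timestamp) {
    return a.timestamp > b.timestamp ? a : b;
  }
  // Arbitrary but deterministic: lexicographically larger peerId wins.
  return a.peerId > b.peerId ? a : b;
}
```

Because the tiebreak is deterministic, `mergeLWW(a, b)` and `mergeLWW(b, a)` always pick the same winner, which is what makes convergence work.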
In production, they don’t. This is evident when two users try to edit the same block at the same time — it's last-writer-wins right now, not merging. They have hired some engineers to work on a CRDT text editor implementation, though.
No idea whether Notion uses CRDTs, but last writer wins is a (naive) strategy for editing text. You can see this in the article — if you edit the LWW Map, for example, even though the keys are merged, each value will be taken from one peer or the other. Once you get to that last “primitive” CRDT — the register holding each map value — updates are atomic. So Notion may be using CRDTs for e.g. the order of blocks in a page, but not (yet?) using them to merge text.
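A rough sketch of that structure (made-up names, not the article's code): the map's keys merge set-wise, but each value is an atomic register that comes wholesale from one peer or the other.

```typescript
// Hypothetical LWW-map merge: the key sets union together, but each
// value is an atomic register, so a merged entry is always one peer's
// whole value, never a character-level blend of the two.
type Entry<T> = { value: T; timestamp: number };

function mergeLWWMap<T>(
  a: Map<string, Entry<T>>,
  b: Map<string, Entry<T>>
): Map<string, Entry<T>> {
  const out = new Map(a);
  for (const [key, entry] of b) {
    const current = out.get(key);
    // A real implementation also needs a deterministic tiebreak
    // (e.g. peer ID) for equal timestamps; omitted here for brevity.
    if (!current || entry.timestamp > current.timestamp) {
      out.set(key, entry);
    }
  }
  return out;
}
```

This is why a string stored in such a register is replaced atomically: the map machinery merges keys, but never looks inside a value.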
Google Docs (really, any online collaborative editor) uses them. If you have a distributed system with multiple asynchronous data feeds into the same sink, this is one way of automatically resolving conflicts. A complicated way that most applications probably don’t really need, and that does not guarantee consistency. But they are neat.
I'm pretty sure Google Docs use Operational transformation (OT). Google Docs pre-dates the paper that defined CRDTs. It's certainly possible they've updated their algorithms since then though.
I find the term "conflict-free" a little overpromising. If two users simultaneously update the same piece of data to different values, then they still have to agree on a common value manually.
CRDTs just provide a common interface for automatic synchronization of replicated data and use metadata (timestamps etc.) to resolve conflicts in a best-effort manner. With CRDTs, you still have to accept that cases may occur where the conflict resolution does not reflect the intersubjective intention of all participating users.
Depending on the use case this may work well, e.g. in simultaneous collaborative editing, where losing just some of your last keystrokes or mouse clicks is tolerable, but less so in other domains, like banking applications.
Partykit is a nice open source tool that is trying to make this stuff easier around creating multiplayer applications https://github.com/partykit/partykit - built by ex core react team / cloudflare engineer
I have studied CRDTs at a deeper level for a few weeks and implemented several small prototypes. They are fascinating. As an eventual consistency model for data management, CRDT inspired techniques (op-based or state-based) are useful.
However, for building user-facing applications with CRDTs, their importance is unclear.
The question with CRDTs and local-first paradigms has always been the pressing need (or the lack thereof). The only plausible 'need' that CRDTs serve is real-time collaboration, and even that with a squinting eye.
Real-time collaboration support translates, in practice, to collaborative text editing and picture editing. Google Docs and its ilk have solved that problem (using central solutions). A CRDT-inspired central solution like Figma is inspiring, and maybe that's the only place CRDTs fit in their quest for survival against central solutions.
The rest of the claimed advantages seem not to withstand the test of time. This article talks about 7 features of CRDTs [1].
Fast: Things are already fast with central solutions.
Multi-device: There is multi-device support with almost all solutions (if you decouple the real-time collaboration aspect).
Offline: It's rare, at least in first-world countries, to need offline access (except maybe on airplanes).
Longevity: As can be seen from another comment here, longevity is actually a problem with CRDTs because data model updates are not easy.
Privacy: With a BYOK encryption pattern, privacy is not as much of an issue.
User control: Even with CRDTs, the user is not in control of their data - other peers can mess with your data.
Author here! I think if you're just concerned with efficiency (speed/low overhead/etc) centralized solutions will always beat decentralized ones. The key advantage you can get with CRDTs — and, more generally, decentralized applications — is stability. By which I mean: Figma and Google Docs are great, but they can go out of business or delete your account or up their prices, and everything you've poured your time and energy into making just vanishes.
It's not just this way for collaboration software. Servers make everything more brittle. A few years ago, I tried to restore every website I've ever made. Static files were easy, things that relied on old versions of server-side languages were harder and anything stored on a server or in a database was just gone. That sucks. I want us to be able to keep our memories forever, not lose them because we stopped paying a hosting bill.
Evolution of technology makes things unstable, not specifically servers. A decentralized application is not more stable than a centralized one. It depends on what's prioritized about the product. For example, you can still use SMTP servers developed several decades ago to send email to others.
Also, it is hard to buy the argument that docs based on Google Docs will have shorter lives than docs served by some CRDT-based collaborative application. It is easy to argue the opposite. My Google Docs history shows docs I had even forgotten ever existed, and Google Docs plays nice with Microsoft Word, making it interoperable with the largest ecosystem around structured documents. Again, this is about product features and prioritization, not underlying building blocks.
CRDTs hold a very special place in my heart. But I also believe they don't offer a differentiated solution - on the user facing side.
Decentralized applications aren’t inherently more stable than applications that rely on central servers. But the ceiling is higher. Infrastructure is coupling, and coupling makes things brittle.
Yes, if everything goes right, a centralized service will probably do a better job of keeping your files around than you will. But I have way more stories where something went wrong and I lost them.
I agree. We had similar conclusions around the implementation of PowerSync (sync engine enabling offline-first applications). Instead of CRDTs we went with the architecture of a central server authority and a form of server reconciliation [1] for consistency.
This is absolutely lovely, well done. I've worked with CRDT's a couple times and it's always mind-bending trying to understand the data flow; these interactive demos make it so much clearer.
On a Last-Write-Wins CRDTs, can I just set my computer's time to like 100 years in the future, and thus make changes that can never be reverted by anyone?
A lot of implementations would favour something like a Lamport clock or counter instead of a timestamp, for a few different reasons: a wall-clock timestamp can be tampered with, while a counter increments predictably, and you don't really need timestamps if you're only interested in the relative order of the events in the CRDT.
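A minimal Lamport clock sketch (hypothetical class name): each peer keeps a counter that ticks on local events and jumps ahead of any remote timestamp it sees, so relative order is preserved without wall-clock time.

```typescript
// Hypothetical Lamport clock: a logical counter rather than wall time.
// Local events increment it; receiving a remote timestamp jumps the
// counter past it, so causally later events always get larger values.
class LamportClock {
  private counter = 0;

  // Stamp a local event.
  tick(): number {
    return ++this.counter;
  }

  // On receiving a remote event's timestamp, jump ahead of it.
  receive(remote: number): number {
    this.counter = Math.max(this.counter, remote) + 1;
    return this.counter;
  }
}
```

Note this gives you causal ordering, not tamper-proofing: a malicious peer can still pick a huge counter value, it just can't make honest peers' later events appear earlier than its own.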
What I’ve been doing for a personal project is that clients use a local id for their events, and the server adds a timestamp and its own id as soon as it receives them. So a client time doesn’t have any influence.
So say I go offline for a day, make a change at 12:00, someone else makes a conflicting change at 18:00, I reconnect at 19:00, my change will take precedence? That seems wrong.
Maybe one should take the larger of the server and client times?
I know nothing about this topic, but the server also has the version id of the last state (that you fetch) before disconnect. Your update can then be placed on top of that version and not some version far in the future when you reconnect.
Big shout out to tldraw and whatever they do under the hood to keep it as lightweight and open to integrations as it is - it just handles a tonne of content in group sessions without complaining. Somehow it's the perfect mix of simplicity and power features.