Hacker News

First, thanks for taking the time.

Alright, I realize that what I wrote out on the spot was quite hand-wavy; the devil is in the details. I'm also honestly a bit perplexed that you're so insistent on not looking at the (terrible) paper while still being willing to invest in the discussion at all.

Take the spam comments, for example: in the paper I mention a construction from a project called Vac that deals with this. Bandwidth can be throttled per node per epoch by having each message leak a secret share of a key that has collateral behind it, so exceeding the rate limit reveals the key and forfeits the collateral.
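To make that concrete, here is a minimal sketch of the leak-a-share idea (in the style of Vac's rate-limiting construction, but with illustrative names and parameters, not the actual Vac API): each epoch the node commits to a degree-1 polynomial whose constant term is its collateralised key; every message published reveals one share, so a single message per epoch reveals nothing, but two messages let anyone interpolate the key.

```python
# Sketch: per-epoch secret-share leaking for rate limiting.
# f(x) = key + a*x over a prime field; one share per message.
# Two shares in one epoch => key recoverable => collateral slashable.
# All names/parameters here are illustrative assumptions.

P = 2**61 - 1  # prime field modulus, chosen arbitrarily for the sketch

def make_epoch_poly(secret_key: int, epoch_nonce: int):
    """Degree-1 polynomial f(x) = key + a*x (mod P), 'a' fixed per epoch."""
    a = (secret_key * 31 + epoch_nonce) % P  # stand-in for a real PRF
    return lambda x: (secret_key + a * x) % P

def share_for_message(poly, message: str):
    """Publishing a message reveals the share (x, f(x))."""
    x = (hash(message) % (P - 1)) + 1  # x != 0, so the share never leaks f(0)
    return (x, poly(x))

def recover_key(s1, s2):
    """Lagrange interpolation at x = 0 from two distinct shares."""
    (x1, y1), (x2, y2) = s1, s2
    a = ((y2 - y1) * pow(x2 - x1, -1, P)) % P
    return (y1 - a * x1) % P

key = 123456789
f = make_epoch_poly(key, epoch_nonce=42)
s1 = share_for_message(f, "first message this epoch")
s2 = share_for_message(f, "second message this epoch")
assert recover_key(s1, s2) == key  # rate limit breached: key is exposed
```

One share gives a line through an unknown point, so it reveals nothing about f(0); the second share pins the line down, which is exactly the "throttle by leaking shares" mechanic.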

Regarding shortest path: interesting, I thought it was. Guess I was wrong. Still, it is a valid way to do routing, and it is how I'd intended to do it in the prototype network. The big idea behind versioning the operators is to be able to converge on the desired semantics.
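For reference, shortest-path routing over a gossip-learned topology is just Dijkstra on a weighted adjacency map. A minimal sketch (the topology and names are made up for illustration; this is not code from the paper):

```python
# Dijkstra's shortest path over a node's local view of the topology.
import heapq

def shortest_path(graph, src, dst):
    """Return (cost, path) from src to dst, or (inf, []) if unreachable."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:  # reconstruct path by walking predecessors back
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []

topology = {
    "a": {"b": 1, "c": 4},
    "b": {"c": 1, "d": 5},
    "c": {"d": 1},
}
assert shortest_path(topology, "a", "d") == (3, ["a", "b", "c", "d"])
```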

The reason I focus on the proof-of-trust part is that it is how a bunch of decisions are made (such as how much bandwidth to allocate). You still need a bunch of other constructions to actually use this data.

The anti-sybil mechanism is not just anti-spam; it is also about finding correlated intent and packaging that whole echo chamber up as an individual, so that you can rate-limit it or adjust its visibility. Major news from an uninteresting group may be more interesting than minor news from an interesting group.

Regarding "making up words": there are precise definitions that I didn't reproduce, because I figured you would want to hear a different narration than the one presented in the paper / the channel.

The way nodes participate in the network is by pinning content addresses and gossiping metadata about pins. At any given time the "commons" is made explicit, because it is what a bunch of nodes are pinning. Nodes build metadata on top of this bottom element (the commons); when consensus is reached, i.e. there is some metadata that every node has join-ed into its state, we have a meet consensus and the pin can be moved.
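One way to read that paragraph is as a join-semilattice of metadata sets: each node's state is the join (union) of everything it has seen, and "meet consensus" is the metadata present in every node's state (the intersection). A minimal sketch under that reading, with made-up metadata names, not structures from the paper:

```python
# Pin-consensus as lattice operations over sets of metadata.
# join  = merge gossiped metadata into a node's state (set union)
# meet  = what *every* node has joined (set intersection)

def join(state: set, metadata: set) -> set:
    """Lattice join: absorb gossiped metadata into local state."""
    return state | metadata

def meet_consensus(states) -> set:
    """Lattice meet across nodes: metadata all nodes agree on."""
    return set.intersection(*states)

nodes = [set(), set(), set()]
nodes[0] = join(nodes[0], {"pin:QmA", "vote:move"})
nodes[1] = join(nodes[1], {"pin:QmA"})
nodes[2] = join(nodes[2], {"pin:QmA", "vote:move"})

# Only the pin itself is in everyone's state so far.
assert meet_consensus(nodes) == {"pin:QmA"}

# Once the last node joins the vote, the meet grows and the pin can move.
nodes[1] = join(nodes[1], {"vote:move"})
assert meet_consensus(nodes) == {"pin:QmA", "vote:move"}
```

Because join is idempotent, commutative, and associative, gossip order and duplication don't matter, which is what makes the "everyone has join-ed it" condition well-defined.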

You are still thinking in terms of first-order BFT, which is why your criticism is a bit off base. The network is not trying to start off automated; it starts off manual, and discovered patterns are automated once they are proven correct.

If you read (and understand) how the amoveo oracles work, and look into the properties of prediction markets, then you can see that this is actually a social network built on a data-interchange format that is optimized for building authenticated data structures (i.e. amenable to automating any first-order problem that can be solved with MPC).

Edit: W.r.t. Datafun "trying to" do type inference: Datafun is a research project; he has found most of the type-inference rules, but some are still missing. The language is not really usable as-is, but it "tries to" be. Carrying around two typing contexts (one for lattice types and another for algebraic types) and having modal subtyping and such things is complicated and hard to do well. Datalisp is less interested in such things and is closer to being a probabilistic Datalog. Again, the versioning of operators is so that we can converge on desired semantics. The MVP is going to use Bayes factors for inferences and signal:noise odds as priors attached to facts; this is just for pragmatic reasons (easy to code / reason about), and it is probably insufficient for the economic model and the composition of prediction markets.
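The MVP inference rule described above is small enough to sketch outright: a fact carries a prior expressed as signal:noise odds, and each piece of evidence multiplies those odds by a Bayes factor (likelihood ratio). The specific numbers and fact names below are made up for illustration:

```python
# Odds-form Bayesian updating: posterior odds = prior odds * Bayes factor.

def update_odds(prior_odds: float, bayes_factor: float) -> float:
    """Fold one piece of evidence into a fact's signal:noise odds."""
    return prior_odds * bayes_factor

def odds_to_probability(odds: float) -> float:
    """Convert odds o back to a probability o / (1 + o)."""
    return odds / (1.0 + odds)

# A fact starts at 1:4 odds of being signal rather than noise.
odds = 0.25
# Two independent corroborations, each 3x likelier under "signal".
for bf in (3.0, 3.0):
    odds = update_odds(odds, bf)

assert odds == 2.25  # 0.25 * 3 * 3
assert abs(odds_to_probability(odds) - 2.25 / 3.25) < 1e-12
```

The appeal of the odds form is exactly the pragmatic point above: updates compose by plain multiplication, so they are easy to code and easy to reason about, even if that independence assumption is too weak for the economic model.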



