
Static type checking is a kind of formal verification of software - there are formal requirements (the program doesn't go "wrong" in a number of rigorously defined ways) that are automatically checked. And you can certainly do "design and development" together in type-safe languages.
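A minimal illustration (in C, since the rest of the thread discusses it; the function is made up): one of those rigorously defined notions of "wrong" is an argument that doesn't match the declared parameter type, and the checker flags it with no test run at all.

    /* Constraint violation: an int passed where const char * is required.
     * The compiler must diagnose this before the program can ever run. */
    #include <string.h>

    size_t bad(void) {
        return strlen(42);   /* rejected at compile time */
    }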

> Capabilities ... They're not really compatible with C's flat memory model ... Capabilities mean having multiple types of memory

C is not really dependent on a flat memory model - instead, it models memory allocations as separate "objects" (quite reminiscent of "object orientation in hardware", which is yet another name for capabilities), and a pointer into "object" A cannot be offset to point into some distinct "object" B.
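A sketch of what that rules out, assuming the two arrays happen to sit adjacent in memory:

    /* Pointer arithmetic is only defined within a single "object". */
    #include <stdio.h>

    int main(void) {
        int a[4], b[4];
        int *p = a + 4;   /* one-past-the-end of a: still valid to form */
        /* Even if p happens to equal &b[0] numerically, using it to reach
         * b is undefined behavior: p's provenance is the object a, and no
         * amount of offsetting extends that to the distinct object b.
         * Only a flat-memory reading of C would suggest otherwise. */
        printf("%p vs %p\n", (void *)p, (void *)b);
        return 0;
    }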

> A Truly Relational Language

This is broadly speaking how PROLOG and other logic-programming languages work. The foundational operation in such languages is a knowledge-base query, and "relations" are the unifying concept as opposed to functions with predefined inputs and outputs.


Possibly the nearest to applying capabilities to C is pointer authentication: https://lwn.net/Articles/718888/

(This is one of those times where the C memory model as described in the spec is very different from the mental PDP-11 that C programmers actually use to reason about their code.)


CheriBSD is definitely closer: https://www.cheribsd.org/. Or one of the other CHERI projects.

Unix file descriptors, as used from C, are capabilities.
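As a minimal POSIX sketch (any readable file will do for the path): the authority travels with the descriptor itself, not with whatever code happens to hold it.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* A read-only descriptor is an unforgeable token of read-only
         * authority; it can be handed to other code (or other processes),
         * but it never grants more than it was created with. */
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        char buf[64];
        printf("read %zd bytes\n", read(fd, buf, sizeof buf)); /* allowed */

        if (write(fd, "x", 1) < 0)   /* denied: this fd carries no write right */
            perror("write");

        close(fd);
        return 0;
    }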

You can refer to the Linked Open Data (LOD) cloud for a network of openly available RDF datasets. Wikidata currently serves as the unofficial "hub" for the cloud, a role that was formerly played by DBpedia.

This gives me an idea. I'm going to create a super complex and unintuitive programming language where the only error message you get is "no", and call it miniKaren. :-P

> unintuitive programming language where the only error message you get is "no",

You know, "consent" is everything these days. :)

PS. Preferably in written form ...


Does the RISC-V org itself list/document suggested insn-fusion sequences anywhere? I can't quite figure out whether they do, because the technical wiki they use is quite hard to navigate and search, and their unofficial GitHub repos are not much better.

The ISA spec itself suggests one fusion - that of mul and mulh (lower and upper halves of n-bit * n-bit -> 2n-bit product).
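In C that pair shows up whenever you ask for the full double-width product; on RV64 targets, GCC and Clang typically lower the sketch below to exactly the adjacent mulh/mul sequence the spec suggests fusing (codegen naturally varies with compiler and flags):

    #include <stdint.h>

    /* 64x64 -> 128-bit product via the __int128 extension. */
    void fullmul(int64_t a, int64_t b, int64_t *hi, uint64_t *lo) {
        __int128 p = (__int128)a * b;
        *hi = (int64_t)(p >> 64);   /* upper half: mulh */
        *lo = (uint64_t)p;          /* lower half: mul  */
    }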

The only other source I know of is this 2016 paper, which suggested specific macro-op fusions: https://arxiv.org/abs/1607.02318 It predates the B extension, so some of those fusions are dedicated instructions now, but I suppose Xiangshan still implements them for the sake of software that was not compiled with B enabled.


You might find this[0] reddit thread useful.

0. https://old.reddit.com/r/RISCV/comments/1hnm5y1/where_did_th...


That's talking about the list of pseudo-instructions, which are just special-cased assembly mnemonics for a single RISC insn. I'm wondering whether there's also a documented list of suggested fusable sequences, comprising multiple insns each.

You'll find that a lot of them are equivalent to a sequence of more than one instruction.

Refer to the "base instruction" column in the pseudoinstruction table[0].

0. https://github.com/riscv-non-isa/riscv-asm-manual/blob/main/...


OP has enough money to live like an actual Doge [0]. (And get a pet Shiba Inu K-9 while he's at it.)

[0] https://en.wikipedia.org/wiki/Doge_(title)


The really nice thing about this is that the AI can now acquire these newly-decoded texts as part of its training set, and begin learning at a geometric rate.

With our current methods, feeding even fairly small amounts of model output back in as training data leads to declining performance.

Just think of it abstractly. The AI will be trained on the errors the previous generation made. As long as it keeps making new errors each generation, they will tend to multiply.


Degradation of autoregressive models being fed their own unfiltered output is pretty obvious: it's basically noise being injected into the ground-truth probability distribution.
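A toy version of that noise injection, nothing LLM-specific: refit a Gaussian "model" to a finite sample of its own output each generation, and the estimation noise compounds - sigma random-walks away from the ground truth, typically collapsing toward 0 over enough generations. (Build with "cc collapse.c -lm"; the file name is made up.)

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    static double gauss(double mu, double sigma) {   /* Box-Muller sampler */
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return mu + sigma * sqrt(-2.0 * log(u1)) * cos(2.0 * 3.141592653589793 * u2);
    }

    int main(void) {
        double mu = 0.0, sigma = 1.0;   /* generation 0: the ground truth */
        const int N = 100;              /* finite sample per generation */
        for (int gen = 0; gen <= 1000; gen++) {
            if (gen % 200 == 0)
                printf("gen %4d: mu=%+.4f sigma=%.4f\n", gen, mu, sigma);
            double sum = 0.0, sumsq = 0.0;
            for (int i = 0; i < N; i++) {   /* sample the current model... */
                double x = gauss(mu, sigma);
                sum += x;
                sumsq += x * x;
            }
            mu = sum / N;                   /* ...and refit on its own output */
            sigma = sqrt(sumsq / N - mu * mu);
        }
        return 0;
    }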

But. "Our current methods" include reinforcement learning. So long as there's a signal indicating better solutions, performance tends to improve.


Why not just feed it random data? It's so smart that it will figure out which parts are random, so eventually you will generate some good data randomly, and it will feed on it, and become exponentially smarter exponentially fast.

This is actually hilarious and I'm sad you are getting downvoted for it.

errors in => errors out

Don't forget to spice it up with some bias!

https://x.com/i/grok/share/uMwJwGkl2XVUep0N4ZPV1QUx6


But do I want to see ancient programming advice written in Linear B?

> it's possible to run the AI directly in the browser, but for some reason they didn't do it.

Perhaps this is a proof of concept and they will have optional Firefox integration at a later time. Firefox uses local AI for webpage translation already.


A locally run 7B model would consume a lot of resources for most people, but it would be nice as an option.

Today's 1B models are better than the 7B models they currently use.

> The equivalence class of Cauchy sequences is vastly larger and misleading compared to those of integers and rational numbers. You can take any finite sequence and prepend it to a Cauchy sequence and it will represent the same real number. ...

This can be addressed practically enough by introducing the notion of a 'modulus of convergence'.
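Concretely, a constructive Cauchy real packages the sequence with an explicit modulus M : Q+ -> N (in LaTeX, as a sketch):

    % sequence x : N -> Q together with a modulus of convergence M : Q+ -> N
    \forall \varepsilon > 0 \;\; \forall m, n \ge M(\varepsilon) : \quad |x_m - x_n| \le \varepsilon

    % Bishop's "regular" sequences fix the modulus once and for all,
    % i.e. M(\varepsilon) = \lceil 2/\varepsilon \rceil:
    \forall m, n \ge 1 : \quad |x_m - x_n| \le \frac{1}{m} + \frac{1}{n}

With the modulus pinned down like this, prepending an arbitrary finite prefix breaks the stated bound, which is exactly what defuses the objection quoted above.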


Cauchy sequences can be made constructive (providing a nice foundation for numerical analysis); Dedekind cuts are far less amenable to constructive treatment.
