Is it sustainable, if it took millions to bootstrap? I mean I guess if it can just keep running, but the investors might not agree if they need to wait 15 years to make their money back.
It would be interesting to see the performance difference from a wasm version, but in the end I found the human(ish) readable expression to be quite useful too.
Originally I created an interpreter for this kind of code as a texture maker for code-golfed JavaScript games. https://github.com/Lerc/stackie
There's potential for a WASM implementation to be both smaller than the small version and faster than the fast version.
I too like F-Droid but have found it too difficult to figure out how to publish packages for it. Last time I looked, all the documentation covered toolchains I don't use. I see that there are Godot games on F-Droid now, so I hope that means there is an easy path from a Godot project to an F-Droid package. Also, is there any support for PWAs on F-Droid?
I had to write (heavily improve? I don't remember) the React Native guide when I released my first RN app there (non-RN apps were easy). Though I assume/hope things are better now?
You need a chip for VGA->HDMI, but they exist and you can buy simple adapters. I think HDMI->VGA adapters might be cheaper (I have one in a drawer somewhere). One of the trickier points with HDMI is that it is stricter about what counts as a valid image and makes odd assumptions, like all your pixels being the same width.
A CRT can make do with signals that say "go to the next line now" and "go back to the top now", and then just outputs whatever is coming in on the colour signal. It really means there is no concept of a display mode; it's all just in the timing of the signals on the wires. Plenty of modern hardware with digital internals looks at a lot of that and just says "that's not normal, so I quit".
Analog devices may make a high-pitched whine and then explode, but at least they'll attempt the task they have been given.
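To make the "it's all in the timing" point concrete, here is a rough sketch using the classic 640x480@60 numbers (standard VESA-era values; the code is just an illustration, not any particular implementation):

```python
# Rough sketch: a "display mode" on the analog side is nothing more than a
# handful of timing numbers; the CRT just follows hsync/vsync as they arrive.

PIXEL_CLOCK_HZ = 25_175_000  # 25.175 MHz dot clock for 640x480@60

# horizontal: visible pixels, front porch, sync pulse, back porch
H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 640, 16, 96, 48
# vertical: visible lines, front porch, sync pulse, back porch
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 480, 10, 2, 33

h_total = H_VISIBLE + H_FRONT + H_SYNC + H_BACK   # 800 clocks per scanline
v_total = V_VISIBLE + V_FRONT + V_SYNC + V_BACK   # 525 lines per frame

line_rate = PIXEL_CLOCK_HZ / h_total              # ~31.47 kHz: "next line now"
frame_rate = line_rate / v_total                  # ~59.94 Hz: "back to the top now"

print(f"scanline rate: {line_rate / 1e3:.2f} kHz")
print(f"refresh rate:  {frame_rate:.2f} Hz")
```

Change any of those numbers and an old CRT will happily try to follow along; a digital sink is far more likely to reject the signal outright.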
I have never understood why failure at the strawberry question is seen as a compelling argument about the limits of AI. The AIs that suffer from this problem have difficulty counting; that has never been denied. Those AIs also do not see the letters of the words they are processing, so counting the letters in a word is a task it is quite unsurprising they fail at. I would say it is more surprising that they can perform spelling tasks at all. More importantly, the models in which such weaknesses became apparent all come from the same timeframe in which models advanced so much that those weaknesses only became visible after so many other, greater weaknesses had been overcome.
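As a rough illustration of the "they don't see letters" point (this assumes the tiktoken package and a GPT-4-era vocabulary, purely as an example; other tokenizers behave similarly):

```python
# Minimal sketch: the model is fed opaque token IDs, not the ten letters of
# "strawberry", so letter counting asks about structure it never directly sees.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era BPE vocabulary

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([tid]) for tid in token_ids]

print("token ids:", token_ids)   # a few integers, not individual characters
print("pieces:   ", pieces)      # whatever sub-word chunks the BPE chose
print("actual r count:", word.count("r"))
```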
People didn't think that planes flying so high that pilots couldn't breathe exposed a fundamental limitation of flight, just that their success had revealed the next hurdle.
The assertion that an LLM is X and therefore not intelligent is not a useful claim to make without both proof that it is X and proof that X is insufficient for intelligence. You could equally say brains are interconnected cells that send pulses at intervals dictated by a combination of the pulses they sense, and that there is nothing intelligent about that. The premises must be true, and you have to demonstrate that the conclusion follows from those premises. For the record, I think your premises are false and your conclusion doesn't follow.
Without a proof, you could hypothesise reasons why such a system might not be intelligent and come up with an example of a task that no system satisfying the premises could accomplish. While that example remains unsolved, the hypothesis remains unrefuted. What would you suggest as a test that shows a problem such a machine could not solve? It must be solvable by at least one intelligent entity, to show that it is solvable by intelligence, and it must be undeniable when the problem has been solved.
>The AIs that suffer from this problem have difficulty counting.
Nope, it's not a counting problem, it's a reasoning problem. Thing is, no matter how much hype they get, these AIs have no reasoning capabilities at all, and they can fail in the silliest ways. Same as with Larry Ellison: don't fall into the trap of anthropomorphizing the AI.
Is that like 80% LLM slop? The allusion to the failure to improve productivity in competent developers was cited in the initial response.
The strawberry test exposes one of the many subtle problems inherent in the tokenization approach LLMs use.
The clown car of PhDs may be able to entertain the venture capital folks for a while, but eventually a VR girlfriend chatbot convinces a kid to kill themselves, like last year.
Again, cognitive development, like ethics development, is currently impossible for LLMs, as they lack any form of intelligence (artificial or otherwise). People have patched directives into the models, but these weights are likely fundamentally statistically insignificant due to cultural sarcasm in the data sets.
You suspect my words of being AI generated while at the same time arguing that AI cannot possibly reason.
It seems like you see AI where there is none, and this compromises your ability to assess the limitations of AI.
You say that LLMs cannot have any form of intelligence, but for some definitions of intelligence it is obvious that they do. Existing models are not capable in all areas, but they have some abilities. You are asserting that they cannot be intelligent, which implies that you have a different definition of intelligence and that LLMs will never satisfy that definition.
What is that definition of intelligence? How would you prove something does not have it?
That is a very open-ended detractor question, and it is philosophically loaded with taboo violations of human neurology. i.e. it could seriously harm people to hear my opinion on the matter... so I will insist I am a USB-connected turnip for now ... =)
"How would you prove something does not have it?"
A receiver operating characteristic no better than chance, within a truly randomized data set; i.e. a system incapable of knowing how many Rs are in Strawberry at the token level... is also inherently incapable of understanding what a Strawberry means in the context of perception (currently not possible for an LLM).
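As a toy illustration of what "no better than chance" means as a number (assuming numpy and scikit-learn; this only demonstrates the metric, not any particular model):

```python
# Toy sketch: scores that carry no information about the labels land at an
# ROC AUC of ~0.5, which is what "no better than chance" means operationally.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=10_000)   # truly random binary labels
y_random = rng.random(size=10_000)         # scores unrelated to the labels

print(f"chance-level AUC: {roc_auc_score(y_true, y_random):.3f}")   # ~0.5

# By contrast, scores correlated with the labels pull the AUC toward 1.0.
y_informative = y_true + 0.5 * rng.standard_normal(10_000)
print(f"informative AUC:  {roc_auc_score(y_true, y_informative):.3f}")
```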
>A receiver operating characteristic no better than chance, within a truly randomized data set; i.e. a system incapable of knowing how many Rs are in Strawberry at the token level... is also inherently incapable of understanding what a Strawberry means in the context of perception (currently not possible for an LLM).
This is just your claim, restated. In short, it is saying they don't think because they fundamentally can't think.
There is no support as to why this is the case. Any plain assertion that they don't understand is unprovable, because you can't directly measure understanding.
Please come up with just one measurable property that you can demonstrate is required for intelligence that LLMs fundamentally lack.
We are at a logical impasse... i.e. a failure to understand that the noted ROC curve is often a metric that matters in ML development, and that LLMs are trivially broken at the tokenization layer:
Note: introducing a straw-man argument and/or bot slop on an unrelated topic is silly. My anecdotal opinion does not really matter on the subject of algorithmic performance standards. yawn... super boring like ML... lol
There is an interesting aspect of this behaviour used in the byte latent transformer model.
Encoding tokens from source text can be done in a number of ways: byte pair encoding, dictionaries, etc.
You can also just encode text into tokens (or directly into embeddings) with yet another model.
The problem with variable-length tokens is deciding how many characters to put into any particular token, and then, because that token must represent the text when you use it for decoding, where to store the count of characters held in any particular token.
The byte latent transformer model solves this by using the entropy of the next character. A small character model receives the history character by character and predicts the next one. If the entropy spikes from low to high, that is counted as a token boundary. Decoding the same characters from the latent one at a time reproduces the same sequence and deterministically spikes at the same point in the decoding, indicating the end of the token without its length having to be explicitly encoded.
(disclaimer: My layman's view of it anyway, I may be completely wrong)
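Here is a toy sketch of that idea as I understand it (a bigram character model and a fixed entropy threshold stand in for the small learned model and the spike detection, so treat it as illustrative only):

```python
# Toy sketch of entropy-based patching: a small next-character model scores
# each position, and high entropy at a position marks a patch/token boundary.
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat. the cat ate the rat."

# "Small character model": bigram counts over the corpus.
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def next_char_entropy(prev: str) -> float:
    """Shannon entropy (bits) of the model's next-character distribution."""
    counts = bigrams[prev]
    total = sum(counts.values())
    if total == 0:
        return 8.0  # unseen context: treat as maximally uncertain
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_patches(text: str, threshold: float = 1.5) -> list[str]:
    """Start a new patch whenever the next-character entropy crosses the threshold."""
    patches, current = [], text[0]
    for prev, ch in zip(text, text[1:]):
        if next_char_entropy(prev) > threshold:
            patches.append(current)
            current = ""
        current += ch
    patches.append(current)
    return patches

print(entropy_patches("the cat sat on the mat."))
```

Because the decoder runs the same character model over the same characters, it sees the entropy cross the threshold at exactly the same positions, so the patch lengths never need to be stored.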