I'm always curious how the licensing aspects will play out. It's quite clear to me that most of the LLMs contain copyrighted material without correct rights. And then they turn around and put up some restrictive licensing on it (for example the Salesforce model here).
I know they add valuable input to it, but CC-BY-NC is really rubbing me the wrong way.
It's really two different things. If using data for training is either fair use or not a use at all (I believe it is; for all intents and purposes it's the same as reading it), then the copyright on the training data is irrelevant.
Whether weights can be copyrighted at all (which is the basis of these licenses) is also unclear. Again, I think they can be: for any nontrivial model release they are just as much a creative work as a computer program (though what I really care about is being able to enforce copyleft on them).
Also, all these laws work to the benefit of whoever has the deepest pockets, so Salesforce will win against most others by virtue of that alone, regardless of how the law shakes out.
I'd be really impressed with Mozilla if they could do the entire thing (llamafile + llamaindex) in one, or even two, files. Having to set up a separate Python install just for this task and pull in all the llamaindex Python deps defeats the point of using llamafile.
I'd love it if Firefox would locally index the text content of each website I visit and let me RAG-search that database. I often want to revisit a website from weeks earlier but can't find it again.
You might not want them to have that information, but I think Google's history search now supports that for Chrome users: https://myactivity.google.com/myactivity
I built a Chrome extension to do this a year ago: [0]
Here is the list of technological problems:
1. When is a page ready to be indexed? Many websites are dynamic.
2. How to find the relevant content? (To avoid indexing noise)
3. How to keep acceptable performance? Computing embeddings on every page is enough to turn a laptop into a small helicopter, fans and all. (I used 384 as the embedding dimension: below that, too imprecise; above, too compute-heavy.)
4. How to chunk a page? It is not enough to split the content into sentences; you must add context to them (see the sketch after this list).
5. How to rank the results of a search? PageRank is not applicable here.
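For point 4, a minimal sketch of context-carrying chunking. The function and the title/heading extraction it assumes are hypothetical; the idea is just to prepend page-level context so an isolated chunk still means something when embedded:

```python
# Hypothetical sketch: prepend the page title and nearest heading to each
# chunk so it stays meaningful once it's embedded in isolation.
def chunk_with_context(title, sections, max_chars=500):
    """sections: list of (heading, text) pairs extracted from the page."""
    chunks = []
    for heading, text in sections:
        prefix = f"{title} > {heading}: "
        while text:
            piece, text = text[:max_chars], text[max_chars:]
            chunks.append(prefix + piece)  # context travels with every chunk
    return chunks
```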
I'm working on something like this! It's simple in concept, but there are lots of fiddly bits. A big one is performance (at least, without spending $$$$$ on GPUs). I haven't found much written about how to tune/deploy LLMs on commodity cloud hardware, which is what I'm trying this out on.
You can use ONNX versions of embedding models. Those run faster on CPU.
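A minimal sketch, assuming the Hugging Face Optimum ONNX Runtime backend; all-MiniLM-L6-v2 is just an example 384-dim model:

```python
# Sketch: run a sentence-transformers model through ONNX Runtime on CPU.
# Assumes `optimum[onnxruntime]` and `transformers` are installed.
import torch
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForFeatureExtraction

model_id = "sentence-transformers/all-MiniLM-L6-v2"  # example 384-dim model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForFeatureExtraction.from_pretrained(model_id, export=True)

def embed(texts):
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state        # (batch, seq, dim)
    mask = enc["attention_mask"].unsqueeze(-1).float() # mean-pool real tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

vecs = embed(["hello world", "RAG over my browsing history"])
```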
Also, don't discount plain old BM25 and fastText. For many queries, keyword or bag-of-words search works just as well as fancy 1536-dim vectors.
You can also do things like tokenize your text using the tokenizer that GPT-4 uses (via tiktoken for instance) and then index those tokens instead of words in BM25.
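A quick sketch of that, assuming the rank_bm25 package (the two-document corpus is a stand-in):

```python
# Sketch: BM25 over GPT-4's BPE tokens instead of whitespace-split words.
import tiktoken
from rank_bm25 import BM25Okapi

enc = tiktoken.encoding_for_model("gpt-4")
docs = ["ollama runs LLMs locally", "BM25 is a bag-of-words ranking function"]
bm25 = BM25Okapi([enc.encode(d) for d in docs])    # index token ids, not words
scores = bm25.get_scores(enc.encode("local llm"))  # one score per document
```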
Could you sidestep inference altogether? Just return the top N results by cosine similarity (or full text search) and let the user find what they need?
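Sidestepping the generative step could look like this: embed the query, rank the stored chunk vectors by cosine similarity, and return the top N (a plain-numpy sketch):

```python
# Sketch: no generation at all; just return the N nearest chunks by cosine
# similarity and let the user read them.
import numpy as np

def top_n(query_vec, doc_vecs, n=5):
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))[:n]  # indices of the N most similar chunks
```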
https://ollama.com models also work really well on most modern hardware
I'm running ollama, but it's still slow in the cloud (it's actually quite fast on my M2). My working theory is that memory <-> CPU bandwidth is the bottleneck on standard cloud VMs. I'm looking into vLLM.
And as to sidestepping inference, I can totally do that. But I think it's so much better to be able to ask the LLM a question, run a vector similarity search to pull relevant content, and then have the LLM summarize it all in a way that answers my question.
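A sketch of that flow, assuming the official ollama Python client; the model names are just examples:

```python
# Sketch: retrieve-then-summarize with the ollama Python client.
# Model names are examples; swap in whatever you have pulled locally.
import numpy as np
import ollama

def embed(text):
    return np.array(ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"])

def answer(question, chunks, chunk_vecs, k=3):
    q = embed(question)
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(chunks[i] for i in np.argsort(-sims)[:k])
    resp = ollama.chat(model="llama3", messages=[{
        "role": "user",
        "content": f"Answer from this context only:\n{context}\n\nQuestion: {question}",
    }])
    return resp["message"]["content"]
```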
historious indexes everything, whereas Pinboard (as far as I know) only indexes things you select. I haven't used Pinboard much, so I can't say more than that.