Regarding its “learning” - it is still a model that needs data. The best you can expect is that it will take recorded UI sessions (users actually interacting with the website) for specific tasks and build its scripts from those, and as with any current “large” model it’s not going to update in real time based on user input alone.
Sure, but that’s all in the future. Every selling point of this device is in the future tense. The “model” does not seem to exist; it’s being “worked on”. Their client app was taken apart and there is nothing interesting in it. Their servers were hacked into and made to run Doom, which is funny, and there is no trace of any AI model on them.
One of their former engineers gave a statement that LAM is just a marketing term and nothing like that exists.
If all the selling points are in future tense at what point can we call it a scam?
Edit: also the founder’s previous gig was a crypto scam that also promised AI on the blockchain
There is evidently a LAM of sorts, given the nature of the queries it can answer. It is able to use agents - something like LangChain or ChatGPT tools - to perform tasks that may depend on other tasks.
The problem is their LAM sucks, and is likely no more than a task-builder prompt on GPT (rather than a model specifically tuned for generating these tasks) with LangChain for resolution. They also have limited tooling, and some of it is already broken.
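To make the “task-builder prompt plus tool resolution” theory concrete, here is a minimal sketch of what that architecture could look like. Everything here is hypothetical: the tool names, the JSON schema, and the canned model reply are invented for illustration, and the model call is mocked rather than hitting a real API.

```python
import json

# Hypothetical tool registry - the "limited tooling" the comment mentions.
TOOLS = {
    "play_song": lambda args: f"playing {args['title']}",
    "order_ride": lambda args: f"ride to {args['destination']} requested",
}

def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call. A deployed system would send `prompt`
    # to a chat model prompted to emit a structured tool call.
    return json.dumps({"tool": "play_song", "args": {"title": "Hurt"}})

def run_task(user_request: str) -> str:
    """Prompt the model for a tool call, then dispatch it."""
    call = json.loads(fake_model(f"Turn this request into a tool call: {user_request}"))
    tool = TOOLS.get(call["tool"])
    if tool is None:
        # This branch is where "broken tooling" would surface in practice.
        return "no matching tool"
    return tool(call["args"])

print(run_task("play Hurt by Johnny Cash"))  # -> playing Hurt
```

The point of the sketch is that nothing in this loop requires a new kind of model: a general chat model generating structured tool calls, plus a thin dispatcher, already covers the observed behavior.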
As for it being a scam: I definitely don’t see how you can offer lifetime ChatGPT access with no subscription. So unless they are going to bring in additional revenue somehow, it is effectively a Ponzi scheme.