1. Find, annotate, aggregate, organize, and summarize all of my knowledge from notes
2. A Google substitute with direct answers in place of SEO junk text and countless ads
3. Writing boilerplate code, especially in unfamiliar languages
4. Dynamic, general, richly nuanced multimodal content moderation without the human labor bill
5. As an extremely effective personal tutor for learning nearly anything
I view AI as commoditizing general intelligence. You can supply it, like turning on the tap, wherever intelligence helps. I inject intelligence to moderate harassment in Discord messages, detect when my 3D prints fail, filter fluff from articles, clean up unstructured data, flag inappropriate images, etc. (All with the same model!) The world is overwhelmingly starved of intelligence. The extremely limited supply we have of this scarce resource (via humans) is woefully insufficient, and often extreme overkill where it's deployed. I now have access to a pennies-on-the-dollar supply of (low-to-mediocre quality) intelligence. Bet that I'll use it anywhere possible to unlock personal value and free up my own intelligence for use where it's actually needed.
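To make the "same model, many taps" idea concrete, here's a minimal sketch. Everything here is illustrative: `call_model` is a hypothetical stand-in for whatever chat-completion API you actually use, and the task instructions are made up. The point is that each use case is just a different instruction wrapped around one general-purpose model.

```python
# One general-purpose model, many "taps": each use case is just a
# different instruction routed through the same underlying model call.

TASKS = {
    "moderate": "Does this Discord message contain harassment? Answer YES or NO.",
    "summarize": "Strip the fluff from this article; keep only the substance.",
    "clean": "Normalize this unstructured record into tidy JSON.",
}

def call_model(instruction: str, content: str) -> str:
    # Hypothetical stub: swap in a real chat-completion API call here.
    return f"[model ran instruction on {len(content)} chars]"

def apply_intelligence(task: str, content: str) -> str:
    """Route any task through the same general-purpose model."""
    return call_model(TASKS[task], content)
```

Only the instruction changes per task; the model behind every tap is the same cheap one.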
This sounds compelling, but where I always get stuck is trusting what the LLM/agent spits back out. Every time I've tried it for one of the use cases you mentioned and then actually dug into the sources it may or may not cite, the output is almost always highly imprecise, missing really important details, or straight up lying or hallucinating.
How do you get around this issue?
Granted, on (3) you can verify the output yourself by running the code, so trust/accuracy isn't as much of an issue there, though it's still annoying when things don't work.
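That verify-by-running loop for (3) can itself be automated. A rough sketch of the idea, treating model output as untrusted code to run in a subprocess and check (the `generated` string below is a hypothetical stand-in for actual model output):

```python
# Treat model-generated code as untrusted: run it in a subprocess
# and accept it only if it exits cleanly and passes your own check.
import subprocess
import sys

generated = "print(sum(range(10)))"  # hypothetical model output

proc = subprocess.run(
    [sys.executable, "-c", generated],
    capture_output=True, text=True, timeout=10,
)
trusted = proc.returncode == 0 and proc.stdout.strip() == "45"
```

A subprocess with a timeout isn't a real sandbox, but it at least turns "does this code work?" into a mechanical check instead of a judgment call.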
Frame your question in human terms. LLM -> employee, hallucination -> false belief, etc. Same hiring problems. Same solutions.
You have a problem. The candidate must reliably solve it. What are their skills, general aptitudes, and observed reliability for this problem? Set them up to succeed, but move on if you can't trust them to meet the role's responsibilities. We are all flawed, and that's the nature of uncertainty when working with others.
Past that, there’s little situational advice that one can give about a general intelligence. If you want specific advice, give your specific attempt at a solution!