I am a founder of a company that generates LLM-generated output (the users know that). I am curious if folks would prefer we make it more "human-like", or do you prefer we just add more disclaimers of the text being LLM generated?
Of course, do assume that the content itself is accurate. The question is on the style of output
My cynical take is that HR teams don't know how to manage or engage remote teams. Instead of picking up that skill, they are forcing a return to office.
I don't follow. If using a third party LLM, there is a risk of prompt injection and unless there are advances I haven't heard of, it's not something they can fix?
1. I agree with your point that Prompt Injection can still affect the consumer of a third party LLM
2. I prefer to categorize it as a supply chain security issue, since the vulnerability is with a software provider that you are consuming.
(Author of the newsletter here)
It's early days, but the simplest use case has been to improve employee productivity (Github Copilot, ChatGPT etc.). The Stripe CEO just tweeted that over half of their employees are using an internal LLM tool they built (folks who build internal tooling know how hard it is to drive adoption to a non-mandatory tool): https://twitter.com/patrickc/status/1681699442817368064?s=20
There are other companies which are doing some crazy experimental things which may have a large impact. For instance, Truveta is cleaning up on millions of medical records, training a model on that data and using that to drive research about patient care. Too early to tell if LLMs will actually transform companies beyond slight bumps in productivity, but to me, it feels like the cloud computing moment from 12-15yrs ago.
> It's early days, but the simplest use case has been to improve employee productivity
Does anyone know if the impact has been properly measured? It’s one thing to say that “developers are more productive” and another to really have faster feature delivery (or any other metric).
How does this address the privacy concerns? OpenAI could still accidentally provide your data to other users, they could retain, sell or misuse it despite agreeing not to, or a rogue employee or an intruder could do so, or they could be forced to do so by a court order, etc.
There are privacy concerns with using ChatGPT, as data is collected by openai - opt out. Using the API has less privacy concerns as it is not used for training by openai.
The CI service is free, with some limitations on how long the findings stay on the dashboard, SSO integration and maybe a few others. The paid version was $40/usr/mo the last time I checked
Once we figured it out, it takes us a few minutes to onboard a new repo to Semgrep
I do the same with Whatsapp using a neat trick someone told me about.
Create a group(call it "Share with self" or similar) with you and another person. Then, remove that person from the group.. that's it! Functionally it's the same as sending yourself Slack messages, but on WA, which is good
I guess the idea is that TLS is sufficiently complicated that you can take tangents during the interview and establish if the candidate can understand and communicate complex concepts