I love Elm but I think it's pretty clear that Evan is effectively declaring it as abandonware because he couldn't figure out a business model to sustain him.
What’s Evan Working On?
Sounds like he's talking to a handful of companies that he knows use Elm and doing some tinkering without any defined objectives.
I’m waiting for the same signal. There are essentially 2 vastly different states of the world depending on whether GPT-5 is an incremental change vs a step change compared to GPT-4.
Would you be interested in merging with Vanna in some way?
You’re ahead of us in terms of interface but we’re ahead of you in terms of adoption (because of specific choices we’ve made and partnerships we’ve done).
Thank you, we have not done that comparison yet, but we will check those 2 out to learn more. We calculated the accuracy with a test data set that is part of the repo; we will see how that compares with the others.
Yep! There's a streaming API. It's a little more technically involved, but you can stream back at the point at which an application "halts", meaning you pause and give control back to the user. It only updates state after the stream completes.
Then you would call the `application.stream_result()` function, which gives you back a container object that you can stream to the user:
streaming_result_container = application.stream_result(...)
action_we_just_ran = streaming_result_container.get()
print(f"getting streaming results for action={action_we_just_ran.name}")

for result_component in streaming_result_container:
    print(result_component['response'])  # this assumes you have a response key in your result

# get the final result once the stream completes
final_state, final_result = streaming_result_container.get()
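To make the "state only updates after the stream completes" behavior concrete, here's a plain-Python sketch of the same pattern. This is NOT Burr's actual implementation, and `StreamingResultContainer` here is a hypothetical stand-in: just an iterator that yields chunks as they arrive and only exposes the final accumulated result once iteration finishes.

```python
class StreamingResultContainer:
    """Hypothetical sketch of a streaming-result container (not Burr's API)."""

    def __init__(self, chunks):
        self._chunks = chunks
        self._final = None
        self._done = False

    def __iter__(self):
        pieces = []
        for chunk in self._chunks:
            pieces.append(chunk["response"])
            yield chunk  # hand each chunk to the caller as it arrives
        # the final result only exists after the stream is exhausted
        self._final = {"response": "".join(pieces)}
        self._done = True

    def get(self):
        # before iteration finishes there is no final result yet
        return self._final if self._done else None


container = StreamingResultContainer(
    {"response": word} for word in ["Hello, ", "world", "!"]
)
streamed = [c["response"] for c in container]  # consume the stream
final = container.get()
print(streamed)           # ['Hello, ', 'world', '!']
print(final["response"])  # Hello, world!
```

The point of the pattern is that callers can forward chunks to a frontend as they arrive, while any downstream logic that reads the final state waits until the stream has fully completed.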
It's nice in a web server or a Streamlit app, where you can use streaming responses to connect to the frontend. Here's how we use it in a Streamlit app -- we plan to add a streaming web server example soon: https://github.com/DAGWorks-Inc/burr/blob/main/examples/stre....
I've been looking for something like this! Does it optimize the prompt template for LangChain only or is there a way I can get it to generate a raw system prompt that I can pass to the OpenAI API directly?
Hello, I'm glad you find it useful. I aimed to create something that would serve a purpose. If you can share details about the use case you are trying to solve, I may add a feature to llmdantic to support it. Right now:
After initializing llmdantic, you can get the prompt by running the following:
"""
from llmdantic import LLMdantic, LLMdanticConfig
Not OP, but would it be possible to use a standardized license? Every time a special-purpose license is used for software that gains adoption, the lawyers of hundreds to thousands of different companies must spend a lot of time and iterations with their teams to figure out whether they can actually use this model. There is something magical about the GPL, MIT, Apache, etc. licenses: these lawyers have already opined on them once and no longer create a bottleneck.
I’m trying to solve for this with my project using RAG and (at least based on what people say in Discord), it’s working really well for them:
https://github.com/vanna-ai/vanna