zainhoda's comments | Hacker News

What's The State Of Elm?

I love Elm, but I think it's pretty clear that Evan is effectively declaring it abandonware because he couldn't figure out a business model to sustain himself.

What’s Evan Working On?

Sounds like he's talking to a handful of companies that he knows use Elm and doing some tinkering without any defined objectives.


I’m waiting for the same signal. There are essentially 2 vastly different states of the world depending on whether GPT-5 is an incremental change vs a step change compared to GPT-4.


Would you be interested in merging with Vanna in some way?

You’re ahead of us in terms of interface, but we’re ahead of you in terms of adoption (because of specific choices we’ve made and partnerships we’ve formed).


Nice job getting something released! How does this compare to the other similar open source solutions like Vanna AI and DataHerald?


Thank you, we have not done that comparison yet, but we will check these two out to learn more. We calculated the accuracy with a test data set that is part of the repo; we will see how that compares with the others.


Looks really interesting! I saw something in the code about streaming. Could you explain that a bit more?


Yep! There's a streaming API. It's a little more technically involved, but you can stream back at the point at which an application "halts", meaning you pause and give control back to the user. It only updates state after the stream completes.

Technical details follow:

You can define an action like this:

    import openai
    from typing import Generator, Tuple

    from burr.core import State
    from burr.core.action import streaming_action

    @streaming_action(reads=["prompt"], writes=["response"])
    def streaming_chat_call(state: State, **run_kwargs) -> Generator[dict, None, Tuple[dict, State]]:
        client = openai.Client()
        response = client.chat.completions.create(
            ...,
            stream=True,
        )
        buffer = []
        for chunk in response:
            delta = chunk.choices[0].delta.content
            if delta is None:  # the final chunk carries no content
                continue
            buffer.append(delta)
            yield {'response': delta}  # stream each piece back as it arrives
        full_response = ''.join(buffer)
        # once the stream completes, return the final result and the updated state
        return {'response': full_response}, state.append(response=full_response)
Then you would call the `application.stream_result()` function, which gives you back a container object that you can stream to the user:

    streaming_result_container = application.stream_result(...)
    action_we_just_ran = streaming_result_container.get()
    print(f"getting streaming results for action={action_we_just_ran.name}")

    for result_component in streaming_result_container:
        print(result_component['response']) # this assumes you have a response key in your result

    # get the final result
    final_state, final_result = streaming_result_container.get()
It's nice in a web server or a Streamlit app, where you can use streaming responses to connect to the frontend. Here's how we use it in a Streamlit app -- we plan to add a streaming web server soon: https://github.com/DAGWorks-Inc/burr/blob/main/examples/stre....
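
For example, here's a minimal sketch of exposing the container through a FastAPI streaming endpoint. This isn't the Burr example itself: `build_application()` is a hypothetical helper, and the `halt_after`/`inputs` arguments to `stream_result` are assumptions based on the snippet above.

    # Sketch only -- build_application() and the stream_result arguments
    # are assumptions, not the actual Burr example.
    from fastapi import FastAPI
    from fastapi.responses import StreamingResponse

    app = FastAPI()

    @app.get("/chat/stream")
    def chat_stream(prompt: str):
        # build the Burr application elsewhere, then stream the action's output
        streaming_result_container = build_application().stream_result(
            halt_after=["streaming_chat_call"],
            inputs={"prompt": prompt},
        )

        def token_stream():
            for result_component in streaming_result_container:
                yield result_component['response']  # forward each chunk to the client

        return StreamingResponse(token_stream(), media_type="text/plain")

The same generator pattern should also drop into a Streamlit app via `st.write_stream`.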


I've been looking for something like this! Does it optimize the prompt template for LangChain only or is there a way I can get it to generate a raw system prompt that I can pass to the OpenAI API directly?


Hello, I'm glad you find it useful. I aimed to create something that would serve a real purpose. If you can provide details about the use case you're trying to solve, I may add a feature to llmdantic to support it. Right now:

After initializing llmdantic, you can get the prompt with the following code:

""" from llmdantic import LLMdantic, LLMdanticConfig

from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

config: LLMdanticConfig = LLMdanticConfig( objective="Summarize the text", inp_schema=SummarizeInput, out_schema=SummarizeOutput, retries=3, )

llmdantic = LLMdantic(llm=llm, config=config)

input_data: SummarizeInput = SummarizeInput( text="The quick brown fox jumps over the lazy dog." )

prompt: str = llmdantic.prompt(input_data) """

But here you need to provide a LangChain LLM model. If you don't want to use one, you can use the following code instead:

""" from llmdantic.prompts.prompt_builder import LLMPromptBuilder

from llmdantic.output_parsers.output_parser import LLMOutputParser

output_parser: LLMOutputParser = LLMOutputParser(pydantic_object=SummarizeOutput)

prompt_builder = LLMPromptBuilder( objective="Summarize the text", inp_model=SummarizeInput, out_model=SummarizeOutput, parser=output_parser, )

data: SummarizeInput = SummarizeInput(text="Some text to summarize")

prompt = prompt_builder.build_template()

print(prompt.format(input=data.model_dump())) """
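
If the goal is to skip LangChain at call time, the formatted string can be sent straight to the OpenAI client. A rough sketch, not part of llmdantic itself (the model name here is just an example):

    # Sketch: send the prompt built above directly to the OpenAI API.
    # The model name is illustrative; use whichever chat model you prefer.
    from openai import OpenAI

    client = OpenAI()
    raw_prompt = prompt.format(input=data.model_dump())

    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": raw_prompt}],
    )
    print(completion.choices[0].message.content)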

But this still uses LangChain for building the prompt itself. If you have any questions, feel free to ask; I'll be happy to help.


Of all the “carrots” and “sticks” that companies are offering for RTO, I think I like this one the best.


Very cool. Would the license allow for use with Vanna? https://github.com/vanna-ai/vanna


Yes please do! Looks awesome, would love to help any way I can as well.


Not OP, but would it be possible to use a standardized license? Every time a special-purpose license is used for software that gains adoption, the lawyers of hundreds to thousands of different companies must spend a lot of time and iterations with the team to figure out whether they can actually use it. There is something magical in the GPL, MIT, Apache, etc. licenses, because these lawyers have already opined on them once and no longer create a bottleneck.


Founder of Vanna AI here -- appreciate the link and I agree, we're solving slightly different problems.


I’m trying to solve for this with my project using RAG, and (at least based on what people say in Discord) it’s working really well for them: https://github.com/vanna-ai/vanna

