
[Info]: Autogen studio - streaming response to web client #3089

Open
satnair opened this issue Jul 8, 2024 · 2 comments
Labels
studio Related to AutoGen Studio.

Comments


satnair commented Jul 8, 2024

Describe the issue

Hi,

As part of AutoGen Studio, is there support for streaming the response to the web client, rather than flushing the whole response at the end? Any suggestions or guidelines on how to implement this, if it is not already available?

Thanks.


@satnair satnair changed the title from [Info]: Autogen studio - streaming agent output response to [Info]: Autogen studio - streaming response to web client on Jul 8, 2024
victordibia (Collaborator) commented
Hi,

Do you mean streaming responses to a web client other than the AutoGen Studio web client, or streaming responses to the AutoGen Studio UI itself?

Currently, AutoGen Studio supports streaming complete messages from agents (not tokens generated by an LLM) as they become available. It does this using a queue + thread implementation: agents put messages on a shared queue, and a background thread reads from that queue and sends the messages over a socket to the AutoGen Studio UI.
See this video for an example of messages being streamed as agents send them.


I also wrote about some patterns for streaming AutoGen agent messages to your own app here, in case that is useful.
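A minimal sketch of that queue + background-thread pattern, in case it helps. The `websocket` object and the `on_agent_message` callback are hypothetical stand-ins, not AutoGen Studio's actual classes:

```python
import json
import queue
import threading

message_queue: "queue.Queue[dict]" = queue.Queue()

def on_agent_message(message: dict) -> None:
    """Hypothetical hook: called whenever an agent produces a complete message."""
    message_queue.put(message)

def stream_worker(websocket) -> None:
    """Background thread: drain the queue and push each message to the UI."""
    while True:
        msg = message_queue.get()        # blocks until a message is available
        if msg is None:                  # sentinel -> agent run has finished
            break
        websocket.send(json.dumps(msg))  # forward the complete message to the client

# Start the drain thread before kicking off the agent workflow, e.g.:
# threading.Thread(target=stream_worker, args=(ws,), daemon=True).start()
# ... run the agents, calling on_agent_message for each message ...
# message_queue.put(None)  # signal completion when the run ends
```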

@victordibia victordibia added the studio Related to AutoGen Studio. label Jul 8, 2024

satnair commented Jul 9, 2024

@victordibia Thanks so much for the details. Yes, it is definitely helpful.

I am looking to stream responses to the AutoGen Studio UI itself as the tokens are generated by the LLM, so that the responses are printed on the UI incrementally, rather than waiting for an agent's whole response to be flushed to the UI.

It would also improve the user experience: the user sees output being generated instead of just watching the "Agents working on it" spinner. Any suggestions on how to implement this in this framework would be helpful. Thank you.
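For reference, a rough sketch of what token-level streaming could look like if the underlying LLM call were exposed; AutoGen Studio does not provide this today, so this uses the OpenAI streaming API purely as an illustration, pushing each token delta onto the same shared queue described above so the UI could render it incrementally:

```python
import queue

from openai import OpenAI

client = OpenAI()

def stream_tokens(prompt: str, message_queue: "queue.Queue[dict]") -> None:
    """Push each token delta onto the shared queue as the LLM generates it."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model name is illustrative only
        messages=[{"role": "user", "content": prompt}],
        stream=True,  # yields incremental chunks instead of one final response
    )
    for chunk in response:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk carries no content
            message_queue.put({"type": "token", "content": delta})
    message_queue.put({"type": "done"})  # tell the UI the turn is complete
```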
