We can create reusable Gradio applications by leveraging tabs.

Let’s say we are building a platform where we can try different LLMs. We want each LLM to have its own toggleable parameters and outputs. A naive approach would be to select the model from a dropdown, but a dropdown shares one set of components across all models, so any settings you chose for one model are lost the moment you switch to another.

For reference, we are trying to replicate the Azure OpenAI Playground interface.

My solution is to wrap each model's UI in a function that builds a Gradio Tab.

with gr.Blocks() as demo:
    gr.Markdown(
        """
        # Playground
        """
    )
    create_model_tab(model_instance, "LLM Name", "llm_name")

We define create_model_tab like so:

def create_model_tab(model, tab_name, api_name):
    # Builds one self-contained tab: prompt box, parameter sliders,
    # output area, and a submit handler wired to the given model.
    with gr.Tab(tab_name):
        with gr.Row():
            with gr.Column(scale=4):
                prompt = gr.Textbox(
                    placeholder="Enter Input",
                    label="Input",
                    lines=14,
                    value=""
                )
            with gr.Column(scale=1):
                gr.Markdown("## Parameters")
                temperature = gr.Slider(0, 1, value=0.0, step=0.1, label="Temperature")
                max_tokens = gr.Slider(0, 4096, value=500, step=1, label="Max Tokens")
                top_probability = gr.Slider(
                    0, 1, value=1, step=0.1, label="Top Probability"
                )
                frequency_penalty = gr.Slider(
                    0, 1, value=0, step=0.1, label="Frequency Penalty"
                )
                presence_penalty = gr.Slider(
                    0, 1, value=0, step=0.1, label="Presence Penalty"
                )
                metadata = gr.Markdown(label="Token usage")

        with gr.Row():
            out = gr.Text(label="Output", lines=12, show_copy_button=True)
            
        btn_submit = gr.Button("Submit")
        btn_submit.click(
            fn=lambda prompt, temperature, max_tokens, top_probability, frequency_penalty, presence_penalty: invoke_model(
                model,
                prompt,
                temperature=temperature,
                max_tokens=max_tokens,
                top_probability=top_probability,
                frequency_penalty=frequency_penalty,
                presence_penalty=presence_penalty
            ),
            inputs=[
                prompt,
                temperature,
                max_tokens,
                top_probability,
                frequency_penalty,
                presence_penalty
            ],
            outputs=[out, metadata],  # invoke_model returns (text, usage markdown)
            api_name=api_name  # exposes this handler as a named API endpoint
        )
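The click handler assumes an invoke_model helper that calls the model and returns two values, matching outputs=[out, metadata]: the completion text and a usage summary. Here is a minimal stub sketch — the model.complete call and the shape of its response are assumptions, so substitute your SDK's actual API:

```python
def invoke_model(model, prompt, temperature=0.0, max_tokens=500,
                 top_probability=1.0, frequency_penalty=0.0,
                 presence_penalty=0.0):
    """Call the model and return (output_text, usage_markdown).

    The first value fills the Output textbox, the second the
    Token-usage Markdown panel.
    """
    # `model.complete` is a placeholder -- swap in your SDK's call here.
    response = model.complete(
        prompt,
        temperature=temperature,
        max_tokens=max_tokens,
        top_p=top_probability,
        frequency_penalty=frequency_penalty,
        presence_penalty=presence_penalty,
    )
    usage = response.get("usage", {})
    usage_md = (
        f"**Prompt tokens:** {usage.get('prompt_tokens', '?')}  \n"
        f"**Completion tokens:** {usage.get('completion_tokens', '?')}"
    )
    return response.get("text", ""), usage_md
```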

All the features of an Azure OpenAI Playground, in the comfort of Gradio.
