
Add "Capabilities" endpoint/API #274

Open
ptgoetz opened this issue Apr 6, 2024 · 1 comment

Comments


ptgoetz commented Apr 6, 2024

What?

Add a REST API endpoint such as /api/v1/capabilities that returns a nested structure describing what LLMs and Tools the given OpenGPTs API instance supports. This would enable UIs to dynamically show/hide OpenGPTs options like LLMs and Tools.

A hypothetical response to GET /api/v1/capabilities might look something like:

{
    "capabilities": {
        "models": [],
        "tools": []
    }
}

Why?

Currently, the OpenGPTs UI lets you select Models and Tools that may not be configured. The UI will happily let you create assistants with models that aren't configured. When such an assistant is used, the backend emits a stack trace and the frontend silently does nothing.

Implementing this endpoint would enable API clients (e.g. UIs) to only present options that the OpenGPTs instance is actually configured to support.

Implementation Considerations

Given the current state of the codebase and dependency stack, the path of least resistance is likely to implement such a feature by checking for the existence of LLM/Tool-specific environment variables.
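
For illustration, here is a minimal sketch of what the env-var approach could look like. The mapping, helper name, and listed variables below are assumptions made for the example, not existing OpenGPTs code:

import os

# Hypothetical mapping from capability IDs to the env vars they require.
# Purely illustrative; the real list would be derived from the backend's
# existing LLM/Tool wiring.
REQUIRED_ENV_VARS = {
    "openai_gpt4_turbo": ["OPENAI_API_KEY"],
    "anthropic_claude_2": ["ANTHROPIC_API_KEY"],
    "ddg_search": [],  # DuckDuckGo search needs no credentials
}

def is_enabled(capability_id: str) -> bool:
    """A capability is enabled when all of its required env vars are set."""
    required = REQUIRED_ENV_VARS.get(capability_id, [])
    return all(os.environ.get(var) for var in required)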

Longer-term, more scalable solutions might involve moving away from environment-variable-driven configuration to something like a configuration file.


ptgoetz commented Apr 30, 2024

Here's a proposed response to /api/v1/capabilities.

The idea is to introduce the concept of an "LLM Provider" that supports one or more models.

Whether a tool or model is "enabled" would, for now, depend on whether the requisite environment variables are set.

The goal is to make it so UIs consuming the OpenGPTs backend could toggle LLMs and tools on and off based on how a given backend is configured.

{
    "capabilities": {
        "llms": [
            {
                "provider": "OpenAI",
                "models": [
                    {
                        "id": "openai_gpt3_turbo",
                        "title": "OpenAI GPT 3.5 Turbo",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": true
                    },
                    {
                        "id": "openai_gpt4_turbo",
                        "title": "OpenAI GPT 4 Turbo",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": true
                    }
                ]
            },
            {
                "provider": "Anthropic",
                "models": [
                    {
                        "id": "anthropic_claude_2",
                        "title": "Claude 2",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": true
                    }
                ]
            },
            {
                "provider": "Amazon Bedrock",
                "models": [
                    {
                        "id": "amazon_bedrock_claude_2",
                        "title": "Claude 2",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": true
                    }
                ]
            },
            {
                "provider": "Azure",
                "models": [
                    {
                        "id": "azure_gpt4_turbo",
                        "title": "GPT 4 Turbo",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": false
                    }
                ]
            },
            {
                "provider": "Google",
                "models": [
                    {
                        "id": "google_gemini",
                        "title": "Gemini",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": false
                    }
                ]
            },
            {
                "privider": "Ollama",
                "models": [
                    {
                        "id": "ollama_llama2",
                        "title": "Ollma - Llama2",
                        "supports_tools": false,
                        "supports_streaming": true,
                        "enabled": true
                    },
                    {
                        "id": "ollama_mistral",
                        "title": "Ollma - Mistral",
                        "supports_tools": false,
                        "supports_streaming": true,
                        "enabled": true
                    },
                    {
                        "id": "ollama_openchat",
                        "title": "Ollma - Openchat",
                        "supports_tools": false,
                        "supports_streaming": true,
                        "enabled": true
                    },
                    {
                        "id": "ollama_orca2",
                        "title": "Ollma - Orca2",
                        "supports_tools": false,
                        "supports_streaming": true,
                        "enabled": true
                    }
                ]
            }
        ]
    },
    "tools": [
        {
            "id": "action_server_by_robocorp",
            "title": "Action Server by robocorp",
            "description": "Run AI actions with [Robocorp Action Server](https://github.com/robocorp/robocorp).",
            "enabled": true
        },
        {
            "id": "ai_action_runner_by_connery",
            "title": "AI Action Runner by Connery",
            "description": "Connect OpenGPTs to the real world with [Connery](https://github.com/connery-io/connery).",
            "enabled": true
        },
        {
            "id": "ddg_search",
            "title": "DuckDuckGo Search",
            "description": "Search the web with [DuckDuckGo](https://pypi.org/project/duckduckgo-search/).",
            "enabled": true
        },
        {
            "id": "arxiv_search",
            "title": "ArXiv Search",
            "description": "Searches [Arxiv](https://arxiv.org/).",
            "enabled": false
        },
        {
            "id": "you_search",
            "title": "You.com Search",
            "description": "Uses [You.com](https://you.com/) search, optimized responses for LLMs.",
            "enabled": true
        },
        {
            "id": "sec_filings_kai_ai",
            "title": "SEC Filings (Kay.ai)",
            "description": "Searches through SEC filings using [Kay.ai](https://www.kay.ai/).",
            "enabled": true
        },
        {
            "id": "wikipedia",
            "title": "Wikipedia",
            "description": "Searches [Wikipedia](https://pypi.org/project/wikipedia/).",
            "enabled": true
        }
    ]
}
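
As a rough implementation sketch, the response above could be backed by Pydantic models and a FastAPI route along these lines. The class and function names are hypothetical (nothing below exists in the codebase yet), the response shape mirrors the JSON as posted, and the "enabled" flags are derived from simple env-var checks for brevity:

import os

from fastapi import APIRouter
from pydantic import BaseModel


class ModelInfo(BaseModel):
    id: str
    title: str
    supports_tools: bool
    supports_streaming: bool
    enabled: bool


class LLMProvider(BaseModel):
    provider: str
    models: list[ModelInfo]


class ToolInfo(BaseModel):
    id: str
    title: str
    description: str
    enabled: bool


class Capabilities(BaseModel):
    llms: list[LLMProvider]


class CapabilitiesResponse(BaseModel):
    capabilities: Capabilities
    tools: list[ToolInfo]


router = APIRouter()


@router.get("/api/v1/capabilities", response_model=CapabilitiesResponse)
def get_capabilities() -> CapabilitiesResponse:
    # A real implementation would enumerate all providers/tools and derive
    # "enabled" from env vars (or a config file); trimmed here for brevity.
    return CapabilitiesResponse(
        capabilities=Capabilities(
            llms=[
                LLMProvider(
                    provider="OpenAI",
                    models=[
                        ModelInfo(
                            id="openai_gpt4_turbo",
                            title="OpenAI GPT 4 Turbo",
                            supports_tools=True,
                            supports_streaming=True,
                            enabled=bool(os.environ.get("OPENAI_API_KEY")),
                        )
                    ],
                )
            ]
        ),
        tools=[
            ToolInfo(
                id="ddg_search",
                title="DuckDuckGo Search",
                description="Search the web with DuckDuckGo.",
                enabled=True,
            )
        ],
    )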
