The smart edge and AI gateway for agents. Arch is a high-performance proxy server that handles the low-level work of building agents: applying guardrails, routing prompts to the right agent, and unifying access to LLMs. Natively designed to process prompts, it's framework-agnostic and helps you build agents faster.

ai-gateway ai-gateway-support envoy envoyproxy gateway generative-ai llm-gateway llm-inference llm-proxy llm-routing llmops llms openai prompt proxy proxy-server routing
6 Open Issues Need Help · Last updated: Sep 14, 2025

Open Issues Need Help

enhancement · help wanted

Rust

AI Summary: The user is unable to configure a custom LLM provider when the `provider_interface` is set to `openai` and the `model` name also starts with `openai/` (e.g., `openai/gpt-oss-120`). Despite the model name following the required `<provider>/<model_id>` format, the system throws an error. A workaround involves using a different prefix for the model name, suggesting a conflict in how the system parses or validates model names that begin with `openai/` when `openai` is also the specified provider interface.

Complexity: 2/5
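A minimal sketch of the failing configuration the summary describes, and the reported workaround. The `provider_interface` and `model` field names and the model ID come from the issue; the `llm_providers` block shape and the `custom/` workaround prefix are illustrative assumptions, not the exact schema:

```yaml
# Reported failure: the gateway rejects a model name that itself starts
# with "openai/" when provider_interface is also set to "openai".
llm_providers:
  - provider_interface: openai
    model: openai/gpt-oss-120   # error, despite matching <provider>/<model_id>

# Workaround from the issue: use a different <provider> prefix.
llm_providers:
  - provider_interface: openai
    model: custom/gpt-oss-120   # "custom" is an illustrative prefix
```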
bug · enhancement · good first issue · help wanted


AI Summary: The task requires refactoring the Arch project to separate the `model_server` functionality from the main `archgw` package. This involves decoupling the model serving components (for `arch-routing`, `arch-function`, and `guard` models) and making them accessible via external hosted endpoints, simplifying the `archgw` CLI and reducing its dependencies. The goal is to allow users to utilize the LLM routing capabilities of Arch without needing to install or manage the model server components.

Complexity: 4/5
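One way the decoupled setup could look once `model_server` is split out. The `model_endpoints` key, field names, and URLs below are entirely hypothetical, sketching the refactor's stated goal (archgw consuming hosted model endpoints rather than bundling a local model server), not a proposed schema:

```yaml
# Hypothetical: archgw points at externally hosted model endpoints
# instead of installing and managing a local model_server.
model_endpoints:
  arch_routing: https://models.example.com/arch-routing
  arch_function: https://models.example.com/arch-function
  guard: https://models.example.com/guard
```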
help wanted


AI Summary: The task requires modifying the Arch gateway to prioritize authentication keys provided in the Authorization header over those specified in the `arch_config` file. This involves updating the authentication logic to check the header first and use the key found there if present, otherwise falling back to the configuration file.

Complexity: 4/5
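The header-first lookup the issue asks for can be sketched as below; the function name, signature, and `Bearer` token handling are illustrative assumptions, not Arch's actual internals:

```rust
/// Resolve the upstream API key: prefer a bearer token from the
/// Authorization header, falling back to the key from arch_config.
fn resolve_api_key(auth_header: Option<&str>, config_key: Option<&str>) -> Option<String> {
    // A non-empty "Bearer <key>" in the request header wins over the config file.
    if let Some(header) = auth_header {
        if let Some(key) = header.strip_prefix("Bearer ") {
            let key = key.trim();
            if !key.is_empty() {
                return Some(key.to_string());
            }
        }
    }
    // Otherwise fall back to the key configured in arch_config.
    config_key.map(str::to_string)
}

fn main() {
    // Header present: its key takes precedence over the config key.
    assert_eq!(
        resolve_api_key(Some("Bearer sk-from-header"), Some("sk-from-config")),
        Some("sk-from-header".to_string())
    );
    // No header: fall back to the configured key.
    assert_eq!(
        resolve_api_key(None, Some("sk-from-config")),
        Some("sk-from-config".to_string())
    );
}
```

Treating an empty or whitespace-only bearer token as absent (rather than overriding with an empty key) is one design choice the implementation would need to settle.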
help wanted


AI Summary: The task is to enhance the Arch gateway configuration to allow users to specify all models from a provider (e.g., all OpenAI models) using wildcard characters, instead of listing each model individually. This involves modifying the configuration parsing and model selection logic within the Arch gateway to handle wildcard model specifications.

Complexity: 3/5
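The model-selection side of this change could look like the sketch below. It assumes a trailing-`*` glob syntax (e.g. `openai/*`), which is a guess at the eventual wildcard format, not a confirmed design:

```rust
/// Check whether a configured model spec (optionally ending in "*")
/// matches a concrete model name such as "openai/gpt-4o-mini".
fn model_matches(spec: &str, model: &str) -> bool {
    match spec.strip_suffix('*') {
        // "openai/*" matches any model under that provider prefix.
        Some(prefix) => model.starts_with(prefix),
        // No wildcard: require an exact match, as today.
        None => spec == model,
    }
}

fn main() {
    assert!(model_matches("openai/*", "openai/gpt-4o-mini"));
    assert!(!model_matches("openai/*", "mistral/mistral-large"));
    assert!(model_matches("openai/gpt-4o", "openai/gpt-4o"));
    assert!(!model_matches("openai/gpt-4o", "openai/gpt-4o-mini"));
}
```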
help wanted


AI Summary: Implement path-based routing for LLMs in the Arch gateway. This involves adding a `path_prefix` field to the configuration schema, updating the Envoy configuration to use this field for routing decisions, and potentially modifying the Arch CLI and/or API to handle the new configuration option.

Complexity: 4/5
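A sketch of how the new `path_prefix` field might sit in the provider configuration. Only `path_prefix` comes from the issue; the surrounding field names and values are illustrative assumptions:

```yaml
llm_providers:
  - provider_interface: openai
    model: gpt-4o-mini
    path_prefix: /v1/openai     # hypothetical: requests under this path route here
  - provider_interface: mistral
    model: mistral-large-latest
    path_prefix: /v1/mistral
```

Envoy would then match incoming request paths against these prefixes to pick the upstream, per the routing change the issue describes.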
help wanted
