A lightweight vLLM simulator for mocking out replicas.

incubating
3 Open Issues Need Help · Last updated: Sep 12, 2025

Open Issues Need Help

Go · #incubating
good first issue

Go · #incubating

AI Summary: Implement a new command-line parameter, `--max-model-len`, in the vLLM simulator. This parameter defines the maximum context window size, in tokens, for the simulated model. Requests that exceed this limit should return a 400 Bad Request error whose message indicates that the context length was exceeded (a sketch of one possible approach follows below).

Complexity: 4/5
enhancement · good first issue
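
As a rough sketch of what this issue could look like in the simulator's language (Go), the handler below registers a `--max-model-len` flag and rejects over-length completion requests with a 400 and an OpenAI/vLLM-style error body. Everything here is an illustrative assumption, not code from this repository: the `completionRequest` type, the whitespace `countTokens` stand-in, the `handleCompletions` handler, and the port. The error message is modeled on the wording real vLLM servers return for this case.

```go
package main

import (
	"encoding/json"
	"flag"
	"fmt"
	"net/http"
	"strings"
)

// --max-model-len: maximum context window of the simulated model, in tokens.
var maxModelLen = flag.Int("max-model-len", 1024,
	"maximum context window of the simulated model, in tokens")

// completionRequest carries only the fields the length check needs
// (illustrative; a real simulator has a fuller request type).
type completionRequest struct {
	Prompt    string `json:"prompt"`
	MaxTokens int    `json:"max_tokens"`
}

// countTokens is a stand-in tokenizer: a crude whitespace split. A real
// simulator would reuse whatever token approximation it applies elsewhere.
func countTokens(s string) int {
	return len(strings.Fields(s))
}

func handleCompletions(w http.ResponseWriter, r *http.Request) {
	var req completionRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid JSON body", http.StatusBadRequest)
		return
	}

	promptTokens := countTokens(req.Prompt)
	total := promptTokens + req.MaxTokens
	if total > *maxModelLen {
		// Over the limit: 400 Bad Request with a vLLM-style error envelope.
		msg := fmt.Sprintf(
			"This model's maximum context length is %d tokens. However, you requested %d tokens (%d in the messages, %d in the completion). Please reduce the length of the messages or completion.",
			*maxModelLen, total, promptTokens, req.MaxTokens)
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusBadRequest)
		json.NewEncoder(w).Encode(map[string]any{
			"error": map[string]string{
				"message": msg,
				"type":    "BadRequestError",
			},
		})
		return
	}

	// Within the limit: fall through to the normal simulated response path.
	w.Header().Set("Content-Type", "application/json")
	fmt.Fprintln(w, `{"choices":[{"text":"simulated completion"}]}`)
}

func main() {
	flag.Parse()
	http.HandleFunc("/v1/completions", handleCompletions)
	http.ListenAndServe(":8000", nil)
}
```

Run with something like `go run main.go --max-model-len=2048`; a request whose prompt tokens plus `max_tokens` exceed the limit then gets HTTP 400 with the message above, while shorter requests pass through to the simulated response.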
