A simple OpenAI API relay service built on the Bun runtime, supporting various OpenAI-compatible backend services. It integrates with Kubernetes through Helm charts.
To install the OpenAI Relay service using Helm charts, run:

```shell
helm repo add openai-relay https://yinheli.github.io/openai-relay
helm repo update
helm install openai-relay openai-relay/openai-relay
```
Configuration:

You can customize the installation by providing a `values.yaml` file. Below are the default values you can override:
```yaml
replicaCount: 1

image:
  repository: yinheli/openai-relay
  pullPolicy: IfNotPresent
  tag: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific

resources: {}

# environment variables
envVars:
  RELAY_PROVIDER_SILICONCLOUD: https://api.siliconflow.cn
  RELAY_MODEL_SILICONCLOUD: siliconcloud-gpt-4o,siliconcloud-gpt-4o-mini
```
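As an illustration, a minimal override file might bump the replica count, pin an image tag, and enable the ingress. The tag and hostname below are placeholders, not real values from the project:

```yaml
# my-values.yaml -- illustrative overrides; adjust to your environment
replicaCount: 2

image:
  tag: "v1.0.0"  # placeholder; use an actual release tag

ingress:
  enabled: true
  hosts:
    - host: relay.example.com  # placeholder hostname
      paths:
        - path: /
          pathType: ImplementationSpecific
```

Apply it with `helm install openai-relay openai-relay/openai-relay -f my-values.yaml`.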
For more detailed configuration options, refer to the `values.yaml` and `config.ts` files in the repository.
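To see how the paired `RELAY_PROVIDER_*` and `RELAY_MODEL_*` environment variables might be turned into a provider table, here is a hedged sketch. The `parseRelayEnv` function and the `Provider` shape are hypothetical names for illustration; the actual parsing lives in `config.ts` and may differ:

```typescript
// Hypothetical sketch: pair RELAY_PROVIDER_<NAME> (base URL) with
// RELAY_MODEL_<NAME> (comma-separated model list) into a provider map.
// The real implementation is in the repository's config.ts.

interface Provider {
  baseUrl: string;
  models: string[];
}

function parseRelayEnv(env: Record<string, string>): Record<string, Provider> {
  const providers: Record<string, Provider> = {};
  for (const [key, value] of Object.entries(env)) {
    const match = key.match(/^RELAY_PROVIDER_(.+)$/);
    if (!match) continue;
    // Models for this provider come from the matching RELAY_MODEL_* variable.
    const models = (env[`RELAY_MODEL_${match[1]}`] ?? "")
      .split(",")
      .map((m) => m.trim())
      .filter(Boolean);
    providers[match[1].toLowerCase()] = { baseUrl: value, models };
  }
  return providers;
}

const providers = parseRelayEnv({
  RELAY_PROVIDER_SILICONCLOUD: "https://api.siliconflow.cn",
  RELAY_MODEL_SILICONCLOUD: "siliconcloud-gpt-4o,siliconcloud-gpt-4o-mini",
});
console.log(providers.siliconcloud.baseUrl); // https://api.siliconflow.cn
console.log(providers.siliconcloud.models);  // [ "siliconcloud-gpt-4o", "siliconcloud-gpt-4o-mini" ]
```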
You can run the OpenAI Relay service using Docker:

```shell
docker run -d \
  --name openai-relay \
  -e RELAY_PROVIDER_SILICONCLOUD=https://api.siliconflow.cn \
  -e RELAY_MODEL_SILICONCLOUD=siliconcloud-gpt-4o,siliconcloud-gpt-4o-mini \
  -p 80:80 \
  yinheli/openai-relay:latest
```
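Once the container is up, you can exercise the relay with a standard OpenAI-style request. This is a sketch assuming the relay mirrors the OpenAI API shape (`/v1/chat/completions`) and routes the model name configured above:

```shell
curl http://localhost:80/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "siliconcloud-gpt-4o",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```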
> [!NOTE]
> You can customize the environment variables to fit your needs. Also pay attention to the Docker image tag: you may need to replace `latest` with a specific release tag.
We welcome contributions to improve the OpenAI Relay service. Please see our CONTRIBUTING.md for guidelines on how to submit improvements and bug fixes.
This project is licensed under the MIT License. See the LICENSE file for details.