`codex-responses-api-proxy` is designed so that there is exactly one copy of the API key in memory (which is `mlock`'d on UNIX), but in practice I was seeing two when I dumped the process data from `/proc/$PID/mem`. It appears that `std::io::stdin()` maintains an internal `BufReader` that we cannot zero out, so this PR changes the UNIX implementation to use a low-level `read(2)` instead. Even though short reads seem incredibly unlikely here, the logic also tolerates them. Either `\n` or EOF must be sent to signal the end of the key written to stdin.
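The read loop described above can be sketched roughly as follows. This is an illustrative version only, not the crate's actual code: it is generic over `std::io::Read` so it can be exercised with an in-memory buffer, whereas the real implementation issues `read(2)` directly against fd 0.

```rust
use std::io::{self, Read};

/// Read a key from `r` one chunk at a time, tolerating short reads and
/// EINTR. Stops at the first `\n` (which is excluded from the key) or at
/// EOF, and returns the number of key bytes written into `buf`.
/// Sketch only; the real code reads from the raw stdin descriptor rather
/// than `std::io::stdin()`'s internal `BufReader`.
fn read_key<R: Read>(mut r: R, buf: &mut [u8]) -> io::Result<usize> {
    let mut len = 0;
    while len < buf.len() {
        match r.read(&mut buf[len..]) {
            Ok(0) => break, // EOF terminates the key
            Ok(n) => {
                len += n;
                // A newline also terminates the key; it is not part of it.
                if let Some(pos) = buf[..len].iter().position(|&b| b == b'\n') {
                    return Ok(pos);
                }
            }
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue, // retry on EINTR
            Err(e) => return Err(e),
        }
    }
    Ok(len)
}
```

Because the helper is generic, it can be tested against a byte slice, while production code would pass a reader backed by the raw stdin file descriptor.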
# codex-responses-api-proxy
A strict HTTP proxy that only forwards `POST` requests to `/v1/responses` on the OpenAI API (`https://api.openai.com`), injecting the `Authorization: Bearer $OPENAI_API_KEY` header. Everything else is rejected with `403 Forbidden`.
## Expected Usage
IMPORTANT: `codex-responses-api-proxy` is designed to be run by a privileged user with access to `OPENAI_API_KEY` so that an unprivileged user cannot inspect or tamper with the process. However, if `--http-shutdown` is specified, an unprivileged user can make a `GET` request to `/shutdown` to shut down the server, since an unprivileged user cannot send `SIGTERM` to kill the process.
A privileged user (i.e., root or a user with `sudo`) who has access to `OPENAI_API_KEY` would run the following to start the server, as `codex-responses-api-proxy` reads the auth token from stdin:
```shell
printenv OPENAI_API_KEY | env -u OPENAI_API_KEY codex-responses-api-proxy --http-shutdown --server-info /tmp/server-info.json
```
A non-privileged user would then run Codex as follows, specifying the `model_provider` dynamically:
```shell
PROXY_PORT=$(jq .port /tmp/server-info.json)
PROXY_BASE_URL="http://127.0.0.1:${PROXY_PORT}"
codex exec -c "model_providers.openai-proxy={ name = 'OpenAI Proxy', base_url = '${PROXY_BASE_URL}/v1', wire_api='responses' }" \
  -c model_provider="openai-proxy" \
  'Your prompt here'
```
When the unprivileged user is finished, they can shut down the server using `curl` (since `kill -SIGTERM` is not an option):
```shell
curl --fail --silent --show-error "${PROXY_BASE_URL}/shutdown"
```
## Behavior
- Reads the API key from stdin. All callers should pipe the key in (for example, `printenv OPENAI_API_KEY | codex-responses-api-proxy`).
- Formats the header value as `Bearer <key>` and attempts to `mlock(2)` the memory holding that header so it is not swapped to disk.
- Listens on the provided port, or an ephemeral port if `--port` is not specified.
- Accepts exactly `POST /v1/responses` (no query string). The request body is forwarded to `https://api.openai.com/v1/responses` with `Authorization: Bearer <key>` set. All original request headers (except any incoming `Authorization`) are forwarded upstream. For other requests, it responds with `403`.
- Optionally writes a single-line JSON file with server info, currently `{ "port": <u16> }`.
- Optional `--http-shutdown` enables `GET /shutdown` to terminate the process with exit code `0`. This allows one user (e.g., `root`) to start the proxy and another unprivileged user on the host to shut it down.
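The gating rule above boils down to a single predicate over the request line. A hypothetical sketch (the function name is ours, not the crate's):

```rust
/// Only `POST /v1/responses` with no query string is forwarded; everything
/// else is answered with `403 Forbidden`. Hypothetical helper for illustration.
fn is_allowed(method: &str, target: &str) -> bool {
    // An exact match on the target rejects query strings such as
    // "/v1/responses?stream=true" as well as any other path.
    method == "POST" && target == "/v1/responses"
}
```

(`GET /shutdown` is handled separately, and only when `--http-shutdown` is set.)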
## CLI
```shell
codex-responses-api-proxy [--port <PORT>] [--server-info <FILE>] [--http-shutdown]
```
- `--port <PORT>`: Port to bind on `127.0.0.1`. If omitted, an ephemeral port is chosen.
- `--server-info <FILE>`: If set, the proxy writes a single line of JSON with `{ "port": <PORT>, "pid": <PID> }` once listening.
- `--http-shutdown`: If set, enables `GET /shutdown` to exit the process with code `0`.
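For illustration, the single-line server-info payload could be produced as below. This is a hedged sketch using the field names from the `--server-info` flag description; the actual crate may serialize it differently (e.g., via serde).

```rust
/// Hypothetical formatter for the one-line server-info JSON written to
/// the file passed via `--server-info <FILE>`.
fn server_info_line(port: u16, pid: u32) -> String {
    format!("{{ \"port\": {port}, \"pid\": {pid} }}")
}
```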
## Notes
- Only `POST /v1/responses` is permitted. No query strings are allowed.
- All request headers are forwarded to the upstream call (aside from overriding `Authorization`). Response status and content-type are mirrored from upstream.
## Hardening Details
Care is taken to restrict access/copying of the value of `OPENAI_API_KEY` retained in memory:
- We leverage `codex_process_hardening` so `codex-responses-api-proxy` is run with standard process-hardening techniques.
- At startup, we allocate a 1024-byte buffer on the stack and write `"Bearer "` as the first 7 bytes.
- We then read from stdin, copying the contents into the buffer after `"Bearer "`.
- After verifying the key matches `/^[a-zA-Z0-9_-]+$/` (and does not exceed the buffer), we create a `String` from that buffer (so the data is now on the heap).
- We zero out the stack-allocated buffer using https://crates.io/crates/zeroize so the zeroing is not optimized away by the compiler.
- We invoke `.leak()` on the `String` so we can treat its contents as a `&'static str`, as it will live for the rest of the process.
- On UNIX, we `mlock(2)` the memory backing the `&'static str`.
- When building an HTTP request with the `&'static str`, we use `HeaderValue::from_static()` to avoid copying the `&str`.
- We also invoke `.set_sensitive(true)` on the `HeaderValue`, which in theory indicates to other parts of the HTTP stack that the header should be treated with "special care" to avoid leakage.
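A simplified, std-only sketch of the validate-then-leak step above. The `mlock(2)` call and the `http` crate's `HeaderValue::from_static()` / `.set_sensitive(true)` steps are noted in comments rather than performed, and the helper name is ours, not the crate's:

```rust
/// Validate the key against /^[a-zA-Z0-9_-]+$/, build "Bearer <key>" on the
/// heap, and leak it to obtain a `&'static str`. In the real crate the leaked
/// memory is then mlock(2)'d and passed to `HeaderValue::from_static()`,
/// followed by `.set_sensitive(true)`. Hypothetical helper for illustration.
fn static_header_value(key: &str) -> Option<&'static str> {
    if key.is_empty()
        || !key
            .bytes()
            .all(|b| b.is_ascii_alphanumeric() || b == b'_' || b == b'-')
    {
        return None;
    }
    // Leaking the heap allocation yields a &'static str that lives for the
    // rest of the process, which is what makes `HeaderValue::from_static`
    // applicable without copying.
    Some(Box::leak(format!("Bearer {key}").into_boxed_str()))
}
```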