# Test
```
codex-rs % export CODEX_DEVICE_AUTH_BASE_URL=http://localhost:3007
codex-rs % cargo run --bin codex login --experimental_use-device-code
Compiling codex-login v0.0.0 (/Users/rakesh/code/codex/codex-rs/login)
Compiling codex-mcp-server v0.0.0 (/Users/rakesh/code/codex/codex-rs/mcp-server)
Compiling codex-tui v0.0.0 (/Users/rakesh/code/codex/codex-rs/tui)
Compiling codex-cli v0.0.0 (/Users/rakesh/code/codex/codex-rs/cli)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 2.90s
Running `target/debug/codex login --experimental_use-device-code`
To authenticate, enter this code when prompted: 6Q27-KBVRF with interval 5
^C
```
The error in the last line occurs because the poll endpoint is not yet
implemented.
This changes the reqwest client used in tests to be sandbox-friendly,
and skips a number of other tests that don't work inside the sandbox or
without network access.
When logging in with ChatGPT via the `codex login` command, a
successful login should write a new `auth.json` file with the ChatGPT
token information. The old code attempted to retain the API key and
merge the token information into the existing `auth.json` file. With the
new simplified login mechanism, `auth.json` should hold auth information
for either ChatGPT or an API key, not both.
The `codex login --api-key <key>` code path was already doing the right
thing here, but the `codex login` command was incorrect. This PR fixes
the problem and adds test cases for both commands.
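For concreteness, here is a minimal sketch of the intended write
behavior; the struct and field names are hypothetical, not the actual
codex-rs types:

```
// A minimal sketch (hypothetical types): a successful ChatGPT login
// overwrites auth.json with ChatGPT tokens only, instead of merging them
// into an existing file that may also hold an API key.
use std::fs;
use std::path::Path;

use anyhow::Result;
use serde::Serialize;

#[derive(Serialize)]
struct ChatGptTokens {
    id_token: String,
    access_token: String,
    refresh_token: String,
}

#[derive(Serialize)]
struct AuthFile {
    #[serde(skip_serializing_if = "Option::is_none")]
    openai_api_key: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    tokens: Option<ChatGptTokens>,
}

fn write_chatgpt_auth(codex_home: &Path, tokens: ChatGptTokens) -> Result<()> {
    let auth = AuthFile {
        // Deliberately drop any previously stored API key: the file holds
        // credentials for ChatGPT *or* an API key, never both.
        openai_api_key: None,
        tokens: Some(tokens),
    };
    // Overwrite rather than read-modify-write the existing file.
    fs::write(codex_home.join("auth.json"), serde_json::to_string_pretty(&auth)?)?;
    Ok(())
}
```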
The previous config approach had a few issues:
1. It was part of the config but not designed to be used externally.
2. It had to be wired through many places (look at the +/- on this PR).
3. It wasn't guaranteed to be set consistently everywhere, because we
don't have a well-defined way for configs to stack. For example, the
extension would configure it during newConversation, but anything that
happened outside of that (like login) wouldn't get it.
This env var approach is cleaner and also creates one less thing we have
to deal with when coming up with a better holistic story around configs.
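As a rough sketch of the shape of this change (the env var name and
default value below are illustrative assumptions, not confirmed
identifiers):

```
// Hedged sketch: resolve the originator header value once from the
// environment, falling back to a built-in default.
use std::env;

// Hypothetical names for illustration only.
const ORIGINATOR_ENV_VAR: &str = "CODEX_INTERNAL_ORIGINATOR_OVERRIDE";
const DEFAULT_ORIGINATOR: &str = "codex_cli_rs";

fn originator() -> String {
    env::var(ORIGINATOR_ENV_VAR).unwrap_or_else(|_| DEFAULT_ORIGINATOR.to_string())
}
```

Because the value is read from the process environment rather than
threaded through the config, every code path (including login) sees the
same override.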
One downside is that I removed the unit test for the override, because
I didn't want to deal with setting the global env var or spawning
child processes and figuring out how to introspect their originator
header. The new code is simple enough, and I tested it e2e, so I think
this is still a worthwhile trade-off.
This PR addresses an issue that several users have reported. If the
local oauth login server in one codex instance is left running (e.g. the
user abandons the oauth flow), a subsequent codex instance will receive
an error when attempting to log in because the localhost port is already
in use by the dangling web server from the first instance.
This PR adds a cancellation mechanism that the second instance can use
to abort the first login attempt and free up the port.
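As a sketch of the idea, under the assumption of a fixed port and a
dedicated cancellation endpoint (both names here are placeholders, not
the actual implementation):

```
// If binding the fixed localhost port fails because a dangling login server
// from an earlier instance still holds it, ask that server to shut down,
// then retry the bind.
use std::io::{ErrorKind, Write};
use std::net::{TcpListener, TcpStream};
use std::time::Duration;

const LOGIN_PORT: u16 = 1455; // illustrative fixed port

fn bind_login_server() -> std::io::Result<TcpListener> {
    match TcpListener::bind(("127.0.0.1", LOGIN_PORT)) {
        Err(e) if e.kind() == ErrorKind::AddrInUse => {
            // Ask the previous instance's server to abort its login attempt.
            // `/cancel` is a placeholder name for the cancellation endpoint.
            if let Ok(mut stream) = TcpStream::connect(("127.0.0.1", LOGIN_PORT)) {
                let _ = stream.write_all(b"GET /cancel HTTP/1.1\r\nHost: localhost\r\n\r\n");
            }
            // Give the old server a moment to release the port, then retry.
            std::thread::sleep(Duration::from_millis(200));
            TcpListener::bind(("127.0.0.1", LOGIN_PORT))
        }
        other => other,
    }
}
```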
This dramatically improves the time to run `cargo test -p codex-core`
(~25x speedup).
before:
```
cargo test -p codex-core 35.96s user 68.63s system 19% cpu 8:49.80 total
```
after:
```
cargo test -p codex-core 5.51s user 8.16s system 63% cpu 21.407 total
```
Both runs were measured "hot", i.e. on a second run with no filesystem
changes, to exclude compile times.
The approach is inspired by [Delete Cargo Integration
Tests](https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html):
we move all test cases under tests/ into a single suite so that they
compile into a single binary, since each test binary executed carries
significant overhead, and test execution is only parallelized within a
single binary.
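Concretely, the layout looks roughly like this (module names are
illustrative):

```
// tests/all.rs -- the single integration-test entry point. Each former
// tests/*.rs file becomes a module under tests/suite/, so cargo builds and
// runs one binary, and libtest parallelizes across all of its test cases.
mod suite {
    mod exec;      // was tests/exec.rs, now tests/suite/exec.rs
    mod login;     // was tests/login.rs, now tests/suite/login.rs
    mod streaming; // was tests/streaming.rs, now tests/suite/streaming.rs
}
```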
https://github.com/openai/codex/pull/2373 introduced
`ServerOptions.login_timeout` and `spawn_timeout_watcher()`, which used
an extra thread to manage the timeout for the login server. Now that we
have asyncified the login stack, we can use `tokio::time::timeout()`
from "outside" the login library to manage the timeout, rather than
committing to a specific "timeout" concept within the library.
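Roughly, the call site now looks like this (the login entry point below
is a stand-in, not the real codex-login API):

```
use std::time::Duration;

// Stand-in for the real async login entry point in the login crate.
async fn run_login_server() -> anyhow::Result<()> {
    // ... serve the oauth redirect until the flow completes ...
    Ok(())
}

// The caller owns the timeout by wrapping the future, so the login library
// itself no longer needs a timeout concept.
async fn login_with_timeout() -> anyhow::Result<()> {
    match tokio::time::timeout(Duration::from_secs(10 * 60), run_login_server()).await {
        Ok(result) => result,                              // login finished (ok or err)
        Err(_elapsed) => anyhow::bail!("login timed out"), // timer fired first
    }
}
```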
This PR adds two new APIs to the MCP server: `loginChatGpt` and
`cancelLoginChatGpt`. The first starts a login server and returns a
local URL that allows for browser-based authentication; the second
provides a way to cancel the login attempt. If the login attempt
succeeds, a notification (in the form of an event) is sent to
subscribers.
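As a sketch of the message shapes involved (the field names are
assumptions, not the server's actual schema):

```
use serde::{Deserialize, Serialize};

// loginChatGpt response: a handle for cancellation plus the URL to open.
#[derive(Serialize, Deserialize)]
struct LoginChatGptResponse {
    login_id: String,
    auth_url: String,
}

// cancelLoginChatGpt params: identifies which login attempt to abort.
#[derive(Serialize, Deserialize)]
struct CancelLoginChatGptParams {
    login_id: String,
}

// Event notification delivered to subscribers when the flow finishes.
#[derive(Serialize, Deserialize)]
#[serde(tag = "type")]
enum LoginEvent {
    LoginChatGptComplete { login_id: String, success: bool },
}
```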
I also added a timeout mechanism for the existing login server. The
loginChatGpt code path uses a 10-minute timeout by default, so if the
user fails to complete the login flow in that timeframe, the login
server automatically shuts down. I tested the timeout code by manually
setting the timeout to a much lower number and confirming that it works
as expected when used e2e.
This PR:
* Added a clippy.toml to configure allowable expect/unwrap usage in
tests
* Removed as many expect/allow lines as possible from tests
* Moved a number of remaining allows to expects where possible
Note: in integration tests, non-`#[test]` helper functions are not
covered by this, so we had to leave a few lingering
`#[expect(clippy::expect_used)]` attributes around (sketched below).
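For reference, the relevant pieces look roughly like this (the
clippy.toml options are real clippy configuration; whether they match
this repo's exact settings is an assumption):

```
// clippy.toml:
//
//   allow-expect-in-tests = true
//   allow-unwrap-in-tests = true
//
// Those options cover #[test] functions, but not helper functions in
// integration tests, so a targeted expectation stays on the helper itself:
#[expect(clippy::expect_used)]
fn read_fixture(path: &str) -> String {
    std::fs::read_to_string(path).expect("fixture file should exist")
}
```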