Commit Graph

68 Commits

Author SHA1 Message Date
pakrym-oai
c87fb83d81 Calculate remaining context based on last token usage (#1940)
We should only take the last request's size (in tokens) into account
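A minimal sketch of the arithmetic, with a hypothetical `TokenUsage` shape: since each request resends the whole conversation, the last usage report already reflects everything in context, so remaining space is the window minus that one number rather than a running sum.

```rust
/// Hypothetical shape of the usage data attached to a completed response.
struct TokenUsage {
    total_tokens: u64,
}

/// Remaining context as a percentage, based only on the most recent request.
/// Summing usage across turns would double-count earlier tokens, since the
/// last request already contained the full conversation.
fn percent_context_left(last_usage: &TokenUsage, context_window: u64) -> u8 {
    let window = context_window.max(1);
    let used = last_usage.total_tokens.min(window);
    (100 - used * 100 / window) as u8
}

fn main() {
    let usage = TokenUsage { total_tokens: 48_000 };
    println!("{}% of context left", percent_context_left(&usage, 192_000)); // 75%
}
```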
2025-08-07 05:17:18 -07:00
ae
7c20160676 feat: /prompts slash command (#1937)
- Shows several example prompts which include @-mentions 

------
https://chatgpt.com/codex/tasks/task_i_6894779ba8cc832ca0c871d17ee06aae
2025-08-07 11:55:59 +01:00
aibrahim-oai
5589c6089b approval ui (#1933)
Asking for approval:

<img width="269" height="41" alt="image"
src="https://github.com/user-attachments/assets/b9ced569-3297-4dae-9ce7-0b015c9e14ea"
/>

Allow:

<img width="400" height="31" alt="image"
src="https://github.com/user-attachments/assets/92056b22-efda-4d49-854d-e2943d5fcf17"
/>

Reject:

<img width="372" height="30" alt="image"
src="https://github.com/user-attachments/assets/be9530a9-7d41-4800-bb42-abb9a24fc3ea"
/>

Always Approve:

<img width="410" height="36" alt="image"
src="https://github.com/user-attachments/assets/acf871ba-4c26-4501-b303-7956d0151754"
/>
2025-08-07 02:02:56 -07:00
aibrahim-oai
ec20e84d80 Change the UI of apply patch (#1907)
<img width="487" height="108" alt="image"
src="https://github.com/user-attachments/assets/3f6ffd56-36f6-40bc-b999-64279705416a"
/>

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-08-07 05:25:41 +00:00
aibrahim-oai
a5e17cda6b Run command UI (#1897)
Change how commands are displayed:

<img width="243" height="119" alt="image"
src="https://github.com/user-attachments/assets/13d5608e-3b66-4b8d-8fe7-ce464310d85d"
/>
2025-08-07 00:10:59 +00:00
ae
a575effbb0 feat: interrupt running task on ctrl-z (#1880)
- Arguably a bugfix as previously CTRL-Z didn't do anything.
- Only in TUI mode for now. This may make sense in other modes... to be
researched.
- The TUI runs the terminal in raw mode and the signals arrive as key
events, so we handle CTRL-Z as a key event just like CTRL-C (see the
sketch below).
- Not adding UI for it as a composer redesign is coming, and we can just
add it then.
- We should follow up so that a second CTRL-Z performs the native terminal
action.
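A sketch of the key-event routing described above, using crossterm (which the TUI is built on); `interrupt_running_task` is a hypothetical stand-in for the real handler:

```rust
use crossterm::event::{read, Event, KeyCode, KeyModifiers};

// In raw mode the terminal driver does not turn Ctrl-Z into SIGTSTP; it
// arrives as an ordinary key event, so it can be routed just like Ctrl-C.
fn event_loop(mut interrupt_running_task: impl FnMut()) -> std::io::Result<()> {
    loop {
        if let Event::Key(key) = read()? {
            if key.modifiers.contains(KeyModifiers::CONTROL) {
                match key.code {
                    KeyCode::Char('z') => interrupt_running_task(),
                    KeyCode::Char('c') => break, // existing interrupt/quit path
                    _ => {}
                }
            }
        }
    }
    Ok(())
}
```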
2025-08-06 21:56:34 +00:00
Jeremy Rose
081caa5a6b show a transient history cell for commands (#1824)
Adds a new "active history cell" for history bits that need to render
more than once before they're inserted into the history. Only used for
commands right now.
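A rough sketch of the idea, with hypothetical names: a completed cell is frozen text that enters history once, while an active cell re-renders on every frame until it is finalized.

```rust
// Hypothetical shapes; the real cell types live in the TUI's history module.
enum HistoryCell {
    /// Frozen lines, inserted into scrollback exactly once.
    Completed(Vec<String>),
    /// Still changing: re-rendered on every frame until the command ends.
    ActiveCommand { command: String, output_so_far: String },
}

impl HistoryCell {
    fn display_lines(&self) -> Vec<String> {
        match self {
            HistoryCell::Completed(lines) => lines.clone(),
            HistoryCell::ActiveCommand { command, output_so_far } => {
                vec![format!("running: {command}"), output_so_far.clone()]
            }
        }
    }

    /// Called when the command finishes: freeze the current rendering so it
    /// can be appended to permanent history.
    fn finalize(self) -> HistoryCell {
        HistoryCell::Completed(self.display_lines())
    }
}
```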


https://github.com/user-attachments/assets/925f01a0-e56d-4613-bc25-fdaa85d8aea5

---------

Co-authored-by: easong-openai <easong@openai.com>
2025-08-06 12:03:45 -07:00
Charlie Weems
ffe24991b7 Initial implementation of /init (#1822)
Basic /init command that appends an instruction to create AGENTS.md to
the conversation history.
2025-08-06 09:10:23 -07:00
ae
d642b07fcc [feat] add /status slash command (#1873)
- Added a `/status` command, which will be useful when we update the
home screen to print less status.
- Moved `create_config_summary_entries` to common since it's used in a
few places.
- Noticed we inconsistently had periods in slash command descriptions
and just removed them everywhere.
- Noticed the diff description was overflowing so made it shorter.
2025-08-05 23:57:52 -07:00
Michael Bolin
eaf2fb5b4f fix: fully enumerate EventMsg in chatwidget.rs (#1866)
https://github.com/openai/codex/pull/1868 is a related fix that was in
flight simultaneously, but after talking to @easong-openai, this:

- logs instead of renders for `BackgroundEvent`
- logs for `TurnDiff`
- renders for `PatchApplyEnd`
2025-08-05 22:44:27 -07:00
aibrahim-oai
31dcae67db Remove TurnDiff and apply patch from the render (#1868)
Make the TUI more specific about what to render; `PatchApplyEnd` and
`TurnDiff` need special handling.

Avoiding this issue:

<img width="503" height="138" alt="image"
src="https://github.com/user-attachments/assets/4c010ea8-701e-46d2-aa49-88b37fe0e5d9"
/>
2025-08-05 21:32:03 -07:00
easong-openai
e0303dbac0 Rescue chat completion changes (#1846)
https://github.com/openai/codex/pull/1835 has some messed up history.

This adds support for streaming chat completions, which is useful for Ollama. We should probably cast a very skeptical eye on the code introduced in this PR.

---------

Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2025-08-05 08:56:13 +00:00
easong-openai
906d449760 Stream model responses (#1810)
Stream the model's thoughts and responses instead of waiting for the whole
thing to come through. Very rough right now, but I'm making the risk call to push through.
2025-08-05 04:23:22 +00:00
Jeremy Rose
78a1d49fac fix command duration display (#1806)
we were always displaying "0ms" before.

<img width="731" height="101" alt="Screenshot 2025-08-02 at 10 51 22 PM"
src="https://github.com/user-attachments/assets/f56814ed-b9a4-4164-9e78-181c60ce19b7"
/>
2025-08-03 11:33:44 -07:00
Jeremy Rose
d62b703a21 custom textarea (#1794)
This replaces tui-textarea with a custom textarea component.

Key differences:
1. wrapped lines
2. better unicode handling
3. uses the native terminal cursor

This should perhaps be spun out into its own separate crate at some
point, but for now it's convenient to have it in-tree.
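As an illustration of the first two differences, a sketch of width-aware wrapping using the unicode-width crate (whether the component actually uses that crate is an assumption):

```rust
use unicode_width::UnicodeWidthChar;

/// Wrap text to `max_cols` terminal columns by display width rather than by
/// char count, so wide (e.g. CJK) characters correctly occupy two cells.
/// The real component also maps the cursor position across wrapped rows.
fn wrap_by_width(text: &str, max_cols: usize) -> Vec<String> {
    let mut rows = vec![String::new()];
    let mut col = 0;
    for ch in text.chars() {
        let w = ch.width().unwrap_or(0); // zero-width chars take no cell
        if col + w > max_cols && col > 0 {
            rows.push(String::new());
            col = 0;
        }
        rows.last_mut().unwrap().push(ch);
        col += w;
    }
    rows
}
```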
2025-08-03 11:31:35 -07:00
aibrahim-oai
f20de21cb6 collapse stdout and stderr delta events into one (#1787) 2025-08-01 14:00:19 -07:00
aibrahim-oai
bc7beddaa2 feat: stream exec stdout events (#1786)
## Summary
- stream command stdout as `ExecCommandStdout` events
- forward streamed stdout to clients and ignore it in the human output
processor
- adjust call sites for the new streaming API
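A sketch of what the event plumbing might look like; beyond the `ExecCommandStdout` name quoted above, the variant and field names are assumptions.

```rust
use std::sync::mpsc::Sender;

// Each read from the child's stdout pipe becomes one event. The TUI renders
// deltas as they arrive; the human output processor ignores them and waits
// for `End`, so non-interactive output is unchanged.
enum ExecEvent {
    ExecCommandStdout { call_id: String, chunk: Vec<u8> },
    End { call_id: String, exit_code: i32 },
}

fn forward_chunk(tx: &Sender<ExecEvent>, call_id: &str, chunk: &[u8]) {
    let _ = tx.send(ExecEvent::ExecCommandStdout {
        call_id: call_id.to_string(),
        chunk: chunk.to_vec(),
    });
}
```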
2025-08-01 13:04:34 -07:00
aibrahim-oai
e2c994e32a Add /compact (#1527)
- Add an operation to summarize the context so far.
- The operation runs a compact task that summarizes the context.
- The operation clears the previous context to free the context window.
- The operation doesn't use `run_task`, to avoid corrupting the session.
- Add /compact in the TUI (flow sketched below).
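A minimal sketch of the flow, with hypothetical names: summarize the transcript, then restart history from the summary alone so the context window is freed.

```rust
// `summarize` stands in for the compact task that asks the model for a
// summary; per the notes above, the real code runs it outside `run_task`.
fn compact(history: &mut Vec<String>, summarize: impl Fn(&[String]) -> String) {
    let summary = summarize(history);
    history.clear(); // free the context window
    history.push(format!("Summary of the conversation so far: {summary}"));
}
```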



https://github.com/user-attachments/assets/e06c24e5-dcfb-4806-934a-564d425a919c
2025-07-31 21:34:32 -07:00
easong-openai
575590e4c2 Detect kitty terminals (#1748)
We want to detect kitty terminals so we can preferentially upgrade their UX without degrading older terminals.
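A sketch of one common detection heuristic (an assumption; the PR may probe differently): kitty sets `TERM=xterm-kitty` and exports `KITTY_WINDOW_ID`.

```rust
use std::env;

fn looks_like_kitty() -> bool {
    // Environment heuristics only; a stricter check would query the kitty
    // keyboard protocol and wait for a terminal response.
    env::var("TERM").map(|t| t.contains("kitty")).unwrap_or(false)
        || env::var("KITTY_WINDOW_ID").is_ok()
}
```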
2025-08-01 00:30:44 +00:00
Jeremy Rose
ea69a1d72f lighter approval modal (#1768)
The yellow hazard stripes were too scary :)

This also has the added benefit of not rendering anything at the full
width of the terminal, so resizing is a little easier to handle.

<img width="860" height="390" alt="Screenshot 2025-07-31 at 4 03 29 PM"
src="https://github.com/user-attachments/assets/18476e1a-065d-4da9-92fe-e94978ab0fce"
/>

<img width="860" height="390" alt="Screenshot 2025-07-31 at 4 05 03 PM"
src="https://github.com/user-attachments/assets/337db0da-de40-48c6-ae71-0e40f24b87e7"
/>
2025-07-31 17:10:52 -07:00
easong-openai
6ce0a5875b Initial planning tool (#1753)
We need to optimize the prompt, but this causes the model to use the new
planning_tool.

<img width="765" height="110" alt="image"
src="https://github.com/user-attachments/assets/45633f7f-3c85-4e60-8b80-902f1b3b508d"
/>
2025-07-31 20:45:52 +00:00
Jeremy Rose
d86270696e streamline ui (#1733)
Simplify and improve many UI elements.
* Remove all-around borders in most places. These interact badly with
terminal resizing and look heavy. Prefer left-side-only borders.
* Make the viewport adjust to the size of its contents.
* <kbd>/</kbd> and <kbd>@</kbd> autocomplete boxes appear below the
prompt, instead of above it.
* Restyle the keyboard shortcut hints & move them to the left.
* Restyle the approval dialog.
* Use synchronized rendering to avoid flashing during rerenders.
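For the last point, a sketch of synchronized rendering using crossterm's synchronized-update commands (assuming crossterm; terminals that lack support simply ignore the escape codes):

```rust
use std::io::stdout;
use crossterm::execute;
use crossterm::terminal::{BeginSynchronizedUpdate, EndSynchronizedUpdate};

// Wrap a frame in a synchronized update so the terminal presents the whole
// redraw atomically instead of flashing partially drawn states.
fn draw_frame(render: impl FnOnce() -> std::io::Result<()>) -> std::io::Result<()> {
    execute!(stdout(), BeginSynchronizedUpdate)?;
    let result = render();
    execute!(stdout(), EndSynchronizedUpdate)?;
    result
}
```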


https://github.com/user-attachments/assets/96f044af-283b-411c-b7fc-5e6b8a433c20

<img width="1117" height="858" alt="Screenshot 2025-07-30 at 5 29 20 PM"
src="https://github.com/user-attachments/assets/0cc0af77-8396-429b-b6ee-9feaaccdbee7"
/>
2025-07-31 00:43:21 -07:00
Jeremy Rose
f2134f6633 resizable viewport (#1732)
Proof of concept for a resizable viewport.

The general approach here is to duplicate the `Terminal` struct from
ratatui, but with our own logic. This is a "light fork" in that we are
still using all the base ratatui functions (`Buffer`, `Widget` and so
on), but we're doing our own bookkeeping at the top level to determine
where to draw everything.

This approach could use improvement; e.g., when the window is resized to a
smaller size and the UI wraps, we don't correctly clear out the
artifacts from wrapping. This is fixable with a little work (i.e.,
tracking which parts of our UI would have been wrapped), but this
behavior is at least on par with the existing behavior.
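A sketch of the bookkeeping described above (the struct and field names are hypothetical; only `Buffer`, `Rect`, and `Widget` come from ratatui):

```rust
use ratatui::{buffer::Buffer, layout::Rect, widgets::Widget};

// The "light fork": reuse ratatui's Buffer/Widget machinery but decide
// ourselves where the viewport sits, growing it to fit its contents.
struct LightTerminal {
    screen: Rect,      // current terminal size
    viewport_top: u16, // our own bookkeeping: the row where the UI starts
}

impl LightTerminal {
    fn draw(&mut self, widget: impl Widget, content_height: u16) -> Buffer {
        // Size the viewport to its contents, clamped to the screen, and pin
        // it to the bottom of the terminal.
        let height = content_height.min(self.screen.height);
        self.viewport_top = self.screen.height - height;
        let area = Rect::new(0, self.viewport_top, self.screen.width, height);
        let mut buf = Buffer::empty(area);
        widget.render(area, &mut buf);
        buf // a real implementation diffs this against the previous frame
    }
}
```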


https://github.com/user-attachments/assets/4eb17689-09fd-4daa-8315-c7ebc654986d


cc @joshka who might have Thoughts™
2025-07-31 00:06:55 +00:00
Jeremy Rose
347c81ad00 remove conversation history widget (#1727)
this widget is no longer used.
2025-07-30 10:05:40 -07:00
easong-openai
80c19ea77c Fix approval workflow (#1696)
(Hopefully) temporary solution to the invisible approvals problem -
prints commands to history when they need approval and then also prints
the result of the approval. In the near future we should be able to do
some fancy stuff with updating commands before writing them to permanent
history.

Also, ctrl-c while in the approval modal now acts as esc (aborts the command)
and puts the TUI in the state where one additional ctrl-c will exit.
2025-07-28 19:00:06 +00:00
Michael Bolin
2405c40026 chore: update Codex::spawn() to return a struct instead of a tuple (#1677)
Also update `init_codex()` to return a `struct` instead of a tuple.
2025-07-27 20:01:35 -07:00
easong-openai
58bed77ba7 Remove tab focus switching (#1694)
Previously, pressing tab would switch TUI focus to the history scrollbox; this is no longer necessary.
2025-07-27 11:04:09 -07:00
easong-openai
480e82b00d Easily Selectable History (#1672)
This update replaces the previous ratatui history widget with an
append-only log so that the terminal can handle text selection and
scrolling. It also disables streaming responses, which we'll do our best
to bring back in a later PR, and adds a small summary of token use
after the TUI exits.
2025-07-25 01:56:40 -07:00
aibrahim-oai
b4ab7c1b73 Flaky CI fix (#1647)
Flushing before sending `TaskCompleteEvent` and ending the submission
loop to avoid race conditions.
2025-07-23 15:03:26 -07:00
Gabriel Peal
084236f717 Add call_id to patch approvals and elicitations (#1660)
Builds on https://github.com/openai/codex/pull/1659 and adds call_id to
a few more places for the same reason.
2025-07-23 15:55:35 -04:00
Gabriel Peal
bc944e77f5 Improve messages emitted for exec failures (#1659)
1. Emit call_id to exec approval elicitations for mcp client convenience
2. Remove the `-retry` suffix from the call id for the same reason as above,
and upstream the reset behavior to the MCP client
2025-07-23 14:43:53 -04:00
Dylan
18b2b30841 [mcp-server] Add reply tool call (#1643)
## Summary
Adds a new mcp tool call, `codex-reply`, so we can continue existing
sessions. This is a first draft and does not yet support sessions from
previous processes.

## Testing
- [x] tested with mcp client
2025-07-21 21:01:56 -07:00
aibrahim-oai
d5a2148deb Fix ctrl+c interrupt while streaming (#1617)
Interrupting while streaming is currently broken because we aren't
clearing the delta buffer.
2025-07-18 12:08:25 -07:00
aibrahim-oai
bb30ab9e96 Implement redraw debounce (#1599)
## Summary
- debounce redraw events so repeated requests don't overwhelm the
terminal
- add a `RequestRedraw` event and schedule redraws after 100ms (sketched below)
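A sketch of the debounce, assuming a simple thread-based scheduler; only the `RequestRedraw`/redraw names and the 100ms window come from this PR.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc::Sender, Arc};
use std::thread;
use std::time::Duration;

enum AppEvent {
    Redraw,
}

// Many RequestRedraw calls within the window collapse into a single Redraw:
// only the first request schedules one; later requests see the pending flag.
fn request_redraw(pending: &Arc<AtomicBool>, tx: &Sender<AppEvent>) {
    if pending.swap(true, Ordering::SeqCst) {
        return; // a redraw is already scheduled
    }
    let (pending, tx) = (Arc::clone(pending), tx.clone());
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(100));
        pending.store(false, Ordering::SeqCst);
        let _ = tx.send(AppEvent::Redraw);
    });
}
```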

## Testing
- `cargo clippy --tests`
- `cargo test` *(fails: Sandbox Denied errors in landlock tests)*

------
https://chatgpt.com/codex/tasks/task_i_68792a65b8b483218ec90a8f68746cd8

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-07-17 12:54:55 -07:00
aibrahim-oai
643ab1f582 Add streaming to exec and tui (#1594)
Added support for streaming in `tui`
Added support for streaming in `exec`


https://github.com/user-attachments/assets/4215892e-d940-452c-a1d0-416ed0cf14eb
2025-07-16 22:26:31 -07:00
aibrahim-oai
2bd3314886 support deltas in core (#1587)
- Added support for message and reasoning deltas
- Skipped adding support in the CLI and TUI for later
- Commented out a failing test (wrong merge) that needs a fix in a separate
PR.

Side note: I think we need to disable merging when CI doesn't pass.
2025-07-16 15:11:18 -07:00
Michael Bolin
5b820c5ce7 feat: ctrl-d only exits when there is no user input (#1589)
While this does make it so that `ctrl-d` will not exit Codex when the
composer is not empty, `ctrl-d` will still exit Codex if it is in the
"working" state.

Fixes https://github.com/openai/codex/issues/1443.
2025-07-16 08:59:26 -07:00
aibrahim-oai
72504f1d9c Add paste summarization to Codex TUI (#1549)
## Summary
- introduce `Paste` event to avoid per-character paste handling
- collapse large pasted blocks to `[Pasted Content X lines]`
- store the real text so submission still includes it
- wire paste handling through `App`, `ChatWidget`, `BottomPane`, and
`ChatComposer`
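A sketch of the collapse step (the threshold and field names are assumptions; the placeholder string is quoted from the summary above):

```rust
struct PastedBlock {
    placeholder: String, // what the composer displays
    original: String,    // what actually gets submitted
}

fn summarize_paste(pasted: &str) -> PastedBlock {
    const MAX_INLINE_LINES: usize = 5; // assumed cutoff
    let lines = pasted.lines().count();
    let placeholder = if lines > MAX_INLINE_LINES {
        format!("[Pasted Content {lines} lines]")
    } else {
        pasted.to_string()
    };
    PastedBlock {
        placeholder,
        original: pasted.to_string(),
    }
}
```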

## Testing
- `cargo test -p codex-tui`


------
https://chatgpt.com/codex/tasks/task_i_6871e24abf80832184d1f3ca0c61a5ee


https://github.com/user-attachments/assets/eda7412f-da30-4474-9f7c-96b49d48fbf8
2025-07-12 15:32:00 -07:00
Michael Bolin
4a341efe92 feat: highlight matching characters in fuzzy file search (#1420)
Using the new file-search API introduced in
https://github.com/openai/codex/pull/1419, matching characters are now
shown in bold in the TUI:


https://github.com/user-attachments/assets/8bbcc6c6-75a3-493f-8ea4-b2a063e09b3a

Fixes https://github.com/openai/codex/issues/1261
2025-06-28 15:04:23 -07:00
Michael Bolin
5a0f236ca4 feat: add support for @ to do file search (#1401)
Introduces support for `@` to trigger a fuzzy-filename search in the
composer. Under the hood, this leverages
https://crates.io/crates/nucleo-matcher to do the fuzzy matching and
https://crates.io/crates/ignore to build up the list of file candidates
(so that it respects `.gitignore`).

For simplicity (at least for now), we do not do any caching between
searches like VS Code does for its file search:


1d89ed699b/src/vs/workbench/services/search/node/rawSearchService.ts (L212-L218)

Because we do not do any caching, I saw queries take up to three seconds
on large repositories with hundreds of thousands of files. To that end,
we do not perform searches synchronously on each keystroke, but instead
dispatch an event to do the search on a background thread that
asynchronously reports back to the UI when the results are available.
This is largely handled by the `FileSearchManager` introduced in this
PR, which also has logic for debouncing requests so there is at most one
search in flight at a time.

While we could potentially polish and tune this feature further, it may
already be overengineered for how it will be used in practice, so we
can improve things going forward if it turns out that this is not "good
enough" in the wild.

Note this feature does not work like `@` in the TypeScript CLI, which
was more like directory-based tab completion. In the Rust CLI, `@`
triggers a full-repo fuzzy-filename search.

Fixes https://github.com/openai/codex/issues/1261.
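A much-simplified synchronous sketch: candidates are collected with the `ignore` crate as in the PR, but a naive subsequence check stands in for nucleo-matcher, and the background-thread dispatch and debouncing handled by `FileSearchManager` are omitted.

```rust
use ignore::WalkBuilder;

fn search(root: &str, query: &str) -> Vec<String> {
    let mut hits: Vec<String> = WalkBuilder::new(root)
        .build() // respects .gitignore by default
        .filter_map(Result::ok)
        .filter(|e| e.file_type().map_or(false, |t| t.is_file()))
        .map(|e| e.path().display().to_string())
        .filter(|path| is_subsequence(query, path))
        .collect();
    hits.sort_by_key(|p| p.len()); // crude ranking: shorter paths first
    hits
}

// Stand-in scorer: every query char must appear in order in the path.
fn is_subsequence(needle: &str, haystack: &str) -> bool {
    let mut hay = haystack.chars();
    needle.chars().all(|n| hay.any(|h| h.eq_ignore_ascii_case(&n)))
}
```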
2025-06-28 13:47:42 -07:00
Gabriel Peal
2e293ce903 Handle Ctrl+C quit when idle (#1402)
## Summary
- show `Ctrl+C to quit` hint when pressing Ctrl+C with no active task
- exit with Ctrl+C if the hint is already visible
- clear the hint when tasks begin or other keys are pressed
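The hint logic amounts to a tiny state machine; a sketch with hypothetical names:

```rust
#[derive(Default)]
struct QuitHint {
    visible: bool,
}

impl QuitHint {
    /// Returns true when the app should actually exit.
    fn on_ctrl_c(&mut self, task_running: bool) -> bool {
        if task_running {
            return false; // Ctrl+C interrupts the task instead of quitting
        }
        if self.visible {
            return true; // second Ctrl+C while the hint is showing: quit
        }
        self.visible = true; // first Ctrl+C: show "Ctrl+C to quit"
        false
    }

    /// Any other key (or a task starting) clears the hint.
    fn reset(&mut self) {
        self.visible = false;
    }
}
```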


https://github.com/user-attachments/assets/931e2d7c-1c80-4b45-9908-d119f74df23c



------
https://chatgpt.com/s/cd_685ec8875a308191beaa95886dc1379e

Fixes #1245
2025-06-27 13:37:11 -04:00
Michael Bolin
fa0e17f83a feat: add support for /diff command (#1389)
Adds support for a `/diff` command comparable to the one available in
the TypeScript CLI.

<img width="1103" alt="Screenshot 2025-06-26 at 12 31 33 PM"
src="https://github.com/user-attachments/assets/5dc646ca-301f-41ff-92a7-595c68db64b6"
/>

While here, changed the `SlashCommand` enum so the declared variant
order is the order the commands appear in the popup menu. This way,
`/toggle-mouse-mode` is listed last, as it is the least likely to be
used.
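A sketch of the pattern, assuming a derive like strum's `EnumIter` (whether the real code uses strum is an assumption): iteration follows declaration order, so the popup order is simply the variant order.

```rust
use strum::IntoEnumIterator;

#[derive(Debug, strum::EnumIter)]
enum SlashCommand {
    New,
    Diff,
    ToggleMouseMode, // declared last, so listed last in the popup
}

fn popup_entries() -> Vec<String> {
    // EnumIter yields variants in declaration order.
    SlashCommand::iter()
        .map(|cmd| format!("/{cmd:?}"))
        .collect()
}
```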

Fixes https://github.com/openai/codex/issues/1253.
2025-06-26 13:03:31 -07:00
Michael Bolin
fcfe43c7df feat: show number of tokens remaining in UI (#1388)
When using the OpenAI Responses API, we now record the `usage` field for
a `"response.completed"` event, which includes metrics about the number
of tokens consumed. We also introduce `openai_model_info.rs`, which
includes current data about the most common OpenAI models available via
the API (specifically `context_window` and `max_output_tokens`). If
Codex does not recognize the model, you can set `model_context_window`
and `model_max_output_tokens` explicitly in `config.toml`.

We then introduce a new event type in `protocol.rs`, `TokenCount`,
which includes the `TokenUsage` for the most recent turn.

Finally, we update the TUI to record the running sum of tokens used so
the percentage of available context window remaining can be reported via
the placeholder text for the composer:

![Screenshot 2025-06-25 at 11 20
55 PM](https://github.com/user-attachments/assets/6fd6982f-7247-4f14-84b2-2e600cb1fd49)

We could certainly get much fancier with this (such as reporting the
estimated cost of the conversation), but for now, we are just trying to
achieve feature parity with the TypeScript CLI.

Though arguably this improves upon the TypeScript CLI, as the TypeScript
CLI uses heuristics to estimate the number of tokens used rather than
using the `usage` information directly:


296996d74e/codex-cli/src/utils/approximate-tokens-used.ts (L3-L16)

Fixes https://github.com/openai/codex/issues/1242
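A sketch of the lookup, with placeholder numbers rather than authoritative model limits:

```rust
struct ModelInfo {
    context_window: u64,
    max_output_tokens: u64,
}

/// Built-in data for recognized models; `None` means the user must set
/// `model_context_window` / `model_max_output_tokens` in config.toml.
fn openai_model_info(model: &str) -> Option<ModelInfo> {
    match model {
        // Placeholder values for illustration, not authoritative limits.
        "o3" => Some(ModelInfo {
            context_window: 200_000,
            max_output_tokens: 100_000,
        }),
        _ => None,
    }
}
```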
2025-06-25 23:31:11 -07:00
Reilly Wood
345a38502d codex-rs: Rename /clear to /new, make it start an entirely new chat (#1264)
I noticed that `/clear` wasn't fully clearing chat history; it would
clear the chat history widgets _in the UI_, but the LLM still had access
to information from previous messages.

This PR renames `/clear` to `/new` for clarity as per Michael's
suggestion, resetting `app_state` to a fresh `ChatWidget`.
2025-06-06 16:29:37 -07:00
Michael Bolin
e81327e5f4 feat: add hide_agent_reasoning config option (#1181)
This PR introduces a `hide_agent_reasoning` config option (that defaults
to `false`) that users can enable to make the output less verbose by
suppressing reasoning output.

To test, verified that this includes agent reasoning in the output:

```
echo hello | just exec
```

whereas this does not:

```
echo hello | just exec --config hide_agent_reasoning=true
```
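The option itself is just a defaulted boolean in the config; a sketch with serde (assuming the config is deserialized with serde, as config.toml suggests):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct Config {
    /// When true, suppress agent reasoning output. `#[serde(default)]`
    /// makes the field false when absent, keeping output verbose by default.
    #[serde(default)]
    hide_agent_reasoning: bool,
}
```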
2025-05-30 23:14:56 -07:00
Michael Bolin
a768a6a41d fix: introduce ResponseInputItem::McpToolCallOutput variant (#1151)
The output of an MCP server tool call can be one of several types, but
to date, we treated all outputs as text by showing the serialized JSON
as the "tool output" in Codex:


25a9949c49/codex-rs/mcp-types/src/lib.rs (L96-L101)

This PR adds support for the `ImageContent` variant so we can now
display an image output from an MCP tool call.

In making this change, we introduce a new
`ResponseInputItem::McpToolCallOutput` variant so that we can work with
the `mcp_types::CallToolResult` directly when the function call is made
to an MCP server.

Though arguably the more significant change is the introduction of
`HistoryCell::CompletedMcpToolCallWithImageOutput`, which is a cell that
uses `ratatui_image` to render an image into the terminal. To support
this, we introduce `ImageRenderCache`, cache a
`ratatui_image::picker::Picker`, and add `ensure_image_cache()` to cache the
appropriate scaled image data and dimensions based on the current
terminal size.

To test, I created a minimal `package.json`:

```json
{
  "name": "kitty-mcp",
  "version": "1.0.0",
  "type": "module",
  "description": "MCP that returns image of kitty",
  "main": "index.js",
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.12.0"
  }
}
```

with the following `index.js` to define the MCP server:

```js
#!/usr/bin/env node

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { readFile } from "node:fs/promises";
import { join } from "node:path";

const IMAGE_URI = "image://Ada.png";

const server = new McpServer({
  name: "Demo",
  version: "1.0.0",
});

server.tool(
  "get-cat-image",
  "If you need a cat image, this tool will provide one.",
  async () => ({
    content: [
      { type: "image", data: await getAdaPngBase64(), mimeType: "image/png" },
    ],
  })
);

server.resource("Ada the Cat", IMAGE_URI, async (uri) => {
  const base64Image = await getAdaPngBase64();
  return {
    contents: [
      {
        uri: uri.href,
        mimeType: "image/png",
        blob: base64Image,
      },
    ],
  };
});

async function getAdaPngBase64() {
  const __dirname = new URL(".", import.meta.url).pathname;
  // From 9705ce2c59/assets/Ada.png
  const filePath = join(__dirname, "Ada.png");
  const imageData = await readFile(filePath);
  const base64Image = imageData.toString("base64");
  return base64Image;
}

const transport = new StdioServerTransport();
await server.connect(transport);
```

With the local changes from this PR, I added the following to my
`config.toml`:

```toml
[mcp_servers.kitty]
command = "node"
args = ["/Users/mbolin/code/kitty-mcp/index.js"]
```

Running the TUI from source:

```
cargo run --bin codex -- --model o3 'I need a picture of a cat'
```

I get:

<img width="732" alt="image"
src="https://github.com/user-attachments/assets/bf80b721-9ca0-4d81-aec7-77d6899e2869"
/>

Now, that said, I have only tested in iTerm and there is definitely some
funny business with getting an accurate character-to-pixel ratio
(sometimes the `CompletedMcpToolCallWithImageOutput` thinks it needs 10
rows to render instead of 4), so there is still work to be done here.
2025-05-28 19:03:17 -07:00
Michael Bolin
d766e845b3 feat: experimental --output-last-message flag to exec subcommand (#1037)
This introduces an experimental `--output-last-message` flag that can be
used to identify a file where the final message from the agent will be
written. Two use cases:

- Ultimately, we will likely add a `--quiet` option to `exec`, but even
if the user does not want any output written to the terminal, they
probably want to know what the agent did. Writing the output to a file
makes it possible to get that information in a clean way.
- Relatedly, when using `exec` in CI, it is easier to review the
transcript written "normally" (i.e., not as JSON or something with
extra escapes), but getting programmatic access to the last message is
likely helpful, so writing the last message to a file gets the best of
both worlds.

I am calling this "experimental" because it is possible that we are
overfitting and will want a more general solution to this problem that
would justify removing this flag.
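A sketch of the flag wiring, assuming clap's derive API:

```rust
use clap::Parser;
use std::path::PathBuf;

#[derive(Parser)]
struct ExecArgs {
    /// File to write the agent's final message to.
    #[arg(long = "output-last-message")]
    output_last_message: Option<PathBuf>,
}

// After the run completes, persist the last agent message if requested.
fn write_last_message(args: &ExecArgs, last_message: &str) -> std::io::Result<()> {
    if let Some(path) = &args.output_last_message {
        std::fs::write(path, last_message)?;
    }
    Ok(())
}
```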
2025-05-19 16:08:18 -07:00
Michael Bolin
f8b6b1db81 fix: ensure the first user message always displays after the session info (#988)
Previously, if the first user message was sent with the command
invocation, e.g.:

```
$ cargo run --bin codex 'hello'
```

Then the user message was added as the first entry in the history and
then `is_first_event` would be `false` here:


031df77dfb/codex-rs/tui/src/conversation_history_widget.rs (L178-L179)

which would prevent the "welcome" message with things like the model
version from displaying.

The fix in this PR is twofold:

* Reorganize the logic so the `ChatWidget` constructor stores
`initial_user_message` rather than sending it right away. Now inside
`handle_codex_event()`, it waits for the `SessionConfigured` event and
sends the `initial_user_message`, if it exists.
* In `conversation_history_widget.rs`, `add_session_info()` checks to
see whether a `WelcomeMessage` exists in the history when determining
the value of `has_welcome_message`. By construction, we expect that
`WelcomeMessage` is always the first message (in which case the existing
`let is_first_event = self.entries.is_empty();` logic would be sound),
but we decide to be extra defensive in case an `EventMsg::Error` is
processed before `EventMsg::SessionConfigured`.
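A sketch of the first part of the fix (simplified; `send_user_message` is a hypothetical stand-in):

```rust
enum EventMsg {
    SessionConfigured,
    // ...other events elided
}

struct ChatWidget {
    // Stored at construction instead of being sent immediately.
    initial_user_message: Option<String>,
}

impl ChatWidget {
    fn handle_codex_event(&mut self, event: &EventMsg) {
        // Only after the session is configured (and the welcome message has
        // been rendered) do we send the message supplied on the command line.
        if let EventMsg::SessionConfigured = event {
            if let Some(msg) = self.initial_user_message.take() {
                self.send_user_message(msg);
            }
        }
    }

    fn send_user_message(&mut self, _msg: String) { /* hypothetical */ }
}
```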
2025-05-17 09:00:23 -07:00
Michael Bolin
f9143d0361 fix: do not let Tab keypress flow through to composer when used to toggle focus (#977)
One line fix from Codex!
2025-05-16 19:27:49 -07:00
Michael Bolin
1e39189393 feat: add support for file_opener option in Rust, similar to #911 (#957)
This ports the enhancement introduced in
https://github.com/openai/codex/pull/911 (and the fixes in
https://github.com/openai/codex/pull/919) for the TypeScript CLI to the
Rust one.
2025-05-16 11:33:08 -07:00