feat: record messages from user in ~/.codex/history.jsonl (#939)

This is a large change to support a "history" feature like you would
expect in a shell like Bash.

History events are recorded in `$CODEX_HOME/history.jsonl`. Because it
is a JSONL file, it is straightforward to append new entries (as opposed
to the TypeScript CLI, which uses `$CODEX_HOME/history.json`: to keep
that file valid JSON, each new entry entails rewriting the entire file).
Because multiple instances of Codex CLI can write to `history.jsonl` at
once, we use advisory file locking when working with `history.jsonl` in
`codex-rs/core/src/message_history.rs`.

Because we believe history is a sufficiently useful feature, we enable
it by default. To provide some safety, however, we set the file
permissions of `history.jsonl` to `0o600` so that other users on the
system cannot read the user's history. We do not yet support a default
list of `SENSITIVE_PATTERNS` the way the TypeScript CLI does:


3fdf9df133/codex-cli/src/utils/storage/command-history.ts (L10-L17)

We are going to take a more conservative approach to this list in the
Rust CLI. For example, while `/\b[A-Za-z0-9-_]{20,}\b/` might exclude
sensitive information like API tokens, it would also exclude valuable
information such as references to Git commits.
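
As a purely hypothetical illustration of what a more conservative list
could look like (the patterns and helper names below are made up for
this example, are not part of this change, and would require adding the
`regex` crate), the idea is to match well-known credential formats
rather than "any long token-ish string":

```rust
use regex::Regex;

/// Hypothetical, deliberately narrow patterns: they target well-known token
/// formats rather than any 20+ character string, so Git SHAs still pass.
fn sensitive_patterns() -> Vec<Regex> {
    [
        r"\bAKIA[0-9A-Z]{16}\b",               // AWS access key IDs
        r"\bgithub_pat_[A-Za-z0-9_]{20,}\b",   // GitHub fine-grained tokens
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----", // PEM private key headers
    ]
    .into_iter()
    .map(|pattern| Regex::new(pattern).expect("pattern should compile"))
    .collect()
}

/// Returns true if `text` matches any of the hypothetical patterns above.
fn looks_sensitive(text: &str) -> bool {
    sensitive_patterns().iter().any(|re| re.is_match(text))
}
```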

As noted in the updated documentation, users can opt-out of history by
adding the following to `config.toml`:

```toml
[history]
persistence = "none" 
```

Because `history.jsonl` could, in theory, be quite large, we take an
(arguably overly pedantic) approach to reading history entries into
memory. Specifically, we start by telling the client the current number
of entries in the history file (`history_entry_count`) as well as the
inode (`history_log_id`) of `history.jsonl` (see the new fields on
`SessionConfiguredEvent`).

The client is responsible for keeping new entries in memory to create a
"local history," but if the user presses Up enough times to go "past"
the end of local history, the client should use the new
`GetHistoryEntryRequest` in the protocol to fetch older entries.
Specifically, it should pass the `history_log_id` it was given
originally and work backwards from `history_entry_count`. (It should
really fetch history in batches rather than one entry at a time, but
that is something we can improve upon in subsequent PRs.)
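
A rough sketch of that client-side decision, using the `Op` variant
added in this PR (the bookkeeping here is simplified relative to the
real `ChatComposerHistory` in the TUI):

```rust
use codex_core::protocol::Op;

/// Simplified sketch: decide what pressing <Up> should do given the
/// metadata from `SessionConfiguredEvent` and this session's local history.
/// Returns an `Op` to submit when a persistent entry must be fetched.
fn on_up_pressed(
    history_log_id: u64,
    history_entry_count: usize,
    local_history: &[String],
    cursor: &mut Option<usize>,
) -> Option<Op> {
    let total = history_entry_count + local_history.len();
    if total == 0 {
        return None;
    }
    // Step one entry further back in the combined (persistent + local) history.
    let idx = match *cursor {
        None => total - 1,
        Some(0) => return None, // already at the oldest entry
        Some(i) => i - 1,
    };
    *cursor = Some(idx);
    if idx >= history_entry_count {
        // Still within this session's local history; display it directly.
        None
    } else {
        // Gone "past" local history: ask the core for the older entry.
        Some(Op::GetHistoryEntryRequest {
            offset: idx,
            log_id: history_log_id,
        })
    }
}
```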

This admittedly crazy scheme is designed to defend
against:

* The `history.jsonl` file being truncated during the session such that
the index into the history is no longer consistent with what had been
read up to that point. We do not yet have logic to enforce a `max_bytes`
for `history.jsonl`, but once we do, we will aspire to implement it in a
way that results in a new inode for the file on most systems, so that a
stale `history_log_id` can be detected (see the sketch after this list).
* New items from concurrent Codex CLI sessions appending to the history.
Because, in the absence of truncation, `history.jsonl` is an append-only
log, the client should always get a consistent view of history so long
as it reads backwards from `history_entry_count`. (That said, it will
not be able to read _new_ commands from concurrent sessions, but perhaps
we will introduce a `/` command to reload the latest history down the
road.)
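
For reference, a condensed sketch of how the core side detects the
truncation case mentioned above (the actual `lookup()` added in this PR
additionally takes a shared advisory lock and scans the file line by
line; the check is Unix-only because the identifier is the inode):

```rust
use std::fs::File;
use std::os::unix::fs::MetadataExt;
use std::path::Path;

/// Condensed sketch: if `history.jsonl` was truncated or rotated, it will
/// (on most filesystems) have a new inode, so a stale `history_log_id` no
/// longer matches and the lookup declines to serve the requested offset.
fn history_file_matches(path: &Path, log_id: u64) -> std::io::Result<bool> {
    let metadata = File::open(path)?.metadata()?;
    Ok(metadata.ino() == log_id)
}
```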

Admittedly, my testing of this feature thus far has been fairly light. I
expect we will find bugs and introduce enhancements/fixes going forward.
Author: Michael Bolin
Date: 2025-05-15 16:26:23 -07:00
Committed by: GitHub
Parent: 3fdf9df133
Commit: ce2ecbe72f
15 changed files with 873 additions and 15 deletions

codex-rs/Cargo.lock (generated)
View File

@@ -523,6 +523,7 @@ dependencies = [
"env-flags",
"eventsource-stream",
"fs-err",
"fs2",
"futures",
"landlock",
"libc",
@@ -1244,6 +1245,16 @@ dependencies = [
"autocfg",
]
[[package]]
name = "fs2"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9564fc758e15025b46aa6643b1b77d047d1a56a1aea6e01002ac0c7026876213"
dependencies = [
"libc",
"winapi",
]
[[package]]
name = "futures"
version = "0.3.31"

View File

@@ -23,7 +23,9 @@ This folder is the root of a Cargo workspace. It contains quite a bit of experim
## Config
The CLI can be configured via `~/.codex/config.toml`. It supports the following options:
The CLI can be configured via a file named `config.toml`. By default, configuration is read from `~/.codex/config.toml`, though the `CODEX_HOME` environment variable can be used to specify a directory other than `~/.codex`.
The `config.toml` file supports the following options:
### model
@@ -297,6 +299,17 @@ To have Codex use this script for notifications, you would configure it via `not
notify = ["python3", "/Users/mbolin/.codex/notify.py"]
```
### history
By default, Codex CLI records messages sent to the model in `$CODEX_HOME/history.jsonl`. Note that on UNIX, the file permissions are set to `o600`, so it should only be readable and writable by the owner.
To disable this behavior, configure `[history]` as follows:
```toml
[history]
persistence = "none" # "save-all" is the default value
```
### project_doc_max_bytes
Maximum number of bytes to read from an `AGENTS.md` file to include in the instructions sent with the first turn of a session. Defaults to 32 KiB.

View File

@@ -20,6 +20,7 @@ codex-mcp-client = { path = "../mcp-client" }
dirs = "6"
env-flags = "0.1.1"
eventsource-stream = "0.2.3"
fs2 = "0.4.3"
fs-err = "3.1.0"
futures = "0.3"
mcp-types = { path = "../mcp-types" }

View File

@@ -110,6 +110,7 @@ impl Codex {
cwd: config.cwd.clone(),
};
let config = Arc::new(config);
tokio::spawn(submission_loop(config, rx_sub, tx_event, ctrl_c));
let codex = Codex {
next_id: AtomicU64::new(0),
@@ -483,11 +484,14 @@ impl AgentTask {
}
async fn submission_loop(
config: Config,
config: Arc<Config>,
rx_sub: Receiver<Submission>,
tx_event: Sender<Event>,
ctrl_c: Arc<Notify>,
) {
// Generate a unique ID for the lifetime of this Codex session.
let session_id = Uuid::new_v4();
let mut sess: Option<Arc<Session>> = None;
// shorthand - send an event when there is no active session
let send_no_session_event = |sub_id: String| async {
@@ -608,7 +612,9 @@ async fn submission_loop(
// Attempt to create a RolloutRecorder *before* moving the
// `instructions` value into the Session struct.
let session_id = Uuid::new_v4();
// TODO: if ConfigureSession is sent twice, we will create an
// overlapping rollout file. Consider passing RolloutRecorder
// from above.
let rollout_recorder =
match RolloutRecorder::new(&config, session_id, instructions.clone()).await {
Ok(r) => Some(r),
@@ -633,10 +639,19 @@ async fn submission_loop(
rollout: Mutex::new(rollout_recorder),
}));
// Gather history metadata for SessionConfiguredEvent.
let (history_log_id, history_entry_count) =
crate::message_history::history_metadata(&config).await;
// ack
let events = std::iter::once(Event {
id: sub.id.clone(),
msg: EventMsg::SessionConfigured(SessionConfiguredEvent { session_id, model }),
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
session_id,
model,
history_log_id,
history_entry_count,
}),
})
.chain(mcp_connection_errors.into_iter());
for event in events {
@@ -691,6 +706,46 @@ async fn submission_loop(
other => sess.notify_approval(&id, other),
}
}
Op::AddToHistory { text } => {
let id = session_id;
let config = config.clone();
tokio::spawn(async move {
if let Err(e) = crate::message_history::append_entry(&text, &id, &config).await
{
tracing::warn!("failed to append to message history: {e}");
}
});
}
Op::GetHistoryEntryRequest { offset, log_id } => {
let config = config.clone();
let tx_event = tx_event.clone();
let sub_id = sub.id.clone();
tokio::spawn(async move {
// Run lookup in blocking thread because it does file IO + locking.
let entry_opt = tokio::task::spawn_blocking(move || {
crate::message_history::lookup(log_id, offset, &config)
})
.await
.unwrap_or(None);
let event = Event {
id: sub_id,
msg: EventMsg::GetHistoryEntryResponse(
crate::protocol::GetHistoryEntryResponseEvent {
offset,
log_id,
entry: entry_opt,
},
),
};
if let Err(e) = tx_event.send(event).await {
tracing::warn!("failed to send GetHistoryEntryResponse event: {e}");
}
});
}
}
}
debug!("Agent loop exited");

View File

@@ -81,6 +81,30 @@ pub struct Config {
/// Directory containing all Codex state (defaults to `~/.codex` but can be
/// overridden by the `CODEX_HOME` environment variable).
pub codex_home: PathBuf,
/// Settings that govern if and what will be written to `~/.codex/history.jsonl`.
pub history: History,
}
/// Settings that govern if and what will be written to `~/.codex/history.jsonl`.
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct History {
/// If true, history entries will not be written to disk.
pub persistence: HistoryPersistence,
/// If set, the maximum size of the history file in bytes.
/// TODO(mbolin): Not currently honored.
pub max_bytes: Option<usize>,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
#[serde(rename_all = "kebab-case")]
pub enum HistoryPersistence {
/// Save all history entries to disk.
#[default]
SaveAll,
/// Do not write history to disk.
None,
}
/// Base config deserialized from ~/.codex/config.toml.
@@ -130,6 +154,10 @@ pub struct ConfigToml {
/// Named profiles to facilitate switching between different configurations.
#[serde(default)]
pub profiles: HashMap<String, ConfigProfile>,
/// Settings that govern if and what will be written to `~/.codex/history.jsonl`.
#[serde(default)]
pub history: Option<History>,
}
impl ConfigToml {
@@ -297,6 +325,8 @@ impl Config {
}
};
let history = cfg.history.unwrap_or_default();
let config = Self {
model: model
.or(config_profile.model)
@@ -320,6 +350,7 @@ impl Config {
model_providers,
project_doc_max_bytes: cfg.project_doc_max_bytes.unwrap_or(PROJECT_DOC_MAX_BYTES),
codex_home,
history,
};
Ok(config)
}
@@ -468,6 +499,40 @@ mod tests {
);
}
#[test]
fn test_toml_parsing() {
let history_with_persistence = r#"
[history]
persistence = "save-all"
"#;
let history_with_persistence_cfg: ConfigToml =
toml::from_str::<ConfigToml>(history_with_persistence)
.expect("TOML deserialization should succeed");
assert_eq!(
Some(History {
persistence: HistoryPersistence::SaveAll,
max_bytes: None,
}),
history_with_persistence_cfg.history
);
let history_no_persistence = r#"
[history]
persistence = "none"
"#;
let history_no_persistence_cfg: ConfigToml =
toml::from_str::<ConfigToml>(history_no_persistence)
.expect("TOML deserialization should succeed");
assert_eq!(
Some(History {
persistence: HistoryPersistence::None,
max_bytes: None,
}),
history_no_persistence_cfg.history
);
}
/// Deserializing a TOML string containing an *invalid* permission should
/// fail with a helpful error rather than silently defaulting or
/// succeeding.
@@ -620,6 +685,7 @@ disable_response_storage = true
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
codex_home: fixture.codex_home(),
history: History::default(),
},
o3_profile_config
);
@@ -654,6 +720,7 @@ disable_response_storage = true
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
codex_home: fixture.codex_home(),
history: History::default(),
};
assert_eq!(expected_gpt3_profile_config, gpt3_profile_config);
@@ -703,6 +770,7 @@ disable_response_storage = true
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
codex_home: fixture.codex_home(),
history: History::default(),
};
assert_eq!(expected_zdr_profile_config, zdr_profile_config);

View File

@@ -24,6 +24,7 @@ pub mod landlock;
mod mcp_connection_manager;
pub mod mcp_server_config;
mod mcp_tool_call;
mod message_history;
mod model_provider_info;
pub use model_provider_info::ModelProviderInfo;
pub use model_provider_info::WireApi;

View File

@@ -0,0 +1,297 @@
//! Persistence layer for the global, append-only *message history* file.
//!
//! The history is stored at `~/.codex/history.jsonl` with **one JSON object per
//! line** so that it can be efficiently appended to and parsed with standard
//! JSON-Lines tooling. Each record has the following schema:
//!
//! ````text
//! {"session_id":"<uuid>","ts":<unix_seconds>,"text":"<message>"}
//! ````
//!
//! To minimise the chance of interleaved writes when multiple processes are
//! appending concurrently, callers should *prepare the full line* (record +
//! trailing `\n`) and write it with a **single `write(2)` system call** while
//! the file descriptor is opened with the `O_APPEND` flag. POSIX guarantees
//! that writes up to `PIPE_BUF` bytes are atomic in that case.
use std::fs::File;
use std::fs::OpenOptions;
use std::io::Result;
use std::io::Write;
use std::path::PathBuf;
use serde::Deserialize;
use serde::Serialize;
use std::time::Duration;
use tokio::fs;
use tokio::io::AsyncReadExt;
use uuid::Uuid;
use crate::config::Config;
use crate::config::HistoryPersistence;
#[cfg(unix)]
use std::os::unix::fs::OpenOptionsExt;
#[cfg(unix)]
use std::os::unix::fs::PermissionsExt;
/// Filename that stores the message history inside `~/.codex`.
const HISTORY_FILENAME: &str = "history.jsonl";
const MAX_RETRIES: usize = 10;
const RETRY_SLEEP: Duration = Duration::from_millis(100);
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct HistoryEntry {
pub session_id: String,
pub ts: u64,
pub text: String,
}
fn history_filepath(config: &Config) -> PathBuf {
let mut path = config.codex_home.clone();
path.push(HISTORY_FILENAME);
path
}
/// Append a `text` entry associated with `session_id` to the history file. Uses
/// advisory file locking to ensure that concurrent writes do not interleave,
/// which entails a small amount of blocking I/O internally.
pub(crate) async fn append_entry(text: &str, session_id: &Uuid, config: &Config) -> Result<()> {
match config.history.persistence {
HistoryPersistence::SaveAll => {
// Save everything: proceed.
}
HistoryPersistence::None => {
// No history persistence requested.
return Ok(());
}
}
// TODO: check `text` for sensitive patterns
// Resolve `~/.codex/history.jsonl` and ensure the parent directory exists.
let path = history_filepath(config);
if let Some(parent) = path.parent() {
tokio::fs::create_dir_all(parent).await?;
}
// Compute timestamp (seconds since the Unix epoch).
let ts = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map_err(|e| std::io::Error::other(format!("system clock before Unix epoch: {e}")))?
.as_secs();
// Construct the JSON line first so we can write it in a single syscall.
let entry = HistoryEntry {
session_id: session_id.to_string(),
ts,
text: text.to_string(),
};
let mut line = serde_json::to_string(&entry)
.map_err(|e| std::io::Error::other(format!("failed to serialise history entry: {e}")))?;
line.push('\n');
// Open in append-only mode.
let mut options = OpenOptions::new();
options.append(true).read(true).create(true);
#[cfg(unix)]
{
options.mode(0o600);
}
let mut history_file = options.open(&path)?;
// Ensure permissions.
ensure_owner_only_permissions(&history_file).await?;
// Lock file.
acquire_exclusive_lock_with_retry(&history_file).await?;
// We use sync I/O with spawn_blocking() because we are using a
// [`std::fs::File`] instead of a [`tokio::fs::File`] to leverage an
// advisory file locking API that is not available in the async API.
tokio::task::spawn_blocking(move || -> Result<()> {
history_file.write_all(line.as_bytes())?;
history_file.flush()?;
Ok(())
})
.await??;
Ok(())
}
/// Attempt to acquire an exclusive advisory lock on `file`, retrying up to 10
/// times if the lock is currently held by another process. This prevents a
/// potential indefinite wait while still giving other writers some time to
/// finish their operation.
async fn acquire_exclusive_lock_with_retry(file: &std::fs::File) -> Result<()> {
use tokio::time::sleep;
for _ in 0..MAX_RETRIES {
match fs2::FileExt::try_lock_exclusive(file) {
Ok(()) => return Ok(()),
Err(e) if e.kind() == std::io::ErrorKind::WouldBlock => {
sleep(RETRY_SLEEP).await;
}
Err(e) => return Err(e),
}
}
Err(std::io::Error::new(
std::io::ErrorKind::WouldBlock,
"could not acquire exclusive lock on history file after multiple attempts",
))
}
/// Asynchronously fetch the history file's *identifier* (inode on Unix) and
/// the current number of entries by counting newline characters.
pub(crate) async fn history_metadata(config: &Config) -> (u64, usize) {
let path = history_filepath(config);
#[cfg(unix)]
let log_id = {
use std::os::unix::fs::MetadataExt;
// Obtain metadata (async) to get the identifier.
let meta = match fs::metadata(&path).await {
Ok(m) => m,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => return (0, 0),
Err(_) => return (0, 0),
};
meta.ino()
};
#[cfg(not(unix))]
let log_id = 0u64;
// Open the file.
let mut file = match fs::File::open(&path).await {
Ok(f) => f,
Err(_) => return (log_id, 0),
};
// Count newline bytes.
let mut buf = [0u8; 8192];
let mut count = 0usize;
loop {
match file.read(&mut buf).await {
Ok(0) => break,
Ok(n) => {
count += buf[..n].iter().filter(|&&b| b == b'\n').count();
}
Err(_) => return (log_id, 0),
}
}
(log_id, count)
}
/// Given a `log_id` (on Unix this is the file's inode number) and a zero-based
/// `offset`, return the corresponding `HistoryEntry` if the identifier matches
/// the current history file **and** the requested offset exists. Any I/O or
/// parsing errors are logged and result in `None`.
///
/// Note this function is not async because it uses a sync advisory file
/// locking API.
#[cfg(unix)]
pub(crate) fn lookup(log_id: u64, offset: usize, config: &Config) -> Option<HistoryEntry> {
use std::io::BufRead;
use std::io::BufReader;
use std::os::unix::fs::MetadataExt;
let path = history_filepath(config);
let file: File = match OpenOptions::new().read(true).open(&path) {
Ok(f) => f,
Err(e) => {
tracing::warn!(error = %e, "failed to open history file");
return None;
}
};
let metadata = match file.metadata() {
Ok(m) => m,
Err(e) => {
tracing::warn!(error = %e, "failed to stat history file");
return None;
}
};
if metadata.ino() != log_id {
return None;
}
// Open & lock file for reading.
if let Err(e) = acquire_shared_lock_with_retry(&file) {
tracing::warn!(error = %e, "failed to acquire shared lock on history file");
return None;
}
let reader = BufReader::new(&file);
for (idx, line_res) in reader.lines().enumerate() {
let line = match line_res {
Ok(l) => l,
Err(e) => {
tracing::warn!(error = %e, "failed to read line from history file");
return None;
}
};
if idx == offset {
match serde_json::from_str::<HistoryEntry>(&line) {
Ok(entry) => return Some(entry),
Err(e) => {
tracing::warn!(error = %e, "failed to parse history entry");
return None;
}
}
}
}
None
}
/// Fallback stub for non-Unix systems: currently always returns `None`.
#[cfg(not(unix))]
pub(crate) fn lookup(log_id: u64, offset: usize, config: &Config) -> Option<HistoryEntry> {
let _ = (log_id, offset, config);
None
}
#[cfg(unix)]
fn acquire_shared_lock_with_retry(file: &File) -> Result<()> {
for _ in 0..MAX_RETRIES {
match fs2::FileExt::try_lock_shared(file) {
Ok(()) => return Ok(()),
Err(e) if e.kind() == std::io::ErrorKind::WouldBlock => {
std::thread::sleep(RETRY_SLEEP);
}
Err(e) => return Err(e),
}
}
Err(std::io::Error::new(
std::io::ErrorKind::WouldBlock,
"could not acquire shared lock on history file after multiple attempts",
))
}
/// On Unix systems ensure the file permissions are `0o600` (rw-------). If the
/// permissions cannot be changed the error is propagated to the caller.
#[cfg(unix)]
async fn ensure_owner_only_permissions(file: &File) -> Result<()> {
let metadata = file.metadata()?;
let current_mode = metadata.permissions().mode() & 0o777;
if current_mode != 0o600 {
let mut perms = metadata.permissions();
perms.set_mode(0o600);
let perms_clone = perms.clone();
let file_clone = file.try_clone()?;
tokio::task::spawn_blocking(move || file_clone.set_permissions(perms_clone)).await??;
}
Ok(())
}
#[cfg(not(unix))]
async fn ensure_owner_only_permissions(_file: &File) -> Result<()> {
// For now, on non-Unix, simply succeed.
Ok(())
}

View File

@@ -12,6 +12,7 @@ use serde::Deserialize;
use serde::Serialize;
use uuid::Uuid;
use crate::message_history::HistoryEntry;
use crate::model_provider_info::ModelProviderInfo;
/// Submission Queue Entry - requests from user
@@ -24,7 +25,7 @@ pub struct Submission {
}
/// Submission operation
#[derive(Debug, Clone, Deserialize, Serialize)]
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
#[serde(tag = "type", rename_all = "snake_case")]
#[allow(clippy::large_enum_variant)]
#[non_exhaustive]
@@ -88,6 +89,18 @@ pub enum Op {
/// The user's decision in response to the request.
decision: ReviewDecision,
},
/// Append an entry to the persistent cross-session message history.
///
/// Note the entry is not guaranteed to be logged if the user has
/// history disabled, it matches the list of "sensitive" patterns, etc.
AddToHistory {
/// The message text to be stored.
text: String,
},
/// Request a single history entry identified by `log_id` + `offset`.
GetHistoryEntryRequest { offset: usize, log_id: u64 },
}
/// Determines how liberally commands are autoapproved by the system.
@@ -270,7 +283,7 @@ pub enum SandboxPermission {
/// User input
#[non_exhaustive]
#[derive(Debug, Clone, Deserialize, Serialize)]
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum InputItem {
Text {
@@ -340,6 +353,9 @@ pub enum EventMsg {
/// Notification that a patch application has finished.
PatchApplyEnd(PatchApplyEndEvent),
/// Response to GetHistoryEntryRequest.
GetHistoryEntryResponse(GetHistoryEntryResponseEvent),
}
// Individual event payload types matching each `EventMsg` variant.
@@ -452,6 +468,15 @@ pub struct PatchApplyEndEvent {
pub success: bool,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct GetHistoryEntryResponseEvent {
pub offset: usize,
pub log_id: u64,
/// The entry at the requested offset, if available and parseable.
#[serde(skip_serializing_if = "Option::is_none")]
pub entry: Option<HistoryEntry>,
}
#[derive(Debug, Default, Clone, Deserialize, Serialize)]
pub struct SessionConfiguredEvent {
/// Unique id for this session.
@@ -459,10 +484,16 @@ pub struct SessionConfiguredEvent {
/// Tell the client what model is being queried.
pub model: String,
/// Identifier of the history log file (inode on Unix, 0 otherwise).
pub history_log_id: u64,
/// Current number of entries in the history log.
pub history_entry_count: usize,
}
/// User's decision in response to an ExecApprovalRequest.
#[derive(Debug, Default, Clone, Copy, Deserialize, Serialize)]
#[derive(Debug, Default, Clone, Copy, Deserialize, Serialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum ReviewDecision {
/// User has approved this command and the agent should execute it.
@@ -519,12 +550,14 @@ mod tests {
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
session_id,
model: "o4-mini".to_string(),
history_log_id: 0,
history_entry_count: 0,
}),
};
let serialized = serde_json::to_string(&event).unwrap();
assert_eq!(
serialized,
r#"{"id":"1234","msg":{"type":"session_configured","session_id":"67e55044-10b1-426f-9247-bb680e5fe0c8","model":"o4-mini"}}"#
r#"{"id":"1234","msg":{"type":"session_configured","session_id":"67e55044-10b1-426f-9247-bb680e5fe0c8","model":"o4-mini","history_log_id":0,"history_entry_count":0}}"#
);
}
}

View File

@@ -375,9 +375,17 @@ impl EventProcessor {
println!("thinking: {}", agent_reasoning_event.text);
}
EventMsg::SessionConfigured(session_configured_event) => {
let SessionConfiguredEvent { session_id, model } = session_configured_event;
let SessionConfiguredEvent {
session_id,
model,
history_log_id: _,
history_entry_count: _,
} = session_configured_event;
println!("session {session_id} with model {model}");
}
EventMsg::GetHistoryEntryResponse(_) => {
// Currently ignored in exec output.
}
}
}
}

View File

@@ -166,7 +166,8 @@ pub async fn run_codex_tool_session(
| EventMsg::ExecCommandEnd(_)
| EventMsg::BackgroundEvent(_)
| EventMsg::PatchApplyBegin(_)
| EventMsg::PatchApplyEnd(_) => {
| EventMsg::PatchApplyEnd(_)
| EventMsg::GetHistoryEntryResponse(_) => {
// For now, we do not do anything extra for these
// events. Note that
// send(codex_event_to_notification(&event)) above has

View File

@@ -13,11 +13,12 @@ use tui_textarea::Input;
use tui_textarea::Key;
use tui_textarea::TextArea;
use super::chat_composer_history::ChatComposerHistory;
use super::command_popup::CommandPopup;
use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
use super::command_popup::CommandPopup;
/// Minimum number of visible text rows inside the textarea.
const MIN_TEXTAREA_ROWS: usize = 1;
/// Rows consumed by the border.
@@ -33,6 +34,7 @@ pub(crate) struct ChatComposer<'a> {
textarea: TextArea<'a>,
command_popup: Option<CommandPopup>,
app_event_tx: AppEventSender,
history: ChatComposerHistory,
}
impl ChatComposer<'_> {
@@ -45,11 +47,31 @@ impl ChatComposer<'_> {
textarea,
command_popup: None,
app_event_tx,
history: ChatComposerHistory::new(),
};
this.update_border(has_input_focus);
this
}
/// Record the history metadata advertised by `SessionConfiguredEvent` so
/// that the composer can navigate cross-session history.
pub(crate) fn set_history_metadata(&mut self, log_id: u64, entry_count: usize) {
self.history.set_metadata(log_id, entry_count);
}
/// Integrate an asynchronous response to an on-demand history lookup. If
/// the entry is present and the offset matches the current cursor we
/// immediately populate the textarea.
pub(crate) fn on_history_entry_response(
&mut self,
log_id: u64,
offset: usize,
entry: Option<String>,
) -> bool {
self.history
.on_entry_response(log_id, offset, entry, &mut self.textarea)
}
pub fn set_input_focus(&mut self, has_focus: bool) {
self.update_border(has_focus);
}
@@ -133,6 +155,33 @@ impl ChatComposer<'_> {
fn handle_key_event_without_popup(&mut self, key_event: KeyEvent) -> (InputResult, bool) {
let input: Input = key_event.into();
match input {
// -------------------------------------------------------------
// History navigation (Up / Down) only when the composer is not
// empty or when the cursor is at the correct position, to avoid
// interfering with normal cursor movement.
// -------------------------------------------------------------
Input { key: Key::Up, .. } => {
if self.history.should_handle_navigation(&self.textarea) {
let consumed = self
.history
.navigate_up(&mut self.textarea, &self.app_event_tx);
if consumed {
return (InputResult::None, true);
}
}
self.handle_input_basic(input)
}
Input { key: Key::Down, .. } => {
if self.history.should_handle_navigation(&self.textarea) {
let consumed = self
.history
.navigate_down(&mut self.textarea, &self.app_event_tx);
if consumed {
return (InputResult::None, true);
}
}
self.handle_input_basic(input)
}
Input {
key: Key::Enter,
shift: false,
@@ -142,7 +191,13 @@ impl ChatComposer<'_> {
let text = self.textarea.lines().join("\n");
self.textarea.select_all();
self.textarea.cut();
(InputResult::Submitted(text), true)
if text.is_empty() {
(InputResult::None, true)
} else {
self.history.record_local_submission(&text);
(InputResult::Submitted(text), true)
}
}
Input {
key: Key::Enter, ..

View File

@@ -0,0 +1,263 @@
use std::collections::HashMap;
use tui_textarea::CursorMove;
use tui_textarea::TextArea;
use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
use codex_core::protocol::Op;
/// State machine that manages shell-style history navigation (Up/Down) inside
/// the chat composer. This struct is intentionally decoupled from the
/// rendering widget so the logic remains isolated and easier to test.
pub(crate) struct ChatComposerHistory {
/// Identifier of the history log as reported by `SessionConfiguredEvent`.
history_log_id: Option<u64>,
/// Number of entries already present in the persistent cross-session
/// history file when the session started.
history_entry_count: usize,
/// Messages submitted by the user *during this UI session* (newest at END).
local_history: Vec<String>,
/// Cache of persistent history entries fetched on-demand.
fetched_history: HashMap<usize, String>,
/// Current cursor within the combined (persistent + local) history. `None`
/// indicates the user is *not* currently browsing history.
history_cursor: Option<isize>,
/// The text that was last inserted into the composer as a result of
/// history navigation. Used to decide if further Up/Down presses should be
/// treated as navigation versus normal cursor movement.
last_history_text: Option<String>,
}
impl ChatComposerHistory {
pub fn new() -> Self {
Self {
history_log_id: None,
history_entry_count: 0,
local_history: Vec::new(),
fetched_history: HashMap::new(),
history_cursor: None,
last_history_text: None,
}
}
/// Update metadata when a new session is configured.
pub fn set_metadata(&mut self, log_id: u64, entry_count: usize) {
self.history_log_id = Some(log_id);
self.history_entry_count = entry_count;
self.fetched_history.clear();
self.local_history.clear();
self.history_cursor = None;
self.last_history_text = None;
}
/// Record a message submitted by the user in the current session so it can
/// be recalled later.
pub fn record_local_submission(&mut self, text: &str) {
if !text.is_empty() {
self.local_history.push(text.to_string());
self.history_cursor = None;
self.last_history_text = None;
}
}
/// Should Up/Down key presses be interpreted as history navigation given
/// the current content and cursor position of `textarea`?
pub fn should_handle_navigation(&self, textarea: &TextArea) -> bool {
if self.history_entry_count == 0 && self.local_history.is_empty() {
return false;
}
let lines = textarea.lines();
if lines.len() == 1 && lines[0].is_empty() {
return true;
}
// Textarea is not empty only navigate when cursor is at start and
// text matches last recalled history entry so regular editing is not
// hijacked.
let (row, col) = textarea.cursor();
if row != 0 || col != 0 {
return false;
}
matches!(&self.last_history_text, Some(prev) if prev == &lines.join("\n"))
}
/// Handle <Up>. Returns true when the key was consumed and the caller
/// should request a redraw.
pub fn navigate_up(&mut self, textarea: &mut TextArea, app_event_tx: &AppEventSender) -> bool {
let total_entries = self.history_entry_count + self.local_history.len();
if total_entries == 0 {
return false;
}
let next_idx = match self.history_cursor {
None => (total_entries as isize) - 1,
Some(0) => return true, // already at oldest
Some(idx) => idx - 1,
};
self.history_cursor = Some(next_idx);
self.populate_history_at_index(next_idx as usize, textarea, app_event_tx);
true
}
/// Handle <Down>.
pub fn navigate_down(
&mut self,
textarea: &mut TextArea,
app_event_tx: &AppEventSender,
) -> bool {
let total_entries = self.history_entry_count + self.local_history.len();
if total_entries == 0 {
return false;
}
let next_idx_opt = match self.history_cursor {
None => return false, // not browsing
Some(idx) if (idx as usize) + 1 >= total_entries => None,
Some(idx) => Some(idx + 1),
};
match next_idx_opt {
Some(idx) => {
self.history_cursor = Some(idx);
self.populate_history_at_index(idx as usize, textarea, app_event_tx);
}
None => {
// Past newest clear and exit browsing mode.
self.history_cursor = None;
self.last_history_text = None;
self.replace_textarea_content(textarea, "");
}
}
true
}
/// Integrate a GetHistoryEntryResponse event.
pub fn on_entry_response(
&mut self,
log_id: u64,
offset: usize,
entry: Option<String>,
textarea: &mut TextArea,
) -> bool {
if self.history_log_id != Some(log_id) {
return false;
}
let Some(text) = entry else { return false };
self.fetched_history.insert(offset, text.clone());
if self.history_cursor == Some(offset as isize) {
self.replace_textarea_content(textarea, &text);
return true;
}
false
}
// ---------------------------------------------------------------------
// Internal helpers
// ---------------------------------------------------------------------
fn populate_history_at_index(
&mut self,
global_idx: usize,
textarea: &mut TextArea,
app_event_tx: &AppEventSender,
) {
if global_idx >= self.history_entry_count {
// Local entry.
if let Some(text) = self
.local_history
.get(global_idx - self.history_entry_count)
{
let t = text.clone();
self.replace_textarea_content(textarea, &t);
}
} else if let Some(text) = self.fetched_history.get(&global_idx) {
let t = text.clone();
self.replace_textarea_content(textarea, &t);
} else if let Some(log_id) = self.history_log_id {
let op = Op::GetHistoryEntryRequest {
offset: global_idx,
log_id,
};
app_event_tx.send(AppEvent::CodexOp(op));
}
}
fn replace_textarea_content(&mut self, textarea: &mut TextArea, text: &str) {
textarea.select_all();
textarea.cut();
let _ = textarea.insert_str(text);
textarea.move_cursor(CursorMove::Jump(0, 0));
self.last_history_text = Some(text.to_string());
}
}
#[cfg(test)]
mod tests {
#![expect(clippy::expect_used)]
use super::*;
use crate::app_event::AppEvent;
use codex_core::protocol::Op;
use std::sync::mpsc::channel;
#[test]
fn navigation_with_async_fetch() {
let (tx, rx) = channel::<AppEvent>();
let tx = AppEventSender::new(tx);
let mut history = ChatComposerHistory::new();
// Pretend there are 3 persistent entries.
history.set_metadata(1, 3);
let mut textarea = TextArea::default();
// First Up should request offset 2 (latest) and await async data.
assert!(history.should_handle_navigation(&textarea));
assert!(history.navigate_up(&mut textarea, &tx));
// Verify that an AppEvent::CodexOp with the correct GetHistoryEntryRequest was sent.
let event = rx.try_recv().expect("expected AppEvent to be sent");
let AppEvent::CodexOp(history_request1) = event else {
panic!("unexpected event variant");
};
assert_eq!(
Op::GetHistoryEntryRequest {
log_id: 1,
offset: 2
},
history_request1
);
assert_eq!(textarea.lines().join("\n"), ""); // still empty
// Inject the async response.
assert!(history.on_entry_response(1, 2, Some("latest".into()), &mut textarea));
assert_eq!(textarea.lines().join("\n"), "latest");
// Next Up should move to offset 1.
assert!(history.navigate_up(&mut textarea, &tx));
// Verify second CodexOp event for offset 1.
let event2 = rx.try_recv().expect("expected second event");
let AppEvent::CodexOp(history_request_2) = event2 else {
panic!("unexpected event variant");
};
assert_eq!(
Op::GetHistoryEntryRequest {
log_id: 1,
offset: 1
},
history_request_2
);
history.on_entry_response(1, 1, Some("older".into()), &mut textarea);
assert_eq!(textarea.lines().join("\n"), "older");
}
}

View File

@@ -14,6 +14,7 @@ use crate::user_approval_widget::ApprovalRequest;
mod approval_modal_view;
mod bottom_pane_view;
mod chat_composer;
mod chat_composer_history;
mod command_popup;
mod status_indicator_view;
@@ -165,6 +166,27 @@ impl BottomPane<'_> {
pub(crate) fn is_command_popup_visible(&self) -> bool {
self.active_view.is_none() && self.composer.is_command_popup_visible()
}
// --- History helpers ---
pub(crate) fn set_history_metadata(&mut self, log_id: u64, entry_count: usize) {
self.composer.set_history_metadata(log_id, entry_count);
}
pub(crate) fn on_history_entry_response(
&mut self,
log_id: u64,
offset: usize,
entry: Option<String>,
) {
let updated = self
.composer
.on_history_entry_response(log_id, offset, entry);
if updated {
self.request_redraw();
}
}
}
impl WidgetRef for &BottomPane<'_> {

View File

@@ -173,6 +173,15 @@ impl ChatWidget<'_> {
tracing::error!("failed to send message: {e}");
});
// Persist the text to cross-session message history.
if !text.is_empty() {
self.codex_op_tx
.send(Op::AddToHistory { text: text.clone() })
.unwrap_or_else(|e| {
tracing::error!("failed to send AddHistory op: {e}");
});
}
// Only show text portion in conversation history for now.
if !text.is_empty() {
self.conversation_history.add_user_message(text);
@@ -191,7 +200,12 @@ impl ChatWidget<'_> {
EventMsg::SessionConfigured(event) => {
// Record session information at the top of the conversation.
self.conversation_history
.add_session_info(&self.config, event);
.add_session_info(&self.config, event.clone());
// Forward history metadata to the bottom pane so the chat
// composer can navigate through past messages.
self.bottom_pane
.set_history_metadata(event.history_log_id, event.history_entry_count);
self.request_redraw();
}
EventMsg::AgentMessage(AgentMessageEvent { message }) => {
@@ -309,6 +323,17 @@ impl ChatWidget<'_> {
.record_completed_mcp_tool_call(call_id, success, result);
self.request_redraw();
}
EventMsg::GetHistoryEntryResponse(event) => {
let codex_core::protocol::GetHistoryEntryResponseEvent {
offset,
log_id,
entry,
} = event;
// Inform bottom pane / composer.
self.bottom_pane
.on_history_entry_response(log_id, offset, entry.map(|e| e.text));
}
event => {
self.conversation_history
.add_background_event(format!("{event:?}"));

View File

@@ -100,7 +100,12 @@ impl HistoryCell {
event: SessionConfiguredEvent,
is_first_event: bool,
) -> Self {
let SessionConfiguredEvent { model, session_id } = event;
let SessionConfiguredEvent {
model,
session_id,
history_log_id: _,
history_entry_count: _,
} = event;
if is_first_event {
let mut lines: Vec<Line<'static>> = vec![
Line::from(vec![