feat: add --dangerously-bypass-approvals-and-sandbox (#1384)

This PR reworks `assess_command_safety()` so that the combination of
`AskForApproval::Never` and `SandboxPolicy::DangerFullAccess` ensures
that commands run without _any_ sandbox and that the user is never
prompted. In turn, it adds support for a new
`--dangerously-bypass-approvals-and-sandbox` flag (which cannot be
combined with `--approval-policy` or `--full-auto`) that sets both of
those options.

Fixes https://github.com/openai/codex/issues/1254
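
The behavior described above can be sketched as follows. This is a hypothetical simplification, not the actual `assess_command_safety()` from the codex source: the enum variants other than `AskForApproval::Never` and `SandboxPolicy::DangerFullAccess`, the `SafetyCheck` result type, and the exact match arms are illustrative assumptions.

```rust
// Hypothetical sketch of the approval/sandbox resolution this PR describes.
// Variant names other than Never and DangerFullAccess are illustrative.

#[derive(Debug, Clone, Copy, PartialEq)]
enum AskForApproval {
    UnlessAllowListed,
    OnFailure,
    Never,
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum SandboxPolicy {
    ReadOnly,
    WorkspaceWrite,
    DangerFullAccess,
}

#[derive(Debug, PartialEq)]
enum SafetyCheck {
    AutoApprove { sandboxed: bool },
    AskUser,
}

fn assess_command_safety(approval: AskForApproval, sandbox: SandboxPolicy) -> SafetyCheck {
    match (approval, sandbox) {
        // The combination this PR adds: no sandbox, and never prompt the user.
        (AskForApproval::Never, SandboxPolicy::DangerFullAccess) => {
            SafetyCheck::AutoApprove { sandboxed: false }
        }
        // Restricted policies can auto-run inside the sandbox...
        (_, SandboxPolicy::ReadOnly | SandboxPolicy::WorkspaceWrite) => {
            SafetyCheck::AutoApprove { sandboxed: true }
        }
        // ...anything else falls back to asking the user.
        _ => SafetyCheck::AskUser,
    }
}

fn main() {
    assert_eq!(
        assess_command_safety(AskForApproval::Never, SandboxPolicy::DangerFullAccess),
        SafetyCheck::AutoApprove { sandboxed: false }
    );
    assert_eq!(
        assess_command_safety(AskForApproval::OnFailure, SandboxPolicy::DangerFullAccess),
        SafetyCheck::AskUser
    );
    println!("ok");
}
```

Modeling the decision as a single match over the pair makes the "no sandbox, no prompt" case an explicit arm rather than a property derived from the sandbox policy alone, which is exactly why the `is_unrestricted()` helper removed in the diff below had become misleading.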
Author: Michael Bolin
Date: 2025-06-25 12:36:10 -07:00
Committed by: GitHub
Parent: 72082164c1
Commit: 50924101d2
7 changed files with 91 additions and 33 deletions

```diff
@@ -231,12 +231,6 @@ impl SandboxPolicy {
             }
         }
     }
-
-    // TODO(mbolin): This conflates sandbox policy and approval policy and
-    // should go away.
-    pub fn is_unrestricted(&self) -> bool {
-        matches!(self, SandboxPolicy::DangerFullAccess)
-    }
 }
 
 /// User input
```