feat: add --dangerously-bypass-approvals-and-sandbox (#1384)
This PR reworks `assess_command_safety()` so that the combination of `AskForApproval::Never` and `SandboxPolicy::DangerFullAccess` ensures that commands run without _any_ sandbox and that the user is never prompted. In turn, it adds support for a new `--dangerously-bypass-approvals-and-sandbox` flag (which cannot be used with `--approval-policy` or `--full-auto`) that sets both of those options. Fixes https://github.com/openai/codex/issues/1254
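The described behavior of `assess_command_safety()` can be sketched roughly as follows. This is a minimal illustration, not the actual codex implementation: the enum variants `AskForApproval::Never` and `SandboxPolicy::DangerFullAccess` come from the PR text, while the other variants, the `SafetyCheck` return type, and the function signature are assumed for the sketch.

```rust
// Hypothetical sketch of the approval/sandbox combination this PR introduces.
// Only Never + DangerFullAccess bypasses both the sandbox and the prompt.

#[derive(Debug, Clone, Copy, PartialEq)]
enum AskForApproval {
    UnlessAllowListed, // assumed variant for illustration
    Never,
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum SandboxPolicy {
    ReadOnly, // assumed variant for illustration
    DangerFullAccess,
}

#[derive(Debug, PartialEq)]
enum SafetyCheck {
    AutoApprove { sandboxed: bool },
    AskUser,
}

fn assess_command_safety(approval: AskForApproval, sandbox: SandboxPolicy) -> SafetyCheck {
    match (approval, sandbox) {
        // The combination this PR special-cases: run with no sandbox at all,
        // and never prompt the user.
        (AskForApproval::Never, SandboxPolicy::DangerFullAccess) => {
            SafetyCheck::AutoApprove { sandboxed: false }
        }
        // Never prompt, but a restrictive sandbox policy still applies.
        (AskForApproval::Never, _) => SafetyCheck::AutoApprove { sandboxed: true },
        // Otherwise fall back to asking the user.
        _ => SafetyCheck::AskUser,
    }
}

fn main() {
    let check = assess_command_safety(AskForApproval::Never, SandboxPolicy::DangerFullAccess);
    assert_eq!(check, SafetyCheck::AutoApprove { sandboxed: false });
    println!("{check:?}");
}
```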
@@ -231,12 +231,6 @@ impl SandboxPolicy {
             }
         }
     }
-
-    // TODO(mbolin): This conflates sandbox policy and approval policy and
-    // should go away.
-    pub fn is_unrestricted(&self) -> bool {
-        matches!(self, SandboxPolicy::DangerFullAccess)
-    }
 }
 
 /// User input
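In the real CLI the flag exclusivity would typically be enforced by the argument parser itself (for example clap's `conflicts_with`); the sketch below just spells out the rule directly. The function name and parameter names are hypothetical, assumed for illustration only.

```rust
// Hypothetical sketch of the conflict rule: the bypass flag refuses to
// combine with --approval-policy or --full-auto.
fn validate_flags(
    dangerously_bypass: bool,
    approval_policy: Option<&str>,
    full_auto: bool,
) -> Result<(), String> {
    if dangerously_bypass && (approval_policy.is_some() || full_auto) {
        return Err(
            "--dangerously-bypass-approvals-and-sandbox cannot be used with \
             --approval-policy or --full-auto"
                .to_string(),
        );
    }
    Ok(())
}

fn main() {
    // The bypass flag alone is accepted.
    assert!(validate_flags(true, None, false).is_ok());
    // Combining it with --full-auto is rejected.
    assert!(validate_flags(true, None, true).is_err());
    // Combining it with an explicit --approval-policy is rejected.
    assert!(validate_flags(true, Some("never"), false).is_err());
    println!("flag validation ok");
}
```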