# llmx/codex-cli/examples/build-codex-demo/task.yaml

name: "build-codex-demo"
description: |
I want you to reimplement the original OpenAI Codex demo.
Functionality:
- User types a prompt and hits enter to send
- The prompt is added to the conversation history
- The backend calls the OpenAI API with stream: true
- Tokens are streamed back and appended to the code viewer
- Syntax highlighting updates in real time
- When a full HTML file is received, it is rendered in a sandboxed iframe
- The iframe replaces the previous preview with the new HTML after the stream is complete (i.e. keep the old preview until a new stream is complete)
- Append each assistant and user message to preserve context across turns
- Errors are displayed to user gracefully
- Ensure the fixed layout is responsive and faithful to the screenshot design
- Be sure to parse the output from the OpenAI call to strip the ```html fences the code is returned within (see the sketch after this list)
- Use the system prompt shared in the API call below to ensure the AI only returns HTML
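As a rough sketch of the fence stripping (the helper name is illustrative, not required):
```
// Illustrative helper: strip the markdown fences the model wraps its HTML in,
// i.e. a leading ```html (or ```) line and a trailing ``` line.
function stripHtmlFences(text) {
  let out = text.trim();
  out = out.replace(/^```(?:html)?\s*\n?/i, ""); // opening fence
  out = out.replace(/\n?```\s*$/, "");           // closing fence
  return out.trim();
}
```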
Support a simple local backend that can:
- Read local env for OPENAI_API_KEY
- Expose an endpoint that streams completions from OpenAI
- The backend should be a simple Node.js app (a minimal backend sketch follows this list)
- App should be easy to run locally for development and testing
- Minimal setup preferred — keep dependencies light unless justified
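A minimal sketch of the backend shape, assuming the official openai npm package; the /api/generate endpoint name and port are illustrative:
```
// server.js (sketch): reads OPENAI_API_KEY from the environment and streams
// text deltas back to the frontend as plain text. Static file serving omitted.
import http from "node:http";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const server = http.createServer(async (req, res) => {
  if (req.method === "POST" && req.url === "/api/generate") {
    // Collect the JSON body: the running conversation (system + prior turns + new prompt).
    let body = "";
    for await (const chunk of req) body += chunk;
    const { input } = JSON.parse(body);

    res.writeHead(200, { "Content-Type": "text/plain; charset=utf-8" });
    try {
      // Forward each streamed text delta to the client as it arrives.
      const stream = await openai.responses.create({ model: "gpt-4.1", input, stream: true });
      for await (const event of stream) {
        if (event.type === "response.output_text.delta") res.write(event.delta);
      }
    } catch (err) {
      // Surface errors so the frontend can display them gracefully.
      res.write(`\n[error] ${err.message}`);
    }
    res.end();
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(3000, () => console.log("Listening on http://localhost:3000"));
```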
Description of layout and design:
- Two stacked panels, vertically aligned:
  - Top Panel: Main interactive area with two main parts
    - Left Side: Visual output canvas. Mostly blank space with a small image preview in the upper-left
    - Right Side: Code display area
      - Light background with code shown in a monospace font
      - Comments in green; code aligns vertically like an IDE/snippet view
  - Bottom Panel: Prompt/command bar
    - A single-line text box with a placeholder prompt
    - A green arrow (submit button) on the right side
- Scrolling should only be supported in the code editor and output canvas
Visual style:
- Minimalist UI, light and clean
- Neutral white/gray background
- Subtle shadow or border around both panels, giving them card-like elevation
- Code section is color-coded, likely for syntax highlighting
- Interactive feel with the text input styled like a chat/message interface
Here's the latest OpenAI API and prompt to use:
```
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await openai.responses.create({
  model: "gpt-4.1",
  input: [
    {
      "role": "system",
      "content": [
        {
          "type": "input_text",
          "text": "You are a coding agent that specializes in frontend code. Whenever you are prompted, return only the full HTML file."
        }
      ]
    }
  ],
  text: {
    "format": {
      "type": "text"
    }
  },
  reasoning: {},
  tools: [],
  temperature: 1,
  top_p: 1
});

console.log(response.output_text);
```
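For the streaming requirement, the same call can be made with stream: true and the output_text deltas consumed as they arrive. A rough sketch:
```
import OpenAI from "openai";
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const stream = await openai.responses.create({
  model: "gpt-4.1",
  input: [
    {
      role: "system",
      content: [
        {
          type: "input_text",
          text: "You are a coding agent that specializes in frontend code. Whenever you are prompted, return only the full HTML file."
        }
      ]
    }
    // ...append previous user/assistant turns and the new user prompt here...
  ],
  stream: true,
});

// Each event is typed; the generated text arrives as output_text deltas.
for await (const event of stream) {
  if (event.type === "response.output_text.delta") {
    process.stdout.write(event.delta);
  }
}
```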
Additional things to note:
- Strip any ```html and ``` tags from the OpenAI response before rendering
- Assume the OpenAI API model response always wraps HTML in markdown-style triple backticks like ```html <code> ```
- The display code window should have syntax highlighting and line numbers.
- Make sure to only display the code, not the backticks or ```html that wrap the code from the model.
- Do not inject raw markdown; only parse and insert pure HTML into the iframe
- Only the code viewer and output panel should scroll
- Keep the previous preview visible until the full new HTML has streamed in (a client-side sketch follows this list)
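A rough client-side sketch of the stream handling and preview swap, assuming the backend streams plain text from the illustrative /api/generate endpoint and reusing the fence-stripping helper sketched earlier:
```
async function runPrompt(conversation, codeViewer, previewFrame) {
  const res = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: conversation }),
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    full += decoder.decode(value, { stream: true });
    // Update the code viewer (and re-run syntax highlighting) as tokens arrive.
    codeViewer.textContent = stripHtmlFences(full);
  }
  // Only after the stream completes: swap the sandboxed iframe's content, so the
  // previous preview stays visible while the new response is still streaming.
  previewFrame.srcdoc = stripHtmlFences(full);
}
```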
Add a README.md with what you've implemented and how to run it.