
"The agent takes input from the user and prepares a textual prompt for the model. The model then generates a response, which either produces a final answer for the user or requests a tool call (such as running a shell command or reading a file). If the model requests a tool call, the agent executes it, appends the output to the original prompt, and queries the model again. This process repeats until the model stops requesting tools and instead produces an assistant message"
"That looping process has to start somewhere, and Bolin's post reveals how Codex constructs the initial prompt sent to OpenAI's Responses API, which handles model inference. The prompt is built from several components, each with an assigned role that determines its priority: system, developer, user, or assistant. The instructions field comes from either a user-specified configuration file or base instructions bundled with the CLI. The tools field defines what functions the model can call, including shell commands, planning tools, web search capabilities,"
OpenAI and Anthropic open-source their coding CLI clients on GitHub, so developers can inspect the implementations directly, while ChatGPT and the Claude web interface remain closed. A coding agent operates as a repeating loop that prepares prompts, receives model responses, and executes tool calls when requested; tool outputs are appended and the model is queried again until a final assistant message is produced. Codex constructs the initial prompt for the Responses API from components with assigned roles (system, developer, user, and assistant) that determine their priority. The prompt's fields include instructions (from a configuration file or bundled defaults), tools (shell, planning, web search, and custom MCP servers), and input (sandbox permissions, optional developer instructions, environment context, and the user's message).
Read at Ars Technica