Codex Training Hub

Preview the Codex onboarding tasks without joining a room.


Tasks

Preview mode: progress is saved locally and not synced to a room.

  • Install the Codex CLI locally before you dive into the workshop.

    1. Choose an install method from the official docs; the common commands are sketched below.

    Install Codex CLI screenshot
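
    A minimal sketch of the two common install paths (verify the exact commands against the official install docs for your platform):

      # via npm (requires a recent Node.js)
      npm install -g @openai/codex
      # via Homebrew on macOS
      brew install codex

    Open a new terminal afterwards if the codex command isn't found right away.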

  • Codex on Windows is still early; follow the official guide at https://developers.openai.com/codex/windows.

    1. Native Windows (quick start)
      • Install the Codex CLI and run it from PowerShell.
      • Agent mode works natively and uses an experimental sandbox to restrict files/network. It can’t block writes in folders where Everyone already has write access.
    2. WSL2 (recommended)
      • Follow the WSL2 setup steps in the official guide linked above, then install the CLI inside your distro (see the sketch below).
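
    If you go the WSL2 route, installing the CLI inside the distro mirrors a normal Linux install; a minimal sketch, assuming Node.js and npm are already available in the distro:

      # Run these inside the WSL2 shell, not PowerShell
      npm install -g @openai/codex
      codex --version   # confirm the CLI resolves inside WSL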
  • Confirm you are on the expected release before continuing.

    1. Run codex --version; expect at least codex-cli 0.53.0 (baseline when this workshop was written—update if you're behind).

    Verify Codex version screenshot
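
    For reference, the check and the kind of output to expect (version string taken from the workshop baseline; yours may be newer):

      codex --version
      # codex-cli 0.53.0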

  • Make sure Codex is signed in with the same ChatGPT account you use on the web.

    1. Run codex login.
    2. Your browser opens to complete sign-in; approve and return to the terminal once it confirms success.
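
    The browser flow is the default; if you need a headless alternative (for example on a remote machine), recent CLI versions document an API-key login, but treat the flag below as an assumption and check codex login --help first:

      codex login                      # opens the browser to sign in with ChatGPT
      codex login --api-key "<key>"    # assumed alternative for API-key auth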
  • Pull down the sample repository you'll use throughout the exercises.

    1. Clone the repo with git clone https://github.com/openai/agents.md.git agents-md-repository (or download the ZIP from https://github.com/openai/agents.md and rename the folder to agents-md-repository).
    2. Open a separate terminal for the dev server and launch it:
      cd agents-md-repository
      npm install
      npm run dev
      
    3. Leave that terminal running; you'll use a different shell window for Codex commands.

    Get reference repo screenshot

  • Tune Codex down to low reasoning effort before prompting; you'll switch back to the default when you finish testing.

    1. Run codex, type /model, highlight gpt-5.1-codex-max, press Enter, then select Low reasoning effort.

    CLI low reasoning selection
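
    If you prefer setting this at launch instead of via /model, the CLI also accepts a model flag and config overrides; a sketch, assuming the -m/--model flag and -c key=value overrides in recent versions (check codex --help):

      codex -m gpt-5.1-codex-max -c model_reasoning_effort="low"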

  • Practice asking Codex for analysis without editing files.

    1. With the low reasoning model active, ask: Explain what this repository is doing in short.

    Explain repo in short screenshot
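
    Read-only questions like this can also be run without an interactive session; a sketch, assuming the exec subcommand available in recent CLI versions:

      codex exec "Explain what this repository is doing in short."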

  • Return to the recommended default model before continuing.

    1. Start or resume a session, run /model, pick gpt-5.1-codex-max, and confirm Medium reasoning (the recommended balance of depth and quality).
    2. Keep this as the default so the follow-up tasks use the recommended model.
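
    To make the default survive new sessions, the choice can be persisted in ~/.codex/config.toml; a sketch, assuming the model and model_reasoning_effort keys from the config docs (edit the file instead of appending if the keys already exist):

      printf '%s\n' 'model = "gpt-5.1-codex-max"' 'model_reasoning_effort = "medium"' >> ~/.codex/config.toml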
  • Run the following prompt to generate a visualization asset for the repo.

    1. Start a Codex session.
    2. Send:
      Create a small HTML page in assets.html that shows how the files are related to each other. I want a nice-looking assets.html that explains those concepts.
      
    3. Review the proposed asset to ensure the relationships are accurate before applying the patch.

    Assets HTML generation screenshot

  • Have Codex modify the project by applying a patch.

    1. Start a new task and send:
      Can you implement, next to the "Explore Examples" and "View on GitHub" buttons, a button to download the page as Markdown and another one to copy it?
      
    2. Review the diff Codex proposes before applying.

    Advanced prompt editing screenshot

  • Practice attaching visuals and following up with styling tweaks.

    1. Capture a screenshot of the Agents.md textarea (top-right of the app).
    2. Paste the image directly (Ctrl+V on macOS or Ctrl+Shift+V on Windows) or reference it with @agents-textarea.png, then add: Make the inline code chips' background orange.
    3. Verify Codex updates the UI styling based on the screenshot.

    Image prompt screenshot Orange background chips screenshot

  • Learn how to jump back into an existing thread.

    1. Run codex resume.
    2. Search for the task titled “chip code/background orange”.
    3. Reopen it and ask, “actually make that green”.
    4. Confirm Codex replays the context and updates the color.

    Note: you can run codex resume <session_id> directly at the end of a session, as shown when you exit the CLI.

    Resume session screenshot

  • Use the Cupcake MCP sample so you can pull product requirements on demand.

    Review the Model Context Protocol overview: https://developers.openai.com/codex/mcp. The Cupcake MCP server is hosted at https://codex-102.vercel.app/mcp.

    1. Register the server:
      codex mcp add cupcakemcp --url https://codex-102.vercel.app/mcp
      
    2. Verify registration:
      codex mcp list
      codex mcp get cupcakemcp
      
      You should see the cupcakemcp entry in both commands.
    3. Spot-check the list output for the expected entry.

    Cupcake MCP list screenshot Cupcake MCP command screenshot

  • Verify Codex can reach Cupcake through the MCP server before you rely on it.

    1. In a Codex chat, run /mcp to confirm the server connection is healthy.
    2. If it fails, re-check the config.toml entry or the CLI flags from https://developers.openai.com/codex/mcp.
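
    A quick troubleshooting loop from the shell; the remove subcommand is an assumption, so check codex mcp --help before relying on it:

      codex mcp get cupcakemcp      # inspect what the CLI has stored for the server
      codex mcp remove cupcakemcp   # assumed subcommand: drop a broken entry
      codex mcp add cupcakemcp --url https://codex-102.vercel.app/mcp   # re-register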
  • Test the MCP integration by pulling product requirements before coding.

    1. Start a fresh session: run codex and create a new chat.
    2. Send:
      Add a section, unrelated to agents.md, that fetches Rachel's cupcake order and shows it at the bottom of this webpage
      
    3. Let Codex fetch the Cupcake records via MCP (Rachel's order is at records.json#L260).

    Cupcake MCP search screenshot Cupcake MCP result screenshot

  • Make Codex adhere to our standards by leveraging AGENTS.md.

    Codex reads this file and adds its guidance to the session context.

    1. Append the note:
      echo "When implementing new features or product requirements, fetch the latest context from available MCP servers (e.g. Cupcake orders) before coding." >> agents.md
      
    2. Save the changes.

    Agents guidance picked up screenshot

  • Make the new guidance available in every project.

    1. Move the appended note into ~/.codex/AGENTS.md, creating the directory if it doesn't exist (a shell sketch follows this list).
    2. Clean up the project copy so only the global file keeps the note.
    3. Start a new Codex chat and run the prompt "In which context would you use the MCPs we have access to?" to confirm the global guidance sticks.
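
    One way to do steps 1 and 2 from the shell, assuming the note from the previous task is the only local change to the project's AGENTS.md:

      mkdir -p ~/.codex
      echo "When implementing new features or product requirements, fetch the latest context from available MCP servers (e.g. Cupcake orders) before coding." >> ~/.codex/AGENTS.md
      git checkout -- AGENTS.md   # drop the project-level copy of the note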
  • Automate test generation prompts for future sessions.

    Add tests command screenshot

    1. Save your standard ask:
      1. Create a directory at ~/.codex/prompts to store saved commands, if it does not already exist (a shell sketch follows this list).
      2. Create ~/.codex/prompts/add-tests.md containing:
        Generate unit tests for the touched files. Use the project’s existing test runner and conventions. Keep diffs minimal and runnable locally.
        
    2. Resume the “Latest News” task (use codex resume and type to search) and run /add-tests to invoke the new command.
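
    A shell sketch for step 1, writing the saved prompt in one go:

      mkdir -p ~/.codex/prompts
      printf '%s\n' "Generate unit tests for the touched files. Use the project’s existing test runner and conventions. Keep diffs minimal and runnable locally." > ~/.codex/prompts/add-tests.md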
  • Practice adding local context to Codex chats.

    1. Type @ and choose a file to attach, or reference it directly, such as @src/app/page.tsx. Codex CLI attach file list screenshot
    2. Rely on @file references whenever Codex needs specific files as context.
  • Use /review to inspect a small regression you introduce on purpose.

    1. In your repo, open pages/index.tsx and scroll to the getStaticProps loop that builds contributorsByRepo. Inside that loop, right before the contributorsByRepo[fullName] = { avatars, total } assignment, add one line:
      total++;
      
      Leave everything else unchanged so the counter quietly creeps up by one.
    2. Start a fresh Codex CLI session in the repo and run:
      /review
      
      Choose Review against a base branch and select main. Codex CLI review preset screenshot
    3. Let Codex run the review and read the findings that flag the inflated total. Codex CLI review findings screenshot
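
    To double-check the planted bug before the review and clean it up afterwards, plain git is enough (assuming you made no other edits to the file):

      git diff pages/index.tsx        # should show only the extra total++; line
      git checkout -- pages/index.tsx # revert the deliberate regression once the review is done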
  • Close out by skimming the official guidance.

    1. Read https://developers.openai.com/codex/prompting/ for Codex-specific prompting tips.
    2. Be precise, avoid conflicting instructions, and match reasoning effort to task complexity (use <self_reflection> when helpful).
    3. Use XML-like sectioning for multi-part asks and keep language cooperative.
    4. Remember layered AGENTS.md files:
      • Global (~/.codex/AGENTS.md) for personal defaults or safety constraints.
      • Project (AGENTS.md) for repo-specific guidance.
      • Directory (AGENTS.md) for local overrides.
    5. Capture any personal defaults you want in your own AGENTS.md files.
  • Bookmark the go-to references for deeper Codex work.

    1. Open the product hub at https://developers.openai.com/codex for docs, demos, and release notes.
    2. Explore the Codex SDK: https://developers.openai.com/codex/sdk.
    3. Review the GitHub Action wiring guide: https://developers.openai.com/codex/sdk#github-action.
    4. Track updates via the changelog: https://developers.openai.com/codex/changelog.