
How to Debug API Calls Made by Cursor
Cursor writes code that calls APIs. When those calls fail, you can't debug what you can't see. This tutorial shows the exact workflow for capturing, inspecting, diffing, and replaying real API traffic from Cursor-generated code.
Cursor writes code fast. The problem is that the API calls that code makes are invisible by default. When a request returns a 422, a 401, or a silent empty response, you are left reading an error message and guessing what went wrong — without seeing the actual request that caused it.
This tutorial shows the full debugging loop: capturing real API traffic from Cursor-generated code, finding the failing request, comparing it against a good one, mocking the endpoint to unblock yourself, and replaying after the fix to prove it worked.
The gap Cursor cannot fill on its own
Cursor is excellent at reading your codebase. It sees your types, your functions, your existing API calls. What it cannot see is what those calls look like on the wire — the actual headers, the real request body after middleware transforms it, the authentication token after it has been signed and encoded.
That is where most API bugs hide. The code looks right. Cursor's suggestions look right. But the request your app sends has a different Content-Type, a missing header, or a payload that does not match what the endpoint expects. You find the bug by seeing the real request — not by reading the source code.
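To make this concrete, here is a small Python sketch of how one dict can produce two different wire payloads depending on the encoding the HTTP client picks. The payload values are illustrative; the point is that the difference is invisible in source but obvious in a captured request:

```python
import json
from urllib.parse import urlencode

payload = {"amount": 4900, "currency": "usd"}

# Same dict in code, two different bodies on the wire:
form_body = urlencode(payload)   # goes out as application/x-www-form-urlencoded
json_body = json.dumps(payload)  # goes out as application/json

print(form_body)  # amount=4900&currency=usd
print(json_body)  # {"amount": 4900, "currency": "usd"}
```

An endpoint that expects JSON will reject the form-encoded body with a 4xx, and nothing in the calling code hints at why.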
APXY intercepts the real traffic so Cursor has actual network evidence to work with, and so you can see exactly what is going wrong.
Step 1: Install APXY and start the proxy
```bash
curl -fsSL https://apxy.dev/install.sh | bash
apxy start
```

APXY installs a local proxy and opens the Web UI at http://localhost:8082. On first run it generates a root CA certificate and trusts it in your system keychain — enter your password once when prompted.
Step 2: Proxy the environment where your app runs
Open a new terminal and inject the proxy environment variables:
```bash
eval $(apxy env)
```

This sets HTTP_PROXY, HTTPS_PROXY, and language-specific CA certificate variables (NODE_EXTRA_CA_CERTS, REQUESTS_CA_BUNDLE, SSL_CERT_FILE) so any process launched from this shell routes through APXY automatically.
If your app is Node.js:

```bash
eval $(apxy env --lang node)
npm run dev
```

If it is Python:

```bash
eval $(apxy env --lang python)
python app.py
```

Key point: launch your app from the same shell where you ran `eval $(apxy env)`. Child processes inherit the proxy environment, so every HTTP call your app makes — including ones deep inside SDKs — will be captured.
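Environment inheritance is the whole mechanism here, and it is easy to verify. A minimal Python sketch (the proxy URL simply mirrors APXY's default port from above):

```python
import os
import subprocess
import sys

# Simulate what `eval $(apxy env)` does: the parent process exports the
# proxy variable, and every child it launches inherits it automatically.
env = {**os.environ, "HTTPS_PROXY": "http://127.0.0.1:8082"}

child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['HTTPS_PROXY'])"],
    env=env, capture_output=True, text=True,
)
print(child.stdout.strip())  # http://127.0.0.1:8082
```

This is why SDK-internal HTTP calls are captured too: the SDK code runs inside the same process tree and sees the same proxy variables.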
Step 3: Trigger the failing request
Run the operation in your app that is producing the bad API call. If the failure is intermittent, reproduce it a few times to capture multiple instances in the traffic log.
Open the APXY Web UI at http://localhost:8082 and go to Capture → Traffic. You should see the requests appear in real time.
Use the filter bar to narrow the list:
```bash
# Filter via CLI if you prefer the terminal
apxy logs list --filter "status:4xx"
apxy logs list --filter "url:api.example.com"
```

Step 4: Read the failing request
Click the failing request in the Web UI. You are looking at:
- Status code — 401, 403, 422, 500. Each points to a different root cause.
- Request headers — Is `Authorization` present? Is `Content-Type` correct? Is there a required `X-API-Version` header missing?
- Request body — Is the payload the shape the API expects? Is it encoded correctly (JSON vs form data)?
- Response body — Most APIs return a descriptive error message that does not show up in your application logs.

The response body is often the fastest path to the answer. An endpoint that returns `422 Unprocessable Entity` with `{"error": "missing required field: idempotency_key"}` tells you exactly what to fix — and that message never appears in a stack trace.
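When the captured body follows that pattern, the fix can be mechanical. A short Python sketch (the error format and field name are illustrative, taken from the example above):

```python
import json
import uuid

# The captured 422 response body from the proxy log.
response_body = '{"error": "missing required field: idempotency_key"}'

# Pull the field name out of the error message...
missing_field = json.loads(response_body)["error"].rsplit(": ", 1)[-1]

# ...and add it to the request payload before retrying.
payload = {"amount": 4900, "currency": "usd"}
payload[missing_field] = str(uuid.uuid4())

print(missing_field)           # idempotency_key
print(sorted(payload.keys()))  # ['amount', 'currency', 'idempotency_key']
```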
Step 5: Diff a failing request against a good one
If the same endpoint works in some cases but not others, the diff feature pinpoints the difference.
In the Web UI, select two requests to the same endpoint and click Diff. APXY highlights every difference in headers, body, and query parameters between the two.
Via the CLI:
```bash
# Get the IDs of two requests
apxy logs list --filter "url:api.example.com/checkout" --limit 5

# Diff them
apxy logs diff --ids <good-id>,<bad-id>
```

The diff almost always reveals the root cause immediately: a computed HMAC signature that changed, a missing idempotency key, a body field that is null instead of absent.
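The null-versus-absent case trips up many serializers, and the difference is visible only in the encoded body. A quick Python illustration:

```python
import json

# Both dicts "have no coupon" from the application's point of view,
# but they serialize to different bytes on the wire.
with_null = json.dumps({"amount": 4900, "coupon": None})
absent = json.dumps({"amount": 4900})

print(with_null)  # {"amount": 4900, "coupon": null}
print(absent)     # {"amount": 4900}
```

An API that treats `"coupon": null` as "clear the coupon" and a missing key as "leave it unchanged" will behave differently on these two requests, even though the calling code looks identical.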
Step 6: Mock the endpoint to unblock yourself
If the upstream API is rate-limiting you, returning errors in staging, or simply not ready yet, add a mock rule so you can keep working:
```bash
apxy mock add \
  --name "stable-checkout-response" \
  --url "https://api.stripe.com/v1/payment_intents" \
  --match contains \
  --status 200 \
  --body '{"id":"pi_mock","status":"succeeded","amount":4900}'
```

From this point, every request to that endpoint returns your mock response. Your app keeps running, Cursor keeps helping, and you can fix the underlying issue without blocking the rest of the work.
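To reason about which requests a rule will catch, the `contains` match is the simplest to model. A hedged Python sketch, assuming plain substring semantics (APXY's exact matching rules may differ):

```python
# Hypothetical model of a "contains" mock rule: the rule fires when the
# configured URL fragment appears anywhere in the request URL.
rule_fragment = "api.stripe.com/v1/payment_intents"

def rule_matches(request_url: str) -> bool:
    return rule_fragment in request_url

print(rule_matches("https://api.stripe.com/v1/payment_intents"))                # True
print(rule_matches("https://api.stripe.com/v1/payment_intents?expand=latest"))  # True
print(rule_matches("https://api.stripe.com/v1/charges"))                        # False
```

Substring matching means query strings and path suffixes still hit the mock, which is usually what you want while unblocked development is the goal.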
Remove the mock when you are ready to test against the real endpoint:
```bash
apxy mock remove --name "stable-checkout-response"
```

Step 7: Fix the code and replay to verify
Once Cursor suggests a fix — or you patch it yourself — replay the original failing request against the updated code to verify the response changed:
```bash
# Replay a captured request by ID
apxy logs replay --id <request-id>
```

APXY sends the same request again and shows you the new response alongside the original. If the status code changed from 422 to 200 and the response body is what you expected, the fix works — without redeploying or reproducing the full scenario manually.
Step 8: Give the evidence back to Cursor
The most powerful part of this workflow is closing the loop: taking the captured traffic back to Cursor so it can reason over real evidence instead of reading source code.
Export the failing request in a format Cursor can read:
```bash
# As a cURL command (paste directly into Cursor chat)
apxy logs export-curl --id <request-id>

# As structured JSON (good for Cursor's context window)
apxy logs export --id <request-id> --format json

# As TOON (Token-Optimized Output Notation — most compact for AI agents)
apxy logs export --id <request-id> --format toon
```

Paste the output into Cursor and ask: "Here is the actual request and response. What is wrong with it and how should I fix the code?"
Cursor now has the real headers, the real body, and the real error response — not a description of the code that produces them. That is the difference between a guess and a diagnosis.
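The shape of what helps the agent is simple: the full request and response as one structured object. A hand-rolled Python sketch of such a summary (the field names here are illustrative, not APXY's actual export schema):

```python
import json

# One captured exchange, condensed to the fields an agent needs.
exchange = {
    "request": {
        "method": "POST",
        "url": "https://api.example.com/v1/checkout",
        "headers": {"Content-Type": "application/json"},
        "body": {"amount": 4900, "currency": "usd"},
    },
    "response": {
        "status": 422,
        "body": {"error": "missing required field: idempotency_key"},
    },
}

# Compact separators keep the token count down when pasting into a chat window.
compact = json.dumps(exchange, separators=(",", ":"))
print(compact)
```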
The full loop in practice
A typical debugging session looks like this:
- Cursor generates code that calls an API
- The call fails with a vague error in your app logs
- You capture the real request in APXY — 30 seconds
- You read the response body and find the specific error message — 1 minute
- You diff it against a working request and see the missing field — 2 minutes
- You mock the endpoint to keep working while you fix it — 1 minute
- Cursor patches the code with the real error context you provide
- You replay the original request, the response is 200, you remove the mock
Total time from failure to verified fix: under 10 minutes. Without APXY, the same session involves console logs, re-running the app, guessing at the payload, and reading documentation to figure out what the API actually expected.
Common patterns to watch for
401 / 403 — Authentication problems. Check the Authorization header in the captured request. Look for token format mismatches (Bearer vs Token), expired tokens, and missing scopes. The response body from most OAuth providers tells you exactly what is wrong.
422 — Payload shape mismatch. The API received the request but rejected the body. Compare the request body in APXY against the API's schema. Field names, types, and required vs optional fields are the usual culprits.
400 with empty body in your app. Your app may be reading the response body before the request is fully sent, or the SDK may be silently swallowing the error. APXY captures the raw response body before your app touches it.
429 — Rate limiting. Look for patterns in the traffic log: how many requests per second is the code generating? A Cursor-assisted refactor sometimes introduces a retry loop that hammers the rate limit. Mocking the endpoint immediately unblocks you.
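A Cursor-introduced retry loop usually lacks backoff. The conventional fix is exponential backoff with a cap, sketched here in Python (the attempt count, base delay, and cap are illustrative):

```python
# Delay before attempt n: base * 2**n, capped so a long outage
# does not push individual waits into minutes.
def backoff_delays(max_attempts=5, base=0.5, cap=30.0):
    return [min(cap, base * (2 ** n)) for n in range(max_attempts)]

print(backoff_delays())  # [0.5, 1.0, 2.0, 4.0, 8.0]
```

Sleeping for these delays between attempts (often with a little random jitter added) keeps the request rate under the limit instead of hammering it.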
Working in Postman but failing in the app. This is the classic case. Diff a Postman-style request you construct manually against the one APXY captured from the app. The difference is almost always in a header or encoding that Postman adds automatically but the SDK does not.
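The comparison itself is mechanical once you have both header sets side by side. A minimal Python sketch (the header values are made up):

```python
# Headers from a working Postman-style request vs. the one the SDK sent.
good = {"Content-Type": "application/json", "Accept-Encoding": "gzip", "User-Agent": "PostmanRuntime/7.36.0"}
bad = {"Content-Type": "application/json", "User-Agent": "node"}

# Keys that differ in value or exist on only one side.
diff = {
    k: (good.get(k), bad.get(k))
    for k in sorted(good.keys() | bad.keys())
    if good.get(k) != bad.get(k)
}
print(diff)  # {'Accept-Encoding': ('gzip', None), 'User-Agent': ('PostmanRuntime/7.36.0', 'node')}
```

This is essentially what the diff view does for you, across headers, query parameters, and body at once.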
For the basics of routing Cursor through APXY, see How to Capture HTTPS Traffic from Cursor, Claude Code, and AI Coding Agents. For the broader case of using APXY with AI agents, see Why Your AI Coding Agent Needs Network Visibility.
Related articles
Why Your AI Coding Agent Needs Network Visibility
AI coding agents are excellent at reading code. They cannot see the network. That gap is where most agent-assisted debugging sessions get stuck. Here is how to close it.
Token Optimization: Fitting API Traffic into Your AI Agent's Context Window
Raw HTTP traffic is verbose. A single request-response pair can consume thousands of tokens. APXY's output formats compress traffic by 60–90% while keeping the information your agent actually needs to diagnose issues.