OpenAI (GPT) API key

This page walks you through creating an OpenAI API key and wiring it into Aparture so the pipeline can call GPT models. If you haven't picked a provider yet, Google AI is the easier on-ramp — Aparture's default slots are already all-Google, and the free tier covers an all-Flash setup end-to-end.

Two things worth knowing about OpenAI before you start:

  • There's no free trial. Every API account needs a $5 minimum prepaid deposit before any call goes through; the ChatGPT Plus subscription is separate and doesn't include API credit.
  • Prompt caching is automatic. OpenAI caches repeated prompt prefixes on its end with no code changes needed. Real cost on repeat runs with the same profile typically tracks 20–40% below list.

1. Sign up

Head to platform.openai.com/signup. You can sign up with email, Google, Microsoft, or Apple SSO.

Verification requires a click-through email and an SMS code (phone verification isn't skippable). Fill in the basic profile — "Personal" works fine for the organisation name.

2. Add billing

OpenAI returns 429 insufficient_quota on every API call until you have a credit balance, so billing setup comes before key creation.

  1. From the dashboard, navigate to Settings → Billing (in the left sidebar under your org name).
  2. Click Add payment method, add a credit card.
  3. Click Add to credit balance — minimum deposit is $5. Credits don't expire.
  4. Paying $5 immediately activates Tier 1 (the entry tier, $100/month spend ceiling).

Optional but recommended: also set a spend cap under Settings → Limits. A hard cap of $10–$25 and a notification threshold at ~50% of that is a sensible starting point; you can raise it later.

Auto-recharge is off by default. Leave it off unless you genuinely want charges that happen without your explicit sign-off.

3. Create an API key

  1. Go to Dashboard → API keys (left sidebar) or platform.openai.com/api-keys.
  2. Click + Create new secret key (top right).
  3. Name it something like aparture-local. The default project and "All" permissions are fine.
  4. Click Create secret key.
  5. Copy the key immediately once it appears — it's shown exactly once. If you miss it, delete and recreate.

The key format is sk-proj-<...>. Legacy sk-... keys still work but aren't issued to new accounts. Aparture treats both shapes identically.
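A quick shape check on the pasted key catches the copy-paste whitespace gotcha from the common-gotchas section before any API call. This is an illustrative sketch, not Aparture code; the sk-proj-/sk- prefixes are the two formats described above, everything else is an assumption:

```typescript
// Sketch: sanity-check an OpenAI key before wiring it in.
// Accepts both the current sk-proj- and legacy sk- shapes.
function normalizeOpenAIKey(raw: string): string {
  const key = raw.trim(); // strip trailing spaces / CRLF from copy-paste
  if (!/^sk-(proj-)?[A-Za-z0-9_-]+$/.test(key)) {
    throw new Error("value does not look like an OpenAI API key");
  }
  return key;
}
```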

4. Add to .env.local

Open .env.local in the Aparture project root and paste the key:

```bash
OPENAI_API_KEY=sk-proj-your-actual-key-here
```

No quotes, no spaces around =. Restart npm run dev if it's already running — Next.js reads .env.local once at server startup.
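To make "no quotes, no spaces" concrete, here is a sketch of a line checker for the pitfalls above. The helper is hypothetical (Next.js's own dotenv parsing is more forgiving than this), but it flags exactly the mistakes that bite:

```typescript
// Sketch: flag the .env.local pitfalls called out above.
// Returns an empty array for a clean KEY=value line.
function checkEnvLine(line: string): string[] {
  const problems: string[] = [];
  const m = line.match(/^([A-Z_]+)\s*=\s*(.*)$/);
  if (!m) return ["not a KEY=value line"];
  const value = m[2];
  if (/\s=|=\s/.test(line)) problems.push("spaces around =");
  if (/^["']|["']$/.test(value)) problems.push("quoted value");
  if (/\s$/.test(value)) problems.push("trailing whitespace");
  return problems;
}
```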

If you haven't already set ACCESS_PASSWORD in the same file, see the install page. Both values live in .env.local; the web UI can't launch without a password.

5. Verify

Aparture's default model slots are all-Google, so adding an OpenAI key alone doesn't route anything to OpenAI yet. To actually test your key, switch at least one slot to a GPT model first:

  1. Start (or restart) npm run dev and log in.
  2. Open the Settings panel.
  3. Change the pdfModel slot to gpt-5.4. This is the most expensive stage and the clearest signal that your key works.
  4. Back in the Control Panel, run the Minimal API Test.

Expect ~$0.30–$0.80 on the 5-paper test with the PDF stage on GPT-5.4.

If the key is invalid, you'll see 401 invalid_api_key. If you didn't add credit yet, you'll see 429 insufficient_quota. If gpt-5.4 returns model_not_found, your org may not have access to the newest models yet — check Dashboard → Models, or switch to gpt-5.4-mini which is available on every tier.
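The same error cases can be smoke-tested outside the UI. The endpoint and error codes below are OpenAI's (listing models consumes no tokens); the helper functions themselves are a hypothetical sketch:

```typescript
// Sketch: map the HTTP statuses discussed above to a next step.
function diagnose(status: number): string {
  switch (status) {
    case 200: return "key works";
    case 401: return "invalid_api_key: re-check the key in .env.local";
    case 429: return "insufficient_quota: add credit under Settings -> Billing";
    case 404: return "model_not_found: switch slots to a model your org has";
    default: return `unexpected status ${status}`;
  }
}

// A cheap live check that consumes no tokens: list available models.
// Requires OPENAI_API_KEY in the environment (Node 18+ for global fetch).
async function smokeTest(): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  });
  return diagnose(res.status);
}
```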

6. Recommended models

You pick each pipeline stage's model individually in the Settings panel. See Model selection for what each slot does and how Aparture uses it end to end; the table below is just the recommended OpenAI picks for an all-OpenAI setup.

| Stage | Model |
| --- | --- |
| Filter (filterModel) | gpt-5.4-nano |
| Scoring (scoringModel) | gpt-5.4-mini |
| PDF analysis (pdfModel) | gpt-5.4 |
| Briefing (briefingModel) | gpt-5.4 |
| Quick summary (quickSummaryModel) | gpt-5.4-nano |
| NotebookLM doc (notebookLMModel) | gpt-5.4 |

If you want a quality/cost step down, swap pdfModel to GPT-5.4 Mini. It bills at 30% of GPT-5.4's input and output rates, so that single change cuts roughly 70% off the PDF-analysis cost, which dominates the total.

7. Cost estimate

Per-model pricing

All GPT-5.4 models bill per million tokens (MTok), separately for input and output. List pricing for every OpenAI model in Aparture's registry:

| Model | Context | Input ($/MTok) | Output ($/MTok) |
| --- | --- | --- | --- |
| gpt-5.4 (recommended PDF + briefing) | 1M | $2.50 | $15.00 |
| gpt-5.4-mini (recommended scoring) | 400k | $0.75 | $4.50 |
| gpt-5.4-nano (recommended filter + q-summary) | 400k | $0.20 | $1.25 |

Automatic prompt caching. OpenAI caches repeated prompt prefixes on its end with no configuration needed. The cached-input rate is roughly 10× cheaper than the list rate above (gpt-5.4 $0.25/MTok, gpt-5.4-mini $0.075/MTok, gpt-5.4-nano $0.02/MTok). That typically nets a 20–40% reduction overall on repeat runs with the same profile.
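To see what the 10× cached-input discount means in practice, here is the arithmetic as a sketch. The 80% cached fraction in the example is an assumption for illustration, not a measured number:

```typescript
// Sketch: effective input rate when a fraction of input tokens hit the cache.
// listRate in $/MTok; cachedFraction in [0, 1]; cached tokens bill at 1/10 list.
function effectiveInputRate(listRate: number, cachedFraction: number): number {
  const cachedRate = listRate / 10;
  return cachedFraction * cachedRate + (1 - cachedFraction) * listRate;
}

// gpt-5.4 at $2.50/MTok list: if 80% of input tokens are cached,
// the effective rate is 0.8 * 0.25 + 0.2 * 2.50 = $0.70/MTok.
```

Output tokens are never cached, which is why the overall saving lands in the 20–40% range rather than anywhere near 90%.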

OpenAI updates pricing periodically. Verify current rates at developers.openai.com/api/docs/pricing before committing to real spend.

Worked calculation: 100 input papers (all-OpenAI lineup)

Reference case: 100 fetched papers, ~50 pass the filter and get scored, 20 go through PDF analysis (below the default maxDeepAnalysis cap of 30).

| Stage | Model | Input tokens | Output tokens | Cost |
| --- | --- | --- | --- | --- |
| Filter (100 abstracts) | GPT-5.4 Nano | ~40,000 | ~5,000 | 40k × $0.20/MTok + 5k × $1.25/MTok ≈ $0.01 |
| Scoring (50 abstracts) | GPT-5.4 Mini | ~40,000 | ~7,500 | 40k × $0.75 + 7.5k × $4.50 ≈ $0.06 |
| PDF analysis (20 papers) | GPT-5.4 | ~360,000 | ~40,000 | 360k × $2.50 + 40k × $15 ≈ $1.50 |
| Quick summaries (20) | GPT-5.4 Nano | ~30,000 | ~8,000 | 30k × $0.20 + 8k × $1.25 ≈ $0.02 |
| Briefing synthesis | GPT-5.4 | ~10,000 | ~3,500 | 10k × $2.50 + 3.5k × $15 ≈ $0.08 |
| Hallucination audit | GPT-5.4 | ~6,000 | ~800 | 6k × $2.50 + 0.8k × $15 ≈ $0.03 |
| Total, list price | | | | ~$1.70 |

With automatic caching on repeat runs (same profile, same category set), the stable prefix of each prompt drops to 10% of list input pricing, so repeat runs land at ~$1.15–1.25 per run after the first.
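The worked estimate above is straight per-token arithmetic and can be reproduced in a few lines. A sketch using the list rates from the pricing table; the token counts are the table's estimates, not measured values:

```typescript
// Sketch: recompute the 100-paper worked estimate from list rates ($/MTok).
type Stage = { model: string; inTok: number; outTok: number };

const rates: Record<string, { input: number; output: number }> = {
  "gpt-5.4": { input: 2.5, output: 15 },
  "gpt-5.4-mini": { input: 0.75, output: 4.5 },
  "gpt-5.4-nano": { input: 0.2, output: 1.25 },
};

const stages: Stage[] = [
  { model: "gpt-5.4-nano", inTok: 40_000, outTok: 5_000 },  // filter
  { model: "gpt-5.4-mini", inTok: 40_000, outTok: 7_500 },  // scoring
  { model: "gpt-5.4", inTok: 360_000, outTok: 40_000 },     // PDF analysis
  { model: "gpt-5.4-nano", inTok: 30_000, outTok: 8_000 },  // quick summaries
  { model: "gpt-5.4", inTok: 10_000, outTok: 3_500 },       // briefing
  { model: "gpt-5.4", inTok: 6_000, outTok: 800 },          // hallucination audit
];

const stageCost = (s: Stage) =>
  (s.inTok * rates[s.model].input + s.outTok * rates[s.model].output) / 1e6;

const total = stages.reduce((sum, s) => sum + stageCost(s), 0);
// total ≈ 1.70 (list price)
```

Swapping the rate table is all it takes to re-estimate a different lineup, which is how the Mini step-down figure earlier was derived.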

Scaling to other input volumes

Filter and scoring scale linearly with input volume. Stage 4 scales with how many papers you deep-analyse — the default cap is maxDeepAnalysis = 30, which binds at higher input volumes:

  • 50 papers in (10 PDFs): ~$0.87 list / ~$0.60 with caching
  • 100 papers in (20 PDFs): ~$1.70 list / ~$1.20 with caching (reference case above)
  • 250 papers in (30 PDFs, at cap): ~$2.60 list / ~$1.85 with caching — PDF analysis plateaus at the cap; filter + scoring keep growing
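The cap behaviour in those bullets can be sketched directly. The ~20% deep-analysis rate is an assumption inferred from the bullets above (10 of 50, 20 of 100), not a documented constant:

```typescript
// Sketch: how the maxDeepAnalysis cap shapes Stage 4 volume.
// Assumes ~20% of fetched papers reach deep analysis, per the bullets above.
function deepAnalysisCount(papersIn: number, maxDeepAnalysis = 30): number {
  return Math.min(Math.round(papersIn * 0.2), maxDeepAnalysis);
}

// deepAnalysisCount(50)  -> 10
// deepAnalysisCount(100) -> 20
// deepAnalysisCount(250) -> 30  (cap binds; filter + scoring keep growing)
```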

8. Common gotchas

  • Forgot to restart npm run dev. Hot-reload doesn't pick up .env.local changes; restart the server.
  • Minimal API Test still runs against Google. If you added the OpenAI key but didn't change a model slot in Settings, the test runs with Google models and won't exercise OpenAI at all. Swap at least one slot to a GPT model first.
  • 429 insufficient_quota before any real usage. You forgot the $5 deposit. Fix: Settings → Billing → Add to credit balance.
  • model_not_found for gpt-5.4. Your org may not yet have access to the newest models. Check Dashboard → Models, or switch to gpt-5.4-mini.
  • Whitespace in the key. Copy/paste sometimes adds a trailing space or a Windows CRLF line ending. Re-save .env.local with LF and no trailing whitespace.
  • Per-model rate limits are dashboard-only. OpenAI no longer publishes consolidated tables; your current limits are at platform.openai.com/settings/organization/limits.
  • Want faster PDF analysis? OpenAI caches repeated prompt prefixes automatically (no warmup needed), and the Tier 3+ TPM is generous enough to handle higher Stage 4 parallelism than the default of 3. Raise Parallel PDF analyses in Settings to 5–8 (tuning the pipeline).

Next

Key added and dev server restarted? Confirm it works: Verify your setup →


Snapshot taken 2026-04-19. OpenAI pricing, tier thresholds, and signup flow may change. Verify against developers.openai.com/api/docs/pricing and your console's live limits at platform.openai.com/settings/organization/limits before committing to real spend.

Released under the MIT License.