Quick Start
Get your first Aparture analysis configured in about five minutes.
Overview
This guide walks you through your first paper analysis using the web interface. You'll:
- Select arXiv categories
- Define your research interests
- Configure analysis settings
- Run the analysis
- Review results
Time required: ~5-10 minutes for configuration, 20-45 minutes for analysis
Prerequisites
Before starting, ensure you have:
- ✅ Installed Aparture
- ✅ Configured environment variables
- ✅ At least one API key (Anthropic, OpenAI, or Google)
- ✅ Development server running (`npm run dev`)
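The environment variables live in `.env.local` at the project root. A sketch of what that file might look like — note that apart from `ACCESS_PASSWORD`, which this guide uses, the key names below are assumptions; check your install's environment-variable documentation for the exact names:

```shell
# .env.local (sketch — key names other than ACCESS_PASSWORD are assumptions)
ACCESS_PASSWORD=choose-a-password
ANTHROPIC_API_KEY=sk-ant-...
# or OPENAI_API_KEY / GOOGLE_API_KEY, depending on your provider
```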
Step 1: Access the Application
- Start the development server: `npm run dev`
- Open your browser to http://localhost:3000
- Enter your `ACCESS_PASSWORD` from `.env.local`
You should now see the main Aparture interface.
Step 2: Select Categories
Choose which arXiv categories to monitor.
For this quick start, select:
- `cs.LG` - Machine Learning
- `cs.AI` - Artificial Intelligence
How to select:
- Click "Computer Science (cs)" to expand
- Check the boxes for `cs.LG` and `cs.AI`
- See the summary update: "2 categories selected"
Starting Small
Begin with 2-3 categories for your first run. You can expand later.
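Aparture queries arXiv for you, but if you want to sanity-check what a category is publishing before committing to it, the public arXiv API at export.arxiv.org can be queried directly. A minimal sketch (the endpoint and parameters are arXiv's, not Aparture's):

```python
from urllib.parse import urlencode

# Build a query against the public arXiv API for the two quick-start
# categories, newest submissions first.
params = urlencode({
    "search_query": "cat:cs.LG OR cat:cs.AI",
    "sortBy": "submittedDate",
    "sortOrder": "descending",
    "max_results": 10,
})
url = "http://export.arxiv.org/api/query?" + params
print(url)
# Fetch this URL (e.g. with urllib.request.urlopen) to get an Atom feed
# of recent entries, one <entry> per paper.
```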
Step 3: Define Research Criteria
Enter your research interests in natural language.
Example criteria:
I am interested in:
- Deep learning methods for computer vision
- Novel neural network architectures
- Transfer learning and fine-tuning techniques
- Practical applications with code implementations
Tips:
- Be specific about techniques you care about
- Mention both broad areas and specific interests
- Include domain applications if relevant
- Keep it under 200 words
Step 4: Configure Analysis Settings
Quick Filter (Recommended)
Enable quick filtering for faster, cheaper analysis:
- Quick Filter: ✅ Enable
- Model: Claude Haiku 4.5 (fast and cheap)
- Threshold: MAYBE (balanced)
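Conceptually, the quick filter gives each abstract a YES/MAYBE/NO verdict, and only papers at or above your threshold continue to full scoring. A hedged sketch of that gate (the verdict names come from this guide; the function itself is illustrative, not Aparture's actual code):

```python
# Illustrative quick-filter gate: with threshold "MAYBE", both YES and
# MAYBE papers pass; with threshold "YES", only YES papers pass.
RANK = {"NO": 0, "MAYBE": 1, "YES": 2}

def passes_quick_filter(verdict: str, threshold: str = "MAYBE") -> bool:
    return RANK[verdict] >= RANK[threshold]

# Numbers matching the example run later in this guide: 47 fetched,
# 20 YES + 10 MAYBE survive the filter.
verdicts = ["YES"] * 20 + ["MAYBE"] * 10 + ["NO"] * 17
kept = [v for v in verdicts if passes_quick_filter(v)]
print(len(kept))  # 30
```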
Abstract Scoring
Configure detailed scoring:
- Model: Claude Sonnet 4.5 (balanced quality)
- Batch Size: 10 papers per request
- Min Score Threshold: 5.0 (moderate relevance)
PDF Analysis
Set how many papers to analyze deeply:
- Model: Claude Opus 4.1 (best quality)
- Max Papers: 10 (good for first run)
Optional: NotebookLM
Generate a podcast-ready document:
- Generate NotebookLM: ✅ Enable
- Duration: 15 minutes
API Costs
This configuration costs approximately $1-2 for ~30 papers (Quick Filter + Sonnet scoring + 10 Opus PDF analyses). Adjust settings if cost is a concern.
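As a back-of-envelope check on that estimate, the implied per-paper cost is only a few cents:

```python
# Rough per-paper cost implied by the quick-start estimate above
# ($1-2 total for ~30 papers). Actual costs depend on token usage.
papers = 30
low, high = 1.00, 2.00
print(f"${low / papers:.3f} - ${high / papers:.3f} per paper")
# roughly $0.033 - $0.067 per paper
```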
Step 5: Start Analysis
- Click "Start Analysis" button
- Watch the progress indicators
- Wait for completion (~20-45 minutes)
Progress stages you'll see:
- 🔍 Fetching papers (~1 min)
- ⚡ Quick filter (~2 min)
- 📊 Scoring abstracts (~10-20 min)
- 📄 Analyzing PDFs (~10-20 min)
- 📝 Generating NotebookLM document (~1 min)
Step 6: Review Results
Once complete, you'll see:
Results Panel
Papers sorted by relevance score (0-10):
High relevance (8-10):
- Green border
- Detailed justification
- Full PDF analysis
Moderate relevance (5-7):
- Yellow border
- Brief justification
- May have PDF analysis
Lower relevance (<5):
- No border
- Short justification
What to Look For
Score: How relevant is this paper?
- 9-10: Must read
- 7-8: Should read
- 5-6: Maybe read
- <5: Probably skip
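The score bands above map directly onto a reading decision; a small helper (illustrative only) makes the mapping explicit:

```python
def triage(score: float) -> str:
    """Map a 0-10 relevance score to the reading guidance above."""
    if score >= 9:
        return "Must read"
    if score >= 7:
        return "Should read"
    if score >= 5:
        return "Maybe read"
    return "Probably skip"

print(triage(9.1))  # "Must read" — the top paper in the example run
```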
Justification: Why this score?
- Specific connections to your interests
- Key contributions mentioned
- Methodology relevance
PDF Analysis: Deep summary
- Main contributions
- Methodology details
- Results and findings
- Limitations
- Future directions
Step 7: Download Reports
Get your analysis results:
Analysis Report
- Click "Download Report"
- Saves as: `YYYY-MM-DD_arxiv_analysis_XXmin.md`
- Contains all scores, justifications, and PDF analyses
NotebookLM Document (if enabled)
- Click "Download NotebookLM Document"
- Saves as: `YYYY-MM-DD_notebooklm_XXmin.md`
- Upload to notebooklm.google.com to generate a podcast
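Both filenames follow the same pattern. Assuming the `XX` part is the run duration in minutes (the 32-minute example run later in this guide suggests as much, but this is an inference, not documented behavior), today's report name could be reproduced like this:

```python
from datetime import date

def report_name(kind: str, duration_min: int) -> str:
    # kind is "arxiv_analysis" or "notebooklm"; XXmin is assumed to be
    # the analysis duration in minutes.
    return f"{date.today():%Y-%m-%d}_{kind}_{duration_min}min.md"

print(report_name("arxiv_analysis", 32))
# e.g. 2025-01-15_arxiv_analysis_32min.md
```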
Reading Reports
Use a Markdown viewer like VS Code, Obsidian, or Typora for best experience.
Next Steps
Refine Your Workflow
Now that you've completed your first analysis:
- Adjust categories - Add or remove based on results
- Refine criteria - Update based on what was/wasn't caught
- Optimize costs - Adjust batch sizes and thresholds
- Try different models - Experiment with cost/quality trade-offs
Automate Daily Runs
Set up CLI automation for unattended daily analyses:
```shell
# Configure once
npm run setup

# Run daily
npm run analyze
```
Explore Advanced Features
- Testing modes: dry run and minimal tests
- Model selection: choose the right models
- NotebookLM podcasts: generate audio overviews
Troubleshooting
No Papers Found
Possible causes:
- No papers published today in selected categories
- Too narrow category selection
- arXiv API temporarily down
Solutions:
- Try different categories
- Wait until later in the day (papers are published throughout the day)
- Check arXiv status
Analysis Stuck
If progress stops:
- Check browser console (F12) for errors
- Verify API keys are valid and have available credits
- Check API rate limits in provider dashboards
- Refresh the page if interface becomes unresponsive
High Costs
To reduce costs:
- Enable Quick Filter (saves 40-60%)
- Use cheaper models (Haiku, Flash)
- Reduce batch sizes
- Lower PDF analysis limit
- Start with fewer categories
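To see what the quick-filter saving means in dollars, apply the 40-60% figure to the example run's cost (both numbers come from this guide; the arithmetic is only a rough projection):

```python
# The example run later in this guide cost ~$1.80 with Quick Filter
# enabled. If the filter saves 40-60%, the unfiltered cost would be:
filtered_cost = 1.80
for saving in (0.40, 0.60):
    print(f"{saving:.0%} saving -> ~${filtered_cost / (1 - saving):.2f} unfiltered")
# i.e. somewhere around $3.00 - $4.50 without the filter
```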
Poor Relevance
If papers aren't relevant:
- Make research criteria more specific
- Add example topics or papers
- Raise the minimum score threshold
- Enable post-processing for consistency
Example Output
Here's what a typical first run produces:
- Papers fetched: 47 (from cs.LG and cs.AI)
- After quick filter: 30 papers (20 YES, 10 MAYBE)
- Average score: 6.2/10
- Top score: 9.1
- Papers with PDF analysis: 10
- Duration: 32 minutes
- Cost: ~$1.80
Top paper example:
- Title: "Efficient Attention Mechanisms for Vision Transformers"
- Score: 9.1/10
- Why relevant: Novel attention mechanism directly applicable to your computer vision interests. Includes a code implementation and strong empirical results on standard benchmarks.
Tips for Success
- Start small - 2-3 categories, 10 PDFs
- Iterate - Refine criteria based on results
- Track costs - Monitor API usage dashboards
- Test first - Use dry run mode before production
- Read the reports - Don't just trust scores
Getting Help
- Check the User Guide for detailed interface documentation
- See the Testing Guide for troubleshooting test modes
- File issues on GitHub
Happy discovering! 🔍