Major performance improvements for CI builds:
1. **Removed proactive rate limit threshold**
- No longer waits at 500 remaining requests
- Uses full 5000 request quota before forced wait
- Maximizes work per rate limit cycle
2. **Implemented incremental indexing**
- Checks if repository already exists in database
- Compares last_commit (pushedAt) to detect changes
- Only fetches README for new or updated repositories
- Skips README fetch for unchanged repos (major time savings)
3. **Increased timeout to GitHub maximum**
- Job timeout: 180m → 360m (6 hours, GitHub free tier max)
- Script timeout: 170m → 350m
- Allows full first-run indexing to complete
Impact on performance:
**First run (empty database):**
- Same as before: ~25,000 repos need full indexing
- Will use all 360 minutes but should complete
**Subsequent runs (incremental):**
- Only fetches READMEs for changed repos (~5-10% typically)
- Dramatically faster: estimated 30-60 minutes instead of 360
- Makes daily automated builds sustainable
Files changed:
- lib/github-api.js: Removed proactive rate limit check
- lib/indexer.js: Added incremental indexing logic
- .github/workflows/build-database.yml: Increased timeout to 360m
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
CRITICAL FIX: raw.githubusercontent.com does NOT count against GitHub
API rate limits, but the code was treating all requests the same way.
Problem:
- README fetches (~25,000) were going through rateLimitedRequest()
- Added artificial delays, proactive checks, and unnecessary waits
- Build took ~7 hours instead of ~2-3 hours
- Only getRepoInfo() API calls actually count against rate limits
Solution:
1. Created fetchRawContent() function for direct raw content fetches
2. Updated getReadme() to use fetchRawContent()
3. Updated getAwesomeListsIndex() to use fetchRawContent()
4. Reduced workflow timeout: 330m → 180m (3 hours)
Impact:
- Build time: ~7 hours → ~2-3 hours (60% reduction)
- Only ~25K API calls (getRepoInfo) count against 5000/hour limit
- ~25K README fetches are now unrestricted via raw.githubusercontent.com
- Will complete well within GitHub Actions 6-hour free tier limit
Files changed:
- lib/github-api.js: Add fetchRawContent(), update getReadme() and
getAwesomeListsIndex() to use it
- .github/workflows/build-database.yml: Reduce timeout to 180 minutes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Accurate calculation for ~25,000 READMEs with 5000/hour rate limit:
- Requests needed: ~25,000 (1 per README + metadata)
- Cycles needed: 25,000 / 4,500 ≈ 5.5 cycles
- Time per cycle: ~44 min work + ~16 min wait = 60 min total
- Total time: 5.5 × 60 = 330 minutes (5.5 hours)
Previous timeouts were insufficient:
- 170 minutes: completed 2.9 cycles (~13,500 requests)
- 270 minutes: would complete 4.5 cycles (~20,000 requests)
- 330 minutes: allows full completion with buffer
Changes:
- Job timeout: 180m → 330m (5.5 hours)
- Script timeout: 170m → 320m
- Within GitHub Actions free tier limit (360 minutes/6 hours)
Alternative: Use 'sample' mode for faster builds if full index
is not immediately needed.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Initialize database inside indexer process to ensure connection exists
- Configure GitHub token in same process as indexer
- Make indexer throw errors instead of returning early for CI failure detection
- Remove duplicate token configuration step
- Pass GITHUB_TOKEN as environment variable to build step
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>