fix(logging): replace console.warn with logger in stars API route #3514

badhra-ajaz wants to merge 2 commits into simstudioai:main
Conversation
The stars API route used console.warn() for error logging while every other API route in the codebase uses createLogger from @sim/logger. This makes warning output from this route invisible to the structured logging pipeline. Replaced both console.warn() calls with logger.warn() to match the pattern used across all other API routes.
PR Summary (Medium Risk)

Overview: The stars API route replaces console.warn() with logger.warn(); OpenAI provider requests now apply a 60s timeout.

Written by Cursor Bugbot for commit 0ecde36.
```diff
 if (!response.ok) {
-  console.warn('GitHub API request failed:', response.status)
+  logger.warn('GitHub API request failed:', response.status)
```
Production warnings silently suppressed by logger level filtering
Medium Severity
Switching from console.warn to logger.warn silently suppresses these warnings in production. The @sim/logger defaults to LogLevel.ERROR in production, so WARN-level messages are filtered out unless LOG_LEVEL is explicitly overridden. Previously, console.warn always emitted output. GitHub API failures and fetch errors will now go unnoticed in production logs.
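The filtering behavior described above can be sketched as follows. Note this is a simplified stand-in for `@sim/logger`, whose actual API and implementation may differ; only the level-comparison logic it illustrates is taken from the review comment:

```typescript
// Simplified model of a level-filtered logger (hypothetical; the real
// @sim/logger API may differ). Illustrates why WARN-level messages are
// dropped when the minimum level defaults to ERROR in production.
enum LogLevel { DEBUG = 0, INFO = 1, WARN = 2, ERROR = 3 }

function createLogger(name: string, minLevel: LogLevel) {
  // Returns the formatted line, or null when the level is filtered out.
  const emit = (level: LogLevel, label: string, msg: string): string | null => {
    if (level < minLevel) return null // filtered: below the minimum level
    return `[${label}] [${name}] ${msg}`
  }
  return {
    warn: (msg: string) => emit(LogLevel.WARN, 'WARN', msg),
    error: (msg: string) => emit(LogLevel.ERROR, 'ERROR', msg),
  }
}

// Production default per the review comment: minLevel = ERROR,
// so warn() is silently dropped; development allows WARN through.
const prodLogger = createLogger('StarsAPI', LogLevel.ERROR)
const devLogger = createLogger('StarsAPI', LogLevel.WARN)
```

Under this model, `prodLogger.warn(...)` yields nothing while `devLogger.warn(...)` emits, which is exactly the production-visibility gap the reviewer flags.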
Greptile Summary: This PR replaces two console.warn() calls with logger.warn() in the stars API route.
Confidence Score: 4/5
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[GET /api/stars] --> B[fetch GitHub API]
    B --> C{response.ok?}
    C -- No --> D["logger.warn('GitHub API request failed:', status)"]
    D --> E[Return fallback count]
    C -- Yes --> F[Parse stargazers_count]
    F --> G[Return formatted count]
    B -- throws --> H["logger.warn('Error fetching GitHub stars:', error)"]
    H --> E
    style D fill:#ffe066,stroke:#e6b800
    style H fill:#ffe066,stroke:#e6b800
    subgraph logger["@sim/logger behavior"]
        direction LR
        L1["development → WARN ✅ logged"]
        L2["production → WARN ❌ suppressed by default"]
        L3["test → WARN ❌ suppressed"]
    end
    D -.-> logger
    H -.-> logger
```

Last reviewed commit: 1de514b
```diff
   logger.warn('GitHub API request failed:', response.status)
   return NextResponse.json({ stars: formatStarCount(19400) })
 }

 const data = await response.json()
 return NextResponse.json({ stars: formatStarCount(Number(data?.stargazers_count ?? 19400)) })
 } catch (error) {
-  console.warn('Error fetching GitHub stars:', error)
+  logger.warn('Error fetching GitHub stars:', error)
```
Warn-level logs are suppressed in production by default
The @sim/logger package sets minLevel: ERROR in production environments, which means both logger.warn() calls will be silently dropped in production unless a LOG_LEVEL env override is configured. Prior to this change, console.warn() always printed regardless of environment.
This means GitHub API failures — both non-OK responses and thrown exceptions — will be invisible in production logs by default. Consider whether these should be elevated to logger.error() to ensure they remain visible in production, or confirm that LOG_LEVEL=WARN is set in your production environment to preserve the previous behavior.
Suggested change:

```diff
-logger.warn('GitHub API request failed:', response.status)
+logger.error('GitHub API request failed:', response.status)
```
and
Suggested change:

```diff
-logger.warn('Error fetching GitHub stars:', error)
+logger.error('Error fetching GitHub stars:', error)
```
Currently, fetch() calls to OpenAI endpoints in the Responses API provider have no timeout, which means HTTP requests can hang indefinitely on network issues, server unresponsiveness, or slow model responses. This causes:

- Simulations stuck waiting forever for LLM responses
- Resource exhaustion when multiple sims run concurrently
- Poor user experience (no error feedback, hanging UI)
- Wasted compute resources on hung HTTP requests

This adds a 60-second timeout using AbortSignal.timeout() while preserving any existing abort signals via AbortSignal.any().

**PeakInfer Issue:** Missing timeout on LLM API HTTP requests
**Impact:** Prevents indefinite hangs and improves reliability
**Category:** Reliability + Latency

Changes:

- Added 60s timeout to postResponses() fetch (line 265-268)
- Added 60s timeout to streaming fetch (line 293-296)
- Added 60s timeout to final streaming fetch after tools (line 718-721)
- Preserves existing abortSignal functionality via AbortSignal.any()
- Applies to all OpenAI-compatible providers (OpenAI, Azure, etc.)

This follows PeakInfer best practices for production LLM systems:

- Prevents resource exhaustion from hung requests
- Enables faster error detection and recovery
- Improves system resilience under network issues
- 60s timeout balances patience for long responses vs system health

🤖 Generated with PeakInfer LLM inference optimization
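The signal combination this describes can be sketched standalone. This assumes a Node 20+ or modern-browser runtime, where both `AbortSignal.timeout` and `AbortSignal.any` are available; the helper name `withTimeout` is illustrative, not from the PR:

```typescript
// Combine a hard timeout with an optional caller-provided abort signal.
// Whichever fires first aborts the associated fetch.
function withTimeout(abortSignal?: AbortSignal, timeoutMs = 60_000): AbortSignal {
  const timeoutSignal = AbortSignal.timeout(timeoutMs)
  return abortSignal ? AbortSignal.any([timeoutSignal, abortSignal]) : timeoutSignal
}

// Example: a user-driven controller combined with the 60s ceiling.
const controller = new AbortController()
const combined = withTimeout(controller.signal)
// fetch(url, { signal: combined }) would abort on whichever fires first.
```

Aborting `controller` aborts `combined` synchronously; otherwise the timeout signal fires after 60 seconds. Note that the timeout clock starts when `AbortSignal.timeout()` is called, not when the fetch begins, which is central to the review comments below.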
Cursor Bugbot has reviewed your changes and found 2 potential issues.
```diff
   headers: config.headers,
   body: JSON.stringify(createRequestBody(initialInput, { stream: true })),
-  signal: request.abortSignal,
+  signal: combinedSignal,
```
60s timeout aborts long-running streaming LLM responses
High Severity
The AbortSignal.timeout(60000) starts counting from creation (before the fetch call) and remains attached to the response body stream. For streaming responses, createReadableStreamFromResponses reads chunks via reader.read() asynchronously after the function returns. If the total time from fetch start to stream completion exceeds 60 seconds, the signal fires and aborts the stream mid-response. LLM streaming responses routinely exceed 60 seconds for long outputs or reasoning models, so this will silently truncate user-facing responses.
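One common mitigation for the streaming case — a hypothetical sketch, not something this PR implements — is an idle timeout that is reset on every chunk, so only a stalled stream aborts rather than a long-but-healthy one:

```typescript
// Hypothetical idle-timeout helper: aborts only if no activity occurs
// within `ms`. Callers call reset() on every received chunk and clear()
// once the stream ends, so a slow-but-flowing stream never aborts.
function createIdleTimeout(ms: number) {
  const controller = new AbortController()
  const arm = () => setTimeout(() => controller.abort(new Error('idle timeout')), ms)
  let timer = arm()
  return {
    signal: controller.signal,
    reset() { clearTimeout(timer); timer = arm() }, // call on each chunk
    clear() { clearTimeout(timer) },                // call when the stream closes
  }
}
```

A reader loop would call `reset()` each time `reader.read()` resolves with a chunk, and `clear()` when the stream reports done, avoiding the mid-response truncation the reviewer describes.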
```diff
+const timeoutSignal = AbortSignal.timeout(60000) // 60 seconds
+const combinedSignal = request.abortSignal
+  ? AbortSignal.any([timeoutSignal, request.abortSignal])
+  : timeoutSignal
```
60s timeout fails reasoning models in non-streaming calls
High Severity
The postResponses function applies a hard 60-second AbortSignal.timeout to every non-streaming API call. This function is the main path for tool-calling and reasoning model requests. Reasoning models (o1, o3) — explicitly supported via request.reasoningEffort — can spend minutes thinking before producing a response. The 60-second timeout will cause these requests to abort with an error, a regression from the prior behavior which had no hard timeout.
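One way to address this — a hypothetical sketch; the function name, the env var, and the 10-minute figure are illustrative, not from the PR — is to make the ceiling per-request, with a longer default when `reasoningEffort` is set:

```typescript
// Hypothetical per-request timeout resolution: reasoning models get a
// much longer ceiling, and an explicit env override wins over both
// defaults. All names and values here are illustrative assumptions.
function resolveTimeoutMs(
  opts: { reasoningEffort?: string },
  env: Record<string, string | undefined> = process.env
): number {
  const override = Number(env.LLM_REQUEST_TIMEOUT_MS)
  if (Number.isFinite(override) && override > 0) return override
  return opts.reasoningEffort ? 600_000 : 60_000 // 10 min for reasoning models
}
```

The resulting value would feed `AbortSignal.timeout(resolveTimeoutMs(request))` in place of the hardcoded 60000, preserving the hang-prevention goal without regressing o1/o3-style requests.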


Problem

The `/api/stars` route uses `console.warn()` for error and failure logging, while every other API route in the codebase uses `createLogger` from `@sim/logger`. This means warning output from this route bypasses the structured logging pipeline.

Before:

Fix

Replaced both `console.warn()` calls with `logger.warn()` using a `StarsAPI` logger instance, matching the pattern used in all other API routes (e.g., `EnvironmentAPI`, `SkillsAPI`, `CreditsAPI`).
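The per-route named-logger pattern looks roughly like this. The `createLogger` below is a self-contained stub standing in for `@sim/logger` (whose real signature and output format are assumed, not confirmed by the PR):

```typescript
// Self-contained stand-in for @sim/logger's createLogger (hypothetical
// signature), shown only to illustrate the per-route named-logger
// pattern: each route gets an instance tagged with its own name.
function createLogger(name: string) {
  return {
    warn: (...args: unknown[]) => console.warn(`[${name}]`, ...args),
  }
}

// Per-route logger instance, as the PR describes for the stars route:
const logger = createLogger('StarsAPI')
```

Every line the route emits is then attributable to `StarsAPI`, the same way `EnvironmentAPI`, `SkillsAPI`, and `CreditsAPI` tag theirs.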