fix(logging): replace console.warn with logger in stars API route#3514

Open
badhra-ajaz wants to merge 2 commits into simstudioai:main from badhra-ajaz:fix/stars-api-logger

Conversation

@badhra-ajaz

Problem

The /api/stars route uses console.warn() for error and failure logging, while every other API route in the codebase uses createLogger from @sim/logger. This means warning output from this route bypasses the structured logging pipeline.

Before:

console.warn('GitHub API request failed:', response.status)
console.warn('Error fetching GitHub stars:', error)

Fix

Replaced both console.warn() calls with logger.warn() using a StarsAPI logger instance, matching the pattern used in all other API routes (e.g., EnvironmentAPI, SkillsAPI, CreditsAPI).

import { createLogger } from '@sim/logger'
const logger = createLogger('StarsAPI')

// ...
logger.warn('GitHub API request failed:', response.status)
logger.warn('Error fetching GitHub stars:', error)
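The snippets above also call a formatStarCount helper whose implementation is not part of this PR. A hedged sketch of one plausible implementation, purely to make the examples concrete (the real helper in the codebase may behave differently):

```typescript
// Hypothetical sketch of formatStarCount; the actual helper is not
// shown in this PR and may differ.
function formatStarCount(count: number): string {
  if (count < 1000) return String(count)
  // Round to one decimal in thousands, e.g. 19400 -> 19.4
  const rounded = Math.round(count / 100) / 10
  // Drop a trailing ".0": 2000 -> "2k", not "2.0k"
  return `${rounded % 1 === 0 ? rounded.toFixed(0) : rounded}k`
}
```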

The stars API route used console.warn() for error logging while every
other API route in the codebase uses createLogger from @sim/logger.
This makes warning output from this route invisible to the structured
logging pipeline.

Replaced both console.warn() calls with logger.warn() to match the
pattern used across all other API routes.
@cursor

cursor bot commented Mar 11, 2026

PR Summary

Medium Risk
Adds forced 60s timeouts to OpenAI provider fetch calls (including streaming), which can change request cancellation behavior and surface new abort errors under slow responses. Logging change is low risk but the provider timeout affects core model execution paths.

Overview
Improves operational robustness by standardizing warning logging in /api/stars and enforcing request timeouts in the OpenAI Responses provider.

The stars API route replaces console.warn with a scoped @sim/logger instance (StarsAPI) so failures flow through structured logging.

OpenAI provider requests now apply a 60s AbortSignal.timeout() and combine it with any caller-provided abortSignal via AbortSignal.any() for both non-streaming and streaming fetch calls, ensuring long-running provider calls are automatically cancelled.

Written by Cursor Bugbot for commit 0ecde36.

@vercel

vercel bot commented Mar 11, 2026

The latest updates on your projects:

1 Skipped Deployment

Project: docs | Deployment: Skipped | Updated (UTC): Mar 12, 2026 5:42am

  if (!response.ok) {
-   console.warn('GitHub API request failed:', response.status)
+   logger.warn('GitHub API request failed:', response.status)
Production warnings silently suppressed by logger level filtering

Medium Severity

Switching from console.warn to logger.warn silently suppresses these warnings in production. The @sim/logger defaults to LogLevel.ERROR in production, so WARN-level messages are filtered out unless LOG_LEVEL is explicitly overridden. Previously, console.warn always emitted output. GitHub API failures and fetch errors will now go unnoticed in production logs.
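To make the suppression concrete, here is a minimal sketch of level-based filtering. This is not the actual @sim/logger implementation, only an illustration of why a WARN message can vanish when the minimum level defaults to ERROR:

```typescript
// Minimal level-filtering sketch (hypothetical, not @sim/logger itself).
enum LogLevel {
  DEBUG = 0,
  INFO = 1,
  WARN = 2,
  ERROR = 3,
}

function createLogger(scope: string, minLevel: LogLevel) {
  // Returns the formatted line, or null when the message is filtered out.
  const emit = (level: LogLevel, label: string, msg: string): string | null =>
    level >= minLevel ? `[${scope}] ${label}: ${msg}` : null

  return {
    warn: (msg: string) => emit(LogLevel.WARN, 'WARN', msg),
    error: (msg: string) => emit(LogLevel.ERROR, 'ERROR', msg),
  }
}

// Development-style default (WARN): warnings are emitted.
const dev = createLogger('StarsAPI', LogLevel.WARN)
// Production-style default (ERROR): warnings are silently dropped.
const prod = createLogger('StarsAPI', LogLevel.ERROR)
```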


@greptile-apps (Contributor)

greptile-apps bot commented Mar 11, 2026

Greptile Summary

This PR replaces two console.warn() calls in apps/sim/app/api/stars/route.ts with structured logger.warn() calls using a StarsAPI createLogger instance, aligning this route with the logging conventions used across all other API routes in the codebase.

Key observations:

  • The import order and logger instantiation correctly follow the established pattern (e.g., EnvironmentAPI, SkillsAPI).
  • The @sim/logger package defaults to minLevel: ERROR in production, meaning both logger.warn calls will be silenced in production unless a LOG_LEVEL env override is configured — a behavioral difference from the previous console.warn which always printed. Consider using logger.error instead to preserve production visibility of GitHub API failures.

Confidence Score: 4/5

  • Safe to merge with low risk; the behavioral change in production logging is worth confirming before landing.
  • The change is minimal and structurally correct, matching the pattern used across all other API routes. The only concern is that logger.warn is suppressed in production by the logger's default minLevel: ERROR, which silently changes the observability of GitHub API failures compared to the original console.warn. The author should make an informed decision about whether to use logger.error instead or explicitly set LOG_LEVEL=WARN in production.
  • apps/sim/app/api/stars/route.ts — confirm intended log level (warn vs. error) given production log suppression behavior

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[GET /api/stars] --> B[fetch GitHub API]
    B --> C{response.ok?}
    C -- No --> D["logger.warn('GitHub API request failed:', status)"]
    D --> E[Return fallback count]
    C -- Yes --> F[Parse stargazers_count]
    F --> G[Return formatted count]
    B -- throws --> H["logger.warn('Error fetching GitHub stars:', error)"]
    H --> E

    style D fill:#ffe066,stroke:#e6b800
    style H fill:#ffe066,stroke:#e6b800

    subgraph logger["@sim/logger behavior"]
        direction LR
        L1[development → WARN ✅ logged]
        L2[production → WARN ❌ suppressed by default]
        L3[test → WARN ❌ suppressed]
    end

    D -.-> logger
    H -.-> logger

Last reviewed commit: 1de514b

Comment on lines +28 to +35

      logger.warn('GitHub API request failed:', response.status)
      return NextResponse.json({ stars: formatStarCount(19400) })
    }

    const data = await response.json()
    return NextResponse.json({ stars: formatStarCount(Number(data?.stargazers_count ?? 19400)) })
  } catch (error) {
-   console.warn('Error fetching GitHub stars:', error)
+   logger.warn('Error fetching GitHub stars:', error)

Warn-level logs are suppressed in production by default

The @sim/logger package sets minLevel: ERROR in production environments, which means both logger.warn() calls will be silently dropped in production unless a LOG_LEVEL env override is configured. Prior to this change, console.warn() always printed regardless of environment.

This means GitHub API failures — both non-OK responses and thrown exceptions — will be invisible in production logs by default. Consider whether these should be elevated to logger.error() to ensure they remain visible in production, or confirm that LOG_LEVEL=WARN is set in your production environment to preserve the previous behavior.

Suggested change
- logger.warn('GitHub API request failed:', response.status)
+ logger.error('GitHub API request failed:', response.status)

and

Suggested change
- logger.warn('Error fetching GitHub stars:', error)
+ logger.error('Error fetching GitHub stars:', error)

Currently, fetch() calls to OpenAI endpoints in the Responses API provider
have no timeout, which means HTTP requests can hang indefinitely on network
issues, server unresponsiveness, or slow model responses.

This causes:
- Simulations stuck waiting forever for LLM responses
- Resource exhaustion when multiple sims run concurrently
- Poor user experience (no error feedback, hanging UI)
- Wasted compute resources on hung HTTP requests

This adds a 60-second timeout using AbortSignal.timeout() while preserving
any existing abort signals via AbortSignal.any().
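The signal combination described above can be sketched as follows. combineSignals is a hypothetical helper name (the PR inlines this logic at each fetch call site); AbortSignal.any() requires Node 20+:

```typescript
// Combine a fixed timeout with an optional caller-provided signal.
// Hypothetical helper; the PR inlines this at each call site.
function combineSignals(timeoutMs: number, abortSignal?: AbortSignal): AbortSignal {
  const timeoutSignal = AbortSignal.timeout(timeoutMs)
  // Abort when either the timeout fires or the caller aborts,
  // whichever happens first.
  return abortSignal
    ? AbortSignal.any([timeoutSignal, abortSignal])
    : timeoutSignal
}

// Usage at a fetch call site:
// fetch(url, { ...init, signal: combineSignals(60_000, request.abortSignal) })
```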

**PeakInfer Issue:** Missing timeout on LLM API HTTP requests
**Impact:** Prevents indefinite hangs and improves reliability
**Category:** Reliability + Latency

Changes:
- Added 60s timeout to postResponses() fetch (line 265-268)
- Added 60s timeout to streaming fetch (line 293-296)
- Added 60s timeout to final streaming fetch after tools (line 718-721)
- Preserves existing abortSignal functionality via AbortSignal.any()
- Applies to all OpenAI-compatible providers (OpenAI, Azure, etc.)

This follows PeakInfer best practices for production LLM systems:
- Prevents resource exhaustion from hung requests
- Enables faster error detection and recovery
- Improves system resilience under network issues
- 60s timeout balances patience for long responses vs system health

🤖 Generated with PeakInfer LLM inference optimization

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.


    headers: config.headers,
    body: JSON.stringify(createRequestBody(initialInput, { stream: true })),
-   signal: request.abortSignal,
+   signal: combinedSignal,

60s timeout aborts long-running streaming LLM responses

High Severity

The AbortSignal.timeout(60000) starts counting from creation (before the fetch call) and remains attached to the response body stream. For streaming responses, createReadableStreamFromResponses reads chunks via reader.read() asynchronously after the function returns. If the total time from fetch start to stream completion exceeds 60 seconds, the signal fires and aborts the stream mid-response. LLM streaming responses routinely exceed 60 seconds for long outputs or reasoning models, so this will silently truncate user-facing responses.
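One way to address this, sketched below (this is not code from the PR, and readWithIdleTimeout is a hypothetical helper): bound the gap between chunks rather than the total stream duration, so a long but healthy stream is never truncated while a stalled one still aborts.

```typescript
// Idle-timeout sketch: each read() gets a fresh time budget, so only
// a stalled stream rejects; total duration is unbounded.
type ReadResult<T> = { done: boolean; value?: T }

async function readWithIdleTimeout<T>(
  reader: { read(): Promise<ReadResult<T>> },
  idleMs: number,
  onChunk: (chunk: T) => void
): Promise<void> {
  for (;;) {
    let timer: ReturnType<typeof setTimeout> | undefined
    const idle = new Promise<never>((_, reject) => {
      timer = setTimeout(() => reject(new Error('stream idle timeout')), idleMs)
    })
    try {
      // Race each individual read against the idle timer.
      const result = await Promise.race([reader.read(), idle])
      if (result.done) return
      if (result.value !== undefined) onChunk(result.value)
    } finally {
      // Cancel the timer so a completed read does not leak a rejection.
      clearTimeout(timer)
    }
  }
}
```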


const timeoutSignal = AbortSignal.timeout(60000) // 60 seconds
const combinedSignal = request.abortSignal
? AbortSignal.any([timeoutSignal, request.abortSignal])
: timeoutSignal

60s timeout fails reasoning models in non-streaming calls

High Severity

The postResponses function applies a hard 60-second AbortSignal.timeout to every non-streaming API call. This function is the main path for tool-calling and reasoning model requests. Reasoning models (o1, o3) — explicitly supported via request.reasoningEffort — can spend minutes thinking before producing a response. The 60-second timeout will cause these requests to abort with an error, a regression from the prior behavior which had no hard timeout.
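One possible mitigation, sketched below (not code from the PR): make the hard deadline an optional, per-request setting so reasoning-model requests can use a longer deadline or none at all. resolveSignal and requestTimeoutMs are hypothetical names introduced here for illustration.

```typescript
// Opt-in deadline sketch: no timeout unless the caller asks for one.
// Requires Node 20+ for AbortSignal.any().
function resolveSignal(
  abortSignal?: AbortSignal,
  requestTimeoutMs?: number
): AbortSignal | undefined {
  // No deadline requested: pass the caller's signal (or nothing) through.
  if (requestTimeoutMs === undefined) return abortSignal
  const timeoutSignal = AbortSignal.timeout(requestTimeoutMs)
  return abortSignal
    ? AbortSignal.any([timeoutSignal, abortSignal])
    : timeoutSignal
}
```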

