7 changes: 5 additions & 2 deletions apps/sim/app/api/stars/route.ts
@@ -1,6 +1,9 @@
import { NextResponse } from 'next/server'
import { createLogger } from '@sim/logger'
import { env } from '@/lib/core/config/env'

const logger = createLogger('StarsAPI')

function formatStarCount(num: number): string {
if (num < 1000) return String(num)
const formatted = (Math.round(num / 100) / 10).toFixed(1)
@@ -22,14 +25,14 @@ export async function GET() {
})

if (!response.ok) {
console.warn('GitHub API request failed:', response.status)
logger.warn('GitHub API request failed:', response.status)
Production warnings silently suppressed by logger level filtering

Medium Severity

Switching from console.warn to logger.warn silently suppresses these warnings in production. The @sim/logger defaults to LogLevel.ERROR in production, so WARN-level messages are filtered out unless LOG_LEVEL is explicitly overridden. Previously, console.warn always emitted output. GitHub API failures and fetch errors will now go unnoticed in production logs.

Additional Locations (1)
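The suppression mechanism described above can be sketched as a minimal level-gated logger. This is a hedged illustration of the general pattern; `@sim/logger`'s actual API, enum values, and defaults may differ:

```typescript
// Minimal sketch of level-based log filtering. The enum values and the
// createLogger shape here are illustrative assumptions, not @sim/logger's API.
enum LogLevel {
  DEBUG = 0,
  INFO = 1,
  WARN = 2,
  ERROR = 3,
}

function shouldEmit(level: LogLevel, minLevel: LogLevel): boolean {
  return level >= minLevel
}

function createLogger(name: string, minLevel: LogLevel) {
  const emit = (level: LogLevel, ...args: unknown[]) => {
    if (!shouldEmit(level, minLevel)) return // silently dropped, no output at all
    console.log(`[${name}]`, ...args)
  }
  return {
    warn: (...args: unknown[]) => emit(LogLevel.WARN, ...args),
    error: (...args: unknown[]) => emit(LogLevel.ERROR, ...args),
  }
}

// With the assumed production default of ERROR, warn() produces no output:
const logger = createLogger('StarsAPI', LogLevel.ERROR)
logger.warn('GitHub API request failed:', 500) // suppressed
logger.error('GitHub API request failed:', 500) // emitted
```

This is why the change is behavioral rather than cosmetic: `console.warn` writes unconditionally, while a level-gated `warn` is a no-op whenever the configured minimum is above WARN.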

return NextResponse.json({ stars: formatStarCount(19400) })
}

const data = await response.json()
return NextResponse.json({ stars: formatStarCount(Number(data?.stargazers_count ?? 19400)) })
} catch (error) {
console.warn('Error fetching GitHub stars:', error)
logger.warn('Error fetching GitHub stars:', error)
Comment on lines +28 to +35

Warn-level logs are suppressed in production by default

The @sim/logger package sets minLevel: ERROR in production environments, which means both logger.warn() calls will be silently dropped in production unless a LOG_LEVEL env override is configured. Prior to this change, console.warn() always printed regardless of environment.

This means GitHub API failures — both non-OK responses and thrown exceptions — will be invisible in production logs by default. Consider whether these should be elevated to logger.error() to ensure they remain visible in production, or confirm that LOG_LEVEL=WARN is set in your production environment to preserve the previous behavior.
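If the env-override route is chosen, resolving the minimum level might look like the following sketch. The `LOG_LEVEL` variable name comes from the comment above, but the resolution logic itself is an assumption, not `@sim/logger`'s actual implementation:

```typescript
// Hedged sketch of resolving a minimum log level from the environment.
// The default-to-ERROR-in-production rule mirrors the behavior described
// in the review comment; the real @sim/logger code may differ.
const LEVELS = ['DEBUG', 'INFO', 'WARN', 'ERROR'] as const
type Level = (typeof LEVELS)[number]

function resolveMinLevel(env: Record<string, string | undefined>): Level {
  const override = env.LOG_LEVEL?.toUpperCase()
  if (override && (LEVELS as readonly string[]).includes(override)) {
    return override as Level // explicit override wins
  }
  // Assumed default: ERROR in production, DEBUG everywhere else
  return env.NODE_ENV === 'production' ? 'ERROR' : 'DEBUG'
}
```

Under this model, deploying with `LOG_LEVEL=WARN` would preserve the pre-change visibility of these warnings without touching the code.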

Suggested change
logger.warn('GitHub API request failed:', response.status)
logger.error('GitHub API request failed:', response.status)

and

Suggested change
logger.warn('Error fetching GitHub stars:', error)
logger.error('Error fetching GitHub stars:', error)

return NextResponse.json({ stars: formatStarCount(19400) })
}
}
24 changes: 21 additions & 3 deletions apps/sim/providers/openai/core.ts
@@ -261,11 +261,17 @@ export async function executeResponsesProviderRequest(
const postResponses = async (
body: Record<string, unknown>
): Promise<OpenAI.Responses.Response> => {
// Create a 60s timeout signal and combine with any existing abort signal
const timeoutSignal = AbortSignal.timeout(60000) // 60 seconds
const combinedSignal = request.abortSignal
? AbortSignal.any([timeoutSignal, request.abortSignal])
: timeoutSignal

60s timeout fails reasoning models in non-streaming calls

High Severity

The postResponses function applies a hard 60-second AbortSignal.timeout to every non-streaming API call. This function is the main path for tool-calling and reasoning model requests. Reasoning models (o1, o3) — explicitly supported via request.reasoningEffort — can spend minutes thinking before producing a response. The 60-second timeout will cause these requests to abort with an error, a regression from the prior behavior which had no hard timeout.

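One way to keep the timeout for ordinary requests while exempting reasoning models is to make the deadline optional per call. This is a sketch under assumed names; `buildSignal` and the reasoning-effort opt-out are not part of the codebase:

```typescript
// Hedged sketch: combine an optional hard timeout with an optional caller
// abort signal. Passing timeoutMs as undefined (e.g. for reasoning models)
// disables the deadline entirely. AbortSignal.any requires Node 20+.
function buildSignal(
  abortSignal?: AbortSignal,
  timeoutMs?: number
): AbortSignal | undefined {
  const timeoutSignal =
    timeoutMs !== undefined ? AbortSignal.timeout(timeoutMs) : undefined
  if (timeoutSignal && abortSignal) {
    return AbortSignal.any([timeoutSignal, abortSignal])
  }
  return timeoutSignal ?? abortSignal
}

// Hypothetical call site: no deadline when reasoning effort is requested
// const signal = buildSignal(
//   request.abortSignal,
//   request.reasoningEffort ? undefined : 60_000
// )
```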


const response = await fetch(config.endpoint, {
method: 'POST',
headers: config.headers,
body: JSON.stringify(body),
signal: request.abortSignal,
signal: combinedSignal,
})

if (!response.ok) {
@@ -283,11 +289,17 @@ export async function executeResponsesProviderRequest(
if (request.stream && (!tools || tools.length === 0)) {
logger.info(`Using streaming response for ${config.providerLabel} request`)

// Create a 60s timeout signal and combine with any existing abort signal
const timeoutSignal = AbortSignal.timeout(60000) // 60 seconds
const combinedSignal = request.abortSignal
? AbortSignal.any([timeoutSignal, request.abortSignal])
: timeoutSignal

const streamResponse = await fetch(config.endpoint, {
method: 'POST',
headers: config.headers,
body: JSON.stringify(createRequestBody(initialInput, { stream: true })),
signal: request.abortSignal,
signal: combinedSignal,

60s timeout aborts long-running streaming LLM responses

High Severity

The AbortSignal.timeout(60000) starts counting from creation (before the fetch call) and remains attached to the response body stream. For streaming responses, createReadableStreamFromResponses reads chunks via reader.read() asynchronously after the function returns. If the total time from fetch start to stream completion exceeds 60 seconds, the signal fires and aborts the stream mid-response. LLM streaming responses routinely exceed 60 seconds for long outputs or reasoning models, so this will silently truncate user-facing responses.

Additional Locations (1)
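An alternative that bounds stall time without capping total stream duration is an idle timeout that resets on every chunk. The helper below is a sketch; its name and wiring are assumptions, not existing code:

```typescript
// Hedged sketch: abort only when no data arrives for idleMs, rather than
// when total elapsed time exceeds a fixed deadline. The caller invokes
// touch() after each successful reader.read() and clear() when the stream
// completes, so a healthy long-running stream is never cut off.
function createIdleAbort(idleMs: number) {
  const controller = new AbortController()
  const arm = () =>
    setTimeout(() => controller.abort(new Error('idle timeout')), idleMs)
  let timer = arm()
  return {
    signal: controller.signal,
    touch() {
      // a chunk arrived: restart the countdown
      clearTimeout(timer)
      timer = arm()
    },
    clear() {
      clearTimeout(timer)
    },
  }
}
```

A stream that emits at least one chunk per `idleMs` runs indefinitely, while a wedged connection is still aborted promptly.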

})

if (!streamResponse.ok) {
@@ -702,11 +714,17 @@ export async function executeResponsesProviderRequest(
}
}

// Create a 60s timeout signal and combine with any existing abort signal
const timeoutSignal = AbortSignal.timeout(60000) // 60 seconds
const combinedSignal = request.abortSignal
? AbortSignal.any([timeoutSignal, request.abortSignal])
: timeoutSignal

const streamResponse = await fetch(config.endpoint, {
method: 'POST',
headers: config.headers,
body: JSON.stringify(createRequestBody(currentInput, streamOverrides)),
signal: request.abortSignal,
signal: combinedSignal,
})

if (!streamResponse.ok) {