fix(langchain): Wrap finish_reason in array for gen_ai span attribute#5666

Open
ericapisani wants to merge 1 commit into master from ep/py-2139-fix-format-of-finish-reason-94s
Conversation

@ericapisani (Member)

The gen_ai.response.finish_reasons attribute should be an array of strings per the semantic convention. Wrap the single finish_reason value in a list before setting the span data.

Resolves #5664 and PY-2139

The gen_ai.response.finish_reasons attribute should be an array of
strings per the semantic convention. Wrap the single finish_reason
value in a list before setting the span data.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
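The change described above can be sketched as follows. This is a minimal, self-contained illustration of wrapping a single `finish_reason` in a list before attaching it to the span, per the semantic convention; `MockSpan` and `record_finish_reason` are stand-ins for illustration, not the actual sentry-sdk source.

```python
# Semantic-convention key used by the PR (the real code references it via
# SPANDATA.GEN_AI_RESPONSE_FINISH_REASONS).
GEN_AI_RESPONSE_FINISH_REASONS = "gen_ai.response.finish_reasons"


class MockSpan:
    """Stand-in for a sentry_sdk span, for illustration only."""

    def __init__(self):
        self.data = {}

    def set_data(self, key, value):
        self.data[key] = value


def record_finish_reason(span, generation_info):
    finish_reason = generation_info.get("finish_reason")
    if finish_reason is not None:
        # Wrap the single value in a list: the convention requires an
        # array of strings, not a bare string.
        span.set_data(GEN_AI_RESPONSE_FINISH_REASONS, [finish_reason])


span = MockSpan()
record_finish_reason(span, {"finish_reason": "stop"})
print(span.data[GEN_AI_RESPONSE_FINISH_REASONS])  # -> ['stop']
```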
@ericapisani requested a review from a team as a code owner on March 13, 2026 at 15:57
@linear-code (bot) commented Mar 13, 2026

@github-actions (Contributor)

Semver Impact of This PR

🟢 Patch (bug fixes)

📋 Changelog Preview

This is how your changes will appear in the changelog.
Entries from this PR are highlighted with a left border (blockquote style).


New Features ✨

Anthropic

  • Emit gen_ai.chat spans for asynchronous messages.stream() by alexander-alderman-webb in #5572
  • Emit AI Client Spans for synchronous messages.stream() by alexander-alderman-webb in #5565
  • Set gen_ai.response.id span attribute by ericapisani in #5662
  • Add gen_ai.system attribute to spans by ericapisani in #5661

Pydantic Ai

  • Support ImageUrl content type in span instrumentation by ericapisani in #5629
  • Add tool description to execute_tool spans by ericapisani in #5596

Other

  • (crons) Add owner field to MonitorConfig by julwhitney13 in #5610
  • (otlp) Add collector_url option to OTLPIntegration by sl0thentr0py in #5603

Bug Fixes 🐛

  • (anthropic) Close span on GeneratorExit by alexander-alderman-webb in #5643
  • (celery) Propagate user-set headers by sentrivana in #5581
  • (langchain) Wrap finish_reason in array for gen_ai span attribute by ericapisani in #5666
  • (utils) Avoid double serialization of strings in safe_serialize by ericapisani in #5587
  • Enable unused import ruff check and fix unused imports by sentrivana in #5652

Documentation 📚

  • (openai-agents) Remove inapplicable comment by alexander-alderman-webb in #5495
  • Add AGENTS.md by sentrivana in #5579
  • Add set_attribute example to changelog by sentrivana in #5578

Internal Changes 🔧

Anthropic

  • Skip accumulation logic for unexpected types in streamed response by alexander-alderman-webb in #5564
  • Factor out streamed result handling by alexander-alderman-webb in #5563
  • Stream valid JSON by alexander-alderman-webb in #5641
  • Stop mocking response iterator by alexander-alderman-webb in #5573

Docs

  • Remove agentic codebase documentation workflows by dingsdax in #5655
  • Switch agentic workflows from Copilot to Claude engine by dingsdax in #5654
  • Add agentic workflows for codebase documentation by dingsdax in #5649

Openai Agents

  • Do not fail on new tool fields by alexander-alderman-webb in #5625
  • Stop expecting a specific function name by alexander-alderman-webb in #5623
  • Set streaming header when library uses with_streaming_response() by alexander-alderman-webb in #5583
  • Replace mocks with httpx for streamed responses by alexander-alderman-webb in #5580
  • Replace mocks with httpx in non-MCP tool tests by alexander-alderman-webb in #5602
  • Replace mocks with httpx in MCP tool tests by alexander-alderman-webb in #5605
  • Replace mocks with httpx in handoff tests by alexander-alderman-webb in #5604
  • Replace mocks with httpx in API error test by alexander-alderman-webb in #5601
  • Replace mocks with httpx in non-error single-response tests by alexander-alderman-webb in #5600
  • Remove test for unreachable state by alexander-alderman-webb in #5584
  • Expect namespace tool field for new openai versions by alexander-alderman-webb in #5599

Other

  • (graphene) Simplify span creation by sentrivana in #5648
  • (httpx) Resolve type checking failures by alexander-alderman-webb in #5626
  • (pyramid) Support alpha suffixes in version parsing by alexander-alderman-webb in #5618
  • (rust) Don't implement separate scope management by sentrivana in #5639
  • (strawberry) Simplify span creation by sentrivana in #5647
  • Remove custom warden action by sentrivana in #5653
  • Add httpx to linting requirements by alexander-alderman-webb in #5644
  • Remove CodeQL action by sentrivana in #5616
  • Normalize dots in package names in populate_tox.py by alexander-alderman-webb in #5574
  • Do not run actions on potel-base by sentrivana in #5614

🤖 This preview updates automatically when you update the PR.

@github-actions (Contributor) commented Mar 13, 2026

Codecov Results 📊

13 passed | Total: 13 | Pass Rate: 100% | Execution Time: 8.12s

All tests are passing successfully.

✅ Patch coverage is 100.00%. Project has 14158 uncovered lines.

Files with missing lines (1)

| File | Patch % | Missing Lines |
| --- | --- | --- |
| langchain.py | 3.28% ⚠️ | 590 |

Generated by Codecov Action

Comment on lines 554 to 561
```diff
 finish_reason = generation.generation_info.get("finish_reason")
 if finish_reason is not None:
     span.set_data(
-        SPANDATA.GEN_AI_RESPONSE_FINISH_REASONS, finish_reason
+        SPANDATA.GEN_AI_RESPONSE_FINISH_REASONS,
+        [finish_reason],
     )
 except AttributeError:
     pass
```
Bug: The on_chat_model_end callback closes the span before on_llm_end can run, preventing the finish_reason from being set on chat model spans.
Severity: MEDIUM

Suggested Fix

The logic to set the finish_reason should be moved from on_llm_end to on_chat_model_end for chat model spans. This ensures the attribute is set before the span is closed and removed from the span_map.

Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not valid.

Location: sentry_sdk/integrations/langchain.py#L554-L561

Potential issue: For chat models, LangChain invokes both `on_chat_model_end` and
`on_llm_end` for the same `run_id`. The `on_chat_model_end` callback calls `_exit_span`,
which closes the span and removes it from the internal `span_map`. Subsequently, when
`on_llm_end` executes, it cannot find the span for the given `run_id` and returns early.
This prevents the `GEN_AI_RESPONSE_FINISH_REASONS` attribute from being set on the span,
meaning chat model spans will be missing the `finish_reason` in production environments.
The existing tests may pass because the mocks do not replicate this dual-callback
behavior.
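The callback-ordering issue the reviewer describes can be sketched as below. All names here (`span_map`, `_exit_span`, the handler functions) are illustrative simplifications of the behavior described in the comment, not the actual sentry-sdk implementation: for a chat model, `on_chat_model_end` pops the span from the map, so the subsequent `on_llm_end` for the same `run_id` finds nothing and never sets the finish reason.

```python
# Hypothetical sketch of the dual-callback ordering problem.
span_map = {}


class Span:
    def __init__(self):
        self.data = {}
        self.closed = False


def on_chat_model_start(run_id):
    span_map[run_id] = Span()


def _exit_span(run_id):
    # Closes the span and removes it from the internal map.
    span = span_map.pop(run_id, None)
    if span is not None:
        span.closed = True
    return span


def on_chat_model_end(run_id):
    return _exit_span(run_id)


def on_llm_end(run_id, finish_reason):
    span = span_map.get(run_id)
    if span is None:
        # Span already closed by on_chat_model_end: returns early,
        # so the finish_reasons attribute is never set.
        return None
    span.data["gen_ai.response.finish_reasons"] = [finish_reason]
    return span


on_chat_model_start("run-1")
chat_span = on_chat_model_end("run-1")   # fires first, closes the span
result = on_llm_end("run-1", "stop")     # fires second, finds no span
print(result)          # -> None
print(chat_span.data)  # -> {} (finish_reason missing)
```

Moving the attribute-setting logic into `on_chat_model_end`, as the suggested fix proposes, would set the data before the span is popped and closed.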

Did we get this right? 👍 / 👎 to inform future reviews.

Development

Successfully merging this pull request may close these issues.

[Langchain] GEN_AI_RESPONSE_FINISH_REASONS needs to be an array of strings in on_llm_end