fix(langchain): Wrap finish_reason in array for gen_ai span attribute #5666
ericapisani wants to merge 1 commit into master from
Conversation
The gen_ai.response.finish_reasons attribute should be an array of strings per the semantic convention. Wrap the single finish_reason value in a list before setting the span data. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Semver Impact of This PR: 🟢 Patch (bug fixes)
Codecov Results 📊: ✅ 13 passed | Total: 13 | Pass Rate: 100% | Execution Time: 8.12s. All tests are passing. Patch coverage is 100.00%; the project has 14158 uncovered lines overall.
```diff
 finish_reason = generation.generation_info.get("finish_reason")
 if finish_reason is not None:
     span.set_data(
-        SPANDATA.GEN_AI_RESPONSE_FINISH_REASONS, finish_reason
+        SPANDATA.GEN_AI_RESPONSE_FINISH_REASONS,
+        [finish_reason],
     )
 except AttributeError:
     pass
```
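To illustrate the attribute shape the change produces, here is a minimal standalone sketch (the helper name and dict-based span data are hypothetical, not the SDK's actual API): the semantic convention expects `gen_ai.response.finish_reasons` to be a list of strings, so a single `finish_reason` string gets wrapped in a one-element list.

```python
def set_finish_reasons(span_data: dict, finish_reason):
    """Store finish_reason as a one-element list, per the gen_ai
    semantic convention (the attribute is an array of strings)."""
    if finish_reason is not None:
        span_data["gen_ai.response.finish_reasons"] = [finish_reason]
    return span_data

# A single "stop" reason becomes ["stop"]; None sets nothing.
data = set_finish_reasons({}, "stop")
```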
Bug: The on_chat_model_end callback closes the span before on_llm_end can run, preventing the finish_reason from being set on chat model spans.
Severity: MEDIUM
Suggested Fix
The logic to set the finish_reason should be moved from on_llm_end to on_chat_model_end for chat model spans. This ensures the attribute is set before the span is closed and removed from the span_map.
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location: sentry_sdk/integrations/langchain.py#L554-L561
Potential issue: For chat models, LangChain invokes both `on_chat_model_end` and
`on_llm_end` for the same `run_id`. The `on_chat_model_end` callback calls `_exit_span`,
which closes the span and removes it from the internal `span_map`. Subsequently, when
`on_llm_end` executes, it cannot find the span for the given `run_id` and returns early.
This prevents the `GEN_AI_RESPONSE_FINISH_REASONS` attribute from being set on the span,
meaning chat model spans will be missing the `finish_reason` in production environments.
The existing tests may pass because the mocks do not replicate this dual-callback
behavior.
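The ordering problem described above can be sketched with a toy model (the function bodies and dict-based spans here are illustrative, not the actual Sentry SDK implementation): once `on_chat_model_end` pops the span from the map, the later `on_llm_end` for the same `run_id` finds nothing and returns early, so the finish reason is never recorded.

```python
# Toy reproduction of the dual-callback ordering described in the review.
span_map = {}

def on_chat_model_end(run_id):
    # Mirrors _exit_span: closes the span and removes it from span_map.
    return span_map.pop(run_id, None)

def on_llm_end(run_id, finish_reason):
    span = span_map.get(run_id)
    if span is None:
        return None  # early return: finish_reason is never set
    span["gen_ai.response.finish_reasons"] = [finish_reason]
    return span

span_map["r1"] = {}
on_chat_model_end("r1")          # chat-model callback closes the span first
result = on_llm_end("r1", "stop")  # span already gone, attribute not set
```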
Resolves #5664 and PY-2139