Conversation

@KUNDAN1334

Solved #8

@bnarasimha21 bnarasimha21 requested a review from Copilot October 28, 2025 12:58

Copilot AI left a comment

Pull Request Overview

This PR implements structured output support for the ChatGradient model using Pydantic models, enabling the model to return validated, type-safe responses conforming to predefined schemas.

Key Changes:

  • Added with_structured_output() method to ChatGradient that accepts Pydantic models, TypedDict, or JSON schemas
  • Implemented JSON extraction and parsing utilities with error handling
  • Added comprehensive unit tests covering basic Pydantic output, validation errors, and raw mode
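
As a rough sketch of how a schema can be turned into the prompt instructions described above (the schema dict, helper name, and wording here are illustrative assumptions, not the PR's actual code):

```python
import json

# Hypothetical example schema; with_structured_output() also accepts
# Pydantic models and TypedDicts, which can be converted to JSON schema.
person_schema = {
    "title": "Person",
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "The person's name"},
        "age": {"type": "integer", "description": "The person's age"},
    },
    "required": ["name", "age"],
}

def build_format_instructions(schema: dict) -> str:
    """Render a JSON schema as system-prompt text requesting conforming JSON."""
    return (
        "You are a helpful assistant that outputs valid JSON matching this schema:\n"
        "```json\n" + json.dumps(schema, indent=2) + "\n```"
    )

print(build_format_instructions(person_schema))
```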

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.

| File | Description |
| --- | --- |
| langchain_gradient/chat_models.py | Implemented with_structured_output() method and helper functions for parsing JSON from model responses |
| tests/unit_tests/test_structured_output.py | Added unit tests for structured output functionality with Pydantic models |
| docs/structured_output_examples.md | Added documentation and usage examples for the structured output feature |

```python
content = message.content if hasattr(message, 'content') else str(message)

# Try to extract JSON from code blocks (``````)
pattern = r"``````"
```

Copilot AI Oct 28, 2025

The regex pattern r"``````" will only match exactly six backticks in a row. Based on the test mock responses, which use standard triple-backtick code blocks, this pattern should be r"```(.+?)```" to correctly extract JSON from markdown code blocks.

Suggested change:

```diff
- pattern = r"``````"
+ pattern = r"```(.+?)```"
```

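For reference, a self-contained sketch of such an extraction helper (the behavior mirrors what _extract_and_parse_json appears intended to do; this exact implementation is an assumption):

```python
import json
import re

def extract_and_parse_json(content: str) -> dict:
    """Pull a JSON payload out of a markdown code block, else parse content as-is."""
    # Skip an optional 'json' language tag after the opening fence;
    # re.DOTALL lets '.' match newlines inside the block.
    match = re.search(r"```(?:json)?\s*(.+?)\s*```", content, re.DOTALL)
    payload = match.group(1) if match else content
    return json.loads(payload)

print(extract_and_parse_json('```json\n{"name": "John", "age": 30}\n```'))
# → {'name': 'John', 'age': 30}
```

Note that without re.DOTALL, '.' would not match newlines, so multi-line JSON inside a code block would be missed.
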
```python
prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a helpful assistant that outputs valid JSON matching this schema:\n"
     "``````\n"
```

Copilot AI Oct 28, 2025

The prompt template instructs the model with six backticks, which is not standard markdown code block syntax. The template should use triple backticks (```) to match standard markdown, or the regex pattern in _extract_and_parse_json needs to be updated accordingly.

```python
    ("system",
     "You are a helpful assistant that outputs valid JSON matching the schema below.\n"
     "{format_instructions}\n"
     "Always wrap your JSON output in `````` tags."),
```

Copilot AI Oct 28, 2025

The instruction says 'wrap your JSON output in `````` tags' but standard markdown code blocks use triple backticks (```). This mismatch will cause parsing to fail. Change to triple backticks or update the regex pattern to match 6 backticks.

Suggested change:

```diff
- "Always wrap your JSON output in `````` tags."),
+ "Always wrap your JSON output in ``` tags."),
```
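
To see the mismatch concretely, a quick stdlib check (the six-backtick pattern is the one committed in the diff; the triple-backtick variant is one possible fix):

```python
import re

response = '```json\n{"name": "John"}\n```'  # a typical model reply

# Pattern as committed: six literal backticks in a row, never present here.
print(re.search(r"``````", response))  # → None

# Triple-backtick pattern with DOTALL so the capture can span newlines.
match = re.search(r"```(?:json)?\s*(.+?)\s*```", response, re.DOTALL)
print(match.group(1))  # → {"name": "John"}
```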

```
else:
    print(f"Parsed: {result['parsed']}")

undefined
```

Copilot AI Oct 28, 2025

The file ends with 'undefined' which appears to be an unintended placeholder or error. This line should be removed.

Suggested change:

```diff
- undefined
```

Comment on lines +8 to +48
from pydantic import BaseModel, Field
from langchain_gradient import ChatGradient

class Person(BaseModel):
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")
    email: str = Field(description="Email address")

llm = ChatGradient(model="llama3.3-70b-instruct")
structured_llm = llm.with_structured_output(Person)

response = structured_llm.invoke(
    "Create a person named John, age 30, email [email protected]"
)
print(response)

Output: Person(name='John', age=30, email='[email protected]')

## Multiple Objects

from typing import List

class PersonList(BaseModel):
    people: List[Person]

structured_llm = llm.with_structured_output(PersonList)
response = structured_llm.invoke("Create 3 people with different names")
print(response.people)  # List of Person objects

## Error Handling

Get raw output and errors
structured_llm = llm.with_structured_output(Person, include_raw=True)

result = structured_llm.invoke("Create a person")

if result["parsing_error"]:
    print(f"Error: {result['parsing_error']}")
    print(f"Raw: {result['raw'].content}")
else:
    print(f"Parsed: {result['parsed']}")

Copilot AI Oct 28, 2025

The code block is missing proper markdown formatting. It should be wrapped in triple backticks with the language identifier (```python) to render correctly as a code block.

Suggested change:

```python
from pydantic import BaseModel, Field
from langchain_gradient import ChatGradient

class Person(BaseModel):
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")
    email: str = Field(description="Email address")

llm = ChatGradient(model="llama3.3-70b-instruct")
structured_llm = llm.with_structured_output(Person)

response = structured_llm.invoke(
    "Create a person named John, age 30, email [email protected]"
)
print(response)
# Output: Person(name='John', age=30, email='[email protected]')
```

## Multiple Objects

```python
from typing import List

class PersonList(BaseModel):
    people: List[Person]

structured_llm = llm.with_structured_output(PersonList)
response = structured_llm.invoke("Create 3 people with different names")
print(response.people)  # List of Person objects
```

## Error Handling

Get raw output and errors

```python
structured_llm = llm.with_structured_output(Person, include_raw=True)

result = structured_llm.invoke("Create a person")

if result["parsing_error"]:
    print(f"Error: {result['parsing_error']}")
    print(f"Raw: {result['raw'].content}")
else:
    print(f"Parsed: {result['parsed']}")
```


## Error Handling

Get raw output and errors

Copilot AI Oct 28, 2025

This line should be formatted as a comment or descriptive text (e.g., '# Get raw output and errors') rather than appearing as standalone text outside a code block.

Suggested change:

```diff
- Get raw output and errors
+ **Get raw output and errors**
```
