
Messages and chat history

PydanticAI provides access to messages exchanged during an agent run. These messages can be used both to continue a coherent conversation, and to understand how an agent performed.

Accessing Messages from Results

After running an agent, you can access the messages exchanged during that run from the result object.

Both RunResult (returned by Agent.run, Agent.run_sync) and StreamedRunResult (returned by Agent.run_stream) have the following methods:

  • all_messages(): returns all messages, including messages from prior runs and system prompts. There's also a variant that returns JSON bytes, all_messages_json().
  • new_messages(): returns only the messages from the current run, excluding system prompts. This is generally the data you want when passing messages to further runs to continue the conversation. There's also a variant that returns JSON bytes, new_messages_json().

StreamedRunResult and complete messages

On StreamedRunResult, the messages returned from these methods will only include the final response message once the stream has finished.

That is, the complete response is only available after you've awaited one of the coroutines that consumes the stream.

Example of accessing methods on a RunResult:

Accessing messages from a RunResult
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result = agent.run_sync('Tell me a joke.')
print(result.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.

# all messages from the run
print(result.all_messages())
"""
[
    SystemPrompt(content='Be a helpful assistant.', role='system'),
    UserPrompt(
        content='Tell me a joke.',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='user',
    ),
    ModelTextResponse(
        content='Did you hear about the toothpaste scandal? They called it Colgate.',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='model-text-response',
    ),
]
"""

# messages excluding system prompts
print(result.new_messages())
"""
[
    UserPrompt(
        content='Tell me a joke.',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='user',
    ),
    ModelTextResponse(
        content='Did you hear about the toothpaste scandal? They called it Colgate.',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='model-text-response',
    ),
]
"""
(This example is complete, it can be run "as is")

Example of accessing methods on a StreamedRunResult:

Accessing messages from a StreamedRunResult
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')


async def main():
    async with agent.run_stream('Tell me a joke.') as result:
        # incomplete messages before the stream finishes
        print(result.all_messages())
        """
        [
            SystemPrompt(content='Be a helpful assistant.', role='system'),
            UserPrompt(
                content='Tell me a joke.',
                timestamp=datetime.datetime(
                    2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
                ),
                role='user',
            ),
        ]
        """

        async for text in result.stream():
            print(text)
            #> Did you
            #> Did you hear about
            #> Did you hear about the toothpaste
            #> Did you hear about the toothpaste scandal? They
            #> Did you hear about the toothpaste scandal? They called it
            #> Did you hear about the toothpaste scandal? They called it Colgate.

        # complete messages once the stream finishes
        print(result.all_messages())
        """
        [
            SystemPrompt(content='Be a helpful assistant.', role='system'),
            UserPrompt(
                content='Tell me a joke.',
                timestamp=datetime.datetime(
                    2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
                ),
                role='user',
            ),
            ModelTextResponse(
                content='Did you hear about the toothpaste scandal? They called it Colgate.',
                timestamp=datetime.datetime(
                    2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
                ),
                role='model-text-response',
            ),
        ]
        """
(This example is complete, it can be run "as is" inside an async context)

Using Messages as Input for Further Agent Runs

The primary use of message histories in PydanticAI is to maintain context across multiple agent runs.

To use existing messages in a run, pass them to the message_history parameter of Agent.run, Agent.run_sync or Agent.run_stream.

all_messages() vs. new_messages()

PydanticAI will inspect any messages it receives for system prompts.

If any system prompts are found in message_history, new system prompts are not generated; otherwise, new system prompts are generated and inserted before message_history in the list of messages used for the run.

Thus you can decide whether you want to use system prompts from a previous run or generate them again by using all_messages() or new_messages().
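The rule can be sketched as a simple check (a hypothetical helper for illustration, not part of the PydanticAI API):

```python
from types import SimpleNamespace


def should_generate_system_prompts(message_history) -> bool:
    """Sketch of the rule above: generate system prompts only when the
    provided history contains none."""
    return not any(getattr(m, 'role', None) == 'system' for m in message_history)


# History from new_messages() has no system prompt, so fresh ones are generated.
print(should_generate_system_prompts([SimpleNamespace(role='user')]))
#> True
# History from all_messages() already includes one, so it is reused.
print(should_generate_system_prompts([SimpleNamespace(role='system')]))
#> False
```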

Reusing messages in a conversation
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result1 = agent.run_sync('Tell me a joke.')
print(result1.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.

result2 = agent.run_sync('Explain?', message_history=result1.new_messages())
print(result2.data)
#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.

print(result2.all_messages())
"""
[
    SystemPrompt(content='Be a helpful assistant.', role='system'),
    UserPrompt(
        content='Tell me a joke.',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='user',
    ),
    ModelTextResponse(
        content='Did you hear about the toothpaste scandal? They called it Colgate.',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='model-text-response',
    ),
    UserPrompt(
        content='Explain?',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='user',
    ),
    ModelTextResponse(
        content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='model-text-response',
    ),
]
"""
(This example is complete, it can be run "as is")

Other ways of using messages

Since messages are defined by simple dataclasses, you can manually create and manipulate them, e.g. for testing.

The message format is independent of the model used, so you can use messages in different agents, or the same agent with different models.
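For instance, a fixed history can be built by hand as a test fixture. The sketch below uses minimal stand-in dataclasses mirroring the shapes documented under API Reference, so it runs without the library installed; with PydanticAI available you would import UserPrompt and ModelTextResponse from pydantic_ai.messages instead.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Literal


# Stand-ins for pydantic_ai.messages.UserPrompt / ModelTextResponse,
# reduced to the fields shown in the API reference below.
@dataclass
class UserPrompt:
    content: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    role: Literal['user'] = 'user'


@dataclass
class ModelTextResponse:
    content: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    role: Literal['model-text-response'] = 'model-text-response'


# A hand-built history, e.g. for asserting against in a test suite.
history = [
    UserPrompt(content='Tell me a joke.'),
    ModelTextResponse(content='Did you hear about the toothpaste scandal? They called it Colgate.'),
]
print([m.role for m in history])
#> ['user', 'model-text-response']
```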

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result1 = agent.run_sync('Tell me a joke.')
print(result1.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.

result2 = agent.run_sync(
    'Explain?', model='gemini-1.5-pro', message_history=result1.new_messages()
)
print(result2.data)
#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.

print(result2.all_messages())
"""
[
    SystemPrompt(content='Be a helpful assistant.', role='system'),
    UserPrompt(
        content='Tell me a joke.',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='user',
    ),
    ModelTextResponse(
        content='Did you hear about the toothpaste scandal? They called it Colgate.',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='model-text-response',
    ),
    UserPrompt(
        content='Explain?',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='user',
    ),
    ModelTextResponse(
        content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
        timestamp=datetime.datetime(
            2032, 1, 2, 3, 4, 5, 6, tzinfo=datetime.timezone.utc
        ),
        role='model-text-response',
    ),
]
"""

Examples

For a more complete example of using messages in conversations, see the chat app example.

API Reference

Message module-attribute

Any message sent to or returned by a model.

SystemPrompt dataclass

A system prompt, generally written by the application developer.

This gives the model context and guidance on how to respond.

Source code in pydantic_ai/messages.py
@dataclass
class SystemPrompt:
    """A system prompt, generally written by the application developer.

    This gives the model context and guidance on how to respond.
    """

    content: str
    """The content of the prompt."""
    role: Literal['system'] = 'system'
    """Message type identifier, this type is available on all message as a discriminator."""

content instance-attribute

content: str

The content of the prompt.

role class-attribute instance-attribute

role: Literal['system'] = 'system'

Message type identifier; this type is available on all messages as a discriminator.

UserPrompt dataclass

A user prompt, generally written by the end user.

Content comes from the user_prompt parameter of Agent.run, Agent.run_sync, and Agent.run_stream.

Source code in pydantic_ai/messages.py
@dataclass
class UserPrompt:
    """A user prompt, generally written by the end user.

    Content comes from the `user_prompt` parameter of [`Agent.run`][pydantic_ai.Agent.run],
    [`Agent.run_sync`][pydantic_ai.Agent.run_sync], and [`Agent.run_stream`][pydantic_ai.Agent.run_stream].
    """

    content: str
    """The content of the prompt."""
    timestamp: datetime = field(default_factory=_now_utc)
    """The timestamp of the prompt."""
    role: Literal['user'] = 'user'
    """Message type identifier, this type is available on all message as a discriminator."""

content instance-attribute

content: str

The content of the prompt.

timestamp class-attribute instance-attribute

timestamp: datetime = field(default_factory=_now_utc)

The timestamp of the prompt.

role class-attribute instance-attribute

role: Literal['user'] = 'user'

Message type identifier; this type is available on all messages as a discriminator.

ToolReturn dataclass

A tool return message, this encodes the result of running a retriever.

Source code in pydantic_ai/messages.py
@dataclass
class ToolReturn:
    """A tool return message, this encodes the result of running a retriever."""

    tool_name: str
    """The name of the "tool" was called."""
    content: str | dict[str, Any]
    """The return value."""
    tool_id: str | None = None
    """Optional tool identifier, this is used by some models including OpenAI."""
    timestamp: datetime = field(default_factory=_now_utc)
    """The timestamp, when the tool returned."""
    role: Literal['tool-return'] = 'tool-return'
    """Message type identifier, this type is available on all message as a discriminator."""

    def model_response_str(self) -> str:
        if isinstance(self.content, str):
            return self.content
        else:
            content = tool_return_value_object.validate_python(self.content)
            return tool_return_value_object.dump_json(content).decode()

    def model_response_object(self) -> dict[str, Any]:
        if isinstance(self.content, str):
            return {'return_value': self.content}
        else:
            return tool_return_value_object.validate_python(self.content)

tool_name instance-attribute

tool_name: str

The name of the "tool" that was called.

content instance-attribute

content: str | dict[str, Any]

The return value.

tool_id class-attribute instance-attribute

tool_id: str | None = None

Optional tool identifier, this is used by some models including OpenAI.

timestamp class-attribute instance-attribute

timestamp: datetime = field(default_factory=_now_utc)

The timestamp of when the tool returned.

role class-attribute instance-attribute

role: Literal['tool-return'] = 'tool-return'

Message type identifier; this type is available on all messages as a discriminator.
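The model_response_object helper normalizes the content before it is handed back to the model: str content is wrapped under a return_value key so the model always sees an object, while dict content passes through. A standard-library sketch of that shape (not the pydantic-backed implementation):

```python
def model_response_object(content: 'str | dict') -> dict:
    """Sketch of ToolReturn.model_response_object: always produce an object."""
    if isinstance(content, str):
        # Plain-text results are wrapped under a conventional key.
        return {'return_value': content}
    # Structured results pass through as-is.
    return content


print(model_response_object('42'))
#> {'return_value': '42'}
print(model_response_object({'temperature': 21.5}))
#> {'temperature': 21.5}
```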

RetryPrompt dataclass

A message back to a model asking it to try again.

This can be sent for a number of reasons:

  • Pydantic validation of retriever arguments failed, here content is derived from a Pydantic ValidationError
  • a retriever raised a ModelRetry exception
  • no retriever was found for the tool name
  • the model returned plain text when a structured response was expected
  • Pydantic validation of a structured response failed, here content is derived from a Pydantic ValidationError
  • a result validator raised a ModelRetry exception
Source code in pydantic_ai/messages.py
@dataclass
class RetryPrompt:
    """A message back to a model asking it to try again.

    This can be sent for a number of reasons:

    * Pydantic validation of retriever arguments failed, here content is derived from a Pydantic
      [`ValidationError`][pydantic_core.ValidationError]
    * a retriever raised a [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] exception
    * no retriever was found for the tool name
    * the model returned plain text when a structured response was expected
    * Pydantic validation of a structured response failed, here content is derived from a Pydantic
      [`ValidationError`][pydantic_core.ValidationError]
    * a result validator raised a [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] exception
    """

    content: list[pydantic_core.ErrorDetails] | str
    """Details of why and how the model should retry.

    If the retry was triggered by a [`ValidationError`][pydantic_core.ValidationError], this will be a list of
    error details.
    """
    tool_name: str | None = None
    """The name of the tool that was called, if any."""
    tool_id: str | None = None
    """The tool identifier, if any."""
    timestamp: datetime = field(default_factory=_now_utc)
    """The timestamp, when the retry was triggered."""
    role: Literal['retry-prompt'] = 'retry-prompt'
    """Message type identifier, this type is available on all message as a discriminator."""

    def model_response(self) -> str:
        if isinstance(self.content, str):
            description = self.content
        else:
            description = f'{len(self.content)} validation errors: {json.dumps(self.content, indent=2)}'
        return f'{description}\n\nFix the errors and try again.'

content instance-attribute

content: list[ErrorDetails] | str

Details of why and how the model should retry.

If the retry was triggered by a ValidationError, this will be a list of error details.

tool_name class-attribute instance-attribute

tool_name: str | None = None

The name of the tool that was called, if any.

tool_id class-attribute instance-attribute

tool_id: str | None = None

The tool identifier, if any.

timestamp class-attribute instance-attribute

timestamp: datetime = field(default_factory=_now_utc)

The timestamp of when the retry was triggered.

role class-attribute instance-attribute

role: Literal['retry-prompt'] = 'retry-prompt'

Message type identifier; this type is available on all messages as a discriminator.

ModelAnyResponse module-attribute

Any response from a model.

ModelTextResponse dataclass

A plain text response from a model.

Source code in pydantic_ai/messages.py
@dataclass
class ModelTextResponse:
    """A plain text response from a model."""

    content: str
    """The text content of the response."""
    timestamp: datetime = field(default_factory=_now_utc)
    """The timestamp of the response.

    If the model provides a timestamp in the response (as OpenAI does) that will be used.
    """
    role: Literal['model-text-response'] = 'model-text-response'
    """Message type identifier, this type is available on all message as a discriminator."""

content instance-attribute

content: str

The text content of the response.

timestamp class-attribute instance-attribute

timestamp: datetime = field(default_factory=_now_utc)

The timestamp of the response.

If the model provides a timestamp in the response (as OpenAI does) that will be used.

role class-attribute instance-attribute

role: Literal["model-text-response"] = "model-text-response"

Message type identifier; this type is available on all messages as a discriminator.

ModelStructuredResponse dataclass

A structured response from a model.

This is used either to call a retriever or to return a structured response from an agent run.

Source code in pydantic_ai/messages.py
@dataclass
class ModelStructuredResponse:
    """A structured response from a model.

    This is used either to call a retriever or to return a structured response from an agent run.
    """

    calls: list[ToolCall]
    """The tool calls being made."""
    timestamp: datetime = field(default_factory=_now_utc)
    """The timestamp of the response.

    If the model provides a timestamp in the response (as OpenAI does) that will be used.
    """
    role: Literal['model-structured-response'] = 'model-structured-response'
    """Message type identifier, this type is available on all message as a discriminator."""

calls instance-attribute

calls: list[ToolCall]

The tool calls being made.

timestamp class-attribute instance-attribute

timestamp: datetime = field(default_factory=_now_utc)

The timestamp of the response.

If the model provides a timestamp in the response (as OpenAI does) that will be used.

role class-attribute instance-attribute

role: Literal["model-structured-response"] = (
    "model-structured-response"
)

Message type identifier; this type is available on all messages as a discriminator.

ToolCall dataclass

A tool call from the agent.

Source code in pydantic_ai/messages.py
@dataclass
class ToolCall:
    """Either a tool call from the agent."""

    tool_name: str
    """The name of the tool to call."""
    args: ArgsJson | ArgsObject
    """The arguments to pass to the tool.

    Either as JSON or a Python dictionary depending on how data was returned.
    """
    tool_id: str | None = None
    """Optional tool identifier, this is used by some models including OpenAI."""

    @classmethod
    def from_json(cls, tool_name: str, args_json: str, tool_id: str | None = None) -> ToolCall:
        return cls(tool_name, ArgsJson(args_json), tool_id)

    @classmethod
    def from_object(cls, tool_name: str, args_object: dict[str, Any]) -> ToolCall:
        return cls(tool_name, ArgsObject(args_object))

    def has_content(self) -> bool:
        if isinstance(self.args, ArgsObject):
            return any(self.args.args_object.values())
        else:
            return bool(self.args.args_json)

tool_name instance-attribute

tool_name: str

The name of the tool to call.

args instance-attribute

The arguments to pass to the tool.

Either as JSON or a Python dictionary depending on how data was returned.

tool_id class-attribute instance-attribute

tool_id: str | None = None

Optional tool identifier, this is used by some models including OpenAI.

ArgsJson dataclass

Source code in pydantic_ai/messages.py
@dataclass
class ArgsJson:
    args_json: str
    """A JSON string of arguments."""

args_json instance-attribute

args_json: str

A JSON string of arguments.

ArgsObject dataclass

Source code in pydantic_ai/messages.py
@dataclass
class ArgsObject:
    args_object: dict[str, Any]
    """A python dictionary of arguments."""

args_object instance-attribute

args_object: dict[str, Any]

A python dictionary of arguments.

MessagesTypeAdapter module-attribute

MessagesTypeAdapter = LazyTypeAdapter(
    list[Annotated[Message, Field(discriminator="role")]]
)

Pydantic TypeAdapter for (de)serializing messages.
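MessagesTypeAdapter lets you round-trip a heterogeneous message list through JSON, dispatching on the role field. The mechanism can be sketched with the standard library alone (a simplified stand-in, not the pydantic implementation):

```python
import json
from dataclasses import dataclass
from typing import Literal


@dataclass
class SystemPrompt:
    content: str
    role: Literal['system'] = 'system'


@dataclass
class UserPrompt:
    content: str
    role: Literal['user'] = 'user'


# The 'role' field acts as the discriminator: it selects which class each
# JSON object is rebuilt into, as Field(discriminator='role') does above.
MESSAGE_TYPES = {'system': SystemPrompt, 'user': UserPrompt}


def load_messages(raw: str) -> list:
    return [MESSAGE_TYPES[d.pop('role')](**d) for d in json.loads(raw)]


raw = '[{"role": "system", "content": "Be helpful."}, {"role": "user", "content": "Hi."}]'
messages = load_messages(raw)
print(messages)
#> [SystemPrompt(content='Be helpful.', role='system'), UserPrompt(content='Hi.', role='user')]
```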