steamship.agents.schema package#
Submodules#
steamship.agents.schema.action module#
- class steamship.agents.schema.action.Action(*, tool: str, input: List[Block], output: List[Block] | None = None, is_final: bool = False)[source]#
Bases: BaseModel
Actions represent a binding of a Tool to the inputs supplied to the tool.
Upon completion, the Action also contains the output of the Tool given the inputs.
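Example (an illustrative sketch; the "dalle" tool name and "row-house" input mirror the example in the output_parser module below):
```python
from steamship import Block
from steamship.agents.schema.action import Action

# Bind the (hypothetical) "dalle" tool to a single text Block as input.
# `output` remains None until the Tool has run.
action = Action(tool="dalle", input=[Block(text="row-house")])
```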
steamship.agents.schema.agent module#
- class steamship.agents.schema.agent.Agent(*, tools: List[Tool], message_selector: MessageSelector = NoMessages())[source]#
Bases: BaseModel, ABC
Agent is responsible for choosing the next action to take for an AgentService.
It uses the provided context, and a set of Tools, to decide on an action that will be executed by the AgentService.
- default_system_message() str | None [source]#
The default system message used by Agents to drive LLM instruction.
Non Chat-based Agents should always return None. Chat-based Agents should override this method to provide a default prompt.
- message_selector: MessageSelector#
Selector of messages from ChatHistory. Used for conversation memory retrieval.
- abstract next_action(context: AgentContext) Action [source]#
- record_action_run(action: Action, context: AgentContext)[source]#
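A minimal sketch of a concrete Agent, assuming a single configured Tool; the message-selection logic is illustrative only:
```python
from steamship.agents.schema import Action, Agent, AgentContext

class SingleToolAgent(Agent):
    """Toy Agent that always routes selected history messages to its only Tool."""

    def next_action(self, context: AgentContext) -> Action:
        # Pull conversation memory using the configured selector ...
        messages = context.chat_history.select_messages(self.message_selector)
        # ... and bind it as input to the first (and only) Tool.
        return Action(tool=self.tools[0].name, input=messages)
```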
- class steamship.agents.schema.agent.ChatAgent(*, tools: List[Tool], message_selector: MessageSelector = NoMessages(), llm: ChatLLM, output_parser: OutputParser)[source]#
Bases: LLMAgent, ABC
ChatAgents choose next actions for an AgentService based on chat-based interactions with an LLM.
- output_parser: OutputParser#
Utility responsible for converting LLM output into Actions.
- class steamship.agents.schema.agent.LLMAgent(*, tools: List[Tool], message_selector: MessageSelector = NoMessages(), llm: LLM, output_parser: OutputParser)[source]#
Bases: Agent
LLMAgents choose next actions for an AgentService based on interactions with an LLM.
- abstract next_action(context: AgentContext) Action [source]#
- output_parser: OutputParser#
Utility responsible for converting LLM output into Actions.
steamship.agents.schema.cache module#
- class steamship.agents.schema.cache.ActionCache(client: Steamship, key_value_store: KeyValueStore)[source]#
Bases: object
Provides a persistent cache layer for AgentContext that allows lookups of output blocks from Actions.
Use this cache to eliminate repeated calls to Tools.
NOTE: EXPERIMENTAL.
- key_value_store: KeyValueStore#
- class steamship.agents.schema.cache.LLMCache(client: Steamship, key_value_store: KeyValueStore)[source]#
Bases: object
Provides a persistent cache layer for AgentContext that allows lookups of Actions from LLM prompts.
Use this cache to eliminate repeated calls to LLMs for Tool selection and direct responses.
NOTE: EXPERIMENTAL.
- key_value_store: KeyValueStore#
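Both caches are normally created for you when AgentContext.get_or_create is called with use_llm_cache/use_action_cache (see the context module below). A direct-construction sketch, assuming the KeyValueStore helper from steamship.utils.kv_store and a hypothetical workspace handle:
```python
from steamship import Steamship
from steamship.agents.schema.cache import ActionCache, LLMCache
from steamship.utils.kv_store import KeyValueStore  # assumed location of the KV helper

client = Steamship(workspace="my-workspace")  # hypothetical workspace handle
action_cache = ActionCache(
    client=client,
    key_value_store=KeyValueStore(client, store_identifier="action-cache"),
)
llm_cache = LLMCache(
    client=client,
    key_value_store=KeyValueStore(client, store_identifier="llm-cache"),
)
```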
steamship.agents.schema.chathistory module#
- class steamship.agents.schema.chathistory.ChatHistory(file: File, embedding_index: EmbeddingIndexPluginInstance | None, text_splitter: TextSplitter = None)[source]#
Bases: object
A ChatHistory is a wrapper around a File, ideal for ongoing interactions between a user and a virtual assistant. It also includes vector-backed storage for similarity-based retrieval.
- append_agent_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with status update messages from the Agent.
- append_assistant_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with content provided by the agent, i.e., results from the assistant.
- append_llm_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with status update messages from the LLM.
- append_message_with_role(text: str = None, role: RoleTag = RoleTag.USER, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with content attributed to the provided role. Defaults to the end-user.
- append_request_complete_message() Block [source]#
Append a new block to this signaling that the request has completed.
- append_status_message_with_role(text: str = None, role: RoleTag = RoleTag.USER, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new status block to this with the provided role. Defaults to the end-user.
- append_system_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with content provided by the system, i.e., instructions to the assistant.
- append_tool_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with status update messages from a Tool.
- append_user_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with content provided by the end-user.
- clear()[source]#
Deletes ALL messages from the ChatHistory (including system).
NOTE: upon deletion, refresh() is called to ensure up-to-date history refs.
- delete_messages(selector: MessageSelector)[source]#
Delete a set of selected messages from the ChatHistory.
If selector == None, no messages will be deleted.
NOTES:
- upon deletion, refresh() is called to ensure up-to-date history refs.
- causes a full re-index of chat history if the history is searchable.
- embedding_index: EmbeddingIndexPluginInstance#
- static get_or_create(client: Steamship, context_keys: Dict[str, str], tags: List[Tag] = None, searchable: bool = True) ChatHistory [source]#
- search(text: str, k=None) Task[SearchResults] [source]#
- select_messages(selector: MessageSelector) List[Block] [source]#
- text_splitter: TextSplitter#
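A usage sketch, assuming a workspace-scoped client and an illustrative context key:
```python
from steamship import Steamship
from steamship.agents.schema.chathistory import ChatHistory

client = Steamship(workspace="my-workspace")  # hypothetical workspace handle
history = ChatHistory.get_or_create(client, context_keys={"chat_id": "demo"})

history.append_system_message(text="You are a helpful assistant.")
history.append_user_message(text="What is a row house?")
history.append_assistant_message(text="A row house shares walls with its neighbors.")

# Similarity-based retrieval over embedded messages (searchable histories only).
task = history.search("housing styles", k=1)
task.wait()  # search returns a Task[SearchResults]
```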
- class steamship.agents.schema.chathistory.ChatHistoryLoggingHandler(chat_history: ChatHistory, log_level: any = 20, streaming_opts: StreamingOpts | None = None)[source]#
Bases: StreamHandler
Logs messages emitted by Agents and Tools into a ChatHistory file.
This is a basic mechanism for streaming status messages alongside generated content.
- chat_history: ChatHistory#
- emit(record)[source]#
Emit a record.
If a formatter is specified, it is used to format the record. The record is then written to the stream with a trailing newline. If exception information is present, it is formatted using traceback.print_exception and appended to the stream. If the stream has an ‘encoding’ attribute, it is used to determine how to do the output to the stream.
- log_level: any#
- streaming_opts: StreamingOpts#
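A sketch of streaming Python log records into a chat history (the `history` object is the ChatHistory from the example above):
```python
import logging

from steamship.agents.schema.chathistory import ChatHistoryLoggingHandler

handler = ChatHistoryLoggingHandler(chat_history=history, log_level=logging.INFO)
logging.getLogger().addHandler(handler)

# Emitted records are appended to the ChatHistory as status messages.
logging.info("Selecting the next tool...")
```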
steamship.agents.schema.context module#
- class steamship.agents.schema.context.AgentContext(request_id: str | None = None, streaming_opts: StreamingOpts | None = None)[source]#
Bases: object
AgentContext contains all relevant information about a particular execution of an Agent. It is used by the AgentService to manage execution history as well as store/retrieve information and metadata that will be used in the process of an agent execution.
- action_cache: ActionCache | None#
Caches all interactions with Tools within a Context. This provides a way to avoid duplicate calls to Tools within the same context.
- chat_history: ChatHistory#
Record of messages between the user and the package. It records user-submitted queries/prompts and the final agent-driven answer sent in response to those queries/prompts. It does NOT record any chat history related to agent execution and action selection.
- completed_steps: List[Action]#
Record of agent-selected Actions and their outputs. This provides an ordered look at the execution sequence for this context.
- emit_funcs: List[Callable[[List[Block], Dict[str, Any]], None]]#
Called when an agent execution has completed. These provide a way for the AgentService to return the result of an agent execution to the package that requested the agent execution.
- static get_or_create(client: Steamship, context_keys: Dict[str, str], tags: List[Tag] = None, searchable: bool = True, use_llm_cache: bool | None = False, use_action_cache: bool | None = False, streaming_opts: StreamingOpts | None = None, initial_system_message: str | None = None)[source]#
Get the AgentContext that corresponds to the parameters supplied.
If the AgentContext does not already exist, a new one will be created and returned.
- Parameters:
client (Steamship) – Steamship workspace-scoped client
context_keys (dict) – key-value pairs used to uniquely identify a context within a workspace
tags (list) – List of Steamship Tags to attach to a ChatHistory for a new context
searchable (bool) – Whether the ChatHistory should embed appended messages for subsequent retrieval
use_llm_cache (bool) – Determines if an LLM Cache should be created for a new context
use_action_cache (bool) – Determines if an Action Cache should be created for a new context
streaming_opts (StreamingOpts) – Determines how status messages are appended to the context’s ChatHistory
initial_system_message (str) – System message used to initialize the context’s ChatHistory. If one already exists, this will be ignored.
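A sketch of obtaining a context, with an illustrative workspace handle and context key:
```python
from steamship import Steamship
from steamship.agents.schema.context import AgentContext

client = Steamship(workspace="my-workspace")  # hypothetical workspace handle
context = AgentContext.get_or_create(
    client=client,
    context_keys={"chat_id": "demo-chat"},  # uniquely identifies this context
    use_llm_cache=True,     # create an LLMCache for a new context
    use_action_cache=True,  # create an ActionCache for a new context
    initial_system_message="You are a helpful assistant.",
)
print(context.completed_steps)  # no Actions have been recorded yet
```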
steamship.agents.schema.functions module#
- class steamship.agents.schema.functions.FunctionParameters(*, type: JSONType = JSONType.object, properties: Mapping[str, FunctionProperty], required: List[str] | None = [])[source]#
Bases: BaseModel
Schema for the description of how to invoke an OpenAI function.
- class Config[source]#
Bases: object
- use_enum_values = True#
This tells Pydantic to serialize the Enum values as strings, which is VERY IMPORTANT for OpenAI.
- properties: Mapping[str, FunctionProperty]#
Map of param names to their types and descriptions.
- class steamship.agents.schema.functions.FunctionProperty(*, type: JSONType = JSONType.object, description: str)[source]#
Bases: BaseModel
Schema for an individual parameter used in an OpenAI function.
- class steamship.agents.schema.functions.JSONType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#
Bases: str, Enum
- array = 'array'#
- boolean = 'boolean'#
- integer = 'integer'#
- null = 'null'#
- number = 'number'#
- object = 'object'#
- string = 'string'#
- class steamship.agents.schema.functions.OpenAIFunction(*, name: str, description: str, parameters: FunctionParameters)[source]#
Bases: BaseModel
Schema for an OpenAI function that can be used in prompting.
- parameters: FunctionParameters#
Specifies how the function should be called.
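For example, a sketch describing a hypothetical GenerateImage function in this schema:
```python
from steamship.agents.schema.functions import (
    FunctionParameters,
    FunctionProperty,
    JSONType,
    OpenAIFunction,
)

generate_image = OpenAIFunction(
    name="GenerateImage",  # hypothetical function name
    description="Generates an image from a text prompt.",
    parameters=FunctionParameters(
        properties={
            "prompt": FunctionProperty(
                type=JSONType.string,
                description="Text description of the image to generate.",
            ),
        },
        required=["prompt"],
    ),
)
```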
steamship.agents.schema.llm module#
- class steamship.agents.schema.llm.ChatLLM[source]#
Bases: BaseModel, ABC
ChatLLM wraps large language model-based backends that use a chat-completion-style interaction.
They may be used with Agents in Action selection, or for direct prompt completion.
steamship.agents.schema.message_selectors module#
- class steamship.agents.schema.message_selectors.MessageWindowMessageSelector(*, k: int)[source]#
Bases: MessageSelector
- class steamship.agents.schema.message_selectors.NoMessages[source]#
Bases: MessageSelector
- class steamship.agents.schema.message_selectors.TokenWindowMessageSelector(*, max_tokens: int)[source]#
Bases: MessageSelector
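These selectors bound conversation memory by message count or token budget. An illustrative sketch (the `history` object is a ChatHistory):
```python
from steamship.agents.schema.message_selectors import (
    MessageWindowMessageSelector,
    TokenWindowMessageSelector,
)

last_ten = MessageWindowMessageSelector(k=10)           # the 10 most recent messages
in_budget = TokenWindowMessageSelector(max_tokens=500)  # as many messages as fit in 500 tokens

messages = history.select_messages(last_ten)
```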
steamship.agents.schema.output_parser module#
- class steamship.agents.schema.output_parser.OutputParser[source]#
Bases: BaseModel, ABC
Used to convert text into Actions.
Primarily used by LLM-based agents that generate textual descriptions of selected actions and their inputs. OutputParsers can be used to convert those descriptions into Action objects for the AgentService to run.
Example:
- input: "Action: GenerateImage\nActionInput: row-house"
- output: Action("dalle", "row-house")
- abstract parse(text: str, context: AgentContext) Action [source]#
Convert text into an Action object.
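A minimal sketch of a parser for the example format above; the line-splitting logic is illustrative and assumes well-formed LLM output:
```python
from steamship import Block
from steamship.agents.schema import Action, AgentContext, OutputParser

class SimpleOutputParser(OutputParser):
    def parse(self, text: str, context: AgentContext) -> Action:
        # Expects: "Action: <tool name>\nActionInput: <input text>"
        action_line, input_line = text.strip().splitlines()[:2]
        tool_name = action_line.split(":", 1)[1].strip()
        tool_input = input_line.split(":", 1)[1].strip()
        return Action(tool=tool_name, input=[Block(text=tool_input)])
```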
steamship.agents.schema.text_splitters module#
- class steamship.agents.schema.text_splitters.FixedSizeTextSplitter(chunk_size)[source]#
Bases: TextSplitter
The simplest possible chunking strategy: split every n characters.
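A construction sketch; the splitter is typically handed to a ChatHistory, which uses it when embedding appended messages:
```python
from steamship.agents.schema.text_splitters import FixedSizeTextSplitter

# Chunk text into fixed 512-character pieces for embedding.
splitter = FixedSizeTextSplitter(chunk_size=512)
```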
steamship.agents.schema.tool module#
- class steamship.agents.schema.tool.AgentContext[source]#
Bases: BaseModel
Placeholder to avoid circular dependency.
- class steamship.agents.schema.tool.Tool(*, name: str, agent_description: str, human_description: str, is_final: bool = False, cacheable: bool = True)[source]#
Bases: BaseModel
Tools provide functionality that may be used by AgentServices, as directed by Agents, to achieve a goal.
Tools may be used to wrap Steamship packages and plugins, as well as third-party backend services, and even locally-contained bits of Python code.
- agent_description: str#
Description for use in an agent in order to enable Action selection. It should include a short summary of what the Tool does, what the inputs to the Tool should be, and what the outputs of the tool are.
- as_openai_function() OpenAIFunction [source]#
- cacheable: bool#
Whether runs of this Tool should be cached based on inputs (if caching is enabled in the AgentContext for a run). Setting this to False will prevent any Actions that involve this Tool from being cached, meaning that every Action using this Tool will result in a call to run. By default, Tools are considered cacheable.
- is_final: bool#
Whether actions performed by this tool should have their is_final bit marked.
Setting this to True means that the output of this tool will halt the reasoning loop. Its output will be returned directly to the user.
- name: str#
The short name for the tool. This will be used by Agents to refer to this tool during action selection.
- post_process(async_task: Task, context: AgentContext) List[Block] [source]#
Transforms Task output into a List[Block].
- abstract run(tool_input: List[Block], context: AgentContext) List[Block] | Task[Any] [source]#
Run the tool given the provided input and context.
At the moment, only synchronous Tools (those that return List[Block]) are supported.
Support for asynchronous Tools (those that return Task[Any]) will be added shortly.
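A minimal synchronous Tool sketch; the name and descriptions are illustrative:
```python
from typing import Any, List, Union

from steamship import Block, Task
from steamship.agents.schema import AgentContext, Tool

class ReverseTool(Tool):
    name: str = "reverse_text"
    human_description: str = "Reverses the supplied text."
    agent_description: str = (
        "Used to reverse text. Input: the text to reverse. "
        "Output: the reversed text."
    )

    def run(self, tool_input: List[Block], context: AgentContext) -> Union[List[Block], Task[Any]]:
        # Synchronous tools return output blocks directly.
        return [Block(text=(block.text or "")[::-1]) for block in tool_input]
```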
Module contents#
- class steamship.agents.schema.Action(*, tool: str, input: List[Block], output: List[Block] | None = None, is_final: bool = False)[source]#
Bases: BaseModel
Actions represent a binding of a Tool to the inputs supplied to the tool.
Upon completion, the Action also contains the output of the Tool given the inputs.
- class steamship.agents.schema.Agent(*, tools: List[Tool], message_selector: MessageSelector = NoMessages())[source]#
Bases: BaseModel, ABC
Agent is responsible for choosing the next action to take for an AgentService.
It uses the provided context, and a set of Tools, to decide on an action that will be executed by the AgentService.
- default_system_message() str | None [source]#
The default system message used by Agents to drive LLM instruction.
Non Chat-based Agents should always return None. Chat-based Agents should override this method to provide a default prompt.
- message_selector: MessageSelector#
Selector of messages from ChatHistory. Used for conversation memory retrieval.
- abstract next_action(context: AgentContext) Action [source]#
- record_action_run(action: Action, context: AgentContext)[source]#
- class steamship.agents.schema.AgentContext(request_id: str | None = None, streaming_opts: StreamingOpts | None = None)[source]#
Bases: object
AgentContext contains all relevant information about a particular execution of an Agent. It is used by the AgentService to manage execution history as well as store/retrieve information and metadata that will be used in the process of an agent execution.
- action_cache: ActionCache | None#
Caches all interactions with Tools within a Context. This provides a way to avoid duplicate calls to Tools within the same context.
- chat_history: ChatHistory#
Record of messages between the user and the package. It records user-submitted queries/prompts and the final agent-driven answer sent in response to those queries/prompts. It does NOT record any chat history related to agent execution and action selection.
- completed_steps: List[Action]#
Record of agent-selected Actions and their outputs. This provides an ordered look at the execution sequence for this context.
- emit_funcs: List[Callable[[List[Block], Dict[str, Any]], None]]#
Called when an agent execution has completed. These provide a way for the AgentService to return the result of an agent execution to the package that requested the agent execution.
- static get_or_create(client: Steamship, context_keys: Dict[str, str], tags: List[Tag] = None, searchable: bool = True, use_llm_cache: bool | None = False, use_action_cache: bool | None = False, streaming_opts: StreamingOpts | None = None, initial_system_message: str | None = None)[source]#
Get the AgentContext that corresponds to the parameters supplied.
If the AgentContext does not already exist, a new one will be created and returned.
- Parameters:
client (Steamship) – Steamship workspace-scoped client
context_keys (dict) – key-value pairs used to uniquely identify a context within a workspace
tags (list) – List of Steamship Tags to attach to a ChatHistory for a new context
searchable (bool) – Whether the ChatHistory should embed appended messages for subsequent retrieval
use_llm_cache (bool) – Determines if an LLM Cache should be created for a new context
use_action_cache (bool) – Determines if an Action Cache should be created for a new context
streaming_opts (StreamingOpts) – Determines how status messages are appended to the context’s ChatHistory
initial_system_message (str) – System message used to initialize the context’s ChatHistory. If one already exists, this will be ignored.
- class steamship.agents.schema.ChatAgent(*, tools: List[Tool], message_selector: MessageSelector = NoMessages(), llm: ChatLLM, output_parser: OutputParser)[source]#
Bases: LLMAgent, ABC
ChatAgents choose next actions for an AgentService based on chat-based interactions with an LLM.
- message_selector: MessageSelector#
Selector of messages from ChatHistory. Used for conversation memory retrieval.
- output_parser: OutputParser#
Utility responsible for converting LLM output into Actions.
- class steamship.agents.schema.ChatHistory(file: File, embedding_index: EmbeddingIndexPluginInstance | None, text_splitter: TextSplitter = None)[source]#
Bases: object
A ChatHistory is a wrapper around a File, ideal for ongoing interactions between a user and a virtual assistant. It also includes vector-backed storage for similarity-based retrieval.
- append_agent_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with status update messages from the Agent.
- append_assistant_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with content provided by the agent, i.e., results from the assistant.
- append_llm_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with status update messages from the LLM.
- append_message_with_role(text: str = None, role: RoleTag = RoleTag.USER, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with content attributed to the provided role. Defaults to the end-user.
- append_request_complete_message() Block [source]#
Append a new block to this signaling that the request has completed.
- append_status_message_with_role(text: str = None, role: RoleTag = RoleTag.USER, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new status block to this with the provided role. Defaults to the end-user.
- append_system_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with content provided by the system, i.e., instructions to the assistant.
- append_tool_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with status update messages from a Tool.
- append_user_message(text: str = None, tags: List[Tag] = None, content: str | bytes = None, url: str | None = None, mime_type: MimeTypes | None = None) Block [source]#
Append a new block to this with content provided by the end-user.
- clear()[source]#
Deletes ALL messages from the ChatHistory (including system).
NOTE: upon deletion, refresh() is called to ensure up-to-date history refs.
- delete_messages(selector: MessageSelector)[source]#
Delete a set of selected messages from the ChatHistory.
If selector == None, no messages will be deleted.
NOTES:
- upon deletion, refresh() is called to ensure up-to-date history refs.
- causes a full re-index of chat history if the history is searchable.
- embedding_index: EmbeddingIndexPluginInstance#
- static get_or_create(client: Steamship, context_keys: Dict[str, str], tags: List[Tag] = None, searchable: bool = True) ChatHistory [source]#
- search(text: str, k=None) Task[SearchResults] [source]#
- select_messages(selector: MessageSelector) List[Block] [source]#
- text_splitter: TextSplitter#
- class steamship.agents.schema.ChatLLM[source]#
Bases: BaseModel, ABC
ChatLLM wraps large language model-based backends that use a chat-completion-style interaction.
They may be used with Agents in Action selection, or for direct prompt completion.
- class steamship.agents.schema.FinishAction(*, tool: str = 'Agent-FinishAction', input: List[Block] = [], output: List[Block] | None = None, is_final: bool = True)[source]#
Bases: Action
Represents a final selected action in an Agent Execution.
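A sketch of returning final output directly to the user:
```python
from steamship import Block
from steamship.agents.schema import FinishAction

# is_final defaults to True, halting the reasoning loop.
final = FinishAction(output=[Block(text="Here is your answer.")])
```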
- class steamship.agents.schema.LLM[source]#
Bases: BaseModel, ABC
LLM wraps large language model-based backends.
They may be used with LLMAgents in Action selection, or for direct prompt completion.
- class steamship.agents.schema.LLMAgent(*, tools: List[Tool], message_selector: MessageSelector = NoMessages(), llm: LLM, output_parser: OutputParser)[source]#
Bases: Agent
LLMAgents choose next actions for an AgentService based on interactions with an LLM.
- message_selector: MessageSelector#
Selector of messages from ChatHistory. Used for conversation memory retrieval.
- abstract next_action(context: AgentContext) Action [source]#
- output_parser: OutputParser#
Utility responsible for converting LLM output into Actions.
- class steamship.agents.schema.OutputParser[source]#
Bases: BaseModel, ABC
Used to convert text into Actions.
Primarily used by LLM-based agents that generate textual descriptions of selected actions and their inputs. OutputParsers can be used to convert those descriptions into Action objects for the AgentService to run.
Example:
- input: "Action: GenerateImage\nActionInput: row-house"
- output: Action("dalle", "row-house")
- abstract parse(text: str, context: AgentContext) Action [source]#
Convert text into an Action object.
- class steamship.agents.schema.Tool(*, name: str, agent_description: str, human_description: str, is_final: bool = False, cacheable: bool = True)[source]#
Bases: BaseModel
Tools provide functionality that may be used by AgentServices, as directed by Agents, to achieve a goal.
Tools may be used to wrap Steamship packages and plugins, as well as third-party backend services, and even locally-contained bits of Python code.
- agent_description: str#
Description for use in an agent in order to enable Action selection. It should include a short summary of what the Tool does, what the inputs to the Tool should be, and what the outputs of the tool are.
- as_openai_function() OpenAIFunction [source]#
- cacheable: bool#
Whether runs of this Tool should be cached based on inputs (if caching is enabled in the AgentContext for a run). Setting this to False will prevent any Actions that involve this Tool from being cached, meaning that every Action using this Tool will result in a call to run. By default, Tools are considered cacheable.
- is_final: bool#
Whether actions performed by this tool should have their is_final bit marked.
Setting this to True means that the output of this tool will halt the reasoning loop. Its output will be returned directly to the user.
- name: str#
The short name for the tool. This will be used by Agents to refer to this tool during action selection.
- post_process(async_task: Task, context: AgentContext) List[Block] [source]#
Transforms Task output into a List[Block].
- abstract run(tool_input: List[Block], context: AgentContext) List[Block] | Task[Any] [source]#
Run the tool given the provided input and context.
At the moment, only synchronous Tools (those that return List[Block]) are supported.
Support for asynchronous Tools (those that return Task[Any]) will be added shortly.