steamship.agents.service package#
Submodules#
steamship.agents.service.agent_service module#
- class steamship.agents.service.agent_service.AgentService(use_llm_cache: bool | None = False, use_action_cache: bool | None = False, max_actions_per_run: int | None = 5, max_actions_per_tool: Dict[str, int] | None = None, agent: Agent | None = None, **kwargs)[source]#
Bases: PackageService
AgentService is a Steamship Package that can use an Agent, Tools, and a provided AgentContext to respond to user input.
- async_prompt(prompt: str | None = None, context_id: str | None = None, **kwargs) StreamingResponse [source]#
- build_default_context(context_id: str | None = None, **kwargs) AgentContext [source]#
Build the agent’s default context.
This provides a single place to implement (or override) the default context that will be used by the endpoints that transports define. This allows an Agent developer to use, e.g., the TelegramTransport but with a custom type of memory or caching.
The returned context does not have any emit functions yet registered to it.
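To illustrate the override pattern this method enables, here is a minimal plain-Python sketch (not the real Steamship API): a base service whose endpoints obtain their context via `build_default_context`, and a subclass that swaps in custom caching by overriding that single method. All class and field names here (`BaseService`, `CachingService`, the dict-backed context) are hypothetical stand-ins.

```python
# Illustrative sketch only: the names below are hypothetical, not the
# Steamship API. It shows why routing all endpoints through one
# build_default_context method makes the context easy to customize.

class BaseService:
    def build_default_context(self, context_id=None, **kwargs):
        # Default context: no emit functions registered yet.
        return {"id": context_id, "memory": [], "emit_funcs": []}

    def prompt(self, text, context_id=None):
        # Endpoints call build_default_context, so a subclass that
        # overrides it changes behavior for every endpoint at once.
        context = self.build_default_context(context_id)
        context["memory"].append(text)
        return context


class CachingService(BaseService):
    """Override the default context to add a (hypothetical) cache slot."""

    def build_default_context(self, context_id=None, **kwargs):
        context = super().build_default_context(context_id, **kwargs)
        context["cache"] = {}  # custom memory/caching, used by all endpoints
        return context


ctx = CachingService().prompt("hello", context_id="abc")
```

The same shape applies when pairing, e.g., a TelegramTransport endpoint with custom chat memory: the transport calls the one overridden method and gets the customized context.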
- get_default_agent(throw_if_missing: bool = True) Agent | None [source]#
Return the default agent of this agent service.
This is a helper wrapper that safeguards the naming conventions that have formed around agent retrieval.
- max_actions_per_run: int#
The maximum number of actions to permit while the agent is reasoning.
This is intended primarily to act as a backstop, preventing a condition in which the Agent decides to loop endlessly on tool runs that consume resources with a cost basis (e.g., prompt completions, embedding operations, vector lookups).
- max_actions_per_tool: Dict[str, int] = {}#
The maximum number of actions to permit per tool name.
This is intended primarily to act as a backstop, preventing a condition in which the Agent decides to loop endlessly on tool runs that consume resources with a cost basis (e.g., prompt completions, embedding operations, vector lookups).
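The interplay of the two limits above can be sketched as a simple budget counter. This is a plain-Python illustration of the backstop idea, not Steamship's implementation; the `RunBudget` class and `ActionBudgetExceeded` exception are hypothetical names.

```python
# Illustrative sketch only: a per-run and per-tool action budget of the
# kind max_actions_per_run and max_actions_per_tool describe. Names are
# hypothetical, not the Steamship implementation.

class ActionBudgetExceeded(RuntimeError):
    pass


class RunBudget:
    def __init__(self, max_actions_per_run=5, max_actions_per_tool=None):
        self.max_actions_per_run = max_actions_per_run
        self.max_actions_per_tool = max_actions_per_tool or {}
        self.total = 0
        self.per_tool = {}

    def record(self, tool_name):
        # Called once per tool invocation; raises as soon as either cap
        # is exceeded, halting an agent looping on costly tool runs.
        self.total += 1
        self.per_tool[tool_name] = self.per_tool.get(tool_name, 0) + 1
        if self.total > self.max_actions_per_run:
            raise ActionBudgetExceeded("too many actions this run")
        cap = self.max_actions_per_tool.get(tool_name, float("inf"))
        if self.per_tool[tool_name] > cap:
            raise ActionBudgetExceeded(f"too many calls to {tool_name}")


budget = RunBudget(max_actions_per_run=5, max_actions_per_tool={"search": 2})
budget.record("search")
budget.record("search")  # at the per-tool cap; a third "search" would raise
```

Note that the overall cap still applies even to tools with no per-tool entry, which is why a per-run limit is a useful default backstop on its own.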
- prompt(prompt: str | None = None, context_id: str | None = None, **kwargs) List[Block] [source]#
Run an agent with the provided text as the input.
- run_action(agent: Agent, action: Action, context: AgentContext)[source]#
- run_agent(agent: Agent, context: AgentContext)[source]#
- steamship.agents.service.agent_service.build_context_appending_emit_func(context: AgentContext, make_blocks_public: bool | None = False) Callable[[List[Block], Dict[str, Any]], None] [source]#
Build an emit function that will append output blocks directly to ChatHistory, via AgentContext.
NOTE: Messages will be tagged as ASSISTANT messages, as this assumes that agent output should be considered an assistant response to a USER.
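The shape of such an emit function can be sketched as a closure over a chat history that appends every output block with an assistant tag. This is a plain-Python analog, not the Steamship implementation; the dict-based history and block representation are hypothetical.

```python
# Illustrative sketch only: an emit function built as a closure over a
# chat history, appending output blocks tagged as ASSISTANT messages.
# The history/block shapes are hypothetical, not the Steamship API.

def build_appending_emit_func(history):
    def emit(blocks, metadata):
        for block in blocks:
            # Every emitted block is treated as an assistant response
            # to the USER, matching the note above.
            history.append(
                {"role": "assistant", "block": block, "meta": metadata}
            )
    return emit


history = []
emit = build_appending_emit_func(history)
emit(["Hello!", "How can I help?"], {"run_id": "123"})
```

Because the closure captures the history (in the real API, the `AgentContext`), callers can pass the emit function around without threading the context through every call site.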