steamship.agents.service package#

Submodules#

steamship.agents.service.agent_service module#

class steamship.agents.service.agent_service.AgentService(use_llm_cache: bool | None = False, use_action_cache: bool | None = False, max_actions_per_run: int | None = 5, max_actions_per_tool: Dict[str, int] | None = None, agent: Agent | None = None, **kwargs)[source]#

Bases: PackageService

AgentService is a Steamship Package that can use an Agent, Tools, and a provided AgentContext to respond to user input.

agent: Agent | None#

The default agent for this agent service.

async_prompt(prompt: str | None = None, context_id: str | None = None, **kwargs) StreamingResponse[source]#
build_default_context(context_id: str | None = None, **kwargs) AgentContext[source]#

Build the agent’s default context.

This provides a single place to implement (or override) the default context that will be used by the endpoints that transports define. This allows an Agent developer to use, e.g., the TelegramTransport but with a custom type of memory or caching.

The returned context does not yet have any emit functions registered to it.
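The override pattern this hook enables can be sketched with plain Python; `BaseService`, `WindowedService`, and `Context` below are illustrative stand-ins, not Steamship types:

```python
from collections import deque

class Context:
    """A minimal context: some memory plus a list of emit functions."""
    def __init__(self, memory):
        self.memory = memory
        self.emit_funcs = []  # transports register emit functions later

class BaseService:
    def build_default_context(self, context_id=None, **kwargs):
        # Default: unbounded in-process memory.
        return Context(memory=[])

class WindowedService(BaseService):
    """Same endpoints, but the default context keeps only the last 10 items."""
    def build_default_context(self, context_id=None, **kwargs):
        ctx = super().build_default_context(context_id, **kwargs)
        ctx.memory = deque(maxlen=10)  # swap in a bounded memory window
        return ctx

ctx = WindowedService().build_default_context()
ctx.memory.extend(range(100))
print(len(ctx.memory), ctx.emit_funcs)  # 10 []
```

Transports that call `build_default_context` pick up the custom memory automatically, without any change to the endpoint code.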

get_default_agent(throw_if_missing: bool = True) Agent | None[source]#

Return the default agent of this agent service.

This is a helper wrapper that safeguards the naming conventions that have formed.

max_actions_per_run: int#

The maximum number of actions to permit while the agent is reasoning.

This is intended primarily to act as a backstop to prevent a condition in which the Agent decides to loop endlessly on tool runs that consume resources with a cost basis (e.g., prompt completions, embedding operations, vector lookups).

max_actions_per_tool: Dict[str, int] = {}#

The maximum number of actions to permit per tool name.

This is intended primarily to act as a backstop to prevent a condition in which the Agent decides to loop endlessly on tool runs that consume resources with a cost basis (e.g., prompt completions, embedding operations, vector lookups).
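The two backstops above amount to a global counter and a per-tool counter checked on every step of the reasoning loop. A minimal sketch, using stand-in names (`run_with_backstops` is not part of the Steamship API):

```python
from collections import Counter

def run_with_backstops(actions, max_actions_per_run=5, max_actions_per_tool=None):
    """Execute a stream of tool-name actions, stopping when a limit is hit."""
    max_actions_per_tool = max_actions_per_tool or {}
    per_tool = Counter()
    executed = []
    for tool_name in actions:
        if len(executed) >= max_actions_per_run:
            break  # global backstop: too many actions this run
        limit = max_actions_per_tool.get(tool_name)
        if limit is not None and per_tool[tool_name] >= limit:
            break  # per-tool backstop: this tool has looped too often
        per_tool[tool_name] += 1
        executed.append(tool_name)
    return executed

# An agent that decides to loop endlessly on "search" is cut off after 3 runs:
print(run_with_backstops(["search"] * 100, max_actions_per_tool={"search": 3}))
# → ['search', 'search', 'search']
```

Hitting a limit here simply stops the loop; the real service can instead raise or return a partial answer, but the cost-containment idea is the same.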

next_action(agent: Agent, input_blocks: List[Block], context: AgentContext) Action[source]#
prompt(prompt: str | None = None, context_id: str | None = None, **kwargs) List[Block][source]#

Run an agent with the provided text as the input.

run_action(agent: Agent, action: Action, context: AgentContext)[source]#
run_agent(agent: Agent, context: AgentContext)[source]#
set_default_agent(agent: Agent)[source]#
use_action_cache: bool#

Whether to cache agent Actions (for tool runs) by default.

use_llm_cache: bool#

Whether to cache LLM calls (for tool selection/direct responses) by default.
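Both flags enable the same kind of memoization: key a paid operation (an LLM completion, a tool-run Action) by its input and reuse the stored result. A minimal sketch, assuming a stand-in `CachingLLM` class that is not part of the Steamship API:

```python
class CachingLLM:
    """Memoize completions by prompt, mirroring what use_llm_cache enables."""
    def __init__(self, complete_fn, use_llm_cache=True):
        self.complete_fn = complete_fn
        self.use_llm_cache = use_llm_cache
        self.cache = {}
        self.calls = 0  # count of real (billed) completions

    def complete(self, prompt):
        if self.use_llm_cache and prompt in self.cache:
            return self.cache[prompt]  # cache hit: skip the paid completion
        self.calls += 1
        result = self.complete_fn(prompt)
        if self.use_llm_cache:
            self.cache[prompt] = result
        return result

llm = CachingLLM(lambda p: p.upper())
llm.complete("hello")
llm.complete("hello")
print(llm.calls)  # → 1  (the second call was served from cache)
```

The Action cache works the same way, keyed by the tool invocation rather than the prompt.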

steamship.agents.service.agent_service.build_context_appending_emit_func(context: AgentContext, make_blocks_public: bool | None = False) Callable[[List[Block], Dict[str, Any]], None][source]#

Build an emit function that will append output blocks directly to ChatHistory, via AgentContext.

NOTE: Messages will be tagged as ASSISTANT messages, as this assumes that agent output should be considered an assistant response to a USER.
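The shape of such an emit function is a closure over the context's history; the following is an illustrative sketch in which the history is a plain list of dicts rather than Steamship's ChatHistory and Block types:

```python
from typing import Any, Callable, Dict, List

def build_appending_emit_func(
    history: List[dict],
) -> Callable[[List[str], Dict[str, Any]], None]:
    """Return an emit function that appends output blocks to `history`."""
    def emit(blocks: List[str], metadata: Dict[str, Any]) -> None:
        for block in blocks:
            # Agent output is assumed to be an assistant response to a USER,
            # so every appended message is tagged ASSISTANT.
            history.append({"role": "ASSISTANT", "text": block, "meta": metadata})
    return emit

history: List[dict] = []
emit = build_appending_emit_func(history)
emit(["Hi there!"], {"request_id": "123"})
print(history[0]["role"])  # → ASSISTANT
```

Registering the returned function on a context means every block the agent emits lands in that context's history with the ASSISTANT tag.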

Module contents#