steamship.agents.tools.question_answering package#

Submodules#

steamship.agents.tools.question_answering.prompt_database_question_answerer module#

class steamship.agents.tools.question_answering.prompt_database_question_answerer.PromptDatabaseQATool(facts: List[str] | None = None, *, name: str = 'PromptDatabaseQATool', agent_description: str = 'Used to answer questions about the number of subway stations in US cities. The input is the question about subway stations. The output is the answer as a sentence.', human_description: str = 'Answers questions about the number of subway stations in US cities.', is_final: bool = False, cacheable: bool = True, rewrite_prompt: str = 'Instructions:\nPlease rewrite the following passage to be incredibly polite, to a fault.\nPassage:\n{input}\nRewritten Passage:', question_answering_prompt: str | None = "Use the following pieces of memory to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{source_text}\n\nQuestion: {{input}}\n\nHelpful Answer:")[source]#

Bases: TextRewritingTool

Example tool to illustrate how one can create a tool with a mini database embedded in a prompt.

To use:

tool = PromptDatabaseQATool(
    facts=[
        "Sentence with fact 1",
        "Sentence with fact 2",
    ],
    agent_description=(
        "Used to answer questions about SPECIFIC_THING. "
        "The input is the question and the output is the answer."
    ),
)

facts: List[str]#
question_answering_prompt: str | None#
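The doubled braces in the default question_answering_prompt suggest a two-stage formatting scheme: the facts are baked into {source_text} when the tool is built, while {{input}} survives as {input} for per-question substitution. A minimal sketch of that mechanism (our reading of the prompt defaults, not the library's actual implementation):

```python
# Placeholder facts, mirroring the docstring's usage example.
facts = ["Sentence with fact 1", "Sentence with fact 2"]

template = (
    "Use the following pieces of memory to answer the question at the end. "
    "If you don't know the answer, just say that you don't know, "
    "don't try to make up an answer.\n\n"
    "{source_text}\n\nQuestion: {{input}}\n\nHelpful Answer:"
)

# Stage 1: embed the mini fact "database" into the prompt.
# The doubled {{input}} is escaped, so it survives as a literal {input}.
stage_one = template.format(source_text="\n".join(facts))

# Stage 2: substitute the user's question at run time.
final_prompt = stage_one.format(input="Question about SPECIFIC_THING?")
print(final_prompt)
```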

steamship.agents.tools.question_answering.vector_search_learner_tool module#

Learns facts with the assistance of a VectorSearch plugin.

class steamship.agents.tools.question_answering.vector_search_learner_tool.VectorSearchLearnerTool(*, name: str = 'VectorSearchLearnerTool', agent_description: str = 'Used to remember a fact. Only use this tool if someone asks to remember or learn something. The input is a fact to learn. The output is a confirmation that the fact has been learned.', human_description: str = 'Learns a new fact and puts it in the Vector Database.', is_final: bool = False, cacheable: bool = True, embedding_index_handle: str | None = 'embedding-index', embedding_index_version: str | None = None, embedding_index_config: dict | None = {'embedder': {'config': {'dimensionality': 1536, 'model': 'text-embedding-ada-002'}, 'fetch_if_exists': True, 'plugin_handle': 'openai-embedder', 'plugin_instance_handle': 'text-embedding-ada-002'}}, embedding_index_instance_handle: str = 'default-embedding-index')[source]#

Bases: VectorSearchTool

Tool to learn new facts with the assistance of a vector search plugin.

agent_description: str#

Description for use in an agent in order to enable Action selection. It should include a short summary of what the Tool does, what the inputs to the Tool should be, and what the outputs of the tool are.

human_description: str#

Human-friendly description. Used for logging, tool indices, etc.

learn_sentence(sentence: str, context: AgentContext, metadata: dict | None = None)[source]#

Learns a single sentence-sized piece of text.

GUIDANCE: A short sentence is roughly the largest unit of text that embeds usefully for search & lookup.

name: str#

The short name for the tool. This will be used by Agents to refer to this tool during action selection.

run(tool_input: List[Block], context: AgentContext) List[Block] | Task[Any][source]#

Learns a fact with the assistance of an Embedding Index plugin.

Inputs#

tool_input: List[Block]

A list of blocks; those containing text will be learned as facts.

context: AgentContext

The active AgentContext.

Output#

output: List[Block]

A list of blocks confirming that the fact has been learned.
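Since learn_sentence expects roughly sentence-sized text, a caller with a longer passage might split it first. A minimal sketch of such pre-chunking (the splitting strategy here is our assumption, not part of the library):

```python
import re

def split_into_sentences(text: str) -> list[str]:
    """Naive splitter: break on ., !, or ? followed by whitespace."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

passage = "Fact one is short. Fact two is also short! Is fact three a question?"
sentences = split_into_sentences(passage)
for s in sentences:
    # In real use, each sentence would be passed to
    # tool.learn_sentence(s, context) against a live AgentContext.
    print(s)
```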

steamship.agents.tools.question_answering.vector_search_qa_tool module#

Answers questions with the assistance of a VectorSearch plugin.

class steamship.agents.tools.question_answering.vector_search_qa_tool.VectorSearchQATool(*, name: str = 'VectorSearchQATool', agent_description: str = ('Used to answer questions. ', 'The input should be a plain text question. ', 'The output is a plain text answer'), human_description: str = 'Answers questions about a user. This can include personal information (names, preferences, etc.).', is_final: bool = False, cacheable: bool = True, embedding_index_handle: str | None = 'embedding-index', embedding_index_version: str | None = None, embedding_index_config: dict | None = {'embedder': {'config': {'dimensionality': 1536, 'model': 'text-embedding-ada-002'}, 'fetch_if_exists': True, 'plugin_handle': 'openai-embedder', 'plugin_instance_handle': 'text-embedding-ada-002'}}, embedding_index_instance_handle: str = 'default-embedding-index', question_answering_prompt: str | None = "Use the following pieces of memory to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{source_text}\n\nQuestion: {question}\n\nHelpful Answer:", source_document_prompt: str | None = 'Source Document: {text}', load_docs_count: int = 2)[source]#

Bases: VectorSearchTool

Tool to answer questions with the assistance of a vector search plugin.

agent_description: str#

Description for use in an agent in order to enable Action selection. It should include a short summary of what the Tool does, what the inputs to the Tool should be, and what the outputs of the tool are.

answer_question(question: str, context: AgentContext) List[Block][source]#
human_description: str#

Human-friendly description. Used for logging, tool indices, etc.

load_docs_count: int#
name: str#

The short name for the tool. This will be used by Agents to refer to this tool during action selection.

question_answering_prompt: str | None#
run(tool_input: List[Block], context: AgentContext) List[Block] | Task[Any][source]#

Answers questions with the assistance of an Embedding Index plugin.

Inputs#

tool_input: List[Block]

A list of blocks; those containing text are treated as questions to answer.

context: AgentContext

The active AgentContext.

Output#

output: List[Block]

A list of blocks containing the answers.
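The signature defaults suggest how the pieces compose: the top load_docs_count search hits are each rendered through source_document_prompt, joined into {source_text}, and the question fills {question}. A sketch of that composition (our reading of the defaults; the tool's internal wiring may differ):

```python
# Defaults reproduced from the VectorSearchQATool signature.
source_document_prompt = "Source Document: {text}"
question_answering_prompt = (
    "Use the following pieces of memory to answer the question at the end. "
    "If you don't know the answer, just say that you don't know, "
    "don't try to make up an answer.\n\n"
    "{source_text}\n\nQuestion: {question}\n\nHelpful Answer:"
)
load_docs_count = 2  # default: only the top-2 hits are loaded

# Hypothetical search results, ranked best-first.
hits = ["doc about topic A", "doc about topic B", "doc about topic C"]

source_text = "\n".join(
    source_document_prompt.format(text=t) for t in hits[:load_docs_count]
)
prompt = question_answering_prompt.format(
    source_text=source_text, question="What is topic A?"
)
print(prompt)
```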

source_document_prompt: str | None#

steamship.agents.tools.question_answering.vector_search_tool module#

Provides shared configuration for tools that use a VectorSearch plugin.

class steamship.agents.tools.question_answering.vector_search_tool.VectorSearchTool(*, name: str, agent_description: str, human_description: str, is_final: bool = False, cacheable: bool = True, embedding_index_handle: str | None = 'embedding-index', embedding_index_version: str | None = None, embedding_index_config: dict | None = {'embedder': {'config': {'dimensionality': 1536, 'model': 'text-embedding-ada-002'}, 'fetch_if_exists': True, 'plugin_handle': 'openai-embedder', 'plugin_instance_handle': 'text-embedding-ada-002'}}, embedding_index_instance_handle: str = 'default-embedding-index')[source]#

Bases: Tool, ABC

Abstract Base Class that provides helper data for a tool that uses Vector Search.

embedding_index_config: dict | None#
embedding_index_handle: str | None#
embedding_index_instance_handle: str#
embedding_index_version: str | None#
get_embedding_index(client: Steamship) EmbeddingIndexPluginInstance[source]#
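The default embedding_index_config visible in the signature selects the openai-embedder plugin with the text-embedding-ada-002 model. A caller could pass a modified copy of that dict to any VectorSearchTool subclass to target a different embedding instance; a sketch (the custom handle below is a placeholder, not a real plugin instance):

```python
import copy

# The default embedding_index_config from the signature, as a plain dict.
default_config = {
    "embedder": {
        "plugin_handle": "openai-embedder",
        "plugin_instance_handle": "text-embedding-ada-002",
        "fetch_if_exists": True,
        "config": {"model": "text-embedding-ada-002", "dimensionality": 1536},
    }
}

custom_config = copy.deepcopy(default_config)
# "my-custom-model" is hypothetical; substitute a real instance handle.
custom_config["embedder"]["plugin_instance_handle"] = "my-custom-model"
custom_config["embedder"]["config"]["model"] = "my-custom-model"

print(custom_config["embedder"]["plugin_instance_handle"])
```

Using deepcopy keeps the default dict untouched, which matters because dict defaults are shared across instances.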

Module contents#

class steamship.agents.tools.question_answering.PromptDatabaseQATool(facts: List[str] | None = None, *, name: str = 'PromptDatabaseQATool', agent_description: str = 'Used to answer questions about the number of subway stations in US cities. The input is the question about subway stations. The output is the answer as a sentence.', human_description: str = 'Answers questions about the number of subway stations in US cities.', is_final: bool = False, cacheable: bool = True, rewrite_prompt: str = 'Instructions:\nPlease rewrite the following passage to be incredibly polite, to a fault.\nPassage:\n{input}\nRewritten Passage:', question_answering_prompt: str | None = "Use the following pieces of memory to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{source_text}\n\nQuestion: {{input}}\n\nHelpful Answer:")[source]#

Bases: TextRewritingTool

Example tool to illustrate how one can create a tool with a mini database embedded in a prompt.

To use:

tool = PromptDatabaseQATool(
    facts=[
        "Sentence with fact 1",
        "Sentence with fact 2",
    ],
    agent_description=(
        "Used to answer questions about SPECIFIC_THING. "
        "The input is the question and the output is the answer."
    ),
)

agent_description: str#

Description for use in an agent in order to enable Action selection. It should include a short summary of what the Tool does, what the inputs to the Tool should be, and what the outputs of the tool are.

cacheable: bool#

Whether runs of this Tool should be cached based on inputs (if caching is enabled in the AgentContext for a run). Setting this to False will prevent any Actions that involve this tool from being cached, meaning that every Action using this Tool will result in a call to run. By default, Tools are considered cacheable.

facts: List[str]#
human_description: str#

Human-friendly description. Used for logging, tool indices, etc.

is_final: bool#

Whether actions performed by this tool should have their is_final bit marked.

Setting this to True means that the output of this tool will halt the reasoning loop. Its output will be returned directly to the user.

name: str#

The short name for the tool. This will be used by Agents to refer to this tool during action selection.

question_answering_prompt: str | None#
rewrite_prompt: str#
class steamship.agents.tools.question_answering.VectorSearchLearnerTool(*, name: str = 'VectorSearchLearnerTool', agent_description: str = 'Used to remember a fact. Only use this tool if someone asks to remember or learn something. The input is a fact to learn. The output is a confirmation that the fact has been learned.', human_description: str = 'Learns a new fact and puts it in the Vector Database.', is_final: bool = False, cacheable: bool = True, embedding_index_handle: str | None = 'embedding-index', embedding_index_version: str | None = None, embedding_index_config: dict | None = {'embedder': {'config': {'dimensionality': 1536, 'model': 'text-embedding-ada-002'}, 'fetch_if_exists': True, 'plugin_handle': 'openai-embedder', 'plugin_instance_handle': 'text-embedding-ada-002'}}, embedding_index_instance_handle: str = 'default-embedding-index')[source]#

Bases: VectorSearchTool

Tool to learn new facts with the assistance of a vector search plugin.

agent_description: str#

Description for use in an agent in order to enable Action selection. It should include a short summary of what the Tool does, what the inputs to the Tool should be, and what the outputs of the tool are.

cacheable: bool#

Whether runs of this Tool should be cached based on inputs (if caching is enabled in the AgentContext for a run). Setting this to False will prevent any Actions that involve this tool from being cached, meaning that every Action using this Tool will result in a call to run. By default, Tools are considered cacheable.

embedding_index_config: dict | None#
embedding_index_handle: str | None#
embedding_index_instance_handle: str#
embedding_index_version: str | None#
human_description: str#

Human-friendly description. Used for logging, tool indices, etc.

is_final: bool#

Whether actions performed by this tool should have their is_final bit marked.

Setting this to True means that the output of this tool will halt the reasoning loop. Its output will be returned directly to the user.

learn_sentence(sentence: str, context: AgentContext, metadata: dict | None = None)[source]#

Learns a single sentence-sized piece of text.

GUIDANCE: A short sentence is roughly the largest unit of text that embeds usefully for search & lookup.

name: str#

The short name for the tool. This will be used by Agents to refer to this tool during action selection.

run(tool_input: List[Block], context: AgentContext) List[Block] | Task[Any][source]#

Learns a fact with the assistance of an Embedding Index plugin.

Inputs#

tool_input: List[Block]

A list of blocks; those containing text will be learned as facts.

context: AgentContext

The active AgentContext.

Output#

output: List[Block]

A list of blocks confirming that the fact has been learned.

class steamship.agents.tools.question_answering.VectorSearchQATool(*, name: str = 'VectorSearchQATool', agent_description: str = ('Used to answer questions. ', 'The input should be a plain text question. ', 'The output is a plain text answer'), human_description: str = 'Answers questions about a user. This can include personal information (names, preferences, etc.).', is_final: bool = False, cacheable: bool = True, embedding_index_handle: str | None = 'embedding-index', embedding_index_version: str | None = None, embedding_index_config: dict | None = {'embedder': {'config': {'dimensionality': 1536, 'model': 'text-embedding-ada-002'}, 'fetch_if_exists': True, 'plugin_handle': 'openai-embedder', 'plugin_instance_handle': 'text-embedding-ada-002'}}, embedding_index_instance_handle: str = 'default-embedding-index', question_answering_prompt: str | None = "Use the following pieces of memory to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{source_text}\n\nQuestion: {question}\n\nHelpful Answer:", source_document_prompt: str | None = 'Source Document: {text}', load_docs_count: int = 2)[source]#

Bases: VectorSearchTool

Tool to answer questions with the assistance of a vector search plugin.

agent_description: str#

Description for use in an agent in order to enable Action selection. It should include a short summary of what the Tool does, what the inputs to the Tool should be, and what the outputs of the tool are.

answer_question(question: str, context: AgentContext) List[Block][source]#
cacheable: bool#

Whether runs of this Tool should be cached based on inputs (if caching is enabled in the AgentContext for a run). Setting this to False will prevent any Actions that involve this tool from being cached, meaning that every Action using this Tool will result in a call to run. By default, Tools are considered cacheable.

embedding_index_config: dict | None#
embedding_index_handle: str | None#
embedding_index_instance_handle: str#
embedding_index_version: str | None#
human_description: str#

Human-friendly description. Used for logging, tool indices, etc.

is_final: bool#

Whether actions performed by this tool should have their is_final bit marked.

Setting this to True means that the output of this tool will halt the reasoning loop. Its output will be returned directly to the user.

load_docs_count: int#
name: str#

The short name for the tool. This will be used by Agents to refer to this tool during action selection.

question_answering_prompt: str | None#
run(tool_input: List[Block], context: AgentContext) List[Block] | Task[Any][source]#

Answers questions with the assistance of an Embedding Index plugin.

Inputs#

tool_input: List[Block]

A list of blocks; those containing text are treated as questions to answer.

context: AgentContext

The active AgentContext.

Output#

output: List[Block]

A list of blocks containing the answers.

source_document_prompt: str | None#