inspect_ai.solver
Generation
generate
Generate output from the model and append it to task message history.
generate() is the default solver if none is specified for a given task.
@solver
def generate(
tool_calls: Literal["loop", "single", "none"] = "loop",
**kwargs: Unpack[GenerateConfigArgs],
) -> Solver

tool_calls: Literal['loop', 'single', 'none']
Resolve tool calls:
- "loop" resolves tool calls and then invokes generate(), proceeding in a loop which terminates when there are no more tool calls or message_limit or token_limit is exceeded. This is the default behavior.
- "single" resolves at most a single set of tool calls and then returns.
- "none" does not resolve tool calls at all (in this case you will need to invoke call_tools() directly).
**kwargs: Unpack[GenerateConfigArgs]
Optional generation config arguments.
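The tool_calls="loop" behavior described above can be sketched with plain functions standing in for the model and tools (an illustration of the mechanics only, not the actual inspect_ai implementation):

```python
# Minimal sketch of the tool_calls="loop" behavior: resolve tool calls,
# re-generate, and stop when there are no more tool calls or a limit is hit.

def generate_loop(model, messages, message_limit=10):
    while True:
        output = model(messages)                  # one generate() call
        messages.append(output)
        calls = output.get("tool_calls", [])
        if not calls or len(messages) >= message_limit:
            return messages
        for call in calls:                        # resolve each tool call
            messages.append(
                {"role": "tool", "content": call["fn"](*call["args"])}
            )

# Fake model for illustration: asks for one addition, then answers.
def fake_model(messages):
    if not any(m.get("role") == "tool" for m in messages):
        return {
            "role": "assistant",
            "tool_calls": [{"fn": lambda a, b: a + b, "args": (2, 3)}],
        }
    return {"role": "assistant", "content": "The answer is 5."}

history = generate_loop(fake_model, [{"role": "user", "content": "What is 2+3?"}])
```

The loop terminates on the second generation because the fake model, having seen the tool result, produces a message with no tool calls.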
use_tools
Inject tools into the task state to be used in generate().
@solver
def use_tools(
*tools: Tool | ToolDef | ToolSource | Sequence[Tool | ToolDef | ToolSource],
tool_choice: ToolChoice | None = "auto",
append: bool = False,
) -> Solver

*tools: Tool | ToolDef | ToolSource | Sequence[Tool | ToolDef | ToolSource]
One or more tools or lists of tools to make available to the model. If no tools are passed, then no change to the currently available set of tools is made.
tool_choice: ToolChoice | None
Directive indicating which tools the model should use. If None is passed, then no change to tool_choice is made.
append: bool
If True, then the passed-in tools are appended to the existing tools; otherwise any existing tools are replaced (the default).
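The append/replace semantics described above can be sketched as follows (assumed behavior based on the parameter docs, with plain strings standing in for tools):

```python
# Sketch of use_tools() append/replace semantics (illustration only).

def apply_use_tools(existing, new_tools, append=False):
    if not new_tools:            # no tools passed: no change
        return existing
    if append:                   # append=True: add to the existing tools
        return existing + new_tools
    return list(new_tools)       # default: replace any existing tools

replaced = apply_use_tools(["search"], ["calculator"])
appended = apply_use_tools(["search"], ["calculator"], append=True)
unchanged = apply_use_tools(["search"], [])
```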
Prompting
prompt_template
Parameterized prompt template.
Prompt template containing a {prompt} placeholder and any number of additional params. All values contained in sample metadata and store are also automatically included in the params.
@solver
def prompt_template(template: str, **params: Any) -> Solver

template: str
Template for prompt.
**params: Any
Parameters to fill into the template.
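The template-filling behavior can be sketched with str.format(): the {prompt} placeholder receives the current user prompt, and the remaining placeholders are filled from the params (plus sample metadata). Names here are hypothetical stand-ins:

```python
# Sketch of how prompt_template() fills its template (illustration only).

def fill_prompt_template(template, prompt, metadata=None, **params):
    # metadata and explicit params are merged; {prompt} always wins
    variables = {**(metadata or {}), **params, "prompt": prompt}
    return template.format(**variables)

filled = fill_prompt_template(
    "Answer in the style of {persona}:\n\n{prompt}",
    prompt="What is photosynthesis?",
    persona="a pirate",
)
```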
system_message
Solver which inserts a system message into the conversation.
System message template containing any number of optional params for substitution using the str.format() method. All values contained in sample metadata and store are also automatically included in the params.
The new message will go after other system messages (if there are none it will be inserted at the beginning of the conversation).
@solver
def system_message(template: str, **params: Any) -> Solver

template: str
Template for system message.
**params: Any
Parameters to fill into the template.
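The insertion rule described above (after existing system messages, otherwise at the start of the conversation) can be sketched with (role, content) tuples standing in for chat messages:

```python
# Sketch of the system_message() insertion rule (illustration only).

def insert_system_message(messages, content):
    index = 0
    for i, (role, _) in enumerate(messages):
        if role == "system":
            index = i + 1        # place after the last system message
    messages.insert(index, ("system", content))
    return messages

conv = [("system", "Be concise."), ("user", "Hi")]
insert_system_message(conv, "Answer in French.")
```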
user_message
Solver which inserts a user message into the conversation.
User message template containing any number of optional params for substitution using the str.format() method. All values contained in sample metadata and store are also automatically included in the params.
@solver
def user_message(template: str, **params: Any) -> Solver

template: str
Template for user message.
**params: Any
Parameters to fill into the template.
assistant_message
Solver which inserts an assistant message into the conversation.
Assistant message template containing any number of optional params for substitution using the str.format() method. All values contained in sample metadata and store are also automatically included in the params.
@solver
def assistant_message(template: str, **params: Any) -> Solver

template: str
Template for assistant message.
**params: Any
Parameters to fill into the template.
chain_of_thought
Solver which modifies the user prompt to encourage chain of thought.
@solver
def chain_of_thought(template: str = DEFAULT_COT_TEMPLATE) -> Solver

template: str
String or path to file containing CoT template. The template uses a single variable: prompt.
self_critique
Solver which uses a model to critique the original answer.
The critique_template is used to generate a critique and the completion_template is used to play that critique back to the model for an improved response. Note that you can specify an alternate model for critique (you don’t need to use the model being evaluated).
@solver
def self_critique(
critique_template: str | None = None,
completion_template: str | None = None,
model: str | Model | None = None,
) -> Solver

critique_template: str | None
String or path to file containing critique template. The template uses two variables: question and completion. Variables from sample metadata are also available in the template.
completion_template: str | None
String or path to file containing completion template. The template uses three variables: question, completion, and critique.
model: str | Model | None
Alternate model to be used for critique (by default the model being evaluated is used).
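The two-phase flow described above (critique, then improved completion) can be sketched with stand-in templates and a fake model function (not the built-in templates):

```python
# Sketch of the self_critique() flow: one model call critiques the answer,
# a second call plays the critique back for an improved answer.

CRITIQUE = (
    "Question: {question}\nAnswer: {completion}\nCritique this answer."
)
COMPLETION = (
    "Question: {question}\nAnswer: {completion}\n"
    "Critique: {critique}\nGive an improved answer."
)

def self_critique_flow(model, question, completion):
    critique = model(CRITIQUE.format(question=question, completion=completion))
    return model(
        COMPLETION.format(
            question=question, completion=completion, critique=critique
        )
    )

# Fake model for illustration: critiques once, then improves.
def fake_model(prompt):
    return "Too vague." if "Critique this" in prompt else "Improved: 4"

improved = self_critique_flow(fake_model, "What is 2+2?", "a number")
```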
multiple_choice
Multiple choice question solver. Formats a multiple choice question prompt, then calls generate().
Note that due to the way this solver works, it has some constraints:
- The Sample must have the choices attribute set.
- The only built-in compatible scorer is the choice scorer.
- It calls generate() internally, so you don’t need to call it again.
@solver
def multiple_choice(
*,
template: str | None = None,
cot: bool = False,
multiple_correct: bool = False,
max_tokens: int | None = None,
**kwargs: Unpack[DeprecatedArgs],
) -> Solver

template: str | None
Template to use for the multiple choice question. The defaults vary based on the options and are taken from the MultipleChoiceTemplate enum. The template will have questions and possible answers substituted into it before being sent to the model. Consequently it requires three specific template variables:
- {question}: The question to be asked.
- {choices}: The choices available, which will be formatted as a list of A) … B) … etc. before sending to the model.
- {letters}: (optional) A string of letters representing the choices, e.g. “A,B,C”. Used to be explicit to the model about the possible answers.
cot: bool
Default False. Whether the solver should perform chain-of-thought reasoning before answering. NOTE: this has no effect if you provide a custom template.
multiple_correct: bool
Default False. Whether to allow multiple answers to the multiple choice question. For example, “What numbers are squares? A) 3, B) 4, C) 9” has multiple correct answers, B and C. Leave as False if there’s exactly one correct answer from the choices available. NOTE: this has no effect if you provide a custom template.
max_tokens: int | None
Default None. Controls the number of tokens generated through the call to generate().
**kwargs: Unpack[DeprecatedArgs]
Deprecated arguments for backward compatibility.
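Assembling the three template variables described above ({question}, {choices}, {letters}) can be sketched as follows; the template string here is hypothetical, not the built-in one:

```python
import string

# Sketch of multiple choice prompt assembly (illustration only).
def format_multiple_choice(question, choices):
    letters = ",".join(string.ascii_uppercase[: len(choices)])
    formatted = "\n".join(
        f"{letter}) {choice}"
        for letter, choice in zip(string.ascii_uppercase, choices)
    )
    template = "{question}\n\n{choices}\n\nAnswer with one of {letters}."
    return template.format(question=question, choices=formatted, letters=letters)

prompt = format_multiple_choice("What is 2+2?", ["3", "4", "5"])
```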
Composition
chain
Compose a solver from multiple other solvers and/or agents.
Solvers are executed in turn, and a solver step event is added to the transcript for each. If a solver returns a state with completed=True, the chain is terminated early.
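The early-termination behavior can be sketched with plain functions standing in for solvers and a dict standing in for TaskState:

```python
# Sketch of chain() execution order and early termination (illustration only).

def run_chain(state, solvers):
    for solve in solvers:
        state = solve(state)
        if state.get("completed"):
            break                # a completed state terminates the chain
    return state

def step_one(state):
    return {**state, "steps": state["steps"] + ["one"]}

def finishing_step(state):
    return {**state, "steps": state["steps"] + ["two"], "completed": True}

def never_runs(state):
    return {**state, "steps": state["steps"] + ["three"]}

result = run_chain({"steps": []}, [step_one, finishing_step, never_runs])
```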
@solver
def chain(
*solvers: Solver | Agent | list[Solver] | list[Solver | Agent],
) -> Solver

fork
Fork the TaskState and evaluate it against multiple solvers in parallel.
Run several solvers against independent copies of a TaskState. Each Solver gets its own copy of the TaskState and is run (in parallel) in an independent Subtask (meaning that it also has its own independent Store that doesn’t affect the Store of other subtasks or the parent).
async def fork(
state: TaskState, solvers: Solver | list[Solver]
) -> TaskState | list[TaskState]

Types
Solver
Contribute to solving an evaluation task.
Transform a TaskState, returning the new state. Solvers may optionally call the generate() function to create a new state resulting from model generation. Solvers may also do prompt engineering or other types of elicitation.
class Solver(Protocol):
async def __call__(
self,
state: TaskState,
generate: Generate,
) -> TaskState

Examples
@solver
def prompt_cot(template: str) -> Solver:
    async def solve(state: TaskState, generate: Generate) -> TaskState:
        # insert chain of thought prompt
        return state

    return solve

SolverSpec
Solver specification used to (re-)create solvers.
@dataclass(frozen=True)
class SolverSpec

Attributes

solver: str
Solver name (simple name or file.py@name).
args: dict[str, Any]
Solver arguments.
TaskState
The TaskState represents the internal state of the Task being run for a single Sample.
The TaskState is passed to and returned from each solver during a sample’s evaluation. It allows us to maintain the manipulated message history, the tools available to the model, the final output of the model, and whether the task is completed or has hit a limit.
class TaskState

Attributes

model: ModelName
Name of model being evaluated.
sample_id: int | str
Unique id for sample.
epoch: int
Epoch number for sample.
input: str | list[ChatMessage]
Input from the Sample, should be considered immutable.
input_text: str
Convenience function for accessing the initial input from the Sample as a string. If the input is a list[ChatMessage], this will return the text from the last chat message.
user_prompt: ChatMessageUser
User prompt for this state. Tasks are very general and can have many types of inputs. However, in many cases solvers assume they can interact with the state as a “chat” in a predictable fashion (e.g. prompt engineering solvers). This property enables easy read and write access to the user chat prompt. Raises an exception if there is no user prompt.
metadata: dict[str, Any]
messages: list[ChatMessage]
Chat conversation history for sample. This will generally get appended to every time a generate call is made to the model. Useful for both debug and for solvers/scorers to assess model performance or choose the next step.
output: ModelOutput
The ‘final’ model output once we’ve completed all solving.
For simple evals this may just be the last message from the conversation history, but more complex solvers may set this directly.
store: Store
Store for shared data.
tools: list[Tool]
Tools available to the model.
tool_choice: ToolChoice | None
Tool choice directive.
message_limit: int | None
Limit on total messages allowed per conversation.
token_limit: int | None
Limit on total tokens allowed per conversation.
token_usage: int
Total tokens used for the current sample.
completed: bool
Is the task completed. Additionally, checks for an operator interrupt of the sample.
target: Target
The scoring target for this Sample.
scores: dict[str, Score] | None
Scores yielded by running task.
uuid: str
Globally unique identifier for sample run.
Methods
- metadata_as
-
Pydantic model interface to metadata.
def metadata_as(self, metadata_cls: Type[MT]) -> MT

metadata_cls: Type[MT]
Pydantic model type.
- store_as
-
Pydantic model interface to the store.
def store_as(self, model_cls: Type[SMT], instance: str | None = None) -> SMT

model_cls: Type[SMT]
Pydantic model type (must derive from StoreModel).
instance: str | None
Optional instance name for store (enables multiple instances of a given StoreModel type within a single sample).
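What metadata_as() and store_as() provide, conceptually, is a typed view over an untyped dict. A minimal sketch using a stdlib dataclass in place of the Pydantic model the real API expects (the model class here is hypothetical):

```python
from dataclasses import dataclass, fields

# Hypothetical metadata model for illustration.
@dataclass
class WebSearchMeta:
    query: str
    max_results: int = 5

# Sketch of metadata_as(): build a typed view from an untyped dict,
# keeping only the fields the model declares.
def metadata_as(metadata, model_cls):
    names = {f.name for f in fields(model_cls)}
    return model_cls(**{k: v for k, v in metadata.items() if k in names})

meta = metadata_as({"query": "inspect ai", "extra": "ignored"}, WebSearchMeta)
```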
Generate
Generate using the model and add the assistant message to the task state.
class Generate(Protocol):
async def __call__(
self,
state: TaskState,
tool_calls: Literal["loop", "single", "none"] = "loop",
**kwargs: Unpack[GenerateConfigArgs],
) -> TaskState

state: TaskState
Beginning task state.
tool_calls: Literal['loop', 'single', 'none']
- "loop" resolves tool calls and then invokes generate(), proceeding in a loop which terminates when there are no more tool calls, or message_limit or token_limit is exceeded. This is the default behavior.
- "single" resolves at most a single set of tool calls and then returns.
- "none" does not resolve tool calls at all (in this case you will need to invoke call_tools() directly).
**kwargs: Unpack[GenerateConfigArgs]
Optional generation config arguments.
Decorators
solver
Decorator for registering solvers.
def solver(
name: str | Callable[P, SolverType],
) -> Callable[[Callable[P, Solver]], Callable[P, Solver]] | Callable[P, Solver]

name: str | Callable[P, SolverType]
Optional name for solver. If the decorator has no name argument then the name of the underlying Callable[P, SolverType] object will be used to automatically assign a name.
Examples
@solver
def prompt_cot(template: str) -> Solver:
    async def solve(state: TaskState, generate: Generate) -> TaskState:
        # insert chain of thought prompt
        return state

    return solve