Reasoning
Overview
Reasoning models such as the OpenAI o-series, Claude 3.7 Sonnet, Gemini 2.5 Flash, Grok 3, and DeepSeek-R1 have additional options that can be used to tailor their behaviour. In some cases they also make available full or partial reasoning traces for the chains of thought that led to their responses.
In this article we’ll first cover the basics of Reasoning Content and Reasoning Options, then review the usage and options supported by various reasoning models.
Reasoning Content
Many reasoning models allow you to see their underlying chain of thought in a special “thinking” or reasoning block. While reasoning is presented in different ways depending on the model, in the Inspect API it is normalised into `ContentReasoning` blocks which are parallel to `ContentText`, `ContentImage`, etc.
Reasoning blocks are presented in their own region in both Inspect View and terminal conversation views.
While reasoning content isn’t made available in a standard fashion across models, Inspect does attempt to capture it using several heuristics, including responses that include a `reasoning` or `reasoning_content` field in the assistant message, assistant content that includes `<think></think>` tags, as well as explicit APIs for models that support them (e.g. Claude 3.7).

In addition, some models make available `reasoning_tokens`, which will be added to the standard `ModelUsage` object returned along with output.
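For example, here is a minimal sketch of reading reasoning content back from generated output (assuming `ContentReasoning` is importable from `inspect_ai.model` alongside the other content types and exposes the captured text via a `reasoning` field):

```python
import asyncio

from inspect_ai.model import ContentReasoning, GenerateConfig, get_model

async def show_reasoning() -> None:
    # request explicit reasoning (required for hybrid models like Claude 3.7)
    model = get_model(
        "anthropic/claude-3-7-sonnet-latest",
        config=GenerateConfig(reasoning_tokens=4096),
    )
    output = await model.generate("What is 17 * 43?")

    # assistant content is either a plain string or a list of content blocks
    content = output.message.content
    if isinstance(content, list):
        for block in content:
            if isinstance(block, ContentReasoning):
                print(block.reasoning)  # the captured chain of thought

    # reasoning_tokens (where reported) are added to the standard ModelUsage
    if output.usage and output.usage.reasoning_tokens:
        print(f"reasoning tokens: {output.usage.reasoning_tokens}")

asyncio.run(show_reasoning())
```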
Reasoning Options
The following reasoning options are available from the CLI and within GenerateConfig:
| Option | Description | Default | Models |
|---|---|---|---|
| `reasoning_effort` | Constrains effort on reasoning for reasoning models (`low`, `medium`, or `high`). | `medium` | OpenAI o-series, Grok 3 |
| `reasoning_tokens` | Maximum number of tokens to use for reasoning. | (none) | Claude 3.7+ and Gemini 2.5+ |
| `reasoning_summary` | Provide summary of reasoning steps (`concise`, `detailed`, `auto`). Use `auto` to access the most detailed summarizer available for the current model. | (none) | OpenAI o-series |
| `reasoning_history` | Include reasoning in message history sent to model (`none`, `all`, `last`, or `auto`). | `auto` | All models |
As you can see from above, models have different means of specifying the tokens to allocate for reasoning (`reasoning_effort` and `reasoning_tokens`). The two options don’t map precisely onto each other, so if you are doing an evaluation with multiple reasoning models you should specify both. For example:
```python
eval(
    task,
    model=["openai/o3-mini", "anthropic/claude-3-7-sonnet-20250219"],
    reasoning_effort="medium",  # openai and grok specific
    reasoning_tokens=4096,      # anthropic and gemini specific
    reasoning_summary="auto",   # openai specific
)
```
The `reasoning_history` option lets you control how much of the model’s previous reasoning is presented in the message history sent to `generate()`. The default is `auto`, which uses a provider-specific recommended default (normally `all`). Use `last` to prevent reasoning from overwhelming the context window.
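For example (a sketch, assuming `eval()` accepts `reasoning_history` as a generation config keyword argument like the other options above):

```python
from inspect_ai import eval

# keep only the most recent reasoning block in the replayed history
eval("math.py", model="openai/o3", reasoning_history="last")
```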
OpenAI o-series
OpenAI has several reasoning models available, including the o1, o3, and o4 families of models. Learn more about the specific models available in the OpenAI Models documentation.
Reasoning Effort
You can condition the amount of reasoning done via the `reasoning_effort` option, which can be set to `low`, `medium`, or `high` (the default is `medium` if not specified). For example:
```bash
inspect eval math.py --model openai/o3 --reasoning-effort high
```
Reasoning History
The `reasoning_summary` and `responses_store` options described below are available only in the development version of Inspect. To install the development version from GitHub:
```bash
pip install git+https://github.com/UKGovernmentBEIS/inspect_ai
```
You can see a summary of the model’s reasoning by specifying the `reasoning_summary` option. Available options are `concise`, `detailed`, and `auto` (`auto` is recommended to access the most detailed summarizer available for the current model). For example:
```bash
inspect eval math.py --model openai/o3 --reasoning-summary auto
```
When using o-series models, Inspect automatically enables the `store` option so that reasoning blocks can be retrieved by the model from the conversation history. To control this behavior explicitly, use the `responses_store` model argument. For example:
```bash
inspect eval math.py --model openai/o4-mini -M responses_store=false
```
You might need to do this, for example, if you have a non-logging interface to OpenAI models (as `store` is incompatible with non-logging interfaces).
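The same model argument can be set from Python (a sketch, assuming `get_model()` forwards provider model args as keyword arguments):

```python
from inspect_ai.model import get_model

# disable the OpenAI responses store for this model instance
model = get_model("openai/o4-mini", responses_store=False)
```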
Claude 3.7 Sonnet
Anthropic’s Claude 3.7 Sonnet model includes optional support for extended thinking. Unlike other reasoning models, 3.7 Sonnet is a hybrid model that supports both normal and reasoning modes. This means that you need to explicitly request reasoning by specifying the `reasoning_tokens` option, for example:
```bash
inspect eval math.py \
  --model anthropic/claude-3-7-sonnet-latest \
  --reasoning-tokens 4096
```
Tokens
The `max_tokens` for any given request is determined as follows:

- If you only specify `reasoning_tokens`, then `max_tokens` will be set to `4096 + reasoning_tokens` (as 4096 is the standard Inspect default for Anthropic max tokens).
- If you explicitly specify `max_tokens`, that value will be used as the max tokens without modification (so it should accommodate sufficient space for both your `reasoning_tokens` and normal output).
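As a worked example of the first rule (hypothetical values):

```python
reasoning_tokens = 4096
anthropic_default_max_tokens = 4096  # Inspect's standard Anthropic default

# no explicit max_tokens given, so Inspect derives it as default + reasoning
max_tokens = anthropic_default_max_tokens + reasoning_tokens
print(max_tokens)  # 8192 (reasoning budget plus room for normal output)
```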
Inspect will automatically use response streaming whenever extended thinking is enabled to mitigate networking issues that can occur with long-running requests.
History
Note that Anthropic requests that all reasoning blocks be played back to the model in chat conversations (although the model will only use the last reasoning block and will not bill for tokens on previous ones). Consequently, the `reasoning_history` option has no effect for Claude 3.7 models (it effectively always uses `last`).
Tools
When using tools, you should read Anthropic’s documentation on extended thinking with tool use. In short, thinking occurs on the first assistant turn and then the normal tool loop is run without additional thinking. Thinking is re-triggered when the tool loop is exited (i.e. a user message without a tool result is received).
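For instance, a minimal tool-use task with extended thinking enabled might look like the following sketch (the `bash()` tool and a `local` sandbox are illustrative choices, not requirements):

```python
from inspect_ai import Task, eval, task
from inspect_ai.dataset import Sample
from inspect_ai.solver import generate, use_tools
from inspect_ai.tool import bash

@task
def file_count():
    return Task(
        dataset=[Sample(input="How many files are in /etc?")],
        solver=[use_tools(bash()), generate()],
        sandbox="local",  # bash() requires a sandbox environment
    )

# thinking occurs on the first assistant turn; the tool loop then runs
# without additional thinking until a plain user message is received
eval(file_count(), model="anthropic/claude-3-7-sonnet-latest", reasoning_tokens=4096)
```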
Google Gemini
Google currently makes available several Gemini reasoning models, the most recent of which are:

- Gemini 2.5 Flash: `google/gemini-2.5-flash-preview-04-17`
- Gemini 2.5 Pro: `google/gemini-2.5-pro-preview-03-25`
You can use the `--reasoning-tokens` option to control the amount of reasoning used by these models. For example:
```bash
inspect eval math.py \
  --model google/gemini-2.5-flash-preview-04-17 \
  --reasoning-tokens 4096
```
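Equivalently from Python (a sketch, assuming generation options are passed to `get_model()` via `GenerateConfig`):

```python
from inspect_ai.model import GenerateConfig, get_model

model = get_model(
    "google/gemini-2.5-flash-preview-04-17",
    config=GenerateConfig(reasoning_tokens=4096),
)
```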
The Gemini API supports including reasoning in model output; however, this feature is not yet enabled for any of the deployed reasoning models.
Grok
Grok currently makes available two reasoning models:

- `grok/grok-3-mini-beta`
- `grok/grok-3-fast-beta`
You can condition the amount of reasoning done by Grok using the [`reasoning_effort`](https://docs.x.ai/docs/guides/reasoning) option, which can be set to `low` or `high`. For example:
```bash
inspect eval math.py --model grok/grok-3-mini-beta --reasoning-effort high
```
DeepSeek-R1
DeepSeek-R1 is an open-weights reasoning model from DeepSeek. It is generally available either in its original form or as a distillation of R1 based on another open-weights model (e.g. Qwen- or Llama-based models).
DeepSeek models can be accessed directly using their OpenAI interface. Further, a number of model hosting providers supported by Inspect make DeepSeek available, for example:
| Provider | Model |
|---|---|
| Together AI | `together/deepseek-ai/DeepSeek-R1` (docs) |
| Groq | `groq/deepseek-r1-distill-llama-70b` (docs) |
| Ollama | `ollama/deepseek-r1:<tag>` (docs) |
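For direct access via DeepSeek’s OpenAI-compatible API, one option is Inspect’s generic OpenAI-compatible provider (a sketch; the `openai-api` provider prefix, the `DEEPSEEK_*` environment variable names, and the `deepseek-reasoner` model name are assumptions based on Inspect and DeepSeek conventions):

```python
import os

from inspect_ai import eval

# the generic openai-api provider reads <PROVIDER>_API_KEY / <PROVIDER>_BASE_URL
os.environ["DEEPSEEK_API_KEY"] = "your-api-key"  # placeholder
os.environ["DEEPSEEK_BASE_URL"] = "https://api.deepseek.com"

eval("math.py", model="openai-api/deepseek/deepseek-reasoner")
```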
There isn’t currently a way to customise the `reasoning_effort` of DeepSeek models, although they have indicated that this will be available soon.
Reasoning content from DeepSeek models is captured using either the `reasoning_content` field made available by the hosted DeepSeek API or the `<think>` tags used by various hosting providers.
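As an illustration of the `<think>` tag heuristic, here is a hypothetical standalone version (Inspect performs this extraction internally):

```python
import re

def split_reasoning(text: str) -> tuple[str | None, str]:
    """Split leading <think>...</think> reasoning from the visible answer."""
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return None, text

reasoning, answer = split_reasoning("<think>2 + 2 = 4</think>The answer is 4.")
print(reasoning)  # 2 + 2 = 4
print(answer)     # The answer is 4.
```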