# TestAgent

The `TestAgent` class runs prompts with MCP tools enabled. It handles the multi-step prompt loop and returns rich result objects.
## Import
## Constructor

### Parameters
Configuration for the test agent.
### TestAgentOptions
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| `tools` | `Record<string, Tool>` | Yes | - | MCP tools from `manager.getTools()` |
| `model` | `string` | Yes | - | Model identifier in `provider/model` format |
| `apiKey` | `string` | Yes | - | API key for the LLM provider |
| `systemPrompt` | `string` | No | `undefined` | System prompt for the LLM |
| `temperature` | `number` | No | `undefined` | Sampling temperature (0-2). If undefined, uses the model default. |
| `maxSteps` | `number` | No | `10` | Maximum agentic loop iterations |
| `customProviders` | `Record<string, CustomProvider>` | No | `undefined` | Custom LLM provider definitions |
| `mcpClientManager` | `MCPClientManager` | No | `undefined` | The same connected manager used for `connectToServer` / `getToolsForAiSdk`. Required to populate `getWidgetSnapshots()` on each `PromptResult` (see note below). |
### Example
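A minimal sketch of the documented options shape, using local stand-in types for illustration; in real code, `TestAgent` and `Tool` come from the SDK's exports (see Import above):

```typescript
// Local stand-in types for illustration; real code imports these from the SDK.
// (customProviders and mcpClientManager are omitted here for brevity.)
type Tool = { description?: string };

interface TestAgentOptions {
  tools: Record<string, Tool>;
  model: string;          // "provider/model" format
  apiKey: string;
  systemPrompt?: string;
  temperature?: number;   // 0-2; model default when undefined
  maxSteps?: number;      // defaults to 10
}

const options: TestAgentOptions = {
  tools: { get_weather: { description: "Current weather for a city" } }, // normally manager.getTools()
  model: "openai/gpt-4o", // example model id
  apiKey: "sk-example",
  systemPrompt: "You are a weather assistant.",
  maxSteps: 5,
};
// const agent = new TestAgent(options);
console.log(Object.keys(options.tools)); // tool names available to the agent
```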
### MCP Apps, traces, and `mcpClientManager`

MCP App widgets are backed by HTML served from an MCP resource (`ui.resourceUri`), not by the tool's JSON result alone. After each tool call, `TestAgent` uses the manager's `readResource` to fetch that HTML and attach `widgetSnapshots` to the `PromptResult`. When you upload results to MCPJam, those snapshots are stored so the Evals trace viewer can replay the widget offline. If you omit `mcpClientManager`, prompts still run and tools still execute, but `widgetSnapshots` stays empty and uploaded traces will only contain messages and spans (no embedded app replay). Passing the manager is also what enables replay credential capture for authenticated HTTP servers when reporting to MCPJam.

## Methods
### prompt()

Sends a prompt to the LLM and returns the result. Call `getWidgetSnapshots()` on the returned result (or `toEvalResult()` when reporting) to access MCP App HTML captured during the run when `mcpClientManager` was set.
#### Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `message` | `string` | Yes | The user prompt |
| `options` | `PromptOptions` | No | Additional options |
#### PromptOptions
| Property | Type | Description |
|---|---|---|
| `context` | `PromptResult \| PromptResult[]` | Previous result(s) for multi-turn conversations |
| `abortSignal` | `AbortSignal` | Cancels the prompt runtime when aborted |
| `stopWhen` | `StopCondition<ToolSet> \| Array<StopCondition<ToolSet>>` | Additional conditions for the multi-step prompt loop. Tools still execute normally. `TestAgent` always applies `stepCountIs(maxSteps)` as a safety guard. |
| `timeout` | `number \| { totalMs?: number; stepMs?: number; chunkMs?: number }` | Bounds prompt runtime. `number` and `totalMs` cap the full prompt, `stepMs` caps each generation step, and `chunkMs` is accepted for parity but is mainly relevant to streaming APIs. |
| `timeoutMs` | `number` | Shortcut for a total prompt timeout in milliseconds |
| `stopAfterToolCall` | `string \| string[]` | Short-circuits the named tool(s) with a stub result and stops after the step where they were called. Tool names and args are still captured in the `PromptResult`. |
#### Returns

`Promise<PromptResult>` - The result object with response and metadata.
#### Example
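A runnable sketch of the call pattern, using a stub in place of a real `TestAgent` (method and option names follow this page; the real agent runs the multi-step LLM/tool loop inside `prompt()`):

```typescript
// Stub standing in for TestAgent so the call pattern is runnable.
interface PromptResult {
  text: string;
  hasError(): boolean;
}
interface PromptOptions {
  context?: PromptResult | PromptResult[];
  timeoutMs?: number;
}

const agent = {
  async prompt(message: string, _options?: PromptOptions): Promise<PromptResult> {
    return { text: `echo: ${message}`, hasError: () => false };
  },
};

const result = await agent.prompt("What's the weather in Paris?");
if (result.hasError()) {
  // prompt() never throws; failures surface on the result instead.
  throw new Error("prompt failed");
}

// Multi-turn: pass the previous result (or an array of results) as context.
const followUp = await agent.prompt("And tomorrow?", { context: result });
console.log(followUp.text);
```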
`prompt()` never throws exceptions. Errors are captured in the `PromptResult`. Check `result.hasError()` to detect failures.

`stopAfterToolCall` is intended for evals that only care about tool selection and arguments. If the model emits multiple tool calls in the same step, non-target sibling tools may still execute before the loop stops.

`timeout` bounds prompt runtime. The runtime creates an internal abort signal, so tools can stop early if their implementation respects the provided `abortSignal`. If a tool ignores that signal, its underlying work may continue briefly after the prompt returns an error result.

## Model String Format
Models are specified as `provider/model`:
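For example (the model ids below are illustrative; check your provider for current ids):

```typescript
// "provider/model" strings: everything before the first "/" is the provider.
const model = "openai/gpt-4o"; // illustrative id
const slash = model.indexOf("/");
const provider = model.slice(0, slash);  // "openai"
const modelId = model.slice(slash + 1);  // "gpt-4o"
console.log(provider, modelId);
```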
## Custom Providers

Add custom OpenAI-compatible or Anthropic-compatible endpoints.

### CustomProvider Type
| Property | Type | Required | Description |
|---|---|---|---|
| `name` | `string` | Yes | Provider identifier |
| `protocol` | `"openai-compatible" \| "anthropic-compatible"` | Yes | API protocol |
| `baseUrl` | `string` | Yes | API endpoint URL |
| `modelIds` | `string[]` | Yes | Available model IDs |
| `useChatCompletions` | `boolean` | No | Use the `/chat/completions` endpoint |
| `apiKeyEnvVar` | `string` | No | Custom environment variable for the API key |
### Example
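A sketch of a custom provider definition matching the table above; the endpoint URL and model ids are placeholders, and the interface is a local stand-in for the SDK's `CustomProvider` type:

```typescript
// Local stand-in for the documented CustomProvider shape.
interface CustomProvider {
  name: string;
  protocol: "openai-compatible" | "anthropic-compatible";
  baseUrl: string;
  modelIds: string[];
  useChatCompletions?: boolean;
  apiKeyEnvVar?: string;
}

const customProviders: Record<string, CustomProvider> = {
  "my-llm": {
    name: "my-llm",
    protocol: "openai-compatible",
    baseUrl: "https://llm.example.com/v1", // placeholder endpoint
    modelIds: ["my-model-large"],
    useChatCompletions: true,        // route via /chat/completions
    apiKeyEnvVar: "MY_LLM_API_KEY",  // read the key from this env var
  },
};

// Reference a custom model as "<provider name>/<model id>":
const model = "my-llm/my-model-large";
console.log(model, customProviders["my-llm"].baseUrl);
```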
## Configuration Properties

### tools
The MCP tools available to the agent. Obtained from `MCPClientManager.getTools()`.

### model

The LLM model identifier. Format: `provider/model-id`.

### apiKey
The API key for the LLM provider.

### systemPrompt

Optional system prompt to guide the LLM's behavior.

### temperature
Controls response randomness. Sampling temperature ranges from 0 (deterministic) to 2 (most random); if unset, the model default is used.

### maxSteps

Maximum iterations of the agentic loop (prompt → tool → result → continue).

## Control Multi-Step Loops with stopWhen
Use `stopWhen` to control whether the agent starts another step after the current step completes.
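The loop semantics can be sketched with local stand-ins (`stepCountIs` below is a simplified local version of the helper this page references, not the SDK's implementation):

```typescript
// Simplified sketch: after each step completes (tool calls included), every
// stop condition is checked; stepCountIs(maxSteps) is always appended as a guard.
type StopCondition = (state: { stepCount: number }) => boolean;

const stepCountIs =
  (n: number): StopCondition =>
  ({ stepCount }) =>
    stepCount >= n;

function runLoop(maxSteps: number, stopWhen: StopCondition[] = []): number {
  const conditions = [...stopWhen, stepCountIs(maxSteps)];
  let stepCount = 0;
  for (;;) {
    stepCount++; // the step runs fully here, tool execution and all...
    if (conditions.some((c) => c({ stepCount }))) break; // ...then we decide
  }
  return stepCount;
}

console.log(runLoop(10, [stepCountIs(3)])); // stops after step 3
console.log(runLoop(10));                   // falls back to the maxSteps guard
```

Note that a stop condition never prevents the current step's tools from running; it only decides whether another step starts.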
`stopWhen` does not skip tool execution. It controls whether the prompt loop continues after the current step completes, and `TestAgent` also applies `stepCountIs(maxSteps)` as a safety guard.

## Bound Prompt Runtime with timeout
Use `timeout` when you want to bound how long `TestAgent.prompt()` can run:
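The accepted shapes as plain data (a sketch; agent usage is shown as a comment since it needs a constructed `agent`):

```typescript
// The two documented timeout shapes.
type Timeout = number | { totalMs?: number; stepMs?: number; chunkMs?: number };

const totalOnly: Timeout = 30_000; // equivalent to passing { timeoutMs: 30_000 }
const granular = {
  totalMs: 120_000, // cap the whole prompt at 2 minutes
  stepMs: 20_000,   // cap each generation step at 20 seconds
} satisfies Timeout;

// await agent.prompt("Analyze the dataset", { timeout: granular });
console.log(typeof totalOnly, granular.stepMs);
```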
`chunkMs` is accepted for parity, but it is mainly useful for streaming APIs. For `TestAgent.prompt()`, `number`, `totalMs`, and `stepMs` are the main settings to focus on.

## Complete Example
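An end-to-end runnable sketch using local stand-ins for the SDK pieces (manager, agent, result), showing the documented flow; real code imports `MCPClientManager` and `TestAgent` from the SDK and connects to actual servers:

```typescript
// Stand-ins for MCPClientManager / TestAgent / PromptResult so the flow runs.
type Tool = { description: string };

interface PromptResult {
  text: string;
  hasError(): boolean;
  toolCalls: { name: string; args: unknown }[];
}

class StubManager {
  getTools(): Record<string, Tool> {
    return { get_weather: { description: "Current weather for a city" } };
  }
}

class StubTestAgent {
  constructor(
    public readonly opts: {
      tools: Record<string, Tool>;
      model: string;
      apiKey: string;
      maxSteps?: number;
    },
  ) {}

  async prompt(message: string): Promise<PromptResult> {
    // Real agent: runs the multi-step LLM loop, invoking tools as needed.
    return {
      text: `answered: ${message}`,
      hasError: () => false,
      toolCalls: [{ name: "get_weather", args: { city: "Paris" } }],
    };
  }
}

// 1. Connect a manager and collect tools (connection elided in this sketch).
const manager = new StubManager();

// 2. Build the agent from the documented options.
const agent = new StubTestAgent({
  tools: manager.getTools(),
  model: "openai/gpt-4o", // example model id
  apiKey: "sk-example",
  maxSteps: 10,
});

// 3. Prompt and inspect the result.
const result = await agent.prompt("What's the weather in Paris?");
if (!result.hasError()) {
  console.log(result.text);
  console.log("tools called:", result.toolCalls.map((c) => c.name).join(", "));
}
```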
## Related
- Testing with LLMs - Conceptual guide
- PromptResult Reference - Result object API
- LLM Providers Reference - All providers

