langchain_community.callbacks.tracers.wandb.WandbTracer¶
- class langchain_community.callbacks.tracers.wandb.WandbTracer(run_args: Optional[WandbRunArgs] = None, **kwargs: Any)[source]¶
Callback Handler that logs to Weights and Biases.
This handler will log the model architecture and run traces to Weights and Biases. This will ensure that all LangChain activity is logged to W&B.
Initializes the WandbTracer.
- Parameters
run_args – (dict, optional) Arguments to pass to wandb.init(). If not provided, wandb.init() will be called with no arguments. Please refer to the wandb.init() documentation for more details.
To use W&B to monitor all LangChain activity, add this tracer like any other LangChain callback:

```python
from wandb.integration.langchain import WandbTracer

tracer = WandbTracer()
chain = LLMChain(llm, callbacks=[tracer])
# ...end of notebook / script:
tracer.finish()
```
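To customize the underlying W&B run, pass a run_args dict whose keys mirror wandb.init() parameters. A minimal sketch; the project name, tags, and config values below are placeholders, not defaults:

```python
# Hypothetical run_args for WandbTracer; keys follow wandb.init() parameters.
run_args = {
    "project": "langchain-demo",            # assumed W&B project name
    "tags": ["tracing", "llm"],             # optional run tags
    "config": {"temperature": 0.7},         # arbitrary run config to log
}

# tracer = WandbTracer(run_args=run_args)  # forwards run_args to wandb.init()
```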
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([run_args])
Initializes the WandbTracer.
finish()
Waits for all asynchronous processes to finish and data to upload.
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, inputs])
End a trace for a chain run.
on_chain_error(error, *[, inputs])
Handle an error for a chain run.
on_chain_start(serialized, inputs, *, run_id)
Start a trace for a chain run.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id, **kwargs)
End a trace for an LLM run.
on_llm_error(error, *, run_id, **kwargs)
Handle an error for an LLM run.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Start a trace for an LLM run.
on_retriever_end(documents, *, run_id, **kwargs)
Run when Retriever ends running.
on_retriever_error(error, *, run_id, **kwargs)
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id, **kwargs)
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id, **kwargs)
End a trace for a tool run.
on_tool_error(error, *, run_id, **kwargs)
Handle an error for a tool run.
on_tool_start(serialized, input_str, *, run_id)
Start a trace for a tool run.
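These callbacks fire in nested start/end pairs, with parent_run_id linking a child run (e.g. an LLM call) to the run that spawned it (e.g. a chain). A minimal illustrative stand-in, not the real WandbTracer implementation, showing that pairing:

```python
import uuid

class ToyTracer:
    """Toy handler recording callback events; illustrates the start/end
    pairing and parent_run_id linkage, not WandbTracer's internals."""

    def __init__(self):
        self.events = []  # (event_name, run_id, parent_run_id)

    def on_chain_start(self, serialized, inputs, *, run_id, parent_run_id=None, **kwargs):
        self.events.append(("chain_start", run_id, parent_run_id))

    def on_llm_start(self, serialized, prompts, *, run_id, parent_run_id=None, **kwargs):
        self.events.append(("llm_start", run_id, parent_run_id))

    def on_llm_end(self, response, *, run_id, **kwargs):
        self.events.append(("llm_end", run_id, None))

    def on_chain_end(self, outputs, *, run_id, **kwargs):
        self.events.append(("chain_end", run_id, None))

# A chain run wrapping one LLM call: the LLM run is a child of the chain run.
tracer = ToyTracer()
chain_id, llm_id = uuid.uuid4(), uuid.uuid4()
tracer.on_chain_start({}, {"q": "hi"}, run_id=chain_id)
tracer.on_llm_start({}, ["hi"], run_id=llm_id, parent_run_id=chain_id)
tracer.on_llm_end(None, run_id=llm_id)
tracer.on_chain_end({"a": "ok"}, run_id=chain_id)
```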
- __init__(run_args: Optional[WandbRunArgs] = None, **kwargs: Any) → None[source]¶
Initializes the WandbTracer.
- Parameters
run_args – (dict, optional) Arguments to pass to wandb.init(). If not provided, wandb.init() will be called with no arguments. Please refer to the wandb.init() documentation for more details.
To use W&B to monitor all LangChain activity, add this tracer like any other LangChain callback:

```python
from wandb.integration.langchain import WandbTracer

tracer = WandbTracer()
chain = LLMChain(llm, callbacks=[tracer])
# ...end of notebook / script:
tracer.finish()
```
- finish() → None[source]¶
Waits for all asynchronous processes to finish and data to upload.
Proxy for wandb.finish().
- on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
- on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
- on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Run¶
End a trace for a chain run.
- on_chain_error(error: BaseException, *, inputs: Optional[Dict[str, Any]] = None, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a chain run.
- on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, run_type: Optional[str] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a chain run.
- on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
- on_llm_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for an LLM run.
- on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Run¶
Run on new LLM token. Only available when streaming is enabled.
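When streaming is enabled, a handler receives one on_llm_new_token call per token, keyed by run_id. A small sketch of accumulating a streamed response per run (illustrative only, not WandbTracer's internals):

```python
import uuid
from collections import defaultdict

# One growing string per run_id, as a handler might keep for streamed output.
streams = defaultdict(str)

def on_llm_new_token(token, *, run_id, parent_run_id=None, **kwargs):
    """Append each streamed token to the buffer for its run."""
    streams[run_id] += token

rid = uuid.uuid4()
for tok in ["Hello", ", ", "world"]:
    on_llm_new_token(tok, run_id=rid)
# streams[rid] == "Hello, world"
```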
- on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for an LLM run.
- on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever ends running.
- on_retriever_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever errors.
- on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Run when Retriever starts running.
- on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.