Guard

The Guard class.

This class is the main entry point for using Guardrails. It is initialized from one of the following class methods:

- from_rail
- from_rail_string
- from_pydantic
- from_string

The constructor itself simply initializes the Guard; the class methods above are the intended entry points.

The __call__ method functions as a wrapper around LLM APIs: it takes in an LLM API and optional prompt parameters, and returns both the raw output from the LLM and the validated output.
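For example, a minimal end-to-end sketch (the .rail file path, the prompt variable, and the use of the legacy OpenAI SDK are illustrative assumptions, not part of this API):

```python
import openai

from guardrails import Guard

# Build a Guard from a RAIL spec, then call it as a wrapper around an LLM API.
guard = Guard.from_rail("my_spec.rail")  # hypothetical .rail file

raw_llm_output, validated_output = guard(
    openai.Completion.create,
    prompt_params={"document": "Text substituted into the prompt."},
)
```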
from_rail

classmethod

from_rail(rail_file: str, num_reasks: Optional[int] = None) -> Guard

Create a Guard from a .rail file.
Parameters:

Name | Type | Description | Default
---|---|---|---
rail_file | str | The path to the .rail file. | required
num_reasks | Optional[int] | The max times to re-ask the LLM for invalid output. | None
Returns:

Type | Description
---|---
Guard | An instance of the Guard class.
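For example (a sketch; the file name is hypothetical):

```python
from guardrails import Guard

# Load the output schema, prompt, and validators defined in the .rail file,
# allowing up to two re-asks for invalid output.
guard = Guard.from_rail("getting_started.rail", num_reasks=2)
```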
from_rail_string

classmethod

from_rail_string(rail_string: str, num_reasks: Optional[int] = None) -> Guard

Create a Guard from a .rail string.
Parameters:

Name | Type | Description | Default
---|---|---|---
rail_string | str | The .rail string. | required
num_reasks | Optional[int] | The max times to re-ask the LLM for invalid output. | None
Returns:

Type | Description
---|---
Guard | An instance of the Guard class.
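For example (a sketch; the schema fields and the ${...} prompt-variable syntax are assumptions about the RAIL format):

```python
from guardrails import Guard

# An inline RAIL spec: one string field plus the prompt used to generate it.
rail_string = """
<rail version="0.1">
<output>
    <string name="summary" description="A one-sentence summary of the document." />
</output>
<prompt>
Summarize the following document as JSON matching the output schema:

${document}
</prompt>
</rail>
"""

guard = Guard.from_rail_string(rail_string, num_reasks=1)
```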
from_pydantic
classmethod
from_pydantic(output_class: Type[BaseModel], prompt: Optional[str] = None, instructions: Optional[str] = None, num_reasks: Optional[int] = None) -> Guard
Create a Guard instance from a Pydantic model and prompt.
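For example (a sketch; the model fields and the ${...} placeholder in the prompt are illustrative):

```python
from pydantic import BaseModel, Field

from guardrails import Guard

class Person(BaseModel):
    name: str = Field(description="The person's full name.")
    age: int = Field(description="The person's age in years.")

# The output schema is derived from the Pydantic model's fields.
guard = Guard.from_pydantic(
    output_class=Person,
    prompt="Extract the person mentioned in this text: ${text}",
    num_reasks=1,
)
```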
from_string
classmethod
from_string(validators: List[Validator], description: Optional[str] = None, prompt: Optional[str] = None, instructions: Optional[str] = None, reask_prompt: Optional[str] = None, reask_instructions: Optional[str] = None, num_reasks: Optional[int] = None) -> Guard
Create a Guard instance for a string response with prompt, instructions, and validations.
Parameters:

Name | Type | Description | Default
---|---|---|---
validators | List[Validator] | The list of validators to apply to the string output. | required
description | Optional[str] | A description for the string to be generated. | None
prompt | Optional[str] | The prompt used to generate the string. | None
instructions | Optional[str] | Instructions for chat models. | None
reask_prompt | Optional[str] | An alternative prompt to use during reasks. | None
reask_instructions | Optional[str] | Alternative instructions to use during reasks. | None
num_reasks | Optional[int] | The max times to re-ask the LLM for invalid output. | None
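For example (a sketch; ValidLength is assumed to be one of the library's bundled validators, and the on_fail policy shown is an assumption):

```python
from guardrails import Guard
from guardrails.validators import ValidLength  # import path assumed

# Validate a free-form string response rather than a structured schema.
guard = Guard.from_string(
    validators=[ValidLength(min=1, max=120, on_fail="reask")],
    description="A one-line summary of the article.",
    prompt="Summarize this article in a single line: ${article}",
    num_reasks=2,
)
```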
__call__
__call__(llm_api: Union[Callable, Callable[[Any], Awaitable[Any]]], prompt_params: Optional[Dict] = None, num_reasks: Optional[int] = None, prompt: Optional[str] = None, instructions: Optional[str] = None, msg_history: Optional[List[Dict]] = None, metadata: Optional[Dict] = None, full_schema_reask: Optional[bool] = None, *args, **kwargs) -> Union[Tuple[Optional[str], Any], Awaitable[Tuple[Optional[str], Any]]]
Call the LLM and validate the output. Pass an async LLM API to return a coroutine.
Parameters:

Name | Type | Description | Default
---|---|---|---
llm_api | Union[Callable, Callable[[Any], Awaitable[Any]]] | The LLM API to call (e.g. openai.Completion.create or openai.Completion.acreate). | required
prompt_params | Optional[Dict] | The parameters to pass to the prompt.format() method. | None
num_reasks | Optional[int] | The max times to re-ask the LLM for invalid output. | None
prompt | Optional[str] | The prompt to use for the LLM. | None
instructions | Optional[str] | Instructions for chat models. | None
msg_history | Optional[List[Dict]] | The message history to pass to the LLM. | None
metadata | Optional[Dict] | Metadata to pass to the validators. | None
full_schema_reask | Optional[bool] | When reasking, whether to regenerate the full schema or just the incorrect values. | None
Returns:

Type | Description
---|---
Union[Tuple[Optional[str], Any], Awaitable[Tuple[Optional[str], Any]]] | The raw text output from the LLM and the validated output.
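For example, the async path (a sketch assuming the legacy OpenAI SDK; passing an async API such as openai.Completion.acreate makes the call return a coroutine):

```python
import asyncio

import openai

from guardrails import Guard

guard = Guard.from_rail("my_spec.rail")  # hypothetical .rail file

async def main():
    # With an async LLM API, __call__ returns a coroutine to await.
    raw_output, validated_output = await guard(
        openai.Completion.acreate,
        prompt_params={"document": "Text substituted into the prompt."},
        num_reasks=2,
    )
    print(validated_output)

asyncio.run(main())
```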
parse
parse(llm_output: str, metadata: Optional[Dict] = None, llm_api: Optional[Callable] = None, num_reasks: Optional[int] = None, prompt_params: Optional[Dict] = None, full_schema_reask: Optional[bool] = None, *args, **kwargs) -> Union[Any, Awaitable[Any]]
An alternate flow to using Guard, for cases where the LLM output is already known.
Parameters:

Name | Type | Description | Default
---|---|---|---
llm_output | str | The output being parsed and validated. | required
metadata | Optional[Dict] | Metadata to pass to the validators. | None
llm_api | Optional[Callable] | The LLM API to call (e.g. openai.Completion.create or openai.Completion.acreate). | None
num_reasks | Optional[int] | The max times to re-ask the LLM for invalid output. | None
prompt_params | Optional[Dict] | The parameters to pass to the prompt.format() method. | None
full_schema_reask | Optional[bool] | When reasking, whether to regenerate the full schema or just the incorrect values. | None
Returns:

Type | Description
---|---
Union[Any, Awaitable[Any]] | The validated response. This is either a string or a dictionary, determined by the output schema defined in the RAIL spec.
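For example (a sketch; ValidLength and its import path are assumptions, and the sketch assumes that with no llm_api supplied a validation failure cannot trigger a re-ask):

```python
from guardrails import Guard
from guardrails.validators import ValidLength  # import path assumed

guard = Guard.from_string(validators=[ValidLength(min=1, max=120)])

# Validate output obtained elsewhere; no LLM call is made here.
validated = guard.parse("A concise, pre-existing model response.")
```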