Use Guardrails from LangChain

You can use Guardrails to add a layer of security around LangChain components. Here's how to use Guardrails with LangChain.

Installing dependencies

Make sure you have both langchain and guardrails installed. If you don't, run the following commands:

!pip install guardrails-ai
!pip install langchain

Create a RAIL spec

rail_spec = """
<rail version="0.1">
<output>
<object name="patient_info">
<string description="Patient's gender" name="gender"></string>
<integer format="valid-range: 0 100" name="age"></integer>
<string description="Symptoms that the patient is currently experiencing" name="symptoms"></string>
</object>
</output>
<prompt>

Given the following doctor's notes about a patient, please extract a dictionary that contains the patient's information.

${doctors_notes}

${gr.complete_json_suffix_v2}
</prompt>
</rail>
"""

Create a GuardrailsOutputParser

import openai

from rich import print

from langchain.output_parsers import GuardrailsOutputParser
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI

output_parser = GuardrailsOutputParser.from_rail_string(rail_spec, api=openai.ChatCompletion.create)

The GuardrailsOutputParser contains a Guard object, which can be used to access the prompt and the output schema. For example, here is the compiled prompt stored in the GuardrailsOutputParser:

print(output_parser.guard.prompt)

Given the following doctor's notes about a patient, please extract a dictionary that contains the patient's 
information.

${doctors_notes}


Given below is XML that describes the information to extract from this document and the tags to extract it into.

<output>
    <object name="patient_info">
        <string name="gender" description="Patient's gender"/>
        <integer name="age" format="valid-range: min=0 max=100"/>
        <string name="symptoms" description="Symptoms that the patient is currently experiencing"/>
    </object>
</output>


ONLY return a valid JSON object (no other text is necessary), where the key of the field in JSON is the `name` 
attribute of the corresponding XML, and the value is of the type specified by the corresponding XML's tag. The JSON
MUST conform to the XML format, including any types and format requests e.g. requests for lists, objects and 
specific types. Be correct and concise.

Here are examples of simple (XML, JSON) pairs that show the expected behavior:
- `<string name='foo' format='two-words lower-case' />` => `{'foo': 'example one'}`
- `<list name='bar'><string format='upper-case' /></list>` => `{"bar": ['STRING ONE', 'STRING TWO', etc.]}`
- `<object name='baz'><string name="foo" format="capitalize two-words" /><integer name="index" format="1-indexed" 
/></object>` => `{'baz': {'foo': 'Some String', 'index': 1}}`


We can now create a LangChain PromptTemplate from this output parser. Note that LangChain's PromptTemplate class uses f-strings, so to prevent it from treating the example JSON in our prompt as variables to substitute, we escape the prompt.
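
As a rough illustration of why escaping is needed (a minimal sketch with a made-up string, not the actual escape() implementation): in an f-string template, literal braces must be doubled so they are not mistaken for template variables.

example = 'Return JSON like {"foo": "example one"}'
escaped_example = example.replace("{", "{{").replace("}", "}}")  # literal braces doubled for f-string templates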

Create a PromptTemplate

prompt = PromptTemplate(
    template=output_parser.guard.prompt.escape(),
    input_variables=output_parser.guard.prompt.variable_names,
)
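
If you want to verify that the escaping worked, you can peek at the stored template string (an optional check; PromptTemplate keeps it in its template attribute):

print(prompt.template[:500])  # literal braces in the JSON examples should now appear doubled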

Query the LLM and get formatted, validated and corrected output

model = OpenAI(temperature=0)


doctors_notes = """
49 y/o Male with chronic macular rash to face & hair, worse in beard, eyebrows & nares.
Itchy, flaky, slightly scaly. Moderate response to OTC steroid cream
"""
output = model(prompt.format_prompt(doctors_notes=doctors_notes).to_string())
print(output_parser.parse(output))
Async event loop found, but guard was invoked synchronously. For validator parallelization, please call `validate_async` instead.

{
    'patient_info': {
        'gender': 'Male',
        'age': 49,
        'symptoms': 'Itchy, flaky, slightly scaly. Moderate response to OTC steroid cream'
    }
}
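
The parsed result is a plain Python dictionary, so downstream code can index into it directly (a small usage sketch based on the output above):

validated = output_parser.parse(output)
print(validated["patient_info"]["age"])  # 49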