
Generating Bug Free Leetcode Solutions

Note

To download this tutorial as a Jupyter notebook, click here.

In this example, we want to solve string manipulation leetcode problems such that the generated code is bug-free.

We make the assumption that:

  1. We don't need any external libraries that are not already installed in the environment.
  2. We are able to execute the code in the environment.

Objective

We want to generate bug-free code for solving leetcode problems. In this example, we don't account for semantic bugs, only for syntactic bugs.

In short, we want to make sure that the code can be executed without any errors.
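
Concretely, "bug-free" here means the generated code string can be executed without raising an exception. The snippet below is our own minimal illustration of that kind of check, not the validator's actual implementation:

def runs_without_errors(code: str) -> bool:
    """Return True if the code string executes without raising an exception."""
    try:
        exec(code, {})
        return True
    except Exception:
        return False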

Step 1: Generating RAIL Spec

Ordinarily, we could create a separate RAIL spec in a file. However, for the sake of this example, we will generate the RAIL spec in the notebook as a string. We will also show the same RAIL spec in a code-first format using a Pydantic model.

XML option:

rail_str = """
<rail version="0.1">
<output>
<pythoncode format="bug-free-python" name="python_code" on-fail-bug-free-python="reask"></pythoncode>
</output>
<prompt>
Given the following high level leetcode problem description, write a short Python code snippet that solves the problem.

Problem Description:
${leetcode_problem}

${gr.complete_json_suffix}</prompt>
</rail>
"""

Pydantic model option:

from pydantic import BaseModel, Field
from guardrails.validators import BugFreePython
from guardrails.datatypes import PythonCode

prompt = """
Given the following high level leetcode problem description, write a short Python code snippet that solves the problem.

Problem Description:
${leetcode_problem}

${gr.complete_json_suffix}"""

class BugFreePythonCode(BaseModel):
    python_code: PythonCode = Field(validators=[BugFreePython(on_fail="reask")])

    class Config:
        arbitrary_types_allowed = True

Step 2: Create a Guard object with the RAIL Spec

We create a gd.Guard object that will check, validate and correct the generated code. This object:

  1. Enforces the quality criteria specified in the RAIL spec (i.e. bug-free code).
  2. Takes corrective action when the quality criteria are not met (i.e. reasking the LLM; a conceptual sketch of this loop follows the list).
  3. Compiles the schema and type info from the RAIL spec and adds it to the prompt.
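
Conceptually, the reask behavior amounts to a small validate-and-retry loop around the LLM call. The sketch below uses hypothetical helper names and is not Guardrails' actual internals; it only illustrates the idea:

def call_with_reasks(llm, prompt, validate, num_reasks=1):
    output = llm(prompt)
    for _ in range(num_reasks):
        try:
            return validate(output)  # e.g. parse the JSON and run the bug-free-python check
        except Exception as error:
            # Send the previous answer and the validation error back to the LLM.
            reask_prompt = (
                f"{prompt}\n\nPrevious answer:\n{output}\n\n"
                f"Error:\n{error}\n\nPlease correct the answer."
            )
            output = llm(reask_prompt)
    return validate(output)  # final attempt; raises if the output is still invalid
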
import guardrails as gd

from rich import print

From XML:

guard = gd.Guard.from_rail_string(rail_str)
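
If the spec lived in its own .rail file (as mentioned in Step 1), the same Guard can be built from that file. In the Guardrails versions this example targets, a call along these lines should work (the file name below is hypothetical):

guard = gd.Guard.from_rail("bug_free_python.rail")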

Or from the pydantic model:

guard = gd.Guard.from_pydantic(output_class=BugFreePythonCode, prompt=prompt)

The Guard object compiles the output schema and adds it to the prompt. We can see the final prompt below:

print(guard.base_prompt)
Given the following high level leetcode problem description, write a short Python code snippet that solves the 
problem.

Problem Description:
${leetcode_problem}


Given below is XML that describes the information to extract from this document and the tags to extract it into.

<output>
    <pythoncode name="python_code" format="bug-free-python"/>
</output>


ONLY return a valid JSON object (no other text is necessary), where the key of the field in JSON is the `name` 
attribute of the corresponding XML, and the value is of the type specified by the corresponding XML's tag. The JSON
MUST conform to the XML format, including any types and format requests e.g. requests for lists, objects and 
specific types. Be correct and concise. If you are unsure anywhere, enter `null`.

Here are examples of simple (XML, JSON) pairs that show the expected behavior:
- `<string name='foo' format='two-words lower-case' />` => `{'foo': 'example one'}`
- `<list name='bar'><string format='upper-case' /></list>` => `{"bar": ['STRING ONE', 'STRING TWO', etc.]}`
- `<object name='baz'><string name="foo" format="capitalize two-words" /><integer name="index" format="1-indexed" 
/></object>` => `{'baz': {'foo': 'Some String', 'index': 1}}`

Step 3: Wrap the LLM API call with Guard

import openai

leetcode_problem = """
Given a string s, find the longest palindromic substring in s. You may assume that the maximum length of s is 1000.
"""

raw_llm_response, validated_response = guard(
    openai.Completion.create,
    prompt_params={"leetcode_problem": leetcode_problem},
    engine="text-davinci-003",
    max_tokens=2048,
    temperature=0,
)
Async event loop found, but guard was invoked synchronously.For validator parallelization, please call `validate_async` instead.

Running the cell above returns:

  1. The raw LLM text output as a single string.
  2. A dictionary where the key is python_code and the value is the generated code.

print(validated_response)
{
    'python_code': "def longestPalindrome(s):\n    longest_palindrome = ''\n    for i in range(len(s)):\n        
for j in range(i, len(s)):\n            substring = s[i:j+1]\n            if substring == substring[::-1] and 
len(substring) > len(longest_palindrome):\n                longest_palindrome = substring\n    return 
longest_palindrome"
}

Here's the generated code:

print(validated_response["python_code"])
def longestPalindrome(s):
    longest_palindrome = ''
    for i in range(len(s)):
        for j in range(i, len(s)):
            substring = s[i:j+1]
            if substring == substring[::-1] and len(substring) > len(longest_palindrome):
                longest_palindrome = substring
    return longest_palindrome

We can confirm that the code is bug-free by executing it in the environment.

try:
    exec(validated_response["python_code"])
    print("Success!")
except Exception as e:
    print("Failed!")
Success!
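
Because exec defines longestPalindrome in the current namespace, we can also call it directly as a quick sanity check. The input string below is our own illustrative example, not part of the original notebook:

print(longestPalindrome("babad"))
bab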