proj2.llm_toolkit

class LLM:

LLM class for local language model interactions

LLM(tokens: int = 500)

Initializes the LLM with the specified maximum number of tokens to generate

Args: tokens (int): The maximum number of tokens the model may generate

device = 'cpu'
    The device on which inference runs
model = 'ibm-granite/granite-4.0-h-350M'
    The identifier of the local language model
tokenizer
    The tokenizer paired with the model
tokens
    The maximum number of tokens to generate
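The attributes above suggest a Hugging Face `transformers` backend. A minimal sketch of how the constructor might wire them up; the `AutoTokenizer` loading is an assumption (not confirmed by this documentation), shown lazily so the sketch runs without downloading weights:

```python
from functools import cached_property


class LLM:
    """Sketch of the documented interface; loading details are assumed."""

    device = 'cpu'
    model = 'ibm-granite/granite-4.0-h-350M'

    def __init__(self, tokens: int = 500):
        # Caps how many tokens generate() may produce
        self.tokens = tokens

    @cached_property
    def tokenizer(self):
        # Assumed: fetched lazily from the Hugging Face hub on first use
        from transformers import AutoTokenizer
        return AutoTokenizer.from_pretrained(self.model)
```

Loading lazily via `cached_property` keeps construction cheap; the real class may instead load the tokenizer and model eagerly in `__init__`.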
def generate(self, context: str, prompt: str) -> str:

Uses the local LLM to generate text based on the provided context and prompt

Args:
    context (str): The system context to provide to the LLM
    prompt (str): The user prompt to provide to the LLM

Returns: str: The raw, unformatted output from the LLM
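A typical call looks like the following. The stand-in class below mirrors the documented signature with a placeholder `generate` body (the real method runs the local model), so the example is runnable anywhere; in practice you would import `LLM` from `proj2.llm_toolkit`:

```python
# Stand-in with the documented signature; placeholder generation only.
class LLM:
    def __init__(self, tokens: int = 500):
        self.tokens = tokens

    def generate(self, context: str, prompt: str) -> str:
        # Real class: condition the model on `context` as the system
        # message and `prompt` as the user message, then return the
        # raw decoded output.
        return f"[{context}] {prompt}"  # placeholder


llm = LLM(tokens=200)
raw = llm.generate(
    context="You are a concise assistant.",
    prompt="Summarize what a tokenizer does.",
)
```

Note that the return value is the raw, unformatted model output, so callers are responsible for any stripping or post-processing they need.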