
InstructLab Testing

1. Define the Scope & Objectives

  • Ensure the model is fine-tuned for analysing requirements quality in your specific context (e.g., compliance, traceability, clarity, feasibility).
  • Identify key quality metrics such as completeness, consistency, verifiability, and ambiguity detection.
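
These metrics map directly onto the instructions later used to label training examples. A minimal Python sketch of such a rubric (the dictionary name and wording are illustrative, not an InstructLab convention):

# Illustrative rubric: one labelling instruction per quality metric.
QUALITY_CHECKS = {
    "clarity":       "Assess the clarity of the following requirement:",
    "completeness":  "Check whether the following requirement is complete:",
    "consistency":   "Check the following requirement for internal consistency:",
    "verifiability": "Check if this requirement is verifiable:",
    "ambiguity":     "Identify any ambiguous terms in the following requirement:",
}

for metric, instruction in QUALITY_CHECKS.items():
    print(f"{metric}: {instruction}")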

2. Set Up InstructLab

Prerequisites

  • Install InstructLab
  • Set up a suitable GPU/TPU environment for training
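
A quick way to confirm an accelerator is visible, assuming PyTorch is installed in the environment (the training backend relies on it):

# Quick accelerator check; assumes PyTorch is available.
import torch

if torch.cuda.is_available():
    print(f"CUDA device: {torch.cuda.get_device_name(0)}")
elif getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    print("Apple Silicon (MPS) backend available.")
else:
    print("No GPU detected; training will fall back to CPU and be slow.")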

Installation

pip install instructlab

Initialize the Project

The package installs the ilab command-line tool (command names vary slightly between releases):

mkdir my_requirements_qa_model
cd my_requirements_qa_model
ilab config init

3. Prepare Training Data

  • Collect high-quality requirements datasets, preferably labeled.
  • Format them into a structure compatible with InstructLab (the standard workflow curates seed question-and-answer examples in taxonomy qna.yaml files and generates training data from them; a flat JSONL like the one below is a convenient way to stage those seeds).

Example Training Dataset Format (JSONL)

{"instruction": "Assess the clarity of the following requirement:", "input": "The system shall be fast.", "output": "Ambiguous - Define 'fast' with measurable criteria."}
{"instruction": "Check if this requirement is verifiable:", "input": "The software shall be user-friendly.", "output": "Not verifiable - User-friendliness needs clear criteria."}

Prepare the Dataset

mkdir data
mv requirements_qa.jsonl data/

4. Fine-tune the LLM

Edit the config.yaml created during initialization. The exact schema varies between InstructLab releases, so treat the entries below as illustrative hyperparameters:

dataset: "data/requirements_qa.jsonl"
model: "mistral-7b"
epochs: 3
batch_size: 8
learning_rate: 5e-5
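
Before launching a run, a quick pre-flight check of these values can save a failed job. A sketch assuming the simplified config.yaml above and that PyYAML is available:

# Illustrative pre-flight check for the simplified config shown above.
from pathlib import Path
import yaml  # PyYAML, assumed available

cfg = yaml.safe_load(Path("config.yaml").read_text())

# PyYAML loads "5e-5" as a string, so coerce the learning rate explicitly.
lr = float(cfg["learning_rate"])

assert Path(cfg["dataset"]).is_file(), f"dataset not found: {cfg['dataset']}"
assert 0 < lr < 1e-2, "learning rate looks off"
assert cfg["epochs"] >= 1 and cfg["batch_size"] >= 1

print(f"Training {cfg['model']} for {cfg['epochs']} epochs on {cfg['dataset']} "
      f"(batch size {cfg['batch_size']}, lr {lr}).")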

Run the training (older releases use ilab train):

ilab model train

5. Evaluate & Test the Model

Once trained, serve the model and probe it interactively with sample requirements such as "The system shall support high availability.":

ilab model serve
ilab model chat   # in a second terminal
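
For scripted spot checks instead of the interactive chat, the served model can be queried over its OpenAI-compatible endpoint. A sketch assuming the default local address used by ilab model serve (http://127.0.0.1:8000/v1) and a hypothetical model name; adjust both to your setup:

# Scripted spot check against the locally served model.
import requests

ENDPOINT = "http://127.0.0.1:8000/v1/chat/completions"  # assumed serve default

def assess(instruction: str, requirement: str) -> str:
    """Send one requirement to the served model and return its assessment."""
    payload = {
        "model": "my_requirements_qa_model",  # hypothetical model name
        "messages": [{"role": "user", "content": f"{instruction}\n{requirement}"}],
        "temperature": 0.0,
    }
    r = requests.post(ENDPOINT, json=payload, timeout=60)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

print(assess("Assess the clarity of the following requirement:",
             "The system shall support high availability."))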

Refine the dataset and retrain if the assessments miss important quality issues.


6. Deploy the Model

After satisfactory performance, expose it as a local, OpenAI-compatible service (serving with the CLI takes the place of a dedicated deploy step):

ilab model serve

Or integrate it into an API for automated requirements analysis.
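
For example, a minimal FastAPI wrapper around the served model (FastAPI and uvicorn are assumed extra dependencies; the route and model names are illustrative):

# Minimal API wrapper that forwards requirements to the served model.
import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Requirements QA")
MODEL_ENDPOINT = "http://127.0.0.1:8000/v1/chat/completions"  # assumed serve default

class Requirement(BaseModel):
    text: str
    check: str = "Assess the clarity of the following requirement:"

@app.post("/analyze")
def analyze(req: Requirement) -> dict:
    """Forward one requirement to the served model and return its verdict."""
    payload = {
        "model": "my_requirements_qa_model",  # hypothetical model name
        "messages": [{"role": "user", "content": f"{req.check}\n{req.text}"}],
        "temperature": 0.0,
    }
    resp = requests.post(MODEL_ENDPOINT, json=payload, timeout=60)
    resp.raise_for_status()
    verdict = resp.json()["choices"][0]["message"]["content"]
    return {"requirement": req.text, "verdict": verdict}

# Run with, e.g.: uvicorn requirements_api:app --port 9000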


A natural next step is to integrate it into OpenSESA for automated quality checks.