The AI Guard Lab Tool evaluates the efficacy of the CrowdStrike AIDR AI Guard API against labeled datasets. It supports both malicious prompt injection detection and topic-based detection.
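For illustration, the sketch below shows the general shape of such an evaluation: read a labeled dataset, send each prompt to a guard endpoint, and compare verdicts against the labels to compute precision and recall. The endpoint URL, request payload, and response field names are placeholders, not the actual AI Guard API contract.

```python
# Minimal evaluation sketch. The endpoint URL, payload shape, and response
# field names below are illustrative placeholders, NOT the real AI Guard API.
import csv
import requests

GUARD_URL = "https://example.invalid/guard/classify"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                               # placeholder credential


def classify(prompt: str) -> bool:
    """Return True if the guard flags the prompt as malicious (assumed schema)."""
    resp = requests.post(
        GUARD_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("verdict") == "malicious"  # assumed response field


def evaluate(dataset_path: str) -> None:
    """Compare guard verdicts to labels in a CSV with columns: prompt,label."""
    tp = fp = fn = tn = 0
    with open(dataset_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            predicted = classify(row["prompt"])
            actual = row["label"] == "malicious"
            if predicted and actual:
                tp += 1
            elif predicted and not actual:
                fp += 1
            elif not predicted and actual:
                fn += 1
            else:
                tn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    print(f"precision={precision:.3f} recall={recall:.3f} (n={tp + fp + fn + tn})")


if __name__ == "__main__":
    evaluate("labeled_prompts.csv")  # hypothetical dataset file
```

The same loop generalizes to topic-based detection by swapping the binary "malicious" label for the expected topic verdict in the dataset and comparison step.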