AI / LLM Project

CV Evaluator LLM

An LLM pipeline that evaluates resumes against a role, generates structured feedback, surfaces missing signals, and suggests improvements with consistent scoring.

GitHub Repo · Resume evaluation

Overview

This project builds a practical evaluator that outputs actionable, structured results rather than generic AI feedback. It is designed to be extended into a web tool or an internal HR assistant.

LLM · Structured Output · Evaluation · Prompting

Key outputs

  • Fit score and rationale aligned to the role.
  • Strengths and gaps (missing signals).
  • Recommendations to improve the CV for the target job.
  • Structured JSON (easy to integrate into apps).
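A minimal sketch of what the structured JSON output could look like. The field names here are illustrative, not the project's actual schema:

```python
import json

# Hypothetical evaluation result; all field names are illustrative only.
result = {
    "fit_score": 72,  # 0-100, aligned to the target role
    "rationale": "Strong backend experience; limited cloud exposure.",
    "strengths": ["Python", "REST API design"],
    "gaps": ["Kubernetes", "CI/CD ownership"],  # missing signals
    "recommendations": [
        "Quantify the impact of the migration project.",
        "Add a section on deployment and infrastructure work.",
    ],
}

# A fixed JSON shape is what makes the output easy to hand off to
# a web app or HR tool without parsing free-form text.
print(json.dumps(result, indent=2))
```

Keeping scores numeric and gaps as plain string lists makes downstream filtering and aggregation trivial.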

How it works

  • Input: CV text + job description (or role requirements).
  • Normalization: extract key sections and entities such as skills, experience, and education.
  • Evaluation: criteria-based scoring with calibrated explanations.
  • Output: formatted report + JSON schema for automation.
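The steps above can be sketched end to end. This is a toy version under stated assumptions: `normalize` uses a naive regex heuristic, and `evaluate` is a keyword-matching stub standing in for the LLM call with a rubric prompt:

```python
import json
import re

def normalize(cv_text: str) -> dict:
    """Extract rough sections from raw CV text (very naive heuristic)."""
    sections = {}
    for name in ("skills", "experience", "education"):
        # Grab everything from the section heading to the next blank line.
        match = re.search(rf"{name}[:\n](.+?)(?:\n\n|$)", cv_text, re.I | re.S)
        sections[name] = match.group(1).strip() if match else ""
    return sections

def evaluate(sections: dict, requirements: list) -> dict:
    """Criteria-based scoring stub; the real pipeline would call an LLM
    with a rubric prompt and request JSON output instead."""
    skills = sections["skills"].lower()
    matched = [r for r in requirements if r.lower() in skills]
    missing = [r for r in requirements if r.lower() not in skills]
    score = round(100 * len(matched) / len(requirements)) if requirements else 0
    return {"fit_score": score, "strengths": matched, "gaps": missing}

cv = "Skills: Python, SQL, Docker\n\nExperience: 3 years backend dev\n\nEducation: BSc CS"
report = evaluate(normalize(cv), ["Python", "Kubernetes", "SQL"])
print(json.dumps(report, indent=2))
```

The point of the stub is the pipeline shape: normalization and evaluation stay separate, so the scoring step can be swapped for an LLM call without touching parsing.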

Next improvements

  • Add rubric tuning per industry (SWE, Data, Product, etc.).
  • Bias and safety layer: avoid sensitive inference and focus on job-relevant content.
  • Optional: RAG with role-specific standards and competency frameworks.
  • Deploy a minimal web demo with Streamlit or FastAPI.
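Per-industry rubric tuning could start as a weighted-criteria config like the sketch below; the criteria names and weights are illustrative assumptions, not the project's actual rubrics:

```python
# Hypothetical per-industry rubrics; criteria and weights are illustrative.
RUBRICS = {
    "swe": {"coding": 0.4, "system_design": 0.3, "impact": 0.2, "communication": 0.1},
    "data": {"statistics": 0.35, "ml_tooling": 0.25, "sql": 0.2, "communication": 0.2},
    "product": {"strategy": 0.4, "execution": 0.3, "analytics": 0.2, "communication": 0.1},
}

def weighted_score(criterion_scores: dict, industry: str) -> float:
    """Combine per-criterion scores (0-100) using the industry's weights."""
    rubric = RUBRICS[industry]
    return sum(criterion_scores[c] * w for c, w in rubric.items())

# Example: an SWE candidate scored per criterion by the evaluator.
total = weighted_score(
    {"coding": 80, "system_design": 70, "impact": 60, "communication": 90},
    "swe",
)
print(total)
```

Keeping rubrics as plain data (rather than hard-coded prompts) makes them easy to tune per industry and to version alongside the evaluation prompts.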