AI-Grounded Vulnerability Feedback for Non‑Security CS Courses
Security failures start early.
Students often learn to write code before they learn to write secure code.
Empirical evidence supports this gap.
```mermaid
flowchart TD
    A["Code Submission"] --> B["SonarQube Analysis<br/>(CWE mapping)"]
    B --> C["LLM Feedback<br/>(Level-appropriate)"]
    C --> D["Student View &<br/>Instructor Reports"]
```
Grounded analyzer reduces hallucination risk; LLM provides audience-appropriate feedback.
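The pipeline above can be sketched as follows. This is a minimal illustration with hypothetical names and a mocked analyzer finding; a real deployment would pull findings (with their CWE mappings) from SonarQube's `/api/issues/search` Web API endpoint rather than a hard-coded dict.

```python
# Sketch of the grounded-feedback pipeline (hypothetical helper names).
# The analyzer output is mocked; real findings would come from SonarQube.

LEVEL_HINTS = {
    "intro": "Explain the risk in plain language; avoid jargon.",
    "advanced": "Reference the CWE entry and discuss exploit scenarios.",
}

def build_prompt(issue, level):
    """Turn one analyzer finding into a level-appropriate LLM prompt.

    Grounding the prompt in the analyzer's CWE mapping (rather than
    asking the LLM to find vulnerabilities itself) is what reduces
    hallucination risk.
    """
    return (
        f"A static analyzer flagged {issue['cwe']} "
        f"({issue['message']}) at {issue['file']}:{issue['line']}.\n"
        f"Write feedback for a student. {LEVEL_HINTS[level]}"
    )

# Mocked SonarQube finding with its CWE mapping.
finding = {
    "cwe": "CWE-89",
    "message": "SQL query built from user input",
    "file": "app.py",
    "line": 42,
}

prompt = build_prompt(finding, "intro")
```

Keeping the prompt template per course level (rather than asking the LLM to infer the audience) keeps the feedback consistent across submissions.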
The system design integrates established learning theories to enhance the learning experience:
Figure: Pedagogical Application
Figure: Pedagogical Question Nodes
Figure: Pedagogical LLM Question Generation
Figure: Sample Student Feedback
Figure: Sample Instructor Report
RQ1: Is AI-generated vulnerability feedback associated with reduced vulnerabilities in revised submissions?
RQ2: Does exposure to AI-generated vulnerability feedback improve secure coding practices over time?
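One way RQ1 could be operationalized is a paired comparison of vulnerability counts before and after feedback. The sketch below uses fabricated per-student counts purely for illustration; real counts would come from the analyzer's per-submission reports.

```python
# Hypothetical RQ1 analysis sketch: change in vulnerability count from
# initial to revised submission. Data below is fabricated for illustration.
from statistics import mean

submissions = [
    {"student": "s1", "initial": 5, "revised": 2},
    {"student": "s2", "initial": 3, "revised": 3},
    {"student": "s3", "initial": 4, "revised": 1},
]

# Negative delta = fewer vulnerabilities after receiving feedback.
deltas = [s["revised"] - s["initial"] for s in submissions]
print(f"mean change in vulnerability count: {mean(deltas):+.2f}")
```

A negative mean suggests an association between feedback and reduced vulnerabilities; a paired test (e.g., Wilcoxon signed-rank) would be needed to assess significance.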