AI Fairness at ProveGround
Last updated 2026-04-17
ProveGround uses AI in several places that touch students' academic and career outcomes. This page explains where we use it, how we think about bias, and what users can do when they see AI output that looks wrong.
Where we use AI
- Resume and bio coaching. A large language model (Anthropic's Claude) reads what a student wrote and suggests rewrites. Students accept or reject every suggestion.
- Match Engine ranking. We compute a match score between a student and a listing using an ensemble of deterministic features (skills, recency, institution) and, in some tiers, LLM-derived semantic similarity; a sketch follows this list.
- Listing authoring tools. Corporate partners can use AI to draft or tighten job descriptions. No student data is included in those prompts.
- Skill analytics and outcomes insights. We use AI to summarize aggregated cohort data for institutional admins; we do not generate per-student risk predictions.
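To make the Match Engine description concrete, here is a minimal sketch of a score computed this way. The feature names, weights, and tier behavior are hypothetical illustrations, not ProveGround's actual model.

```python
# Hypothetical sketch of an ensemble match score: weighted deterministic
# features plus an optional LLM-derived semantic-similarity term.
# All names and weights here are illustrative, not ProveGround's real model.

from dataclasses import dataclass

@dataclass
class MatchFeatures:
    skills_overlap: float      # 0..1, fraction of required skills the student has
    recency: float             # 0..1, how recent the relevant experience is
    institution: float         # 0..1, deterministic institution signal
    semantic_similarity: float | None = None  # 0..1, LLM-derived; None on lower tiers

# Weights are admin-adjustable in this sketch, mirroring the
# Match Engine settings described later on this page.
DEFAULT_WEIGHTS = {
    "skills_overlap": 0.5,
    "recency": 0.2,
    "institution": 0.1,
    "semantic_similarity": 0.2,
}

def match_score(f: MatchFeatures, weights: dict[str, float] | None = None) -> dict:
    """Return the total score plus a per-signal breakdown for transparency."""
    w = weights or DEFAULT_WEIGHTS
    contributions = {
        "skills_overlap": w["skills_overlap"] * f.skills_overlap,
        "recency": w["recency"] * f.recency,
        "institution": w["institution"] * f.institution,
    }
    if f.semantic_similarity is not None:
        contributions["semantic_similarity"] = (
            w["semantic_similarity"] * f.semantic_similarity
        )
    return {"score": sum(contributions.values()), "breakdown": contributions}
```

Returning the per-signal breakdown alongside the total is what makes the "Transparent scoring" mitigation described below possible.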
Known bias risks
LLMs reproduce patterns in their training data. That can systematically disadvantage:
- Students whose names, schools, or English phrasing look atypical in the training distribution.
- First-generation college students whose resume vocabulary may differ from conventions the model rewards.
- Students whose work comes from communities or industries underrepresented in public web text.
Match-score models can also pick up proxies for protected attributes (e.g. zip-code-linked institution signals) even when we don't pass those attributes directly.
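One way such proxy leakage could be checked, assuming audit data with self-reported attributes is available, is to measure how well each input feature on its own predicts a protected attribute. Everything in the sketch below (the data shape, the proxy_strength helper) is hypothetical.

```python
# Hypothetical proxy-leakage check: how well does a single model feature
# predict a protected attribute the model should know nothing about?
# Data and helper names are illustrative only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_strength(feature: np.ndarray, protected: np.ndarray) -> float:
    """Cross-validated accuracy of predicting the protected attribute
    from one feature alone."""
    clf = LogisticRegression()
    scores = cross_val_score(clf, feature.reshape(-1, 1), protected, cv=5)
    return float(scores.mean())
```

Accuracy well above the attribute's base rate would flag that feature as a candidate proxy worth reviewing.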
What we do about it
- Human in the loop. Every AI surface shows a notice telling employers and admins not to use AI output as the sole basis for a decision.
- Student control. Coaching output is always accepted explicitly by the student; nothing is auto-applied to their profile.
- Transparent scoring. Match scores break down into the specific signals that contributed. Admins can adjust weights in the Match Engine settings.
- Report path. Every notice links to Report Issue. We triage every AI-fairness report and track outcomes in our audit log.
- Auditing. We sample AI output across demographic splits quarterly and publish aggregate drift metrics to institutional admins in the Match Engine Insights dashboard.
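As a hedged illustration of what a quarterly split audit could compute, the sketch below compares mean match scores per demographic split against a reference quarter. The split labels and the alert threshold are made up; they are not the published metric definitions.

```python
# Hypothetical drift check: compare a quarter's mean match score per
# demographic split against a reference quarter, flagging large gaps.
# Threshold and split names are illustrative.

def split_drift(current: dict[str, list[float]],
                reference: dict[str, list[float]],
                threshold: float = 0.05) -> dict[str, dict]:
    """For each split, report mean score now vs. reference and flag
    drift when the absolute gap exceeds the threshold."""
    report = {}
    for split, scores in current.items():
        cur_mean = sum(scores) / len(scores)
        ref_mean = sum(reference[split]) / len(reference[split])
        delta = cur_mean - ref_mean
        report[split] = {
            "current_mean": round(cur_mean, 4),
            "reference_mean": round(ref_mean, 4),
            "delta": round(delta, 4),
            "flagged": abs(delta) > threshold,
        }
    return report
```

For example, split_drift({"split_a": [0.60, 0.66]}, {"split_a": [0.68, 0.72]}) reports a delta of -0.07 for split_a and flags it, since the gap exceeds the 0.05 threshold.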
What we don't do
- We don't make automated employment decisions (hires, rejections) from AI output alone.
- We don't use protected attributes (race, gender, age, disability, national origin) as features in any model.
- We don't send personally identifying student data to LLM providers beyond the minimum needed for the specific request (e.g. resume text for coaching), and we do not allow providers to train on our data.
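For the data-minimization point above, here is a minimal sketch of what stripping direct identifiers from coaching input could look like. The regex patterns and the minimize_for_coaching helper are illustrative, not the production pipeline.

```python
# Hypothetical data-minimization pass before a coaching request:
# send only the resume text, with direct contact details stripped.
# Patterns are intentionally simple and illustrative.

import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimize_for_coaching(resume_text: str) -> str:
    """Redact direct contact identifiers; the model only needs the
    substance of the resume to suggest rewrites."""
    text = EMAIL.sub("[email removed]", resume_text)
    text = PHONE.sub("[phone removed]", text)
    return text
```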
Questions or concerns
Email privacy@proveground.com or use the Report Issue link on any AI surface. For formal complaints, see our Privacy Policy.