Poster Presentation New Zealand Association of Plastic Surgeons Annual Scientific Meeting

AI as a study tool for the FRACS Fellowship Examination (1808)

Oliver M Jensen 1
  1. Plastic, Maxillofacial and Burns Unit, Hutt Hospital, Lower Hutt, WELLINGTON, New Zealand

Background: Artificial intelligence (AI), particularly large language models (LLMs), is emerging as a tool in surgical education. While prior studies have shown that AI can simulate clinical reasoning and support exam preparation—such as ChatGPT’s near-passing performance on the USMLE (Kung et al., 2023) and improved candidate confidence in AI-assisted mock oral exams (Choudhury et al., 2023)—its role in FRACS examination preparation remains unexplored.

Objective: To evaluate the quality of AI-generated FRACS-style questions and answers, and to determine whether senior examiners can distinguish AI-generated responses from those written by senior registrars nearing completion of training.

Methods: Two senior FRACS examiners assessed FRACS-style questions generated by ChatGPT for clinical relevance, structure, and appropriateness. They then scored anonymised responses to similar questions, half written by AI and half by senior registrars, and attempted to identify the origin of each response.

Results: AI-generated questions were rated as clinically relevant and well structured in 85% of cases. Examiners correctly identified AI-generated responses in 54% of cases, little better than chance, and AI answers scored comparably to registrar responses in structure, factual accuracy, and reasoning. Subtle differences were noted in surgical nuance and context-specific judgement.

Conclusion: AI-generated questions and answers demonstrate potential as an adjunct to FRACS exam preparation. While not a substitute for clinical mentorship or operative experience, AI tools may support trainees by simulating examiner questioning, reinforcing clinical reasoning, and providing rapid feedback. Further development and surgery-specific refinement are warranted.