Abstract

Background: In the Qena Faculty of Medicine, multiple-choice questions (MCQs), including SBAs and EMQs, were newly introduced in the Pediatrics department for the written assessment of 5th-grade undergraduate medical students. MCQs are especially effective when there is a large body of content to be evaluated and a large number of students to be assessed. MCQ exams generally have acceptable logistics, are simple to administer, and can be scored quickly by computer. The difficulty index and discriminating value of each item are easily determined, making standardized application possible. However, writing valid questions and responses is a demanding, time-consuming skill, and MCQs have been criticized for not assessing higher-order learning and analytic skills. Information that increases our understanding of multiple-choice items and tests will develop our ability to improve item writing and test design and to better measure achievement and skill level, which will eventually lead to more appropriate score interpretation and decision making.

Aim of the study: The purpose of this study was to critically appraise the Pediatrics MCQ test papers with regard to: editing of the test papers and the students' directions; construct validity (levels of cognitive skills tested); conformity of test items with the standard guidelines for MCQ construction concerning item format and structure (stem, lead-in question, and responses); identification of item-writing technical flaws related to testwiseness and irrelevant difficulty; and, finally, analysis of the quality of the multiple-choice questions in terms of difficulty and discrimination indices, distractor efficiency, and internal consistency reliability (MCQ item analysis).
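The psychometric indices named above (difficulty index, discrimination index, and internal consistency reliability) have standard classical-test-theory formulas. The sketch below is an illustrative implementation, not the authors' actual analysis pipeline; the function names and the 27% upper/lower grouping convention for the discrimination index are assumptions, and any data passed in would be invented for demonstration.

```python
# Hypothetical sketch of classical MCQ item analysis (not the study's own code).
# Items are scored dichotomously: 1 = correct, 0 = incorrect.

def difficulty_index(item_scores):
    """Difficulty index p: proportion of examinees answering the item correctly."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, fraction=0.27):
    """Discrimination index D = p(upper group) - p(lower group).

    Uses the conventional top/bottom 27% of examinees ranked by total score
    (the 27% cutoff is an assumption; some analyses use 25% or 33%).
    """
    n = max(1, round(fraction * len(total_scores)))
    ranked = sorted(range(len(total_scores)),
                    key=lambda i: total_scores[i], reverse=True)
    upper, lower = ranked[:n], ranked[-n:]
    p_upper = sum(item_scores[i] for i in upper) / n
    p_lower = sum(item_scores[i] for i in lower) / n
    return p_upper - p_lower

def kr20(score_matrix):
    """Kuder-Richardson 20: internal consistency reliability for 0/1 items.

    score_matrix is a list of rows, one row of item scores per examinee.
    """
    k = len(score_matrix[0])                      # number of items
    totals = [sum(row) for row in score_matrix]   # total score per examinee
    mean_t = sum(totals) / len(totals)
    var_t = sum((t - mean_t) ** 2 for t in totals) / len(totals)
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in score_matrix) / len(score_matrix)
        sum_pq += p * (1 - p)                     # item variance p(1-p)
    return (k / (k - 1)) * (1 - sum_pq / var_t)
```

Conventional interpretive bands (e.g., difficulty 0.3-0.7 as acceptable, discrimination above 0.2 as usable) would then be applied to flag items for revision.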