Please use this identifier to cite or link to this item:
http://hdl.handle.net/10773/24547
Title: Quality of multiple choice questions in a numerical and statistical methods course
Author: Cruz, João Pedro; Freitas, Adelaide; Macedo, Pedro; Seabra, Dina
Keywords: Education; Item response theory; Numerical and statistical methods
Issue Date: 17-Sep-2018
Publisher: SEFI
Abstract: The quality control of written examinations is very important in the teaching and learning process of any course. In educational assessment contexts, Item Response Theory (IRT) has been applied to measure the quality of tests in areas of knowledge such as medicine, psychology, and the social sciences, and interest in it has been growing in other fields as well. Based on statistical models for the probability of an individual answering a question correctly, IRT can be applied to measure examinees' ability in an assessment test and to estimate the difficulty and discrimination levels of each item in the test. In this work, IRT is applied to a Numerical and Statistical Methods course to measure the quality of tests based on Multiple Choice Questions (MCQ). The study covers three school years, namely 2015, 2016 and 2017, more specifically the 1st semester of the 2nd year of the degree course. It involved more than 300 students in each year and focuses on questions (also called items) from chapters of the syllabus that were evaluated through MCQ. Emphasis is given to the range of the item difficulty and item discrimination parameters, estimated by the IRT methodology, for each question in those exams. We show which ability levels each partial exam explores: the passing point or more demanding levels. After applying IRT to each test, each composed of eight questions, we obtained 48 item difficulty and item discrimination estimates. Standard boxplots reveal few atypical items in terms of extreme values of difficulty and discrimination; these correspond to MCQ that deserve further attention. We conclude that the vast majority of questions are well posed, considering that they are designed to focus on the cut-off point (passing/not passing). A reflection on the lessons learned from 'good' outliers and on possible causes of the 'bad' items suggests future improvements to classes, study materials and exams.
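The difficulty and discrimination parameters mentioned in the abstract are commonly estimated under the two-parameter logistic (2PL) IRT model; whether the paper uses exactly this model is an assumption. A minimal sketch of the 2PL probability of a correct answer, with purely illustrative parameter values (not taken from the paper):

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL IRT model: probability that an examinee with ability theta
    answers correctly an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item centred at the passing point (b = 0)
# with moderate discrimination (a = 1.5).
print(round(p_correct(0.0, 1.5, 0.0), 2))  # ability at the cut-off -> 0.5
print(round(p_correct(1.0, 1.5, 0.0), 2))  # one unit above the cut-off -> 0.82
```

An item whose difficulty b sits at the cut-off ability gives a 50% chance of success exactly at the passing point, which matches the abstract's observation that the questions are designed to focus on the passing/not-passing boundary.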
Peer review: yes
URI: http://hdl.handle.net/10773/24547
ISBN: 978-2-87352-016-8
Publisher Version: https://www.sefi.be/wp-content/uploads/2018/10/SEFI-Proceedings-2-October-2018.pdf
Appears in Collections: CIDMA - Capítulo de livro; OGTCG - Capítulo de livro
Files in This Item:
File | Description | Size | Format
---|---|---|---
SEFI_2018_Preprint.pdf | | 733.46 kB | Adobe PDF