Please use this identifier to cite or link to this item: http://hdl.handle.net/10773/24547
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Cruz, João Pedro (pt_PT)
dc.contributor.author: Freitas, Adelaide (pt_PT)
dc.contributor.author: Macedo, Pedro (pt_PT)
dc.contributor.author: Seabra, Dina (pt_PT)
dc.date.accessioned: 2018-10-31T15:54:01Z
dc.date.available: 2018-10-31T15:54:01Z
dc.date.issued: 2018-09-17
dc.identifier.isbn: 978-2-87352-016-8
dc.identifier.uri: http://hdl.handle.net/10773/24547
dc.description.abstract: The quality control of written examinations is very important in the teaching and learning process of any course. In educational assessment contexts, Item Response Theory (IRT) has been applied to measure the quality of tests in areas of knowledge such as medicine, psychology, and the social sciences, and interest in it has been growing in other fields as well. Based on statistical models for the probability of an individual answering a question correctly, IRT can be used to measure examinees' ability in an assessment test and to estimate the difficulty and discrimination levels of each item in the test. In this work, IRT is applied to the Numerical and Statistical Methods course to measure the quality of tests based on Multiple Choice Questions (MCQ). The study covers three school years, namely 2015, 2016 and 2017, more specifically the 1st semester of the 2nd year of the degree course. It involved more than 300 students each year and examines questions (also called items) from chapters of the syllabus that were assessed through MCQ. Emphasis is given to the ranges of the item difficulty and item discrimination parameters, estimated by the IRT methodology, for each question in those exams. We show which ability levels each partial exam explores: around the passing point or at more demanding levels. After applying IRT to each test, which was composed of eight questions, we obtained 48 item difficulty and item discrimination parameters. Standard boxplots show a few atypical items, with extreme values of difficulty or discrimination, corresponding to MCQ that deserve further attention. We conclude that the vast majority of questions are well posed, considering that they are designed to focus on the cut-off point (passing/not passing). A reflection on the lessons learned from 'good' outliers and on possible causes of the 'bad' items suggests future improvements to classes, study materials and exams. (pt_PT)
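The methodology the abstract describes — two-parameter-logistic (2PL) IRT difficulty and discrimination estimates, screened for extreme items with standard boxplots — can be sketched as below. This is a minimal illustration, not the authors' code: the function names and the sample difficulty values are hypothetical, and real parameter estimation would use a dedicated IRT package rather than these formulas alone.

```python
import math
import statistics

def p_correct(theta, a, b):
    """2PL IRT model: probability that an examinee with ability `theta`
    answers an item correctly, given discrimination `a` and difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def flag_outliers(values, k=1.5):
    """Standard boxplot rule: flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical difficulty estimates for one eight-item MCQ test
# (illustrative numbers only, not taken from the paper):
difficulties = [-0.4, 0.1, 0.0, -0.2, 0.3, -0.1, 0.2, 2.8]
print(flag_outliers(difficulties))  # → [2.8], an unusually hard item

# At theta == b the 2PL model gives a 50% chance of a correct answer:
print(p_correct(0.0, a=1.0, b=0.0))  # → 0.5
```

Items flagged this way are exactly the "atypical" MCQ the abstract says deserve further attention: a very large difficulty or a very low discrimination singles a question out from the rest of the test.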
dc.language.iso: eng (pt_PT)
dc.publisher: SEFI (pt_PT)
dc.relation: info:eu-repo/grantAgreement/FCT/5876/147206/PT (pt_PT)
dc.rights: openAccess (pt_PT)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (pt_PT)
dc.subject: Education (pt_PT)
dc.subject: Item response theory (pt_PT)
dc.subject: Numerical and statistical methods (pt_PT)
dc.title: Quality of multiple choice questions in a numerical and statistical methods course (pt_PT)
dc.type: bookPart (pt_PT)
dc.description.version: published (pt_PT)
dc.peerreviewed: yes (pt_PT)
ua.event.date: 17-21 September 2018 (pt_PT)
degois.publication.firstPage: 158 (pt_PT)
degois.publication.lastPage: 165 (pt_PT)
degois.publication.location: Copenhagen, Denmark (pt_PT)
degois.publication.title: Proceedings of the 46th SEFI Annual Conference 2018 (pt_PT)
degois.publication.volume: 46 (pt_PT)
dc.relation.publisherversion: https://www.sefi.be/wp-content/uploads/2018/10/SEFI-Proceedings-2-October-2018.pdf (pt_PT)
Appears in Collections: CIDMA - Book chapter
OGTCG - Book chapter

Files in This Item:
SEFI_2018_Preprint.pdf (733.46 kB, Adobe PDF)


