Please use this identifier to cite or link to this item: http://hdl.handle.net/10773/29906
Title: Freedman’s paradox: a solution based on normalized entropy
Author: Macedo, Pedro
Keywords: Big data
Info-metrics
Regression
Variable selection
Issue Date: 2020
Publisher: Springer
Abstract: In linear regression models where the dependent variable has no relationship with any of the potential explanatory variables (a common scenario in real-world problems), some of those variables may still be identified as relevant by standard statistical procedures. This incorrect identification is usually known as Freedman's paradox. To avoid this disturbing effect in regression analysis, an info-metrics approach based on normalized entropy is discussed and illustrated in this work. The simulation results suggest that normalized entropy is a powerful procedure for identifying pure noise models, offering an alternative to the traditional statistical methodologies currently used by practitioners. (An illustrative simulation of the paradox appears after the item details below.)
Peer review: yes
URI: http://hdl.handle.net/10773/29906
DOI: 10.1007/978-3-030-56219-9_16
ISBN: 978-3-030-56218-2
Publisher Version: https://link.springer.com/chapter/10.1007%2F978-3-030-56219-9_16
Appears in Collections:CIDMA - Book chapter
DMat - Book chapter
PSG - Book chapter

Files in This Item:
File: PaperBook-RIA.pdf (414.6 kB, Adobe PDF, restricted access)
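
The paradox described in the abstract is straightforward to reproduce by simulation. Below is a minimal Python sketch (not code from the chapter, and not its normalized-entropy procedure): every predictor is pure noise by construction, yet classical t-tests flag some of them as significant. The sample size, number of predictors, and the use of numpy/statsmodels are illustrative assumptions.

# Minimal sketch of Freedman's paradox (illustrative only; not from the
# chapter). Regress pure noise on pure-noise predictors and count how
# many are flagged as "significant" by ordinary t-tests.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, k = 100, 50                    # observations, candidate predictors
X = rng.standard_normal((n, k))   # pure-noise explanatory variables
y = rng.standard_normal(n)        # dependent variable, unrelated to X

fit = sm.OLS(y, sm.add_constant(X)).fit()   # OLS with an intercept
n_flagged = int((fit.pvalues[1:] < 0.05).sum())
print(f"'Significant' predictors in a pure-noise model: {n_flagged}/{k}")
# With k truly irrelevant predictors, about 0.05 * k (2-3 here) are
# flagged on average, so standard screening "finds" structure in noise.

The chapter's remedy is not reproduced above. For reference, in the info-metrics literature normalized entropy for a generalized maximum entropy estimator is typically defined as S(p̂) = -p̂' ln p̂ / (K ln M), with K parameters and M support points per parameter; it lies in [0, 1], and values close to 1 indicate parameters that carry no information, which is the property used to flag pure noise models.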


