Title: Normalized entropy aggregation for inhomogeneous large-scale data
Authors: Costa, Maria da Conceição; Macedo, Pedro
Keywords: Big data
Maximum entropy
Normalized entropy
Issue Date: 2019
Publisher: Springer
Abstract: The relationship between information theory, statistics, and maximum entropy was established as early as the 1950s, following the works of Kullback, Leibler, Lindley and Jaynes. However, applications were restricted to very specific domains, and it was not until recently that the convergence of information processing, data analysis and inference demanded the foundation of a new scientific area, commonly referred to as Info-Metrics. As huge amounts of information and large-scale data have become available, the term "big data" has been used to refer to the many challenges that arise in their analysis: many observations, many variables (or both), limited computational resources, different time regimes, or multiple sources. In this work, we consider one particular aspect of big data analysis: the presence of inhomogeneities, which compromises the use of the classical framework in regression modelling. A new approach is proposed, based on introducing concepts from info-metrics into the analysis of inhomogeneous large-scale data. The framework of information-theoretic estimation methods is presented, along with some information measures. In particular, the normalized entropy is tested in aggregation procedures, and some simulation results are presented.
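As a minimal illustration of the information measure named in the abstract: the normalized entropy of a probability vector is commonly defined as S(p) = −Σ p_k ln p_k / ln K, which rescales Shannon entropy to [0, 1] (1 for a uniform distribution, 0 for a degenerate one). The sketch below assumes this standard definition; the function name and details are illustrative, not taken from the chapter itself.

```python
import numpy as np

def normalized_entropy(p):
    """Normalized (Shannon) entropy S(p) = -sum(p_k ln p_k) / ln K.

    Returns a value in [0, 1]: 1 for the uniform distribution over K
    outcomes, 0 for a degenerate (one-point) distribution.
    """
    p = np.asarray(p, dtype=float)
    K = p.size                      # K = full support size
    nz = p[p > 0]                   # 0 * ln 0 = 0 by convention
    return float(-(nz * np.log(nz)).sum() / np.log(K))

# Uniform distribution attains the maximum, a point mass the minimum:
print(normalized_entropy([0.25, 0.25, 0.25, 0.25]))  # → 1.0
print(normalized_entropy([1.0, 0.0, 0.0, 0.0]))      # → 0.0
```

In aggregation settings of the kind the abstract describes, such a [0, 1]-scaled measure can serve as a comparable score across heterogeneous data blocks, since it does not depend on the raw support size K.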
Peer review: yes
DOI: 10.1007/978-3-030-26036-1_2
ISBN: 978-3-030-26035-4
Appears in Collections: CIDMA - Book chapter
DMat - Book chapter
PSG - Book chapter

Files in This Item:
CostaMacedo2019.pdf (286.82 kB, Adobe PDF) — embargoed access


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.