Quality of randomness and node dropout regularization for fitting neural networks

dc.contributor.author: Koivu Aki
dc.contributor.author: Kakko Joona-Pekka
dc.contributor.author: Mäntyniemi Santeri
dc.contributor.author: Sairanen Mikko
dc.contributor.organization: Health Technology
dc.contributor.organization: Department of Computing
dc.contributor.organization-code: 2610300
dc.contributor.organization-code: 2610303
dc.converis.publication-id: 176026934
dc.converis.url: https://research.utu.fi/converis/portal/Publication/176026934
dc.date.accessioned: 2022-10-28T14:26:32Z
dc.date.available: 2022-10-28T14:26:32Z
dc.description.abstract: Quality of randomness in random number generation is an attribute manifested by a sufficiently random process and a sufficiently large sample size, and various statistical tests have been proposed to assess it. Random number generation is applied widely across the natural sciences, and one of its more prominent application areas is machine learning, where bounded or stochastic random number generation is utilized in various tasks. The artificial neural networks used, for example, in deep learning rely on random number generation for weight initialization, optimization, and methods that aim to reduce overfitting. One such widely adopted method is node dropout, whose internal logic is heavily dictated by the random number generator it utilizes. This study investigated the relationship between quality of randomness and node dropout regularization in reducing the overfitting of neural networks. Our experiments included five different random number generators, whose output was tested for quality of randomness with various statistical tests. These sets of random numbers were then used to dictate the internal logic of a node dropout layer in a neural network model across four different classification tasks. The impact of data size and relevant hyperparameters was tested, and the overall amount of overfitting was measured and compared against each generator's randomness results. The results suggest that true random number generation in node dropout can be both advantageous and disadvantageous, depending on the dataset and the prediction problem at hand. These findings suggest that the fitting of neural networks in general can be improved by adding random number generation experimentation to modelling.
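The mechanism the abstract describes, a dropout layer whose keep/drop decisions are dictated by an external random number generator and whose random source is screened by statistical randomness tests, can be sketched in a few lines. The snippet below is a minimal illustration and not the authors' code: `dropout_mask` and `monobit_test` are hypothetical names, and the inverted-dropout scaling and NIST-style frequency (monobit) test are standard techniques assumed here for concreteness.

```python
import numpy as np
from math import erfc, sqrt

def dropout_mask(shape, p_drop, rng):
    """Bernoulli keep/drop mask drawn from a caller-supplied generator.

    Passing the generator in makes it swappable, mirroring the study's
    setup where different RNGs drive the same node dropout layer.
    """
    keep_prob = 1.0 - p_drop
    mask = (rng.random(shape) < keep_prob).astype(np.float32)
    # Inverted dropout: scale kept nodes so inference needs no rescaling.
    return mask / keep_prob

def monobit_test(bits):
    """Frequency (monobit) test in the style of the NIST STS suite.

    Returns a p-value; small values indicate the 0/1 sequence is biased.
    """
    s_obs = abs(2.0 * bits.sum() - bits.size) / sqrt(bits.size)
    return erfc(s_obs / sqrt(2.0))

# Usage: two seeded pseudo-random generators driving the same toy layer.
# (A real randomness assessment would use far more than 32 bits.)
x = np.ones((4, 8), dtype=np.float32)  # toy activations
for seed in (1, 2):
    rng = np.random.default_rng(seed)
    mask = dropout_mask(x.shape, p_drop=0.5, rng=rng)
    bits = (mask > 0).astype(np.float64).ravel()
    print(f"seed {seed}: kept {bits.mean():.2f}, monobit p = {monobit_test(bits):.3f}")
```

In the paper's experiments, the random source would instead be one of the five tested generators (including true random number generation, per the abstract), and the randomness tests would be run on samples far larger than this toy mask.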
dc.identifier.eissn: 1873-6793
dc.identifier.jour-issn: 0957-4174
dc.identifier.olddbid: 188286
dc.identifier.oldhandle: 10024/171380
dc.identifier.uri: https://www.utupub.fi/handle/11111/43667
dc.identifier.url: https://doi.org/10.1016/j.eswa.2022.117938
dc.identifier.urn: URN:NBN:fi-fe2022091258812
dc.language.iso: en
dc.okm.affiliatedauthor: Koivu, Aki
dc.okm.affiliatedauthor: Mäntyniemi, Santeri
dc.okm.discipline: 113 Computer and information sciences
dc.okm.internationalcopublication: not an international co-publication
dc.okm.internationality: International publication
dc.okm.type: A1 Scientific article
dc.publisher: PERGAMON-ELSEVIER SCIENCE LTD
dc.publisher.country: United Kingdom
dc.publisher.country-code: GB
dc.relation.articlenumber: 117938
dc.relation.doi: 10.1016/j.eswa.2022.117938
dc.relation.ispartofjournal: Expert Systems with Applications
dc.relation.volume: 207
dc.source.identifier: https://www.utupub.fi/handle/10024/171380
dc.title: Quality of randomness and node dropout regularization for fitting neural networks
dc.year.issued: 2022

Files

Name: 1-s2.0-S0957417422011769-main.pdf
Size: 1.7 MB
Format: Adobe Portable Document Format