A two-step learning approach for solving full and almost full cold start problems in dyadic prediction
| dc.contributor.author | Tapio Pahikkala | |
| dc.contributor.author | Michiel Stock | |
| dc.contributor.author | Antti Airola | |
| dc.contributor.author | Tero Aittokallio | |
| dc.contributor.author | Bernard De Baets | |
| dc.contributor.author | Willem Waegeman | |
| dc.contributor.organization | Department of Mathematics and Statistics | |
| dc.contributor.organization | Computer Science | |
| dc.contributor.organization | Department of Computing | |
| dc.contributor.organization-code | 1.2.246.10.2458963.20.23479734818 | |
| dc.contributor.organization-code | 1.2.246.10.2458963.20.46717060993 | |
| dc.contributor.organization-code | 1.2.246.10.2458963.20.85312822902 | |
| dc.converis.publication-id | 3746444 | |
| dc.converis.url | https://research.utu.fi/converis/portal/Publication/3746444 | |
| dc.date.accessioned | 2022-10-28T12:37:20Z | |
| dc.date.available | 2022-10-28T12:37:20Z | |
| dc.description.abstract | <p> Dyadic prediction methods operate on pairs of objects (dyads), aiming to infer labels for out-of-sample dyads. We consider the full and almost full cold start problem in dyadic prediction, a setting that occurs when neither object in an out-of-sample dyad has been observed during training, or when one of them has been observed only a few times. A popular approach for addressing this problem is to train a model that makes predictions based on a pairwise feature representation of the dyads or, in the case of kernel methods, based on a tensor product pairwise kernel. As an alternative to such a kernel approach, we introduce a novel two-step learning algorithm that borrows ideas from the fields of pairwise learning and spectral filtering. We show theoretically that the two-step method is very closely related to the tensor product kernel approach, and experimentally that it yields slightly better predictive performance. Moreover, unlike existing tensor product kernel methods, the two-step method allows closed-form solutions for training and for parameter selection via cross-validation estimates, both in the full and almost full cold start settings, making the approach much more efficient and straightforward to implement.</p> | |
| dc.format.pagerange | 517 | |
| dc.format.pagerange | 532 | |
| dc.identifier.eisbn | 978-3-662-44851-9 | |
| dc.identifier.isbn | 978-3-662-44850-2 | |
| dc.identifier.jour-issn | 0302-9743 | |
| dc.identifier.olddbid | 177746 | |
| dc.identifier.oldhandle | 10024/160840 | |
| dc.identifier.uri | https://www.utupub.fi/handle/11111/34489 | |
| dc.identifier.urn | URN:NBN:fi-fe2021042715307 | |
| dc.okm.affiliatedauthor | Airola, Antti | |
| dc.okm.affiliatedauthor | Aittokallio, Tero | |
| dc.okm.affiliatedauthor | Pahikkala, Tapio | |
| dc.okm.discipline | 113 Computer and information sciences | en_GB |
| dc.okm.discipline | 113 Tietojenkäsittely ja informaatiotieteet | fi_FI |
| dc.okm.internationalcopublication | international co-publication | |
| dc.okm.internationality | International publication | |
| dc.okm.type | A4 Conference Article | |
| dc.relation.conference | The European Conferences on Machine Learning (ECML) and on Principles and Practice of Knowledge Discovery in Data Bases (PKDD) | |
| dc.relation.doi | 10.1007/978-3-662-44851-9_33 | |
| dc.relation.ispartofjournal | Lecture Notes in Computer Science | |
| dc.relation.ispartofseries | Lecture Notes in Computer Science | |
| dc.relation.volume | 8725 | |
| dc.source.identifier | https://www.utupub.fi/handle/10024/160840 | |
| dc.title | A two-step learning approach for solving full and almost full cold start problems in dyadic prediction | |
| dc.title.book | Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2014) | |
| dc.year.issued | 2014 |
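The abstract describes a two-step method with closed-form training. A minimal sketch of the idea, as two successive kernel ridge regressions (one over each object domain), is given below. The function names, variable names, and the NumPy formulation are illustrative assumptions, not the paper's reference implementation; only the two-stage ridge structure and the resulting closed form follow from the abstract's description.

```python
import numpy as np

def two_step_train(K, G, Y, lam1, lam2):
    """Closed-form training for a two-step kernel ridge regression sketch.

    K:  (n, n) kernel matrix over the first object domain (rows of Y)
    G:  (m, m) kernel matrix over the second object domain (columns of Y)
    Y:  (n, m) observed dyadic labels
    lam1, lam2: ridge regularizers for step 1 and step 2

    Step 1 fits a ridge model over the row objects; step 2 treats its
    outputs as labels and fits a ridge model over the column objects.
    The result is the dual coefficient matrix
        A = (K + lam1*I)^{-1} Y (G + lam2*I)^{-1}.
    """
    n, m = Y.shape
    A = np.linalg.solve(K + lam1 * np.eye(n), Y)       # step 1: ridge over rows
    A = np.linalg.solve(G + lam2 * np.eye(m), A.T).T   # step 2: ridge over columns
    return A

def two_step_predict(A, k_new, g_new):
    """Full cold start prediction for a dyad of two unseen objects.

    k_new: (n,) kernel evaluations of the new row object vs. training rows
    g_new: (m,) kernel evaluations of the new column object vs. training columns
    """
    return k_new @ A @ g_new
```

Because both steps are plain ridge regressions, hyperparameters lam1 and lam2 can be tuned independently with standard closed-form leave-one-out estimates in each domain, which is the efficiency advantage the abstract highlights over the tensor product kernel formulation.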