Abstract. Multi-task learning (MTL) is commonly used in applications that require a single model to optimize several related learning tasks simultaneously. In such settings, a single model is often trained on a shared set of input features and optimized to produce several outputs at once. The objective of this study is to contribute to a better understanding of how and when different MTL techniques work. To achieve this objective, methods were compared on different subsets of the input features: the intersection, union, and difference of the features that are relevant to each output individually. In addition, negative transfer, a phenomenon in which improving a model's performance on one task degrades its performance on a task with different requirements, was also investigated.
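The feature subsets mentioned above can be expressed with ordinary set operations. The following is a minimal illustrative sketch (not the paper's code), assuming two hypothetical tasks with named features:

```python
# Hypothetical per-task relevant feature sets (names are illustrative only).
relevant = {
    "task_a": {"f1", "f2", "f3"},
    "task_b": {"f2", "f3", "f4"},
}

# Features relevant to every task (shared signal).
intersection = set.intersection(*relevant.values())
# Features relevant to at least one task.
union = set.union(*relevant.values())
# Features relevant to task_a but not task_b.
difference = relevant["task_a"] - relevant["task_b"]

print(sorted(intersection))  # ['f2', 'f3']
print(sorted(union))         # ['f1', 'f2', 'f3', 'f4']
print(sorted(difference))    # ['f1']
```

Training an MTL model on each of these subsets in turn is one way to probe which shared features help all tasks and which task-specific features contribute to negative transfer.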