KUCST at CheckThat 2023: How good can we be with a generic model?

Publication: Working paper › Preprint › Research

Standard

KUCST at CheckThat 2023 : How good can we be with a generic model? / Agirrezabal, Manex.

2023.


Harvard

Agirrezabal, M 2023 'KUCST at CheckThat 2023: How good can we be with a generic model?'.

APA

Agirrezabal, M. (2023). KUCST at CheckThat 2023: How good can we be with a generic model?

Vancouver

Agirrezabal M. KUCST at CheckThat 2023: How good can we be with a generic model? 2023.

Author

Agirrezabal, Manex. / KUCST at CheckThat 2023 : How good can we be with a generic model?. 2023.

Bibtex

@techreport{6754bf16f82f4c7d8ca8b7e87baeecda,
title = "KUCST at CheckThat 2023: How good can we be with a generic model?",
abstract = "In this paper we present our method for tasks 2 and 3A at the CheckThat2023 shared task. We make use of a generic approach that has been used to tackle a diverse set of tasks, inspired by authorship attribution and profiling. We train a number of Machine Learning models and our results show that Gradient Boosting performs the best for both tasks. Based on the official ranking provided by the shared task organizers, our model shows an average performance compared to other teams.",
keywords = "cs.CL",
author = "Manex Agirrezabal",
year = "2023",
language = "English",
type = "WorkingPaper",

}

RIS

TY - UNPB

T1 - KUCST at CheckThat 2023

T2 - How good can we be with a generic model?

AU - Agirrezabal, Manex

PY - 2023

Y1 - 2023

N2 - In this paper we present our method for tasks 2 and 3A at the CheckThat2023 shared task. We make use of a generic approach that has been used to tackle a diverse set of tasks, inspired by authorship attribution and profiling. We train a number of Machine Learning models and our results show that Gradient Boosting performs the best for both tasks. Based on the official ranking provided by the shared task organizers, our model shows an average performance compared to other teams.

AB - In this paper we present our method for tasks 2 and 3A at the CheckThat2023 shared task. We make use of a generic approach that has been used to tackle a diverse set of tasks, inspired by authorship attribution and profiling. We train a number of Machine Learning models and our results show that Gradient Boosting performs the best for both tasks. Based on the official ranking provided by the shared task organizers, our model shows an average performance compared to other teams.

KW - cs.CL

M3 - Preprint

BT - KUCST at CheckThat 2023

ER -