Jason Duran – Computer Science
Title: Explainability and Interpretability for the Problem of Misinformation Detection
Abstract:
Detecting and identifying online fake news and misinformation is a rapidly growing area of computer science. Explainability and interpretability are particularly important here: understanding why a model flags a piece of information as misleading or false is crucial to the adoption and acceptance of these models as tools for improving the quality of everyone's social media experience. Explainability in this area is also particularly interesting because of the broad array of models involved, spanning linear, tree-based, deep learning, graph neural network, attention-based, and large language models. These systems often combine multiple model types and rely on many NLP techniques, which makes explainability especially challenging and difficult to generalize across models and data. In addition, recently emerged and even more complex large language models such as ChatGPT appear to be robust classifiers of misinformation; while they offer more human-interpretable output, they remain even more challenging to explain directly. This work surveys the general landscape of explainability across these models, touching on the most significant relevant ideas in explainability; reviews the seminal and recent works of the past few years specifically on explaining fake news detection; and identifies some interesting open problems in the area.
Committee:
Dr. Francesca Spezzano (Chair), Dr. Edoardo Serra, Dr. Gaby Dagher.