Comprehensive Exam – Jeffrey Fairbanks

November 13 @ 12:30 pm - 2:30 pm MST

Presented by Jeffrey Fairbanks
Computing PhD, Computer Science emphasis

Online Presentation via Zoom

Analysis of Boosting Effectiveness and Reasoning in Large Language Models

Abstract: Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) by achieving remarkable performance across various tasks. However, their effectiveness is often limited by the need for well-constructed prompts and the substantial computational resources required for fine-tuning. This paper explores advanced techniques to enhance LLM reasoning and performance, focusing on five innovative methods: ProTeGi, Self-Refine, CRITIC, TextGrad, and Reflexion. These methods introduce novel approaches such as Beam Search and Reflection to optimize prompts and outputs, thereby reducing the dependency on extensive labeled data and manual human intervention. Through a comprehensive literature review, this paper evaluates the strengths and limitations of each method, highlighting their contributions to improving LLM reasoning. The integration of linguistic feedback, iterative refinement, and external tool interaction is examined as a key strategy for optimization via reflection. Additionally, the paper discusses the challenges of computational expense, data requirements, and safety concerns associated with these techniques. The findings suggest that combining Beam Search with Reflection significantly enhances LLM output effectiveness, particularly in complex reasoning tasks. This paper aims to provide a detailed understanding of current advancements and open questions in LLM optimization, paving the way for more efficient and effective AI systems.
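The iterative-refinement-with-linguistic-feedback pattern the abstract describes (as in Self-Refine and Reflexion) can be sketched as a simple generate–critique–revise loop. This is an illustrative sketch only, not code from the paper: the `llm_draft`, `llm_critique`, and `llm_refine` functions are hypothetical stand-ins for real model calls, implemented here as deterministic stubs so the control flow is runnable.

```python
def llm_draft(task):
    # Hypothetical stand-in for an initial LLM generation.
    return "2 + 2 = 5"

def llm_critique(task, draft):
    # Hypothetical critic model: returns natural-language feedback
    # on the draft, or None when the draft is acceptable.
    if "= 5" in draft:
        return "The arithmetic is wrong: 2 + 2 equals 4."
    return None

def llm_refine(task, draft, feedback):
    # Hypothetical refiner model: revises the draft using the feedback.
    return draft.replace("= 5", "= 4")

def self_refine(task, max_iters=3):
    """Generate a draft, then alternate critique and revision until the
    critic accepts the draft or the iteration budget is exhausted."""
    draft = llm_draft(task)
    for _ in range(max_iters):
        feedback = llm_critique(task, draft)
        if feedback is None:  # critic accepts: stop early
            break
        draft = llm_refine(task, draft, feedback)
    return draft

print(self_refine("What is 2 + 2?"))  # -> 2 + 2 = 4
```

In the methods surveyed, the critique step is itself an LLM call producing linguistic feedback (or, in CRITIC, the output of an external tool), and approaches like ProTeGi extend this loop with Beam Search over multiple candidate revisions rather than keeping a single draft.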

Committee: Dr. Edoardo Serra (chair), Dr. Francesca Spezzano, Dr. Nasir Eisty, Dr. Steven Cutchin (CompEE)