
Research Papers

[FSE24] A Quantitative and Qualitative Evaluation of LLM-Based Explainable Fault Localization

heechan.yang 2024. 7. 13. 18:06

Authors: Sungmin Kang, Gabin An, Shin Yoo (KAIST) [1]

Diagram of AutoFL from the paper

 

Notes

This paper presents AutoFL, an automatic fault localization tool that leverages an LLM together with metadata of the target subject, such as the failing test case and its (class, method) coverage, along with code snippets and comments. Not only does AutoFL identify the location of the fault within the program code (as SBFL and MBFL techniques do), it also provides an explanation of the root cause of the fault and a suggestion of how to fix it.
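The interaction loop described above can be sketched as follows. This is a minimal illustration of an LLM function-calling agent for fault localization, not the paper's actual implementation: the tool names, the stubbed LLM, and the toy `Stack` example are all my own assumptions.

```python
import json

# Hypothetical tool set an AutoFL-style agent might expose to the LLM.
# Names and return values are illustrative, not taken from the paper.
TOOLS = {
    "get_covered_methods": lambda test: ["Stack.push", "Stack.pop"],
    "get_method_snippet": lambda name: f"// source of {name}",
}

def stub_llm(messages):
    """Stand-in for a real function-calling LLM: it first asks for
    coverage, then for a snippet, then commits to a verdict."""
    tool_turns = sum(1 for m in messages if m["role"] == "tool")
    if tool_turns == 0:
        return {"tool": "get_covered_methods", "args": {"test": "testPop"}}
    if tool_turns == 1:
        return {"tool": "get_method_snippet", "args": {"name": "Stack.pop"}}
    return {"answer": "Stack.pop",
            "explanation": "pop() does not guard against an empty stack"}

def localize_fault(failing_test, llm=stub_llm, max_steps=5):
    """Let the LLM gather evidence via tools for a bounded number of
    steps, then return its fault location and explanation."""
    messages = [{"role": "user", "content": f"Failing test: {failing_test}"}]
    for _ in range(max_steps):
        reply = llm(messages)
        if "answer" in reply:
            return reply
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return {"answer": None, "explanation": "step budget exhausted"}

verdict = localize_fault("testPop")
print(verdict["answer"])  # the method the stub LLM blames
```

With a real LLM, `stub_llm` would be replaced by an API call that returns either a tool request or a final answer; the surrounding loop stays the same.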


Opinion on the paper

It is well known that LLMs are limited by the number of tokens they can take in. The authors of the paper seem to have put great effort into reducing the input size by snipping out irrelevant data that might confuse the LLM. The automated process of AutoFL is probably what most developers today do manually with ChatGPT in a web browser. I hope such research can also find use in industry.


Questions

  1. At which stage of the AutoFL framework is the LLM given the list of methods to search from?
  2. (To myself) What can I practice to improve my critical reading skills?

 

Additional Materials to Read

  1. Chain-of-Thought prompting, [40] of the paper
  2. OpenAI's LLM function calling, [30] of the paper
  3. LLM self-consistency, [39] of the paper
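On the last item: self-consistency samples several reasoning paths from the model and majority-votes the final answers. A minimal sketch, assuming `sample_fn` stands in for repeated LLM runs on the same failing test (the sample answers below are made up for illustration):

```python
from collections import Counter

def self_consistent_answer(sample_fn, n=5):
    """Run the model n times with sampling and majority-vote the
    answers, as in self-consistency decoding. sample_fn is any
    callable that returns one answer per call."""
    votes = Counter(sample_fn() for _ in range(n))
    answer, count = votes.most_common(1)[0]
    return answer, count / n  # winning answer plus its vote share

# Illustrative stand-in for five sampled LLM verdicts.
samples = iter(["Stack.pop", "Stack.pop", "Stack.push",
                "Stack.pop", "Stack.pop"])
print(self_consistent_answer(lambda: next(samples), n=5))  # ('Stack.pop', 0.8)
```

The vote share doubles as a rough confidence score, which is one natural way to rank suspicious locations.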

 

Reference

[1] Sungmin Kang, Gabin An, and Shin Yoo. "A Quantitative and Qualitative Evaluation of LLM-Based Explainable Fault Localization." FSE 2024.