[FSE24] A Quantitative and Qualitative Evaluation of LLM-Based Explainable Fault Localization

heechan.yang · 2024. 7. 13. 18:06

Authors: Sungmin Kang, Gabin An, Shin Yoo (KAIST) [1]

Notes
This paper presents AutoFL, an automatic fault localization tool that leverages an LLM together with metadata about the target subject, such as the failing test case and its (class, method)-level coverage, along with code snippets and comments. Not only does AutoFL identify the location of the fault within the program code (which is all SBFL and MBFL techniques provide), it also explains the root cause of the fault and suggests how to fix it.
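As a rough illustration of the workflow described above, the sketch below defines tool schemas in the OpenAI function-calling format and a dispatcher that answers tool calls from toy coverage metadata. The function names (`get_covered_methods`, `get_code_snippet`) and the data shapes are my own assumptions for illustration; the real AutoFL interface may differ.

```python
import json

# Hypothetical tool schemas in the OpenAI function-calling format.
# These names are illustrative, not AutoFL's actual interface.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_covered_methods",
            "description": "List method signatures covered by the failing test.",
            "parameters": {"type": "object", "properties": {}, "required": []},
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_code_snippet",
            "description": "Return the source code of a covered method.",
            "parameters": {
                "type": "object",
                "properties": {"signature": {"type": "string"}},
                "required": ["signature"],
            },
        },
    },
]

# Toy coverage metadata standing in for real test/coverage data.
COVERAGE = {
    "Calc.add(int,int)": "int add(int a, int b) { return a + b; }",
    "Calc.div(int,int)": "int div(int a, int b) { return a / b; }",  # no zero check
}

def dispatch(name: str, arguments: str) -> str:
    """Answer a single tool call from the toy metadata.

    In the real system, the LLM would issue these calls over several
    rounds before emitting a ranked list of suspicious methods plus an
    explanation of the root cause.
    """
    args = json.loads(arguments)
    if name == "get_covered_methods":
        return json.dumps(list(COVERAGE))
    if name == "get_code_snippet":
        return COVERAGE.get(args["signature"], "unknown method")
    raise ValueError(f"unknown tool: {name}")
```

The key design point is that the model never sees the whole codebase up front: it starts from the failing test and pulls in only the methods it asks about, which keeps the prompt within the token budget.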
Opinion on the paper
It is well known that LLMs are limited in the number of tokens they can take in. The authors appear to have put great effort into reducing the input size by snipping out irrelevant data that might confuse the LLM. The automated process of AutoFL is probably what most developers today do manually with ChatGPT in a web browser. I hope such research can also be adopted in industry.
Questions
- At which stage of the autoFL framework is the LLM given the list of methods to search from?
- (To myself) What can I practice to improve my critiquing skills?
Additional Materials to Read
- Chain-of-Thought prompting, [40] of the paper
- OpenAI's LLMs function calling, [30] of the paper
- LLM self-consistency, [39] of the paper
Reference
[1] Sungmin Kang, Gabin An, and Shin Yoo. "A Quantitative and Qualitative Evaluation of LLM-Based Explainable Fault Localization." FSE 2024.