Refreshing Large Language Models With Search Engine Augmentation
FRESHLLMs
Posted on January 5, 2024
This post is a brief summary of the paper FRESHLLMs: Refreshing Large Language Models with Search Engine Augmentation (Vu et al., arXiv 2023), which I read and studied out of curiosity.
[Read More]
Tags:
LLM, Factuality, Knowledge
How Can We Know What Language Models Know?
Language Model Knowledge Analysis
Posted on December 18, 2023
This post is a brief summary of the paper How Can We Know What Language Models Know? (Jiang et al., TACL 2020), which I read and studied out of curiosity.
[Read More]
Tags:
LLM, Knowledge, Prompt
Teaching Models to Express Their Uncertainty in Words
Teaching Models to Express Their Uncertainty in Words
Posted on November 20, 2023
This post is a brief summary of the paper Teaching Models to Express Their Uncertainty in Words (Lin et al., TMLR 2022), which I read and studied out of curiosity.
[Read More]
Tags:
LLM, Factuality, Knowledge
(ITI) Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
ITI
Posted on October 14, 2023
This post is a brief summary of the paper Inference-Time Intervention: Eliciting Truthful Answers from a Language Model (Li et al., arXiv 2023), which I read and studied out of curiosity.
[Read More]
Tags:
LLM, Decoding, Factuality
(DoLa) Decoding by Contrasting Layers Improves Factuality in Large Language Models
DoLa
Posted on October 14, 2023
This post is a brief summary of the paper DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models (Chuang et al., arXiv 2023), which I read and studied out of curiosity.
[Read More]
Tags:
LLM, Decoding, Factuality