# LangChain Authors - Improving Document Retrieval With Contextual Compression (Highlights)

![rw-book-cover|256](https://images.unsplash.com/photo-1562654501-a0ccc0fc3fb1?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGRvY3VtZW50fGVufDB8fHx8MTY4MjA1NDU3MQ&ixlib=rb-4.0.3&q=80&w=2000)

## Metadata

**Review**:: [readwise.io](https://readwise.io/bookreview/27018362)
**Source**:: #from/readwise #from/reader
**Zettel**:: #zettel/fleeting
**Status**:: #x
**Authors**:: [[LangChain Authors]]
**Full Title**:: Improving Document Retrieval With Contextual Compression
**Category**:: #articles #readwise/articles
**Category Icon**:: 📰
**URL**:: [blog.langchain.dev](https://blog.langchain.dev/improving-document-retrieval-with-contextual-compression/)
**Host**:: [[blog.langchain.dev]]
**Highlighted**:: [[2023-04-29]]
**Created**:: [[2023-04-29]]

## Highlights

- Inserting irrelevant information into the LLM prompt is bad because: 1. It might distract the LLM from the relevant information 2. It takes up precious space that could be used to insert other relevant information. ([View Highlight](https://read.readwise.io/read/01gz554rwzmhndmnwjqt36nh96)) ^517845361
- “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale. ([View Highlight](https://read.readwise.io/read/01gz555szy9s7gvk9wzrtfywc5)) ^517845434
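
A rough sketch of how this looks with LangChain's `ContextualCompressionRetriever`, which wraps a base retriever with a document compressor. The toy texts, query, and FAISS setup below are made up for illustration and assume an OpenAI API key is available:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.vectorstores import FAISS

# Toy base retriever over a few example strings (illustrative data only).
texts = [
    "The meeting covered Q3 revenue, hiring plans, and the office move.",
    "Contextual compression trims retrieved documents down to query-relevant parts.",
    "Unrelated note: remember to water the plants.",
]
retriever = FAISS.from_texts(texts, OpenAIEmbeddings()).as_retriever()

# LLMChainExtractor asks the LLM to keep only the parts of each retrieved
# document that are relevant to the query (compressing individual documents).
compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))

compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=retriever,
)

compressed_docs = compression_retriever.get_relevant_documents(
    "What is contextual compression?"
)
for doc in compressed_docs:
    print(doc.page_content)
```

For the "filtering out documents wholesale" side, the same wrapper can take an `EmbeddingsFilter` (from `langchain.retrievers.document_compressors`) as the compressor, which drops retrieved documents below a similarity threshold instead of rewriting their contents.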