# doc2query/all-with_prefix-t5-base-v1

This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).

It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene (a short sketch follows this list). The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
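
The sketch below illustrates the document expansion workflow from the first bullet. It assumes a locally running Elasticsearch instance and the `elasticsearch` Python client (8.x); the `text2query` prefix, the number of sampled queries, and the index name are illustrative choices for this example, not values fixed by this model card. The [BEIR repository](https://github.com/UKPLab/beir) shows the full docT5query + Pyserini setup.

```python
from elasticsearch import Elasticsearch
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "doc2query/all-with_prefix-t5-base-v1"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def generate_queries(text, num_queries=20):
    # Sample num_queries search queries for the given passage.
    input_ids = tokenizer.encode("text2query: " + text, max_length=384,
                                 truncation=True, return_tensors="pt")
    outputs = model.generate(input_ids=input_ids, max_length=64,
                             do_sample=True, top_p=0.95,
                             num_return_sequences=num_queries)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

es = Elasticsearch("http://localhost:9200")  # assumes a local Elasticsearch 8.x instance

passages = [
    "Python is an interpreted, high-level, general-purpose programming language.",
    "Lucene is a free and open-source search engine software library.",
]

# Append the generated queries to each passage and index the expanded text,
# so BM25 can match query terms that never appear in the original passage.
for i, passage in enumerate(passages):
    expanded = passage + " " + " ".join(generate_queries(passage))
    es.index(index="expanded-passages", id=i, document={"text": expanded})
```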
## Usage
```python
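# Sketch of typical Hugging Face Transformers usage for this model.
# The "text2query" prefix, the example text, and the sampling parameters
# below are illustrative assumptions, not values fixed by this model card.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "doc2query/all-with_prefix-t5-base-v1"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# The model expects a task prefix in front of the input text,
# e.g. "text2query: <passage>" to generate search queries for a passage.
prefix = "text2query"
text = "Python is an interpreted, high-level, general-purpose programming language."
input_ids = tokenizer.encode(prefix + ": " + text, max_length=384,
                             truncation=True, return_tensors="pt")

# Sample several queries for the passage.
outputs = model.generate(
    input_ids=input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=5,
)

print("Input:", text)
print("Generated queries:")
for i, output in enumerate(outputs):
    print(f"{i + 1}: {tokenizer.decode(output, skip_special_tokens=True)}")
```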