Results 1 - 10 of 31 for Bowman (0.14 sec)

  1. model_cards/DeepPavlov/rubert-base-cased-sentence/README.md

    
    [1]: S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning (2015). A large annotated corpus for learning natural language inference. arXiv preprint [arXiv:1508.05326](https://arxiv.org/abs/1508.05326)
    
    Plain Text
    - Registered: 2020-10-25 10:36
    - Last Modified: 2020-07-15 16:59
    - 976 bytes
    - Viewed (0)
  2. model_cards/DeepPavlov/bert-base-multilingual-cased-sentence/README.md

    
    [1]: Williams A., Nangia N. & Bowman S. (2017). A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. arXiv preprint [arXiv:1704.05426](https://arxiv.org/abs/1704.05426)
    
    [2]: Williams A., Bowman S. (2018). XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint [arXiv:1809.05053](https://arxiv.org/abs/1809.05053)
    
    Plain Text
    - Registered: 2020-10-25 10:36
    - Last Modified: 2020-03-06 22:19
    - 1K bytes
    - Viewed (0)
  3. model_cards/gsarti/scibert-nli/README.md

    # SciBERT-NLI
    
    This is the model [SciBERT](https://github.com/allenai/scibert) [1] fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [2].
    
    Plain Text
    - Registered: 2020-10-25 10:36
    - Last Modified: 2020-03-24 15:01
    - 2K bytes
    - Viewed (0)
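    The card above says the checkpoint produces universal sentence embeddings via the `sentence-transformers` library. As a minimal sketch of using such a checkpoint, assuming `gsarti/scibert-nli` resolves as a model name on the Hugging Face hub (the sample sentences are illustrative):

    ```python
    from sentence_transformers import SentenceTransformer

    # Load the fine-tuned checkpoint; assumes 'gsarti/scibert-nli' is
    # resolvable by sentence-transformers from the Hugging Face model hub.
    model = SentenceTransformer("gsarti/scibert-nli")

    # Each sentence maps to one fixed-size embedding vector.
    sentences = [
        "The protein binds to the receptor.",
        "Receptor binding is mediated by the protein.",
    ]
    embeddings = model.encode(sentences)
    print(embeddings.shape)  # (2, embedding_dim)
    ```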
  4. examples/text-classification/README.md

    # XNLI
    
    Based on the script [`run_xnli.py`](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_xnli.py).
    
    Plain Text
    - Registered: 2020-10-25 10:36
    - Last Modified: 2020-10-22 15:42
    - 10.2K bytes
    - Viewed (0)
  5. docs/source/main_classes/processors.rst

    `The Cross-Lingual NLI Corpus (XNLI) <https://www.nyu.edu/projects/bowman/xnli/>`__ is a benchmark that evaluates
    the quality of cross-lingual text representations.
    XNLI is a crowd-sourced dataset based on `MultiNLI <http://www.nyu.edu/projects/bowman/multinli/>`__: pairs of text are labeled with textual entailment annotations.
    Plain Text
    - Registered: 2020-10-25 10:36
    - Last Modified: 2020-09-23 17:20
    - 6.9K bytes
    - Viewed (0)
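    The processors page documented above covers an XNLI data processor. A minimal sketch of reading the dataset with it, assuming `XnliProcessor` is importable from `transformers` (it lived under `transformers.data.processors.xnli` in versions from this period) and that the XNLI data has been unpacked locally; the path is hypothetical:

    ```python
    from transformers import XnliProcessor

    # Hypothetical location of the unpacked XNLI data ($XNLI_DIR below).
    xnli_dir = "/path/to/XNLI"

    # The processor reads the machine-translated training file for one language.
    processor = XnliProcessor(language="de")
    print(processor.get_labels())  # ['contradiction', 'entailment', 'neutral']
    train_examples = processor.get_train_examples(xnli_dir)
    print(len(train_examples))
    ```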
  6. model_cards/gsarti/biobert-nli/README.md

    # BioBERT-NLI
    
    This is the model [BioBERT](https://github.com/dmis-lab/biobert) [1] fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [2].
    
    Plain Text
    - Registered: 2020-10-25 10:36
    - Last Modified: 2020-03-25 01:15
    - 2K bytes
    - Viewed (0)
  7. model_cards/gsarti/covidbert-nli/README.md

    The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [1] using the **average pooling strategy** and a **softmax loss**.
    
    Plain Text
    - Registered: 2020-10-25 10:36
    - Last Modified: 2020-03-31 11:59
    - 2.2K bytes
    - Viewed (0)
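    The card names an **average pooling strategy** and a **softmax loss**. A minimal `sentence-transformers` training sketch under those assumptions follows; the base checkpoint name and the two training pairs are placeholders, not the card's actual weights or data:

    ```python
    from sentence_transformers import InputExample, SentenceTransformer, losses, models
    from torch.utils.data import DataLoader

    # Wrap an encoder and a mean-pooling layer, mirroring the card's setup.
    word_model = models.Transformer("bert-base-uncased")  # stand-in checkpoint
    pooling = models.Pooling(
        word_model.get_word_embedding_dimension(),
        pooling_mode_mean_tokens=True,  # the "average pooling strategy"
    )
    model = SentenceTransformer(modules=[word_model, pooling])

    # Softmax classification loss over the three NLI labels.
    train_loss = losses.SoftmaxLoss(
        model=model,
        sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
        num_labels=3,
    )

    # Tiny illustrative pairs; real training uses SNLI/MultiNLI examples.
    train_examples = [
        InputExample(texts=["A man is eating.", "A person eats."], label=1),
        InputExample(texts=["A man is eating.", "Nobody is eating."], label=0),
    ]
    train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
    model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
    ```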
  8. multilingual.md

    training set has been machine-translated.
    
    To run the fine-tuning code, please download the
    [XNLI dev/test set](https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip) and the
    [XNLI machine-translated training set](https://www.nyu.edu/projects/bowman/xnli/XNLI-MT-1.0.zip)
    and then unpack both .zip files into some directory `$XNLI_DIR`.
    
    To run fine-tuning on XNLI, note that the language is hard-coded into `run_classifier.py`
    Plain Text
    - Registered: 2020-10-25 22:38
    - Last Modified: 2019-10-17 19:45
    - 11K bytes
    - Viewed (0)
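    For the download-and-unpack step above, here is a minimal Python sketch; the two URLs come from the snippet, and the target directory stands in for `$XNLI_DIR`:

    ```python
    import io
    import os
    import urllib.request
    import zipfile

    # Stand-in for $XNLI_DIR; any writable directory works.
    xnli_dir = os.path.expanduser("~/XNLI")
    os.makedirs(xnli_dir, exist_ok=True)

    urls = [
        "https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip",
        "https://www.nyu.edu/projects/bowman/xnli/XNLI-MT-1.0.zip",
    ]
    for url in urls:
        with urllib.request.urlopen(url) as resp:
            # Unpack each archive in place, like `unzip file.zip -d $XNLI_DIR`.
            zipfile.ZipFile(io.BytesIO(resp.read())).extractall(xnli_dir)
    ```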
  9. acknowledgments.tex

    \item Chapter 11 (Practical Methodology): Daniel Beckstein. \item Chapter 12 (Applications): George Dahl, Vladimir Nekrasov and Ribana Roscher. \item Chapter 13 (Linear Factor Models): Jayanth Koushik. \item Chapter 15 (Representation Learning): Kunal Ghosh. \item Chapter 16 (Structured Probabilistic Models for Deep Learning): Minh Lê and Anton Varfolom. \item Chapter 18 (Confronting the Partition Function): Sam Bowman. \item Chapter 19 (Approximate Inference): Yujia Bao. \item Chapter 20 (Deep Generative Models): Nicolas Chapados, Daniel Galvez, Wenming Ma, Fady Medhat, Shakir Mohamed and Gr\'egoire Montavon. \item Bibliography: Lukas Michelbacher and Leslie N. Smith. \end{itemize} % CHECK: make sure the chapters are...
    Others
    - Registered: 2020-10-22 11:44
    - Last Modified: 2017-05-15 02:11
    - 10.3K bytes
    - Viewed (0)
  10. docs/source/pretrained_models.rst

    ``roberta-large`` fine-tuned on `MNLI <http://www.nyu.edu/projects/bowman/multinli/>`__.
    Plain Text
    - Registered: 2020-10-25 10:36
    - Last Modified: 2020-10-09 09:16
    - 93K bytes
    - Viewed (0)
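    The table row above refers to a RoBERTa-large model fine-tuned on MNLI. Assuming the usual `roberta-large-mnli` hub checkpoint is meant, a minimal entailment-scoring sketch looks like this; the premise/hypothesis pair is illustrative:

    ```python
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # 'roberta-large-mnli' is assumed to be the hub name of the checkpoint.
    tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
    model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

    premise = "A soccer game with multiple males playing."
    hypothesis = "Some men are playing a sport."
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    # This checkpoint's head orders labels as contradiction / neutral / entailment.
    label = ["contradiction", "neutral", "entailment"][logits.argmax(-1).item()]
    print(label)
    ```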