Results 1 - 10 of 53 for Peng (0.51 sec)

  1. lib/subutil/src/main/java/com/blankj/subutil/util/PinyinUtils.java

      biao  kun   kun   ti    fang  xiu   ran   mao   dan   kun   bin   fa    tiao  pi    zi    fa    ran   ti    pao   pi    mao   fu    er    rong  qu    none  xiu   gua   ji    peng  zhua  shao  sha   ti    li    bin   zong  ti    peng  song  zheng quan  zong  shun  jian  duo   hu    la    jiu   qi    lian  zhen  bin   peng  mo    san   man   man   seng  xu    lie   qian  qian  nong  huan  kuai  ning  bin   lie   rang  dou   dou   nao   hong  xi    dou   kan   dou   dou   jiu   chang yu    yu    li...
    Java
    - Registered: 2020-11-20 00:33
    - Last Modified: 2019-07-10 11:46
    - 129.1K bytes
    - Viewed (0)
  2. .mailmap

    Andres Ornelas <******@****.***>
    Caitlin Potter <******@****.***>
    Caitlin Potter <******@****.***> <******@****.***>
    Di Peng <******@****.***>
    Di Peng <******@****.***> <******@****.***>
    Georgios Kalpakas <******@****.***>
    Georgios Kalpakas <******@****.***> <******@****.***>
    Julie Ralph <******@****.***>
    Lucas Galfaso <******@****.***>
    Martin Staffa <******@****.***>
    Plain Text
    - Registered: 2020-11-20 06:53
    - Last Modified: 2017-11-22 12:24
    - 1.3K bytes
    - Viewed (0)
  3. model_cards/zanelim/singbert-large-sg/README.md

      'token_str': '.'},
     {'sequence': '[CLS] kopi c siew bao [SEP]',
      'score': 0.0017727474914863706,
      'token': 25945,
      'token_str': 'bao'},
     {'sequence': '[CLS] kopi c siew peng [SEP]',
      'score': 0.0012526646023616195,
      'token': 26473,
      'token_str': 'peng'}]
    
    >>> nlp("one teh c siew dai, and one kopi [MASK]")
    
    [{'sequence': '[CLS] one teh c siew dai, and one kopi. [SEP]',
      'score': 0.5249741077423096,
      'token': 1012,
    Plain Text
    - Registered: 2020-11-22 10:36
    - Last Modified: 2020-08-30 10:21
    - 6.5K bytes
    - Viewed (0)
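    The `nlp(...)` call excerpted in this result is a Hugging Face fill-mask pipeline. A minimal sketch of how such a pipeline might be set up for this checkpoint, assuming `transformers` is installed and `zanelim/singbert-large-sg` is available on the model hub (the sample sentence is illustrative; scores and tokens will differ from the excerpt):

    ```python
    # Sketch: a fill-mask pipeline for the singbert checkpoint referenced above.
    from transformers import pipeline

    nlp = pipeline("fill-mask", model="zanelim/singbert-large-sg")

    # Each prediction carries 'sequence', 'score', 'token', and 'token_str',
    # matching the fields shown in the excerpt.
    for prediction in nlp("one teh c siew dai, and one kopi [MASK]"):
        print(prediction["token_str"], round(prediction["score"], 4))
    ```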
  4. pipenv/vendor/cerberus/tests/test_registries.py

        document = {'a': {'bar': 'a'}, 'b': {'bar': 'b'}}
        assert_success(document, schema)
    
    
    def test_top_level_reference():
        schema_registry.add('peng', {'foo': {'type': 'integer'}})
        document = {'foo': 42}
        assert_success(document, 'peng')
    
    
    def test_rules_set_simple():
        rules_set_registry.add('foo', {'type': 'integer'})
        assert_success({'bar': 1}, {'bar': 'foo'})
    Python
    - Registered: 2020-11-24 21:31
    - Last Modified: 2019-05-15 14:42
    - 2.8K bytes
    - Viewed (0)
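    The test excerpted in this result exercises Cerberus schema and rules-set registries. A minimal sketch of the same idea outside the test harness, assuming `cerberus` is installed (the registry names and fields below are illustrative, not taken from the test file):

    ```python
    # Sketch: Cerberus registries let named schema fragments be reused by reference.
    from cerberus import Validator, schema_registry, rules_set_registry

    # Register a named schema fragment and a named rules set.
    schema_registry.add('point', {'x': {'type': 'integer'}, 'y': {'type': 'integer'}})
    rules_set_registry.add('positive_int', {'type': 'integer', 'min': 1})

    v = Validator({
        'origin': {'type': 'dict', 'schema': 'point'},  # resolved via schema_registry
        'count': 'positive_int',                        # resolved via rules_set_registry
    })

    print(v.validate({'origin': {'x': 1, 'y': 2}, 'count': 3}))  # True
    print(v.validate({'origin': {'x': 'a'}, 'count': 0}))        # False; details in v.errors
    ```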
  5. model_cards/bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12/README.md

    sentence = ' '.join(tokenized)
    sentence = re.sub(r"\s's\b", "'s", sentence)
    ```
    
    ### BibTeX entry and citation info
    
    ```bibtex
    @InProceedings{peng2019transfer,
      author    = {Yifan Peng and Shankai Yan and Zhiyong Lu},
      title     = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
    Plain Text
    - Registered: 2020-11-22 10:36
    - Last Modified: 2020-11-05 08:03
    - 1.6K bytes
    - Viewed (0)
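    The two preprocessing lines excerpted in this result rejoin a tokenized sentence and remove the space that tokenization inserts before "'s". A small self-contained sketch of that step, plus a hedged example of loading the checkpoint named in this result's path with `transformers` (the sample tokens and the use of AutoTokenizer/AutoModel are assumptions, not taken from the card):

    ```python
    # Sketch: reproduce the preprocessing shown in the excerpt on a sample token list.
    import re

    tokenized = ["The", "patient", "'s", "blood", "pressure", "was", "stable", "."]
    sentence = ' '.join(tokenized)
    sentence = re.sub(r"\s's\b", "'s", sentence)
    print(sentence)  # -> "The patient's blood pressure was stable ."

    # Hedged: load the checkpoint this result points at, assuming it is published
    # on the model hub under the same id as the model_cards path.
    from transformers import AutoModel, AutoTokenizer

    name = "bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    ```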
  6. src/google/protobuf/stubs/structurally_valid_unittest.cc

    // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
    
    // Copyright 2008 Google Inc. All Rights Reserved.
    // Author: ******@****.*** (Peter Peng)
    
    #include <google/protobuf/stubs/common.h>
    #include <gtest/gtest.h>
    
    namespace google {
    namespace protobuf {
    namespace internal {
    namespace {
    
    TEST(StructurallyValidTest, ValidUTF8String) {
    C++
    - Registered: 2020-10-07 06:16
    - Last Modified: 2019-09-30 16:28
    - 2.9K bytes
    - Viewed (0)
  7. model_cards/bionlp/bluebert_pubmed_mimic_uncased_L-24_H-1024_A-16/README.md

    sentence = ' '.join(tokenized)
    sentence = re.sub(r"\s's\b", "'s", sentence)
    ```
    
    ### BibTeX entry and citation info
    
    ```bibtex
    @InProceedings{peng2019transfer,
      author    = {Yifan Peng and Shankai Yan and Zhiyong Lu},
      title     = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
    Plain Text
    - Registered: 2020-11-22 10:36
    - Last Modified: 2020-11-18 18:37
    - 2.8K bytes
    - Viewed (0)
  8. model_cards/seiya/oubiobert-base-uncased/README.md

    The details of the pre-training procedure can be found in Wada, et al. (2020).  
    
    ## Evaluation
    
    We evaluated the performance of ouBioBERT in terms of the biomedical language understanding evaluation (BLUE) benchmark (Peng, et al., 2019). The numbers are mean (standard deviation) on five different random seeds.  
    
    
    | Dataset         |  Task Type                   |  Score       |
    |:----------------|:-----------------------------|-------------:|
    Plain Text
    - Registered: 2020-11-22 10:36
    - Last Modified: 2020-10-27 17:08
    - 2.4K bytes
    - Viewed (0)
  9. model_cards/allenai/wmt16-en-de-dist-12-1/README.md

    ### BibTeX entry and citation info
    
    ```
    @misc{kasai2020deep,
        title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
        author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
        year={2020},
        eprint={2006.10369},
        archivePrefix={arXiv},
        primaryClass={cs.CL}
    }
    Plain Text
    - Registered: 2020-11-22 10:36
    - Last Modified: 2020-11-17 02:43
    - 2.9K bytes
    - Viewed (0)
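    This result is the model card of an English-to-German translation checkpoint from the cited "Deep Encoder, Shallow Decoder" work. A hedged usage sketch, assuming the checkpoint is one of the FSMT ports published on the model hub under `allenai/wmt16-en-de-dist-12-1` (the example sentence is illustrative):

    ```python
    # Sketch: translate one sentence with the checkpoint named in this result's path.
    from transformers import FSMTForConditionalGeneration, FSMTTokenizer

    name = "allenai/wmt16-en-de-dist-12-1"
    tokenizer = FSMTTokenizer.from_pretrained(name)
    model = FSMTForConditionalGeneration.from_pretrained(name)

    input_ids = tokenizer.encode("Machine learning is great, isn't it?", return_tensors="pt")
    outputs = model.generate(input_ids)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```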
  10. model_cards/bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12/README.md

    sentence = ' '.join(tokenized)
    sentence = re.sub(r"\s's\b", "'s", sentence)
    ```
    
    ### BibTeX entry and citation info
    
    ```bibtex
    @InProceedings{peng2019transfer,
      author    = {Yifan Peng and Shankai Yan and Zhiyong Lu},
      title     = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
    Plain Text
    - Registered: 2020-11-22 10:36
    - Last Modified: 2020-11-06 08:22
    - 2.8K bytes
    - Viewed (0)