Results 1 - 10 of 309 for Rebert (0.46 sec)

  1. site/data/core-team.yml

    - name: Mark Otto
      user: mdo
    
    - name: Jacob Thornton
      user: fat
    
    - name: Chris Rebert
      user: cvrebert
    
    - name: XhmikosR
      user: xhmikosr
    
    - name: Patrick H. Lauke
      user: patrickhlauke
    
    - name: Gleb Mazovetskiy
      user: glebm
    
    - name: Johann-S
      user: johann-s
    
    - name: Andres Galante
      user: andresgalante
    
    - name: Martijn Cuppens
      user: martijncuppens
    
    - name: Shohei Yoshida
    Plain Text
    - Registered: 2021-10-29 16:40
    - Last Modified: 2020-02-09 12:12
    - 394 bytes
    - Viewed (0)
  2. History.md

     * add license info to package.json (Chris Rebert)
     * xyz@~0.5.0 (David Chambers)
     * Remove unofficial signature of `children` (Mike Pennisi)
     * Fix bug in `css` method (Mike Pennisi)
     * Correct bug in implementation of `Cheerio#val` (Mike Pennisi)
    
    0.18.0 / 2014-11-06
    ==================
    
     * bump htmlparser2 dependency to ~3.8.1 (Chris Rebert)
     * Correct unit test titles (Mike Pennisi)
    Plain Text
    - Registered: 2021-10-25 10:16
    - Last Modified: 2018-10-08 19:13
    - 21.8K bytes
    - Viewed (0)
  3. .mailmap

    Brandon Aaron <******@****.***>
    Carl Danley <******@****.***>
    Carl Fürstenberg <******@****.***>
    Carl Fürstenberg <******@****.***> <******@****.***>
    Charles McNulty <******@****.***>
    Chris Rebert <******@****.***>
    Chris Rebert <******@****.***> <******@****.***>
    Christian Oliff <******@****.***>
    Christian Oliff <******@****.***> <******@****.***>
    Plain Text
    - Registered: 2021-10-25 05:12
    - Last Modified: 2020-02-24 18:10
    - 5.6K bytes
    - Viewed (0)
  4. AUTHORS.txt

    Stefan Bauckmeier  <******@****.***>
    Jon Evans <******@****.***>
    TJ Holowaychuk <******@****.***>
    Riccardo De Agostini <******@****.***>
    Michael Bensoussan <******@****.***>
    Louis-Rémi Babé <******@****.***>
    Robert Katić <robert******@****.***>
    Damian Janowski <******@****.***>
    Anton Kovalyov <******@****.***>
    Dušan B. Jovanovic <******@****.***>
    Earle Castledine <******@****.***>
    Rich Dougherty <******@****.***>
    Plain Text
    - Registered: 2021-10-25 05:12
    - Last Modified: 2020-02-24 18:10
    - 13.1K bytes
    - Viewed (0)
  5. AUTHORS

    Rick Ford <******@****.***>
    Riley Tomasek <******@****.***>
    Rob Arnold <******@****.***>
    Robert Binna <******@****.***>
    Robert Chang <******@****.***>
    Robert Haritonov <******@****.***>
    Robert Kielty <******@****.***>
    Robert Knight <robert******@****.***>
    Robert Martin <******@****.***>
    Robert Sedovsek <robert******@****.***>
    Robin Berjon <******@****.***>
    Robin Frischmann <******@****.***>
    Plain Text
    - Registered: 2021-10-31 11:45
    - Last Modified: 2021-03-31 22:13
    - 41.5K bytes
    - Viewed (0)
  6. tests/test_tokenization_deberta.py

            for tokenizer_class in tokenizer_classes:
                tokenizer = tokenizer_class.from_pretrained("microsoft/deberta-base")
    
                sequences = [
                    "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations",
                    "ALBERT incorporates two parameter reduction techniques",
    Python
    - Registered: 2021-10-31 10:36
    - Last Modified: 2021-04-30 12:08
    - 7.2K bytes
    - Viewed (0)
  7. docs/source/model_doc/albert.rst

    .. autoclass:: transformers.AlbertTokenizerFast
        :members:
    
    
    Albert specific outputs
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    .. autoclass:: transformers.models.albert.modeling_albert.AlbertForPreTrainingOutput
        :members:
    
    .. autoclass:: transformers.models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput
        :members:
    
    
    AlbertModel
    Plain Text
    - Registered: 2021-10-31 10:36
    - Last Modified: 2021-08-30 15:29
    - 8.5K bytes
    - Viewed (0)
  8. src/transformers/models/auto/modeling_auto.py

            ("tapas", "TapasForMaskedLM"),
            ("deberta", "DebertaForMaskedLM"),
            ("deberta-v2", "DebertaV2ForMaskedLM"),
            ("ibert", "IBertForMaskedLM"),
        ]
    )
    
    MODEL_FOR_CAUSAL_LM_MAPPING_NAMES = OrderedDict(
        [
            # Model for Causal LM mapping
            ("trocr", "TrOCRForCausalLM"),
            ("gptj", "GPTJForCausalLM"),
            ("rembert", "RemBertForCausalLM"),
    Python
    - Registered: 2021-10-31 10:36
    - Last Modified: 2021-10-28 12:23
    - 27.4K bytes
    - Viewed (7)
  9. docs/source/model_doc/deberta.rst

    DeBERTa
    -----------------------------------------------------------------------------------------------------------------------
    
    Overview
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    The DeBERTa model was proposed in `DeBERTa: Decoding-enhanced BERT with Disentangled Attention
    Plain Text
    - Registered: 2021-10-31 10:36
    - Last Modified: 2021-08-12 09:01
    - 6.1K bytes
    - Viewed (0)
  10. docs/source/model_doc/rembert.rst

    Tips:
    
    For fine-tuning, RemBERT can be thought of as a bigger version of mBERT with an ALBERT-like factorization of the
    embedding layer. The embeddings are not tied in pre-training, in contrast with BERT, which enables smaller input
    embeddings (preserved during fine-tuning) and bigger output embeddings (discarded at fine-tuning). The tokenizer is
    also similar to the Albert one rather than the BERT one.
    
    RemBertConfig
    Plain Text
    - Registered: 2021-10-31 10:36
    - Last Modified: 2021-07-24 15:31
    - 6.5K bytes
    - Viewed (0)
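
The tip quoted in result 10 describes RemBERT's embedding scheme: an ALBERT-like factorized input embedding plus a separate, larger output embedding that is untied and can be discarded at fine-tuning. As a rough illustration of that asymmetry, here is a minimal PyTorch sketch; the class name and all dimensions are hypothetical choices for the example, not the actual transformers implementation:

```python
import torch
import torch.nn as nn

class UntiedFactorizedEmbeddings(nn.Module):
    """Illustrative sketch of the scheme described in the RemBERT tip:
    a small, ALBERT-style factorized input embedding and a separate,
    larger output embedding that is not tied to it. All sizes here are
    assumptions chosen only to show the input/output asymmetry."""

    def __init__(self, vocab_size=250_000, hidden_size=1152,
                 input_embedding_size=256, output_embedding_size=1664):
        super().__init__()
        # Input side: vocab -> small embedding, projected up to the hidden
        # size (the factorization keeps the input embedding matrix small,
        # which is preserved during fine-tuning).
        self.word_embeddings = nn.Embedding(vocab_size, input_embedding_size)
        self.embedding_projection = nn.Linear(input_embedding_size, hidden_size)
        # Output side: hidden -> larger embedding -> vocab logits. Because
        # it is untied from the input side, this whole head can be dropped
        # when the model is fine-tuned on a downstream task.
        self.output_projection = nn.Linear(hidden_size, output_embedding_size)
        self.decoder = nn.Linear(output_embedding_size, vocab_size)

    def embed(self, input_ids: torch.Tensor) -> torch.Tensor:
        """Token ids -> hidden-size input embeddings."""
        return self.embedding_projection(self.word_embeddings(input_ids))

    def logits(self, hidden_states: torch.Tensor) -> torch.Tensor:
        """Final hidden states -> vocabulary logits via the untied head."""
        return self.decoder(self.output_projection(hidden_states))
```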