  1. RELEASE.md

        … now served with tensorflow/serving, it will accept requests using 'inputs'
        and 'outputs'. Starting at 1.2, such a model will accept the keys specified
        during export. Therefore, inference requests using 'inputs' and 'outputs'
        may start to fail. To fix this, either update any inference clients to send
        requests with the actual input and output keys used by the trainer code, or …
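
        A minimal client-side sketch of the first fix suggested above, assuming the
        grpcio and tensorflow-serving-api packages and a model server reachable on
        its default gRPC port 8500; the model name 'my_model', the signature name
        'serving_default', and the tensor keys 'dense_input' and 'scores' are
        placeholders for whatever the trainer code actually exported:

        import grpc
        import tensorflow as tf
        from tensorflow_serving.apis import predict_pb2
        from tensorflow_serving.apis import prediction_service_pb2_grpc

        # Connect to a running tensorflow_model_server (gRPC, default port 8500).
        channel = grpc.insecure_channel("localhost:8500")
        stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

        request = predict_pb2.PredictRequest()
        request.model_spec.name = "my_model"            # placeholder model name
        request.model_spec.signature_name = "serving_default"

        # Pre-1.2 exports always exposed the generic key 'inputs'; with a 1.2+
        # export, use the key the trainer code gave the input tensor instead
        # ('dense_input' is a placeholder).
        request.inputs["dense_input"].CopyFrom(
            tf.make_tensor_proto([[1.0, 2.0, 3.0]], dtype=tf.float32))

        response = stub.Predict(request, 10.0)  # 10-second RPC deadline

        # Likewise, read the result under its exported key rather than 'outputs'
        # ('scores' is a placeholder).
        scores = tf.make_ndarray(response.outputs["scores"])

        Either way, the PredictRequest/PredictResponse protos are unchanged; only
        the map keys differ between the legacy and 1.2+ signatures.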