Results 1 - 3 of 3 for activation (0.2 sec)
RELEASE.md
* Add `UnifiedGRU` as the new GRU implementation for tf2.0. Change the default recurrent activation function for GRU from `hard_sigmoid` to `sigmoid`, and `reset_after` to True in 2.0. Historically, the recurrent activation was `hard_sigmoid` since it is faster than `sigmoid`. With the new unified backend between CPU and GPU modes, since the CuDNN kernel uses `sigmoid`, we change
Plain Text - Registered: Tue May 07 12:40:20 GMT 2024 - Last Modified: Mon Apr 29 19:17:57 GMT 2024 - 727.7K bytes - Viewed (8) -
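For context, a minimal Python sketch of what these new defaults look like at the Keras layer level (assuming the TF 2.x `tf.keras.layers.GRU` API; variable names are illustrative):

```python
import tensorflow as tf

# TF 2.x defaults described in the release note: sigmoid recurrent activation
# and reset_after=True, matching the CuDNN kernel so the GPU path can be used.
gru_v2 = tf.keras.layers.GRU(32)
print(gru_v2.recurrent_activation.__name__)  # 'sigmoid'
print(gru_v2.reset_after)                    # True

# Reproducing the historical 1.x-style behaviour explicitly:
gru_v1_style = tf.keras.layers.GRU(
    32, recurrent_activation='hard_sigmoid', reset_after=False)
```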
tensorflow/c/experimental/gradients/nn_grad.cc
AbstractTensorHandle* upstream_grad = grad_outputs[0];
AbstractTensorHandle* activations = forward_outputs_[0];

// Calculate Grad
std::string name = "relu_grad";

TF_RETURN_IF_ERROR(ReluGrad(ctx, upstream_grad, activations,
                            &grad_inputs[0], name.c_str()));
return absl::OkStatus();
}
~ReluGradientFunction() override {
C++ - Registered: Tue Mar 26 12:39:09 GMT 2024 - Last Modified: Wed Feb 28 13:53:47 GMT 2024 - 5.7K bytes - Viewed (0) -
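The gradient function above saves the forward output (`activations`) and hands it to `ReluGrad` together with the incoming gradient: d/dx relu(x) is 1 where the forward activation is positive and 0 elsewhere. A short Python sketch of the same rule via `tf.GradientTape` (for illustration only, not the C++ gradient registry itself):

```python
import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.0, 1.5, 3.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.nn.relu(x)

# The gradient is the upstream gradient where the saved activation is > 0,
# and zero elsewhere.
print(tape.gradient(y, x).numpy())  # [0. 0. 0. 1. 1.]
```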
tensorflow/c/eager/tape.h
// function and deleted (as the backprop code creates lots of gradients the user
// is not interested in).
//
// BackwardFunction needs to be a closure which stores intermediate activations
// from the forward computation and calls a vector-jacobian product function
// (also known as adjoint function) to compute, given downstream gradients,
// upstream gradients.
//
C - Registered: Tue Apr 30 12:39:09 GMT 2024 - Last Modified: Tue Apr 02 12:40:29 GMT 2024 - 47.2K bytes - Viewed (1)
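A Python analogue of such a backward closure, written with `tf.custom_gradient` rather than the C++ tape (an illustrative assumption, not the tape.h API):

```python
import tensorflow as tf

@tf.custom_gradient
def my_relu(x):
    y = tf.nn.relu(x)

    # `grad` plays the role of a BackwardFunction: it captures the forward
    # activation `y` and, given the downstream gradient `dy`, returns the
    # vector-Jacobian product with respect to `x`.
    def grad(dy):
        return dy * tf.cast(y > 0, dy.dtype)

    return y, grad

x = tf.constant([-1.0, 0.5, 2.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    out = my_relu(x)
print(tape.gradient(out, x).numpy())  # [0. 1. 1.]
```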