Results 1 - 5 of 5 for GradientTape
tensorflow/c/eager/tape.h
    class GradientTape {
     public:
      // If `persistent` is true, GradientTape will not eagerly delete backward
      // functions (and hence the tensors they keep alive). Instead, everything
      // is deleted in ~GradientTape. Persistent GradientTapes are useful when
      // users want to compute multiple gradients over the same tape.
      explicit GradientTape(bool persistent) : persistent_(persistent) {}
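The `persistent` flag above is exposed in the Python API as `tf.GradientTape(persistent=True)`. As a minimal sketch of the behavior the comment describes, a persistent tape can be queried for multiple gradients before it is deleted (the values here are illustrative):

    import tensorflow as tf

    x = tf.Variable(3.0)
    with tf.GradientTape(persistent=True) as tape:
        y = x * x  # y = x^2
        z = y * y  # z = x^4

    # Because the tape is persistent, its backward functions (and the
    # tensors they keep alive) survive the first call, so both queries work.
    print(tape.gradient(y, x))  # 2*x   -> 6.0
    print(tape.gradient(z, x))  # 4*x^3 -> 108.0

    del tape  # resources are released when the tape is destroyed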
tensorflow/c/eager/gradients.h
    // gradients.
    class Tape : protected eager::GradientTape<AbstractTensorHandle,
                                               GradientFunction, TapeTensor> {
     public:
      using GradientTape<AbstractTensorHandle, GradientFunction,
                         TapeTensor>::GradientTape;
      // Returns whether the tape is persistent, i.e., whether the tape will hold
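The persistence query this header's comment describes is what makes reuse fail at the Python level: a non-persistent tape releases its backward functions after the first gradient call. A small sketch of that behavior (the exact error wording may differ by version):

    import tensorflow as tf

    x = tf.Variable(2.0)
    with tf.GradientTape() as tape:  # persistent defaults to False
        y = x * x

    print(tape.gradient(y, x))  # tf.Tensor(4.0, ...)

    try:
        tape.gradient(y, x)  # backward functions were already released
    except RuntimeError as e:
        print(e)  # a non-persistent tape computes only one set of gradients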
tensorflow/c/eager/gradients.cc
    for (int i = 0; i < tensors.size(); i++) {
        tensor_ids[i] = ToId(tensors[i]);
        tensor_dtypes[i] = tensors[i]->DataType();
      }
      return GradientTape::ShouldRecord(tensor_ids, tensor_dtypes);
    }

    void Tape::DeleteTrace(const AbstractTensorHandle* t) {
      GradientTape::DeleteTrace(ToId(t));
    }

    std::vector<int64_t> MakeTensorIDList(
        absl::Span<AbstractTensorHandle* const> tensors) {
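`ShouldRecord` decides whether an op is traced at all, based on the ids and dtypes of its input tensors. Assuming the Python tape maps onto this mechanism, the user-visible consequence is that plain tensors must be watched explicitly, while trainable variables are watched automatically. A minimal sketch:

    import tensorflow as tf

    c = tf.constant(3.0)

    with tf.GradientTape() as tape:
        y = c * c
    # c was never watched, so nothing involving it was recorded.
    print(tape.gradient(y, c))  # None

    with tf.GradientTape() as tape:
        tape.watch(c)  # adds c to the set of watched tensors
        y = c * c
    print(tape.gradient(y, c))  # tf.Tensor(6.0, shape=(), dtype=float32)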
tensorflow/c/experimental/gradients/array_grad_test.cc
    bool UseFunction() const { return std::get<2>(GetParam()); }
    };

    TEST_P(CppGradients, TestIdentityNGrad) {
      // This test is interesting because the current implementation of GradientTape
      // would return [0, 1] whereas we use build_default_zeros_grads=false here
      // so we get back [nullptr, 1].
      AbstractTensorHandlePtr x1;
      {
        AbstractTensorHandle* x1_raw = nullptr;
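The `[nullptr, 1]` versus `[0, 1]` distinction in this test comment has a Python-level counterpart: `tf.GradientTape.gradient` accepts an `unconnected_gradients` argument that chooses between returning `None` and building default zeros. A simplified sketch, using a target that simply ignores one input rather than going through `IdentityN` itself:

    import tensorflow as tf

    x1 = tf.Variable(1.0)
    x2 = tf.Variable(2.0)
    with tf.GradientTape(persistent=True) as tape:
        y = x2 * x2  # y does not depend on x1

    # Default: the unconnected input yields None (the analog of nullptr).
    print(tape.gradient(y, [x1, x2]))  # [None, 4.0]

    # Opt in to default zeros (the analog of build_default_zeros_grads=true).
    print(tape.gradient(y, [x1, x2],
                        unconnected_gradients=tf.UnconnectedGradients.ZERO))
    # [0.0, 4.0]
    del tape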
RELEASE.md
* TF Core:
    * Corrected higher-order gradients of control flow constructs (`tf.cond`, `tf.while_loop`, and compositions like `tf.foldl`) computed with `tf.GradientTape` inside a `tf.function`.
    * Changed the default step size in `gradient_checker_v2.compute_gradients` to be exactly representable as a binary floating point number. This
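The first item concerns patterns like the following: a second-order gradient of a control-flow construct, taken with nested `tf.GradientTape`s inside a `tf.function`. A minimal sketch of the construct the release note describes (the function body is illustrative):

    import tensorflow as tf

    @tf.function
    def second_order(x):
        with tf.GradientTape() as outer:
            outer.watch(x)
            with tf.GradientTape() as inner:
                inner.watch(x)
                # Control-flow construct whose higher-order gradients the fix covers.
                y = tf.cond(x > 0.0, lambda: x ** 3, lambda: -x)
            dy_dx = inner.gradient(y, x)  # 3*x^2 on the positive branch
        return outer.gradient(dy_dx, x)   # 6*x on the positive branch

    print(second_order(tf.constant(2.0)))  # expect 12.0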