Results 31 - 40 of 381 for computations (0.28 sec)
tensorflow/compiler/jit/device_compiler.h
// function/graph/cluster into an XlaCompilationResult (HLO) and
// `ExecutableType` and tries saving/persisting the compiled HLO and executable
// to disk.
//
// Since XLA computations must have static shapes, DeviceCompiler generates a
// new XLA computation for each new set of input shapes.
// TODO(b/255826209): De-templatize once we've moved to Device API completely.
template <typename ExecutableType, typename ClientType>
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Feb 22 08:47:20 UTC 2024 - 22.1K bytes - Viewed (0) -
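The strategy the DeviceCompiler comment describes — a fresh compilation for each new set of static input shapes — is essentially memoization keyed by a shape signature. A minimal Go sketch of that caching pattern, where `compile` and `Executable` are hypothetical stand-ins for the expensive compile step and its artifact:

```go
package main

import (
	"fmt"
	"strings"
)

// Executable is a hypothetical stand-in for a compiled artifact.
type Executable struct{ key string }

// compile is a hypothetical stand-in for the expensive compilation step.
func compile(key string) *Executable { return &Executable{key: key} }

// shapeKey turns a set of input dimensions into a cache key: a static-shape
// compiler must recompile whenever any input shape changes.
func shapeKey(shapes [][]int) string {
	var b strings.Builder
	for _, s := range shapes {
		fmt.Fprintf(&b, "%v;", s)
	}
	return b.String()
}

type Compiler struct{ cache map[string]*Executable }

func (c *Compiler) CompileIfNeeded(shapes [][]int) *Executable {
	k := shapeKey(shapes)
	if e, ok := c.cache[k]; ok {
		return e // same shapes: reuse the cached executable
	}
	e := compile(k)
	c.cache[k] = e
	return e
}

func main() {
	c := &Compiler{cache: map[string]*Executable{}}
	a := c.CompileIfNeeded([][]int{{2, 3}})
	b := c.CompileIfNeeded([][]int{{2, 3}})
	d := c.CompileIfNeeded([][]int{{4, 3}})
	fmt.Println(a == b, a == d) // identical shapes hit the cache; new shapes recompile
}
```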
subprojects/core-api/src/main/java/org/gradle/api/provider/Provider.java
 * </p>
 *
 * <p>
 * A typical use of a provider is to pass values from one Gradle model element to another, e.g. from a project extension
 * to a task, or between tasks. Providers also allow expensive computations to be deferred until their value is actually
 * needed, usually at task execution time.
 * </p>
 *
 * <p>
 * There are a number of ways to create a {@link Provider} instance. Some common methods:
 * </p>
 *
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Tue Apr 16 09:14:21 UTC 2024 - 10.8K bytes - Viewed (0) -
tensorflow/compiler/mlir/tf2xla/transforms/tf2xla_rewriter.cc
    XlaComputation& computation) {
  xla::DebugOptions debug_options;
  TF_ASSIGN_OR_RETURN(auto hlo_module_config,
                      xla::HloModule::CreateModuleConfigFromProto(
                          computation.proto(), debug_options));
  TF_ASSIGN_OR_RETURN(
      std::unique_ptr<xla::HloModule> hlo_module,
      xla::HloModule::CreateFromProto(computation.proto(), hlo_module_config));
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu May 02 09:16:07 UTC 2024 - 18.9K bytes - Viewed (0) -
platforms/software/dependency-management/src/main/java/org/gradle/api/internal/artifacts/ivyservice/resolveengine/excludes/factories/NormalizingExcludeFactory.java
import static java.util.stream.Collectors.toSet;

/**
 * This factory performs normalization of exclude rules. This is the smartest
 * of all factories and is responsible for doing some basic algebra computations.
 * It shouldn't be too slow, or the whole chain will pay the price.
 */
public class NormalizingExcludeFactory extends DelegatingExcludeFactory {
    private final Intersections intersections;
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Tue Oct 10 21:10:11 UTC 2023 - 17.4K bytes - Viewed (0) -
tensorflow/compiler/mlir/tf2xla/internal/passes/clustering_passes.td
replicated TPU computation. The number of times a TPU computation is replicated is defined in the `tf.TPUReplicateMetadata` op (`num_replicas` attribute) and operand and result sizes of `tf.TPUReplicatedInput` and `tf.TPUReplicatedOutput` respectively must match, excluding packed tensors. It is also assumed ops of the same TPU computation do not have ops outside
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Apr 30 02:01:13 UTC 2024 - 19.8K bytes - Viewed (0) -
src/cmd/vendor/golang.org/x/text/language/match.go
		c = High
	}
}

// We store the results of the computations of the tie-breaker rules along
// with the best match. There is no need to do the checks once we determine
// we have a winner, but we do still need to do the tie-breaker computations.
// We use "beaten" to keep track if we still need to do the checks.
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed Jan 24 13:01:26 UTC 2024 - 25.1K bytes - Viewed (0) -
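The pattern that comment describes — evaluating tie-breaker rules in priority order while a `beaten` flag short-circuits further comparisons once one rule has decided — can be sketched as follows (hypothetical score fields, not the real language matcher):

```go
package main

import "fmt"

// candidate carries hypothetical tie-breaker scores (higher is better).
type candidate struct {
	name      string
	primary   int
	secondary int
}

// better applies tie-breaker rules in priority order. Once `beaten` is set,
// one rule has decided, and lower-priority comparisons are skipped.
func better(a, b candidate) bool {
	beaten := false
	wins := false
	if a.primary != b.primary {
		beaten = true
		wins = a.primary > b.primary
	}
	if !beaten && a.secondary != b.secondary {
		beaten = true
		wins = a.secondary > b.secondary
	}
	return beaten && wins
}

func main() {
	best := candidate{name: "x", primary: 1, secondary: 5}
	challenger := candidate{name: "y", primary: 1, secondary: 7}
	if better(challenger, best) {
		best = challenger
	}
	fmt.Println(best.name) // tied on primary, so secondary decides
}
```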
tensorflow/compiler/jit/flags.h
#include "tensorflow/core/protobuf/config.pb.h"
#include "tensorflow/core/util/command_line_flags.h"

namespace tensorflow {

struct XlaAutoJitFlag {
  // Control compilation of operators into XLA computations on CPU and GPU
  // devices. 0 = use ConfigProto setting; -1 = off; 1 = on for things very
  // likely to be improved; 2 = on for everything.
  //
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Apr 17 18:52:57 UTC 2024 - 14.5K bytes - Viewed (0) -
src/crypto/ecdh/ecdh_test.go
			if err != nil {
				t.Fatal(err)
			}
			aliceSecret, err := aliceKey.ECDH(bobKey.PublicKey())
			if err != nil {
				t.Fatal(err)
			}
			if !bytes.Equal(bobSecret, aliceSecret) {
				t.Error("two ECDH computations came out different")
			}
		})
	}
}

type countingReader struct {
	r io.Reader
	n int
}

func (r *countingReader) Read(p []byte) (int, error) {
	n, err := r.r.Read(p)
	r.n += n
	return n, err
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed Mar 27 18:23:49 UTC 2024 - 18K bytes - Viewed (0) -
tensorflow/compiler/jit/flags.cc
void AppendMarkForCompilationPassFlagsInternal(std::vector<Flag>* flag_list) {
  std::vector<Flag> new_flags = {
      Flag("tf_xla_auto_jit", SetterForXlaAutoJitFlag, "0",
           "Control compilation of operators into XLA computations on CPU and "
           "GPU devices. 0 = use ConfigProto setting; -1 = off; 1 = on for "
           "things very likely to be improved; 2 = on for everything; "
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Apr 17 18:52:57 UTC 2024 - 24.5K bytes - Viewed (0) -
src/math/rand/v2/rand.go
	// x1:x0 := r.Uint64()
	// 0:hi, lo1:lo0 := bits.Mul64(x1:x0, 0:n)
	// Writing out the multiplication in terms of bits.Mul32 allows
	// using direct hardware instructions and avoiding
	// the computations involving these zeros.
	x := r.Uint64()
	lo1a, lo0 := bits.Mul32(uint32(x), n)
	hi, lo1b := bits.Mul32(uint32(x>>32), n)
	lo1, c := bits.Add32(lo1a, lo1b, 0)
	hi += c
	if lo1 == 0 && lo0 < uint32(n) {
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed May 22 02:25:49 UTC 2024 - 12.8K bytes - Viewed (0)