Results 21 - 30 of 1,130 for computation (0.3 sec)
android/guava/src/com/google/common/util/concurrent/ListenableFuture.java
 * The listener will run when the {@code Future}'s computation is {@linkplain Future#isDone()
 * complete} or, if the computation is already complete, immediately.
 *
 * <p>There is no guaranteed ordering of execution of listeners, but any listener added through
 * this method is guaranteed to be called once the computation is complete.
 *
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Mon Jun 26 21:13:41 UTC 2023 - 8K bytes
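The Javadoc above specifies two halves of the listener contract: a listener registered before completion runs when the computation finishes, and one registered after completion runs immediately. A minimal sketch of that contract using only java.util.concurrent — the class MiniListenableFuture and its fields are illustrative, not Guava's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executor;

// Toy future mirroring the documented addListener contract: listeners
// registered before completion run when set() is called; listeners
// registered after completion are executed immediately.
class MiniListenableFuture<V> {
    private final List<Runnable> listeners = new ArrayList<>();
    private boolean done = false;
    private V value;

    public synchronized void addListener(Runnable listener, Executor executor) {
        if (done) {
            executor.execute(listener); // already complete: run immediately
        } else {
            listeners.add(listener);    // deferred until set() completes the future
        }
    }

    public synchronized void set(V v) {
        value = v;
        done = true;
        // Per the Javadoc, there is no guaranteed ordering among listeners;
        // this sketch just happens to run them in registration order.
        for (Runnable r : listeners) {
            r.run();
        }
        listeners.clear();
    }

    public synchronized V getValue() {
        return value;
    }
}
```

Passing a direct executor (e.g. `Runnable::run`) makes the "immediately" case run the listener on the caller's thread, which is also how Guava's `MoreExecutors.directExecutor()` behaves.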
platforms/ide/ide-plugins/src/main/java/org/gradle/plugins/ide/idea/internal/IdeaProjectInternal.java
 * the convention computation that is not compatible with Isolated Projects.
 */
@Nullable
public IdeaLanguageLevel getRawLanguageLevel() {
    return languageLevel;
}

/**
 * Returns the user-defined value for the {@link #getTargetBytecodeVersion()} without triggering
 * the convention computation that is not compatible with Isolated Projects.
 */
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Tue Dec 12 13:32:59 UTC 2023 - 1.8K bytes
tensorflow/compiler/mlir/tf2xla/internal/legalize_tf_to_hlo.cc
LOG_FIRST_N(INFO, 1) << "Compiling MLIR computation to XLA HLO using the "
                        "Combined MLIR Tf2Xla Bridge.";
absl::StatusOr<std::string> mlir_compilation =
    internal::CompileFromMlirToXlaHlo(
        /*lower_to_xla_hlo=*/false, computation, metadata, device_type,
        shape_determination_fns, use_tuple_args, compilation_result,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sun Apr 14 20:29:34 UTC 2024 - 3.7K bytes
platforms/ide/ide-plugins/src/main/java/org/gradle/plugins/ide/idea/internal/IdeaModuleInternal.java
 * the convention computation that is not compatible with Isolated Projects.
 */
public @Nullable IdeaLanguageLevel getRawLanguageLevel() {
    return languageLevel;
}

/**
 * Returns the user-defined value for the {@link #getTargetBytecodeVersion()} without triggering
 * the convention computation that is not compatible with Isolated Projects.
 */
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Mon Dec 11 12:33:41 UTC 2023 - 1.7K bytes
tensorflow/compiler/mlir/tensorflow/utils/tf_xla_mlir_translate.cc
xla::XlaComputation computation,
    return_value.valid() ? builder.Build(return_value) : builder.Build());
auto hlo_module = computation.proto();
xla::HloProto hlo_proto;
hlo_proto.mutable_hlo_module()->Swap(&hlo_module);
compilation_result->computation = std::make_shared<xla::XlaComputation>();
xla::XlaComputation* xla_computation = compilation_result->computation.get();
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 18.8K bytes
tensorflow/compiler/mlir/tf2xla/api/v1/compile_tf_graph.h
namespace v1 {

// Compiles the given Tensorflow graph into xla::HLO. The result is in
// compilation_result. If the input computation is in MLIR, it will be
// converted to a Tensorflow graph. Otherwise, the graph compiler will be run.
absl::Status CompileTensorflowGraphToHlo(
    const std::variant<tpu::MlirToHloArgs, tpu::FunctionToHloArgs>& computation,
    const tpu::TPUCompileMetadataProto& metadata, bool use_tuple_args,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Apr 13 08:08:57 UTC 2024 - 2.1K bytes
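The header comment above describes a two-branch dispatch: the compiler accepts either MLIR or a function-based graph (the `std::variant`), and MLIR input is first converted to a TensorFlow graph before the graph compiler runs. The shape of that dispatch can be sketched as follows — all names here (ComputationInput, MlirInput, GraphInput, compileToHlo) are illustrative stand-ins, not TensorFlow's actual API:

```java
// Two input kinds, mirroring the std::variant<MlirToHloArgs, FunctionToHloArgs>
// in the real signature.
interface ComputationInput {}

final class MlirInput implements ComputationInput {
    final String mlirModule;
    MlirInput(String mlirModule) { this.mlirModule = mlirModule; }
}

final class GraphInput implements ComputationInput {
    final String graphDef;
    GraphInput(String graphDef) { this.graphDef = graphDef; }
}

class HloCompilerSketch {
    static String compileToHlo(ComputationInput input) {
        if (input instanceof MlirInput) {
            // MLIR path: convert to a TensorFlow graph first, then reuse
            // the graph-compiler path below, as the header comment states.
            MlirInput mlir = (MlirInput) input;
            return compileToHlo(new GraphInput("graph-from:" + mlir.mlirModule));
        }
        // Graph path: run the graph compiler directly.
        GraphInput graph = (GraphInput) input;
        return "hlo(" + graph.graphDef + ")";
    }
}
```

The point of the sketch is that both branches converge on a single graph-to-HLO path, which is why the MLIR branch converts rather than compiling separately.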
tensorflow/compiler/mlir/tensorflow/tests/functionalize-if.mlir
// RUN: tf-opt %s --run-tf-graph-optimization --graph-passes=FunctionalizeControlFlowForXlaPass | FileCheck %s
func.func @main() {
  tf_executor.graph {
    %0 = tf_executor.island wraps "tf._TPUReplicate"() {computation = @foo, Tinputs = [], Tbroadcast_inputs = [], NumVariables = 0, Tguaranteed_constants = [], output_types = []} : () -> () loc("_TPUReplicate")
    tf_executor.fetch
  }
  func.return
}
func.func @foo() {
  tf_executor.graph {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Mar 28 12:06:33 UTC 2022 - 2K bytes
futures/listenablefuture1/src/com/google/common/util/concurrent/ListenableFuture.java
 * The listener will run when the {@code Future}'s computation is {@linkplain Future#isDone()
 * complete} or, if the computation is already complete, immediately.
 *
 * <p>There is no guaranteed ordering of execution of listeners, but any listener added through
 * this method is guaranteed to be called once the computation is complete.
 *
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Mon Jun 26 21:13:41 UTC 2023 - 8K bytes
guava/src/com/google/common/util/concurrent/ListenableFuture.java
 * The listener will run when the {@code Future}'s computation is {@linkplain Future#isDone()
 * complete} or, if the computation is already complete, immediately.
 *
 * <p>There is no guaranteed ordering of execution of listeners, but any listener added through
 * this method is guaranteed to be called once the computation is complete.
 *
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Mon Jun 26 21:13:41 UTC 2023 - 8K bytes
tensorflow/compiler/mlir/tf2xla/transforms/tf2xla_rewriter.cc
XlaComputation& computation) {
  xla::DebugOptions debug_options;
  TF_ASSIGN_OR_RETURN(auto hlo_module_config,
                      xla::HloModule::CreateModuleConfigFromProto(
                          computation.proto(), debug_options));
  TF_ASSIGN_OR_RETURN(
      std::unique_ptr<xla::HloModule> hlo_module,
      xla::HloModule::CreateFromProto(computation.proto(), hlo_module_config));
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu May 02 09:16:07 UTC 2024 - 18.9K bytes