Results 101 - 110 of 333 for computations (0.25 sec)
tensorflow/compiler/jit/extract_outside_compilation_pass_test.cc
  }

 private:
  std::unique_ptr<DeviceMgr> device_mgr_;
  std::unique_ptr<ProcessFunctionLibraryRuntime> pflr_;
};

TEST_F(ExtractOutsideCompilationForFunctionTest, Basic) {
  // Build the XLA computation func.
  // "const0"
  // "identity0" = "const0" (outside compilation cluster "0")
  // "identity1" = "identity0" (outside compilation cluster "1")
  // "identity2" = "identity1"
  FunctionDefLibrary fdl;
  {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Sep 06 19:12:29 UTC 2023 - 41K bytes - Viewed (0) -
SECURITY.md
should be used with caution when working with untrusted models. ### Saved graphs and checkpoints When loading untrusted serialized computation graphs (in the form of a `GraphDef`, `SavedModel`, or equivalent on-disk format), the set of computation primitives available to TensorFlow is powerful enough that you should assume that the TensorFlow process effectively executes arbitrary code.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sun Oct 01 06:06:35 UTC 2023 - 9.6K bytes - Viewed (0) -
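The SECURITY.md guidance above boils down to: loading a serialized computation graph is equivalent to running arbitrary code, so provenance must be checked before the load. A minimal sketch of one such check, gating a load on a known SHA-256 digest (this is not TensorFlow's API; the helper names and the allow-list are assumptions for illustration):

```python
import hashlib

# Hypothetical allow-list of digests for models that have been vetted.
TRUSTED_DIGESTS = {
    # "d2d2...": "resnet50_savedmodel",  # placeholder entry
}

def sha256_of(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def safe_to_load(path):
    """Only approve models whose digest is on the vetted allow-list."""
    return sha256_of(path) in TRUSTED_DIGESTS
```

Only after `safe_to_load` returns True would the untrusted `GraphDef`/`SavedModel` be handed to the loader; an unknown digest means the process should refuse to deserialize it at all.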
tensorflow/compiler/mlir/tfrt/tests/saved_model/testdata/xla_launch_xla_reduce_window.mlir
%cst_3 = "tf.Const"() {value = dense<4> : tensor<1xi32>} : () -> tensor<1xi32>
%0 = "tf.XlaReduceWindow"(%arg0, %arg1, %cst_0, %cst_1, %cst_2, %cst_3, %cst) {computation = @sum_reducer}
    : (tensor<7xf32>, tensor<f32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1x2xi32>) -> tensor<10xf32>
func.return %0 : tensor<10xf32>
}
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Aug 14 15:35:49 UTC 2023 - 1.6K bytes - Viewed (0) -
RELEASE.md
* Introducing `tf.types.experimental.AtomicFunction` as the fastest way to perform TF computations in Python.
  * Can be accessed through the `inference_fn` property of `ConcreteFunction`s.
  * Does not support gradients.
  * See `tf.types.experimental.AtomicFunction` documentation for how to call and use it.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes - Viewed (0) -
tensorflow/compiler/jit/xla_compile_on_demand_op.cc
XlaCompiler::CompileOptions GetCompileOptions(bool for_pjrt = false) {
  XlaCompiler::CompileOptions compile_options;
  compile_options.is_entry_computation = true;
  // Optimization: where possible, have the computation return a naked array
  // rather than a one-element tuple.
  compile_options.always_return_tuple = false;
  if (for_pjrt) {
    compile_options.use_tuple_arg = false;
    compile_options.always_return_tuple = true;
  }
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Feb 29 08:39:39 UTC 2024 - 13.4K bytes - Viewed (0) -
tensorflow/compiler/aot/quantize.h
namespace tensorflow {
namespace tfcompile {

using QuantizeXlaFn = std::function<Status(const tf2xla::Config& config,
                                           xla::XlaComputation* computation)>;

// Set the static quantization function to `fn` if it hasn't been set.
// Return false if the static function has already been set.
bool RegisterQuantizeFn(const QuantizeXlaFn& fn);

}  // namespace tfcompile
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Sep 06 19:12:29 UTC 2023 - 1.4K bytes - Viewed (0) -
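The `RegisterQuantizeFn` declaration in quantize.h above follows a register-once pattern: the first registration wins, and later attempts report failure instead of overwriting. A language-neutral sketch of that contract in Python (the names mirror the header, but this is not TensorFlow code):

```python
# Module-level slot holding the single registered quantize function.
_quantize_fn = None

def register_quantize_fn(fn):
    """Set the static quantization function to `fn` if it hasn't been set.

    Returns False (leaving the original in place) if a function has
    already been registered, mirroring RegisterQuantizeFn's contract.
    """
    global _quantize_fn
    if _quantize_fn is not None:
        return False
    _quantize_fn = fn
    return True
```

The return value lets the caller detect a double-registration bug at the call site rather than silently replacing the earlier hook.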
tensorflow/compiler/jit/xla_compilation_cache.proto
}

// Represents an entry in the XLA compile cache.
message XlaSerializedCacheEntry {
  // Used to uniquely identify this entry in its persisted representation.
  XlaSerializedCacheKey key = 1;

  // The computation (HLO) that compilation was done for. It is correlated to
  // the input TF graph so we can use it to fingerprint the compiled binary. We
  // serialize this rather than the input graphdef because it provides a
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Sep 06 19:12:29 UTC 2023 - 1.6K bytes - Viewed (0) -
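The proto comment above explains the design: the cache serializes the HLO rather than the input graphdef because the HLO serves as a stable fingerprint for the compiled binary. A toy sketch of a fingerprint-keyed compile cache (the hashing choice and the `compile_fn` callable are assumptions for illustration, not the XLA implementation):

```python
import hashlib

class CompileCache:
    """Caches compiled artifacts keyed by a fingerprint of the computation."""

    def __init__(self, compile_fn):
        self._compile = compile_fn  # invoked only on a cache miss
        self._entries = {}

    def _fingerprint(self, computation_bytes):
        # Stand-in for hashing the serialized HLO.
        return hashlib.sha256(computation_bytes).hexdigest()

    def get_or_compile(self, computation_bytes):
        key = self._fingerprint(computation_bytes)
        if key not in self._entries:
            self._entries[key] = self._compile(computation_bytes)
        return self._entries[key]
```

Because the key is derived from the computation itself, two identical computations hit the same entry and compilation runs once, which is the property the persisted `XlaSerializedCacheKey` is after.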
tensorflow/compiler/mlir/tf2xla/internal/legalize_tf_to_hlo.h
namespace tf2xla {
namespace internal {

// Legalize the given MLIR module to XLA HLO using a combination of the MLIR
// Bridge and XlaBuilder.
absl::StatusOr<XlaCompilationResult> LegalizeTfToHlo(
    const tpu::MlirToHloArgs& computation,
    const tpu::TPUCompileMetadataProto& metadata, bool use_tuple_args,
    llvm::StringRef device_type,
    XlaShapeLayoutHelpers::ShapeDeterminationFns shape_determination_fns,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sun Apr 14 20:29:34 UTC 2024 - 2K bytes - Viewed (0) -
tensorflow/compiler/mlir/tfrt/ir/mlrt/tf_ops.td
$mlir_module is a serialized MLIR module with a `main` function that contains the target computation. $metadata is a serialized TPUCompileMetadataProto describing the shapes and types of the inputs to the computation, as well as a mapping onto the TPU pod topology.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed May 22 21:35:32 UTC 2024 - 6.7K bytes - Viewed (0) -
src/cmd/internal/obj/riscv/cpu.go
//
// If you modify this table, you MUST run 'go generate' to regenerate anames.go!
const (
	// Unprivileged ISA (Document Version 20190608-Base-Ratified)

	// 2.4: Integer Computational Instructions
	AADDI = obj.ABaseRISCV + obj.A_ARCHSPECIFIC + iota
	ASLTI
	ASLTIU
	AANDI
	AORI
	AXORI
	ASLLI
	ASRLI
	ASRAI
	ALUI
	AAUIPC
	AADD
	ASLT
	ASLTU
	AAND
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed Mar 20 14:19:33 UTC 2024 - 13.1K bytes - Viewed (0)