Results 61 - 70 of 412 for Computation (0.15 sec)
tensorflow/compiler/mlir/lite/stablehlo/transforms/legalize_hlo_patterns.td
// supports float types. tf.round with integer input type will become an
// identity op, so we will never face an mhlo.floor with an integer input type.
// The pattern matched executes the following computation:
// frac = x - floor(x)
// to_even = (floor(x) - 2 * floor(0.5 * x)) == 1
// if frac > 0.5 || (frac == 0.5 && to_even)
//   return floor(x) + 1
// else
//   return floor(x)
def : Pat<(MHLO_SelectOp
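The round-half-to-even computation described in the comment above can be sketched in plain Python. The function name `round_half_to_even` is mine, not part of the pattern; the body follows the commented formula term by term (`floor(x) - 2 * floor(0.5 * x)` equals 1 exactly when `floor(x)` is odd).

```python
import math

def round_half_to_even(x: float) -> float:
    # frac = x - floor(x)
    fx = math.floor(x)
    frac = x - fx
    # to_even is True when floor(x) is odd, i.e. rounding up yields an even value
    to_even = (fx - 2 * math.floor(0.5 * x)) == 1
    # Round up on frac > 0.5, and on exact ties only when that lands on an even value
    if frac > 0.5 or (frac == 0.5 and to_even):
        return fx + 1
    return fx
```

For example, both 1.5 and 2.5 round to 2, matching IEEE 754 round-ties-to-even.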
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Feb 03 08:58:22 UTC 2024 - 34K bytes - Viewed (0)
samples/helloworld/src/app.py
import os
import math
from flask import Flask, request

app = Flask(__name__)

@app.route('/hello')
def hello():
    version = os.environ.get('SERVICE_VERSION')
    # do some cpu intensive computation
    x = 0.0001
    for i in range(0, 1000000):
        x = x + math.sqrt(x)
    return 'Hello version: %s, instance: %s\n' % (version, os.environ.get('HOSTNAME'))

@app.route('/health')
def health():
Registered: Fri Jun 14 15:00:06 UTC 2024 - Last Modified: Tue Jun 20 13:44:21 UTC 2023 - 1.1K bytes - Viewed (0)
SECURITY.md
should be used with caution when working with untrusted models.

### Saved graphs and checkpoints

When loading untrusted serialized computation graphs (in the form of a `GraphDef`, `SavedModel`, or equivalent on-disk format), the set of computation primitives available to TensorFlow is powerful enough that you should assume that the TensorFlow process effectively executes arbitrary code.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sun Oct 01 06:06:35 UTC 2023 - 9.6K bytes - Viewed (0)
tensorflow/compiler/jit/device_compiler.h
};

// Compiles a function into a XlaCompiler::CompilationResult that can be used
// to execute an XLA Computation. Compilation results are cached. Compilation
// is skipped if there is a cache hit. `function` is the name of a Tensorflow
// function to compile. `args` is a description of the arguments to the
// computation.
//
// `compile_mode` controls the behavior of the compilation cache on a cache
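The caching behavior the comment describes (compile once per function-plus-arguments key, skip compilation on a cache hit) can be illustrated with a minimal memoization sketch. The class and parameter names here are hypothetical stand-ins, not the actual DeviceCompiler API.

```python
class CompileCacheSketch:
    """Toy model of a compilation cache keyed on (function, args signature)."""

    def __init__(self, compile_fn):
        self._compile_fn = compile_fn  # the expensive compilation step
        self._cache = {}

    def compile(self, function_name, args_signature):
        key = (function_name, tuple(args_signature))
        if key not in self._cache:
            # Cache miss: run the real compilation and remember the result.
            self._cache[key] = self._compile_fn(function_name, args_signature)
        # Cache hit (or freshly stored result): compilation is skipped.
        return self._cache[key]
```

Repeated calls with the same function name and argument description return the cached result without recompiling.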
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Feb 22 08:47:20 UTC 2024 - 22.1K bytes - Viewed (0)
tensorflow/compiler/mlir/tensorflow/transforms/passes.h
// Creates a pass that lifts operations on external resource variables from
// device computation nested in `tf_device::LaunchOp` out so that resource
// variable load operations are all before device computation while resource
// variable store operations are all after device computation. After this pass,
// device computation no longer interacts with external resource variables.
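The reordering this pass performs can be sketched on a toy op list: loads are hoisted before the device computation and stores are sunk after it. The string-based op representation below is purely illustrative, not the MLIR API.

```python
def hoist_resource_ops(ops):
    # Partition a flat op list into resource loads, resource stores, and the rest,
    # then emit: all loads, then the device computation, then all stores.
    loads = [op for op in ops if op.startswith("load ")]
    stores = [op for op in ops if op.startswith("store ")]
    device = [op for op in ops if op not in loads and op not in stores]
    return loads + device + stores
```

After the rewrite, the middle segment (the device computation) no longer touches the resource variables directly, mirroring the pass's stated postcondition.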
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 21:18:05 UTC 2024 - 31.8K bytes - Viewed (0)
android/guava/src/com/google/common/util/concurrent/Uninterruptibles.java
 * <li>To get uninterruptibility and remove checked exceptions, use {@link
 *     Futures#getUnchecked}.
 * </ul>
 *
 * @throws ExecutionException if the computation threw an exception
 * @throws CancellationException if the computation was cancelled
 */
@CanIgnoreReturnValue
@ParametricNullness
public static <V extends @Nullable Object> V getUninterruptibly(Future<V> future)
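The retry loop behind `getUninterruptibly` (keep waiting, remember that an interrupt happened, restore it once the result is in) can be sketched in Python. `InterruptedWait` is a stand-in exception, since Python has no checked `InterruptedException`, and `wait_once` stands in for `future.get()`.

```python
class InterruptedWait(Exception):
    """Stand-in for Java's InterruptedException."""

def get_uninterruptibly(wait_once):
    # wait_once() returns the result or raises InterruptedWait.
    interrupted = False
    try:
        while True:
            try:
                return wait_once()
            except InterruptedWait:
                interrupted = True  # swallow the interrupt and keep waiting
    finally:
        if interrupted:
            # In Java this is where Thread.currentThread().interrupt() restores
            # the thread's interrupt status before the result is returned.
            pass
```

The key property is that the caller always gets the result (or the computation's own failure), never a spurious interruption, while the interrupt is not silently lost.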
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Tue Apr 04 09:45:04 UTC 2023 - 14.4K bytes - Viewed (0)
tensorflow/compiler/jit/xla_launch_util.h
// For case 3, we need to create a PjRtBuffer from the raw device mem pointer,
// and we need to ensure the PjRtBuffer persists till XLA computation is
// complete. Therefore we put the newly created PjRtBuffer into `owned_args`.
// Caller is responsible to ensure `owned_args` lives till the end of XLA
// computation.
Status PreparePjRtExecutableArguments(
    int num_missing_prefix_ctx_inputs, const std::vector<int>& input_mapping,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Feb 21 09:53:30 UTC 2024 - 11.8K bytes - Viewed (0)
tensorflow/compiler/mlir/tf2xla/internal/passes/extract_outside_compilation.cc
      }
    }
  }
  // Since we have the outputs from host and device computation after moving
  // outside compiled ops, we can create the actual parallel_execute regions.
  // Still, one region is for the host computation for outside compilation and
  // the other one is for the original Device cluster computation.
  mlir::tf_device::ParallelExecuteOp CreateFinalParallelExecuteOp(
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Apr 30 21:25:12 UTC 2024 - 68.3K bytes - Viewed (0)
tensorflow/compiler/jit/extract_outside_compilation_pass_test.cc
  }

 private:
  std::unique_ptr<DeviceMgr> device_mgr_;
  std::unique_ptr<ProcessFunctionLibraryRuntime> pflr_;
};

TEST_F(ExtractOutsideCompilationForFunctionTest, Basic) {
  // Build the XLA computation func.
  // "const0"
  // "identity0" = "const0" (outside compilation cluster "0")
  // "identity1" = "identity0" (outside compilation cluster "1")
  // "identity2" = "identity1"
  FunctionDefLibrary fdl;
  {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Sep 06 19:12:29 UTC 2023 - 41K bytes - Viewed (0)
tensorflow/compiler/mlir/tensorflow/g3doc/enable_mlir_bridge.md
is a global **Context** that holds all the equivalences. You can manipulate the **Context** with the following code. Note that it must be added early in your program (at least before any of your model computation).

```
tf.config.experimental.enable_mlir_bridge()
```

## How to disable the old TPU bridge?

Due to how TPU bridges are designed to work, you don't actually need to disable
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Jan 13 23:12:13 UTC 2020 - 989 bytes - Viewed (0)