Results 111 - 120 of 381 for computations (0.42 sec)
samples/helloworld/src/app.py
import os
import math
from flask import Flask, request

app = Flask(__name__)

@app.route('/hello')
def hello():
    version = os.environ.get('SERVICE_VERSION')
    # do some cpu intensive computation
    x = 0.0001
    for i in range(0, 1000000):
        x = x + math.sqrt(x)
    return 'Hello version: %s, instance: %s\n' % (version, os.environ.get('HOSTNAME'))

@app.route('/health')
def health():
Registered: Fri Jun 14 15:00:06 UTC 2024 - Last Modified: Tue Jun 20 13:44:21 UTC 2023 - 1.1K bytes - Viewed (0)
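The handler's busy loop can be exercised on its own. A minimal sketch (the `busy_work` name and standalone shape are ours; the excerpt inlines this directly in the `/hello` handler):

```python
import math

def busy_work(iterations=1_000_000):
    # Same CPU-bound loop as the /hello handler: repeatedly add the
    # square root of the running value to itself.
    x = 0.0001
    for _ in range(iterations):
        x = x + math.sqrt(x)
    return x

print(busy_work(10))
```

Because each iteration adds a positive square root, the result grows monotonically with the iteration count, which is what makes it a convenient artificial load.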
android/guava/src/com/google/common/util/concurrent/Uninterruptibles.java
 *   <li>To get uninterruptibility and remove checked exceptions, use {@link
 *       Futures#getUnchecked}.
 * </ul>
 *
 * @throws ExecutionException if the computation threw an exception
 * @throws CancellationException if the computation was cancelled
 */
@CanIgnoreReturnValue
@ParametricNullness
public static <V extends @Nullable Object> V getUninterruptibly(Future<V> future)
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Tue Apr 04 09:45:04 UTC 2023 - 14.4K bytes - Viewed (0)
tensorflow/compiler/jit/xla_launch_util.h
// For case 3, we need to create a PjRtBuffer from the raw device mem pointer,
// and we need to ensure the PjRtBuffer persists till XLA computation is
// complete. Therefore we put the newly created PjRtBuffer into `owned_args`.
// Caller is responsible to ensure `owned_args` lives till the end of XLA
// computation.
Status PreparePjRtExecutableArguments(
    int num_missing_prefix_ctx_inputs, const std::vector<int>& input_mapping,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Feb 21 09:53:30 UTC 2024 - 11.8K bytes - Viewed (0)
tensorflow/compiler/mlir/tf2xla/internal/passes/extract_outside_compilation.cc
    }
  }
}

// Since we have the outputs from host and device computation after moving
// outside compiled ops, we can create the actual parallel_execute regions.
// Still, one region is for the host computation for outside compilation and
// the other one is for the original Device cluster computation.
mlir::tf_device::ParallelExecuteOp CreateFinalParallelExecuteOp(
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Apr 30 21:25:12 UTC 2024 - 68.3K bytes - Viewed (0)
tensorflow/compiler/jit/extract_outside_compilation_pass_test.cc
  }

 private:
  std::unique_ptr<DeviceMgr> device_mgr_;
  std::unique_ptr<ProcessFunctionLibraryRuntime> pflr_;
};

TEST_F(ExtractOutsideCompilationForFunctionTest, Basic) {
  // Build the XLA computation func.
  // "const0"
  // "identity0" = "const0" (outside compilation cluster "0")
  // "identity1" = "identity0" (outside compilation cluster "1")
  // "identity2" = "identity1"
  FunctionDefLibrary fdl;
  {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Sep 06 19:12:29 UTC 2023 - 41K bytes - Viewed (0)
SECURITY.md
should be used with caution when working with untrusted models.

### Saved graphs and checkpoints

When loading untrusted serialized computation graphs (in the form of a `GraphDef`, `SavedModel`, or equivalent on-disk format), the set of computation primitives available to TensorFlow is powerful enough that you should assume that the TensorFlow process effectively executes arbitrary code.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sun Oct 01 06:06:35 UTC 2023 - 9.6K bytes - Viewed (0)
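The advice above treats serialized graphs like executables. One generic mitigation (not TensorFlow-specific; the helper names here are ours) is to pin a known-good checksum for a model artifact and verify it before handing the file to any loader:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in chunks so large model artifacts need not fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_hex):
    # Refuse to proceed to the actual loading step unless the digest matches
    # the pinned value; a mismatch means the file is not the one you vetted.
    actual = sha256_of(path)
    if actual != expected_hex:
        raise ValueError("checksum mismatch for %s: %s" % (path, actual))
    return path
```

A checksum only proves the file is the one you previously vetted; it does not make an untrusted graph safe, which is why the document recommends sandboxing the loading process itself.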
tensorflow/compiler/mlir/tfrt/tests/saved_model/testdata/xla_launch_xla_reduce_window.mlir
%cst_3 = "tf.Const"() {value = dense<4> : tensor<1xi32>} : () -> tensor<1xi32>
%0 = "tf.XlaReduceWindow"(%arg0, %arg1, %cst_0, %cst_1, %cst_2, %cst_3, %cst) {computation = @sum_reducer} : (tensor<7xf32>, tensor<f32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1x2xi32>) -> tensor<10xf32>
func.return %0 : tensor<10xf32>
}
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Aug 14 15:35:49 UTC 2023 - 1.6K bytes - Viewed (0)
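`tf.XlaReduceWindow` slides a window over the input and folds each window with the attached `computation` (here `@sum_reducer`). A pure-Python sketch of the 1-D windowed-sum semantics (the name and simplified signature are ours; real XLA reduce-window also supports dilations and per-dimension padding):

```python
def reduce_window_sum(xs, window, stride, pad_lo=0, pad_hi=0, init=0.0):
    # Pad with the init value, then fold each window with addition,
    # mirroring what a sum_reducer computation would do per window.
    padded = [init] * pad_lo + list(xs) + [init] * pad_hi
    out = []
    start = 0
    while start + window <= len(padded):
        acc = init
        for v in padded[start:start + window]:
            acc += v
        out.append(acc)
        start += stride
    return out
```

For example, `reduce_window_sum([1, 2, 3, 4], window=2, stride=1)` yields `[3, 5, 7]`: each output is the sum of one window position, and the 7-element input producing a 10-element output in the MLIR above is explained by the padding operand.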
RELEASE.md
* Introducing `tf.types.experimental.AtomicFunction` as the fastest way to perform TF computations in Python.
  * Can be accessed through the `inference_fn` property of `ConcreteFunction`s.
  * Does not support gradients.
  * See the `tf.types.experimental.AtomicFunction` documentation for how to call and use it.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes - Viewed (0)
tensorflow/compiler/jit/xla_compile_on_demand_op.cc
XlaCompiler::CompileOptions GetCompileOptions(bool for_pjrt = false) {
  XlaCompiler::CompileOptions compile_options;
  compile_options.is_entry_computation = true;
  // Optimization: where possible, have the computation return a naked array
  // rather than a one-element tuple.
  compile_options.always_return_tuple = false;
  if (for_pjrt) {
    compile_options.use_tuple_arg = false;
    compile_options.always_return_tuple = true;
  }
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Feb 29 08:39:39 UTC 2024 - 13.4K bytes - Viewed (0)
tensorflow/cc/framework/while_gradients.h
#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/core/graph/while_context.h"

// Utility functions for constructing while loop gradients.

namespace tensorflow {

// Adds the gradient computation for the while loop associated with
// `while_ctx`. `grad_inputs` are the partial derivatives w.r.t. the loop
// outputs, i.e. the exit nodes. The partial derivatives w.r.t. the loop
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Oct 05 15:48:53 UTC 2022 - 1.7K bytes - Viewed (0)
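The comment above describes backpropagation through a loop: the partial derivatives at the exit nodes are chained backwards through each iteration of the body. A toy pure-Python sketch of that idea for a scalar loop (all names are ours; TensorFlow constructs this as graph nodes rather than running Python loops):

```python
def while_loop_with_grad(body, body_grad, x0, steps, grad_output=1.0):
    # Forward pass: record every iterate so the backward pass can evaluate
    # the body's local derivative at the right point.
    xs = [x0]
    for _ in range(steps):
        xs.append(body(xs[-1]))
    # Backward pass: starting from the partial derivative w.r.t. the loop
    # output (the "exit node"), chain through each iteration in reverse.
    g = grad_output
    for x in reversed(xs[:-1]):
        g *= body_grad(x)
    return xs[-1], g
```

With `body = lambda x: x * x` and two steps, the loop computes x^4, and the returned gradient at x0 = 2.0 is 32.0, matching d(x^4)/dx = 4x^3.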