Results 101 - 110 of 145 for tpu0 (0.06 sec)
tensorflow/compiler/jit/xla_platform_info.h
// type.
absl::StatusOr<DeviceType> GetCompilationDeviceType(
    const DeviceType& platform_device_type);

// Builds a DeviceCompiler that uses xla::LocalClient using `platform_info` and
// `compilation_device_type` (in non-TPU case) and sets *xla_device_compiler to
// point to it. Uses flags from `MarkForCompilationPassFlags` for configuring
// the persistor used in the DeviceCompiler. The platform ID from
// `platform_info` must not be null in CPU case.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Feb 21 09:53:30 UTC 2024 - 7.2K bytes - Viewed (0)
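The header above declares a helper that maps a platform device type to the device type used for compilation, returning an error status when no mapping exists. A rough Python analogue of this StatusOr-style contract (the function name, the mapping table, and the device-type strings are illustrative stand-ins, not the real XLA logic):

```python
def get_compilation_device_type(platform_device_type):
    """Map a platform device to a compilation device, or raise ValueError.

    Raising here plays the role of the error arm of absl::StatusOr.
    """
    # Hypothetical mapping; the real lookup lives in xla_platform_info.cc.
    mapping = {"CPU": "XLA_CPU_JIT", "GPU": "XLA_GPU_JIT", "TPU": "XLA_TPU_JIT"}
    if platform_device_type not in mapping:
        raise ValueError(f"no compilation device for {platform_device_type!r}")
    return mapping[platform_device_type]
```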
tensorflow/compiler/mlir/quantization/stablehlo/cc/calibration/component.cc
if (!is_calibration_required) return absl::OkStatus();

// `duplicate_shape_determining_constants = false` because the
// resulting graph of this step is not expected to be loaded on TPU.
const ExportOptions export_opts = {
    /*duplicate_shape_determining_constants=*/false,
    /*unfreeze_constants=*/false, checkpoint_dir,
    /*debug_name=*/absl::StrCat(kName, kExportStepSuffix)};
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue May 14 06:31:57 UTC 2024 - 9.2K bytes - Viewed (0)
RELEASE.md
`.predict` is available for Cloud TPUs, Cloud TPU, for all types of Keras models (sequential, functional and subclassing models).
* Automatic outside compilation is now enabled for Cloud TPUs. This allows `tf.summary` to be used more conveniently with Cloud TPUs.
* Dynamic batch sizes with DistributionStrategy and Keras are supported on Cloud TPUs.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes - Viewed (0)
tensorflow/compiler/mlir/tf2xla/internal/mlir_bridge_pass_util.cc
const FunctionLibraryDefinition* function_library) {
  auto predicate = [](const Graph& graph) {
    for (const Node* node : graph.nodes()) {
      // _tpu_replicate is used in replicated TPU graphs. It will be converted
      // to _replication_info and _xla_compile_device_type in phase 1 pipelines.
      if (node->attrs().FindByString(std::string(kTpuReplicateAttr))) {
        return true;
      }
    }
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue May 07 12:22:33 UTC 2024 - 8.9K bytes - Viewed (0)
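The snippet above scans every node in a graph for the `_tpu_replicate` attribute and short-circuits on the first hit. A minimal Python analogue of that predicate pattern (the node and attribute structures here are hypothetical dict stand-ins, not TensorFlow's real `Graph`/`Node` API):

```python
def has_attr(nodes, attr_name="_tpu_replicate"):
    """Return True if any node in the graph carries the given attribute."""
    for node in nodes:
        if attr_name in node.get("attrs", {}):
            return True  # short-circuit on the first match, as above
    return False

# Toy graph: one plain node, one node tagged for TPU replication.
graph = [
    {"name": "input", "attrs": {}},
    {"name": "matmul", "attrs": {"_tpu_replicate": "cluster_0"}},
]
print(has_attr(graph))  # True
```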
tensorflow/compiler/jit/flags.cc
#include "absl/strings/strip.h"
#include "tensorflow/compiler/mlir/tensorflow/utils/dump_graph.h"
#include "xla/parse_flags_from_env.h"
#include "tensorflow/core/platform/macros.h"
#include "tensorflow/core/tpu/kernels/sparse_core_xla_flags_defaults.h"
#include "tensorflow/core/util/command_line_flags.h"

namespace tensorflow {
namespace {

BuildXlaOpsPassFlags* build_ops_flags;
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Apr 17 18:52:57 UTC 2024 - 24.5K bytes - Viewed (0)
tensorflow/compiler/jit/encapsulate_util.h
extern const char kXlaLiftedArgOutsideCompilationAttrName[];

// Attribute indicating that this is an IdentityN node receiving inputs for an
// outside compilation Placeholder node (the original outside compilation node
// is moved out of TPU computation, and we left a Placeholder node there).
// Attribute value will be a string, which is the outside compilation cluster
// name for the outside compilation Placeholder node.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Feb 22 06:59:07 UTC 2024 - 7.4K bytes - Viewed (0)
tensorflow/compiler/mlir/tensorflow/tests/prepare_tpu_computation_for_tf_export.mlir
// RUN: tf-opt %s -split-input-file -verify-diagnostics -prepare-tpu-computation-for-tf-export | FileCheck %s

// CHECK-LABEL: @ShardingAttr
func.func @ShardingAttr(%arg0: tensor<128x10xf32> {mhlo.sharding = "\08\03\1A\02\01\02\22\02\00\01"},
                        %arg1: tensor<10x1024xf32> {mhlo.sharding = "\08\01\1A\01\01\22\01\00"},
                        %arg2: tensor<128x1024xf32> {mhlo.sharding = ""})
    -> (tensor<128x10xf32>, tensor<10x1024xf32>, tensor<128x1024xf32>) {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Feb 14 18:46:36 UTC 2024 - 9.2K bytes - Viewed (0)
ci/official/README.md
- Different Python versions
- Linux, MacOS, and Windows machines (these pool definitions are internal)
- x86 and arm64
- CPU-only, or with NVIDIA CUDA support (Linux only), or with TPUs

## How to Test Your Changes to TensorFlow

You may check how your changes will affect TensorFlow by:

1. Creating a PR and observing the presubmit test results
2. Running the CI scripts locally, as explained below
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Feb 01 03:21:19 UTC 2024 - 8K bytes - Viewed (0)
src/runtime/mbitmap.go
	return
}
tp0 := s.typePointersOfType(typ, addr)
tp1 := s.typePointersOf(addr, size)
failed := false
for {
	var addr0, addr1 uintptr
	tp0, addr0 = tp0.next(addr + size)
	tp1, addr1 = tp1.next(addr + size)
	if addr0 != addr1 {
		failed = true
		break
	}
	if addr0 == 0 {
		break
	}
}
if failed {
	tp0 := s.typePointersOfType(typ, addr)
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu May 23 00:18:55 UTC 2024 - 60K bytes - Viewed (0)
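The runtime code above walks two independently constructed pointer iterators in lock step and flags a failure as soon as the addresses they yield diverge, treating address 0 as the end marker. The same cross-checking pattern, sketched in Python over plain iterators (the function and the sentinel convention are illustrative, not the Go runtime's types):

```python
from itertools import zip_longest

def iterators_agree(it_a, it_b, sentinel=0):
    """Walk two iterators in lock step and report the first mismatch.

    Returns (True, None) if both yield identical sequences, otherwise
    (False, index_of_first_mismatch). The shorter iterator is padded
    with `sentinel`, mirroring the "address 0 means done" convention.
    """
    for i, (a, b) in enumerate(zip_longest(it_a, it_b, fillvalue=sentinel)):
        if a != b:
            return (False, i)
    return (True, None)

ok, _ = iterators_agree(iter([8, 16, 24]), iter([8, 16, 24]))
print(ok)  # True
ok, where = iterators_agree(iter([8, 16, 24]), iter([8, 16, 32]))
print(ok, where)  # False 2
```

Cross-validating two implementations like this is a cheap self-check: the slower, type-driven walk vetoes the fast path whenever they disagree.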
tensorflow/compiler/mlir/tfrt/tests/tf_to_corert/fallback.mlir
// RUN: tf-tfrt-opt -tf-to-tfrt %s | FileCheck %s --dump-input=fail --dump-input-filter=all
// RUN: tf-tfrt-opt -pass-pipeline='builtin.module(tf-to-tfrt{target-tpurt=true tpu-use-core-selector=false})' %s | FileCheck %s --dump-input=fail --dump-input-filter=all

// CHECK-LABEL: func @_tfrt_fallback_init
// CHECK-SAME: {{.*}} !tfrt.chain
// CHECK: tfrt_fallback_async.createop(%arg0) key(0) device("/device:CPU:0") "tf.ParseExampleV2"()
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed May 08 00:18:59 UTC 2024 - 9.1K bytes - Viewed (0)