Results 11 - 20 of 35 for NVIDIA (0.41 sec)
tensorflow/compiler/mlir/tensorflow/ir/tf_ops_device_helper.h
class RuntimeDevices;

// Returns true if at least one GPU device is available at runtime.
bool CanUseGpuDevice(const RuntimeDevices &devices);

// Returns true if all of the GPUs available at runtime support TensorCores
// (NVIDIA compute capability >= 7.0).
bool CanUseTensorCores(const RuntimeDevices &devices);

// Returns true if operation does not have explicit device placement that would
// prevent it from running on GPU device.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Nov 12 21:57:12 UTC 2021 - 1.4K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/ir/tf_ops_device_helper.cc
bool CanUseGpuDevice(const RuntimeDevices &devices) {
  return llvm::any_of(devices.device_names(), IsGpuDevice);
}

// Returns true if all of the GPUs available at runtime support TensorCores
// (NVIDIA compute capability >= 7.0).
bool CanUseTensorCores(const RuntimeDevices &devices) {
  auto has_tensor_cores = [&](const DeviceNameUtils::ParsedName &device) {
    auto md = devices.GetGpuDeviceMetadata(device);
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 21 08:41:18 UTC 2022 - 2.4K bytes - Viewed (0) -
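The two tf_ops_device_helper snippets above gate GPU optimizations on device capability: TensorCores are usable only when every GPU available at runtime reports NVIDIA compute capability >= 7.0. A minimal Python sketch of that all-of check (the function name and the `(major, minor)` tuple representation are illustrative assumptions, not the TensorFlow API):

```python
def can_use_tensor_cores(gpu_compute_capabilities):
    """Return True only if every GPU reports compute capability >= 7.0.

    `gpu_compute_capabilities` is a hypothetical list of (major, minor)
    tuples; this mirrors the all-of check in CanUseTensorCores above,
    not the real RuntimeDevices interface.
    """
    # An empty device list means no GPU at all, so no TensorCores either.
    return bool(gpu_compute_capabilities) and all(
        (major, minor) >= (7, 0) for major, minor in gpu_compute_capabilities
    )

print(can_use_tensor_cores([(7, 0), (8, 6)]))  # True: Volta and Ampere
print(can_use_tensor_cores([(6, 1), (7, 5)]))  # False: Pascal GPU present
```

Tuple comparison makes the threshold check concise: `(6, 1) >= (7, 0)` is false, so any pre-Volta GPU fails the predicate, matching the "all of the GPUs" wording in the comment.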
.github/workflows/trusted-partners.yml
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Sep 12 14:49:29 UTC 2023 - 2.4K bytes - Viewed (0) -
ci/official/containers/linux_arm64/Dockerfile
COPY builder.patchelf/build_patchelf.sh /build_patchelf.sh
COPY apt.conf /etc/apt/
RUN /build_patchelf.sh

################################################################################
FROM nvidia/cuda:12.3.1-devel-ubuntu20.04 as devel
################################################################################

COPY --from=builder /dt10 /dt10
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Jan 08 09:32:19 UTC 2024 - 4.1K bytes - Viewed (0) -
tensorflow/compiler/jit/tests/auto_clustering_test.cc
  TF_ASSERT_OK(
      RunAutoClusteringTestWithPbtxt("keras_imagenet_main_graph_mode"));
}

TEST_F(AutoClusteringTestImpl, OpenSeq2SeqGNMT) {
  // Model is from https://github.com/NVIDIA/OpenSeq2Seq.
  // Generated from
  //
  //   python run.py \
  //     --config_file=example_configs/text2text/en-de/en-de-gnmt-like-4GPUs.py \
  //     --use_xla_jit
  TF_ASSERT_OK(
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Jan 13 20:13:03 UTC 2022 - 3.6K bytes - Viewed (0) -
pkg/scheduler/framework/plugins/nodeaffinity/node_affinity_test.go
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Mon Dec 18 12:00:10 UTC 2023 - 38.7K bytes - Viewed (0) -
configure.py
Args:
  environ_cp: copy of the os.environ.
  var_name: string for name of environment variable, e.g. "TF_NEED_CUDA".
  query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs".
  enabled_by_default: boolean for default behavior.
  question: optional string for how to ask for user input.
  yes_reply: optional string for reply when feature is enabled.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Jun 10 04:32:44 UTC 2024 - 53.8K bytes - Viewed (0) -
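The configure.py docstring above describes a prompt helper keyed on a copy of `os.environ`. A hedged sketch of the non-interactive part of that lookup (the function name and the `'1'`/`'0'` convention are assumptions here; the real helper also asks the user `question` and prints `yes_reply` or the negative reply):

```python
def get_env_var_or_default(environ_cp, var_name, enabled_by_default):
    """Illustrative only: resolve a feature flag like TF_NEED_CUDA from a
    copy of os.environ. '1' means enabled, '0' disabled; an unset variable
    falls back to the default. The real configure.py helper additionally
    prompts the user interactively when the variable is unset.
    """
    value = environ_cp.get(var_name)
    if value is None:
        # Variable not set: use the documented enabled_by_default behavior.
        return enabled_by_default
    return value == '1'

env = {'TF_NEED_CUDA': '1'}
print(get_env_var_or_default(env, 'TF_NEED_CUDA', False))  # True
print(get_env_var_or_default({}, 'TF_NEED_CUDA', True))    # True (default)
```

Working on a copy of the environment, as the docstring specifies, keeps the configure run from mutating the caller's process environment.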
tensorflow/compiler/mlir/tensorflow/tests/layout_optimization_layout_assignment_gpu_cc_60.mlir
func.func @transposeConv2D_3x3_f16(%input: tensor<1x28x28x64xf16>, %filter: tensor<3x3x64x64xf16>) -> tensor<1x26x26x64xf16> {
  // cuDNN prefers NCHW data format for spatial convolutions in f16 before
  // compute capability 7.0 (NVIDIA Tensor Cores).
  // CHECK: "tf.Conv2D"(%[[INPUT_TRANSPOSE:[0-9]*]], %arg1)
  // CHECK-SAME: data_format = "NCHW"
  %0 = "tf.Conv2D"(%input, %filter) {
    data_format = "NHWC",
    padding = "VALID",
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 21 08:41:18 UTC 2022 - 5.8K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/tests/layout_optimization_to_nhwc.mlir
// it to NCHW before padding, and does all computations in NCHW (this is the
// default setup for ResNet model trained in fp32 on GPU).
//
// To be able to use Tensor Cores on latest NVIDIA GPUs this model has to be
// converted to NHWC data format.

  // Padding in spatial dimension (NCHW)
  %0 = "tf.Const"() {value = dense<[[0, 0], [0, 0], [3, 3], [3, 3]]> : tensor<4x2xi32>} : () -> tensor<4x2xi32>
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Oct 30 06:52:55 UTC 2023 - 7.3K bytes - Viewed (0) -
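Both MLIR layout tests above hinge on moving tensors between NHWC and NCHW, the two data formats cuDNN and Tensor Cores prefer in different situations. The axis permutation itself can be sketched by matching layout letters (a toy helper for illustration, not part of the MLIR passes):

```python
def transpose_layout(shape, src='NHWC', dst='NCHW'):
    """Reorder a shape tuple from layout `src` to layout `dst` by matching
    axis letters. Illustrative only: the layout-optimization passes above
    rewrite tf.Conv2D ops and insert transposes, they do not touch shapes
    directly.
    """
    # perm[i] = position in `src` of the axis that `dst` wants at slot i.
    perm = [src.index(axis) for axis in dst]
    return tuple(shape[i] for i in perm)

# The 1x28x28x64 input from the f16 test, moved to the NCHW form cuDNN prefers.
print(transpose_layout((1, 28, 28, 64)))                          # (1, 64, 28, 28)
print(transpose_layout((1, 64, 28, 28), src='NCHW', dst='NHWC'))  # (1, 28, 28, 64)
```

Applying the forward and reverse permutations round-trips the shape, which is why the passes can freely insert transpose pairs and then cancel redundant ones.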
docs/en/data/external_links.yml
  Neon
  link: https://neon.tech/blog/deploy-a-serverless-fastapi-app-with-neon-postgres-and-aws-app-runner-at-any-scale
  title: Deploy a Serverless FastAPI App with Neon Postgres and AWS App Runner at any scale
- author: Kurtis Pykes - NVIDIA
  link: https://developer.nvidia.com/blog/building-a-machine-learning-microservice-with-fastapi/
  title: Building a Machine Learning Microservice with FastAPI
- author: Ravgeet Dhillon - Twilio
  link: https://www.twilio.com/en-us/blog/booking-appointments-twilio-notion-fastapi...
Registered: Mon Jun 17 08:32:26 UTC 2024 - Last Modified: Wed Jun 12 00:47:57 UTC 2024 - 22K bytes - Viewed (0)