Results 11 - 20 of 23 for NVIDIA (0.11 sec)
pkg/scheduler/framework/plugins/nodeaffinity/node_affinity_test.go
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Mon Dec 18 12:00:10 UTC 2023 - 38.7K bytes - Viewed (0) -
configure.py
Args:
  environ_cp: copy of the os.environ.
  var_name: string for name of environment variable, e.g. "TF_NEED_CUDA".
  query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs".
  enabled_by_default: boolean for default behavior.
  question: optional string for how to ask for user input.
  yes_reply: optional string for reply when feature is enabled.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Jun 10 04:32:44 UTC 2024 - 53.8K bytes - Viewed (0) -
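The docstring above describes how configure.py resolves a feature flag such as TF_NEED_CUDA from the environment. A minimal sketch of such a helper, under the assumption (the name `get_feature_enabled` and the non-interactive fallback are hypothetical, not configure.py's actual implementation) that the flag is read from an environ copy and falls back to a default:

```python
import os

def get_feature_enabled(environ_cp, var_name, enabled_by_default):
    """Sketch: resolve a TF_NEED_* style flag from a copy of os.environ.

    Hypothetical simplification of configure.py's prompting helper: it
    only consults the environment copy and the default, with no
    interactive question/yes_reply handling.
    """
    val = environ_cp.get(var_name)
    if val is None:
        return enabled_by_default
    return val == "1"

# Example: emulate running `TF_NEED_CUDA=1 ./configure`.
env = dict(os.environ)
env["TF_NEED_CUDA"] = "1"
print(get_feature_enabled(env, "TF_NEED_CUDA", False))  # True
```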
tensorflow/compiler/mlir/tensorflow/tests/layout_optimization_to_nhwc.mlir
// it to NCHW before padding, and does all computations in NCHW (this is the
// default setup for ResNet model trained in fp32 on GPU).
//
// To be able to use Tensor Cores on latest NVIDIA GPUs this model has to be
// converted to NHWC data format.

// Padding in spatial dimension (NCHW)
%0 = "tf.Const"() {value = dense<[[0, 0], [0, 0], [3, 3], [3, 3]]> : tensor<4x2xi32>} : () -> tensor<4x2xi32>
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Oct 30 06:52:55 UTC 2023 - 7.3K bytes - Viewed (0) -
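The MLIR test above converts a model from NCHW to NHWC so it can use Tensor Cores. The layout change is a fixed axis permutation (N,C,H,W → N,H,W,C), which also applies to the per-dimension padding pairs in the `tf.Const` snippet. A small illustrative sketch (function names are hypothetical):

```python
def nchw_to_nhwc_shape(shape):
    """Permute a NCHW shape tuple to NHWC (axis order 0, 2, 3, 1)."""
    n, c, h, w = shape
    return (n, h, w, c)

def nchw_to_nhwc_paddings(paddings):
    """Apply the same permutation to per-dimension (low, high) padding pairs."""
    n, c, h, w = paddings
    return [n, h, w, c]

# The NCHW paddings from the MLIR test, reordered for NHWC: the padding of 3
# on each side moves from the last two dimensions to the middle two.
print(nchw_to_nhwc_paddings([[0, 0], [0, 0], [3, 3], [3, 3]]))
# [[0, 0], [3, 3], [3, 3], [0, 0]]
```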
docs/en/data/external_links.yml
Neon
  link: https://neon.tech/blog/deploy-a-serverless-fastapi-app-with-neon-postgres-and-aws-app-runner-at-any-scale
  title: Deploy a Serverless FastAPI App with Neon Postgres and AWS App Runner at any scale
- author: Kurtis Pykes
  NVIDIA
  link: https://developer.nvidia.com/blog/building-a-machine-learning-microservice-with-fastapi/
  title: Building a Machine Learning Microservice with FastAPI
- author: Ravgeet Dhillon
  Twilio
  link: https://www.twilio.com/en-us/blog/booking-appointments-twilio-notion-fastapi...
Registered: Mon Jun 17 08:32:26 UTC 2024 - Last Modified: Wed Jun 12 00:47:57 UTC 2024 - 22K bytes - Viewed (0) -
tensorflow/compiler/jit/BUILD
tf_cuda_cc_test(
    name = "pjrt_compile_util_test",
    srcs = ["pjrt_compile_util_test.cc"],
    tags = [
        "config-cuda-only",
        "no_oss",  # This test only runs with GPU.
        "requires-gpu-nvidia",
        "xla",
    ],
    deps = [
        ":pjrt_compile_util",
        ":test_util",
        ":xla_gpu_jit",
        "//tensorflow/cc:function_ops",
        "//tensorflow/cc:math_ops",
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 31 00:41:19 UTC 2024 - 61.5K bytes - Viewed (0) -
pkg/kubelet/stats/provider_test.go
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Thu Mar 07 08:12:16 UTC 2024 - 20K bytes - Viewed (0) -
RELEASE.md
The `tensorflow` pip package has a new, optional installation method for Linux that installs necessary Nvidia CUDA libraries through pip. As long as the Nvidia driver is already installed on the system, you may now run `pip install tensorflow[and-cuda]` to install TensorFlow's Nvidia CUDA library dependencies in the Python environment. Aside from the Nvidia driver, no other pre-existing Nvidia CUDA packages are necessary. * Enable JIT-compiled i64-indexed kernels on GPU for large tensors...
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes - Viewed (0) -
.bazelrc
build:cuda_clang --config=cuda
# Enable TensorRT optimizations https://developer.nvidia.com/tensorrt
build:cuda_clang --config=tensorrt
build:cuda_clang --action_env=TF_CUDA_CLANG="1"
build:cuda_clang --@local_config_cuda//:cuda_compiler=clang
# Select supported compute capabilities (supported graphics cards).
# This is the same as the official TensorFlow builds.
# See https://developer.nvidia.com/cuda-gpus#compute
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 17:12:54 UTC 2024 - 52.9K bytes - Viewed (0) -
src/debug/elf/elf.go
EM_TILEPRO     Machine = 188 /* Tilera TILEPro multicore architecture family */
EM_MICROBLAZE  Machine = 189 /* Xilinx MicroBlaze 32-bit RISC soft processor core */
EM_CUDA        Machine = 190 /* NVIDIA CUDA architecture */
EM_TILEGX      Machine = 191 /* Tilera TILE-Gx multicore architecture family */
EM_CLOUDSHIELD Machine = 192 /* CloudShield architecture family */
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue Apr 16 00:01:16 UTC 2024 - 134.6K bytes - Viewed (0) -
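The elf.go snippet above assigns EM_CUDA = 190 as the ELF machine value for NVIDIA CUDA binaries. Per the ELF specification, e_machine is a 16-bit field at offset 18 of the header, so it can be read directly; a minimal sketch (the `e_machine` helper is hypothetical, and it assumes a little-endian file rather than checking e_ident first):

```python
import struct

# EM_CUDA value from the ELF machine table in src/debug/elf/elf.go.
EM_CUDA = 190

def e_machine(header):
    """Read the e_machine field of an ELF header: a little-endian
    unsigned 16-bit value at byte offset 18."""
    (machine,) = struct.unpack_from("<H", header, 18)
    return machine

# Synthetic 64-bit little-endian ELF header with e_machine = EM_CUDA.
hdr = bytearray(64)
hdr[:4] = b"\x7fELF"  # ELF magic
hdr[4] = 2            # EI_CLASS: ELFCLASS64
hdr[5] = 1            # EI_DATA: little-endian
struct.pack_into("<H", hdr, 18, EM_CUDA)
print(e_machine(bytes(hdr)))  # 190
```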
tensorflow/compiler/mlir/tensorflow/ir/tf_ops_a_m.cc
if (one_by_one && trivial_strides && trivial_dilations) {
  return "NHWC";
}
// If filter spatial dimensions are unknown or not 1x1 we prefer NCHW, because
// it's the fastest option on NVIDIA GPUs with cuDNN library support.
return "NCHW";
}

//===----------------------------------------------------------------------===//
// Conv2dBackpropFilterOp
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 146.7K bytes - Viewed (0)
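The tf_ops_a_m.cc snippet encodes a layout heuristic: prefer NHWC only when the convolution is effectively a pointwise op (1x1 filter, unit strides and dilations), otherwise fall back to NCHW, the fastest layout on NVIDIA GPUs with cuDNN. A sketch of that decision in Python (the function name and argument shapes are assumptions for illustration, not the TensorFlow API):

```python
def preferred_data_format(filter_hw, strides, dilations):
    """Sketch of the layout heuristic: NHWC only for 1x1 filters with
    trivial strides/dilations, else NCHW (fastest with cuDNN)."""
    one_by_one = filter_hw == (1, 1)
    trivial_strides = all(s == 1 for s in strides)
    trivial_dilations = all(d == 1 for d in dilations)
    if one_by_one and trivial_strides and trivial_dilations:
        return "NHWC"
    return "NCHW"

print(preferred_data_format((1, 1), (1, 1, 1, 1), (1, 1, 1, 1)))  # NHWC
print(preferred_data_format((3, 3), (1, 1, 1, 1), (1, 1, 1, 1)))  # NCHW
```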