Results 11 - 20 of 650 for _kernel (0.12 sec)
tensorflow/c/kernels/summary_op_test.cc
    std::unique_ptr<OpKernel> kernel =
        CreateOpKernel(DeviceType(DEVICE_CPU), nullptr, nullptr, def, 1, &status);
    ASSERT_TRUE(status.ok()) << status.ToString();
    OpKernelContext::Params params;
    DummyDevice dummy_device(nullptr);
    params.device = &dummy_device;
    params.op_kernel = kernel.get();
    AllocatorAttributes alloc_attrs;
    params.output_attr_array = &alloc_attrs;
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Jul 18 15:10:51 UTC 2022 - 6.7K bytes - Viewed (0)
tensorflow/compiler/mlir/quantization/tensorflow/python/integration_test/quantize_model_test.py
    y_shape = [v if v is not None else n for v in shapes[1]]

    class MatmulModel(module.Module):

      def __init__(self, bias: Optional[core.Tensor]):
        self._bias = bias
        self._kernel = np.random.uniform(size=y_shape).astype('f4')
        self._min = (-0.8, -0.8, -0.9)
        self._max = (0.9, 0.9, 1.0)

      @def_function.function(
          input_signature=[
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 17 03:36:50 UTC 2024 - 235.6K bytes - Viewed (0)
tensorflow/c/kernels_test.cc
    inputs.emplace_back();
    p.inputs = inputs;
    Status status;
    std::unique_ptr<OpKernel> kernel =
        GetFakeKernel(device_name, op_name, node_name, &status);
    TF_EXPECT_OK(status);
    ASSERT_NE(nullptr, kernel.get());
    p.op_kernel = kernel.get();
    OpKernelContext ctx(&p);
    kernel->Compute(&ctx);
    ASSERT_EQ(2, num_inputs);
    ASSERT_EQ(1, num_outputs);
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Sep 06 19:12:29 UTC 2023 - 50.4K bytes - Viewed (0)
tensorflow/c/kernels/bitcast_op_test.cc
    std::unique_ptr<OpKernel> kernel =
        CreateOpKernel(DeviceType(DEVICE_CPU), nullptr, nullptr, def, 1, &status);
    ASSERT_TRUE(status.ok()) << status.ToString();
    OpKernelContext::Params params;
    DummyDevice dummy_device(nullptr);
    params.device = &dummy_device;
    params.op_kernel = kernel.get();
    gtl::InlinedVector<TensorValue, 4> inputs;
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Jul 18 15:10:51 UTC 2022 - 5.5K bytes - Viewed (0)
tensorflow/c/kernels/summary_op.cc
      }
      ~Params() {
        TF_DeleteStatus(status);
        TF_DeleteTensor(tags);
        TF_DeleteTensor(values);
      }
    };

    // dummy functions used for kernel registration
    void* ScalarSummaryOp_Create(TF_OpKernelConstruction* ctx) { return nullptr; }
    void ScalarSummaryOp_Delete(void* kernel) {}

    // Helper functions for compute method
    bool IsSameSize(TF_Tensor* tensor1, TF_Tensor* tensor2);
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Sep 06 19:12:29 UTC 2023 - 6.2K bytes - Viewed (0)
tensorflow/compiler/jit/kernels/xla_ops.h
    // It does not have corresponding OpDef because it is never present
    // in the GraphDef.
    // Currently, it is used by eager runtime. FunctionLibraryRuntime creates
    // this kernel when asked to create a kernel for an XLA-compiled function.
    //
    // `has_ref_vars`: whether the input computation can have reference variables.
    // TODO(cheshire): instead derive this information from the input graph.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Oct 16 23:44:26 UTC 2023 - 4.8K bytes - Viewed (0)
tensorflow/compiler/mlir/tf2xla/transforms/tf2xla_rewriter.cc
    }
    tensorflow::OpKernel* op_kernel_raw;
    status = params_.function_library->CreateKernel(props, &op_kernel_raw);
    if (!status.ok()) {
      return op_->emitRemark()
             << "failed to create tf2xla kernel: " << status.ToString();
    }
    // Transfer ownership of the kernel to a local smart pointer.
    auto op_kernel = absl::WrapUnique(op_kernel_raw);
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu May 02 09:16:07 UTC 2024 - 18.9K bytes - Viewed (0)
tensorflow/compiler/mlir/lite/quantization/device_target.h
     protected:
      // Adds the kernel spec with the custom scale function for the kernel.
      LogicalResult RegisterKernel(llvm::StringRef kernel,
                                   const KernelSpecs::Signature& signature,
                                   const ScaleFn& fn, const ScaleDecomposeFn& dfn);

      // Adds the kernel spec with the scale constraint type for the kernel.
      LogicalResult RegisterKernel(llvm::StringRef kernel,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Mar 08 10:41:08 UTC 2024 - 7.1K bytes - Viewed (0)
tensorflow/compiler/mlir/tfr/README.md
    These ops can also be composite ops.
    *   (Performance) User defines a custom kernel for a regular structure
        (i.e. LSTM), but it is hard to add the logic to fuse the individual
        ops to target this kernel in the inference graph.
        *   *Solution*: The user should define a new TF op, which corresponds
            to the fused kernel, with composition, and use this op to build
            the model for
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Mar 29 18:32:13 UTC 2022 - 6.2K bytes - Viewed (0)
src/vendor/golang.org/x/sys/cpu/cpu_linux_arm64.go
    // When this happens, we have two options. If the Linux kernel is new
    // enough (4.11+), we can read the arm64 registers directly which'll
    // trap into the kernel and then return back to userspace.
    //
    // But on older kernels, such as Linux 4.4.180 as used on many Synology
    // devices, calling readARM64Registers (specifically getisar0) will
    // cause a SIGILL and we'll die. So for older kernels, parse /proc/cpuinfo
    // instead.
    //
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed May 08 16:12:58 UTC 2024 - 3.4K bytes - Viewed (0)
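
The comment in the cpu_linux_arm64.go result above describes a version-gated fallback: on Linux 4.11+ the ARM64 ID registers can be read directly (the read traps into the kernel and returns), while older kernels would SIGILL, so /proc/cpuinfo is parsed instead. Below is a minimal standalone Go sketch of that gating. The kernelAtLeast helper and the read of /proc/sys/kernel/osrelease are assumptions for illustration only, not the detection code golang.org/x/sys/cpu actually uses.

    // Illustrative only; not part of golang.org/x/sys/cpu.
    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // kernelAtLeast reports whether the running Linux kernel is at least
    // major.minor, based on /proc/sys/kernel/osrelease (e.g. "4.4.180+").
    // Hypothetical helper; release strings without a numeric minor part are
    // treated as "too old" for simplicity.
    func kernelAtLeast(major, minor int) bool {
        raw, err := os.ReadFile("/proc/sys/kernel/osrelease")
        if err != nil {
            return false // be conservative: assume an old kernel
        }
        parts := strings.SplitN(strings.TrimSpace(string(raw)), ".", 3)
        if len(parts) < 2 {
            return false
        }
        maj, err1 := strconv.Atoi(parts[0])
        mnr, err2 := strconv.Atoi(parts[1])
        if err1 != nil || err2 != nil {
            return false
        }
        return maj > major || (maj == major && mnr >= minor)
    }

    func main() {
        if kernelAtLeast(4, 11) {
            // New enough: reading the ID registers traps into the kernel and
            // returns safely (what the comment calls readARM64Registers).
            fmt.Println("would read the arm64 ID registers directly")
        } else {
            // Older kernels (e.g. 4.4.x on some Synology devices) would
            // SIGILL on that read, so fall back to parsing /proc/cpuinfo.
            fmt.Println("would parse /proc/cpuinfo instead")
        }
    }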