Results 21 - 30 of 171 for Bias (0.05 sec)
-
tensorflow/c/experimental/ops/nn_ops.h
// Adds `bias` to `value`.
Status BiasAdd(AbstractContext* ctx, AbstractTensorHandle* const value,
               AbstractTensorHandle* const bias, AbstractTensorHandle** output,
               const char* data_format = "NHWC", const char* name = nullptr,
               const char* raw_device_name = nullptr);

// The backward operation for "BiasAdd" on the "bias" tensor.
Status BiasAddGrad(AbstractContext* ctx,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue May 10 19:11:36 UTC 2022 - 2.6K bytes - Viewed (0) -
tensorflow/compiler/mlir/tfr/examples/mnist/ops_defs.py
    'NewFullyConnected',
    inputs=['input_: T', 'filter_: T', 'bias: T'],
    attrs=['act: {"", "RELU", "RELU6", "TANH"} = ""'],
    derived_attrs=['T: {float, int8}'],
    outputs=['o: T'])
def _composite_fully_connected(input_, filter_, bias, act):
  res = tf.raw_ops.MatMul(
      a=input_, b=filter_, transpose_a=False, transpose_b=True)
  res = tf.raw_ops.Add(x=res, y=bias)
  if act == 'RELU':
    return tf.raw_ops.Relu(features=res)
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Aug 31 20:23:51 UTC 2023 - 6.8K bytes - Viewed (0) -
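The composite op above is just a MatMul with the filter transposed, a bias add, and an optional activation. A plain-NumPy sketch of the same computation (function and variable names here are mine, not from the TFR example):

```python
import numpy as np

def fully_connected(x, w, b, act=""):
    """NumPy sketch of the composite: x @ w.T + b, then optional activation."""
    res = x @ w.T            # MatMul with transpose_b=True
    res = res + b            # bias add broadcasts over the batch dimension
    if act == "RELU":
        res = np.maximum(res, 0.0)
    return res

x = np.array([[1.0, -2.0]])   # shape (1, 2)
w = np.array([[3.0, 4.0]])    # shape (1, 2): one output unit
b = np.array([-10.0])
print(fully_connected(x, w, b, act="RELU"))  # [[0.]] — (-5 - 10) clipped by ReLU
```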
tensorflow/compiler/mlir/lite/tests/optimize-after-quantization.mlir
  func.return %1 : tensor<256x8x7x3xf32>
// CHECK: %[[weight:.*]] = arith.constant dense<3.000000e+00> : tensor<3x3x3x3xf32>
// CHECK: %[[bias:.*]] = arith.constant dense<[1.500000e+00, 3.000000e+00, 4.500000e+00]>
// CHECK: %[[conv:.*]] = "tfl.conv_2d"(%arg0, %[[weight]], %[[bias]])
// CHECK: return %[[conv]] : tensor<256x8x7x3xf32>
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Jan 05 18:35:42 UTC 2024 - 1.4K bytes - Viewed (0) -
tensorflow/compiler/mlir/lite/stablehlo/transforms/uniform_quantized_stablehlo_to_tfl_pass.cc
  }
}

// Creates a new `tfl.qconst` op for the bias. The bias values are 0s, because
// this bias is a dummy bias (note that bias fusion is not considered for this
// transformation). The quantization scale for the bias is input scale *
// filter scale. `filter_const_op` is used to retrieve the filter scales and
// the size of the bias constant.
TFL::QConstOp CreateTflConstOpForDummyBias(
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Apr 22 09:00:19 UTC 2024 - 99.8K bytes - Viewed (0) -
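The comment describes a common quantization convention: the dummy bias is all zeros, and its scale is the product of the input scale and each (possibly per-channel) filter scale, so the int32 bias accumulates on the same scale as the int8 input-times-filter products. A minimal sketch of that arithmetic, with illustrative names rather than the pass's actual API:

```python
def dummy_bias(num_units, input_scale, filter_scales):
    """All-zero dummy bias quantized with scale = input_scale * filter_scale."""
    bias_scales = [input_scale * s for s in filter_scales]
    bias_values = [0] * num_units  # dummy bias: every quantized value is 0
    return bias_values, bias_scales

values, scales = dummy_bias(3, 0.5, [0.1, 0.2, 0.4])
print(scales)  # [0.05, 0.1, 0.2]
```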
tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model_mark_initialized_variables.mlir
func.func @serving_default(
    %arg0: tensor<!tf_type.resource<tensor<100x50xf32>>> {tf.resource_name = "dense/kernel"},
    %arg1: tensor<!tf_type.resource<tensor<50xf32>>> {tf.resource_name = "dense/bias"})
    -> (tensor<100x50xf32> {tf_saved_model.index_path = ["dense_2"]})
    attributes {tf.entry_function = {control_outputs = "", inputs = "", outputs = "dense_2/Add:0"}, tf_saved_model.exported_names = ["serving_default"]} {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Oct 30 06:52:55 UTC 2023 - 2.1K bytes - Viewed (0) -
tensorflow/c/experimental/ops/nn_ops.cc
}

// Op: BiasAdd()
// Summary: Adds `bias` to `value`.
//
// Description:
//   This is a special case of `tf.add` where `bias` is restricted to be 1-D.
//   Broadcasting is supported, so `value` may have any number of dimensions.
Status BiasAdd(AbstractContext* ctx, AbstractTensorHandle* const value,
               AbstractTensorHandle* const bias, AbstractTensorHandle** output,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue May 10 19:11:36 UTC 2022 - 5.9K bytes - Viewed (0) -
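The restriction described here, a 1-D `bias` added to a `value` of any rank, matches ordinary trailing-dimension broadcasting when the channel dimension is last (the default NHWC data format). A small NumPy illustration of the semantics (not the C API itself):

```python
import numpy as np

# NHWC BiasAdd semantics: the 1-D bias is broadcast along the last
# (channel) dimension of an arbitrarily ranked value tensor.
value = np.zeros((2, 4, 4, 3))    # N, H, W, C
bias = np.array([1.0, 2.0, 3.0])  # one entry per channel
out = value + bias                # NumPy broadcasting matches NHWC BiasAdd
print(out[0, 0, 0])  # [1. 2. 3.]
```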
tensorflow/compiler/mlir/lite/utils/lstm_utils.h
// that also contains other supporting ops needed to construct the operands for
// the fused op. The caller provides the containing FuncOp as input with
// arguments specifying the input, weight, projection and bias.
// The weight, projection, bias and layer norm scale all need to be
// RankedTensorType.
// This class sets the layer norm coefficients to NoneType.
class ConvertLSTMCellSimpleToFusedLSTM {
 public:
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Jun 03 00:14:05 UTC 2023 - 7.3K bytes - Viewed (0) -
src/time/zoneinfo_windows.go
	std.offset = -int(i.Bias) * 60
	l.cacheStart = alpha
	l.cacheEnd = omega
	l.cacheZone = std
	l.tx = make([]zoneTrans, 1)
	l.tx[0].when = l.cacheStart
	l.tx[0].index = 0
	return
}

// StandardBias must be ignored if StandardDate is not set,
// so this computation is delayed until after the nzone==1
// return above.
std.offset = -int(i.Bias+i.StandardBias) * 60
dst := &l.zone[1]
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu Sep 14 07:20:34 UTC 2023 - 6.6K bytes - Viewed (0) -
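Windows reports `Bias` (and `StandardBias`) in minutes *west* of UTC, while Go zone offsets are seconds *east*, which is why the code negates and multiplies by 60. A sketch of just that conversion (the helper name is mine):

```python
def std_offset_seconds(bias_minutes, standard_bias_minutes=0):
    """Convert Windows Bias (+StandardBias), minutes west of UTC,
    to a zone offset in seconds east of UTC."""
    return -(bias_minutes + standard_bias_minutes) * 60

print(std_offset_seconds(300))     # US Eastern standard time: -18000 (UTC-5)
print(std_offset_seconds(-60))     # CET: 3600 (UTC+1)
```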
tensorflow/compiler/mlir/lite/utils/arithmetic_count_util.h
  }
  const int64_t cost_per_col = 2 * weight_type.getNumElements();
  *count = cost_per_col * cols;
  auto bias = op->getOperand(2);
  if (bias) {
    auto bias_type =
        mlir::dyn_cast_or_null<mlir::RankedTensorType>(bias.getType());
    if (bias_type && bias_type.hasStaticShape()) {
      *count += output_type.getNumElements();
    }
  }
  return true;
}
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 3.1K bytes - Viewed (0) -
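The counting logic above charges two ops (a multiply and an add) per weight element per column, plus one extra add per output element when a statically shaped bias is present. The same rule in a few lines of Python (names are illustrative, not the utility's API):

```python
def fully_connected_op_count(num_weights, cols, num_outputs, has_bias):
    """Arithmetic count for a fully connected layer:
    2 ops per weight per column, plus one add per output for the bias."""
    count = 2 * num_weights * cols
    if has_bias:
        count += num_outputs
    return count

# 50x100 weights, one column, 100 outputs, with bias:
print(fully_connected_op_count(50 * 100, 1, 100, True))  # 10100
```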
src/math/frexp.go
	switch {
	case f == 0:
		return f, 0 // correctly return -0
	case IsInf(f, 0) || IsNaN(f):
		return f, 0
	}
	f, exp = normalize(f)
	x := Float64bits(f)
	exp += int((x>>shift)&mask) - bias + 1
	x &^= mask << shift
	x |= (-1 + bias) << shift
	frac = Float64frombits(x)
	return
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Apr 11 16:34:30 UTC 2022 - 929 bytes - Viewed (0)
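`Frexp` splits f into frac × 2**exp with frac in [0.5, 1): the code reads the raw float64 bits, un-biases the stored exponent (bias 1023 for float64), then overwrites the exponent field with bias − 1 so the remaining significand lands in [0.5, 1). A bit-level Python sketch of the same idea; it skips the zero/Inf/NaN cases and the subnormal handling that `normalize` provides in the Go code:

```python
import struct

BIAS, SHIFT, MASK = 1023, 52, 0x7FF  # IEEE 754 float64 exponent layout

def frexp(f):
    """Return (frac, exp) with f == frac * 2**exp and frac in [0.5, 1)."""
    x = struct.unpack("<Q", struct.pack("<d", f))[0]  # raw float64 bits
    exp = ((x >> SHIFT) & MASK) - BIAS + 1            # un-bias, shift by 1
    x &= ~(MASK << SHIFT)                             # clear the exponent field
    x |= (BIAS - 1) << SHIFT                          # re-bias into [0.5, 1)
    return struct.unpack("<d", struct.pack("<Q", x))[0], exp

print(frexp(8.0))  # (0.5, 4): 0.5 * 2**4 == 8.0
```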