Results 41 - 50 of 64 for normalization (0.4 sec)
subprojects/core-api/src/main/java/org/gradle/api/Project.java
/**
 * Provides access to configuring input normalization.
 *
 * @since 4.0
 */
InputNormalizationHandler getNormalization();

/**
 * Configures input normalization.
 *
 * @since 4.0
 */
void normalization(Action<? super InputNormalizationHandler> configuration);
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Thu May 30 04:56:22 UTC 2024 - 74.3K bytes - Viewed (0) -
tensorflow/compiler/mlir/lite/transforms/prepare_quantize_helper.h
int index = enumerated_intermediates.first;
auto& tensor_property = enumerated_intermediates.second;
// intermediate tensors 0, 1, 2, 3 are only used with layer normalization.
if (!lstm_variant.use_layer_norm && index != 4) {
  continue;
}
TypeAttr attr = op->template getAttrOfType<TypeAttr>(intermediate_attributes[index]);
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 03 18:01:23 UTC 2024 - 28K bytes - Viewed (0) -
platforms/software/dependency-management/src/main/java/org/gradle/api/internal/artifacts/transform/DefaultTransform.java
    .severity(ERROR)
    .details("This is not allowed for cacheable transforms")
    .solution("Use a different normalization strategy via @PathSensitive, @Classpath or @CompileClasspath"));
      }
    }
  }

  @Override
  public FileNormalizer getInputArtifactNormalizer() {
    return fileNormalizer;
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Thu Apr 18 08:26:19 UTC 2024 - 34.8K bytes - Viewed (0) -
tensorflow/compiler/mlir/lite/transforms/optimize_patterns.td
  (AxesIsLastDimension $axes, $sum_input),
  (HasTwoUse $exp), (HasOneUse $sum)]>;

// Convert softmax(x-max(x)) into softmax(x) as the softmax op already deals
// with the max normalization.
def FoldNormalizationIntoSoftmax : Pat<
  (TFL_SoftmaxOp (TFL_SubOp:$sub $input,
    (TFL_ReduceMaxOp:$max $max_input,
      (Arith_ConstantOp I32ElementsAttr:$axes),
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu May 16 20:31:41 UTC 2024 - 66.4K bytes - Viewed (0) -
pkg/controller/podautoscaler/horizontal.go
  }
  return recommendation, reason, message
}

// convertDesiredReplicasWithBehaviorRate performs the actual normalization, given the constraint rate.
// It doesn't consider the stabilizationWindow; that is handled separately.
func (a *HorizontalController) convertDesiredReplicasWithBehaviorRate(args NormalizationArg) (int32, string, string) {
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Sat May 04 18:33:12 UTC 2024 - 63.6K bytes - Viewed (0) -
platforms/documentation/docs/src/docs/userguide/optimizing-performance/build-cache/build_cache.adoc
To handle volatile inputs for your tasks, consider <<incremental_build.adoc#sec:configure_input_normalization,configuring input normalization>>.

[[sec:task_output_caching_disabled_by_default]]
=== Marking tasks as non-cacheable by default

Certain tasks don't benefit from using the build cache.
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Wed May 15 11:30:10 UTC 2024 - 26.1K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/ir/tf_ops.td
The images have the same number of channels as the input tensor. For float input, the values are normalized one image at a time to fit in the range `[0, 255]`. `uint8` values are unchanged. The op uses two different normalization algorithms:

* If the input values are all positive, they are rescaled so the largest one is 255.
* If any input value is negative, the values are shifted so input value 0.0
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Apr 24 04:08:35 UTC 2024 - 90.5K bytes - Viewed (0) -
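The first (all-positive) branch described above can be sketched as follows. This is an illustrative sketch, not the TensorFlow kernel: `rescalePositive` is a hypothetical helper name, and the negative-input branch is omitted because its description is truncated in the snippet.

```go
package main

import "fmt"

// rescalePositive scales a slice of non-negative floats so that the
// largest value maps to 255, as in the all-positive branch above.
// Values are truncated toward zero when converted to uint8.
func rescalePositive(in []float64) []uint8 {
	max := 0.0
	for _, v := range in {
		if v > max {
			max = v
		}
	}
	out := make([]uint8, len(in))
	if max == 0 {
		return out // all zeros: nothing to rescale
	}
	for i, v := range in {
		out[i] = uint8(v / max * 255)
	}
	return out
}

func main() {
	fmt.Println(rescalePositive([]float64{0.5, 1.0, 2.0}))
}
```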
tests/integration/security/authz_test.go
Registered: Fri Jun 14 15:00:06 UTC 2024 - Last Modified: Wed May 08 23:36:51 UTC 2024 - 50.1K bytes - Viewed (0) -
tensorflow/compiler/mlir/lite/ir/tfl_ops.td
    INTERSPEECH, 2014.

    The coupling of input and forget gate (CIFG) is based on:
    http://arxiv.org/pdf/1503.04069.pdf
    Greff et al. "LSTM: A Search Space Odyssey"

    The layer normalization is based on:
    https://arxiv.org/pdf/1607.06450.pdf
    Ba et al. "Layer Normalization"
  }];

  let arguments = (ins
    TFL_TensorOf<[F32, QI8, QI16]>:$input,

    // Weights
    TFL_TensorOfOrNone<[F32, QI8]>:$input_to_input_weights,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Jun 06 19:09:08 UTC 2024 - 186K bytes - Viewed (0) -
src/net/netip/netip_test.go
// Match /0 either order
{pfx("1.2.3.0/32"), pfx("0.0.0.0/0"), true},
{pfx("0.0.0.0/0"), pfx("1.2.3.0/32"), true},
{pfx("1.2.3.0/32"), pfx("5.5.5.5/0"), true}, // normalization not required; /0 means true

// IPv6 overlapping
{pfx("5::1/128"), pfx("5::0/8"), true},
{pfx("5::0/8"), pfx("5::1/128"), true},

// IPv6 not overlapping
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue Jun 04 17:10:01 UTC 2024 - 54.3K bytes - Viewed (0)