Results 51 - 60 of 283 for Scales (0.15 sec)
tensorflow/compiler/mlir/tfr/ir/tfr_ops.cc
```cpp
 public:
  // Replace quant_rescale (input, scale, zp) with
  // tf.Cast(tf.Round(tf.Cast(input, f32) * scale) + tf.Cast(zp, f32), i32)
  LogicalResult matchAndRewrite(TFRQuantRescaleOp rescale_op,
                                PatternRewriter &rewriter) const override {
    Value input = rescale_op.getInput();
    Value scale = rescale_op.getScale();
    Value zp = rescale_op.getZp();
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Nov 21 16:55:41 UTC 2023 - 38.2K bytes - Viewed (0) -
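The comment in the snippet above gives the exact arithmetic of the rewrite target. As a sanity check, here is a minimal NumPy sketch of that formula (an illustration only, not the MLIR pattern itself; the function name is mine):

```python
import numpy as np

def quant_rescale(input_i32: np.ndarray, scale: float, zp: int) -> np.ndarray:
    """Numeric equivalent of the rewrite target:
    tf.Cast(tf.Round(tf.Cast(input, f32) * scale) + tf.Cast(zp, f32), i32)."""
    # Cast(input, f32) * scale
    scaled = input_i32.astype(np.float32) * np.float32(scale)
    # Round(...) + Cast(zp, f32), then Cast(..., i32)
    return (np.round(scaled) + np.float32(zp)).astype(np.int32)

print(quant_rescale(np.array([100, -100], dtype=np.int32), 0.5, 3))  # [ 53 -47]
```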
src/internal/trace/traceviewer/http.go
```html
and the other metrics.
The "Network", "Timers", and "Syscalls" traces indicate events in
the runtime that cause goroutines to wake up.
</p>
<p>
The visualization allows you to navigate events at scales ranging from
several seconds to a handful of nanoseconds.
Consult the documentation for the Chromium
<a href='https://www.chromium.org/developers/how-tos/trace-event-profiling-tool/'>Trace Event Profiling Tool</a>
```
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue Nov 21 21:29:53 UTC 2023 - 12.6K bytes - Viewed (0) -
src/crypto/internal/edwards25519/field/fe_test.go
"math/bits" mathrand "math/rand" "reflect" "testing" "testing/quick" ) func (v Element) String() string { return hex.EncodeToString(v.Bytes()) } // quickCheckConfig returns a quick.Config that scales the max count by the // given factor if the -short flag is not set. func quickCheckConfig(slowScale int) *quick.Config { cfg := new(quick.Config) if !testing.Short() { cfg.MaxCountScale = float64(slowScale) }
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Aug 28 17:26:17 UTC 2023 - 13.9K bytes - Viewed (0) -
tensorflow/compiler/mlir/tfr/ir/tfr_ops.td
````tablegen
    `scale` and `zero point`. Currently, the allowed activations are
    `NONE`, `RELU`, `RELU6` and `RELU_N1_TO_1`.

    Example:

    ```mlir
    %3, %4 = tfr.quant_act_range(%2, %1, %0) :
        (tfr.attr, float, i64) -> (tfr.tensor, tfr.tensor)
    ```
  }];

  let arguments = (ins
    TFR_AttrType:$act,
    F32:$scale,
    I64:$zp);
````
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Apr 22 10:54:29 UTC 2024 - 17.4K bytes - Viewed (0) -
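The `tfr.quant_act_range` op above maps an activation kind plus a (scale, zero point) pair to a clamped quantized range. A plain-Python sketch of that computation, assuming int8 storage and the usual affine mapping `q = round(r / scale) + zp` (the function and its defaults are illustrative, not the TFR implementation):

```python
def quant_act_range(act: str, scale: float, zp: int,
                    qmin: int = -128, qmax: int = 127):
    """Quantized (min, max) for an activation, assuming int8 storage.
    A real value r maps to q = round(r / scale) + zp."""
    # Real-valued bounds each activation imposes on its output.
    bounds = {
        "NONE": (float("-inf"), float("inf")),
        "RELU": (0.0, float("inf")),
        "RELU6": (0.0, 6.0),
        "RELU_N1_TO_1": (-1.0, 1.0),
    }[act]
    lo = qmin if bounds[0] == float("-inf") else max(qmin, round(bounds[0] / scale) + zp)
    hi = qmax if bounds[1] == float("inf") else min(qmax, round(bounds[1] / scale) + zp)
    return lo, hi

print(quant_act_range("RELU6", 0.05, -10))  # (-10, 110)
```

With `NONE` the full storage range `(-128, 127)` comes back unchanged; the other activations tighten it.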
pkg/registry/batch/job/strategy.go
```go
	// rule for checking the format of completedIndexes expects them to be
	// below .spec.completions; however, it is ok if the
	// status.completedIndexes go beyond completions just after a user scales
	// down a Job.
	isIndexed := ptr.Deref(newJob.Spec.CompletionMode, batch.NonIndexedCompletion) == batch.IndexedCompletion
```
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Fri Mar 08 16:43:24 UTC 2024 - 18.4K bytes - Viewed (0) -
tensorflow/compiler/mlir/lite/flatbuffer_export.cc
```cpp
        mlir::dyn_cast<mlir::quant::UniformQuantizedType>(element_type)) {
      std::vector<float> scales = {static_cast<float>(qtype.getScale())};
      std::vector<int64_t> zero_points = {qtype.getZeroPoint()};
      q_params = tflite::CreateQuantizationParameters(
          builder_, /*min=*/0, /*max=*/0, builder_.CreateVector<float>(scales),
          builder_.CreateVector<int64_t>(zero_points));
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 21:41:49 UTC 2024 - 164.5K bytes - Viewed (0) -
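`CreateQuantizationParameters` above records per-tensor `scales` and `zero_points` as one-element vectors. Under the standard affine scheme those parameters recover real values as `real = scale * (q - zero_point)`; a small sketch of that mapping (a hypothetical helper mirroring the one-element vectors, not TFLite code):

```python
def dequantize(q, scales, zero_points):
    """Per-tensor affine dequantization: real = scale * (q - zero_point).
    `scales` and `zero_points` are one-element lists, mirroring the
    vectors passed to CreateQuantizationParameters."""
    scale, zp = scales[0], zero_points[0]
    return [scale * (v - zp) for v in q]

print(dequantize([130, 128, 120], scales=[0.02], zero_points=[128]))
```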
architecture/ambient/ztunnel.md
* Specifically, ztunnel should be able to send a request to the control plane to answer "I got a request to send traffic to 1.1.1.1, what is 1.1.1.1?"
  * While this is not needed for small scales, this is important for the long tail of massive clusters (think 1 million endpoints), where the entire set of endpoints cannot reasonably be replicated to each ztunnel.
  * It should not be client-specific.
Registered: Fri Jun 14 15:00:06 UTC 2024 - Last Modified: Thu Apr 25 22:35:16 UTC 2024 - 16.6K bytes - Viewed (0) -
tensorflow/compiler/mlir/lite/transforms/passes.td
```tablegen
           "enable post training quantization. Only used in tests">,
    Option<"legacy_float_scale_", "legacy-float-scale", "bool", "false",
           "calculate quantization scales in float instead of double">,
    Option<"disable_per_channel_", "disable-per-channel", "bool", "false",
           "Whether to disable per-channel quantized weights.">,
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Apr 24 20:30:06 UTC 2024 - 22.6K bytes - Viewed (0) -
src/cmd/vendor/github.com/google/pprof/internal/driver/commands.go
```go
		"to facilitate comparison with original graph."),
	"unit": helpText(
		"Measurement units to display",
		"Scale the sample values to this unit.",
		"For time-based profiles, use seconds, milliseconds, nanoseconds, etc.",
		"For memory profiles, use megabytes, kilobytes, bytes, etc.",
		"Using auto will scale each value independently to the most natural unit."),
	"compact_labels": "Show minimal headers",
```
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Fri Feb 16 15:19:53 UTC 2024 - 18.5K bytes - Viewed (0) -
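The help text above says `unit=auto` scales each value independently to its most natural unit. For time values, that idea can be sketched as picking the largest unit in which the value is at least 1 (a simplified illustration, not pprof's actual code):

```python
def auto_scale(value_ns: float) -> str:
    """Pick the most natural time unit for a nanosecond value, roughly in
    the spirit of pprof's unit=auto (a simplified sketch)."""
    units = [("s", 1e9), ("ms", 1e6), ("us", 1e3)]
    for name, factor in units:
        if value_ns >= factor:
            return f"{value_ns / factor:g}{name}"
    return f"{value_ns:g}ns"  # below 1us, report nanoseconds as-is

print(auto_scale(2_500_000))  # 2.5ms
```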
tensorflow/compiler/mlir/lite/transforms/prepare_quantize_helper.h
```cpp
  int activation_number_of_bits_;
};

// Returns a function that returns the quantized type of a bias input.
// The scale of bias is a multiplication of given scale and scales from the
// quantization type of other operands.
inline quant::AccumulatorScaleFunc GetUniformQuantizedTypeForBiasWithScale(
    double scale) {
  return [=](const std::vector<quant::QuantParams>& quant_params,
             const int adjusted_quant_dim,
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 03 18:01:23 UTC 2024 - 28K bytes - Viewed (0)
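The comment above states the bias-scale rule: the bias scale is the product of the given (input/activation) scale and the scales from the other operands, e.g. one weight scale per output channel. A minimal sketch of that rule (the helper name is mine, not from the source):

```python
def bias_scales(input_scale: float, weight_scales: list) -> list:
    """Bias quantization scale per output channel: the product of the
    input (activation) scale and each weight scale, as described in the
    comment on GetUniformQuantizedTypeForBiasWithScale."""
    return [input_scale * w for w in weight_scales]

print(bias_scales(0.5, [0.1, 0.2, 0.4]))
```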