Results 61 - 70 of 595 for weights (0.39 sec)
docs/em/docs/tutorial/body-nested-models.md
!!! tip Keep in mind that JSON only supports `str` as keys. But Pydantic has automatic data conversion. This means that, even though your API clients can only send strings as keys, as long as those strings contain pure integers, Pydantic will convert and validate them. And the `dict` you receive as `weights` will actually have `int` keys and `float` values. ## Recap With **FastAPI** you have the maximum flexibility provided by Pydantic models, while keeping your code simple, short and elegant. But with all the benefits: * Editor support (completion everywhere!) * Data conversion (a.k.a. parsing / serialization) * Data validation
Registered: Mon Jun 17 08:32:26 UTC 2024 - Last Modified: Fri Mar 22 01:42:11 UTC 2024 - 9.2K bytes - Viewed (0) -
docs/pt/docs/tutorial/body-nested-models.md
This means that, even though API clients can only send strings as keys, as long as those strings contain pure integers, Pydantic will convert and validate them. And the `dict` you receive as `weights` will actually have `int` keys and `float` values. ## Recap With **FastAPI** you have the maximum flexibility provided by Pydantic models, while your code is kept simple, short and elegant.
Registered: Mon Jun 17 08:32:26 UTC 2024 - Last Modified: Thu Apr 18 19:53:19 UTC 2024 - 7.4K bytes - Viewed (0) -
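The two results above describe how Pydantic coerces JSON string keys into `int` keys with `float` values for a `dict` field like `weights`. A minimal sketch of that coercion in plain Python (without Pydantic itself, so the helper name `coerce_weights` is hypothetical):

```python
# Mimic the dict[int, float] coercion described above: clients can only send
# JSON string keys, but keys containing pure integers are converted to int,
# values to float, and anything else is rejected.

def coerce_weights(raw: dict) -> dict:
    weights = {}
    for key, value in raw.items():
        if not str(key).lstrip("-").isdigit():
            raise ValueError(f"key {key!r} is not a valid integer")
        weights[int(key)] = float(value)
    return weights

print(coerce_weights({"1": 2.7, "2": "3.5"}))  # {1: 2.7, 2: 3.5}
```

Pydantic performs this automatically when the field is annotated as `dict[int, float]`; the sketch only illustrates the observable behavior.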
pkg/scheduler/framework/plugins/interpodaffinity/scoring.go
}

func (m scoreMap) processTerm(term *framework.AffinityTerm, weight int32, pod *v1.Pod, nsLabels labels.Set, node *v1.Node, multiplier int32) {
	if term.Matches(pod, nsLabels) {
		if tpValue, tpValueExist := node.Labels[term.TopologyKey]; tpValueExist {
			if m[term.TopologyKey] == nil {
				m[term.TopologyKey] = make(map[string]int64)
			}
			m[term.TopologyKey][tpValue] += int64(weight * multiplier)
		}
	}
}
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Fri Dec 15 03:30:06 UTC 2023 - 10.5K bytes - Viewed (0) -
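A hypothetical Python sketch of the accumulation `processTerm` performs above: when an affinity term matches and the node carries the term's topology key, the node's topology value gets `weight * multiplier` added to its running score (the `matches` and `tp_value` parameters stand in for the `term.Matches` call and node-label lookup):

```python
# Accumulate weight * multiplier per (topology key, topology value),
# creating the inner map on first use, as the Go snippet does.

def process_term(score_map, topology_key, tp_value, matches, weight, multiplier):
    if not matches:
        return
    score_map.setdefault(topology_key, {})
    score_map[topology_key][tp_value] = (
        score_map[topology_key].get(tp_value, 0) + weight * multiplier
    )

scores = {}
process_term(scores, "zone", "us-east-1a", True, 10, 2)
process_term(scores, "zone", "us-east-1a", True, 5, 2)
print(scores)  # {'zone': {'us-east-1a': 30}}
```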
tensorflow/compiler/mlir/quantization/common/attrs_and_constraints.h
inline constexpr std::array<int64_t, 4> kNchwToNhwcPermutation = {0, 2, 3, 1};

// Permutation from the OIHW (== (output features, input features, height,
// width)) tensor format to HWIO. This is commonly used to transpose convolution
// weights represented as OIHW format to HWIO, which is more desirable for
// certain downstream optimization passes (e.g. XLA).
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 9.9K bytes - Viewed (0) -
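A small sketch of how such permutation constants are applied: entry `i` of the permutation names which axis of the input layout supplies axis `i` of the output layout. `NCHW_TO_NHWC` is taken from the header above; `OIHW_TO_HWIO` is my assumption of what the truncated comment describes (H, W, I, O drawn from O, I, H, W):

```python
# Apply a layout permutation to a shape tuple.

NCHW_TO_NHWC = (0, 2, 3, 1)   # from the header snippet above
OIHW_TO_HWIO = (2, 3, 1, 0)   # assumed permutation for the OIHW -> HWIO case

def permute(shape, perm):
    return tuple(shape[axis] for axis in perm)

print(permute((1, 3, 8, 8), NCHW_TO_NHWC))   # (1, 8, 8, 3)
print(permute((64, 3, 5, 5), OIHW_TO_HWIO))  # (5, 5, 3, 64)
```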
src/go/doc/comment/text.go
// โ[The least weight subsequence problem],โ FOCS 1985, pp. 137-143. // // [The least weight subsequence problem]: https://doi.org/10.1109/SFCS.1985.60 func wrap(words []string, max int) (seq []int) { // The algorithm requires that our scoring function be concave, // meaning that for all iโ โค iโ < jโ โค jโ, // weight(iโ, jโ) + weight(iโ, jโ) โค weight(iโ, jโ) + weight(iโ, jโ). //
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu Oct 19 12:02:03 UTC 2023 - 8.8K bytes - Viewed (0) -
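The wrapping problem that `wrap` solves can be sketched with a plain O(n²) dynamic program: choose break points that minimize the sum of squared leftover space on each line except the last. This is only a rough sketch of the cost model, ignoring Go's punctuation preferences and the concavity-based speedup the comment refers to; `wrap_cost` is a hypothetical name:

```python
# DP over break points: best[j] = min cost of wrapping the first j words.
# The last line costs nothing; every other line costs (leftover space)^2.

def wrap_cost(words, max_width):
    n = len(words)
    INF = float("inf")
    best = [INF] * (n + 1)
    best[0] = 0
    for j in range(1, n + 1):
        width = -1  # cancels the leading space of the first word on the line
        for i in range(j - 1, -1, -1):  # try putting words[i:j] on one line
            width += len(words[i]) + 1
            if width > max_width:
                break
            leftover = 0 if j == n else max_width - width
            best[j] = min(best[j], best[i] + leftover ** 2)
    return best[n]

print(wrap_cost(["aa", "bb", "cc"], 5))  # 0: "aa bb" fits exactly, "cc" is last
```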
tensorflow/compiler/mlir/quantization/tensorflow/passes/quantize_composite_functions.cc
      enable_per_channel_quantization_));

  // Apply activation-weight quantization.
  if (quantization_method_ ==
      tensorflow::quantization::QuantizationMethod::METHOD_STATIC_RANGE_INT8) {
    // For XLA case, weight quantization will be applied for the remaining f32
    // weights even in SRQ.
    pm.addNestedPass<func::FuncOp>(
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 54.5K bytes - Viewed (0) -
tensorflow/compiler/mlir/lite/schema/schema.fbs
  SPARSE = 1,
  DENSE = 2,
}

table LSHProjectionOptions {
  type: LSHProjectionType;
}

table SVDFOptions {
  rank:int;
  fused_activation_function:ActivationFunctionType;
  // For weights-only quantization, use asymmetric quantization for non
  // constant inputs at evaluation time.
  asymmetric_quantize_inputs:bool;
}

// An implementation of TensorFlow RNNCell.
table RNNOptions {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 03 18:01:23 UTC 2024 - 41.7K bytes - Viewed (0) -
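The `asymmetric_quantize_inputs` option above refers to asymmetric quantization: mapping a float range onto int8 with a scale and a possibly nonzero zero point, unlike symmetric quantization where the zero point is pinned to 0. A hedged, library-free sketch of the idea (not the TFLite kernel itself; the function name and exact rounding are my assumptions):

```python
# Asymmetric int8 quantization: the range [lo, hi] (widened to include 0)
# maps linearly onto [-128, 127] via a scale and a zero point.

def asymmetric_quantize(values, num_bits=8):
    lo = min(min(values), 0.0)
    hi = max(max(values), 0.0)
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against an all-zero range
    zero_point = round(qmin - lo / scale)
    quantized = [
        max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values
    ]
    return quantized, scale, zero_point

print(asymmetric_quantize([0.0, 1.0]))  # ([-128, 127], 1/255, -128)
```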
pilot/pkg/xds/endpoints/endpoint_builder.go
}

func (e *LocalityEndpoints) refreshWeight() {
	var weight *wrapperspb.UInt32Value
	if len(e.llbEndpoints.LbEndpoints) == 0 {
		weight = nil
	} else {
		weight = &wrapperspb.UInt32Value{}
		for _, lbEp := range e.llbEndpoints.LbEndpoints {
			weight.Value += lbEp.GetLoadBalancingWeight().Value
		}
	}
	e.llbEndpoints.LoadBalancingWeight = weight
}

func (e *LocalityEndpoints) AssertInvarianceInTest() {
Registered: Fri Jun 14 15:00:06 UTC 2024 - Last Modified: Sun Apr 28 02:18:19 UTC 2024 - 26.1K bytes - Viewed (0) -
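The logic of `refreshWeight` above reduces to: a locality's load-balancing weight is the sum of its endpoints' weights, or absent when there are no endpoints. A trivial Python sketch of that rule (the `None` stands in for the nil `*wrapperspb.UInt32Value`):

```python
# Sum endpoint weights into a locality weight; no endpoints means no weight.

def refresh_weight(endpoint_weights):
    if not endpoint_weights:
        return None
    return sum(endpoint_weights)

print(refresh_weight([3, 1, 2]))  # 6
print(refresh_weight([]))         # None
```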
tensorflow/compiler/mlir/quantization/tensorflow/passes/lift_quantizable_spots_as_functions.cc
"Non-constant weights are not supported at the moment," " except matmul and einsum."); } else if (!quant_options_.enable_two_input_tensors() && !is_unitwise_quantization_enabled) { return absl::InternalError( "Quantization is disabled for this op due to the non-constant " "weight. You can enable it by setting `enable_two_input_tensors` "
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 10 04:07:09 UTC 2024 - 16.4K bytes - Viewed (0) -
tensorflow/compiler/mlir/lite/tests/prepare-quantize-dynamic-range.mlir
// RUN: tf-opt %s -tfl-prepare-quantize-dynamic-range="min-elements-for-weights=4000 enable-custom-op-quantization=CustomTestOp=1-3,CustomTestOp3=3" | FileCheck --check-prefix=MinElement %s
// RUN: tf-opt %s -tfl-prepare-quantize-dynamic-range="min-elements-for-weights=19" | FileCheck --check-prefix=LSTMOpQuantized %s
// RUN: tf-opt %s -tfl-prepare-quantize-dynamic-range="min-elements-for-weights=21" | FileCheck --check-prefix=LSTMOpNotQuantized %s
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu May 02 09:41:17 UTC 2024 - 38.2K bytes - Viewed (0)
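The `min-elements-for-weights` flag exercised above gates dynamic-range quantization by weight size: only weights with at least that many elements are considered. A sketch of that thresholding (the helper name is hypothetical; the 19 vs. 21 thresholds mirror the LSTM test cases, which presumably straddle a 20-element weight):

```python
# Quantize a weight only if its element count meets the threshold.

from math import prod

def should_quantize(weight_shape, min_elements):
    return prod(weight_shape) >= min_elements

print(should_quantize((20, 1), 19))  # True: 20 elements >= 19
print(should_quantize((20, 1), 21))  # False: 20 elements < 21
```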