Results 1 - 10 of 14 for feedsif (0.28 sec)
android/guava/src/com/google/common/io/BaseEncoding.java
 * omitted} or {@linkplain #withPadChar(char) replaced}.
 *
 * <p>No line feeds are added by default, as per <a
 * href="http://tools.ietf.org/html/rfc4648#section-3.1">RFC 4648 section 3.1</a>, Line Feeds in
 * Encoded Data. Line feeds may be added using {@link #withSeparator(String, int)}.
 */
public static BaseEncoding base64() {
  return BASE64;
}
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Fri Mar 15 16:33:32 UTC 2024 - 41.7K bytes - Viewed (0) -
guava/src/com/google/common/io/BaseEncoding.java
 * omitted} or {@linkplain #withPadChar(char) replaced}.
 *
 * <p>No line feeds are added by default, as per <a
 * href="http://tools.ietf.org/html/rfc4648#section-3.1">RFC 4648 section 3.1</a>, Line Feeds in
 * Encoded Data. Line feeds may be added using {@link #withSeparator(String, int)}.
 */
public static BaseEncoding base64() {
  return BASE64;
}
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Fri Mar 15 16:33:32 UTC 2024 - 41.7K bytes - Viewed (0) -
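The Guava snippet above describes RFC 4648's default of emitting no line feeds, with separators opt-in via withSeparator. The same contrast can be illustrated with the Python standard library as an analogous sketch: b64encode emits one unbroken line, while encodebytes inserts MIME-style breaks every 76 characters.

```python
import base64

# RFC 4648 section 3.1: no line feeds in the encoded output by default.
data = b"\x00" * 100
plain = base64.b64encode(data)   # one unbroken line, no separators
mime = base64.encodebytes(data)  # MIME-style: a newline after every 76 chars

assert b"\n" not in plain
assert all(len(line) <= 76 for line in mime.splitlines())
```

Guava's withSeparator(String, int) generalizes the second form by letting the caller choose both the separator string and the line length.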
src/encoding/xml/read_test.go
http://code.google.com/p/rietveld/issues/detail?id=155
The server side of the protocol is trivial:
1. add a <link rel="hub" href="hub-server"> tag to all feeds that will be pubsubhubbubbed.
2. every time one of those feeds changes, tell the hub with a simple POST request.
I have tested this by adding debug prints to a local hub server and checking that the server got the right publish requests.
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue Mar 26 19:58:28 UTC 2024 - 29.1K bytes - Viewed (0) -
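Step 2 of the protocol quoted above is a plain form-encoded POST to the hub. A minimal sketch of building that publish ping's body, assuming the standard PubSubHubbub parameter names hub.mode and hub.url (the feed URL below is a placeholder):

```python
from urllib.parse import urlencode

def publish_ping_body(feed_url):
    # PubSubHubbub publish notification: tell the hub which feed changed.
    return urlencode({"hub.mode": "publish", "hub.url": feed_url})

body = publish_ping_body("http://example.com/feed.atom")
# POST this body to the hub URL advertised in the feed's <link rel="hub"> tag.
```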
tensorflow/compiler/mlir/tf2xla/api/v1/cluster_tf.cc
// inference.
pm.addPass(mlir::TF::CreateGuaranteeAllFuncsOneUsePass());
pm.addPass(mlir::TF::CreateTFShapeInferencePass());
// For V1 compatibility, we process a module where the graph does not have
// feeds and fetches. We extract first the TPU computation in a submodule,
// where it'll be in a function with args and returned values, much more
// like a TF v2 module. We can then run the usual pipeline on this nested
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Mar 28 22:25:18 UTC 2024 - 9.8K bytes - Viewed (0) -
tensorflow/compiler/mlir/tf2xla/internal/passes/tpu_sharding_identification_pass.cc
}

// Returns a TPUPartitionedInput op connected to a `tf_device.cluster_func`
// operand value if it has an XLA sharding. If value is a resource type then
// TPUPartitionedInput op will be connected to a ReadVariable op that feeds into
// a `tf_device.cluster_func`.
mlir::Operation* GetXlaShardingFromOperand(Value value) {
  Value value_to_visit = value;
  if (auto read_var = value_to_visit.getDefiningOp<mlir::TF::ReadVariableOp>())
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Apr 30 02:01:13 UTC 2024 - 28.9K bytes - Viewed (0) -
tensorflow/compiler/mlir/quantization/tensorflow/python/quantize_model.py
related config.
representative_dataset: a generator that returns a dictionary in
  {input_key: input_value} format or a tuple with signature key and a
  dictionary in {input_key: input_value} format that feeds calibration data
  for quantizing model. This should be provided when the model is not a QAT
  model.

Returns:
  A SavedModel object with TF quantization applied.

Raises:
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 17 03:36:50 UTC 2024 - 34.2K bytes - Viewed (0) -
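The docstring above specifies the {input_key: input_value} generator format for calibration data. A minimal sketch of such a generator; the key name "input_tensor", sample count, and shape are placeholders, since real code would use the model's actual signature inputs:

```python
import random

def representative_dataset():
    # Yields calibration samples in {input_key: input_value} format.
    for _ in range(8):
        yield {"input_tensor": [[random.random() for _ in range(4)]]}

samples = list(representative_dataset())
```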
maven-core/src/main/java/org/apache/maven/execution/MavenSession.java
 * Adapt a {@link MavenExecutionRequest} to a {@link Settings} object for use in the Maven core.
 * We want to make sure that what is asked for in the execution request overrides what is in the settings.
 * The CLI feeds into an execution request so if a particular value is present in the execution request
 * then we will take that over the value coming from the user settings.
 */
Registered: Wed Jun 12 09:55:16 UTC 2024 - Last Modified: Mon Mar 25 10:50:01 UTC 2024 - 16.6K bytes - Viewed (0) -
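The Javadoc above describes a simple precedence rule: execution-request values win over user-settings values. A sketch of that merge logic (the key names below are illustrative, not actual Maven settings keys):

```python
def effective_config(user_settings, execution_request):
    # Start from the user settings, then let any value explicitly set on
    # the execution request (e.g. via the CLI) override it.
    merged = dict(user_settings)
    merged.update({k: v for k, v in execution_request.items() if v is not None})
    return merged

cfg = effective_config(
    {"localRepository": "~/.m2/repository", "offline": False},
    {"offline": True},  # set on the CLI, so it takes precedence
)
```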
tensorflow/compiler/jit/compilability_check_util.h
// constant arguments? Even constant arguments get an _Arg node in the graph
// instantiated for Function compilation. The tf2xla kernel for constant _Arg
// nodes takes the constant value, converts it to XlaLiteral, and feeds it
// to xla::ComputationBuilder.ConstantLiteral, which returns the handle. This
// constant XlaLiteral is included in the HLO graph, and subsequently, in
// the actual executable, which is copied to the device before being
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Sep 06 19:12:29 UTC 2023 - 14.9K bytes - Viewed (0) -
tensorflow/cc/saved_model/loader.cc
// leaving behind non-GC'ed state.
//
// Detailed motivation behind this approach, from ashankar@:
//
// Each call to Session::Run() that identifies a new subgraph (based on feeds
// and fetches) creates some datastructures that live as long as the session
// (the partitioned graph, associated executors etc.).
//
// A pathological case of this would be if say the initialization op
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Apr 02 04:36:00 UTC 2024 - 23K bytes - Viewed (0) -
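The comment above describes per-subgraph state keyed by the (feeds, fetches) signature, living for the rest of the session. A toy sketch of that caching behaviour, with illustrative names (in TensorFlow the cached state would be the partitioned graph, executors, etc.):

```python
class Session:
    def __init__(self):
        self._subgraph_state = {}

    def run(self, feeds, fetches):
        # A new (feeds, fetches) combination creates a new cache entry;
        # repeated calls with the same signature reuse it.
        key = (tuple(sorted(feeds)), tuple(sorted(fetches)))
        if key not in self._subgraph_state:
            self._subgraph_state[key] = {"executions": 0}
        self._subgraph_state[key]["executions"] += 1
        return self._subgraph_state[key]

sess = Session()
sess.run(["x"], ["y"])
sess.run(["x"], ["y"])       # same signature: cached entry is reused
sess.run(["x", "z"], ["y"])  # new signature: a second entry is created
```

This is why the pathological case mentioned above matters: every distinct feed/fetch combination grows state that is never garbage-collected until the session is closed.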
tensorflow/compiler/jit/compilability_check_util.cc
                            encapsulating_function, uncompilable_nodes);
  LogNotCompilable(node, uncompilable_reason);
  return false;
}

// _Arg nodes in a top-level function represent feeds and _Retval nodes in a
// top-level function represent fetches.
if (stack_depth == 1 &&
    (node.type_string() == "_Arg" || node.type_string() == "_Retval")) {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Mar 12 06:33:33 UTC 2024 - 30.3K bytes - Viewed (0)
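The condition in the snippet above is worth restating: only at the top level (stack_depth == 1) do _Arg and _Retval nodes stand for feeds and fetches; inside nested function bodies they are ordinary arguments and return values. A one-line sketch of that check, with hypothetical names:

```python
def is_feed_or_fetch(node_type, stack_depth):
    # Mirrors the check above: _Arg/_Retval are feeds/fetches only when the
    # node sits in the top-level function, not in a nested one.
    return stack_depth == 1 and node_type in ("_Arg", "_Retval")
```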