Results 11 - 20 of 114 for LLM (0.04 seconds)
docs/tr/llm-prompt.md
Kader Miyanyedi <******@****.***> 1768941243 +0300
Created: Sun Apr 05 07:19:11 GMT 2026 - Last Modified: Tue Jan 20 20:34:03 GMT 2026 - 2.7K bytes - Click Count (0) -
scripts/general-llm-prompt.md
Motov Yurii <******@****.***> 1773831336 +0100
Created: Sun Apr 05 07:19:11 GMT 2026 - Last Modified: Wed Mar 18 10:55:36 GMT 2026 - 14.6K bytes - Click Count (0) -
docs/en/docs/contributing.md
#### LLM Prompt per Language Each language has a directory: [https://github.com/fastapi/fastapi/tree/master/docs](https://github.com/fastapi/fastapi/tree/master/docs), in it you can see a file `llm-prompt.md` with the prompt specific for that language. For example, for Spanish, the prompt is at: [`docs/es/llm-prompt.md`](https://github.com/fastapi/fastapi/blob/master/docs/es/llm-prompt.md).
Created: Sun Apr 05 07:19:11 GMT 2026 - Last Modified: Mon Mar 23 13:59:26 GMT 2026 - 10.6K bytes - Click Count (0) -
src/main/resources/fess_llm.xml
<component name="markdownRenderer" class="org.codelibs.fess.helper.MarkdownRenderer"> <postConstruct name="init"/> </component> <!-- LLM client manager --> <component name="llmClientManager" class="org.codelibs.fess.llm.LlmClientManager"> </component> <!-- LLM client components are provided by fess-llm-* plugins via fess_llm++.xml -->
Created: Tue Mar 31 13:07:34 GMT 2026 - Last Modified: Wed Mar 04 15:19:41 GMT 2026 - 779 bytes - Click Count (0) -
src/main/java/org/codelibs/fess/llm/AbstractLlmClient.java
/** * Gets the configured LLM type. * * @return the LLM type from configuration */ protected abstract String getLlmType(); /** * Gets the configuration prefix for this provider. * Used to look up per-prompt-type parameters from FessConfig. * * @return the config prefix (e.g. "rag.llm.openai") */
Created: Tue Mar 31 13:07:34 GMT 2026 - Last Modified: Sat Mar 21 06:04:58 GMT 2026 - 72K bytes - Click Count (0) -
src/main/java/org/codelibs/fess/llm/LlmClientManager.java
* It manages registered LLM clients and provides access to the configured * LLM provider based on the current configuration. */ public class LlmClientManager { private static final Logger logger = LogManager.getLogger(LlmClientManager.class); /** The list of registered LLM clients. */ protected final List<LlmClient> clientList = new CopyOnWriteArrayList<>(); /**
Created: Tue Mar 31 13:07:34 GMT 2026 - Last Modified: Thu Mar 19 11:10:51 GMT 2026 - 17.4K bytes - Click Count (0) -
src/main/java/org/codelibs/fess/llm/LlmClient.java
* governing permissions and limitations under the License. */ package org.codelibs.fess.llm; import java.util.List; import java.util.Map; /** * Interface for LLM (Large Language Model) clients. * Implementations provide integration with different LLM providers * such as Ollama, OpenAI, and Google Gemini. * * In addition to low-level chat operations, this interface defines
Created: Tue Mar 31 13:07:34 GMT 2026 - Last Modified: Thu Mar 19 07:04:54 GMT 2026 - 7.3K bytes - Click Count (0) -
scripts/translate.py
if lang == "en": continue lang_prompt_path = Path(f"docs/{lang}/llm-prompt.md") if lang_prompt_path.exists(): translatable_langs.append(lang) return translatable_langs @app.command() def list_llm_translatable() -> list[str]: translatable_langs = get_llm_translatable() print("LLM translatable languages:", translatable_langs) return translatable_langs
Created: Sun Apr 05 07:19:11 GMT 2026 - Last Modified: Thu Mar 19 17:37:41 GMT 2026 - 15.8K bytes - Click Count (0) -
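The `scripts/translate.py` snippet above is flattened by the search highlighter; a minimal, self-contained reconstruction of its language-discovery step might look like the following (function and variable names come from the snippet itself, while the language list, the `docs_root` parameter, and the surrounding CLI wiring are assumptions for illustration):

```python
from pathlib import Path

# Hypothetical language list; the real script derives the set of
# languages elsewhere before filtering them here.
ALL_LANGS = ["en", "es", "tr", "de"]

def get_llm_translatable(docs_root: Path = Path("docs")) -> list[str]:
    """Return languages that ship an llm-prompt.md, skipping English."""
    translatable_langs = []
    for lang in ALL_LANGS:
        if lang == "en":
            continue
        # Per the snippet: a language is LLM-translatable iff
        # docs/{lang}/llm-prompt.md exists.
        lang_prompt_path = docs_root / lang / "llm-prompt.md"
        if lang_prompt_path.exists():
            translatable_langs.append(lang)
    return translatable_langs
```

This matches the pattern visible in the first two results: per-language prompt files such as `docs/tr/llm-prompt.md` gate whether that language is translated.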
src/main/java/org/codelibs/fess/llm/LlmStreamCallback.java
*/ package org.codelibs.fess.llm; /** * Callback interface for receiving streaming chat responses from LLM. */ @FunctionalInterface public interface LlmStreamCallback { /** * Called for each chunk of the streaming response. * * @param chunk the text chunk from the LLM response * @param done true if this is the final chunk */
Created: Tue Mar 31 13:07:34 GMT 2026 - Last Modified: Mon Jan 12 10:32:40 GMT 2026 - 1.2K bytes - Click Count (0) -
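The `LlmStreamCallback` snippet describes a single-method callback invoked once per streamed chunk, with a flag marking the final chunk. As a cross-language illustration of the same pattern (sketched in Python for consistency with the example above; the names here are ours, not Fess's, and the Java method name is not visible in the snippet):

```python
from typing import Callable

# Mirrors the @FunctionalInterface shape from the snippet:
# a callable receiving (chunk, done).
StreamCallback = Callable[[str, bool], None]

def stream_response(chunks: list[str], callback: StreamCallback) -> None:
    """Feed each chunk to the callback, flagging the last with done=True."""
    for i, chunk in enumerate(chunks):
        callback(chunk, i == len(chunks) - 1)
```

A consumer would typically accumulate chunks until `done` is true, which is the usual contract for streaming chat APIs.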
docs/en/docs/advanced/vibe.md
The body should be annotated with `Any`, because the request and the response would be... well... **anything**. 🤷 The idea is that you would receive the payload and send it **directly** to an LLM provider, using a `prompt` to tell the LLM what to do, and return the response **as is**. No questions asked. You don't even need to write the body of the function. The `@app.vibe()` decorator does everything for you based on AI vibes:
Created: Sun Apr 05 07:19:11 GMT 2026 - Last Modified: Wed Apr 01 16:16:24 GMT 2026 - 2K bytes - Click Count (0)