Results 1 - 10 of 161 for crawlen (0.05 sec)
- src/main/resources/fess_label_nl.properties
  labels.wizard_button_register_again=Continu aanmaken
  labels.wizard_button_register_next=Maken
  labels.wizard_start_crawling_title=Start crawlen
  labels.wizard_start_crawler_title=Crawler
  labels.wizard_start_crawling_desc=U kunt nu beginnen met crawlen door op de knop "Start crawlen" te klikken.
  labels.wizard_button_start_crawling=Start crawlen
  labels.wizard_button_finish=Overslaan
  labels.search_list_configuration=Zoeken
  labels.search_list_button_delete=Verwijderen
  Registered: Sat Dec 20 09:19:18 UTC 2025 - Last Modified: Sat Dec 13 02:21:17 UTC 2025 - 46.1K bytes - Viewed (1)
- src/main/java/org/codelibs/fess/exec/Crawler.java
  * <li>File system crawling - crawls file systems and documents</li>
  * <li>Data store crawling - crawls databases and other data sources</li>
  * <li>Combined crawling - runs multiple crawling types simultaneously</li>
  * </ul>
  *
  * <p>Command line usage:
  * <pre>
  * java org.codelibs.fess.exec.Crawler [options...]
  *   -s, --sessionId sessionId : Session ID for the crawling session
  Registered: Sat Dec 20 09:19:18 UTC 2025 - Last Modified: Fri Nov 28 16:29:12 UTC 2025 - 31.4K bytes - Viewed (0)
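Following the usage block above, a crawl tied to a named session would be launched roughly as `java org.codelibs.fess.exec.Crawler -s mySession`, with the Fess classpath and any further listed options supplied as needed (the session name here is only an example).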
- fess-crawler/src/main/java/org/codelibs/fess/crawler/Crawler.java
  * <p>The crawler can be configured with various parameters, such as the number of threads,
  * the maximum depth of crawling, and URL filters.
  *
  * <p>Example usage:
  * <pre>
  * Crawler crawler = new Crawler();
  * crawler.addUrl("http://example.com/");
  * crawler.execute();
  * crawler.close();
  * </pre>
  */
  public class Crawler implements Runnable, AutoCloseable {
  Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Mon Nov 24 03:59:47 UTC 2025 - 17K bytes - Viewed (0)
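The javadoc above names thread count, crawl depth, and URL filters as tunables, but its example only shows the bare call sequence. A minimal sketch combining both, assuming the CrawlerContext setters visible in CrawlerTest.java further down (setNumOfThread, setMaxAccessCount); addIncludeFilter is an assumption not confirmed by these snippets, and the README result obtains a Crawler from a container rather than constructing it directly:

```java
import org.codelibs.fess.crawler.Crawler;

public class CrawlerUsageSketch {
    public static void main(String[] args) throws Exception {
        // Call sequence from the javadoc above; try-with-resources works because
        // Crawler implements AutoCloseable.
        try (Crawler crawler = new Crawler()) {
            crawler.addUrl("http://example.com/");

            // Tuning via the CrawlerContext, as seen in CrawlerTest.java further down.
            crawler.getCrawlerContext().setNumOfThread(5);
            crawler.getCrawlerContext().setMaxAccessCount(100);

            // Assumption: an include filter limits the crawl to the start host.
            crawler.addIncludeFilter("http://example\\.com/.*");

            crawler.execute(); // runs the crawl for this session
        }
    }
}
```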
- README.md
  ## Advanced Configuration
  ### Multi-Instance Crawling
  ```java
  // Create multiple crawler instances
  Crawler crawler1 = container.getComponent("crawler");
  crawler1.setSessionId("session1");
  crawler1.addUrl("https://site1.com");
  Crawler crawler2 = container.getComponent("crawler");
  crawler2.setSessionId("session2");
  crawler2.addUrl("https://site2.com");
  // Execute concurrently
  ```
  Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Sun Aug 31 05:32:52 UTC 2025 - 15.3K bytes - Viewed (0)
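The README excerpt is cut off right at the concurrent-execution step. A possible continuation, picking up the crawler1/crawler2 variables from the excerpt; only setBackground(true) is visible in these results (CrawlerTest.java below), so awaitTermination() is an assumption and this is not the README's actual text:

```java
// Possible continuation sketch, reusing crawler1/crawler2 from the excerpt above.
crawler1.setBackground(true);   // execute() then returns without waiting for the crawl
crawler2.setBackground(true);

crawler1.execute();             // start both sessions
crawler2.execute();

crawler1.awaitTermination();    // assumption: Crawler exposes awaitTermination() to join the crawl
crawler2.awaitTermination();

crawler1.close();               // release resources, as in the Crawler javadoc above
crawler2.close();
```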
- fess-crawler-lasta/src/test/java/org/codelibs/fess/crawler/CrawlerTest.java
  crawler1.addUrl(url1);
  crawler1.getCrawlerContext().setMaxAccessCount(maxCount);
  crawler1.getCrawlerContext().setNumOfThread(numOfThread);
  final Crawler crawler2 = crawlerContainer.getComponent("crawler");
  crawler2.setSessionId(crawler2.getSessionId() + "2");
  crawler2.setBackground(true);
  Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Sat Sep 06 04:15:37 UTC 2025 - 12.8K bytes - Viewed (0)
- src/main/java/org/codelibs/fess/helper/DataIndexHelper.java
  /**
   * Helper class for managing data crawling operations in Fess.
   * This class coordinates the execution of data store crawling processes,
   * managing multiple concurrent crawling threads and handling the indexing
   * of crawled documents into the search engine.
   *
   * <p>The DataIndexHelper supports:</p>
   * <ul>
   * <li>Concurrent crawling of multiple data configurations</li>
  Registered: Sat Dec 20 09:19:18 UTC 2025 - Last Modified: Fri Nov 28 16:29:12 UTC 2025 - 19K bytes - Viewed (0)
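The DataIndexHelper javadoc describes running multiple data-configuration crawls concurrently and waiting for them before indexing completes. Purely as an illustration of that coordination pattern (a stand-in, not Fess's implementation; the Runnable-per-config shape is an assumption):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustration only: run one crawling task per data-store configuration concurrently.
public class ConcurrentDataCrawlSketch {
    public static void crawlAll(List<Runnable> perConfigCrawls) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(Math.max(1, perConfigCrawls.size()));
        perConfigCrawls.forEach(executor::submit);      // each task crawls one data configuration
        executor.shutdown();                            // no new tasks; let running crawls finish
        executor.awaitTermination(1, TimeUnit.HOURS);   // wait before moving on to indexing
    }
}
```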
- src/main/java/org/codelibs/fess/ds/callback/FileListIndexUpdateCallbackImpl.java
  import org.codelibs.fess.Constants;
  import org.codelibs.fess.crawler.builder.RequestDataBuilder;
  import org.codelibs.fess.crawler.client.CrawlerClient;
  import org.codelibs.fess.crawler.client.CrawlerClientFactory;
  import org.codelibs.fess.crawler.entity.ResponseData;
  import org.codelibs.fess.crawler.entity.ResultData;
  import org.codelibs.fess.crawler.exception.ChildUrlsException;
  import org.codelibs.fess.crawler.exception.CrawlerSystemException;
  Registered: Sat Dec 20 09:19:18 UTC 2025 - Last Modified: Fri Nov 28 16:29:12 UTC 2025 - 29.7K bytes - Viewed (3)
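The import list above combines RequestDataBuilder, CrawlerClientFactory, and ResponseData, which suggests the callback fetches each listed file itself. A minimal sketch of how those classes fit together, assuming the usual fess-crawler calls (newRequestData().get().url(...).build(), CrawlerClientFactory#getClient, CrawlerClient#execute); treat the exact signatures as assumptions, since they are not shown in this snippet:

```java
import org.codelibs.fess.crawler.builder.RequestDataBuilder;
import org.codelibs.fess.crawler.client.CrawlerClient;
import org.codelibs.fess.crawler.client.CrawlerClientFactory;
import org.codelibs.fess.crawler.entity.RequestData;
import org.codelibs.fess.crawler.entity.ResponseData;

public class FetchSketch {
    // Fetch one listed URL with whichever CrawlerClient the factory maps to it.
    public static ResponseData fetch(CrawlerClientFactory clientFactory, String url) {
        RequestData requestData = RequestDataBuilder.newRequestData().get().url(url).build();
        CrawlerClient client = clientFactory.getClient(requestData.getUrl());
        return client.execute(requestData); // caller processes and releases the response
    }
}
```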
- src/main/java/org/codelibs/fess/job/CrawlJob.java
  /**
   * CrawlJob is responsible for executing the crawling process in Fess.
   * This job launches a separate crawler process that can crawl web sites, file systems,
   * and data sources based on the configured crawling settings.
   *
   * <p>The job supports selective crawling by specifying configuration IDs for different
   * types of crawlers (web, file, data). It manages the crawler process lifecycle,
  Registered: Sat Dec 20 09:19:18 UTC 2025 - Last Modified: Fri Nov 28 16:29:12 UTC 2025 - 19.6K bytes - Viewed (0)
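The CrawlJob javadoc says the job launches a separate crawler process and can be narrowed to particular web, file, and data configurations. A sketch of that selective form; the fluent method names below are assumptions inferred from the description, not confirmed by this snippet:

```java
import org.codelibs.fess.job.CrawlJob;

// Sketch of selective crawling; method names are assumptions based on the javadoc's
// mention of per-type configuration IDs (web, file, data).
public class CrawlJobSketch {
    public static String runSelectiveCrawl() {
        return new CrawlJob()
                .webConfigIds(new String[] { "webConfigId1" })   // limit web crawling to one config
                .fileConfigIds(new String[] { "fileConfigId1" }) // and file crawling to one config
                .dataConfigIds(new String[0])                    // skip data store crawling
                .execute();                                      // launches the separate crawler process
    }
}
```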
- src/main/java/org/codelibs/fess/indexer/IndexUpdater.java
  import org.codelibs.fess.Constants;
  import org.codelibs.fess.crawler.Crawler;
  import org.codelibs.fess.crawler.entity.AccessResult;
  import org.codelibs.fess.crawler.entity.AccessResultData;
  import org.codelibs.fess.crawler.entity.OpenSearchAccessResult;
  import org.codelibs.fess.crawler.entity.OpenSearchUrlQueue;
  import org.codelibs.fess.crawler.service.DataService;
  import org.codelibs.fess.crawler.service.UrlFilterService;
  Registered: Sat Dec 20 09:19:18 UTC 2025 - Last Modified: Fri Nov 28 16:29:12 UTC 2025 - 32.9K bytes - Viewed (0)
- src/main/java/org/codelibs/fess/crawler/FessCrawlerThread.java
  import org.codelibs.fess.app.service.FailureUrlService;
  import org.codelibs.fess.crawler.builder.RequestDataBuilder;
  import org.codelibs.fess.crawler.client.CrawlerClient;
  import org.codelibs.fess.crawler.entity.RequestData;
  import org.codelibs.fess.crawler.entity.ResponseData;
  import org.codelibs.fess.crawler.entity.UrlQueue;
  import org.codelibs.fess.crawler.log.LogType;
  import org.codelibs.fess.exception.ContainerNotAvailableException;
  Registered: Sat Dec 20 09:19:18 UTC 2025 - Last Modified: Thu Dec 11 09:47:03 UTC 2025 - 19.5K bytes - Viewed (0)