Results 1 - 10 of 59 for sitemap (0.06 sec)
fess-crawler/src/test/java/org/codelibs/fess/crawler/processor/impl/SitemapsResponseProcessorTest.java
// Test handling of duplicate URLs in sitemap
ResponseData responseData = new ResponseData();
byte[] content = "<sitemap></sitemap>".getBytes();
responseData.setResponseBody(content);
SitemapUrl sitemap1 = new SitemapUrl();
sitemap1.setLoc("https://example.com/duplicate");
SitemapUrl sitemap2 = new SitemapUrl();
sitemap2.setLoc("https://example.com/duplicate");
Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Thu Nov 13 13:29:22 UTC 2025 - 12K bytes - Viewed (0)
fess-crawler/src/test/java/org/codelibs/fess/crawler/helper/SitemapsHelperTest.java
+ " </sitemap>\n" + " <sitemap>\n" + " <lastmod>2025-01-02</lastmod>\n" + " </sitemap>\n" + " <sitemap>\n" + " <loc>http://www.example.com/sitemap2.xml</loc>\n" + " </sitemap>\n" + "</sitemapindex>"; final InputStream in = new ByteArrayInputStream(xml.getBytes()); final SitemapSet sitemapSet = sitemapsHelper.parse(in); final Sitemap[] sitemaps = sitemapSet.getSitemaps();
Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Mon Nov 24 03:59:47 UTC 2025 - 36.7K bytes - Viewed (0) -
fess-crawler/src/test/java/org/codelibs/fess/crawler/entity/RobotsTxtTest.java
robotsTxt.addSitemap("https://example.com/sitemap.xml");
robotsTxt.addSitemap("https://example.com/sitemap2.xml");
String[] sitemaps = robotsTxt.getSitemaps();
assertEquals(2, sitemaps.length);
assertEquals("https://example.com/sitemap.xml", sitemaps[0]);
assertEquals("https://example.com/sitemap2.xml", sitemaps[1]);
}

public void test_addSitemapNoDuplicates() {
Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Thu Nov 13 13:29:22 UTC 2025 - 14.4K bytes - Viewed (0)
fess-crawler/src/main/java/org/codelibs/fess/crawler/helper/SitemapsHelper.java
/**
 * Helper class for parsing and validating sitemaps.
 * It supports XML sitemaps, XML sitemap indexes, and text sitemaps,
 * and can handle GZIP compressed sitemaps.
 * The class provides methods to check if an input stream is a valid sitemap,
 * and to parse an input stream into a {@link SitemapSet} object.
 * It uses SAX parser for XML sitemaps and XML sitemap indexes,
 * and handles potential exceptions during parsing.
Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Fri Nov 14 13:19:40 UTC 2025 - 34.9K bytes - Viewed (0)
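Based on the parse(...) call visible in SitemapsHelperTest above, a minimal sketch of driving this helper directly might look as follows; the sitemap-index XML, the direct construction of the helper outside the crawler container, the entity import paths, and the getLoc() accessor are assumptions for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

import org.codelibs.fess.crawler.entity.Sitemap;
import org.codelibs.fess.crawler.entity.SitemapSet;
import org.codelibs.fess.crawler.helper.SitemapsHelper;

public class SitemapsHelperSketch {
    public static void main(String[] args) {
        // Illustrative sitemap index; per the javadoc, the helper also accepts text and GZIP sitemaps.
        final String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
                + "<sitemapindex xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n"
                + " <sitemap>\n"
                + " <loc>http://www.example.com/sitemap1.xml</loc>\n"
                + " </sitemap>\n"
                + "</sitemapindex>";
        final InputStream in = new ByteArrayInputStream(xml.getBytes());

        final SitemapsHelper sitemapsHelper = new SitemapsHelper(); // assumed direct construction
        final SitemapSet sitemapSet = sitemapsHelper.parse(in);     // SAX-based parsing, per the javadoc above
        for (final Sitemap sitemap : sitemapSet.getSitemaps()) {
            System.out.println(sitemap.getLoc());                   // assumed accessor, mirroring setLoc in the tests
        }
    }
}
```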
fess-crawler/src/test/java/org/codelibs/fess/crawler/CrawlerContextTest.java
}

/**
 * Test sitemaps add and remove operations
 */
public void test_sitemaps() {
    // Initial state
    assertNull(crawlerContext.removeSitemaps());

    // Add sitemaps
    String[] sitemaps = new String[] { "http://example.com/sitemap.xml", "http://test.com/sitemap.xml" };
    crawlerContext.addSitemaps(sitemaps);

    // Remove and verify
Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Sat Sep 06 04:15:37 UTC 2025 - 25.6K bytes - Viewed (0)
fess-crawler/src/main/java/org/codelibs/fess/crawler/entity/RobotsTxt.java
}

/**
 * Adds a sitemap URL to the list of sitemaps.
 *
 * @param url The URL of the sitemap to be added
 */
public void addSitemap(final String url) {
    if (!sitemapList.contains(url)) {
        sitemapList.add(url);
    }
}

/**
 * Returns an array of sitemap URLs.
 *
 * @return an array of sitemap URLs
 */
Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Mon Nov 24 03:59:47 UTC 2025 - 18.5K bytes - Viewed (0)
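Because of the contains() check above, adding the same sitemap URL twice leaves only one entry, which is what test_addSitemapNoDuplicates in RobotsTxtTest exercises. A small sketch of that behavior, assuming RobotsTxt can be instantiated directly (URLs are illustrative):

```java
import org.codelibs.fess.crawler.entity.RobotsTxt;

public class RobotsTxtSitemapSketch {
    public static void main(String[] args) {
        final RobotsTxt robotsTxt = new RobotsTxt();
        robotsTxt.addSitemap("https://example.com/sitemap.xml");
        robotsTxt.addSitemap("https://example.com/sitemap.xml"); // duplicate, skipped by the contains() check
        robotsTxt.addSitemap("https://example.com/sitemap2.xml");

        final String[] sitemaps = robotsTxt.getSitemaps();
        System.out.println(sitemaps.length); // expected: 2
    }
}
```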
fess-crawler/src/main/java/org/codelibs/fess/crawler/helper/RobotsTxtHelper.java
protected static final Pattern CRAWL_DELAY_RECORD =
        Pattern.compile("^crawl-delay:\\s*([^\\s]+)\\s*$", Pattern.CASE_INSENSITIVE);

/**
 * Pattern for Sitemap record.
 */
protected static final Pattern SITEMAP_RECORD =
        Pattern.compile("^sitemap:\\s*([^\\s]+)\\s*$", Pattern.CASE_INSENSITIVE);

/** Whether robots.txt processing is enabled. */
protected boolean enabled = true;

/**
Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Fri Nov 14 12:52:01 UTC 2025 - 11.4K bytes - Viewed (0)
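To make the Sitemap record pattern concrete, here is a standalone regex demo applying the same expression to a single robots.txt line (plain JDK regex, not the helper itself; the input line is illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SitemapRecordSketch {
    // Same expression as SITEMAP_RECORD above: the record name matches case-insensitively
    // and the single capture group grabs the URL.
    private static final Pattern SITEMAP_RECORD =
            Pattern.compile("^sitemap:\\s*([^\\s]+)\\s*$", Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) {
        final Matcher m = SITEMAP_RECORD.matcher("Sitemap: https://example.com/sitemap.xml");
        if (m.matches()) {
            System.out.println(m.group(1)); // prints https://example.com/sitemap.xml
        }
    }
}
```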
README.md
```java
controller.setDefaultIntervalTime(1000);
});
```

### Sitemap Support

```java
// Enable sitemap processing
container.singleton("sitemapsRule", SitemapsRule.class, rule -> {
    rule.addRule("url", ".*sitemap.*");
});

// Add sitemap URL
crawler.addUrl("https://example.com/sitemap.xml");
```

## Data Access and Storage

### Accessing Crawled Data
Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Sun Aug 31 05:32:52 UTC 2025 - 15.3K bytes - Viewed (0)
CLAUDE.md
```java
extractorFactory.addExtractor("text/html", tikaExtractor, 1); // Fallback
```

### Helpers

**RobotsTxtHelper**: RFC 9309 parsing, user-agent matching, crawl-delay, sitemaps
**SitemapsHelper**: Sitemap XML parsing, index handling
**MimeTypeHelper**: MIME detection via Tika
**EncodingHelper**: Charset detection with BOM
**UrlConvertHelper**: URL normalization

---

## Development Workflow
Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Fri Nov 28 17:31:34 UTC 2025 - 10.7K bytes - Viewed (0)
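A hedged sketch of how RobotsTxtHelper's output could feed the sitemap handling listed here: the parse(InputStream) entry point and the direct construction of the helper are assumptions inferred from the snippets on this page, and the robots.txt content and URLs are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

import org.codelibs.fess.crawler.entity.RobotsTxt;
import org.codelibs.fess.crawler.helper.RobotsTxtHelper;

public class RobotsTxtToSitemapsSketch {
    public static void main(String[] args) {
        final String robots = "User-agent: *\n"
                + "Disallow: /private/\n"
                + "Sitemap: http://www.example.com/sitemap.xml\n";
        final InputStream in = new ByteArrayInputStream(robots.getBytes());

        // Assumed entry point: parse the stream into a RobotsTxt entity.
        final RobotsTxtHelper robotsTxtHelper = new RobotsTxtHelper();
        final RobotsTxt robotsTxt = robotsTxtHelper.parse(in);

        // Directives and Sitemap records are then available on the entity,
        // as exercised in RobotsTxtHelperTest below.
        System.out.println(robotsTxt.allows("/private/page.html", "Hoge Crawler")); // expected: false
        for (final String sitemapUrl : robotsTxt.getSitemaps()) {
            System.out.println(sitemapUrl); // each URL could then be handed to SitemapsHelper.parse(...)
        }
    }
}
```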
fess-crawler/src/test/java/org/codelibs/fess/crawler/helper/RobotsTxtHelperTest.java
assertFalse(robotsTxt.allows("/ddd", "Hoge Crawler"));
String[] sitemaps = robotsTxt.getSitemaps();
assertEquals(2, sitemaps.length);
assertEquals("http://www.example.com/sitmap.xml", sitemaps[0]);
assertEquals("http://www.example.net/sitmap.xml", sitemaps[1]);
}

public void testParse_disable() {
Registered: Sat Dec 20 11:21:39 UTC 2025 - Last Modified: Mon Nov 24 03:59:47 UTC 2025 - 20.6K bytes - Viewed (0)