


Zimit is a scraper that creates a ZIM file from any website.


⚠️ Important: this tool uses warc2zim to create ZIM files and thus requires the ZIM reader to support Service Workers. As of zimit:1.0, that mostly means kiwix-android and kiwix-serve. Note that Service Workers also have protocol restrictions, so you'll need to run the reader either from localhost or over HTTPS.
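For example, one way to satisfy the localhost requirement is to serve the resulting ZIM file with kiwix-serve and browse it locally (the file name myzimfile.zim below is a placeholder for illustration):

```shell
# Serve the ZIM on localhost so Service Workers can register
# (localhost is treated as a secure context by browsers).
# "myzimfile.zim" is a placeholder; use your actual output file.
kiwix-serve --port=8080 myzimfile.zim
# Then browse http://localhost:8080/
```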

Technical background

This version of Zimit runs a single-site, headless-Chrome-based crawl in a Docker container and produces a ZIM file of the crawled content.

The system extends the crawling system of Browsertrix Crawler and converts the crawled WARC files to ZIM using warc2zim.

zimit.py is the entrypoint for the system.

After the crawl is done, warc2zim is used to write a ZIM file to the /output directory, which can be mounted as a volume.

With the --keep flag, the crawled WARCs are also kept in a temporary directory inside /output.


zimit is intended to be run in Docker.

To build the image locally, run:

docker build -t ghcr.io/openzim/zimit .

The image accepts the following parameters, as well as any of the warc2zim ones; useful for setting metadata, for instance:

  • --url URL - the URL to be crawled (required)
  • --workers N - number of crawl workers to be run in parallel
  • --wait-until - Puppeteer setting for how long to wait for page load. See page.goto waitUntil options. The default is load, but for static sites, --wait-until domcontentloaded may be used to speed up the crawl (to avoid waiting for ads to load, for example).
  • --name - name of the ZIM file (defaults to the hostname of the URL)
  • --output - output directory (defaults to /output)
  • --limit U - limit capture to at most U URLs
  • --exclude <regex> - skip URLs matching the regex during crawling. Can be specified multiple times. For example, with --exclude="(\?q=|signup-landing\?|\?cid=)", URLs containing ?q=, signup-landing?, or ?cid= will be excluded.
  • --scroll [N] - if set, activates a simple auto-scroll behavior on each page, scrolling for up to N seconds
  • --keep - if set, keeps the WARC files in a temporary directory inside the output directory
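To see how the example --exclude pattern above behaves, here is a small shell sketch using grep -E as a stand-in for the crawler's regex matching (the URLs are made up for illustration):

```shell
# The example exclusion pattern from above: skips search-result,
# signup, and campaign-tracking URLs.
pattern='(\?q=|signup-landing\?|\?cid=)'

for url in \
  "https://example.com/search?q=zim" \
  "https://example.com/signup-landing?ref=1" \
  "https://example.com/article?cid=42" \
  "https://example.com/article/42"
do
  if printf '%s' "$url" | grep -Eq "$pattern"; then
    echo "skip: $url"
  else
    echo "crawl: $url"
  fi
done
```

Only the last URL would be crawled; the first three match one of the alternatives in the pattern.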

The following is an example usage. The --shm-size flag is needed to run Chrome in Docker.

Example command:

docker run ghcr.io/openzim/zimit zimit --help
docker run ghcr.io/openzim/zimit warc2zim --help
docker run -v /output:/output \
       --shm-size=1gb ghcr.io/openzim/zimit zimit --url URL --name myzimfile --workers 2 --waitUntil domcontentloaded
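As noted in the parameter list, --name defaults to the hostname of the crawled URL. A minimal shell sketch of that idea, using parameter expansion (this is an illustration, not zimit's actual code):

```shell
# Derive a default ZIM name from a URL's hostname:
# strip the scheme, then everything after the first slash.
url="https://www.example.com/some/page"
name="${url#*://}"   # -> www.example.com/some/page
name="${name%%/*}"   # -> www.example.com
echo "$name"
```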

The puppeteer-cluster monitoring output is enabled by default and prints the crawl status to the Docker log.

Note: the image automatically filters out a large number of ads by using the 3 blocklists from anudeepND. If you don't want this filtering, disable the image's entrypoint in your container (docker run --entrypoint="" ghcr.io/openzim/zimit ...).

Nota bene

A first version of a generic HTTP scraper was created in 2016 during the Wikimania Esino Lario Hackathon.

That version is now considered outdated and is archived in the 2016 branch.


License

GPLv3 or later; see LICENSE for more details.