Community-made simple search and REST API for VVZ.
Head to https://vvzapi.ch and start playing around with the search or API!
The schema is inspired by the VVZ Manual (starting on page 18).
Attributes have been translated to English or dropped (in cases where the value was internal and not visible on VVZ), and additional attributes have been added that were not present in the documentation.
The word choices might be confusing if you're not used to them. Importantly, the term "unit" (or "learning unit", a 1:1 translation of the German "Lerneinheit") is used for what is commonly understood as a course (Discrete Mathematics, Big Data, etc.). "Unit" is the more general term, as VVZ also lists non-courses like theses and other projects.
A unit can take place at multiple times and places. These individual slots are called "courses" in the API (a somewhat loose translation of the German "Lehrveranstaltung").
The search is inspired by Scryfall.
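To get a feel for the unit/course distinction, here is a minimal sketch of querying the API from Python. The endpoint path and query parameter are assumptions for illustration, not documented routes; check the API docs at https://vvzapi.ch for the real ones.

```python
import requests

# Hypothetical route and parameter, for illustration only;
# see the API docs at https://vvzapi.ch for the actual endpoints.
BASE_URL = "https://vvzapi.ch/v1"

response = requests.get(f"{BASE_URL}/units", params={"q": "discrete mathematics"})
response.raise_for_status()

# Each unit is what you would colloquially call a course; the
# individual time/place slots are the "courses" attached to it.
print(response.json())
```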
Note
For some reason, all semesters 2009-2019 (both S and W) are simply not available. Accessing any of them returns a 403 Forbidden. I wonder if this is just a short-term problem or if they'll never come back. Some of the data (for all courses) is available in the Complete Catalogue, but I currently have no plans to parse data from PDFs.
This project uses semantic versioning. Breaking changes will result in a bump of the major version. Endpoints whose version is the same as or lower than the current major version will not receive breaking changes. If the current version is 2.x.x, the endpoints under /v1 and /v2 will not be intentionally updated in a way that would break or completely change their usage. But /v3 would then still be in prerelease and might change at any time.
The idea behind the VVZ API is to make it easier to create various cool tools requiring course/VVZ data. If you have an idea for something that should absolutely be in the API but is missing, open up an issue and let's start discussing it!
I'm grateful for any form of contribution, be it adding documentation, implementing new features, opening issues for errors, or something else. Head to the Local Development section below to learn more about how to get the API running locally.
Local Development
Depending on what you intend to test, you can opt to download a dump of the database (head to the API docs to find the endpoint) to develop locally with the most up-to-date data.
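A minimal sketch of what fetching such a dump could look like; the endpoint path and output filename here are assumptions, so look up the real endpoint in the API docs first:

```python
import requests

# Hypothetical dump endpoint; the real path is listed in the API docs.
DUMP_URL = "https://vvzapi.ch/v1/dump"

with requests.get(DUMP_URL, stream=True) as response:
    response.raise_for_status()
    # Assumes the dump is a SQLite file, matching the local setup below.
    with open("dump.sqlite", "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
```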
Additionally, for ease of development, the devcontainer setup will initialize the essentials in a docker/podman container without cluttering your system with dependencies.
Locally, a SQLite database is used. Running the migrations automatically creates the database.
```sh
uv run alembic upgrade heads
```

Required if any model was modified:

```sh
uv run alembic revision --autogenerate -m "message"
```

Generate a new spider:

```sh
uv run scrapy genspider <scraper name> <scraper name>.py
```

Run the scraper:

```sh
uv run -m scraper.main
```

Or for just one of the spiders:

```sh
uv run scrapy crawl units
uv run scrapy crawl lecturers
```

Open an interactive scrapy shell for a URL:

```sh
uv run scrapy shell "<url>"
```

Run a single spider callback on a URL:

```sh
uv run scrapy parse --spider=units -c <cb func> "<url>"
```

For example:

```sh
uv run scrapy parse --spider=units -c parse_start_url "https://www.vvz.ethz.ch/Vorlesungsverzeichnis/sucheLehrangebot.view?lang=de&semkez=2003S&seite=0"
uv run scrapy parse --spider=units -c parse_unit "https://www.vvz.ethz.ch/Vorlesungsverzeichnis/lerneinheit.view?semkez=2025W&ansicht=ALLE&lerneinheitId=192945&lang=en"
```

There might be outdated or unused files in the HTML cache directories. The cleanup script removes everything that is not needed. Additionally, it can be used to purposely delete at most `--amount` valid cached files from one or more semesters that are older than `--age-seconds`:
```sh
uv run scraper/util/cleanup_scrapy.py [--dry-run] [--amount <int>] [--age-seconds <int>] [-d <semester>]*
```

The scraper can also be started locally by running the docker image directly, if desired:
```sh
docker run \
  -e SEMESTER=W \
  -e START_YEAR=2024 \
  -e END_YEAR=2024 \
  -v $PWD/data:/app/.scrapy \
  markbeep/vvzapi-scraper:nightly
```

In the data directory there will be a `httpcache` directory containing all crawled HTML files and a `scrapercache` directory containing scraper-specific files and potentially a file called `error_pages.jsonl` with errors.
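If an `error_pages.jsonl` was produced, each line is a standalone JSON document, so it can be inspected with a few lines of Python. This is a generic JSONL reader; the exact path and the record fields are not assumed here:

```python
import json

# Path follows the data directory layout described above;
# adjust it if your file ends up somewhere else.
with open("data/scrapercache/error_pages.jsonl") as f:
    for line in f:
        # JSONL means one JSON document per line.
        record = json.loads(line)
        print(record)
```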
Run the API:

```sh
uv run fastapi dev api/main.py
```

Tailwind is used in combination with DaisyUI. Download the source files using the following commands:
```sh
curl -sLo api/static/daisyui.mjs https://github.com/saadeghi/daisyui/releases/latest/download/daisyui.mjs
curl -sLo api/static/daisyui-theme.mjs https://github.com/saadeghi/daisyui/releases/latest/download/daisyui-theme.mjs
```

Then run tailwindcss:
```sh
tailwindcss -i api/static/tw.css -o api/static/globals.css --watch
```

Run the type checker:

```sh
uv run basedpyright
```