diff --git a/QUIRKS b/QUIRKS
deleted file mode 100644
index 8c07166..0000000
--- a/QUIRKS
+++ /dev/null
@@ -1,65 +0,0 @@
-## Quirks
-
-This library aims to be compatible with the old ScraperWiki Classic library,
-which was itself quite quirky.
-
-It's not an exact drop in replacement, this document describes the differences.
-
-## Differences
-
-### Datastore differences
-
-The local `scraperwiki.sqlite` is powered by
-[SQLAlchemy](http://sqlalchemy.org), so some things
-work a bit differently.
-
-Data is stored to a local sqlite database named `scraperwiki.sqlite`.
-
-Bizarre table and column names are supported.
-
-`scraperwiki.sqlite.execute` will return an empty list of keys on an
-empty select statement result.
-
-`scraperwiki.sqlite.attach` downloads the whole datastore from ScraperWiki, the first time it runs; it then uses the cached database
-
-### Other Differences
-
-## Status of implementation
-In general, features that have not been implemented raise a `NotImplementedError`.
-
-### Datastore
-`scraperwiki.sqlite` may implement some less explored features
-of its API slightly differently from ScraperWiki.
-
-### Utils
-`scraperwiki.utils` is implemented, as well as the following functions.
-
-* `scraperwiki.log`
-* `scraperwiki.scrape`
-* `scraperwiki.pdftoxml`
-
-### Deprecated
-These submodules are deprecated and thus will not be implemented.
-
-* `scraperwiki.apiwrapper`
-* `scraperwiki.datastore`
-* `scraperwiki.jsqlite`
-* `scraperwiki.metadata`
-* `scraperwiki.newsql`
-* `scraperwiki.sqlite.attach`
-* `scraperwiki.utils.swimport`
-* `scraperwiki.geo`
-* `scraperwiki.log`
-
-verbose parameter to various functions is deprecated
-
-### Specs
-Here are some ScraperWiki scrapers that demonstrate the non-local library's quirks.
-
-https://scraperwiki.com/scrapers/scraperwiki-python/
-https://scraperwiki.com/scrapers/cast/
-https://scraperwiki.com/scrapers/things_happen_when_you_do_not_commit/
-https://scraperwiki.com/scrapers/what_does_show_tables_return/
-https://scraperwiki.com/scrapers/on_conflict/
-https://scraperwiki.com/scrapers/spaces_in_table_names/
-https://scraperwiki.com/scrapers/spaces_in_table_names_1/
diff --git a/README b/README
index 8f87636..a567be4 100644
--- a/README
+++ b/README
@@ -2,8 +2,10 @@ ScraperWiki Python library
 ==========================
 
 This is a Python library for scraping web pages and saving data.
-It is the easiest way to save data on the ScraperWiki platform, and it
-can also be used locally or on your own servers.
+
+**Warning: This library is now in maintenance mode.**
+
+**The library has been updated to work with Python 3.14 but there are no guarantees on future maintenance.**
 
 Installing
 ----------
@@ -30,7 +32,6 @@ the schema automatically according to the data you save.
 
 Currently only supports SQLite. It will make a local SQLite database.
 It is based on `SQLAlchemy `_.
-You should expect it to support other SQL databases at a later date.
 
 scraperwiki.sql.save(unique_keys, data[, table_name="swdata"])
   Saves a data record into the datastore into the table given by ``table_name``.
diff --git a/benchmark.py b/benchmark.py
deleted file mode 100755
index e8b5e60..0000000
--- a/benchmark.py
+++ /dev/null
@@ -1,15 +0,0 @@
-#! /usr/bin/env python3
-import scraperwiki
-import os
-
-rows = [{"id": i, "test": i * 2, "s": "abc"} for i in range(1000)]
-
-try:
-    os.remove("scraperwiki.sqlite")
-except FileNotFoundError:
-    pass
-
-scraperwiki.sql.save(["id"], rows)
-
-for i, row in enumerate(rows):
-    scraperwiki.sql.save(["id"], row)
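The README hunk above documents ``scraperwiki.sql.save(unique_keys, data[, table_name="swdata"])``, and the deleted benchmark.py uses it to upsert rows keyed on ``id``. As a rough illustration of those semantics (not the library's actual implementation, which is built on SQLAlchemy), here is a minimal sketch using only stdlib ``sqlite3``; the ``save`` helper and its quoting strategy are assumptions for this example:

```python
# Illustration only: approximates the upsert behaviour that
# scraperwiki.sql.save(unique_keys, data) provides, using stdlib sqlite3.
# The `save` helper here is a hypothetical sketch, not the library's code.
import sqlite3


def save(conn, unique_keys, data, table_name="swdata"):
    """Insert dict `data` into `table_name`, replacing any existing row
    that has the same values for the columns named in `unique_keys`."""
    cols = list(data)
    # Create the table on first use, with a UNIQUE constraint on the keys
    # (scraperwiki.sql.save likewise builds the schema from the data).
    conn.execute(
        'CREATE TABLE IF NOT EXISTS "%s" (%s, UNIQUE (%s))'
        % (
            table_name,
            ", ".join('"%s"' % c for c in cols),
            ", ".join('"%s"' % k for k in unique_keys),
        )
    )
    # INSERT OR REPLACE deletes the conflicting row and inserts the new one.
    conn.execute(
        'INSERT OR REPLACE INTO "%s" (%s) VALUES (%s)'
        % (
            table_name,
            ", ".join('"%s"' % c for c in cols),
            ", ".join("?" for _ in cols),
        ),
        [data[c] for c in cols],
    )


conn = sqlite3.connect(":memory:")
save(conn, ["id"], {"id": 1, "test": 2, "s": "abc"})
save(conn, ["id"], {"id": 1, "test": 99, "s": "abc"})  # same id: replaced
rows = conn.execute('SELECT "id", "test" FROM "swdata"').fetchall()
print(rows)  # -> [(1, 99)]
```

This is also why the benchmark could call ``save(["id"], row)`` in a loop over rows it had already saved: each call overwrites the existing row with the same ``id`` rather than raising a duplicate-key error.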