- Python: 3.12 or later
Configure git to automatically lint your code and validate your commit messages:

$ make setup_git_hooks

Set up a virtual environment and install dependencies:
$ uv venv
$ source .venv/bin/activate
$ make install && make install_dev

Configure the .env file to connect to the local database:

$ cp .env.sample .env

- soil map unit: a (possibly disjoint) geographic area associated with soil component percentages / areal coverage
- soil series: collection of related soil components
- soil component: a description of various soil properties at specific depth intervals (horizons)
- simple features: https://r-spatial.github.io/sf/index.html
- well-known geometry: https://paleolimbot.github.io/wk/
- R package for querying soilDB: https://ncss-tech.github.io/soilDB/
- dplyr: https://dplyr.tidyverse.org/
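The glossary above implies a simple data hierarchy, which can be sketched with hypothetical Python dataclasses (all names and fields here are illustrative, not the actual database schema):

```python
from dataclasses import dataclass


@dataclass
class Horizon:
    """Soil properties over one depth interval of a component (illustrative)."""
    top_cm: int
    bottom_cm: int
    properties: dict  # e.g. {"clay_pct": 22.5, "sand_pct": 40.0}


@dataclass
class Component:
    """A soil description belonging to a series; appears in map units."""
    series: str
    horizons: list  # list of Horizon, shallowest first


@dataclass
class MapUnit:
    """A (possibly disjoint) geographic area with component areal percentages."""
    components: dict  # series name -> percent areal coverage


mu = MapUnit(components={"Yolo": 60.0, "Brentwood": 40.0})
assert sum(mu.components.values()) == 100.0
```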
Input: a specific point in lat/lon, and a set of depth intervals.
- Query for all map units within 1km of the point.
- Fall back to STATSGO at 10km if SSURGO is incomplete, or declare the data unavailable if the area has not been surveyed.
- Associate each map unit with its polygons' minimum distance to the point in question.
- Infill missing component percentages by rescaling each map unit's components to sum to 100.
- Calculate each component's probability by dividing the distance-weighted sum of that component's percentage across map units by the total distance-weighted sum over all components and map units.
- Limit to components in the top 12 component series by probability.
- Query the local database for the component horizons.
- Return, for each horizon, data aggregated as the probability-weighted sum of each component's data at that horizon.
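The rescaling and probability steps above can be sketched as follows; the inverse-distance weighting, the function name, and the example data are assumptions for illustration, not the repository's actual implementation:

```python
def component_probabilities(map_units):
    """map_units: list of (distance_m, {series: percent}) pairs.

    Rescales each map unit's component percentages to sum to 100 (infilling
    missing coverage), then combines map units with inverse-distance weights.
    """
    totals = {}
    grand_total = 0.0
    for distance_m, components in map_units:
        scale = 100.0 / sum(components.values())  # rescale to sum to 100
        weight = 1.0 / max(distance_m, 1.0)       # assumed distance weighting
        for series, pct in components.items():
            contribution = weight * pct * scale
            totals[series] = totals.get(series, 0.0) + contribution
            grand_total += contribution
    return {series: v / grand_total for series, v in totals.items()}


probs = component_probabilities([
    (100.0, {"Yolo": 50.0, "Brentwood": 40.0}),  # 10% coverage missing
    (900.0, {"Yolo": 100.0}),
])
# Limiting to the top N series (12 in the algorithm) is a sort-and-slice:
top = sorted(probs, key=probs.get, reverse=True)[:12]
```

The nearer map unit dominates: in this example Yolo ends up with probability 0.6 and Brentwood with 0.4.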
• This folder contains the data schema and processed soil database tables that are ingested into the MySQL database.
• https://nrcs.app.box.com/s/vs999nq9ruyetb9b4l7okmssdggh8okn
• https://www.nrcs.usda.gov/resources/data-and-reports/ssurgo/stats2go-metadata
• https://nrcs.app.box.com/v/soils/folder/17971946225
There are several smaller test suites:
- There is a set of "unit" tests, which exercise essentially the entire codebase but don't rely on any external API services, instead using snapshotted data from those services. You can run them with `make test_unit`.
- These tests mostly produce snapshots of algorithm output rather than validating specific properties of the output, so they verify that the algorithm hasn't changed (or show how it has changed) rather than that it is correct. If the snapshots have changed in a desirable way, you can update them with `make test_update_unit_snapshots`.
- For the US only, there is a set of "integration" tests which run the algorithm against the live API services. They only confirm that the algorithm doesn't crash; they don't validate the output, since it can change over time. These can be run with `make test_integration`.
- The unit and integration tests can be run together with `make test` for convenience; this is what must pass for a PR to be mergeable.
- The API snapshots themselves can be checked against the live API for drift using `make test_api_snapshot`, and updated to the new live API values using `make test_update_api_snapshots`.
There is a large suite of integration tests which takes many hours to run. It comes in the form of two scripts:
- Run `make generate_bulk_test_results_us` or `make generate_bulk_test_results_global` to run the algorithm over a collection of thousands of soil pits with soil IDs given by trained data collectors, accumulating the results in a log file. This can take several hours or may need to run overnight (the US tests in particular are slow due to the speed of the external API services).
- Run `RESULTS_FILE=$RESULTS_FILE make process_bulk_test_results_us` or `RESULTS_FILE=$RESULTS_FILE make process_bulk_test_results_global` to view statistics calculated over that log file. This can be run concurrently with `generate_bulk_test_results` to see statistics over the soil pits that have been run so far.
- Keeping these as two separate scripts lets you iterate on the processing and display of statistics without interrupting data collection.
- It would also be valuable to run these US tests against snapshotted API data; it would just be much more onerous to collect and update that data.
- Beaudette, D., Roudier, P., & Brown, A. (2023). aqp: Algorithms for Quantitative Pedology. R package version 2.0.
- Beaudette, D. E., Roudier, P., & O'Geen, A. T. (2013). Algorithms for quantitative pedology: A toolkit for soil scientists. Computers & Geosciences, 52, 258-268. ISSN 0098-3004.
- Beaudette, D., Skovlin, J., Roecker, S., & Brown, A. (2024). soilDB: Soil Database Interface. R package version 2.8.3.