diff --git a/README.md b/README.md
index 40ab0d6..b1880d3 100644
--- a/README.md
+++ b/README.md
@@ -36,8 +36,8 @@ See [documentation](https://ourzora.github.io/offchain/) for more examples and t
 
 ## Contributing
 
-We welcome contributions that add support for new metadata standards, new ways of retreiving metadata, and ways of normalizing them to a consistent form.
-We are commited to integrating contributions to our indexer and making the results available in our API.
+We welcome contributions that add support for new metadata standards, new ways of retrieving metadata, and ways of normalizing them to a consistent form.
+We are committed to integrating contributions to our indexer and making the results available in our API.
 You should be able to contribute a new standard for metadata, and have NFTs that adhere to that metadata standard be returned correctly from queries to `api.zora.co`.
 We hope this helps to foster innovation in how
diff --git a/docs/concepts.md b/docs/concepts.md
index 4f221fd..f36e736 100644
--- a/docs/concepts.md
+++ b/docs/concepts.md
@@ -60,7 +60,7 @@ metadata = pipeline.run([token])[0]
 
 By default, the pipeline uses `https://cloudflare-eth.com` as the provider for the `ContractCaller`. This is a free Ethereum RPC provider, which means that it is very easy to exceed the rate-limit.
-The code below illustrates how to use a custom RPC provider to prevent getting rate-limited if token uris need to be retireved from the contract:
+The code below illustrates how to use a custom RPC provider to prevent getting rate-limited if token uris need to be retrieved from the contract:
 
 ```python
 from offchain import MetadataPipeline
diff --git a/docs/contributing/collection_parser.md b/docs/contributing/collection_parser.md
index 6ca88c1..a6754b0 100644
--- a/docs/contributing/collection_parser.md
+++ b/docs/contributing/collection_parser.md
@@ -44,7 +44,7 @@ class ENSParser(CollectionParser):
 The token uri is needed to tell the parser where to fetch the metadata from. If the token uri is not passed in as part of the input, the pipeline will attempt to fetch it from the `tokenURI(uint256)` function on the contract.
 
-Note, it is not uncommon for token uris to be base64 encoded data is stored entirely on chain e.g. Nouns, Zorbs.
+Note, it is not uncommon for token uris to be base64 encoded data stored entirely on chain, e.g. Nouns, Zorbs.
 
 ENS hosts their own metadata service and token uris are constructed in the following format:
@@ -137,7 +137,7 @@ This should return the following data from the ENS metadata service:
 
 The next step is to convert the metadata into the [standardized metadata format](../models/metadata.md).
 
-Each field in the new metadata format should either map a field in the standardized metadata format or be added as an `MetadataField` under the `additional_fields` property.
+Each field in the new metadata format should either map to a field in the standardized metadata format or be added as a `MetadataField` under the `additional_fields` property.
 
 In the case of ENS, the metadata format has the following fields:
@@ -171,7 +171,7 @@ Each of these fields can be mapped into the standard metadata format:
 
 ---
 
-And this is how it would look programatically:
+And this is how it would look programmatically:
 
 ```python
 class ENSParser(CollectionParser):
@@ -339,7 +339,7 @@ def test_ens_parser_parses_metadata(self):
     assert parser.parse_metadata(token=token, raw_data=None) == expected_metadata
 ```
 
-In addition to testing your parser, you'll need to verify that the parser has been registered and added to the pipeline correctly. The tests in `tests/metadata/registries/test_parser_registry.py` should break if the not modified to include your new parser class.
+In addition to testing your parser, you'll need to verify that the parser has been registered and added to the pipeline correctly. The tests in `tests/metadata/registries/test_parser_registry.py` should break if they are not modified to include your new parser class.
 
 ---
diff --git a/docs/contributing/guidelines.md b/docs/contributing/guidelines.md
index 3a5cee5..f13e5b3 100644
--- a/docs/contributing/guidelines.md
+++ b/docs/contributing/guidelines.md
@@ -1,6 +1,6 @@
 # Guidelines
 
-This section is an overview for the 3 main types of contributions that are possible for `offchain`.
+This section is an overview of the 3 main types of contributions that are possible for `offchain`.
 
 ## Contributing a Collection Parser
diff --git a/docs/contributing/schema_parser.md b/docs/contributing/schema_parser.md
index b56fcaa..e342cf0 100644
--- a/docs/contributing/schema_parser.md
+++ b/docs/contributing/schema_parser.md
@@ -1,4 +1,4 @@
-# Contributing a Schema Paser
+# Contributing a Schema Parser
 
 A guide on how to contribute a schema parser.
 
@@ -79,7 +79,7 @@ This should return the following data from the IPFS Gateway:
 ```json
 {
   "name": "Obligatory Song About the iPhone 5",
-  "description": "A new song, everyday, forever. Song A Day is an ever-growing collection of unique songs created by Jonathan Mann, starting January 1st, 2009. Each NFT is a 1:1 representation of that days song, and grants access to SongADAO, the orgninzation that controls all the rights and revenue to the songs. Own a piece of the collection to help govern the future of music. http://songaday.world",
+  "description": "A new song, everyday, forever. Song A Day is an ever-growing collection of unique songs created by Jonathan Mann, starting January 1st, 2009. Each NFT is a 1:1 representation of that days song, and grants access to SongADAO, the organization that controls all the rights and revenue to the songs. Own a piece of the collection to help govern the future of music. http://songaday.world",
   "token_id": 1351,
   "image": "ipfs://QmX2ZdS13khEYqpC8Jz4nm7Ub3He3g5Ws22z3QhunC2k58/1351",
   "animation_url": "ipfs://QmVHjFbGEqXfYuoPpJR4vmRacGM29KR5UenqbidJex8muB/1351",
@@ -348,7 +348,7 @@ def test_opensea_parser_parses_metadata(self, raw_crypto_coven_metadata):
     )
 ```
 
-In addition to testing your parser, you'll need to verify that the parser has been registered and added to the pipeline correctly. The tests in `tests/metadata/registries/test_parser_registry.py` should break if the not modified to include your new parser class.
+In addition to testing your parser, you'll need to verify that the parser has been registered and added to the pipeline correctly. The tests in `tests/metadata/registries/test_parser_registry.py` should break if they are not modified to include your new parser class.
 
 ---
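One note on the `docs/contributing/collection_parser.md` change above: it mentions that token uris for collections like Nouns and Zorbs are base64 encoded data stored entirely on chain. As a rough sketch of what that means in practice (the helper name and sample payload below are illustrative, not part of the `offchain` API), such a uri can be decoded with the Python standard library alone:

```python
import base64
import json

def decode_onchain_token_uri(token_uri: str) -> dict:
    """Decode a token uri whose JSON metadata is base64 encoded in a data URI."""
    prefix = "data:application/json;base64,"
    if not token_uri.startswith(prefix):
        raise ValueError("not a base64-encoded JSON data URI")
    return json.loads(base64.b64decode(token_uri[len(prefix):]))

# Illustrative payload only; real on-chain metadata (e.g. Nouns) typically
# also embeds a base64-encoded SVG in the "image" field.
payload = {"name": "Token #1", "description": "metadata stored entirely on chain"}
uri = "data:application/json;base64," + base64.b64encode(
    json.dumps(payload).encode()
).decode()

print(decode_onchain_token_uri(uri)["name"])  # Token #1
```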