128 changes: 128 additions & 0 deletions doc/mainstay.md
# Mainstay integration

Options for attesting state to a mainstay proof-of-publication service.

Assumptions:

Mainstay service is available over http interface (or via SOCKS5 Tor proxy).
**Contributor:**

I think the question to ask is what types of clients the mainstay integration aims to serve during the first few rollouts. Off the top of my head, I believe the main targets are Nostr clients (including civkit-sample), the civkit marketd service (notarizing all the trade orders received) and, on a longer-term scale, LSPs / Lightning delegated infrastructure (e.g. watchtowers).

If we're prioritizing those clients, realistically the interfaces to prioritize are the following:

  • (unauthenticated / unencrypted) websocket over tcp
  • bolt8's noise connection over tcp

Both of these are already WIP in civkit-node.

W.r.t. communications between civkit-notaryd (i.e. either a mainstay service proxy or one of its main running processes) and civkitd, there is a tonic interface (civkitservices) using gRPC over HTTP/2.

Mainstay service is available and funded with a valid `token_id` for verification.
**Contributor:**

My understanding: the client buys a "publication slot" with a bitcoin payment, gets a credential, and can then redeem the service at any time in the future (within the max service policy time window) with cleartext credentials and an identifier. The identifier allows binding between the credential redemption payload and the protocol-specific request.

This identifier can be the valid token_id mentioned here.

Note this matches the issuance / redemption flow of the staking credentials framework:
https://github.com/civkit/staking-credentials-spec/blob/main/60-staking-credentials-archi.md#credentials-issuance

The token_id can be the service_id implemented for ServiceDeliveranceRequest / ServiceDeliveranceResult here:
https://github.com/civkit/staking-credentials/blob/main/staking-credentials/src/common/msgs.rs#L119

**Author:**

Yes, this makes sense.

Funding (via LN payment) is performed in advance and out of band for the subscription (i.e. the `token_id` has already been issued).
**Contributor:**

The funding can happen through the "issuance" protocol flow of staking credentials mentioned above. Pay-per-usage or subscription can be defined as service policy, though for privacy-preserving reasons, if subscription is opted in to, new credentials / tokens should be refreshed for every service unit delivered.

Ideally, it should not be possible to tell user A apart from user B based on their service consumption patterns.

**Author:**

This is a fundamental issue with mainstay (or any proof-of-publication mechanism or service). The commitments must be provably unique in a given publication space, and so user A must have exclusive access to their own publication space (i.e. 'slot' in mainstay), necessitating user credentials. The credentials can be updated, but the identification of the publication space they are linked to cannot be - the service will always know it's the same user posting commitments to the same slot.

But I don't think there is an issue with this privacy-wise. The user can blind the commitments themselves if required, and store the blinding nonces with the proofs for verification.

**Contributor:**

Okay, I see the provably unique requirement in a proof-of-publication, though on the exclusive access I wonder if a user signature (and therefore possession of a secret key) could be included in the commitment scope. If you have duplication or equivocation of a publication space, it can be disregarded at both the client and server level. If my understanding of proof-of-publication space is correct.

Otherwise yes, the credential can be re-used indefinitely by the user, i.e. the service provider binds a slot at the first credential redemption and allows re-use of it.

Mainstay proofs are stored and made available, but verification against `bitcoind` and the staychain occurs separately.
**Contributor:**

One of the nice advantages of the current civkitd architecture is that there is separate logic in charge of disk operations, NoteProcessor, which aims to handle storage for "hosted" civkit services (such as a civkit-notaryd or civkit-marketd instance). In the future, if it becomes its own process, it could be run independently by the civkit service, not the civkitd one.

One advantage of dissociating the mainstay service from the backend storage is enabling replication of storage over multiple civkitd node instances for redundancy.

Storage service requirements will need to be agreed on, as storage can become a source of denial-of-service.

I think it's good that verification against bitcoind and the staychain occurs on the client side, and proofs are just fetched by clients when they need them.

Lastly, I believe it would be very valuable to have standardization of the mainstay proofs, so that they can be consumed by, e.g., the civkit-sample scoring / reputation engine to rank market board services.

**Author:**

OK, yes.

The mainstay model as it currently works is:

  • user creates commitments to data
  • sends commitments to the mainstay service API
  • user queries mainstay periodically for commitment status; once the root is committed to a confirmed bitcoin transaction, the user queries the mainstay API for both the TxID of the root commitment and the proof (path)
  • user stores the TxIDs and proofs locally

The user can then choose to either: just trust the mainstay service provider that the tx is confirmed and the proof is valid and constructed correctly, and keep the data in case it is needed for a future dispute; OR verify the commitment against bitcoind once it is received.

In the current service, verification is handled by the pymainstay client.

The proof format (i.e. a single slot proof) returned by the API is currently like:

```
{
    "attestation":
    {
        "merkle_root": "f46a58a0cc796fade0c7854f169eb86a06797ac493ea35f28dbe35efee62399b",
        "txid": "38fa2c6e103673925aaec50e5aadcbb6fd0bf1677c5c88e27a9e4b0229197b13",
        "confirmed": true,
        "inserted_at": "16:06:41 23/01/19"
    },
    "merkleproof":
    {
        "position": 1,
        "merkle_root": "f46a58a0cc796fade0c7854f169eb86a06797ac493ea35f28dbe35efee62399b",
        "commitment": "5555c29bc4ac63ad3aa4377d82d40460440a67f6249b463453ca6b451c94e053",
        "ops": [
        {
            "append": false,
            "commitment": "21b0a66806bdc99ac4f2e697d05cb17c757ae10deb851ee869830d617e4f519c"
        },
        {
            "append": true,
            "commitment": "622d1b5efe11e9031f1b25aac11587e0ff81a37e9565ded16ee8e82bbc0c2fc1"
        },
        {
            "append": true,
            "commitment": "406ab5d975ae922753fad4db83c3716ed4d2d1c6a0191f8336c76000962f63ba"
        }]
    }
}
```

A chain of these (along with the data sequence) gives the full history/publication proof.

**Contributor:**

Okay, the mainstay model is quite simple and I think it fits well into the civkit service framework.

There is just a relay (i.e. civkitd) added as an intermediary between the user and the mainstay service API. Multiple relays can be used to front-load or duplicate proof storage.

Good to have proofs that can be queried from the service or held by the client (in case of service unavailability).

The mainstay proof format is simple, that's good.


## Config

The node is configured with the mainstay server URL, the slot index and the authentication token:

```
pub struct MainstayConfig {
    url: String,
    position: u64,
    token: String,
}
```
**Contributor:**

I think one key element that sounds missing from a mainstay service is a long-term pubkey, ideally a public key on Bitcoin's secp256k1 curve, see the introduction of https://github.com/lightning/bolts/blob/master/08-transport.md#bolt-8-encrypted-and-authenticated-transport

I think the url can stay, and in the future it could be announced in the civkit service gossip periodically issued by civkitd to announce itself to the rest of the network, see https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-node_announcement-message

It is unclear what the slot index will be, i.e. where in a batched mainstay proof this client's proof is inserted. The authentication token or credential is assumed to be dynamic thanks to the issuance flow. Another field that could be added is the list of "mainstay" features supported, though this can become more sophisticated later I think.

**Author:**

OK - so the long term pubkey is to receive messages via tcp (as opposed to an onion address).

The slot index is unique to a user/client. It is assigned by the mainstay service when a user first pays. The slot index cannot change for a single proof-of-publication.

**Contributor:**

> OK - so the long term pubkey is to receive messages via tcp (as opposed to an onion address)

In fact both; see BOLT 4 on how the pubkey is used for onion routing: https://github.com/lightning/bolts/blob/master/04-onion-routing.md

Understood, the slot index is unique to a user/client.


This can be added to `/src/config.rs`.
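As a sketch of how this might be wired into `/src/config.rs`, the struct could carry a constructor and a `url()` accessor for building request URLs. The constructor and example values below are illustrative assumptions, not the actual civkitd API:

```rust
// Sketch: MainstayConfig as it might appear in /src/config.rs.
// The constructor and example values are illustrative assumptions.
pub struct MainstayConfig {
    url: String,
    position: u64,
    token: String,
}

impl MainstayConfig {
    pub fn new(url: &str, position: u64, token: &str) -> Self {
        Self {
            url: url.to_string(),
            position,
            token: token.to_string(),
        }
    }

    // Accessor for building request URLs.
    pub fn url(&self) -> &str {
        &self.url
    }
}

fn main() {
    let cfg = MainstayConfig::new("https://mainstay.example.com/api/v1", 0, "test-token");
    assert_eq!(cfg.url(), "https://mainstay.example.com/api/v1");
    println!("configured slot {}", cfg.position);
}
```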

## Commitment function

Implementation of a commitment function that performs a POST request to the `/commitment/send` mainstay service route, with payload:

```
payload = {
    commitment: commitment,
    position: 0,
    token: '4c8c006d-4cee-4fef-8e06-bb8112db6314',
};
```

`commitment` is a 32-byte value encoded as a 64-character hex string.
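Since a malformed commitment will presumably be rejected by the service, it may be worth validating the format client-side before sending. A minimal sketch (`is_valid_commitment` is a hypothetical helper, not part of the mainstay API):

```rust
// Hypothetical client-side check: a commitment must be a 32-byte value
// encoded as a 64-character hex string.
fn is_valid_commitment(s: &str) -> bool {
    s.len() == 64 && s.chars().all(|c| c.is_ascii_hexdigit())
}

fn main() {
    // A commitment of the expected shape passes...
    assert!(is_valid_commitment(
        "5555c29bc4ac63ad3aa4377d82d40460440a67f6249b463453ca6b451c94e053"
    ));
    // ...while wrong-length or non-hex input fails.
    assert!(!is_valid_commitment("abc123"));
    assert!(!is_valid_commitment(&"g".repeat(64)));
    println!("ok");
}
```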
**Contributor:**

Ideally the payload can be defined as a new tlv_stream (see https://github.com/lightning/bolts/blob/master/01-messaging.md#type-length-value-format) for future backward-compatible addition of new fields to existing message types. Then this tlv_stream can be added as the content of a nostr EVENT, signed by the client, then forwarded to civkitd.

I think it's a bit of protocol hacking, though in the future this allows nice things, like leveraging the nostr tag field to have "mempool"-like semantics for relay messages, or extracting the tlv_record to be wrapped as an onion and routed accordingly.


This can be performed using the `reqwest` http client library (as in the mercury server), e.g.

```
use std::collections::HashMap;

// `encode` is base64 encoding (e.g. the `base64` crate); `Payload` and the
// `Result` alias are defined elsewhere in the crate.
use base64::encode;
use reqwest;
use serde_json;

pub struct Request(reqwest::blocking::RequestBuilder);

impl Request {
    // Construct a request from the given payload and config
    pub fn from(
        payload: Option<&Payload>,
        command: &String,
        config: &MainstayConfig,
        signature: Option<String>,
    ) -> Result<Self> {
        // Build request
        let client = reqwest::blocking::Client::new();
        let url = reqwest::Url::parse(&format!("{}/{}", config.url(), command))?;

        // If there is a payload this is a 'POST' request, otherwise a 'GET' request
        let req = match payload {
            Some(p) => {
                let payload_str = String::from(serde_json::to_string(&p)?);
                let payload_enc = encode(payload_str);
                let mut data = HashMap::new();
                data.insert("X-MAINSTAY-PAYLOAD", &payload_enc);
                let sig_str = match signature {
                    Some(s) => s,
                    None => String::from(""),
                };
                data.insert("X-MAINSTAY-SIGNATURE", &sig_str);
                client
                    .post(url)
                    .header(reqwest::header::CONTENT_TYPE, "application/json")
                    .json(&data)
            }
            None => client
                .get(url)
                .header(reqwest::header::CONTENT_TYPE, "application/json"),
        };

        Ok(Self(req))
    }

    pub fn send(self) -> std::result::Result<reqwest::blocking::Response, reqwest::Error> {
        self.0.send()
    }
}
```

## Commitment construction

The node will construct commitments from specified *events* in `src/events.rs`. The `event_id` already hashes the full payload of the event object and can be used for the commitment directly.

Initially, assume all events are committed. Config can be added later to set commitments for specific events.

## Commitment compression

By committing a *cumulative* hash to the mainstay slot, and saving the cumulative hash alongside the `event_id`, then if individual commitment operations fail, or the mainstay service is temporarily unavailable, the unbroken sequence of events is verifiable as unique up until the latest commitment operation.

In this approach, for each *event* that occurs (in time sequence), the `event_id` is concatenated with the previous cumulative hash and committed to mainstay.

So, for the first event: `event_id` is used as the commitment and sent to the mainstay commitment endpoint. This is labeled `event_id[0]`.

For the next event (`event_id[1]`), the commitment hash is computed: `comm_hash[1] = SHA256(event_id[0] || event_id[1])` and committed to the mainstay service API.

For the next event (`event_id[2]`), the commitment hash is computed: `comm_hash[2] = SHA256(comm_hash[1] || event_id[2])` and committed to the mainstay service API.

For the next event (`event_id[3]`), the commitment hash is computed: `comm_hash[3] = SHA256(comm_hash[2] || event_id[3])` and committed to the mainstay service API.

And so on, committing the chain. `comm_hash[n]` does not strictly need to be saved as the chain can be reconstructed directly from the `event_id[n]` saved in the DB.
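The chaining above can be sketched as follows; std's `DefaultHasher` stands in for SHA256 so the example is self-contained (a real implementation would hash the raw 32-byte values, e.g. with the `sha2` crate):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for SHA256(prev || event_id); real code would hash raw bytes.
fn hash_pair(prev: u64, event_id: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    prev.hash(&mut hasher);
    event_id.hash(&mut hasher);
    hasher.finish()
}

// Rebuild the full commitment chain from the event_ids stored in the DB:
// comm_hash[n] = H(comm_hash[n-1] || event_id[n]).
fn rebuild_chain(event_ids: &[&str]) -> Vec<u64> {
    let mut prev = 0u64; // the first event is chained from a zero seed here
    event_ids
        .iter()
        .map(|id| {
            prev = hash_pair(prev, id);
            prev
        })
        .collect()
}

fn main() {
    let events = ["event-a", "event-b", "event-c"];
    let chain = rebuild_chain(&events);
    // Deterministic: replaying the stored event_ids reproduces the chain,
    // so comm_hash[n] itself never needs to be persisted.
    assert_eq!(chain, rebuild_chain(&events));
    // Tampering with an earlier event changes every later commitment.
    assert_ne!(rebuild_chain(&["event-x", "event-b", "event-c"])[2], chain[2]);
    println!("chain of {} commitments rebuilt", chain.len());
}
```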

## Proof retrieval

TODO

After each commitment, retrieve the slot proof from the mainstay server API (a call to the GET `/commitment/commitment` route with the `commitment` hex string). This will return attestation info (`TxID`) and the slot proof (Merkle proof).

```
pub struct Proof {
    merkle_root: Commitment,
    commitment: Commitment,
    ops: Vec<Commitment>,
    append: Vec<bool>,
    position: u64,
}
```

This will need to be stored in a new DB table corresponding to events.
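Once stored, a slot proof can be checked by folding `commitment` up through `ops`, using the `append` flags to decide sibling order, and comparing the result to `merkle_root`. A self-contained sketch with a stand-in hash (real verification would use SHA256 over the 32-byte values, and this reading of the `append` semantics is an assumption):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for SHA256(left || right) so the sketch runs without extra crates.
fn node_hash(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    left.hash(&mut h);
    right.hash(&mut h);
    h.finish()
}

// Fold the slot commitment up the Merkle path: for each op, `append`
// selects whether the sibling is hashed on the right (true) or left (false).
fn verify_slot_proof(commitment: u64, ops: &[(bool, u64)], merkle_root: u64) -> bool {
    let mut acc = commitment;
    for &(append, sibling) in ops {
        acc = if append {
            node_hash(acc, sibling)
        } else {
            node_hash(sibling, acc)
        };
    }
    acc == merkle_root
}

fn main() {
    // Build a tiny 4-leaf tree and verify the proof for leaf 1 (position 1).
    let (c0, c1, c2, c3) = (1u64, 2, 3, 4);
    let n01 = node_hash(c0, c1);
    let n23 = node_hash(c2, c3);
    let root = node_hash(n01, n23);
    assert!(verify_slot_proof(c1, &[(false, c0), (true, n23)], root));
    // A different commitment fails against the same path.
    assert!(!verify_slot_proof(c3, &[(false, c0), (true, n23)], root));
    println!("ok");
}
```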
**Contributor:**

I think this assumes that the mainstay server periodically looks at bitcoind to get a list of confirmed txids, and when a target "notarization_tx" (commitment / anchor naming already widely used in lightning parlance) has been included, the proof is finalized by the mainstay server and shared back to civkitd for storage and retrieval by clients.

**Author:**

Yes - the mainstay server can just be queried for txids and proofs as they become available. Checking against bitcoind is only required if you want to verify it has all been included correctly.

**Contributor:**

Good.