Currently, when we need to fetch a new kind of signed data from an API provider, we just call their signed HTTP gateway with the new parameters. With the signed API pusher in its current state, a redeployment would be needed for each new dAPI. This is by design, as it prevents the availability of the signed data from being interrupted by us. However, @metobom thinks this would require far too many redeployments, which would annoy the user.
A middle ground could be to have two groups of pushed data (see the sketch after this list):
- An established group of signed data that is pushed by a pusher with a static config
- A newer group of signed data that is pushed by a pusher with a remotely updated config, and that graduates to the first group in batches to avoid frequent redeployments
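
As a rough illustration of the two-group idea, a pusher config could keep the established Beacons baked into the deployment while fetching the newer ones remotely. This is only a minimal sketch; all field names (`staticBeacons`, `remoteBeaconsEndpoint`, etc.) are hypothetical and not part of the current pusher config schema.

```ts
// Hypothetical sketch of a pusher config split into two Beacon groups.
interface BeaconDefinition {
  airnode: string; // Airnode address of the API provider
  templateId: string; // template the signed data is derived from
}

interface PusherConfig {
  // Group 1: established Beacons, baked into the deployment (static config).
  staticBeacons: BeaconDefinition[];
  // Group 2: newer Beacons, fetched periodically from the signed API.
  // These graduate into staticBeacons in batches at the next redeployment.
  remoteBeaconsEndpoint?: string;
  remoteBeaconsFetchIntervalMs?: number;
}

// Example instance with placeholder values.
const exampleConfig: PusherConfig = {
  staticBeacons: [{ airnode: '0xA1...', templateId: '0xB2...' }],
  remoteBeaconsEndpoint: 'https://signed-api.example.com/extra-beacons',
  remoteBeaconsFetchIntervalMs: 60_000,
};
```
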
This could be implemented by the signed API having an endpoint that signals to the pusher the additional Beacons that it wants signed data pushed for. In the signed API GET interface, each Beacon would indicate (for example with a flag) whether its signed data is pushed based on a static config (safe to use) or a remotely controlled config (not as safe to use), for transparency.
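
A minimal sketch of what those two signed API surfaces could look like. The names (`ExtraBeaconsResponse`, `configSource`) and the exact response fields are assumptions for illustration, not the actual signed API schema.

```ts
// Hypothetical response of the endpoint the pusher polls to learn which
// additional Beacons it should start pushing data for.
interface ExtraBeaconsResponse {
  beacons: { airnode: string; templateId: string }[];
}

// Hypothetical per-Beacon entry of the signed API GET interface, with a flag
// that tells the consumer whether the data was pushed based on a static
// (safe) or remotely controlled (less safe) pusher config.
interface SignedDataEntry {
  airnode: string;
  templateId: string;
  timestamp: string;
  encodedValue: string;
  signature: string;
  configSource: 'static' | 'remote'; // the transparency flag
}

// A consumer could then filter on the flag, e.g. only use static-config data.
const staticConfigOnly = (entries: SignedDataEntry[]) =>
  entries.filter((entry) => entry.configSource === 'static');
```
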
Note that this can also be implemented with a scheme where the pusher config can be specified to be remote (as in https://api3workspace.slack.com/archives/C05S589E7B4/p1695730031236799) and where the API provider deploys two separate pushers, but this approach has some problems:
- Handling two deployments will be significantly more difficult for the API provider
- It assumes the remote config is provided by the pusher operator, so the remote config is allowed to control anything (such as rate limiting parameters)
- It doesn't let the signed API user differentiate between signed data that only the API provider can deny access to and signed data that the config provider can also deny access to (though this may have to be that way, as an API provider can self-host a remote config, so a remote config doesn't necessarily indicate that the data is unsafe)