3 changes: 0 additions & 3 deletions .gitmodules

This file was deleted.

21 changes: 11 additions & 10 deletions README.md
@@ -13,15 +13,16 @@ All requirements for running the API are packaged and uploaded to AWS as a lambd
- Hatch (https://hatch.pypa.io/) for building and running the project. This is a Python project manager that can be installed via `pip install hatch`.

## Deployment Steps
1. Init and update the git submodule using: `git submodule init` followed by `git submodule update` in the root of the project.
2. Install layer dependencies. cd to the `layer` directory. Run `./1-install.sh` to create a virtual Python environment and install the project dependencies via pip.
3. Package the AWS layer. In the `layer` directory, run `./2-package.sh`. This will make a file called `layer_content.zip` which will later be referenced by Open Tofu.
4. Zip all functions. AWS Lambda functions are usually deployed from zip files. cd to the main project directory and then run `./zip-all.sh`
5. Create `config.yaml` in the root directory of the repo to contain the proper values needed for the deployment. A default config.yaml for a 6-perspective deployment with the controller in us-east-2 is included in this repo as `config.example.yaml`. This config can be made the active config by running `cp config.example.yaml config.yaml` in the root directory.
6. Run `hatch run ./configure.py` from the root directory of the repo to generate Open Tofu files from templates.
7. Deploy the entire package with Open Tofu. cd to the `open-tofu` directory where .tf files are located. Then run `tofu init`. Then run `tofu apply` and type `yes` at the confirmation prompt.
8. Get the URL of the deployed API endpoint by running `hatch run ./get_api_url.py` in the root directory.
9. Get the API Key generated by AWS by running `hatch run ./get_api_key.py` in the root directory. The deployment is configured to reject any API call that does not have this key passed via the `x-api-key` HTTP header.
1. From the project root directory, run `hatch run lambda-layer:install`. This will create a virtual Python environment in the `layer` directory and install the project dependencies via pip.
2. Package the AWS layer. In the `layer` directory, run `./package.sh`. This will make two files, `python311_layer_content.zip` and `mpic_coordinator_layer_content.zip`, which will later be referenced by Open Tofu.
3. Zip all functions. AWS Lambda functions are usually deployed from zip files. cd to the main project directory and then run `./zip-all.sh`.
4. Create `config.yaml` in the root directory of the repo to contain the proper values needed for the deployment. A default config.yaml for a 6-perspective deployment with the controller in us-east-2 is included in this repo as `config.example.yaml`. This config can be made the active config by running `cp config.example.yaml config.yaml` in the root directory.
5. Run `hatch run ./configure.py` from the root directory of the repo to generate Open Tofu files from templates.
6. Deploy the entire package with Open Tofu. cd to the `open-tofu` directory where the .tf files are located. Run `tofu init`, then run `tofu apply` and type `yes` at the confirmation prompt. This provides a standard install with DNSSEC enabled, which causes the system to incur expenses even when it is not in use (due to the AWS VPC NAT Gateways needed). To reduce the AWS bill, DNSSEC can be disabled by appending `-var="dnssec_enabled=false"` to `tofu apply` (i.e., `tofu apply -var="dnssec_enabled=false"`).
7. Get the URL of the deployed API endpoint by running `hatch run ./get_api_url.py` in the root directory.
8. Get the API Key generated by AWS by running `hatch run ./get_api_key.py` in the root directory. The deployment is configured to reject any API call that does not have this key passed via the `x-api-key` HTTP header.

For convenience, `./deploy.sh` in the project root will perform all of these steps (using `-var="dnssec_enabled=false"`), with the exception of copying the example config to the operational config and running `tofu init` in the open-tofu directory.
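The numbered steps above can be sketched as a single script. This is a hedged sketch that assumes the script names and hatch environment introduced in this PR (`lambda-layer:install`, `layer/package.sh`, `zip-all.sh`); it is wrapped in a function so nothing runs until it is invoked from the repo root:

```shell
#!/bin/bash
# Hedged sketch of the manual deployment steps; invoke deploy_all from the repo root.
deploy_all() {
    set -e                                   # stop on the first failed step
    hatch run lambda-layer:install           # step 1: build the layer virtualenv
    (cd layer && ./package.sh)               # step 2: produce the two layer zips
    ./zip-all.sh                             # step 3: zip every lambda function
    [ -f config.yaml ] || cp config.example.yaml config.yaml  # step 4: activate the example config if none exists
    hatch run ./configure.py                 # step 5: render .tf files from templates
    (cd open-tofu && tofu init && tofu apply -var="dnssec_enabled=false")  # step 6: deploy, DNSSEC off to limit cost
    hatch run ./get_api_url.py               # step 7: print the endpoint URL
    hatch run ./get_api_key.py               # step 8: print the generated API key
}
```

Unlike the interactive steps above, this sketch auto-answers nothing: `tofu apply` will still prompt for `yes` unless `-auto-approve` is added as in `deploy.sh`.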

## Testing
The following is an example of a test API call that uses bash command substitution to fill in the proper values for the API URL and the API key.
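The sample itself is collapsed in this view. A minimal sketch of such a call follows; the `/mpic` path and the JSON body fields are assumptions (drawn from the Open MPIC specification style, not from this PR), and the helper scripts are assumed to print bare values. It is wrapped in a function so it only runs when invoked against a live deployment:

```shell
#!/bin/bash
# Hedged sketch of a test API call; the /mpic path and body fields are assumptions.
mpic_test_call() {
    local api_url api_key
    api_url=$(hatch run ./get_api_url.py)   # command substitution fills in the deployed URL
    api_key=$(hatch run ./get_api_key.py)   # ...and the AWS-generated API key
    curl -s -X POST "${api_url}/mpic" \
        -H "x-api-key: ${api_key}" \
        -H "Content-Type: application/json" \
        -d '{"check_type": "caa", "domain_or_ip_target": "example.com"}'
}
```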
@@ -40,7 +41,7 @@ The above sample must be run from the root directory of a deployed Open MPIC aws

The API is compliant with the [Open MPIC Specification](https://github.com/open-mpic/open-mpic-specification).

Documentation based on the API specification can be viewed [here](https://open-mpic.org/documentation.html).
Documentation based on the API specification used in this version can be viewed [here](https://open-mpic.org/documentation.html?commit=f763382c38a867dda3253afded017f9e3a24ead5).

## Development
Code changes can easily be deployed by editing the .py files, rezipping the project via `./zip-all.sh`, and rerunning `./package.sh` in the `layer` directory. Then running `tofu apply` from the open-tofu directory will update only the required resources and leave the others unchanged. If any `.tf.template` files are changed or `config.yaml` is edited, `hatch run ./configure.py` must be rerun, followed by `tofu apply` in the open-tofu directory.
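That redeploy loop can be sketched as a small helper (hedged; it assumes only .py sources changed, not templates or `config.yaml`, and uses the `package.sh` name introduced in this PR):

```shell
#!/bin/bash
# Hedged sketch of the code-change redeploy loop; invoke redeploy from the repo root
# after editing .py files.
redeploy() {
    set -e
    ./zip-all.sh                  # rezip the lambda functions
    (cd layer && ./package.sh)    # repackage the lambda layers
    (cd open-tofu && tofu apply)  # updates only the changed resources
}
```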
6 changes: 0 additions & 6 deletions clean.sh
@@ -10,15 +10,9 @@ rm open-tofu/*.generated.tf
rm -r layer/create_layer_virtualenv
rm -r layer/python311_layer_content
rm -r layer/mpic_coordinator_layer_content
rm -r layer/mpic_caa_checker_layer_content
rm -r layer/mpic_dcv_checker_layer_content
rm -r layer/mpic_common_layer_content

rm layer/python311_layer_content.zip
rm layer/mpic_coordinator_layer_content.zip
rm layer/mpic_caa_checker_layer_content.zip
rm layer/mpic_dcv_checker_layer_content.zip
rm layer/mpic_common_layer_content.zip

rm "${FUNCTIONS_DIR}"/mpic_coordinator_lambda/mpic_coordinator_lambda.zip
rm "${FUNCTIONS_DIR}"/mpic_caa_checker_lambda/mpic_caa_checker_lambda.zip
9 changes: 3 additions & 6 deletions configure.py
@@ -38,7 +38,6 @@ def main(raw_args=None):
stream.write(deployment_id_to_write)

# Read the deployment id.
deployment_id = 0
with open(args.deployment_id_file) as stream:
deployment_id = int(stream.read())

@@ -48,14 +47,14 @@ def main(raw_args=None):
try:
config = yaml.safe_load(stream)
except yaml.YAMLError as exc:
print(f"Error loading YAML config at {args.config}. Project not configured. Error details: {exec}.")
print(f"Error loading YAML config at {args.config}. Project not configured. Error details: {exc}.")
exit()
aws_available_regions = {}
with open(args.available_regions) as stream:
try:
aws_available_regions = yaml.safe_load(stream)['aws-available-regions']
except yaml.YAMLError as exc:
print(f"Error loading YAML config at {args.available_regions}. Project not configured. Error details: {exec}.")
print(f"Error loading YAML config at {args.available_regions}. Project not configured. Error details: {exc}.")
exit()

# Remove all old files.
@@ -96,7 +95,6 @@ def main(raw_args=None):
main_tf_string = main_tf_string.replace("{{absolut-max-attempts-with-key}}", f"absolute_max_attempts = \"{config['absolute-max-attempts']}\"")
else:
main_tf_string = main_tf_string.replace("{{absolut-max-attempts-with-key}}", "")


# Replace enforce distinct rir regions.
main_tf_string = main_tf_string.replace("{{enforce-distinct-rir-regions}}", f"\"{1 if config['enforce-distinct-rir-regions'] else 0}\"")
@@ -154,7 +152,6 @@ def main(raw_args=None):
# Set the RIR region to load into env variables.
aws_perspective_tf_region = aws_perspective_tf_region.replace("{{rir-region}}", f"{rir_region}")


if not args.aws_perspective_tf_template.endswith(".tf.template"):
print(f"Error: invalid tf template name: {args.aws_perspective_tf_template}. Make sure all tf template files end in '.tf.template'.")
exit()
@@ -166,4 +163,4 @@

# Main module init for direct invocation.
if __name__ == '__main__':
main()
main()
2 changes: 1 addition & 1 deletion deploy.sh
@@ -1,2 +1,2 @@
#!/bin/bash
./clean.sh; cd layer; ./1-install.sh; ./2-package.sh; cd ..; hatch run ./configure.py; ./zip-all.sh; cd open-tofu; tofu apply -auto-approve; cd ..
./clean.sh; hatch run lambda-layer:install; cd layer; ./package.sh; cd ..; hatch run ./configure.py; ./zip-all.sh; cd open-tofu; tofu apply -var="dnssec_enabled=false" -auto-approve; cd ..
5 changes: 0 additions & 5 deletions layer/1-install.sh

This file was deleted.

43 changes: 0 additions & 43 deletions layer/2-package.sh

This file was deleted.

19 changes: 19 additions & 0 deletions layer/package.sh
@@ -0,0 +1,19 @@
#!/bin/bash
# make common python3.11 layer for all lambda functions
mkdir -p python311_layer_content/python
cd python311_layer_content
cp -r ../create_layer_virtualenv/lib python/
zip -r ../python311_layer_content.zip python
cd .. # should be at layer directory

py_exclude=('*.pyc' '*__pycache__*')

# make mpic_coordinator lambda layer for mpic coordinator lambda function
mkdir -p mpic_coordinator_layer_content/python
cp -r ../resources mpic_coordinator_layer_content/python/resources # TODO consider a more elegant approach
cd mpic_coordinator_layer_content
zip -r ../mpic_coordinator_layer_content.zip python -x "${py_exclude[@]}" # Zip the mpic_coordinator lambda layer
rm -r python # clean up, mostly not to bother the IDE which will find this duplicate code!
cd .. # should be at layer directory


5 changes: 0 additions & 5 deletions layer/requirements.txt

This file was deleted.
