Merged
15 changes: 9 additions & 6 deletions README.md
@@ -13,16 +13,19 @@ All requirements for running the API are packaged and uploaded to AWS as a lambd
- Hatch (https://hatch.pypa.io/) for building and running the project. This is a Python project manager that can be installed via `pip install hatch`.

## Deployment Steps
1. From the project root directory, run `hatch run lambda-layer:install`. This will create a virtual Python environment in the `layer` directory and install the project dependencies via pip.
2. Package the AWS layer. In the `layer` directory, run `./package.sh`. This will make two files: `python311_layer_content.zip` and `mpic_coordinator_layer_content.zip` which will later be referenced by Open Tofu.
3. Zip all functions. AWS Lambda functions are usually deployed from zip files. cd to the main project directory and then run `./zip-all.sh`
4. Create `config.yaml` in the root directory of the repo to contain the proper values needed for the deployment. A default config.yaml for a 6-perspective deployment with the controller in us-east-2 is included in this repo as `config.example.yaml`. This config can be made the active config by running `cp config.example.yaml config.yaml` in the root directory.
5. Run `hatch run ./configure.py` from the root directory of the repo to generate Open Tofu files from templates.
1. Create `config.yaml` in the root directory of the repo to contain the proper values needed for the deployment. A default config.yaml for a 6-perspective deployment with the controller in us-east-2 is included in this repo as `config.example.yaml`. This config can be made the active config by running `cp config.example.yaml config.yaml` in the root directory.
2. Create a virtual Python environment in the `layer` directory and install the project dependencies via pip. This can be executed by running `hatch run lambda:layer-install`.
3. Package the two AWS layers. This creates two files, `python3_layer_content.zip` and `mpic_coordinator_layer_content.zip`, which are later referenced by Open Tofu. This can be done by running `./package-layer.sh` or `hatch run lambda:layer-package`.
4. Run `configure.py` from the root directory of the repo to generate Open Tofu files from templates, either via `hatch run ./configure.py` or `hatch run lambda:configure-tf`.
5. Zip all Lambda functions. AWS Lambda functions are usually deployed from zip files. This can be done by running `./zip-all.sh` or `hatch run lambda:zip-lambdas`.
6. Deploy the entire package with Open Tofu. cd to the `open-tofu` directory where the .tf files are located, run `tofu init`, then run `tofu apply` and type `yes` at the confirmation prompt. This provides a standard install with DNSSEC enabled, which causes the system to incur expenses even when it is not in use (due to the AWS VPC NAT Gateways needed). To reduce the AWS bill, DNSSEC can be disabled by appending `-var="dnssec_enabled=false"` to `tofu apply` (i.e., `tofu apply -var="dnssec_enabled=false"`).
7. Get the URL of the deployed API endpoint by running `hatch run ./get_api_url.py` in the root directory.
8. Get the API Key generated by AWS by running `hatch run ./get_api_key.py` in the root directory. The deployment is configured to reject any API call that does not have this key passed via the `x-api-key` HTTP header.

For convenience `./deploy.sh` in the project root will perform all of these steps (using `-var="dnssec_enabled=false"`) with the exception of copying over the example config to the operational config and running `tofu init` in the open-tofu dir.
For convenience:
* `./deploy.sh` in the project root will clean the environment and perform steps 2-6 (using `-var="dnssec_enabled=false"`), with the exception of copying over the example config to the operational config and running `tofu init` in the open-tofu dir.
* `hatch run lambda:prepare` will run steps 2-5 in a single command.
* `hatch run lambda:deploy` will clean the environment and then run steps 2-6, in the same manner as `deploy.sh`.
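As a rough sketch, the manual sequence (steps 2-6, with DNSSEC disabled) could be wrapped in a single script — assuming `hatch` and `tofu` are on `PATH` and `config.yaml` is already in place:

```shell
#!/bin/bash
# Sketch of the manual deployment sequence (steps 2-6) with DNSSEC disabled.
# Assumes hatch and tofu are on PATH and config.yaml is already in place.
deploy_mpic() {
  hatch run lambda:layer-install   # step 2: build the layer virtualenv
  ./package-layer.sh               # step 3: zip both layer archives
  hatch run ./configure.py         # step 4: render *.generated.tf from templates
  ./zip-all.sh                     # step 5: zip all Lambda functions
  (cd open-tofu && tofu init && tofu apply -var="dnssec_enabled=false")  # step 6
}

# Only attempt the deployment when the required tooling is available.
if command -v hatch >/dev/null && command -v tofu >/dev/null; then
  deploy_mpic
fi
```

This mirrors what `deploy.sh` and `hatch run lambda:deploy` do, minus the initial clean.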

## Testing
The following is an example of a test API call that uses bash command substitution to fill in the proper values for the API URL and the API key.
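The example itself is collapsed in this diff; a hypothetical call of the same shape might look like the sketch below. It assumes `get_api_url.py` and `get_api_key.py` print the bare URL and key to stdout; the `/mpic` path and the JSON body fields are illustrative assumptions, not taken from this repo.

```shell
#!/bin/bash
# Hypothetical test call. Assumes the helper scripts print the bare URL and
# key to stdout; the /mpic path and JSON fields are illustrative assumptions.
call_mpic() {
  local api_url api_key
  api_url="$(hatch run ./get_api_url.py)"
  api_key="$(hatch run ./get_api_key.py)"
  curl -sS -X POST "${api_url}/mpic" \
    -H "x-api-key: ${api_key}" \
    -H "Content-Type: application/json" \
    -d '{"check_type": "caa", "domain_or_ip_target": "example.com"}'
}

# Only attempt the call when hatch is available (i.e., in a real deployment).
if command -v hatch >/dev/null; then
  call_mpic
fi
```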
5 changes: 1 addition & 4 deletions clean.sh
@@ -7,11 +7,8 @@ FUNCTIONS_DIR="src/aws_lambda_mpic"

rm open-tofu/*.generated.tf

rm -r layer/create_layer_virtualenv
rm -r layer/python311_layer_content
rm -r layer/mpic_coordinator_layer_content

rm layer/*.zip
rm -r layer/create_layer_virtualenv

rm "${FUNCTIONS_DIR}"/mpic_coordinator_lambda/mpic_coordinator_lambda.zip
rm "${FUNCTIONS_DIR}"/mpic_caa_checker_lambda/mpic_caa_checker_lambda.zip
2 changes: 1 addition & 1 deletion deploy.sh
@@ -1,2 +1,2 @@
#!/bin/bash
./clean.sh; hatch run lambda-layer:install; cd layer; ./package.sh; cd ..; hatch run ./configure.py; ./zip-all.sh; cd open-tofu; tofu apply -var="dnssec_enabled=false" -auto-approve; cd ..
./clean.sh; hatch env prune; hatch run lambda:layer-install; ./package-layer.sh; hatch run ./configure.py; ./zip-all.sh; cd open-tofu; tofu apply -var="dnssec_enabled=false" -auto-approve; cd ..
19 changes: 0 additions & 19 deletions layer/package.sh

This file was deleted.

12 changes: 6 additions & 6 deletions open-tofu/aws-perspective.tf.template
@@ -1,8 +1,8 @@
# Each layer must be created in the region of the functions.
resource "aws_lambda_layer_version" "python311_open_mpic_layer_{{region}}" {
filename = "../layer/python311_layer_content.zip"
layer_name = "python311_open_mpic_layer_{{region}}_{{deployment-id}}"
source_code_hash = "${filebase64sha256("../layer/python311_layer_content.zip")}"
resource "aws_lambda_layer_version" "python3_open_mpic_layer_{{region}}" {
filename = "../layer/python3_layer_content.zip"
layer_name = "python3_open_mpic_layer_{{region}}_{{deployment-id}}"
source_code_hash = "${filebase64sha256("../layer/python3_layer_content.zip")}"
compatible_runtimes = ["python3.11"]
provider = aws.{{region}}
}
@@ -211,7 +211,7 @@ resource "aws_lambda_function" "mpic_dcv_checker_lambda_{{region}}" {
runtime = "python3.11"
architectures = ["arm64"]
layers = [
aws_lambda_layer_version.python311_open_mpic_layer_{{region}}.arn,
aws_lambda_layer_version.python3_open_mpic_layer_{{region}}.arn,
]
vpc_config {
subnet_ids = [for s in aws_subnet.subnet_private_{{region}} : s.id]
@@ -241,7 +241,7 @@ resource "aws_lambda_function" "mpic_caa_checker_lambda_{{region}}" {
runtime = "python3.11"
architectures = ["arm64"]
layers = [
aws_lambda_layer_version.python311_open_mpic_layer_{{region}}.arn,
aws_lambda_layer_version.python3_open_mpic_layer_{{region}}.arn,
]
vpc_config {
subnet_ids = [for s in aws_subnet.subnet_private_{{region}} : s.id]
10 changes: 5 additions & 5 deletions open-tofu/main.tf.template
@@ -5,10 +5,10 @@ provider "aws" {
}

# Python open-mpic layer (contains third party libraries)
resource "aws_lambda_layer_version" "python311_open_mpic_layer" {
filename = "../layer/python311_layer_content.zip"
layer_name = "python_311_open_mpic_layer_{{deployment-id}}"
source_code_hash = "${filebase64sha256("../layer/python311_layer_content.zip")}"
resource "aws_lambda_layer_version" "python3_open_mpic_layer" {
filename = "../layer/python3_layer_content.zip"
layer_name = "python3_open_mpic_layer_{{deployment-id}}"
source_code_hash = "${filebase64sha256("../layer/python3_layer_content.zip")}"
compatible_runtimes = ["python3.11"]
}

@@ -75,7 +75,7 @@ resource "aws_lambda_function" "mpic_coordinator_lambda" {
architectures = ["arm64"]
timeout = 60
layers = [
aws_lambda_layer_version.python311_open_mpic_layer.arn,
aws_lambda_layer_version.python3_open_mpic_layer.arn,
aws_lambda_layer_version.mpic_coordinator_layer.arn,
]
environment {
2 changes: 1 addition & 1 deletion open-tofu/variables.tf
@@ -1,5 +1,5 @@
variable "dnssec_enabled" {
type = bool
description = "Enable DNSSEC"
default = true
default = false
}
17 changes: 17 additions & 0 deletions package-layer.sh
@@ -0,0 +1,17 @@
#!/bin/bash
# make common python3 layer for all lambda functions
mkdir -p layer/python3_layer_content/python
cp -r layer/create_layer_virtualenv/lib layer/python3_layer_content/python/
(cd layer/python3_layer_content && zip -r ../python3_layer_content.zip python)

py_exclude=("*.pyc" "*__pycache__*" "*.pyo" "*.pyd")

# make mpic_coordinator lambda layer for mpic coordinator lambda function
mkdir -p layer/mpic_coordinator_layer_content/python
cp -r resources layer/mpic_coordinator_layer_content/python/resources # TODO consider a more elegant approach
# Zip the mpic_coordinator lambda layer
(cd layer/mpic_coordinator_layer_content && zip -r ../mpic_coordinator_layer_content.zip python -x "${py_exclude[@]}")

# clean up, mostly for the IDE which could otherwise detect duplicate code
rm -r layer/python3_layer_content
rm -r layer/mpic_coordinator_layer_content
37 changes: 31 additions & 6 deletions pyproject.toml
@@ -26,15 +26,16 @@ classifiers = [
"Programming Language :: Python :: Implementation :: PyPy",
]
dependencies = [
#"open-mpic-core @ git+https://github.com/open-mpic/open-mpic-core-python.git@ds-trace-logging",
# "open-mpic-core @ git+https://github.com/open-mpic/open-mpic-core-python.git@birgelee-dcv-caa-perspective-code-response",
"pyyaml==6.0.1",
"requests>=2.32.3",
"dnspython==2.6.1",
"pydantic==2.8.2",
"aiohttp==3.11.11",
"aws-lambda-powertools[parser]==3.2.0",
"open-mpic-core==4.6.1",
"open-mpic-core==5.0.0",
"aioboto3~=13.3.0",
"black==24.8.0",
]

[project.optional-dependencies]
@@ -54,6 +55,10 @@ Source = "https://github.com/open-mpic/aws-lambda-python"
#[dirs.env]
#virtual = ".hatch"

[tool.api]
spec_version = "3.0.0"
spec_repository = "https://github.com/open-mpic/open-mpic-specification"

[tool.hatch]
version.path = "src/aws_lambda_mpic/__about__.py"
build.sources = ["src", "resources"]
@@ -69,20 +74,37 @@ PIP_INDEX_URL = "https://pypi.org/simple/"
#PIP_EXTRA_INDEX_URL = "https://test.pypi.org/simple/" # FIXME here temporarily to test open-mpic-core packaging
PIP_VERBOSE = "1"

[tool.hatch.envs.lambda-layer]
[tool.hatch.envs.lambda]
skip-install = true
python = "3.11"
type="virtual"
path="layer/create_layer_virtualenv"

[tool.hatch.envs.lambda-layer.env-vars]
[tool.hatch.envs.lambda.env-vars]
#PIP_EXTRA_INDEX_URL = "https://test.pypi.org/simple/"
PIP_ONLY_BINARY = ":all:"
#PIP_PLATFORM = "manylinux2014_aarch64"
#PIP_TARGET = "layer/create_layer_virtualenv2/lib/python3.11/site-packages" # does not work... bug in pip 24.2?

[tool.hatch.envs.lambda-layer.scripts]
install = "pip install . --platform manylinux2014_aarch64 --only-binary=:all: --target layer/create_layer_virtualenv/lib/python3.11/site-packages"
[tool.hatch.envs.lambda.scripts]
layer-install = "pip install . --platform manylinux2014_aarch64 --only-binary=:all: --target layer/create_layer_virtualenv/lib/python3.11/site-packages"
layer-package = "sh ./package-layer.sh"
configure-tf = "python configure.py"
zip-lambdas = "sh ./zip-all.sh"
apply-tf = "(cd open-tofu && tofu apply -var=\"dnssec_enabled=false\" -auto-approve)"
prepare = [
"layer-install",
"layer-package",
"configure-tf",
"zip-lambdas"
]
clean = "sh ./clean.sh"
deploy = [
"clean",
"prepare",
"apply-tf"
]


[tool.hatch.envs.test]
skip-install = false
Expand Down Expand Up @@ -135,3 +157,6 @@ include_namespace_packages = true
omit = [
"*/src/*/__about__.py",
]

[tool.black]
line-length = 120
2 changes: 1 addition & 1 deletion src/aws_lambda_mpic/__about__.py
@@ -1 +1 @@
__version__ = "0.4.0"
__version__ = "0.4.1"
@@ -3,26 +3,25 @@

from aws_lambda_powertools.utilities.parser import event_parser

from open_mpic_core.common_domain.check_request import CaaCheckRequest
from open_mpic_core.mpic_caa_checker.mpic_caa_checker import MpicCaaChecker
from open_mpic_core.common_util.trace_level_logger import get_logger
from open_mpic_core import CaaCheckRequest
from open_mpic_core import MpicCaaChecker
from open_mpic_core import get_logger

logger = get_logger(__name__)


class MpicCaaCheckerLambdaHandler:
def __init__(self):
self.perspective_code = os.environ['AWS_REGION']
self.default_caa_domain_list = os.environ['default_caa_domains'].split("|")
self.log_level = os.environ['log_level'] if 'log_level' in os.environ else None
self.default_caa_domain_list = os.environ["default_caa_domains"].split("|")
self.log_level = os.environ["log_level"] if "log_level" in os.environ else None

self.logger = logger.getChild(self.__class__.__name__)
if self.log_level:
self.logger.setLevel(self.log_level)

self.caa_checker = MpicCaaChecker(default_caa_domain_list=self.default_caa_domain_list,
perspective_code=self.perspective_code,
log_level=self.logger.level)
self.caa_checker = MpicCaaChecker(
default_caa_domain_list=self.default_caa_domain_list, log_level=self.logger.level
)

def process_invocation(self, caa_request: CaaCheckRequest):
try:
@@ -34,9 +33,9 @@ def process_invocation(self, caa_request: CaaCheckRequest):

caa_response = event_loop.run_until_complete(self.caa_checker.check_caa(caa_request))
result = {
'statusCode': 200, # note: must be snakeCase
'headers': {'Content-Type': 'application/json'},
'body': caa_response.model_dump_json()
"statusCode": 200, # note: must be camelCase
"headers": {"Content-Type": "application/json"},
"body": caa_response.model_dump_json(),
}
return result
