diff --git a/docs/search/search_index.json b/docs/search/search_index.json index c22113e..8f662c3 100644 --- a/docs/search/search_index.json +++ b/docs/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Platform Engineering on Google Cloud","text":"
Platform engineering is an emerging practice that organizations adopt to enable cross-functional collaboration and deliver business value faster. It treats internal groups, such as application developers, operators, security teams, and infrastructure admins, as customers and provides them with foundational platforms that accelerate their work. The key goals of platform engineering are self-service for everything, golden paths, improved collaboration, and abstraction of technical complexity, all of which simplify the software development lifecycle and contribute to delivering business value to consumers. Platform engineering is especially effective in cloud computing because it helps realize the benefits of the cloud, such as automation, security, productivity, and faster time-to-market.
"},{"location":"#overview","title":"Overview","text":"Google Cloud offers decomposable, elastic, secure, scalable, and cost-efficient tools built on the guiding principles of platform engineering. With a focus on developer experience and innovation, and with practices like SRE embedded into the tools, they are a good place to begin your platform journey, empowering developers to enhance their experience and increase their productivity.
This repository contains a collection of guides, examples, and design patterns spanning Google Cloud products and best-in-class OSS tools, which you can use to help build an internal developer platform.
For more information, see Platform Engineering on Google Cloud.
"},{"location":"#resources","title":"Resources","text":""},{"location":"#design-patterns","title":"Design Patterns","text":"Copy any code you need from this repository into your own project.
Warning: Do not depend directly on the samples in this repository. Breaking changes may be made at any time without warning.
"},{"location":"#contributing-changes","title":"Contributing changes","text":"Entirely new samples are not accepted. Bugfixes are welcome, either as pull requests or as GitHub issues.
See CONTRIBUTING.md for details on how to contribute.
"},{"location":"#licensing","title":"Licensing","text":"Copyright 2024 Google LLC Code in this repository is licensed under the Apache License 2.0. See LICENSE.
"},{"location":"code-of-conduct/","title":"Code of Conduct","text":""},{"location":"code-of-conduct/#our-pledge","title":"Our Pledge","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
"},{"location":"code-of-conduct/#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to creating a positive environment include:
Examples of unacceptable behavior by participants include:
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
"},{"location":"code-of-conduct/#scope","title":"Scope","text":"This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project email address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
This Code of Conduct also applies outside the project spaces when the Project Steward has a reasonable belief that an individual's behavior may have a negative impact on the project or its community.
"},{"location":"code-of-conduct/#conflict-resolution","title":"Conflict Resolution","text":"We do not believe that all conflict is bad; healthy debate and disagreement often yield positive results. However, it is never okay to be disrespectful or to engage in behavior that violates the project\u2019s code of conduct.
If you see someone violating the code of conduct, you are encouraged to address the behavior directly with those involved. Many issues can be resolved quickly and easily, and this gives people more control over the outcome of their dispute. If you are unable to resolve the matter for any reason, or if the behavior is threatening or harassing, report it. We are dedicated to providing an environment where participants feel welcome and safe.
Reports should be directed to [PROJECT STEWARD NAME(s) AND EMAIL(s)], the Project Steward(s) for [PROJECT NAME]. It is the Project Steward\u2019s duty to receive and address reported violations of the code of conduct. They will then work with a committee consisting of representatives from the Open Source Programs Office and the Google Open Source Strategy team. If for any reason you are uncomfortable reaching out to the Project Steward, please email opensource@google.com.
We will investigate every complaint, but you may not receive a direct response. We will use our discretion in determining when and how to follow up on reported incidents, which may range from not taking action to permanent expulsion from the project and project-sponsored spaces. We will notify the accused of the report and provide them an opportunity to discuss it before any action is taken. The identity of the reporter will be omitted from the details of the report supplied to the accused. In potentially harmful situations, such as ongoing harassment or threats to anyone's safety, we may take action without notice.
"},{"location":"code-of-conduct/#attribution","title":"Attribution","text":"This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
"},{"location":"contributing/","title":"How to Contribute","text":"We'd love to accept your patches and contributions to this project.
"},{"location":"contributing/#before-you-begin","title":"Before you begin","text":""},{"location":"contributing/#sign-our-contributor-license-agreement","title":"Sign our Contributor License Agreement","text":"Contributions to this project must be accompanied by a Contributor License Agreement (CLA). You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project.
If you or your current employer have already signed the Google CLA (even if it was for a different project), you probably don't need to do it again.
Visit https://cla.developers.google.com/ to see your current agreements or to sign a new one.
"},{"location":"contributing/#review-our-community-guidelines","title":"Review our Community Guidelines","text":"This project follows Google's Open Source Community Guidelines.
"},{"location":"contributing/#contribution-process","title":"Contribution process","text":""},{"location":"contributing/#code-reviews","title":"Code Reviews","text":"All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.
"},{"location":"contributing/#development-guide","title":"Development guide","text":"This document contains technical information about contributing to this repository.
"},{"location":"contributing/#site","title":"Site","text":"This repository includes scripts and configuration to build a site using Material for MkDocs:
config/mkdocs: MkDocs configuration files. scripts/run-mkdocs.sh: script to build the site. .github/workflows/documentation.yaml: GitHub Actions workflow that builds the site and pushes a commit with changes on the current branch. To build the site, run the following command from the root of the repository:
scripts/run-mkdocs.sh\n"},{"location":"contributing/#preview-the-site","title":"Preview the site","text":"To preview the site, run the following command from the root of the repository:
scripts/run-mkdocs.sh \"serve\"\n"},{"location":"contributing/#linting-and-formatting","title":"Linting and formatting","text":"We configured several linters and formatters for code and documentation in this repository. Linting and formatting checks run as part of CI workflows.
Linting and formatting checks are configured to check changed files only by default. If you change the configuration of any linter or formatter, these checks run against the entire repository.
To run linting and formatting checks locally, run the following command:
scripts/lint.sh\n To automatically fix certain linting and formatting errors, run the following command:
LINTER_CONTAINER_FIX_MODE=\"true\" scripts/lint.sh\n"},{"location":"reference-architectures/accelerating-migrations/","title":"Accelerate migrations through platform engineering golden paths","text":"This document helps you adopt platform engineering by designing a process to onboard and migrate your existing applications to use your internal developer platform (IDP). It also provides guidance to help you evaluate the opportunity to design a platform engineering process, and to explore how it might function. Google Cloud provides tools, products, guidance, and professional services to help you adopt platform engineering in your environments.
This document is aimed at the following personas:
The Cloud Native Computing Foundation defines a golden path as an integrated bundle of templates and documentation for rapid project development. Designing and developing golden paths can help facilitate the onboarding and the migration of existing applications to your IDP. When you use a golden path, your development and operations teams can take advantage of benefits like the following:
Onboarding and migrating existing applications to the IDP lets you experience the benefits of adopting platform engineering gradually and incrementally in your organization, without spending effort on large-scale migration projects.
To migrate applications and onboard them to the IDP, we recommend that you design an application onboarding and migration process. This document describes a reference application onboarding and migration process. We recommend that you tailor the process to your requirements and your IDP.
If you're migrating your applications from your on-premises environment or from another cloud provider to Google Cloud, the application onboarding and migration process can help you to accelerate your migration. In that scenario, the teams that are managing the migration can refer to well-established golden paths, instead of having to design their own migration processes and project templates.
"},{"location":"reference-architectures/accelerating-migrations/#application-onboarding-and-migration-process","title":"Application onboarding and migration process","text":"The goal of the application onboarding and migration process is to get an application on the IDP. After you onboard and migrate the application to the IDP, your teams can benefit from using the IDP. When you use an IDP, you can focus on providing business value for the application, rather than spending effort on ad-hoc processes and operations.
To manage the complexity of the application onboarding and migration process, we recommend that you design the process in the following phases:
The high-level structure of this process matches the Google Cloud migration path. In this case, you follow the migration path to onboard and migrate existing applications to the IDP.
To ensure that the application onboarding and migration is on the right track, we recommend that you design validation checkpoints for each phase of the process, rather than having a single acceptance testing task. Having validation checkpoints for each phase helps you to promptly detect issues as they arise, rather than when you are close to the end of the migration.
Even when following a phased process, onboarding and migrating complex applications to the IDP might require a significant effort, and it might pose risks. To manage the effort and the risks of onboarding and migrating complex applications to the IDP, you can follow the onboarding and migration process iteratively, by migrating parts of the application on each iteration. For example, if an application is composed of multiple components, you can onboard and migrate one component for each iteration of the process.
To reduce toil, we recommend that you thoroughly document the application onboarding and migration process, and make it as self-service as possible, in line with platform-engineering principles.
In this document, we assume that the onboarding and migration process involves three teams:
The following sections describe each phase of the application onboarding and migration process.
"},{"location":"reference-architectures/accelerating-migrations/#intake-the-onboarding-and-migration-request","title":"Intake the onboarding and migration request","text":"The first phase of the application onboarding and migration process is to intake the request to onboard and migrate the application. The request process is the following:
We recommend that you keep this phase as light as possible by using a form or a guided, self-service process. For example, you can include migration guidance in the IDP documentation so that development teams can review it and prepare for the migration. You can also implement automated checks in your IDP to give initial feedback to development teams about potential migration blockers and issues.
To assist the teams that filed or intend to file an application onboarding and migration request, we recommend that the team that manages the IDP establish communication channels to offer consultation to other teams. For example, the team that manages the IDP might set up dedicated discussion groups, chat rooms, and office hours where they can offer help and answer questions about the IDP. To help with the onboarding and migration of complex applications and to facilitate communications, you can also embed a member of the team that manages the IDP in the application team while the migration is in progress.
"},{"location":"reference-architectures/accelerating-migrations/#plan-application-onboarding-and-migration","title":"Plan application onboarding and migration","text":"As part of this phase, we recommend that the application onboarding and migration team start drafting an onboarding and migration plan, even if the team doesn't have all of the data points to fully define it. As the team progresses through the assessment phase, they gather the information to finalize and validate the plan.
To manage the complexity of the migration plan, we recommend that you decompose it across the following sub-tasks:
Developing a comprehensive onboarding and migration plan is crucial to the success of the application onboarding and migration process. Having a plan helps you to define clear deadlines, assign responsibilities, and deal with unanticipated issues.
"},{"location":"reference-architectures/accelerating-migrations/#assess-the-application","title":"Assess the application","text":"The second phase of the application onboarding and migration process is to follow up on the intake request by assessing the application to onboard and migrate to the IDP. The goal of this assessment phase is to produce the following artifacts:
These outputs of the assessment phase help you to plan and complete the migration. The outputs also help you to scope the enhancements that the IDP needs to support the application, and to increase the velocity of future migrations.
To manage the complexity of the assessment phase, we recommend that you decompose it into the following steps:
The preceding steps are described in the following sections. For more information about assessing applications and defining migration plans, see Migrate to Google Cloud: Assess and discover your workloads.
"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-design","title":"Review the application design","text":"To gather a comprehensive understanding about the design of the application, we recommend that you complete a thorough assessment of the following aspects of the application:
Understanding the application architecture helps you to design and implement an effective onboarding and migration process for your application. It also helps you anticipate issues and potential problems that might arise during the migration. For example, if the architecture of your application to onboard and migrate to the IDP isn't compatible with your IDP, you might need to spend additional effort to refactor the application and enhance the IDP.
The application to onboard and migrate to the IDP might have dependencies on systems and data that are outside the scope of the application. To understand these dependencies, we recommend that you gather information about any reliance of your application on external systems and data, such as databases, datasets, and APIs. After you gather information, you classify the dependencies in order of importance and criticality. For example, your application might need access to a database to store persistent data, and to external APIs to integrate with to provide critical functionality to users, while it might have an optional dependency on a caching system.
Understanding the dependencies of your application on external systems and data is crucial to plan for continued access to these dependencies during and after the migration.
"},{"location":"reference-architectures/accelerating-migrations/#review-application-dependencies","title":"Review application dependencies","text":""},{"location":"reference-architectures/accelerating-migrations/#review-cicd-processes","title":"Review CI/CD processes","text":"After you review the application design and its dependencies, we recommend that you refine the assessment about your application's deployable artifacts by reviewing your application's CI/CD processes. These processes usually let you build the artifacts to deploy the application and let you deploy them in your runtime environments. For example, you refine the assessment by answering questions about these CI/CD processes, such as the following:
Understanding how the application's CI/CD processes work helps you evaluate whether your IDP can support these CI/CD processes as is, or if you need to enhance your IDP to support them. For example, if your application has a business-critical requirement on a canary deployment process and your IDP doesn't support it, you might need to factor in additional effort to enhance the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#review-data-persistence-and-data-management-requirements","title":"Review data persistence and data management requirements","text":"By completing the previous tasks of the assessment phase, you gathered information about the statefulness of the application and about the systems that the application uses to store persistent and transient data. In this section, you refine the assessment to develop a deeper understanding of the systems that the application uses to store stateful data. We recommend that you gather information on data persistence and data management requirements of your application. For example, you refine the assessment by answering questions such as the following:
Understanding your application's data persistence and data management requirements helps you to ensure that your IDP and your production environment can effectively support the application. This understanding can also help you determine whether you need to enhance the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#review-finops-requirements","title":"Review FinOps requirements","text":"As part of the assessment of your application, we recommend that you gather data about the FinOps requirements of your application, such as budget control and cost management, and evaluate whether your IDP supports them. For example, the application might require mechanisms to control spending and manage costs, and to send alerts when necessary. The application might also require mechanisms to completely stop spending when it reaches a certain budget limit.
Understanding your application's FinOps requirements helps you to ensure that you keep your application costs under control. It also helps you to establish proper cost attribution and cost optimization practices.
"},{"location":"reference-architectures/accelerating-migrations/#review-compliance-requirements","title":"Review compliance requirements","text":"The application to onboard and migrate to the IDP and its runtime environment might have to meet compliance requirements, especially in regulated industries. We recommend that you assess the compliance requirements of the application, and evaluate if the IDP already supports them. For example, the application might require isolation from other workloads, or it might have data locality requirements.
Understanding your application's compliance requirements helps you to scope the necessary refactoring and enhancements for your application and for the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-team-practices","title":"Review the application team practices","text":"After you review the application, we recommend that you gather information about team practices and the methodologies for developing and operating the application. For example, the team might already have adopted DevOps principles, they might be already implementing Site Reliability Engineering (SRE), or they might be already familiar with platform engineering and with the IDP.
By gathering information about the team that develops and operates the application to migrate, you gain insights about the experience and the maturity of that team. You also learn whether there's a need to spend effort to train team members to proficiently use the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#assess-application-refactoring-and-the-idp","title":"Assess application refactoring and the IDP","text":"After you gather information about the application, its development and operation teams, and its requirements, you evaluate the following:
The goal of this task is to answer the following questions:
By answering these questions, you focus on evaluating potential onboarding and migration blockers. For example, you might experience the following onboarding and migration blockers:
The application development and operations team is responsible for the application refactoring tasks.
When you scope the enhancements that the IDP needs to support the application, we recommend that you frame these enhancements in the broader vision that you have for the IDP, and not as a standalone exercise. We also recommend that you consider your IDP as a product for which you should develop a path to success. For example, if you're considering adding a new service to the IDP, we recommend that you evaluate how that service fits in the path to success for your IDP, in addition to the technical feasibility of the initiative.
By assessing the refactoring effort that's required to onboard and migrate the application, you develop a comprehensive understanding of the tasks that you need to complete to refactor the application and how you need to enhance the IDP to support the application.
"},{"location":"reference-architectures/accelerating-migrations/#finalize-the-application-onboarding-and-migration-plan","title":"Finalize the application onboarding and migration plan","text":"To complete the assessment phase, you finalize the application onboarding and migration plan with consideration of the data that you gathered. To finalize the plan, you do the following:
After you complete the assessment phase, you use its outputs to:
In the assessment phase, you scoped the enhancements that the IDP needs to support the application and how those enhancements fit in your plans for the IDP. By completing this task, you design and implement the enhancements. For example, you might need to enhance the IDP as follows:
By enhancing the IDP to support the application, you unblock the migration. You also help streamline processes for onboarding and migration projects for other applications that might need those IDP enhancements.
"},{"location":"reference-architectures/accelerating-migrations/#configure-the-idp","title":"Configure the IDP","text":"After you enhance the IDP, if needed, you configure it to provide the resources that the application needs. For example, you configure the following IDP services for the application, or a subset of services:
By configuring the IDP, you prepare it to host the application that you want to onboard and migrate.
"},{"location":"reference-architectures/accelerating-migrations/#onboard-and-migrate-the-application","title":"Onboard and migrate the application","text":"In this phase, you onboard and migrate the application to the IDP by completing the following tasks:
By completing the preceding tasks, you onboard and migrate the application to the IDP. The following sections describe these tasks in more detail.
"},{"location":"reference-architectures/accelerating-migrations/#refactor-the-application","title":"Refactor the application","text":"In the assessment phase, you scoped the refactoring that your application needs in order to onboard and migrate it to the IDP. By completing this task, you design and implement the refactoring. For example, you might need to refactor your application in the following ways in order to meet the IDP's requirements:
By refactoring the application, you prepare it for onboarding and migration to the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#configure-cicd-workflows","title":"Configure CI/CD workflows","text":"After you refactor the application, you do the following:
To build deployable artifacts and deploy them in your runtime environments, we recommend that you avoid manual processes. Instead of manual processes, configure CI/CD workflows by using the application delivery services that the IDP provides and store deployable artifacts in IDP-managed artifact repositories. For example, you can configure CI/CD workflows by using the following methods:
When you build the CI/CD workflows for your environment, consider how many runtime environments the IDP supports. For example, the IDP might support different runtime environments that are isolated from each other such as the following:
If the IDP supports multiple runtime environments for the application, you need to configure the CI/CD workflows for the application to support promoting the application's deployable artifact. You should plan for promoting the application from development to staging, and then from staging to production.
When you promote the application from one environment to the next environment, we recommend that you avoid rebuilding the application's deployable artifacts. Rebuilding creates new artifacts, which means that you would be deploying something different than what you tested and validated.
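As a concrete illustration of promoting without rebuilding, the following sketch pins the tested artifact by its digest and reuses that exact reference in every environment. The image path, digest value, and environment names are hypothetical placeholders, not part of a prescribed process:

```shell
# Sketch: promote the same tested artifact across environments by
# pinning its digest instead of rebuilding per environment.
# IMAGE and DIGEST are illustrative placeholders.
IMAGE="us-central1-docker.pkg.dev/my-project/app-images/my-app"
# In practice, resolve the digest once from the build that produced
# the tested artifact (for example, by describing the image with
# gcloud; exact flags depend on your setup).
DIGEST="sha256:0123456789abcdef"
for ENV in development staging production; do
  # Each environment deploys the identical, immutable artifact.
  echo "deploy ${IMAGE}@${DIGEST} to ${ENV}"
done
```

Because every environment resolves the same digest, what reaches production is byte-for-byte what was validated in staging.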
"},{"location":"reference-architectures/accelerating-migrations/#migrate-deployable-artifacts-from-the-source-environment","title":"Migrate deployable artifacts from the source environment","text":"If you need to support rolling back to previous versions of the application, you can migrate previous versions of the deployable artifacts that you built for the application from the source environment to an IDP-managed artifact repository. For example, if your application is containerized, you can migrate the container images that you built to deploy the application to Artifact Registry.
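For example, a minimal sketch of moving an existing container image into an IDP-managed Artifact Registry repository might look like the following. The project, region, repository, image, and tag names are all illustrative assumptions; the registry commands are shown as comments because they require access to both registries:

```shell
# Sketch: copy a previously built container image into an
# IDP-managed Artifact Registry repository to support rollbacks.
# All names below are illustrative placeholders.
PROJECT_ID="my-project"
REGION="us-central1"
REPO="app-images"
IMAGE="my-app"
TAG="1.4.2"
# Artifact Registry Docker paths follow LOCATION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG.
TARGET="${REGION}-docker.pkg.dev/${PROJECT_ID}/${REPO}/${IMAGE}:${TAG}"

# Create the repository once:
#   gcloud artifacts repositories create "${REPO}" \
#     --repository-format=docker --location="${REGION}"
# Re-tag and push the image pulled from the source registry:
#   docker pull source-registry.example.com/${IMAGE}:${TAG}
#   docker tag  source-registry.example.com/${IMAGE}:${TAG} "${TARGET}"
#   docker push "${TARGET}"

echo "${TARGET}"
```

Repeating these steps for each previous version that you need to retain keeps rollback targets available in the IDP-managed repository.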
"},{"location":"reference-architectures/accelerating-migrations/#deploy-the-application-in-the-development-environment","title":"Deploy the application in the development environment","text":"After you configure CI/CD workflows to build deployable artifacts for the application and to promote them from one environment to another, you deploy the application in the development environment by using those workflows.
By using CI/CD workflows to build deployable artifacts and deploy the application, you avoid manual processes that are less repeatable and more prone to errors. You also validate that the CI/CD workflows work as expected.
"},{"location":"reference-architectures/accelerating-migrations/#promote-from-development-to-staging","title":"Promote from development to staging","text":"To promote your application from the development environment to the staging environment, you do the following:
By promoting the application from the development environment to the staging environment, you accomplish the following:
After you promote the application to your staging environment, you perform extensive acceptance testing for both functional and non-functional requirements. When you perform acceptance testing, we recommend that you validate that the user journeys and the business processes that the application implements are working properly in situations that resemble real-world usage scenarios. For example, when you perform acceptance testing, you can do the following:
Acceptance testing helps you ensure that the application works as expected in an environment that resembles the production environment, and helps you identify unanticipated issues.
"},{"location":"reference-architectures/accelerating-migrations/#migrate-data","title":"Migrate data","text":"After you complete acceptance testing for the application, you migrate data from the source environment to IDP-managed services such as the following:
To migrate data from your source environment to IDP-managed services, you can choose approaches like the following, depending on your requirements:
Each of the preceding approaches focuses on solving specific issues, and there's no approach that's inherently better than the others. For more information about migrating data to Google Cloud and choosing the best data migration approach for your application, see Migrate to Google Cloud: Transfer your large datasets.
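As one hedged example of the simplest approach, a one-time bulk copy from a source object store into an IDP-managed Cloud Storage bucket could be sketched as follows. The bucket names are illustrative placeholders, and the copy command is shown as a comment because it requires access to both environments:

```shell
# Sketch: one-time bulk copy of application data from a source
# bucket to an IDP-managed Cloud Storage bucket.
# Bucket names are illustrative placeholders.
SRC_BUCKET="gs://legacy-app-data"
DST_BUCKET="gs://idp-app-data"
# Example copy (requires credentials for both buckets):
#   gcloud storage cp --recursive "${SRC_BUCKET}/*" "${DST_BUCKET}/"
echo "copy ${SRC_BUCKET} -> ${DST_BUCKET}"
```

A one-time copy like this suits data that can tolerate a freeze window; continuous replication approaches suit data that changes during the migration.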
If your data is stored in services managed by other cloud providers, see the following resources:
Migrating data from one environment to another is a complex task. If you think that the data migration is too complex to handle as part of the application onboarding and migration process, consider migrating data as part of a dedicated migration project.
"},{"location":"reference-architectures/accelerating-migrations/#promote-from-staging-to-production","title":"Promote from staging to production","text":"After you complete data migration and acceptance testing, you promote the application to the production environment. To complete this task, you do the following:
When you check the application's operational readiness before you promote it from the staging environment to the production environment, you ensure that the application is ready for the production environment.
"},{"location":"reference-architectures/accelerating-migrations/#perform-the-cutover","title":"Perform the cutover","text":"After you promote the application to the production environment and ensure that it works as expected, you configure the production environment to gradually route requests for the application to the newly promoted application release. For example, you can implement a canary deployment strategy that uses Cloud Deploy.
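The gradual routing step can be sketched as a canary loop that shifts traffic in increasing increments, validating between steps. The service name, revision name, and percentages are hypothetical, and the traffic-shifting command is shown as a comment because the exact mechanism depends on your runtime and on how your IDP wraps it:

```shell
# Sketch: gradually shift traffic to the newly promoted release,
# validating health between steps. Names are illustrative placeholders.
SERVICE="my-app"
NEW_REVISION="my-app-canary"
for PERCENT in 5 25 50 100; do
  # Example traffic shift (mechanism depends on your runtime, for
  # example a Cloud Deploy canary strategy or a service mesh route):
  #   route ${PERCENT}% of ${SERVICE} traffic to ${NEW_REVISION}
  echo "routing ${PERCENT}% of ${SERVICE} traffic to ${NEW_REVISION}"
  # Validate SLOs and error rates here before the next increment,
  # and roll back if validation fails.
done
```

Pausing between increments gives you a checkpoint at which you can roll back with minimal user impact if the new release misbehaves.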
After you validate that the application continues to work as expected while the number of requests to the newly promoted application increases, you do the following:
Before you retire the application in the source environment, we recommend that you prepare backups and a rollback plan. Doing so will help you handle unanticipated issues that might force you to go back to using the source environment.
"},{"location":"reference-architectures/accelerating-migrations/#optimize-the-application","title":"Optimize the application","text":"Optimization is the last phase of the onboarding and migration process. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. For each iteration, you do the following:
You repeat the preceding sequence until you achieve your optimization goals.
For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization.
The following sections build on the considerations in Migrate to Google Cloud: Optimize your environment.
"},{"location":"reference-architectures/accelerating-migrations/#establish-your-optimization-requirements","title":"Establish your optimization requirements","text":"Optimization requirements help you to narrow the scope of the current optimization iteration. To establish your optimization requirements for the application, start by considering the following aspects:
For each aspect, we recommend that you establish your optimization requirements for the application. Then, you set measurable optimization goals to meet those requirements. For more information about optimization requirements and goals, see Establish your optimization requirements and goals.
After you meet the optimization requirements for the application, you have completed the onboarding and migration process for the application.
"},{"location":"reference-architectures/accelerating-migrations/#optimize-the-onboarding-and-migration-process-and-the-idp","title":"Optimize the onboarding and migration process and the IDP","text":"After you onboard and migrate the application, you use the data that you gathered about the process and about the IDP to refine and optimize the process. Similarly to the optimization phase for your application, you complete the tasks that are described in the optimization phase, but with a focus on the onboarding and migration process and on the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#establish-your-optimization-requirements-for-the-idp","title":"Establish your optimization requirements for the IDP","text":"To narrow down the scope to optimize the onboarding and migration process, and the IDP, you establish optimization requirements according to data you gather while running through the process. For example, during the onboarding and migration of an application, you might face unanticipated issues that involve the process and the IDP, such as:
To address the issues that arise while you're onboarding and migrating an application, you establish optimization requirements. For example, you might establish the following optimization requirements to address the example issues described above:
After establishing optimization requirements, you set measurable optimization goals to meet those requirements. For more information about optimization requirements and goals, see Establish your optimization requirements and goals.
"},{"location":"reference-architectures/accelerating-migrations/#application-onboarding-and-migration-example","title":"Application onboarding and migration example","text":"In this section, you explore how the onboarding and migration process looks like for an example. The example that we describe in this section doesn't represent a real production application.
To reduce the scope of the example, we focus the example on the following environments:
This document focuses on the onboarding and migration process. For more information about migrating from Amazon EKS to GKE, see Migrate from AWS to Google Cloud: Migrate from Amazon EKS to GKE.
To onboard and migrate the application on the IDP, you follow the onboarding and migration process.
"},{"location":"reference-architectures/accelerating-migrations/#intake-the-onboarding-and-migration-request-example","title":"Intake the onboarding and migration request (example)","text":"In this example, the application onboarding and migration team files a request to onboard and migrate the application on the IDP. To fully present the onboarding and migration process, we assume that IDP cannot find an existing golden path to suggest to onboard and migrate the application, so it forwards the request to the team that manages the IDP for further evaluation.
"},{"location":"reference-architectures/accelerating-migrations/#plan-application-onboarding-and-migration-example","title":"Plan application onboarding and migration (example)","text":"To define timelines and milestones to onboard and migrate the application on the IDP, the application onboarding and migration team prepares a countdown plan:
| Phase | Task | Countdown [days] | Status |
| --- | --- | --- | --- |
| Assess the application | Review the application design | -27 | Not started |
| | Review application dependencies | -23 | Not started |
| | Review CI/CD processes | -21 | Not started |
| | Review data persistence and data management requirements | -21 | Not started |
| | Review FinOps requirements | -20 | Not started |
| | Review compliance requirements | -20 | Not started |
| | Review the application's team practices | -19 | Not started |
| | Assess application refactoring and the IDP | -19 | Not started |
| | Finalize the application onboarding and migration plan | -18 | Not started |
| Set up the IDP | Enhance the IDP | N/A | Not necessary |
| | Configure the IDP | -17 | Not started |
| Onboard and migrate the application | Refactor the application | -15 | Not started |
| | Configure CI/CD workflows | -9 | Not started |
| | Promote from development to staging | -6 | Not started |
| | Perform acceptance testing | -5 | Not started |
| | Migrate data | -3 | Not started |
| | Promote from staging to production | -1 | Not started |
| | Perform the cutover | 0 | Not started |
| Optimize the application | Assess your current environment, teams, and optimization loop | 1 | Not started |
| | Establish your optimization requirements and goals | 1 | Not started |
| | Optimize your environment and your teams | 3 | Not started |
| | Tune the optimization loop | 5 | Not started |

To clearly outline responsibility assignments, the application onboarding and migration team defines the following RACI matrix for each phase and task of the process:
| Phase | Task | Application onboarding and migration team | Application development and operations team | IDP team |
| --- | --- | --- | --- | --- |
| Assess the application | Review the application design | Responsible | Accountable | Informed |
| | Review application dependencies | Responsible | Accountable | Informed |
| | Review CI/CD processes | Responsible | Accountable | Informed |
| | Review data persistence and data management requirements | Responsible | Accountable | Informed |
| | Review FinOps requirements | Responsible | Accountable | Informed |
| | Review compliance requirements | Responsible | Accountable | Informed |
| | Review the application's team practices | Responsible | Accountable | Informed |
| | Assess application refactoring and the IDP | Responsible | Accountable | Consulted |
| | Plan application onboarding and migration | Responsible | Accountable | Consulted |
| Set up the IDP | Enhance the IDP | Accountable | Consulted | Responsible |
| | Configure the IDP | Responsible, Accountable | Consulted | Consulted |
| Onboard and migrate the application | Refactor the application | Accountable | Responsible | Consulted |
| | Configure CI/CD workflows | Responsible, Accountable | Consulted | Consulted |
| | Promote from development to staging | Responsible, Accountable | Consulted | Informed |
| | Perform acceptance testing | Responsible, Accountable | Consulted | Informed |
| | Migrate data | Responsible, Accountable | Consulted | Consulted |
| | Promote from staging to production | Responsible, Accountable | Consulted | Informed |
| | Perform the cutover | Responsible, Accountable | Consulted | Informed |
| Optimize the application | Assess your current environment, teams, and optimization loop | Informed | Responsible, Accountable | Informed |
| | Establish your optimization requirements and goals | Informed | Responsible, Accountable | Informed |
| | Optimize your environment and your teams | Informed | Responsible, Accountable | Informed |
| | Tune the optimization loop | Informed | Responsible, Accountable | Informed |

"},{"location":"reference-architectures/accelerating-migrations/#assess-the-application-example","title":"Assess the application (example)","text":"In the assessment phase, the application onboarding and migration team assesses the application by completing the assessment phase tasks.
"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-design-example","title":"Review the application design (example)","text":"The application onboarding and migration team reviews the application design, and gathers the following information:
Network and connectivity requirements. The application needs:
The application doesn't require any specific service mesh configuration.
Statefulness. The application stores persistent data on Amazon Relational Database Service (Amazon RDS) for PostgreSQL and on Amazon Simple Storage Service (Amazon S3).
The application onboarding and migration team reviews dependencies on systems that are outside the scope of the application, and gathers the following information:
The application onboarding and migration team reviews the application's CI/CD processes, and gathers the following information:
The application onboarding and migration team reviews data persistence and data management requirements, and gathers the following information:
The application onboarding and migration team is also tasked to migrate data from Amazon RDS for PostgreSQL and Amazon S3 to database and object storage services offered by the IDP. In this example, the IDP offers Cloud SQL for PostgreSQL as a database service, and Cloud Storage as an object storage service.
As part of this application dependency review, the application onboarding and migration team assesses the application's Amazon RDS database and the Amazon S3 buckets. For simplicity, we omit details about those assessments from this example. For more information about assessing Amazon RDS and Amazon S3, see the Assess the source environment sections in the following documents:
The application onboarding and migration team reviews FinOps requirements, and gathers the following information:
The application onboarding and migration team reviews compliance requirements, and gathers the following information:
The application onboarding and migration team reviews development and operational practices that the application development and operations team has in place, and gathers the following information:
The application onboarding and migration team suggests the following:
After reviewing the application and its related CI/CD process, the application onboarding and migration team assesses the refactoring that the application needs before it can be onboarded and migrated on the IDP, and scopes the following refactoring tasks:
The application onboarding and migration team evaluates the IDP against the application's requirements, and concludes that:
After completing the application review, the application onboarding and migration team refines the onboarding and migration plan, and validates it in collaboration with technical and non-technical stakeholders.
"},{"location":"reference-architectures/accelerating-migrations/#set-up-the-idp-example","title":"Set up the IDP (example)","text":"After you assess the application and plan the onboarding and migration process, you set up the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#enhance-the-idp-example","title":"Enhance the IDP (example)","text":"The IDP team doesn't need to enhance the IDP to onboard and migrate the application because:
The application onboarding and migration team configures the runtime environments for the application using the IDP: a development environment, a staging environment, and a production environment. For each environment, the application onboarding and migration team completes the following tasks:
Configures foundational services:
Provisions and configures a GKE cluster for the application.
To onboard and migrate the application, the application development and operations team refactors the application and then the application onboarding and migration team proceeds with the onboarding and migration process.
"},{"location":"reference-architectures/accelerating-migrations/#refactor-the-application-example","title":"Refactor the application (example)","text":"The application development and operations team refactors the application as follows:
To configure CI/CD workflows, the application onboarding and migration team does the following:
After deploying the application in the development environment, the application onboarding and migration team:
After promoting the application from the development environment to the staging environment, the application onboarding and migration team performs acceptance testing.
To perform acceptance testing to validate the application's real-world user journeys and business processes, the application onboarding and migration team consults with the application development and operations team.
The application onboarding and migration team performs acceptance testing as follows:
Validates that the application works as designed under degraded conditions, and that it recovers once the issues are resolved. The application onboarding and migration team tests the following scenarios:
Verifies that observability and alerting for the application are correctly configured.
After completing acceptance testing for the application, the application onboarding and migration team migrates data from the source environment to the Google Cloud environment as follows:
For simplicity, this document doesn't describe the details of migrating from Amazon RDS and Amazon S3 to Google Cloud. For more information about migrating from Amazon RDS and Amazon S3 to Google Cloud, see:
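As a rough illustration of the object-storage part of such a migration: the bucket names below are placeholders, and the commands assume configured AWS and Google Cloud credentials, so they are shown commented:

```shell
# Copy objects from Amazon S3 to Cloud Storage directly (small datasets):
# gcloud storage cp -r s3://source-bucket gs://destination-bucket
# For large datasets, a managed Storage Transfer Service job is preferable:
# gcloud transfer jobs create s3://source-bucket gs://destination-bucket
SRC="s3://source-bucket"; DST="gs://destination-bucket"
echo "transfer: ${SRC} -> ${DST}"
```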
After performing acceptance testing and after migrating data to the Google Cloud environment, the application onboarding and migration team:
Ensures the application's operational readiness by verifying that the application:
Correctly connects to the Cloud SQL for PostgreSQL instance
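One way to sketch that connectivity check from a workstation uses the Cloud SQL Auth Proxy; the project, instance, user, database, and secret names below are placeholders, and the calls that need real resources are commented:

```shell
# Run the Cloud SQL Auth Proxy against the instance (placeholder names),
# then issue a trivial query through it:
# ./cloud-sql-proxy my-project:us-central1:my-instance --port 5432 &
# PGPASSWORD="$(gcloud secrets versions access latest --secret=my-db-secret)" \
#   psql -h 127.0.0.1 -p 5432 -U appuser -d appdb -c 'SELECT 1;'
DB_USER="appuser"; DB_NAME="appdb"
echo "readiness check: ${DB_USER}@${DB_NAME}"
```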
After promoting the application to the production environment, and ensuring that the application is operationally ready, the application onboarding and migration team:
After performing the cutover, the application development and operations team takes over the maintenance of the application, and establishes the following optimization requirements:
Reduce the application's operational costs by:
After establishing optimization requirements, the application development and operations team completes the rest of the tasks of the optimization phase.
"},{"location":"reference-architectures/accelerating-migrations/#whats-next","title":"What's next","text":"Authors:
Other contributors:
Secrets rotation is a broadly accepted best practice across the information technology industry. However, it is often a cumbersome and disruptive process. In this guide, you use Google Cloud tools to automate the process of rotating passwords for a Cloud SQL instance. This method can easily be extended to other tools and types of secrets.
"},{"location":"reference-architectures/automated-password-rotation/#storing-passwords-in-google-cloud","title":"Storing passwords in Google Cloud","text":"In Google Cloud, secrets including passwords can be stored using many different tools including common open source tools such as Vault, however in this guide, you will use Secret Manager, Google Cloud's fully managed product for securely storing secrets. Regardless of the tool you use, passwords stored should be further secured. When using Secret Manager, following are some of the ways you can further secure your secrets:
Limiting access: The secrets should be readable and writable only through service accounts via IAM roles. Follow the principle of least privilege when granting roles to service accounts.
Encryption: The secrets should be encrypted. Secret Manager encrypts secrets at rest using AES-256 by default, but you can use your own customer-managed encryption keys (CMEK) to encrypt your secrets at rest. For details, see Enable customer-managed encryption keys for Secret Manager.
Password rotation: Passwords stored in Secret Manager should be rotated on a regular basis to reduce the risk of a security incident.
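For example, limiting read access to a single service account might look like the following sketch; the service account name is a placeholder, and the gcloud call requires an authenticated session, so it is shown commented:

```shell
SECRET_ID="cloudsql-pswd"
SA_EMAIL="rotator-sa@my-project.iam.gserviceaccount.com"   # placeholder
ROLE="roles/secretmanager.secretAccessor"  # read-only; grant write roles separately and sparingly
# gcloud secrets add-iam-policy-binding "${SECRET_ID}" \
#   --member="serviceAccount:${SA_EMAIL}" \
#   --role="${ROLE}"
echo "${ROLE}"
```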
Security best practices require regularly rotating the passwords in your stack. Changing a password mitigates the risk if the password is compromised.
"},{"location":"reference-architectures/automated-password-rotation/#how-to-rotate-passwords","title":"How to rotate passwords","text":"Manually rotating the passwords is an antipattern and should not be done as it exposes the password to the human rotating it and may result in security and system incidents. Manual rotation processes also introduce the risk that the rotation isn't actually performed due to human error, for example forgetting or typos.
This necessitates a workflow that automates password rotation. The password might belong to an application, a database, a third-party service, or a SaaS vendor.
"},{"location":"reference-architectures/automated-password-rotation/#automatic-password-rotation","title":"Automatic password rotation","text":"Typically, rotating a password requires these steps:
(such as applications,databases, SaaS).
Update Secret Manager to store the new password.
Restart the applications that use that password. This will make the
application source the latest passwords.
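The rotation steps above can be sketched in shell as follows; the instance, user, secret, and deployment names are placeholders, and the calls that need real resources are commented:

```shell
# 1. Generate a new random password.
NEW_PW="$(openssl rand -base64 24)"
# 2. Change the password in the target system, e.g. a Cloud SQL user:
# gcloud sql users set-password app-user --instance=my-instance --password="${NEW_PW}"
# 3. Store the new password as a new secret version in Secret Manager:
# printf '%s' "${NEW_PW}" | gcloud secrets versions add db-password --data-file=-
# 4. Restart consumers so that they source the latest password:
# kubectl rollout restart deployment/my-app
echo "generated a ${#NEW_PW}-character password"
```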
The following architecture represents a general design for a system that can rotate passwords for any underlying software or system.
"},{"location":"reference-architectures/automated-password-rotation/#workflow","title":"Workflow","text":"The following architecture demonstrates a way to automatically rotate CloudSQL password.
"},{"location":"reference-architectures/automated-password-rotation/#workflow-of-the-example-deployment","title":"Workflow of the example deployment","text":"Note : The architecture doesn't show the flow to restart the application after the password rotation as shown in thee Generic architecture but it can be added easily with minimal changes to the Terraform code.
"},{"location":"reference-architectures/automated-password-rotation/#deploy-the-architecture","title":"Deploy the architecture","text":"The code to build the architecture has been provided with this repository. Follow these instructions to create the architecture and use it:
Open Cloud Shell on Google Cloud Console and log in with your credentials.
If you want to use an existing project, get the project owner role (roles/owner) on the project and set the environment variable in Cloud Shell as shown below. Then, move to step 4.
#set shell environment variable\n export PROJECT_ID=<PROJECT_ID>\n Replace <PROJECT_ID> with the ID of the existing project.
If you want to create a new Google Cloud project, run the following commands in Cloud Shell.
#set shell environment variable\n export PROJECT_ID=<PROJECT_ID>\n #create project\n gcloud projects create ${PROJECT_ID} --folder=<FOLDER_ID>\n #associate the project with billing account\n gcloud billing projects link ${PROJECT_ID} --billing-account=<BILLING_ACCOUNT_ID>\n Replace <PROJECT_ID> with the ID of the new project, <FOLDER_ID> with the ID of the folder in which to create the project, and <BILLING_ACCOUNT_ID> with the billing account ID that the project should be associated with.
Set the project ID in Cloud Shell and enable APIs in the project:
gcloud config set project ${PROJECT_ID}\n gcloud services enable \\\n cloudresourcemanager.googleapis.com \\\n serviceusage.googleapis.com \\\n --project ${PROJECT_ID}\n Download the Git repository containing the code to build the example architecture:
cd ~\n git clone https://github.com/GoogleCloudPlatform/platform-engineering\n cd platform-engineering/reference-architectures/automated-password-rotation/terraform\n\n terraform init\n terraform plan -var \"project_id=$PROJECT_ID\"\n terraform apply -var \"project_id=$PROJECT_ID\" --auto-approve\n Note: It takes around 30 mins for the entire architecture to get deployed.
Once the Terraform apply has successfully finished, the example architecture is deployed in your Google Cloud project. Before you exercise the rotation process, review and verify the deployment in the Google Cloud Console.
"},{"location":"reference-architectures/automated-password-rotation/#review-cloud-sql-database","title":"Review Cloud SQL database","text":"Databases > SQL. Confirm that cloudsql-for-pg is present in the instance list.cloudsql-for-pg, to open the instance details page.Users. Confirm you see a user with the name user1.Databases. Confirm you see see a database named test.Overview.Connect to this instance section, note that only Private IP address is present and no public IP address. This restricts access to the instance over public network.Security > Secret Manager. Confirm that cloudsql-pswd is present in the list.cloudsql-pswd.View secret value to view the password for Cloud SQL database.Integration Services > Cloud Scheduler. Confirm that password-rotator-job is present in the Scheduler Jobs list.password-rotator-job, confirm it is configured to run on 1st of every month.Click Continue to see execution configuration. Confirm the following settings:
- Target type is Pub/Sub.
- Select a Cloud Pub/Sub topic is set to pswd-rotation-topic.
- Message body contains a JSON object with the details of the Cloud SQL instance and secret to be rotated.

Click Cancel to exit the Cloud Scheduler job details.
Navigate to Analytics > Pub/Sub, then Topics. Confirm that pswd-rotation-topic is present in the topics list.
Click pswd-rotation-topic.
On the Subscriptions tab, click the Subscription ID for the rotator Cloud Function.
Open the Details tab. Confirm that the Audience tag shows the rotator Cloud Function.
Navigate back to the topic pswd-rotation-topic and open its Details tab.
In the Schema name field, click Details, and confirm that the schema contains these keys: secretid, instance_name, db_user, db_name, and db_location. These keys identify which database and user password is to be rotated.
Navigate to Serverless > Cloud Run Functions. Confirm that pswd_rotator_function is present in the list.
Click pswd_rotator_function.
Open the Trigger tab. Confirm that the Receive events from field shows the Pub/Sub topic pswd-rotation-topic. This indicates that the function runs when a message arrives on that topic.
Open the Details tab. Confirm that under Network Settings, the VPC connector is set to connector-for-sql. This allows the function to connect to Cloud SQL over private IPs.
Open the Source tab to see the Python code that the function executes.
Note: For the purposes of this tutorial, the secret is accessible to human users and not encrypted with customer-managed keys. See the Storing passwords in Google Cloud section and Secret Manager best practices.
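A message matching that schema (keys secretid, instance_name, db_user, db_name, db_location) might look like the following sketch; the values mirror this tutorial's resource names except db_location, which is a placeholder region, and publishing requires an authenticated session, so the gcloud call is commented:

```shell
# Sample rotation request; db_location is a placeholder value.
MSG='{"secretid":"cloudsql-pswd","instance_name":"cloudsql-for-pg","db_user":"user1","db_name":"test","db_location":"us-central1"}'
# Validate the JSON locally before publishing:
echo "${MSG}" | python3 -m json.tool > /dev/null && echo "valid JSON"
# gcloud pubsub topics publish pswd-rotation-topic --message="${MSG}"
```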
"},{"location":"reference-architectures/automated-password-rotation/#verify-that-you-are-able-to-connect-to-the-cloud-sql-instance","title":"Verify that you are able to connect to the Cloud SQL instance","text":"Databases > SQLcloudsql-for-pgCloud SQL Studio.Database dropdown, choose test.User dropdown, choose user1.Password textbox paste the password copied from the cloudsql-pswd secret.Authenticate. Confirm you were able to log in to the database.Typically, the Cloud Scheduler will automatically run on 1st day of every month triggering password rotation. However, for this tutorial you will run the Cloud Scheduler job manually, which causes the Cloud Run Function to generate a new password, update it in Cloud SQL and store it in Secret Manager.
Navigate to Integration Services > Cloud Scheduler.
Find password-rotator-job, click the three dots icon, and select Force run.
Confirm that Status of last execution shows Success.
Navigate to Serverless > Cloud Run Functions and click pswd_rotator_function.
Open the Logs tab. Confirm that you see the log entries Secret cloudsql-pswd changed in Secret Manager!, DB password changed successfully!, and DB password verified successfully!.
Navigate to Security > Secret Manager. Confirm that cloudsql-pswd is present in the list.
Click cloudsql-pswd. Note that you should now see a new version, version 2, of the secret.
Click View secret value to view the new password for the Cloud SQL database.
Navigate to Databases > SQL, click cloudsql-for-pg, then click Cloud SQL Studio.
In the Database dropdown, choose test.
In the User dropdown, choose user1.
In the Password textbox, paste the password copied from the cloudsql-pswd secret.
Click Authenticate. Confirm that you were able to log in to the database.
To clean up, destroy the deployed resources:
 cd platform-engineering/reference-architectures/automated-password-rotation/terraform\n\n terraform init\n terraform plan -var \"project_id=$PROJECT_ID\"\n terraform destroy -var \"project_id=$PROJECT_ID\" --auto-approve\n"},{"location":"reference-architectures/automated-password-rotation/#conclusion","title":"Conclusion","text":"In this tutorial, you saw a way to automate password rotation on Google Cloud. First, you saw a generic reference architecture that can be used to automate password rotation in any password management system. Then, you saw an example deployment that uses Google Cloud services to rotate the password of a Cloud SQL database and store it in Google Cloud Secret Manager.
Implementing an automatic flow to rotate passwords removes manual overhead and provides a seamless way to tighten your password security. We recommend creating an automation flow that runs on a regular schedule but can also be easily triggered manually when needed. Many variations of this architecture can be adopted. For example, you can trigger a Cloud Run function directly from a Cloud Scheduler job, without sending a message to Pub/Sub, if you don't want to broadcast the password rotation. Identify a flow that fits your organization's requirements and modify the reference architecture to implement it.
"},{"location":"reference-architectures/backstage/","title":"Backstage on Google Cloud","text":"A collection of resources related to utilizing Backstage on Google Cloud.
"},{"location":"reference-architectures/backstage/#backstage-plugins-for-google-cloud","title":"Backstage Plugins for Google Cloud","text":"A repository for various plugins can be found here -> google-cloud-backstage-plugins
"},{"location":"reference-architectures/backstage/#backstage-quickstart","title":"Backstage Quickstart","text":"This is an example deployment of Backstage on Google Cloud with various Google Cloud services providing the infrastructure.
"},{"location":"reference-architectures/backstage/backstage-quickstart/","title":"Backstage on Google Cloud Quickstart","text":"This quick-start deployment guide can be used to set up an environment to familiarize yourself with the architecture and get an understanding of the concepts related to hosting Backstage on Google Cloud.
NOTE: This environment is not intended to be long lived. It is intended for temporary demonstration and learning purposes. You will need to modify the provided configurations to align with your organization's needs. Along the way, the guide calls out tasks or areas that should be productionized for long-lived deployments.
"},{"location":"reference-architectures/backstage/backstage-quickstart/#architecture","title":"Architecture","text":"The following diagram depicts the high level architecture of the infrastucture that will be deployed.
"},{"location":"reference-architectures/backstage/backstage-quickstart/#requirements-and-assumptions","title":"Requirements and Assumptions","text":"To keep this guide simple it makes a few assumptions. Where the are alternatives we have linked to some additional documentation.
In this section you prepare a folder for deployment.
In this section you prepare your project for deployment.
Go to the project selector page in the Cloud Console. Select or create a Cloud project.
Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
In Cloud Shell, set environment variables with the ID of your project:
export PROJECT_ID=<INSERT_YOUR_PROJECT_ID>\ngcloud config set project \"${PROJECT_ID}\"\n Clone the repository and change directory to the guide directory
git clone https://github.com/GoogleCloudPlatform/platform-engineering && \\\ncd platform-engineering/reference-architectures/backstage/backstage-quickstart\n Set environment variables
export BACKSTAGE_QS_BASE_DIR=$(pwd) && \\\nsed -n -i -e '/^export BACKSTAGE_QS_BASE_DIR=/!p' -i -e '$aexport \\\nBACKSTAGE_QS_BASE_DIR=\"'\"${BACKSTAGE_QS_BASE_DIR}\"'\"' ${HOME}/.bashrc\n Set the project environment variables in Cloud Shell
export BACKSTAGE_QS_STATE_BUCKET=\"${PROJECT_ID}-terraform\"\nexport IAP_USER_DOMAIN=\"<your org's domain>\"\nexport IAP_SUPPORT_EMAIL=\"<your org's support email>\"\n Create a Cloud Storage bucket to store the Terraform state
gcloud storage buckets create gs://${BACKSTAGE_QS_STATE_BUCKET} --project ${PROJECT_ID}\n Before running Terraform, make sure that the Service Usage API and Service Management API are enabled.
Enable Service Usage API and Service Management API
gcloud services enable \\\n cloudresourcemanager.googleapis.com \\\n iap.googleapis.com \\\n serviceusage.googleapis.com \\\n servicemanagement.googleapis.com\n Setup the Identity Aware Proxy brand
gcloud iap oauth-brands create \\\n --application_title=\"IAP Secured Backstage\" \\\n --project=\"${PROJECT_ID}\" \\\n --support_email=\"${IAP_SUPPORT_EMAIL}\"\n Capture the brand name in an environment variable. It will be in the format projects/[your_project_number]/brands/[your_project_number].
export IAP_BRAND=<your_brand_name>\n Using the brand name create the IAP client.
gcloud iap oauth-clients create \\\n ${IAP_BRAND} \\\n --display_name=\"IAP Secured Backstage\"\n Capture the client_id and client_secret in environment variables. For the client_id, we only need the last value of the string; it will be in the format of: 549085115274-ksi3n9n41tp1vif79dda5ofauk0ebes9.apps.googleusercontent.com
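Extracting that last segment can be done with shell parameter expansion; the full client name below is a made-up example in the resource-name format that `gcloud iap oauth-clients create` returns (an assumption about the output shape):

```shell
# Example full resource name (placeholder project number and client id):
FULL_NAME="projects/123456/brands/123456/identityAwareProxyClients/549085115274-ksi3n9n41tp1vif79dda5ofauk0ebes9.apps.googleusercontent.com"
# Keep only the text after the final slash:
export IAP_CLIENT_ID="${FULL_NAME##*/}"
echo "${IAP_CLIENT_ID}"
```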
export IAP_CLIENT_ID=\"<your_client_id>\"\nexport IAP_SECRET=\"<your_iap_secret>\"\n Set the configuration variables
sed -i \"s/YOUR_STATE_BUCKET/${BACKSTAGE_QS_STATE_BUCKET}/g\" ${BACKSTAGE_QS_BASE_DIR}/backend.tf\nsed -i \"s/YOUR_PROJECT_ID/${PROJECT_ID}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_USER_DOMAIN/${IAP_USER_DOMAIN}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_SUPPORT_EMAIL/${IAP_SUPPORT_EMAIL}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_CLIENT_ID/${IAP_CLIENT_ID}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_SECRET/${IAP_SECRET}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\n Create the resources
cd ${BACKSTAGE_QS_BASE_DIR} && \\\nterraform init && \\\nterraform plan -input=false -out=tfplan && \\\nterraform apply -input=false tfplan && \\\nrm tfplan\n The initial run of the Terraform may result in errors due to the way the API services are asynchronously enabled. Re-running the Terraform usually resolves the errors.
This will take a while to create all of the required resources; expect somewhere between 15 and 20 minutes.
Build the container image for Backstage
cd manifests/cloudbuild\ngcloud builds submit .\n The output of that command will include a fully qualified image path similar to: us-central1-docker.pkg.dev/[your_project]/backstage-qs/backstage-quickstart:d747db2a-deef-4783-8a0e-3b36e568f6fc Using that value create a new environment variable.
export IMAGE_PATH=\"<your_image_path>\"\n This will take approximately 10 minutes to build and push the image.
Configure Cloud SQL postgres user for password authentication.
gcloud sql users set-password postgres --instance=backstage-qs --prompt-for-password\n Grant the backstage workload service account create database permissions.
a. In the Cloud Console, navigate to SQL
b. Select the database instance
c. In the left menu select Cloud SQL Studio
d. Choose the postgres database and log in with the postgres user and password you created in step 4.
e. Run the following SQL command to grant database-creation permissions
ALTER USER \"backstage-qs-workload@[your_project_id].iam\" CREATEDB;\n Perform an initial deployment of Kubernetes resources.
cd ../k8s\nsed -i \"s%CONTAINER_IMAGE%${IMAGE_PATH}%g\" deployment.yaml\ngcloud container clusters get-credentials backstage-qs --region us-central1 --dns-endpoint\nkubectl apply -f .\n Capture the IAP audience; the Backend Service may take a few minutes to appear.
a. In the Cloud Console, navigate to Security > Identity-Aware Proxy
b. Verify the IAP option is set to enabled. If not, enable it now.
c. Choose Get JWT audience code from the three-dot menu on the right side of your Backend Service.
d. The value will be in the format /projects/<your_project_number>/global/backendServices/<numeric_id>. Using that value, create a new environment variable.
export IAP_AUDIENCE_VALUE=\"<your_iap_audience_value>\"\n Redeploy the Kubernetes manifests with the IAP audience
sed -i \"s%IAP_AUDIENCE_VALUE%${IAP_AUDIENCE_VALUE}%g\" deployment.yaml\nkubectl apply -f .\n In a browser navigate to you backstage endpoint. The URL will be similar to https://qs.endpoints.[your_project_id].cloud.goog
Destroy the resources using Terraform destroy
cd ${BACKSTAGE_QS_BASE_DIR} && \\\nterraform init && \\\nterraform destroy -auto-approve && \\\nrm -rf .terraform .terraform.lock.hcl\n Delete the project
gcloud projects delete ${PROJECT_ID}\n Remove Terraform files and temporary files
cd ${BACKSTAGE_QS_BASE_DIR} && \\\nrm -rf \\\n.terraform \\\n.terraform.lock.hcl \\\ninitialize/.terraform \\\ninitialize/.terraform.lock.hcl \\\ninitialize/backend.tf.local \\\ninitialize/state\n Reset the TF variables file
cd ${BACKSTAGE_QS_BASE_DIR} && \\\ncp backstage-qs-auto.tfvars.local backstage-qs.auto.tfvars\n Remove the environment variables
sed \\\n-i -e '/^export BACKSTAGE_QS_BASE_DIR=/d' \\\n${HOME}/.bashrc\n In some instances you will need to create and manage the project through Terraform. This quickstart provides a sample process and Terraform configuration to create and destroy the project.
To run this part of the quickstart you will need the following information and permissions.
roles/billing.user IAM permissions on the billing account specified; roles/resourcemanager.projectCreator IAM permissions on the organization or folder specified. Set the configuration variables
nano ${BACKSTAGE_QS_BASE_DIR}/initialize/initialize.auto.tfvars\n environment_name = \"qs\"\niapUserDomain = \"\"\niapSupportEmail = \"\"\nproject = {\n billing_account_id = \"XXXXXX-XXXXXX-XXXXXX\"\n folder_id = \"############\"\n name = \"backstage\"\n org_id = \"############\"\n}\n Values required :
environment_name: the name of the environment (defaults to qs for quickstart); iapUserDomain: the root domain of the GCP Org that the Backstage users will be in; iapSupportEmail: support contact for the IAP brand; project.billing_account_id: the billing account ID; project.name: the prefix for the display name of the project (the full name will be <project.name>-<environment_name>); project.folder_id OR project.org_id; project.folder_id: the Google Cloud folder ID; project.org_id: the Google Cloud organization ID. Authorize gcloud
gcloud auth login --activate --no-launch-browser --quiet --update-adc\n Create a new project
cd ${BACKSTAGE_QS_BASE_DIR}/initialize\nterraform init && \\\nterraform plan -input=false -out=tfplan && \\\nterraform apply -input=false tfplan && \\\nrm tfplan && \\\nterraform init -force-copy -migrate-state && \\\nrm -rf state\n Set the project environment variables in Cloud Shell
PROJECT_ID=$(grep environment_project_id \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars |\nawk -F\"=\" '{print $2}' | xargs)\n Destroy the project
cd ${BACKSTAGE_QS_BASE_DIR}/initialize && \\\nTERRAFORM_BUCKET_NAME=$(grep bucket backend.tf | awk -F\"=\" '{print $2}' |\nxargs) && \\\ncp backend.tf.local backend.tf && \\\nterraform init -force-copy -lock=false -migrate-state && \\\ngsutil -m rm -rf gs://${TERRAFORM_BUCKET_NAME}/* && \\\nterraform init && \\\nterraform destroy -auto-approve && \\\nrm -rf .terraform .terraform.lock.hcl state/\n In situations where you have run this quickstart before and then cleaned up the resources but are re-using the project, it might be necessary to restore the endpoints service from its deleted state first.
BACKSTAGE_QS_PREFIX=$(grep environment_name \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars | awk -F\"=\" '{print $2}' | xargs)\nBACKSTAGE_QS_PROJECT_ID=$(grep environment_project_id \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars | awk -F\"=\" '{print $2}' | xargs)\ngcloud endpoints services undelete \\\n${BACKSTAGE_QS_PREFIX}.endpoints.${BACKSTAGE_QS_PROJECT_ID}.cloud.goog \\\n--quiet 2>/dev/null\n"},{"location":"reference-architectures/cloud_deploy_flow/","title":"Platform Engineering Deployment Demo","text":""},{"location":"reference-architectures/cloud_deploy_flow/#background","title":"Background","text":"Platform engineering focuses on providing a robust framework for managing the deployment of applications across various environments. One of the critical components in this field is the automation of application deployments, which streamlines the entire process from development to production.
Most organizations have predefined rules around security, privacy, deployment, and change management to ensure consistency and compliance across environments. These rules often include automated security scans, privacy checks, and controlled release protocols that track all changes in both production and pre-production environments.
In this demo, the architecture is designed to show how a deployment tool like Cloud Deploy can integrate smoothly into such workflows, supporting both automation and oversight. The process starts with release validation, ensuring that only compliant builds reach the release stage. Rollout approvals then offer flexibility, allowing teams to implement either manual checks or automated responses depending on specific requirements.
This setup provides a blueprint for organizations to streamline deployment cycles while maintaining robust governance. By using this demo, you can see how these components work together, from container build through deployment, in a way that minimizes disruption to existing processes and aligns with typical organizational change management practices.
This demo showcases a complete workflow that begins with the build of a container and progresses through various stages, ultimately resulting in the deployment of a new application.
"},{"location":"reference-architectures/cloud_deploy_flow/#overview-of-the-demo","title":"Overview of the Demo","text":"This demo illustrates the end-to-end deployment process, starting from the container build phase. Here's a high-level overview of the workflow:
Container Build Process: The demo begins when a container is built in Cloud Build. Upon completion, a notification is sent to a Pub/Sub message queue.
Release Logic: A Cloud Run Function subscribes to this message queue, assessing whether a release should be created. If a release is warranted, a message is sent to a \"Command Queue\" (another Pub/Sub topic).
Creating a Release: A dedicated function listens to the \"Command Queue\" and communicates with Cloud Deploy to create a new release. Once the release is created, a notification is dispatched to the Pub/Sub Operations topic.
Rollout Process: Another Cloud Function picks up this notification and initiates the rollout process by sending a createRolloutRequest to the \"Command Queue.\"
Approval Process: Since rollouts typically require approval, a notification is sent to the cloud-deploy-approvals Pub/Sub queue. An approval function then picks up this message, allowing you to implement your custom logic or use the provided demo site to return JSON, such as { \"manualApproval\": \"true\" }.
Deployment: Once approved, the rollout proceeds, and the new application is deployed.
compute.googleapis.com, iam.googleapis.com, cloudresourcemanager.googleapis.com. To run this demo, the following IAM roles will be granted to the service account created by the Terraform configuration:
roles/iam.serviceAccountUser: allows management of service accounts; roles/logging.logWriter: grants permission to write logs; roles/artifactregistry.writer: enables writing to Artifact Registry; roles/storage.objectUser: provides access to Cloud Storage objects; roles/clouddeploy.jobRunner: allows execution of Cloud Deploy jobs; roles/clouddeploy.releaser: grants permissions to release configurations in Cloud Deploy; roles/run.developer: enables deploying and managing Cloud Run services; roles/cloudbuild.builds.builder: allows triggering and managing Cloud Build processes. The following Google Cloud services must be enabled in your project to run this demo:
pubsub.googleapis.com: enables Pub/Sub for messaging between services; clouddeploy.googleapis.com: allows use of Cloud Deploy for managing deployments; cloudbuild.googleapis.com: enables Cloud Build for building and deploying applications; compute.googleapis.com: provides access to Compute Engine resources; cloudresourcemanager.googleapis.com: allows management of project-level permissions and resources; run.googleapis.com: enables Cloud Run for deploying and running containerized applications; cloudfunctions.googleapis.com: allows use of Cloud Functions for event-driven functions; eventarc.googleapis.com: enables Eventarc for routing events from sources to targets; artifactregistry.googleapis.com: allows image hosting for CI/CD. To run this demo, follow these steps:
Fork and Clone the Repository: Start by forking this repository to your GitHub account (so that you can connect Google Cloud to your fork), then clone it to your local environment. After cloning, change your directory to the deployment demo:
cd platform-engineering/reference-architectures/cloud_deploy_flow\n Note: you can't use a repository inside a GitHub organization; use your personal account for this demo.
Set Up Environment Variables or Variables File: You can set the necessary variables either by exporting them as environment variables or by creating a terraform.tfvars file. Refer to variables.tf for more details on each variable. Ensure the values match your Google Cloud project and GitHub configuration.
For the repository owner and name here, use the repository you just forked.
Option 1: Set environment variables manually in your shell:
export TF_VAR_project_id=\"your-google-cloud-project-id\"\nexport TF_VAR_region=\"your-preferred-region\"\nexport TF_VAR_github_owner=\"your-github-repo-owner\"\nexport TF_VAR_github_repo=\"your-github-repo-name\"\n Option 2: Create a terraform.tfvars file in the same directory as your Terraform configuration and populate it with the following:
project_id = \"your-google-cloud-project-id\"\nregion = \"your-preferred-region\"\ngithub_owner = \"your-github-repo-owner\"\ngithub_repo = \"your-github-repo-name\"\n Initialize and Apply Terraform: With the environment variables set, initialize and apply the Terraform configuration:
terraform init\nterraform apply\n Note: Applying Terraform may take a few minutes as it creates the necessary resources.
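A missing TF_VAR_* value only surfaces as an interactive prompt or an apply-time error, so you can fail fast first. A sketch using bash indirect expansion; the variable names are the ones from Option 1.

```shell
# Sketch: verify required variables are set before running terraform.
require_vars() {
  local name missing=0
  for name in "$@"; do
    if [ -z "${!name:-}" ]; then
      echo "missing required variable: ${name}" >&2
      missing=1
    fi
  done
  return "${missing}"
}

# Usage (sketch):
# require_vars TF_VAR_project_id TF_VAR_region TF_VAR_github_owner TF_VAR_github_repo
```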
Connect GitHub Repository to Cloud Build: Due to occasional issues with automatic connections, you may need to manually attach your GitHub repository to Cloud Build in the Google Cloud Console.
If you get the following error, you will need to manually connect your repository to the project:
Error: Error creating Trigger: googleapi: Error 400: Repository mapping does\nnot exist.\n Re-run step 3 to ensure all resources are deployed
Navigate to the Demo site: Once the Terraform setup is complete, switch to the Demo site directory:
cd platform-engineering/reference-architectures/cloud_deploy_flow/WebsiteDemo\n Authenticate and Run the Demo site:
Ensure you are running these commands on a local machine or a machine with GUI/web browser access, as Cloud Shell may not fully support running the demo site.
Set your Google Cloud project by running:
gcloud config set project <your_project_id>\n Authenticate your Google Cloud CLI session:
gcloud auth application-default login\n Install required npm packages and start the demo site:
npm install\nnode index.js\n Open http://localhost:8080 in your browser to observe the demo site in action.
Trigger a Build in Cloud Build:
Approve the Rollout: When an approval message is received, you\u2019ll need to send a response to complete the deployment. Use the message data provided and add a ManualApproval field:
{\n \"message\": {\n \"data\": \"<base64-encoded data>\",\n \"attributes\": {\n \"Action\": \"Required\",\n \"Rollout\": \"rollout-123\",\n \"ReleaseId\": \"release-456\",\n \"ManualApproval\": \"true\"\n }\n }\n}\n Verify the Deployment: Once the approval is processed, the deployment should finish rolling out. Check the Cloud Deploy dashboard in the Google Cloud Console to confirm the deployment status.
This demo encapsulates the essential components and workflow for deploying applications using platform engineering practices. It illustrates how various services interact to ensure a smooth deployment process.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/","title":"Cloud Deployment Approvals with Pub/Sub","text":"This project provides a Google Cloud Run Function to automate deployment approvals based on messages received via Google Cloud Pub/Sub. The function processes deployment requests, checks conditions for rollout approval, and publishes an approval command if the requirements are met.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#usage-note","title":"Usage Note","text":"This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#features","title":"Features","text":"Clone the repository:
git clone <repository-url>\ncd <repository-folder>\n Enable APIs: Enable the Google Cloud Pub/Sub and Deploy APIs for your project:
gcloud services enable pubsub.googleapis.com deploy.googleapis.com\n Deploy the Function: Use Google Cloud SDK to deploy the function:
gcloud functions deploy cloudDeployApprovals --runtime go116 \\\n--trigger-event-type google.cloud.pubsub.topic.v1.messagePublished \\\n--trigger-resource YOUR_SUBSCRIBE_TOPIC\n The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:
PROJECTID: Google Cloud project ID (required); LOCATION: the deployment location (region) (required); SENDTOPICID: Pub/Sub topic ID for sending commands (required)"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#code-structure","title":"Code Structure","text":"config struct: Holds configuration for the environment variables.
PubsubMessage and ApprovalsData structs: Define the structure of messages received from Pub/Sub and attributes within them.
cloudDeployApprovals function: Entry point for handling messages. Validates the conditions and, if met, triggers the sendCommandPubSub function to send an approval command.
sendCommandPubSub function: Publishes a command message to the Pub/Sub topic to approve a deployment rollout.
The function cloudDeployApprovals is invoked whenever a message is published to the configured Pub/Sub topic. Upon receiving a message, the function will:
Check that the Action attribute is Required, that a rollout ID is provided, and that manual approval is marked as \"true\"; if so, publish an approval command to the SENDTOPICID topic. A message sent to the function should resemble this JSON structure:
{\n \"message\": {\n \"data\": \"<base64-encoded data>\",\n \"attributes\": {\n \"Action\": \"Required\",\n \"Rollout\": \"rollout-123\",\n \"ReleaseId\": \"release-456\",\n \"ManualApproval\": \"true\"\n }\n }\n}\n"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#custom-manual-approval-field","title":"Custom Manual Approval Field","text":"In the ApprovalsData struct, there is a ManualApproval field. This field is a custom addition, not provided by Google Cloud Deploy, and serves as a placeholder for an external approval system.
To integrate the approval system, you can replace or adapt this field to suit your existing change process workflow. For instance, you could link this field to an external ticketing or project management system to track and verify approvals. Implementing an approval system allows greater control over deployment rollouts, ensuring they align with your organization\u2019s policies.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#logging","title":"Logging","text":"The function logs each major step, from invocation to message processing and condition checking, to facilitate debugging and monitoring.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/","title":"Cloud Deploy Interactions with Pub/Sub","text":"This project demonstrates a Google Cloud Run Function to manage deployments by creating releases, rollouts, or approving rollouts based on incoming Pub/Sub messages. The function leverages Google Cloud Deploy and listens for deployment-related commands sent via Pub/Sub, executing appropriate actions based on the command type.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#usage-note","title":"Usage Note","text":"This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#features","title":"Features","text":"Listens for Pub/Sub messages with deployment commands (CreateRelease, CreateRollout, ApproveRollout); messages should include the corresponding protobuf request.
Initiates Google Cloud Deploy actions based on the received command.
Logs each step of the deployment process for better traceability.
Clone the repository:
git clone <repository-url>\ncd <repository-folder>\n Set up Google Cloud: Ensure you have enabled the Google Cloud Deploy and Pub/Sub APIs in your project.
Deploy the Function: Deploy the function using Google Cloud SDK:
gcloud functions deploy cloudDeployInteractions --runtime go116 \\\n--trigger-event-type google.cloud.pubsub.topic.v1.messagePublished \\\n--trigger-resource YOUR_TOPIC_NAME\n The Pub/Sub message should include a JSON payload with a command field specifying the type of deployment action to execute. Examples of the command types include:
CreateRelease: creates a new release for deployment; CreateRollout: initiates a rollout of the release; ApproveRollout: approves a pending rollout. The message should follow this structure:
{\n \"message\": {\n \"data\": \"<base64-encoded JSON containing command data>\"\n }\n}\n The JSON inside data should follow the format for DeployCommand:
{\n \"command\": \"CreateRelease\",\n \"createReleaseRequest\": {\n // Release creation parameters\n },\n \"createRolloutRequest\": {\n // Rollout creation parameters\n },\n \"approveRolloutRequest\": {\n // Rollout approval parameters\n }\n}\n"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#code-structure","title":"Code Structure","text":"DeployCommand struct: Defines the command to be executed and the parameters for each deploy action (create release, create rollout, or approve rollout).
cloudDeployInteractions function: Main function triggered by Pub/Sub messages. It parses the message and calls the respective deployment function based on the command.
cdCreateRelease: Creates a release in Google Cloud Deploy.
Each function logs key steps, from initialization to message handling and completion of deployments, helping in troubleshooting and monitoring.
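For manual testing, a DeployCommand in the format described above can be assembled from the shell. A sketch: the payload is minimal and hypothetical, the topic name is hypothetical, and Pub/Sub itself performs the base64 encoding on delivery, so you publish the raw JSON.

```shell
# Sketch: wrap a minimal DeployCommand in the base64 data field used on delivery.
command_json='{"command":"CreateRelease"}'   # minimal, hypothetical DeployCommand
data=$(printf '%s' "${command_json}" | base64 | tr -d '\n')
envelope=$(printf '{"message":{"data":"%s"}}' "${data}")
printf '%s\n' "${envelope}"

# Usage (sketch; topic name is hypothetical):
# gcloud pubsub topics publish deploy-commands --message="${command_json}"
```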
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/","title":"Cloud Deploy Operations Function","text":"This project contains a Google Cloud Run Function written in Go, designed to interact with Google Cloud Deploy. The function listens for deployment events on a Pub/Sub topic, processes those events, and triggers specific deployment operations based on the event details. For instance, when a deployment release succeeds, it triggers a rollout creation and sends the relevant command to another Pub/Sub topic.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#usage-note","title":"Usage Note","text":"This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#requirements","title":"Requirements","text":"The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:
PROJECTID: Google Cloud project ID (required); LOCATION: the deployment location (region) (required); SENDTOPICID: Pub/Sub topic ID for sending commands (required)"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#structure","title":"Structure","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#main-components","title":"Main Components","text":"The function parses the incoming message into a Data payload and Attributes metadata; on a deployment success event it builds a CreateRollout CommandMessage and publishes it to a specified Pub/Sub topic, which triggers deployment operations. cloudDeployOperations is triggered by a deployment event, specifically a CloudEvent; it decodes the Message struct, checking for deployment success events, builds a CommandMessage for a rollout, and calls sendCommandPubSub. The sendCommandPubSub function publishes the CommandMessage to a designated Pub/Sub topic to initiate the rollout. To run the function locally: functions-framework --target=cloudDeployOperations\n"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#deployment-to-google-cloud-run-functions","title":"Deployment to Google Cloud Run Functions","text":"Set up your Google Cloud environment and enable the necessary APIs:
gcloud services enable cloudfunctions.googleapis.com pubsub.googleapis.com\nclouddeploy.googleapis.com\n Deploy the function to Google Cloud:
gcloud functions deploy cloudDeployOperations \\\n --runtime go120 \\\n --trigger-topic <YOUR_TRIGGER_TOPIC> \\\n --set-env-vars PROJECTID=<YOUR_PROJECT_ID>,LOCATION=<YOUR_LOCATION>,SENDTOPICID=<YOUR_SEND_TOPIC_ID>\n This project is licensed under the MIT License. See the LICENSE file for details.
TargetId within CommandMessage is dynamically populated based on actual Pub/Sub message data. The function creates a new client with pubsub.NewClient, which should be carefully monitored in production for connection management. This project demonstrates a Google Cloud Run Function that triggers deployments based on Pub/Sub messages. The function listens for build notifications from Google Cloud Build and initiates a release in Google Cloud Deploy when a build succeeds.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#usage-note","title":"Usage Note","text":"This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#table-of-contents","title":"Table of Contents","text":"The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:
PROJECTID: Google Cloud project ID (required); LOCATION: the deployment location (region) (required); PIPELINE: the name of the delivery pipeline in Cloud Deploy (required); TRIGGER: the ID of the build trigger in Cloud Build (required); SENDTOPICID: Pub/Sub topic ID for sending commands (required)"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#function-overview","title":"Function Overview","text":"The deployTrigger function is invoked by Pub/Sub events. Here's a breakdown of its key components:
Initialization:
Message Handling:
Release Creation:
CreateReleaseRequest for Cloud Deploy. Random ID Generation:
To deploy the function, follow these steps:
gcloud functions deploy deployTrigger \\\n --runtime go113 \\\n --trigger-topic YOUR_TOPIC_NAME \\\n --env-file .env\n"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/","title":"Random Date Service","text":"This repository contains a sample application designed to demonstrate how deployments can work through Google Cloud Deploy and Cloud Build. Instead of a traditional \"Hello World\" application, this project generates and serves a random date, showcasing how to set up a cloud-based service.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#usage-note","title":"Usage Note","text":"This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#overview","title":"Overview","text":"The Random Date Service is built to illustrate the process of deploying an application using Cloud Run and Cloud Deploy. The application serves a random date formatted as a string. This simple service allows you to explore key concepts in cloud deployment without the complexity of a full-fledged application.
This is the core of the application, where the HTTP server is defined. It handles requests and responds with a randomly generated date.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#2-dockerfile","title":"2. Dockerfile","text":"The Dockerfile specifies how to build a container image for the application. This image will be used in Cloud Run for deploying the service.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#3-skaffoldyaml","title":"3. skaffold.yaml","text":"This file is configured for Google Cloud Deploy, facilitating the deployment process by managing builds and configurations in a single file.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#4-runyaml","title":"4. run.yaml","text":"The run.yaml file defines the configuration for Cloud Run and Cloud Deploy. Key aspects to note include:
The random-date-service.image field under spec is set to pizza. This is crucial, as it indicates to Cloud Deploy where to substitute the image. This substitution occurs based on the createRelease function in main.go, specifically noted on line 122. To deploy and test this application:
run.yaml configuration to deploy the service. This sample application serves as a foundational example of how to leverage cloud services for deploying applications. By utilizing Google Cloud Deploy and Cloud Build, you can understand the deployment lifecycle and how cloud-native applications can be effectively managed and served.
Feel free to explore the code and configurations provided in this repository to get a better grasp of the deployment process.
"},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/","title":"Pub/Sub Local Demo","text":"This project is a simple demonstration of a Pub/Sub system using Google Cloud Pub/Sub and a basic Express.js server. It is designed to visually understand how messages are sent to and from Pub/Sub queues. The code provided is primarily for demonstration purposes and is not intended for production use.
"},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#features","title":"Features","text":"Install the required dependencies:
npm install
Create a .env file and populate it with the environment variables found in .env.sample
Start the server:
node index.js
Open your web browser and go to http://localhost:8080 to access the demo.
This code is intended for educational and demonstration purposes only. It may not be suitable for production environments due to lack of error handling, security considerations, and scalability.
"},{"location":"reference-architectures/github-runners-gke/","title":"Reference Guide: Deploy and use GitHub Actions Runners on GKE","text":""},{"location":"reference-architectures/github-runners-gke/#overview","title":"Overview","text":"This guide walks you through the process of setting up self-hosted GitHub Actions Runners on Google Kubernetes Engine (GKE) using the Terraform module terraform-google-github-actions-runners. It then provides instructions on how to create a basic GitHub Actions workflow to leverage these runners.
cloudresourcemanager.googleapis.com, iam.googleapis.com, container.googleapis.com, serviceusage.googleapis.com. Run the following command to enable the prerequisite APIs:
gcloud services enable \\\n cloudresourcemanager.googleapis.com \\\n iam.googleapis.com \\\n container.googleapis.com \\\n serviceusage.googleapis.com \\\n --project <YOUR_PROJECT_ID>\n"},{"location":"reference-architectures/github-runners-gke/#register-a-github-app-for-authenticating-arc","title":"Register a GitHub App for Authenticating ARC","text":"Using a GitHub App for authentication allows you to make your self-hosted runners available to a GitHub organization that you own or have administrative access to. For more details on registering GitHub Apps, see GitHub\u2019s documentation.
You will need 3 values from this section to use as inputs in the Terraform module:
https://github.com/actions/actions-runner-controllergh_app_id in the Terraform module.pem file for later.gh_app_private_key in the Terraform modulehttps://github.com/organizations/ORGANIZATION/settings/installations/INSTALLATION_IDgh_app_installation_id in the Terraform module.Open the Terraform module repository in Cloud Shell automatically by clicking the button:
Clicking this button will clone the repository into Cloud Shell, change into the example directory, and open the main.tf file in the Cloud Shell Editor.
project_idgh_app_id: insert the value of the App ID from the GitHub App pagegh_app_installation_id: insert the value from the URL of the app installation pagegh_app_private_key:.pem file to example directory, alongside the main.tf file.pem filename you downloaded after generating the private key for the app, like so:gh_app_private_key = file(\"example.private-key.pem\")gh_config_url with the URL of your GitHub organization. It will be in the format of https://github.com/ORGANIZATIONterraform init to download the required providers.terraform plan to preview the changes that will be made.terraform apply and confirm to create the resources.You will see the runners become available in your GitHub Organization:
You should see the runners appear as \u201carc-runners\u201d
"},{"location":"reference-architectures/github-runners-gke/#creating-a-github-actions-workflow","title":"Creating a GitHub Actions Workflow","text":"Paste the following configuration into the text editor:
name: Actions Runner Controller Demo\non:\nworkflow_dispatch:\njobs:\nExplore-GitHub-Actions:\n runs-on: arc-runners\n steps:\n - run: echo \"This job uses runner scale set runners!\"\n Click Commit changes to save the workflow to your repository.
Navigate back into the example directory where you previously ran terraform apply
cd terraform-google-github-actions-runners/examples/gh-runner-gke-simple/\n Destroy Terraform-managed infrastructure
terraform destroy\n Warning: this will destroy the GKE cluster, example VPC, service accounts, and the Helm-managed workloads previously deployed by this example.
"},{"location":"reference-architectures/github-runners-gke/#delete-github-resources","title":"Delete GitHub resources","text":"If you created a new GitHub App for the purposes of testing this walkthrough, you can delete it by following the instructions below. Note that any services authenticating via this GitHub App will lose access.
This architecture demonstrates how you can automate the provisioning of sandbox projects and automatically apply sensible guardrails and constraints. A sandbox project allows engineers to experiment with new technologies. Sandboxes are provisioned for a short period of time and with budget constraints.
"},{"location":"reference-architectures/sandboxes/#architecture","title":"Architecture","text":"The following diagram is the high-level architecture for enabling self-service creation of sandbox projects.
Cloud Functions are triggered onCreate and onModify. The functions contain the logic to decide if a sandbox should be created or deleted. infraManagerProcessor is a Cloud Run service that works with Infrastructure Manager to kick off and monitor the infrastructure management. This is handled in a Cloud Run service because the execution of Terraform is a long-running process. This repository contains the code to stand up the reference architecture and also create different sandbox templates in the catalog. This section describes the structure of the repository so you can better navigate the code.
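The create-or-delete decision described above can be sketched in Python. This is a hypothetical illustration of the pattern, not code from the repository; the function signature, field names, and publish helper are assumptions.

```python
# Hypothetical sketch of the decision logic in the Firestore-triggered
# functions described above. The status values follow the document model in
# this guide; the `publish` helper (a stand-in for notifying the
# infra-manager-processor Cloud Run service) is an assumption.

def on_modify(before: dict, after: dict, publish) -> None:
    """Decide whether a sandbox should be provisioned or deleted.

    `before` and `after` are snapshots of the sandbox document. The
    long-running Terraform work is delegated to the Cloud Run service.
    """
    status = after.get("status")
    if status == before.get("status"):
        return  # No status change, nothing for the processor to do.
    if status == "provision_requested":
        publish({"action": "provision", "projectId": after["projectId"]})
    elif status == "delete_requested":
        publish({"action": "delete", "projectId": after["projectId"]})


# Example: a user requests deletion of a running sandbox.
events = []
on_modify(
    {"status": "provision_successful", "projectId": "sandbox-1"},
    {"status": "delete_requested", "projectId": "sandbox-1"},
    events.append,
)
```

The split keeps the functions fast and event-driven while the Cloud Run service owns the slow Infrastructure Manager interaction.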
"},{"location":"reference-architectures/sandboxes/#examples","title":"Examples","text":"The /examples directory contains a sample Terraform deployment for deploying the reference architecture and a command-line tool to exercise the automated creation of developer sandboxes. The examples are intended to provide you a starting point so you can incorporate the reference architecture into your infrastructure.
This example uses the Terraform modules from /sandbox-modules to deploy the reference architecture and includes instructions on how to get started.
The workflows and lifecycle of the sandboxes deployed via the reference architecture are managed through the document model stored in Cloud Firestore. This abstraction has the benefit of separating the core logic included in the reference architecture from the user experience (UX). As such, the example command-line interface lets you experiment with the reference architecture and learn about the object model.
"},{"location":"reference-architectures/sandboxes/#catalog","title":"Catalog","text":"This directory contains a collection (catalog) of templates that you can use to deploy sandboxes. The reference architecture includes one for an empty project, but others could be added to support more specialized roles such as database admins, AI engineers, etc.
"},{"location":"reference-architectures/sandboxes/#sandbox-modules","title":"Sandbox Modules","text":"These modules use the fabric modules to create the system project. Each module represents a large component of the overall reference architecture, and the components can be combined into one system project or spread across different projects to help with separation of duties.
"},{"location":"reference-architectures/sandboxes/#fabric-modules","title":"Fabric Modules","text":"These are the base Terraform modules adopted from the Cloud Foundation Fabric. The Fabric modules are intended to be vendored, so we have copied them here for repeatability of the overall deployment of the reference architecture.
As you need additional modules for templates in the catalog, we recommend that you start by vendoring the modules from the Cloud Foundation Fabric into this directory.
"},{"location":"reference-architectures/sandboxes/examples/cli/","title":"Example Command Line Interface","text":""},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/","title":"Overview","text":"This directory contains Terraform configuration files that let you deploy the system project. This example is a good entry point for testing the reference architecture and learning how it can be incorporated into your own infrastructure-as-code processes.
"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#architecture","title":"Architecture","text":"For an explanation of the components of the sandboxes reference architecture and the interaction flow, read the main Architecture section.
"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#before-you-begin","title":"Before you begin","text":"In this section you prepare a folder for deployment.
Activate Cloud Shell \\ At the bottom of the Cloud Console, a Cloud Shell session starts and displays a command-line prompt.
In Cloud Shell, clone this repository
git clone https://github.com/GoogleCloudPlatform/platform-engineering.git\n Export variables for the working directories
export SANDBOXES_DIR=\"$(pwd)/platform-engineering/reference-architectures/sandboxes/examples/gcp-sandboxes\"\nexport SANDBOXES_CLI=\"$(pwd)/platform-engineering/reference-architectures/sandboxes/examples/cli\"\n In this section you prepare your environment for deploying the system project.
Go to the Manage Resources page in the Cloud Console in the IAM & Admin menu.
Click Create folder, then choose Folder.
Enter a name for your folder. This folder will be used to contain the system and sandbox projects.
Click Create
Copy the folder ID from the Manage resources page; you will need this value later for use as a Terraform variable.
Set the billing account ID, sandboxes folder, and system project name in the corresponding Terraform environment variables
export TF_VAR_billing_account=\"<your billing account id>\"\nexport TF_VAR_sandboxes_folder=\"folders/<folder id from step 5>\"\nexport TF_VAR_system_project_name=\"<name for the system project>\"\n Change directory into the Terraform example directory and initialize Terraform.
cd \"${SANDBOXES_DIR}\"\nterraform init\n Apply the configuration. Answer yes when prompted, after reviewing the resources that Terraform intends to create.
terraform apply\n Now that the system project has been deployed, create a sandbox using the example cli.
Change directory into the example command-line tool directory
cd \"${SANDBOXES_CLI}\"\n Install the required Python libraries
pip install -r requirements.txt\n Create a Sandbox using the cli
python ./sandbox.py create \\\n--system=\"<name of your system project>\" \\\n--project_id=\"<name of the sandbox to create>\"\n Your sandbox infrastructure is ready; you may continue to use the example cli to create and delete sandboxes. At this point it is recommended that you:
Each document stored in Cloud Firestore represents a sandbox. The following sections document the fields and structure of those documents.
"},{"location":"reference-architectures/sandboxes/sandbox-modules/#deployment","title":"Deployment","text":"Field Type Description_updateSource string This describes the last process or tool used to update or create the deployment document. For example, the example python cli sets _updateSource to python, and when the firestore-processor Cloud Run service updates the document it is set to cloudrun. status string Status of the sandbox; this changes as create and delete operations progress. Refer to Key Statuses for detailed definitions of the values. projectId string The project ID of the sandbox. templateName string The name of the Terraform template from the catalog that the sandbox is based on. deploymentState object<DeploymentState> State object for the sandbox deployment. Contains data such as budget, current spend, expiration date, etc. The state object is updated by and used by the various lifecycle functions. infraManagerDeploymentId string ID returned by Infrastructure Manager for the deployment. infraManagerResult object<DeploymentResponse> This is the response object returned from the Infrastructure Manager deployment operation. userId string Unique identifier for the user who owns the sandbox deployment. createdAt string Timestamp that the sandbox record was created at. updatedAt string Timestamp that the sandbox record was last updated. variables object<Variables> List of variables supplied by the user, which are in turn used by the template to create the sandbox. auditLog array[string] List of messages that the system can add as an audit log."},{"location":"reference-architectures/sandboxes/sandbox-modules/#deploymentstate","title":"DeploymentState","text":"Field Type Description budgetLimit number Spend limit for the sandbox. currentSpend number Current spend for the sandbox. 
expiresAt string Time-based expiration for the sandbox."},{"location":"reference-architectures/sandboxes/sandbox-modules/#variables","title":"Variables","text":"Collection of key-value pairs that are used in the Infrastructure Manager request, for use as the Terraform variable values.
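Putting the fields from the tables above together, a sandbox document might look like the following. All concrete values here are illustrative examples, not data from a real deployment.

```python
# Illustrative sandbox document assembled from the fields described above.
# Every value is an example; timestamps are shown as ISO 8601 strings.
sandbox_doc = {
    "_updateSource": "python",          # set by the example python cli
    "status": "provision_requested",
    "projectId": "dev-sandbox-alice-001",
    "templateName": "empty-project",
    "deploymentState": {
        "budgetLimit": 100,
        "currentSpend": 0,
        "expiresAt": "2024-07-01T00:00:00Z",
    },
    "userId": "alice@example.com",
    "createdAt": "2024-06-01T09:00:00Z",
    "updatedAt": "2024-06-01T09:00:00Z",
    # Key-value pairs passed through to Infrastructure Manager as
    # Terraform variable values.
    "variables": {"billing_account": "000000-000000-000000"},
    "auditLog": ["Sandbox requested via example cli"],
}
```

A user interface only needs to write a document of this shape and then watch the status field; the lifecycle functions and the infra-manager-processor do the rest.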
"},{"location":"reference-architectures/sandboxes/sandbox-modules/#key-statuses","title":"Key Statuses","text":"The following table describes important statuses that are used during the lifecycle of a deployment.
Status Set By Handled By Meaningprovision_requested User Interface firestore-functions The user has requested that a sandbox be provisioned. provision_pending infra-manager-processor infra-manager-processor Indicates the request was received by the infra-manager-processor but the request hasn\u2019t yet been made to Infrastructure Manager. provision_inprogress infra-manager-processor infra-manager-processor Indicates that the request has been submitted to Infrastructure Manager and it is in progress with Infrastructure Manager. provision_error infra-manager-processor infra-manager-processor The deployment process has failed with an error. provision_successful infra-manager-processor infra-manager-processor The deployment process has succeeded and the sandbox is available and running. delete_requested User Interface firestore-functions The user or lifecycle process has requested that a sandbox be deleted. delete_pending infra-manager-processor infra-manager-processor Indicates the delete request was received by the infra-manager-processor but the request hasn\u2019t yet been made to Infrastructure Manager. delete_inprogress infra-manager-processor infra-manager-processor Indicates that the delete request has been submitted to Infrastructure Manager and it is in progress with Infrastructure Manager. delete_error infra-manager-processor infra-manager-processor The delete process has failed with an error. delete_successful infra-manager-processor infra-manager-processor The delete process has succeeded."}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Platform Engineering on Google Cloud","text":"Platform engineering is an emerging practice that organizations adopt to enable cross-functional collaboration and deliver business value faster. It treats internal groups (application developers, operators, security, infrastructure admins, and so on) as customers and provides them with the foundational platforms to accelerate their work. The key goals of platform engineering are providing everything as self-service, golden paths, improved collaboration, and abstraction of technical complexities, all of which simplify the software development lifecycle and contribute towards delivering business value to consumers. Platform engineering is especially effective in cloud computing because it helps realize the benefits of the cloud, such as automation, security, productivity, and faster time-to-market.
"},{"location":"#overview","title":"Overview","text":"Google Cloud offers decomposable, elastic, secure, scalable, and cost-efficient tools built on the guiding principles of platform engineering. With a focus on developer experience and innovation, coupled with practices like SRE embedded into the tools, they are a good place to begin your platform journey, empowering developers and increasing their productivity.
This repository contains a collection of guides, examples, and design patterns spanning Google Cloud products and best-in-class OSS tools, which you can use to help build an internal developer platform.
For more information, see Platform Engineering on Google Cloud.
"},{"location":"#resources","title":"Resources","text":""},{"location":"#design-patterns","title":"Design Patterns","text":"Copy any code you need from this repository into your own project.
Warning: Do not depend directly on the samples in this repository. Breaking changes may be made at any time without warning.
"},{"location":"#contributing-changes","title":"Contributing changes","text":"Entirely new samples are not accepted. Bugfixes are welcome, either as pull requests or as GitHub issues.
See CONTRIBUTING.md for details on how to contribute.
"},{"location":"#licensing","title":"Licensing","text":"Copyright 2024 Google LLC Code in this repository is licensed under the Apache License 2.0. See LICENSE.
"},{"location":"code-of-conduct/","title":"Code of Conduct","text":""},{"location":"code-of-conduct/#our-pledge","title":"Our Pledge","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
"},{"location":"code-of-conduct/#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to creating a positive environment include:
Examples of unacceptable behavior by participants include:
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
"},{"location":"code-of-conduct/#scope","title":"Scope","text":"This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project email address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
This Code of Conduct also applies outside the project spaces when the Project Steward has a reasonable belief that an individual's behavior may have a negative impact on the project or its community.
"},{"location":"code-of-conduct/#conflict-resolution","title":"Conflict Resolution","text":"We do not believe that all conflict is bad; healthy debate and disagreement often yield positive results. However, it is never okay to be disrespectful or to engage in behavior that violates the project\u2019s code of conduct.
If you see someone violating the code of conduct, you are encouraged to address the behavior directly with those involved. Many issues can be resolved quickly and easily, and this gives people more control over the outcome of their dispute. If you are unable to resolve the matter for any reason, or if the behavior is threatening or harassing, report it. We are dedicated to providing an environment where participants feel welcome and safe.
Reports should be directed to [PROJECT STEWARD NAME(s) AND EMAIL(s)], the Project Steward(s) for [PROJECT NAME]. It is the Project Steward\u2019s duty to receive and address reported violations of the code of conduct. They will then work with a committee consisting of representatives from the Open Source Programs Office and the Google Open Source Strategy team. If for any reason you are uncomfortable reaching out to the Project Steward, please email opensource@google.com.
We will investigate every complaint, but you may not receive a direct response. We will use our discretion in determining when and how to follow up on reported incidents, which may range from not taking action to permanent expulsion from the project and project-sponsored spaces. We will notify the accused of the report and provide them an opportunity to discuss it before any action is taken. The identity of the reporter will be omitted from the details of the report supplied to the accused. In potentially harmful situations, such as ongoing harassment or threats to anyone's safety, we may take action without notice.
"},{"location":"code-of-conduct/#attribution","title":"Attribution","text":"This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
"},{"location":"contributing/","title":"How to Contribute","text":"We'd love to accept your patches and contributions to this project.
"},{"location":"contributing/#before-you-begin","title":"Before you begin","text":""},{"location":"contributing/#sign-our-contributor-license-agreement","title":"Sign our Contributor License Agreement","text":"Contributions to this project must be accompanied by a Contributor License Agreement (CLA). You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project.
If you or your current employer have already signed the Google CLA (even if it was for a different project), you probably don't need to do it again.
Visit https://cla.developers.google.com/ to see your current agreements or to sign a new one.
"},{"location":"contributing/#review-our-community-guidelines","title":"Review our Community Guidelines","text":"This project follows Google's Open Source Community Guidelines.
"},{"location":"contributing/#contribution-process","title":"Contribution process","text":""},{"location":"contributing/#code-reviews","title":"Code Reviews","text":"All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.
"},{"location":"contributing/#development-guide","title":"Development guide","text":"This document contains technical information to contribute to this repository.
"},{"location":"contributing/#site","title":"Site","text":"This repository includes scripts and configuration to build a site using Material for MkDocs:
config/mkdocs: MkDocs configuration files. scripts/run-mkdocs.sh: script to build the site. .github/workflows/documentation.yaml: GitHub Actions workflow that builds the site and pushes a commit with changes on the current branch. To build the site, run the following command from the root of the repository:
scripts/run-mkdocs.sh\n"},{"location":"contributing/#preview-the-site","title":"Preview the site","text":"To preview the site, run the following command from the root of the repository:
scripts/run-mkdocs.sh \"serve\"\n"},{"location":"contributing/#linting-and-formatting","title":"Linting and formatting","text":"We configured several linters and formatters for code and documentation in this repository. Linting and formatting checks run as part of CI workflows.
Linting and formatting checks are configured to check changed files only by default. If you change the configuration of any linter or formatter, these checks run against the entire repository.
To run linting and formatting checks locally, run the following:
scripts/lint.sh\n To automatically fix certain linting and formatting errors, run the following:
LINTER_CONTAINER_FIX_MODE=\"true\" scripts/lint.sh\n"},{"location":"reference-architectures/accelerating-migrations/","title":"Accelerate migrations through platform engineering golden paths","text":"This document helps you adopt platform engineering by designing a process to onboard and migrate your existing applications to use your internal developer platform (IDP). It also provides guidance to help you evaluate the opportunity to design a platform engineering process, and to explore how it might function. Google Cloud provides tools, products, guidance, and professional services to help you adopt platform engineering in your environments.
This document is aimed at the following personas:
The Cloud Native Computing Foundation defines a golden path as an integrated bundle of templates and documentation for rapid project development. Designing and developing golden paths can help facilitate the onboarding and the migration of existing applications to your IDP. When you use a golden path, your development and operations teams can take advantage of benefits like the following:
Onboarding and migrating existing applications to the IDP can let you experience the benefits of adopting platform engineering gradually and incrementally in your organization, without spending effort on large-scale migration projects.
To migrate applications and onboard them to the IDP, we recommend that you design an application onboarding and migration process. This document describes a reference application onboarding and migration process. We recommend that you tailor the process to your requirements and your IDP.
If you're migrating your applications from your on-premises environment or from another cloud provider to Google Cloud, the application onboarding and migration process can help you to accelerate your migration. In that scenario, the teams that are managing the migration can refer to well-established golden paths, instead of having to design their own migration processes and project templates.
"},{"location":"reference-architectures/accelerating-migrations/#application-onboarding-and-migration-process","title":"Application onboarding and migration process","text":"The goal of the application onboarding and migration process is to get an application on the IDP. After you onboard and migrate the application to the IDP, your teams can benefit from using the IDP. When you use an IDP, you can focus on providing business value for the application, rather than spending effort on ad-hoc processes and operations.
To manage the complexity of the application onboarding and migration process, we recommend that you design the process in the following phases:
The high-level structure of this process matches the Google Cloud migration path. In this case, you follow the migration path to onboard and migrate existing applications to the IDP.
To ensure that the application onboarding and migration is on the right track, we recommend that you design validation checkpoints for each phase of the process, rather than having a single acceptance testing task. Having validation checkpoints for each phase helps you to promptly detect issues as they arise, rather than when you are close to the end of the migration.
Even when following a phased process, onboarding and migrating complex applications to the IDP might require a significant effort, and it might pose risks. To manage the effort and the risks of onboarding and migrating complex applications to the IDP, you can follow the onboarding and migration process iteratively, by migrating parts of the application on each iteration. For example, if an application is composed of multiple components, you can onboard and migrate one component for each iteration of the process.
To reduce toil, we recommend that you thoroughly document the application onboarding and migration process, and make it as self-service as possible, in line with platform-engineering principles.
In this document, we assume that the onboarding and migration process involves three teams:
The following sections describe each phase of the application onboarding and migration process.
"},{"location":"reference-architectures/accelerating-migrations/#intake-the-onboarding-and-migration-request","title":"Intake the onboarding and migration request","text":"The first phase of the application onboarding and migration process is to intake the request to onboard and migrate the application. The request process is the following:
We recommend that you keep this phase as light as possible by using a form or a guided, self-service process. For example, you can include migration guidance in the IDP documentation so that development teams can review it and prepare for the migration. You can also implement automated checks in your IDP to give initial feedback to development teams about potential migration blockers and issues.
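Automated intake checks like those mentioned above can start small. The following sketch is a hypothetical example of how an IDP could flag migration blockers from an intake form; the field names, supported runtimes, and checks are all illustrative assumptions, not part of any reference implementation.

```python
def check_intake_request(request: dict) -> list:
    """Return a list of potential migration blockers for an onboarding request.

    `request` is a hypothetical self-service intake form; the checks below
    are examples of the kind of early feedback an IDP could automate.
    """
    blockers = []
    if request.get("runtime") not in {"container", "serverless"}:
        blockers.append("Runtime not supported by the IDP; refactoring needed.")
    if not request.get("owner_team"):
        blockers.append("No owning team specified for the application.")
    if request.get("requires_gpu") and not request.get("gpu_quota_approved"):
        blockers.append("GPU workloads need quota approval before onboarding.")
    return blockers


# Example: a VM-based application with a named owning team.
issues = check_intake_request({"runtime": "vm", "owner_team": "payments"})
```

Returning actionable messages, rather than a simple pass/fail, gives development teams immediate feedback they can act on before the assessment phase begins.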
To assist and offer consultation to the teams that filed or intend to file an application onboarding and migration request, we recommend that the team that manages the IDP establish communication channels to offer assistance to other teams. For example, the team that manages the IDP might set up dedicated discussion groups, chat rooms, and office hours where they can offer help and answer questions about the IDP. To help with onboarding and migration of complex applications and to facilitate communications, you can also attach a member of the team that manages the IDP to the application team while the migration is in progress.
"},{"location":"reference-architectures/accelerating-migrations/#plan-application-onboarding-and-migration","title":"Plan application onboarding and migration","text":"As part of this phase, we recommend that the application onboarding and migration team starts drafting an onboarding and migration plan, even if the team doesn't have all of the data points to fully define it. When the team progresses through the assessment phase, they will gather information to finalize and validate the plan.
To manage the complexity of the migration plan, we recommend that you decompose it across the following sub-tasks:
Developing a comprehensive onboarding and migration plan is crucial to the success of the application onboarding and migration process. Having a plan helps you to define clear deadlines, assign responsibilities, and deal with unanticipated issues.
"},{"location":"reference-architectures/accelerating-migrations/#assess-the-application","title":"Assess the application","text":"The second phase of the application onboarding and migration process is to follow up on the intake request by assessing the application to onboard and migrate to the IDP. The goal of this assessment phase is to produce the following artifacts:
These outputs of the assessment phase help you to plan and complete the migration. The outputs also help you to scope the enhancements that the IDP needs to support the application, and to increase the velocity of future migrations.
To manage the complexity of the assessment phase, we recommend that you decompose it into the following steps:
The preceding steps are described in the following sections. For more information about assessing applications and defining migration plans, see Migrate to Google Cloud: Assess and discover your workloads.
"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-design","title":"Review the application design","text":"To gather a comprehensive understanding about the design of the application, we recommend that you complete a thorough assessment of the following aspects of the application:
Understanding the application architecture helps you to design and implement an effective onboarding and migration process for your application. It also helps you anticipate issues and potential problems that might arise during the migration. For example, if the architecture of your application to onboard and migrate to the IDP isn't compatible with your IDP, you might need to spend additional effort to refactor the application and enhance the IDP.
The application to onboard and migrate to the IDP might have dependencies on systems and data that are outside the scope of the application. To understand these dependencies, we recommend that you gather information about any reliance of your application on external systems and data, such as databases, datasets, and APIs. After you gather information, you classify the dependencies in order of importance and criticality. For example, your application might need access to a database to store persistent data and to external APIs that it integrates with to provide critical functionality to users, while it might have an optional dependency on a caching system.
Understanding the dependencies of your application on external systems and data is crucial to plan for continued access to these dependencies during and after the migration.
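Classifying dependencies, as described above, can be as simple as tagging each one with a criticality level and planning for the most critical ones first. The inventory below is an illustrative sketch; the dependency names and levels are invented for the example.

```python
# Illustrative dependency inventory for the assessment phase. The names and
# criticality levels are examples, not part of the reference process.
dependencies = [
    {"name": "orders-database", "kind": "database", "criticality": "required"},
    {"name": "payments-api", "kind": "external API", "criticality": "required"},
    {"name": "redis-cache", "kind": "caching system", "criticality": "optional"},
]

# Sort so required dependencies come first: they need guaranteed access
# during and after the migration, while optional ones can tolerate gaps.
plan_order = sorted(dependencies, key=lambda d: d["criticality"] != "required")
```

Even a lightweight inventory like this makes it easier to verify, at each validation checkpoint, that every required dependency remains reachable.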
"},{"location":"reference-architectures/accelerating-migrations/#review-application-dependencies","title":"Review application dependencies","text":""},{"location":"reference-architectures/accelerating-migrations/#review-cicd-processes","title":"Review CI/CD processes","text":"After you review the application design and its dependencies, we recommend that you refine the assessment about your application's deployable artifacts by reviewing your application's CI/CD processes. These processes usually let you build the artifacts to deploy the application and let you deploy them in your runtime environments. For example, you refine the assessment by answering questions about these CI/CD processes, such as the following:
Understanding how the application's CI/CD processes work helps you evaluate whether your IDP can support these CI/CD processes as is, or if you need to enhance your IDP to support them. For example, if your application has a business-critical requirement on a canary deployment process and your IDP doesn't support it, you might need to factor in additional effort to enhance the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#review-data-persistence-and-data-management-requirements","title":"Review data persistence and data management requirements","text":"By completing the previous tasks of the assessment phase, you gathered information about the statefulness of the application and about the systems that the application uses to store persistent and transient data. In this section, you refine the assessment to develop a deeper understanding of the systems that the application uses to store stateful data. We recommend that you gather information on data persistence and data management requirements of your application. For example, you refine the assessment by answering questions such as the following:
Understanding your application's data persistence and data management requirements helps you to ensure that your IDP and your production environment can effectively support the application. This understanding can also help you determine whether you need to enhance the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#review-finops-requirements","title":"Review FinOps requirements","text":"As part of the assessment of your application, we recommend that you gather data about the FinOps requirements of your application, such as budget control and cost management, and evaluate whether your IDP supports them. For example, the application might require certain mechanisms to control spending, manage costs, and send alerts. The application might also require mechanisms to completely stop spending when it reaches a certain budget limit.
Understanding your application's FinOps requirements helps you to ensure that you keep your application costs under control. It also helps you to establish proper cost attribution and cost optimization practices.
"},{"location":"reference-architectures/accelerating-migrations/#review-compliance-requirements","title":"Review compliance requirements","text":"The application to onboard and migrate to the IDP and its runtime environment might have to meet compliance requirements, especially in regulated industries. We recommend that you assess the compliance requirements of the application, and evaluate if the IDP already supports them. For example, the application might require isolation from other workloads, or it might have data locality requirements.
Understanding your application's compliance requirements helps you to scope the necessary refactoring and enhancements for your application and for the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-team-practices","title":"Review the application team practices","text":"After you review the application, we recommend that you gather information about team practices and the methodologies for developing and operating the application. For example, the team might already have adopted DevOps principles, they might be already implementing Site Reliability Engineering (SRE), or they might be already familiar with platform engineering and with the IDP.
By gathering information about the team that develops and operates the application to migrate, you gain insights about the experience and the maturity of that team. You also learn whether there's a need to spend effort to train team members to proficiently use the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#assess-application-refactoring-and-the-idp","title":"Assess application refactoring and the IDP","text":"After you gather information about the application, its development and operation teams, and its requirements, you evaluate the following:
The goal of this task is to answer the following questions:
By answering these questions, you focus on evaluating potential onboarding and migration blockers. For example, you might experience the following onboarding and migration blockers:
The application development and operations team is responsible for the application refactoring tasks.
When you scope the eventual enhancements that the IDP needs to support the application, we recommend that you frame these enhancements in the broader vision that you have for the IDP, and not as a standalone exercise. We also recommend that you consider your IDP as a product for which you should develop a path to success. For example, if you're considering adding a new service to the IDP, we recommend that you evaluate how that service fits in the path to success for your IDP, in addition to the technical feasibility of the initiative.
By assessing the refactoring effort that's required to onboard and migrate the application, you develop a comprehensive understanding of the tasks that you need to complete to refactor the application and how you need to enhance the IDP to support the application.
"},{"location":"reference-architectures/accelerating-migrations/#finalize-the-application-onboarding-and-migration-plan","title":"Finalize the application onboarding and migration plan","text":"To complete the assessment phase, you finalize the application onboarding and migration plan with consideration of the data that you gathered. To finalize the plan, you do the following:
After you complete the assessment phase, you use its outputs to:
In the assessment phase, you scoped any enhancements that the IDP needs to support the application and how those enhancements fit in your plans for the IDP. In this task, you design and implement those enhancements. For example, you might need to enhance the IDP as follows:
By enhancing the IDP to support the application, you unblock the migration. You also help streamline processes for onboarding and migration projects for other applications that might need those IDP enhancements.
"},{"location":"reference-architectures/accelerating-migrations/#configure-the-idp","title":"Configure the IDP","text":"After you enhance the IDP, if needed, you configure it to provide the resources that the application needs. For example, you configure the following IDP services for the application, or a subset of services:
By configuring the IDP, you prepare it to host the application that you want to onboard and migrate.
"},{"location":"reference-architectures/accelerating-migrations/#onboard-and-migrate-the-application","title":"Onboard and migrate the application","text":"In this phase, you onboard and migrate the application to the IDP by completing the following tasks:
By completing the preceding tasks, you onboard and migrate the application to the IDP. The following sections describe these tasks in more detail.
"},{"location":"reference-architectures/accelerating-migrations/#refactor-the-application","title":"Refactor the application","text":"In the assessment phase, you scoped the refactoring that your application needs in order to onboard and migrate it to the IDP. By completing this task, you design and implement the refactoring. For example, you might need to refactor your application in the following ways in order to meet the IDP's requirements:
By refactoring the application, you prepare it for onboarding and migration to the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#configure-cicd-workflows","title":"Configure CI/CD workflows","text":"After you refactor the application, you do the following:
To build deployable artifacts and deploy them in your runtime environments, we recommend that you avoid manual processes. Instead of manual processes, configure CI/CD workflows by using the application delivery services that the IDP provides and store deployable artifacts in IDP-managed artifact repositories. For example, you can configure CI/CD workflows by using the following methods:
When you build the CI/CD workflows for your environment, consider how many runtime environments the IDP supports. For example, the IDP might support different runtime environments that are isolated from each other such as the following:
If the IDP supports multiple runtime environments for the application, you need to configure the CI/CD workflows for the application to support promoting the application's deployable artifact. You should plan for promoting the application from development to staging, and then from staging to production.
When you promote the application from one environment to the next environment, we recommend that you avoid rebuilding the application's deployable artifacts. Rebuilding creates new artifacts, which means that you would be deploying something different than what you tested and validated.
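The environment-by-environment promotion flow described above can be expressed as a delivery pipeline definition. The following is a minimal sketch of a Cloud Deploy delivery pipeline with three serial stages; the pipeline and target names are placeholders, not values from this document. Because the pipeline promotes a single release, the same deployable artifacts move from stage to stage without being rebuilt.

```yaml
# Sketch only: pipeline and target names are hypothetical.
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: my-app-pipeline
description: Promotes my-app from development to staging to production
serialPipeline:
  stages:
    - targetId: development
    - targetId: staging
    - targetId: production
```

Each `targetId` refers to a separately defined Cloud Deploy target, which keeps the runtime environments isolated from each other while sharing one promotion path.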
"},{"location":"reference-architectures/accelerating-migrations/#migrate-deployable-artifacts-from-the-source-environment","title":"Migrate deployable artifacts from the source environment","text":"If you need to support rolling back to previous versions of the application, you can migrate previous versions of the deployable artifacts that you built for the application from the source environment to an IDP-managed artifact repository. For example, if your application is containerized, you can migrate the container images that you built to deploy the application to Artifact Registry.
"},{"location":"reference-architectures/accelerating-migrations/#deploy-the-application-in-the-development-environment","title":"Deploy the application in the development environment","text":"After configuring CI/CD workflows to build deployable artifacts for the application and to promote them from one environment to another, you deploy the application in the development environment using the CI/CD workflows that you configured.
By using CI/CD workflows to build deployable artifacts and deploy the application, you avoid manual processes that are less repeatable and more prone to errors. You also validate that the CI/CD workflows work as expected.
"},{"location":"reference-architectures/accelerating-migrations/#promote-from-development-to-staging","title":"Promote from development to staging","text":"To promote your application from the development environment to the staging environment, you do the following:
By promoting the application from the development environment to the staging environment, you accomplish the following:
After you promote the application to your staging environment, you perform extensive acceptance testing for both functional and non-functional requirements. When you perform acceptance testing, we recommend that you validate that the user journeys and the business processes that the application implements are working properly in situations that resemble real-world usage scenarios. For example, when you perform acceptance testing, you can do the following:
Acceptance testing helps you ensure that the application works as expected in an environment that resembles the production environment, and helps you identify unanticipated issues.
"},{"location":"reference-architectures/accelerating-migrations/#migrate-data","title":"Migrate data","text":"After you complete acceptance testing for the application, you migrate data from the source environment to IDP-managed services such as the following:
To migrate data from your source environment to IDP-managed services, you can choose approaches like the following, depending on your requirements:
Each of the preceding approaches focuses on solving specific issues, and there's no approach that's inherently better than the others. For more information about migrating data to Google Cloud and choosing the best data migration approach for your application, see Migrate to Google Cloud: Transfer your large datasets.
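As an illustration of one of these approaches, the following Python sketch describes a Storage Transfer Service job that copies an Amazon S3 bucket into a Cloud Storage bucket. It assumes the google-cloud-storage-transfer client library; the project ID and bucket names are placeholders, and AWS credential configuration is intentionally omitted.

```python
def build_s3_to_gcs_job(project_id: str, s3_bucket: str, gcs_bucket: str) -> dict:
    """Describe a Storage Transfer Service job that copies objects from an
    Amazon S3 bucket into a Cloud Storage bucket. In a real job, AWS
    credentials (an access key or a role) must be added to
    aws_s3_data_source; they are omitted from this sketch."""
    return {
        "project_id": project_id,
        "status": "ENABLED",
        "transfer_spec": {
            "aws_s3_data_source": {"bucket_name": s3_bucket},
            "gcs_data_sink": {"bucket_name": gcs_bucket},
        },
    }


def run_transfer(project_id: str, s3_bucket: str, gcs_bucket: str):
    """Create the transfer job. Requires google-cloud-storage-transfer."""
    from google.cloud import storage_transfer

    client = storage_transfer.StorageTransferServiceClient()
    return client.create_transfer_job(
        request={"transfer_job": build_s3_to_gcs_job(project_id, s3_bucket, gcs_bucket)}
    )
```

Building the job description in a pure helper lets you inspect the transfer specification before creating the job.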
If your data is stored in services managed by other cloud providers, see the following resources:
Migrating data from one environment to another is a complex task. If the data migration is too complex to handle as part of the application onboarding and migration process, consider migrating the data as part of a dedicated migration project.
"},{"location":"reference-architectures/accelerating-migrations/#promote-from-staging-to-production","title":"Promote from staging to production","text":"After you complete data migration and acceptance testing, you promote the application to the production environment. To complete this task, you do the following:
When you check the application's operational readiness before you promote it from the staging environment to the production environment, you ensure that the application is ready for the production environment.
"},{"location":"reference-architectures/accelerating-migrations/#perform-the-cutover","title":"Perform the cutover","text":"After you promote the application to the production environment and ensure that it works as expected, you configure the production environment to gradually route requests for the application to the newly promoted application release. For example, you can implement a canary deployment strategy that uses Cloud Deploy.
After you validate that the application continues to work as expected while the number of requests to the newly promoted application increases, you do the following:
Before you retire the application in the source environment, we recommend that you prepare backups and a rollback plan. Doing so will help you handle unanticipated issues that might force you to go back to using the source environment.
"},{"location":"reference-architectures/accelerating-migrations/#optimize-the-application","title":"Optimize the application","text":"Optimization is the last phase of the onboarding and migration process. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. For each iteration, you do the following:
You repeat the preceding sequence until you achieve your optimization goals.
For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization.
The following sections integrate the considerations in Migrate to Google Cloud: Optimize your environment.
"},{"location":"reference-architectures/accelerating-migrations/#establish-your-optimization-requirements","title":"Establish your optimization requirements","text":"Optimization requirements help you to narrow the scope of the current optimization iteration. To establish your optimization requirements for the application, start by considering the following aspects:
For each aspect, we recommend that you establish your optimization requirements for the application. Then, you set measurable optimization goals to meet those requirements. For more information about optimization requirements and goals, see Establish your optimization requirements and goals.
After you meet the optimization requirements for the application, you have completed the onboarding and migration process for the application.
"},{"location":"reference-architectures/accelerating-migrations/#optimize-the-onboarding-and-migration-process-and-the-idp","title":"Optimize the onboarding and migration process and the IDP","text":"After you onboard and migrate the application, you use the data that you gathered about the process and about the IDP to refine and optimize the process. Similarly to the optimization phase for your application, you complete the tasks that are described in the optimization phase, but with a focus on the onboarding and migration process and on the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#establish-your-optimization-requirements-for-the-idp","title":"Establish your optimization requirements for the IDP","text":"To narrow down the scope to optimize the onboarding and migration process, and the IDP, you establish optimization requirements according to data you gather while running through the process. For example, during the onboarding and migration of an application, you might face unanticipated issues that involve the process and the IDP, such as:
To address the issues that arise while you're onboarding and migrating an application, you establish optimization requirements. For example, you might establish the following optimization requirements to address the example issues described above:
After establishing optimization requirements, you set measurable optimization goals to meet those requirements. For more information about optimization requirements and goals, see Establish your optimization requirements and goals.
"},{"location":"reference-architectures/accelerating-migrations/#application-onboarding-and-migration-example","title":"Application onboarding and migration example","text":"In this section, you explore how the onboarding and migration process looks like for an example. The example that we describe in this section doesn't represent a real production application.
To reduce the scope of the example, we focus the example on the following environments:
This document focuses on the onboarding and migration process. For more information about migrating from Amazon EKS to GKE, see Migrate from AWS to Google Cloud: Migrate from Amazon EKS to GKE.
To onboard and migrate the application on the IDP, you follow the onboarding and migration process.
"},{"location":"reference-architectures/accelerating-migrations/#intake-the-onboarding-and-migration-request-example","title":"Intake the onboarding and migration request (example)","text":"In this example, the application onboarding and migration team files a request to onboard and migrate the application on the IDP. To fully present the onboarding and migration process, we assume that IDP cannot find an existing golden path to suggest to onboard and migrate the application, so it forwards the request to the team that manages the IDP for further evaluation.
"},{"location":"reference-architectures/accelerating-migrations/#plan-application-onboarding-and-migration-example","title":"Plan application onboarding and migration (example)","text":"To define timelines and milestones to onboard and migrate the application on the IDP, the application onboarding and migration team prepares a countdown plan:
Phase Task Countdown [days] Status Assess the application Review the application design -27 Not started Review application dependencies -23 Not started Review CI/CD processes -21 Not started Review data persistence and data management requirements -21 Not started Review FinOps requirements -20 Not started Review compliance requirements -20 Not started Review the application's team practices -19 Not started Assess application refactoring and the IDP -19 Not started Finalize the application onboarding and migration plan -18 Not started Set up the IDP Enhance the IDP N/A Not necessary Configure the IDP -17 Not started Onboard and migrate the application Refactor the application -15 Not started Configure CI/CD workflows -9 Not started Promote from development to staging -6 Not started Perform acceptance testing -5 Not started Migrate data -3 Not started Promote from staging to production -1 Not started Perform the cutover 0 Not started Optimize the application Assess your current environment, teams, and optimization loop 1 Not started Establish your optimization requirements and goals 1 Not started Optimize your environment and your teams 3 Not started Tune the optimization loop 5 Not startedTo clearly outline responsibility assignments, the application onboarding and migration team defines the following RACI matrix for each phase and task of the process:
Phase Task Application onboarding and migration team Application development and operations team IDP team Assess the application Review the application design Responsible Accountable Informed Review application dependencies Responsible Accountable Informed Review CI/CD processes Responsible Accountable Informed Review data persistence and data management requirements Responsible Accountable Informed Review FinOps requirements Responsible Accountable Informed Review compliance requirements Responsible Accountable Informed Review the application's team practices Responsible Accountable Informed Assess application refactoring and the IDP Responsible Accountable Consulted Plan application onboarding and migration Responsible Accountable Consulted Set up the IDP Enhance the IDP Accountable Consulted Responsible Configure the IDP Responsible, Accountable Consulted Consulted Onboard and migrate the application Refactor the application Accountable Responsible Consulted Configure CI/CD workflows Responsible, Accountable Consulted Consulted Promote from development to staging Responsible, Accountable Consulted Informed Perform acceptance testing Responsible, Accountable Consulted Informed Migrate data Responsible, Accountable Consulted Consulted Promote from staging to production Responsible, Accountable Consulted Informed Perform the cutover Responsible, Accountable Consulted Informed Optimize the application Assess your current environment, teams, and optimization loop Informed Responsible, Accountable Informed Establish your optimization requirements and goals Informed Responsible, Accountable Informed Optimize your environment and your teams Informed Responsible, Accountable Informed Tune the optimization loop Informed Responsible, Accountable Informed"},{"location":"reference-architectures/accelerating-migrations/#assess-the-application-example","title":"Assess the application (example)","text":"In the assessment phase, the application onboarding and migration team 
assesses the application by completing the assessment phase tasks.
"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-design-example","title":"Review the application design (example)","text":"The application onboarding and migration team reviews the application design, and gathers the following information:
Network and connectivity requirements. The application needs:
The application doesn't require any specific service mesh configuration.
Statefulness. The application stores persistent data on Amazon Relational Database Service (Amazon RDS) for PostgreSQL and on Amazon Simple Storage Service (Amazon S3).
The application onboarding and migration team reviews dependencies on systems that are outside the scope of the application, and gathers the following information:
The application onboarding and migration team reviews the application's CI/CD processes, and gathers the following information:
The application onboarding and migration team reviews data persistence and data management requirements, and gathers the following information:
The application onboarding and migration team is also tasked to migrate data from Amazon RDS for PostgreSQL and Amazon S3 to database and object storage services offered by the IDP. In this example, the IDP offers Cloud SQL for PostgreSQL as a database service, and Cloud Storage as an object storage service.
As part of this application dependency review, the application onboarding and migration team assesses the application's Amazon RDS database and the Amazon S3 buckets. For simplicity, we omit details about those assessments from this example. For more information about assessing Amazon RDS and Amazon S3, see the Assess the source environment sections in the following documents:
The application onboarding and migration team reviews FinOps requirements, and gathers the following information:
The application onboarding and migration team reviews compliance requirements, and gathers the following information:
The application onboarding and migration team reviews development and operational practices that the application development and operations team has in place, and gathers the following information:
The application onboarding and migration team suggests the following:
After reviewing the application and its related CI/CD processes, the application onboarding and migration team assesses the refactoring that the application needs before it can be onboarded and migrated to the IDP, and scopes the following refactoring tasks:
The application onboarding and migration team evaluates the IDP against the application's requirements, and concludes that:
After completing the application review, the application onboarding and migration team refines the onboarding and migration plan, and validates it in collaboration with technical and non-technical stakeholders.
"},{"location":"reference-architectures/accelerating-migrations/#set-up-the-idp-example","title":"Set up the IDP (example)","text":"After you assess the application and plan the onboarding and migration process, you set up the IDP.
"},{"location":"reference-architectures/accelerating-migrations/#enhance-the-idp-example","title":"Enhance the IDP (example)","text":"The IDP team doesn't need to enhance the IDP to onboard and migrate the application because:
The application onboarding and migration team configures the runtime environments for the application using the IDP: a development environment, a staging environment, and a production environment. For each environment, the application onboarding and migration team completes the following tasks:
Configures foundational services:
Provisions and configures a GKE cluster for the application.
To onboard and migrate the application, the application development and operations team refactors the application and then the application onboarding and migration team proceeds with the onboarding and migration process.
"},{"location":"reference-architectures/accelerating-migrations/#refactor-the-application-example","title":"Refactor the application (example)","text":"The application development and operations team refactors the application as follows:
To configure CI/CD workflows, the application onboarding and migration team does the following:
After deploying the application in the development environment, the application onboarding and migration team:
After promoting the application from the development environment to the staging environment, the application onboarding and migration team performs acceptance testing.
To perform acceptance testing to validate the application's real-world user journeys and business processes, the application onboarding and migration team consults with the application development and operations team.
The application onboarding and migration team performs acceptance testing as follows:
Validates that the application works as designed under degraded conditions, and that it recovers once the issues are resolved. The application onboarding and migration team tests the following scenarios:
Verifies that observability and alerting for the application are correctly configured.
After completing acceptance testing for the application, the application onboarding and migration team migrates data from the source environment to the Google Cloud environment as follows:
For simplicity, this document doesn't describe the details of migrating from Amazon RDS and Amazon S3 to Google Cloud. For more information about migrating from Amazon RDS and Amazon S3 to Google Cloud, see:
After performing acceptance testing and after migrating data to the Google Cloud environment, the application onboarding and migration team:
Ensures the application's operational readiness by verifying that the application:
Correctly connects to the Cloud SQL for PostgreSQL instance
After promoting the application to the production environment, and ensuring that the application is operationally ready, the application onboarding and migration team:
After performing the cutover, the application development and operations team takes over the maintenance of the application, and establishes the following optimization requirements:
Reduce the application's operational costs by:
After establishing optimization requirements, the application development and operations team completes the rest of the tasks of the optimization phase.
"},{"location":"reference-architectures/accelerating-migrations/#whats-next","title":"What's next","text":"Authors:
Other contributors:
Secret rotation is a broadly accepted best practice across the information technology industry. However, it is often a cumbersome and disruptive process. In this guide, you will use Google Cloud tools to automate the process of rotating passwords for a Cloud SQL instance. This method can easily be extended to other tools and types of secrets.
"},{"location":"reference-architectures/automated-password-rotation/#storing-passwords-in-google-cloud","title":"Storing passwords in Google Cloud","text":"In Google Cloud, secrets including passwords can be stored using many different tools including common open source tools such as Vault, however in this guide, you will use Secret Manager, Google Cloud's fully managed product for securely storing secrets. Regardless of the tool you use, passwords stored should be further secured. When using Secret Manager, following are some of the ways you can further secure your secrets:
Limiting access : Secrets should be readable and writable only by service accounts through IAM roles. Follow the principle of least privilege when granting roles to service accounts.
Encryption : Secrets should be encrypted. Secret Manager encrypts secrets at rest by using AES-256 by default, but you can use your own encryption keys, customer-managed encryption keys (CMEK), to encrypt your secrets at rest. For details, see Enable customer-managed encryption keys for Secret Manager.
Password rotation : Passwords stored in Secret Manager should be rotated on a regular basis to reduce the risk of a security incident.
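To make the CMEK option above concrete, the following Python sketch builds a Secret Manager create-secret request that encrypts the secret with a customer-managed KMS key. It assumes the google-cloud-secret-manager client library; the project ID, secret ID, location, and key name are placeholders. The request is built by a pure helper so that it can be inspected before it is sent.

```python
def build_cmek_secret_request(project_id: str, secret_id: str,
                              location: str, kms_key_name: str) -> dict:
    """Build a create_secret request for a single-region secret whose
    payload is encrypted at rest with a customer-managed KMS key."""
    return {
        "parent": f"projects/{project_id}",
        "secret_id": secret_id,
        "secret": {
            "replication": {
                "user_managed": {
                    "replicas": [{
                        "location": location,
                        "customer_managed_encryption": {
                            "kms_key_name": kms_key_name,
                        },
                    }],
                },
            },
        },
    }


def create_cmek_secret(project_id: str, secret_id: str,
                       location: str, kms_key_name: str):
    """Create the secret. Requires google-cloud-secret-manager and a KMS
    key that the Secret Manager service agent can use."""
    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()
    return client.create_secret(
        request=build_cmek_secret_request(project_id, secret_id,
                                          location, kms_key_name)
    )
```

With CMEK, each replica location specifies its own key, so the key and the replica must be in matching locations.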
Security best practices require regularly rotating the passwords in your stack. Changing a password mitigates the risk in the event that the password is compromised.
"},{"location":"reference-architectures/automated-password-rotation/#how-to-rotate-passwords","title":"How to rotate passwords","text":"Manually rotating the passwords is an antipattern and should not be done as it exposes the password to the human rotating it and may result in security and system incidents. Manual rotation processes also introduce the risk that the rotation isn't actually performed due to human error, for example forgetting or typos.
This necessitates a workflow that automates password rotation. The password could belong to an application, a database, a third-party service, a SaaS vendor, and so on.
"},{"location":"reference-architectures/automated-password-rotation/#automatic-password-rotation","title":"Automatic password rotation","text":"Typically, rotating a password requires these steps:
(such as applications,databases, SaaS).
Update Secret Manager to store the new password.
Restart the applications that use that password. This will make the
application source the latest passwords.
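The rotation steps above can be sketched in Python. This is an illustration of the general flow, not the repository's actual function code: it assumes the google-api-python-client and google-cloud-secret-manager libraries, and the project, instance, secret, and user identifiers are placeholders.

```python
import secrets
import string


def generate_password(length: int = 24) -> str:
    """Generate a random password from letters and digits by using the
    cryptographically secure secrets module."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))


def rotate_password(project_id: str, secret_id: str,
                    instance: str, db_user: str) -> str:
    """Set a new password on the Cloud SQL user, then store it as the
    latest version of the secret in Secret Manager."""
    new_password = generate_password()

    # Step 1: change the password in the underlying system (Cloud SQL).
    from googleapiclient import discovery  # google-api-python-client
    sqladmin = discovery.build("sqladmin", "v1beta4")
    sqladmin.users().update(
        project=project_id, instance=instance, name=db_user,
        body={"password": new_password},
    ).execute()

    # Step 2: store the new password as a new secret version.
    from google.cloud import secretmanager  # google-cloud-secret-manager
    client = secretmanager.SecretManagerServiceClient()
    client.add_secret_version(
        parent=f"projects/{project_id}/secrets/{secret_id}",
        payload={"data": new_password.encode("utf-8")},
    )
    return new_password
```

Step 3 of the process, restarting the applications so that they source the latest password, depends on where the applications run and is omitted from this sketch.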
The following architecture represents a general design for a system that can rotate passwords for any underlying software or system.
"},{"location":"reference-architectures/automated-password-rotation/#workflow","title":"Workflow","text":"The following architecture demonstrates a way to automatically rotate CloudSQL password.
"},{"location":"reference-architectures/automated-password-rotation/#workflow-of-the-example-deployment","title":"Workflow of the example deployment","text":"Note : The architecture doesn't show the flow to restart the application after the password rotation as shown in thee Generic architecture but it can be added easily with minimal changes to the Terraform code.
"},{"location":"reference-architectures/automated-password-rotation/#deploy-the-architecture","title":"Deploy the architecture","text":"The code to build the architecture has been provided with this repository. Follow these instructions to create the architecture and use it:
Open Cloud Shell in the Google Cloud console and log in with your credentials.
If you want to use an existing project, get the roles/owner role on the project and set the environment variable in Cloud Shell as shown below. Then, move to step 4.
#set shell environment variable\n export PROJECT_ID=<PROJECT_ID>\n Replace <PROJECT_ID> with the ID of the existing project.
If you want to create a new GCP project run the following commands in Cloud Shell.
#set shell environment variable\n export PROJECT_ID=<PROJECT_ID>\n #create project\n gcloud projects create ${PROJECT_ID} --folder=<FOLDER_ID>\n #associate the project with billing account\n gcloud billing projects link ${PROJECT_ID} --billing-account=<BILLING_ACCOUNT_ID>\n Replace <PROJECT_ID> with the ID of the new project. Replace <FOLDER_ID> with the ID of the folder in which to create the project. Replace <BILLING_ACCOUNT_ID> with the billing account ID that the project should be associated with.
Set the project ID in Cloud Shell and enable APIs in the project:
gcloud config set project ${PROJECT_ID}\n gcloud services enable \\\n cloudresourcemanager.googleapis.com \\\n serviceusage.googleapis.com \\\n --project ${PROJECT_ID}\n Download the Git repository containing the code to build the example architecture:
cd ~\n git clone https://github.com/GoogleCloudPlatform/platform-engineering\n cd platform-engineering/reference-architectures/automated-password-rotation/terraform\n\n terraform init\n terraform plan -var \"project_id=$PROJECT_ID\"\n terraform apply -var \"project_id=$PROJECT_ID\" --auto-approve\n Note: It takes around 30 mins for the entire architecture to get deployed.
Once the Terraform apply has successfully finished, the example architecture is deployed in your Google Cloud project. Before exercising the rotation process, review and verify the deployment in the Google Cloud console.
"},{"location":"reference-architectures/automated-password-rotation/#review-cloud-sql-database","title":"Review Cloud SQL database","text":"Databases > SQL. Confirm that cloudsql-for-pg is present in the instance list.cloudsql-for-pg, to open the instance details page.Users. Confirm you see a user with the name user1.Databases. Confirm you see see a database named test.Overview.Connect to this instance section, note that only Private IP address is present and no public IP address. This restricts access to the instance over public network.Security > Secret Manager. Confirm that cloudsql-pswd is present in the list.cloudsql-pswd.View secret value to view the password for Cloud SQL database.Integration Services > Cloud Scheduler. Confirm that password-rotator-job is present in the Scheduler Jobs list.password-rotator-job, confirm it is configured to run on 1st of every month.Click Continue to see execution configuration. Confirm the following settings:
Target type is Pub/SubSelect a Cloud Pub/Sub topic is set to pswd-rotation-topicMessage body contains a JSON object with the details of the Cloud SQL instance and secret to be rotated.Click Cancel to exit the Cloud Scheduler job details.
Analytics > Pub/Sub.Topic. Confirm that pswd-rotation-topic is present in the topics list.pswd-rotation-topic.Subscriptions tab, click on Subscription ID for the rotator Cloud Function.Details tab. Confirm the Audience tag shows the rotator Cloud Function.Topic.pswd-rotation-topic.Details tab.Schema name field.Details, confirm that the schema contains these keys: secretid, instance_name, db_user, db_name and db_location. These keys will be used to identify which database and user password are to be rotated.Serverless > Cloud Run Functions. Confirm that pswd_rotator_function is present in the list.pswd_rotator_function.Trigger tab. Confirm that the field Receive events from has the Pub/Sub topic pswd-rotation-topic. This indicates that the function will run when a message arrives on that topic.Details tab. Confirm that under Network Settings VPC connector is set to connector-for-sql. This allows the function to connect to the Cloud SQL instance over private IP.Source tab to see the Python code that the function executes.Note: For the purpose of this tutorial, the secret is accessible to human users and not additionally encrypted. See the Secret Manager best practices.
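The message handling that the schema above implies can be sketched in a few lines of Python. This is a hypothetical outline, not the actual function source: the payload keys come from the schema listed above, while `parse_rotation_message` and `generate_password` are names invented here for illustration.

```python
import base64
import json
import secrets
import string

# Schema keys defined on the pswd-rotation-topic, per the tutorial.
REQUIRED_KEYS = ("secretid", "instance_name", "db_user", "db_name", "db_location")


def parse_rotation_message(envelope: dict) -> dict:
    """Decode the base64-encoded Pub/Sub payload carrying the rotation details."""
    details = json.loads(base64.b64decode(envelope["message"]["data"]))
    missing = [k for k in REQUIRED_KEYS if k not in details]
    if missing:
        raise ValueError(f"payload missing keys: {missing}")
    return details


def generate_password(length: int = 24) -> str:
    """Generate a cryptographically random password of letters and digits."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Updating the secret version and the database user password would follow, via the Secret Manager and Cloud SQL client libraries; those calls are omitted from this sketch.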
"},{"location":"reference-architectures/automated-password-rotation/#verify-that-you-are-able-to-connect-to-the-cloud-sql-instance","title":"Verify that you are able to connect to the Cloud SQL instance","text":"Databases > SQLcloudsql-for-pgCloud SQL Studio.Database dropdown, choose test.User dropdown, choose user1.Password textbox, paste the password copied from the cloudsql-pswd secret.Authenticate. Confirm you were able to log in to the database.Typically, the Cloud Scheduler job runs automatically on the 1st day of every month, triggering password rotation. However, for this tutorial you will run the Cloud Scheduler job manually, which causes the Cloud Run Function to generate a new password, update it in Cloud SQL, and store it in Secret Manager.
Integration Services > Cloud Scheduler.password-rotator-job. Click the three dots icon and select Force run.Status of last execution shows Success.Serverless > Cloud Run Functions.pswd_rotator_function.Logs tab.Secret cloudsql-pswd changed in Secret Manager!, DB password changed successfully! and DB password verified successfully!.Security > Secret Manager. Confirm that cloudsql-pswd is present in the list.cloudsql-pswd. Note that you should now see a new version, version 2, of the secret.View secret value to view the password for the Cloud SQL database.Databases > SQLcloudsql-for-pgCloud SQL Studio.Database dropdown, choose test.User dropdown, choose user1.Password textbox, paste the password copied from the cloudsql-pswd secret.Authenticate. Confirm you were able to log in to the database. cd platform-engineering/reference-architectures/automated-password-rotation/terraform\n\n terraform init\n terraform plan -var \"project_id=$PROJECT_ID\"\n terraform destroy -var \"project_id=$PROJECT_ID\" --auto-approve\n"},{"location":"reference-architectures/automated-password-rotation/#conclusion","title":"Conclusion","text":"In this tutorial, you saw a way to automate password rotation on Google Cloud. First, you saw a generic reference architecture that can be used to automate password rotation in any password management system. In the later section, you saw an example deployment that uses Google Cloud services to rotate the password of a Cloud SQL database and store it in Google Cloud Secret Manager.
Implementing an automatic flow to rotate passwords removes manual overhead and provides a seamless way to tighten your password security. It is recommended to create an automation flow that runs on a regular schedule but can also be easily triggered manually when needed. Many variations of this architecture can be adopted. For example, you can directly trigger a Cloud Run Function from a Google Cloud Scheduler job without sending a message to Pub/Sub if you don't want to broadcast the password rotation. You should identify a flow that fits your organization's requirements and modify the reference architecture to implement it.
"},{"location":"reference-architectures/backstage/","title":"Backstage on Google Cloud","text":"A collection of resources related to utilizing Backstage on Google Cloud.
"},{"location":"reference-architectures/backstage/#backstage-plugins-for-google-cloud","title":"Backstage Plugins for Google Cloud","text":"A repository for various plugins can be found here -> google-cloud-backstage-plugins
"},{"location":"reference-architectures/backstage/#backstage-quickstart","title":"Backstage Quickstart","text":"This is an example deployment of Backstage on Google Cloud with various Google Cloud services providing the infrastructure.
"},{"location":"reference-architectures/backstage/backstage-quickstart/","title":"Backstage on Google Cloud Quickstart","text":"This quick-start deployment guide can be used to set up an environment to familiarize yourself with the architecture and get an understanding of the concepts related to hosting Backstage on Google Cloud.
NOTE: This environment is not intended to be a long-lived environment. It is intended for temporary demonstration and learning purposes. You will need to modify the configurations provided to align with your organization's needs. Along the way, the guide will make callouts to tasks or areas that should be productionized for long-lived deployments.
"},{"location":"reference-architectures/backstage/backstage-quickstart/#architecture","title":"Architecture","text":"The following diagram depicts the high-level architecture of the infrastructure that will be deployed.
"},{"location":"reference-architectures/backstage/backstage-quickstart/#requirements-and-assumptions","title":"Requirements and Assumptions","text":"To keep this guide simple, it makes a few assumptions. Where there are alternatives, we have linked to additional documentation.
In this section you prepare a folder for deployment.
In this section you prepare your project for deployment.
Go to the project selector page in the Cloud Console. Select or create a Cloud project.
Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
In Cloud Shell, set environment variables with the ID of your project:
export PROJECT_ID=<INSERT_YOUR_PROJECT_ID>\ngcloud config set project \"${PROJECT_ID}\"\n Clone the repository and change directory to the guide directory
git clone https://github.com/GoogleCloudPlatform/platform-engineering && \\\ncd platform-engineering/reference-architectures/backstage/backstage-quickstart\n Set environment variables
export BACKSTAGE_QS_BASE_DIR=$(pwd) && \\\nsed -n -i -e '/^export BACKSTAGE_QS_BASE_DIR=/!p' -i -e '$aexport \\\nBACKSTAGE_QS_BASE_DIR=\"'\"${BACKSTAGE_QS_BASE_DIR}\"'\"' ${HOME}/.bashrc\n Set the project environment variables in Cloud Shell
export BACKSTAGE_QS_STATE_BUCKET=\"${PROJECT_ID}-terraform\"\nexport IAP_USER_DOMAIN=\"<your org's domain>\"\nexport IAP_SUPPORT_EMAIL=\"<your org's support email>\"\n Create a Cloud Storage bucket to store the Terraform state
gcloud storage buckets create gs://${BACKSTAGE_QS_STATE_BUCKET} --project ${PROJECT_ID}\n Before running Terraform, make sure that the Service Usage API and Service Management API are enabled.
Enable Service Usage API and Service Management API
gcloud services enable \\\n cloudresourcemanager.googleapis.com \\\n iap.googleapis.com \\\n serviceusage.googleapis.com \\\n servicemanagement.googleapis.com\n Setup the Identity Aware Proxy brand
gcloud iap oauth-brands create \\\n --application_title=\"IAP Secured Backstage\" \\\n --project=\"${PROJECT_ID}\" \\\n --support_email=\"${IAP_SUPPORT_EMAIL}\"\n Capture the brand name in an environment variable; it will be in the format projects/[your_project_number]/brands/[your_project_number].
export IAP_BRAND=<your_brand_name>\n Using the brand name create the IAP client.
gcloud iap oauth-clients create \\\n ${IAP_BRAND} \\\n --display_name=\"IAP Secured Backstage\"\n Capture the client_id and client_secret in environment variables. For the client_id, we only need the last value of the string; it will be in the format 549085115274-ksi3n9n41tp1vif79dda5ofauk0ebes9.apps.googleusercontent.com
export IAP_CLIENT_ID=\"<your_client_id>\"\nexport IAP_SECRET=\"<your_iap_secret>\"\n Set the configuration variables
sed -i \"s/YOUR_STATE_BUCKET/${BACKSTAGE_QS_STATE_BUCKET}/g\" ${BACKSTAGE_QS_BASE_DIR}/backend.tf\nsed -i \"s/YOUR_PROJECT_ID/${PROJECT_ID}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_USER_DOMAIN/${IAP_USER_DOMAIN}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_SUPPORT_EMAIL/${IAP_SUPPORT_EMAIL}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_CLIENT_ID/${IAP_CLIENT_ID}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_SECRET/${IAP_SECRET}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\n Create the resources
cd ${BACKSTAGE_QS_BASE_DIR} && \\\nterraform init && \\\nterraform plan -input=false -out=tfplan && \\\nterraform apply -input=false tfplan && \\\nrm tfplan\n The initial run of the Terraform may result in errors due to the way the API services are asynchronously enabled. Re-running the Terraform usually resolves the errors.
This will take a while to create all of the required resources; expect somewhere between 15 and 20 minutes.
Build the container image for Backstage
cd manifests/cloudbuild\ngcloud builds submit .\n The output of that command will include a fully qualified image path similar to: us-central1-docker.pkg.dev/[your_project]/backstage-qs/backstage-quickstart:d747db2a-deef-4783-8a0e-3b36e568f6fc Using that value create a new environment variable.
export IMAGE_PATH=\"<your_image_path>\"\n This will take approximately 10 minutes to build and push the image.
Configure Cloud SQL postgres user for password authentication.
gcloud sql users set-password postgres --instance=backstage-qs --prompt-for-password\n Grant the backstage workload service account create database permissions.
a. In the Cloud Console, navigate to SQL
b. Select the database instance
c. In the left menu select Cloud SQL Studio
d. Choose the postgres database and login with the postgres user and password you created in step 4.
e. Run the following sql commands, to grant create database permissions
ALTER USER \"backstage-qs-workload@[your_project_id].iam\" CREATEDB;\n Perform an initial deployment of Kubernetes resources.
cd ../k8s\nsed -i \"s%CONTAINER_IMAGE%${IMAGE_PATH}%g\" deployment.yaml\ngcloud container clusters get-credentials backstage-qs --region us-central1 --dns-endpoint\nkubectl apply -f .\n Capture the IAP audience, the Backend Service may take a few minutes to appear.
a. In the Cloud Console, navigate to Security > Identity-Aware Proxy
b. Verify the IAP option is set to enabled. If not enable it now.
c. Choose Get JWT audience code from the three-dot menu on the right side of your Backend Service.
d. The value will be in the format /projects/<your_project_number>/global/backendServices/<numeric_id>. Using that value, create a new environment variable.
export IAP_AUDIENCE_VALUE=\"<your_iap_audience_value>\"\n Redeploy the Kubernetes manifests with the IAP audience
sed -i \"s%IAP_AUDIENCE_VALUE%${IAP_AUDIENCE_VALUE}%g\" deployment.yaml\nkubectl apply -f .\n In a browser, navigate to your Backstage endpoint. The URL will be similar to https://qs.endpoints.[your_project_id].cloud.goog
Destroy the resources using Terraform destroy
cd ${BACKSTAGE_QS_BASE_DIR} && \\\nterraform init && \\\nterraform destroy -auto-approve && \\\nrm -rf .terraform .terraform.lock.hcl\n Delete the project
gcloud projects delete ${PROJECT_ID}\n Remove Terraform files and temporary files
cd ${BACKSTAGE_QS_BASE_DIR} && \\\nrm -rf \\\n.terraform \\\n.terraform.lock.hcl \\\ninitialize/.terraform \\\ninitialize/.terraform.lock.hcl \\\ninitialize/backend.tf.local \\\ninitialize/state\n Reset the TF variables file
cd ${BACKSTAGE_QS_BASE_DIR} && \\\ncp backstage-qs-auto.tfvars.local backstage-qs.auto.tfvars\n Remove the environment variables
sed \\\n-i -e '/^export BACKSTAGE_QS_BASE_DIR=/d' \\\n${HOME}/.bashrc\n In some instances, you will need to create and manage the project through Terraform. This quickstart provides a sample process and Terraform to create and destroy the project via Terraform.
To run this part of the quick start you will need the following information and permissions.
roles/billing.user IAM permissions on the billing account specifiedroles/resourcemanager.projectCreator IAM permissions on the organization or folder specifiedSet the configuration variables
nano ${BACKSTAGE_QS_BASE_DIR}/initialize/initialize.auto.tfvars\n environment_name = \"qs\"\niapUserDomain = \"\"\niapSupportEmail = \"\"\nproject = {\n billing_account_id = \"XXXXXX-XXXXXX-XXXXXX\"\n folder_id = \"############\"\n name = \"backstage\"\n org_id = \"############\"\n}\n Values required :
environment_name: the name of the environment (defaults to qs for quickstart)iapUserDomain: the root domain of the GCP Org that the Backstage users will be iniapSupportEmail: support contact for the IAP brandproject.billing_account_id: the billing account IDproject.name: the prefix for the display name of the project, the full name will be <project.name>-<environment_name>project.folder_id OR project.org_idproject.folder_id: the Google Cloud folder IDproject.org_id: the Google Cloud organization IDAuthorize gcloud
gcloud auth login --activate --no-launch-browser --quiet --update-adc\n Create a new project
cd ${BACKSTAGE_QS_BASE_DIR}/initialize\nterraform init && \\\nterraform plan -input=false -out=tfplan && \\\nterraform apply -input=false tfplan && \\\nrm tfplan && \\\nterraform init -force-copy -migrate-state && \\\nrm -rf state\n Set the project environment variables in Cloud Shell
PROJECT_ID=$(grep environment_project_id \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars |\nawk -F\"=\" '{print $2}' | xargs)\n Destroy the project
cd ${BACKSTAGE_QS_BASE_DIR}/initialize && \\\nTERRAFORM_BUCKET_NAME=$(grep bucket backend.tf | awk -F\"=\" '{print $2}' |\nxargs) && \\\ncp backend.tf.local backend.tf && \\\nterraform init -force-copy -lock=false -migrate-state && \\\ngcloud storage rm --recursive --continue-on-error gs://${TERRAFORM_BUCKET_NAME}/* && \\\nterraform init && \\\nterraform destroy -auto-approve && \\\nrm -rf .terraform .terraform.lock.hcl state/\n In situations where you have run this quickstart before and then cleaned up the resources but are re-using the project, it might be necessary to restore the endpoints from a deleted state first.
BACKSTAGE_QS_PREFIX=$(grep environment_name \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars | awk -F\"=\" '{print $2}' | xargs)\nBACKSTAGE_QS_PROJECT_ID=$(grep environment_project_id \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars | awk -F\"=\" '{print $2}' | xargs)\ngcloud endpoints services undelete \\\n${BACKSTAGE_QS_PREFIX}.endpoints.${BACKSTAGE_QS_PROJECT_ID}.cloud.goog \\\n--quiet 2>/dev/null\n"},{"location":"reference-architectures/cloud_deploy_flow/","title":"Platform Engineering Deployment Demo","text":""},{"location":"reference-architectures/cloud_deploy_flow/#background","title":"Background","text":"Platform engineering focuses on providing a robust framework for managing the deployment of applications across various environments. One of the critical components in this field is the automation of application deployments, which streamlines the entire process from development to production.
Most organizations have predefined rules around security, privacy, deployment, and change management to ensure consistency and compliance across environments. These rules often include automated security scans, privacy checks, and controlled release protocols that track all changes in both production and pre-production environments.
In this demo, the architecture is designed to show how a deployment tool like Cloud Deploy can integrate smoothly into such workflows, supporting both automation and oversight. The process starts with release validation, ensuring that only compliant builds reach the release stage. Rollout approvals then offer flexibility, allowing teams to implement either manual checks or automated responses depending on specific requirements.
This setup provides a blueprint for organizations to streamline deployment cycles while maintaining robust governance. By using this demo, you can see how these components work together, from container build through deployment, in a way that minimizes disruption to existing processes and aligns with typical organizational change management practices.
This demo showcases a complete workflow that begins with the build of a container and progresses through various stages, ultimately resulting in the deployment of a new application.
"},{"location":"reference-architectures/cloud_deploy_flow/#overview-of-the-demo","title":"Overview of the Demo","text":"This demo illustrates the end-to-end deployment process, starting from the container build phase. Here's a high-level overview of the workflow:
Container Build Process: The demo begins when a container is built in Cloud Build. Upon completion, a notification is sent to a Pub/Sub message queue.
Release Logic: A Cloud Run Function subscribes to this message queue, assessing whether a release should be created. If a release is warranted, a message is sent to a \"Command Queue\" (another Pub/Sub topic).
Creating a Release: A dedicated function listens to the \"Command Queue\" and communicates with Cloud Deploy to create a new release. Once the release is created, a notification is dispatched to the Pub/Sub Operations topic.
Rollout Process: Another Cloud Function picks up this notification and initiates the rollout process by sending a createRolloutRequest to the \"Command Queue.\"
Approval Process: Since rollouts typically require approval, a notification is sent to the cloud-deploy-approvals Pub/Sub queue. An approval function then picks up this message, allowing you to implement your custom logic or utilize the provided demo site to return JSON, such as { \"manualApproval\": \"true\" }.
Deployment: Once approved, the rollout proceeds, and the new application is deployed.
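The queue-driven steps above can be sketched with a small helper that builds the base64-encoded envelope published to the "Command Queue." This is an illustrative Python sketch; `build_command_message` and the exact key derivation are assumptions, loosely mirroring the DeployCommand-style messages this demo's functions exchange.

```python
import base64
import json


def build_command_message(command: str, request: dict) -> dict:
    """Wrap a deploy command in a Pub/Sub-style envelope (base64 JSON in message.data).

    The request is stored under a "<command>Request" key with a lower-cased
    first letter, e.g. CreateRelease -> createReleaseRequest (assumed naming).
    """
    body = {"command": command, command[0].lower() + command[1:] + "Request": request}
    data = base64.b64encode(json.dumps(body).encode()).decode()
    return {"message": {"data": data}}
```

A subscriber on the command topic would decode `message.data` and act on the `command` field.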
compute.googleapis.comiam.googleapis.comcloudresourcemanager.googleapis.comTo run this demo, the following IAM roles will be granted to the service account created by the Terraform configuration:
roles/iam.serviceAccountUser: Allows management of service accounts.roles/logging.logWriter: Grants permission to write logs.roles/artifactregistry.writer: Enables writing to Artifact Registry.roles/storage.objectUser: Provides access to Cloud Storage objects.roles/clouddeploy.jobRunner: Allows execution of Cloud Deploy jobs.roles/clouddeploy.releaser: Grants permissions to release configurations in Cloud Deploy.roles/run.developer: Enables deploying and managing Cloud Run services.roles/cloudbuild.builds.builder: Allows triggering and managing Cloud Build processes.The following Google Cloud services must be enabled in your project to run this demo:
pubsub.googleapis.com: Enables Pub/Sub for messaging between services.clouddeploy.googleapis.com: Allows use of Cloud Deploy for managing deployments.cloudbuild.googleapis.com: Enables Cloud Build for building and deploying applications.compute.googleapis.com: Provides access to Compute Engine resources.cloudresourcemanager.googleapis.com: Allows management of project-level permissions and resources.run.googleapis.com: Enables Cloud Run for deploying and running containerized applications.cloudfunctions.googleapis.com: Allows use of Cloud Functions for event-driven functions.eventarc.googleapis.com: Enables Eventarc for routing events from sources to targets.artifactregistry.googleapis.com: Allows for image hosting for CI/CD.To run this demo, follow these steps:
Fork and Clone the Repository: Start by forking this repository to your GitHub account (So you can connect GCP to this repository), then clone it to your local environment. After cloning, change your directory to the deployment demo:
cd platform-engineering/reference-architectures/cloud_deploy_flow\n Note: you can't use a repository inside an organization; use your personal account for this demo.
Set Up Environment Variables or Variables File: You can set the necessary variables either by exporting them as environment variables or by creating a terraform.tfvars file. Refer to variables.tf for more details on each variable. Ensure the values match your Google Cloud project and GitHub configuration.
For the repo-name and repo-owner here, use the repository you just cloned above.
Option 1: Set environment variables manually in your shell:
export TF_VAR_project_id=\"your-google-cloud-project-id\"\nexport TF_VAR_region=\"your-preferred-region\"\nexport TF_VAR_github_owner=\"your-github-repo-owner\"\nexport TF_VAR_github_repo=\"your-github-repo-name\"\n Option 2: Create a terraform.tfvars file in the same directory as your Terraform configuration and populate it with the following:
project_id = \"your-google-cloud-project-id\"\nregion = \"your-preferred-region\"\ngithub_owner = \"your-github-repo-owner\"\ngithub_repo = \"your-github-repo-name\"\n Initialize and Apply Terraform: With the environment variables set, initialize and apply the Terraform configuration:
terraform init\nterraform apply\n Note: Applying Terraform may take a few minutes as it creates the necessary resources.
Connect GitHub Repository to Cloud Build: Due to occasional issues with automatic connections, you may need to manually attach your GitHub repository to Cloud Build in the Google Cloud Console.
If you get the following error you will need to manually connect your repository to the project:
Error: Error creating Trigger: googleapi: Error 400: Repository mapping does\nnot exist.\n Re-run step 3 to ensure all resources are deployed
Navigate to the Demo site: Once the Terraform setup is complete, switch to the Demo site directory:
cd platform-engineering/reference-architectures/cloud_deploy_flow/WebsiteDemo\n Authenticate and Run the Demo site:
Ensure you are running these commands on a local machine or a machine with GUI/web browser access, as Cloud Shell may not fully support running the demo site.
Set your Google Cloud project by running:
gcloud config set project <your_project_id>\n Authenticate your Google Cloud CLI session:
gcloud auth application-default login\n Install required npm packages and start the demo site:
npm install\nnode index.js\n Open http://localhost:8080 in your browser to observe the demo site in action.
Trigger a Build in Cloud Build:
Approve the Rollout: When an approval message is received, you\u2019ll need to send a response to complete the deployment. Use the message data provided and add a ManualApproval field:
{\n \"message\": {\n \"data\": \"<base64-encoded data>\",\n \"attributes\": {\n \"Action\": \"Required\",\n \"Rollout\": \"rollout-123\",\n \"ReleaseId\": \"release-456\",\n \"ManualApproval\": \"true\"\n }\n }\n}\n Verify the Deployment: Once the approval is processed, the deployment should finish rolling out. Check the Cloud Deploy dashboard in the Google Cloud Console to confirm the deployment status.
This demo encapsulates the essential components and workflow for deploying applications using platform engineering practices. It illustrates how various services interact to ensure a smooth deployment process.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/","title":"Cloud Deployment Approvals with Pub/Sub","text":"This project provides a Google Cloud Run Function to automate deployment approvals based on messages received via Google Cloud Pub/Sub. The function processes deployment requests, checks conditions for rollout approval, and publishes an approval command if the requirements are met.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#usage-note","title":"Usage Note","text":"This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#features","title":"Features","text":"Clone the repository:
git clone <repository-url>\ncd <repository-folder>\n Enable APIs: Enable the Google Cloud Pub/Sub and Deploy APIs for your project:
gcloud services enable pubsub.googleapis.com deploy.googleapis.com\n Deploy the Function: Use Google Cloud SDK to deploy the function:
gcloud functions deploy cloudDeployApprovals --runtime go116 \\\n--trigger-event-type google.cloud.pubsub.topic.v1.messagePublished \\\n--trigger-resource YOUR_SUBSCRIBE_TOPIC\n The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:
Variable Name Description RequiredPROJECTID Google Cloud project ID Yes LOCATION The deployment location (region) Yes SENDTOPICID Pub/Sub topic ID for sending commands Yes"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#code-structure","title":"Code Structure","text":"config struct: Holds configuration for the environment variables.
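As an illustration of the table above, a loader might read and validate these variables at startup. The actual function does this in Go; this Python sketch is only indicative, and `load_config` is a name invented here.

```python
import os

# Required variables from the configuration table above.
REQUIRED_VARS = ("PROJECTID", "LOCATION", "SENDTOPICID")


def load_config(env=os.environ) -> dict:
    """Read the required variables, failing fast when any is missing."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError("missing environment variables: " + ", ".join(missing))
    return {name: env[name] for name in REQUIRED_VARS}
```

Failing fast on deploy-time misconfiguration is usually preferable to discovering a missing topic ID on the first message.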
PubsubMessage and ApprovalsData structs: Define the structure of messages received from Pub/Sub and attributes within them.
cloudDeployApprovals function: Entry point for handling messages. Validates the conditions and, if met, triggers the sendCommandPubSub function to send an approval command.
sendCommandPubSub function: Publishes a command message to the Pub/Sub topic to approve a deployment rollout.
The function cloudDeployApprovals is invoked whenever a message is published to the configured Pub/Sub topic. Upon receiving a message, the function will:
Verify that the Action attribute is Required, that a rollout ID is provided, and that manual approval is marked as \"true\". If these conditions are met, publish an approval command to the SENDTOPICID topic.A message sent to the function should resemble this JSON structure:
{\n \"message\": {\n \"data\": \"<base64-encoded data>\",\n \"attributes\": {\n \"Action\": \"Required\",\n \"Rollout\": \"rollout-123\",\n \"ReleaseId\": \"release-456\",\n \"ManualApproval\": \"true\"\n }\n }\n}\n"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#custom-manual-approval-field","title":"Custom Manual Approval Field","text":"In the ApprovalsData struct, there is a ManualApproval field. This field is a custom addition, not provided by Google Cloud Deploy, and serves as a placeholder for an external approval system.
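The approval conditions on a message like the one above can be expressed as a small predicate. The function itself is written in Go, so this Python `should_approve` sketch is purely illustrative of the checks (Action required, rollout ID present, ManualApproval "true").

```python
def should_approve(envelope: dict) -> bool:
    """Check the approval conditions on the Pub/Sub message attributes:
    Action is "Required", a Rollout ID is present, and ManualApproval is "true".
    """
    attrs = envelope.get("message", {}).get("attributes", {})
    return (
        attrs.get("Action") == "Required"
        and bool(attrs.get("Rollout"))
        and attrs.get("ManualApproval") == "true"
    )
```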
To integrate the approval system, you can replace or adapt this field to suit your existing change process workflow. For instance, you could link this field to an external ticketing or project management system to track and verify approvals. Implementing an approval system allows greater control over deployment rollouts, ensuring they align with your organization\u2019s policies.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#logging","title":"Logging","text":"The function logs each major step, from invocation to message processing and condition checking, to facilitate debugging and monitoring.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/","title":"Cloud Deploy Interactions with Pub/Sub","text":"This project demonstrates a Google Cloud Run Function to manage deployments by creating releases, rollouts, or approving rollouts based on incoming Pub/Sub messages. The function leverages Google Cloud Deploy and listens for deployment-related commands sent via Pub/Sub, executing appropriate actions based on the command type.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#usage-note","title":"Usage Note","text":"This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#features","title":"Features","text":"Listens for Pub/Sub messages with deployment commands (CreateRelease, CreateRollout, ApproveRollout) Messages should include protobuf request.
Initiates Google Cloud Deploy actions based on the received command.
Logs each step of the deployment process for better traceability.
Clone the repository:
git clone <repository-url>\ncd <repository-folder>\n Set up Google Cloud: Ensure you have enabled the Google Cloud Deploy and Pub/Sub APIs in your project.
Deploy the Function: Deploy the function using Google Cloud SDK:
gcloud functions deploy cloudDeployInteractions --runtime go116 \\\n--trigger-event-type google.cloud.pubsub.topic.v1.messagePublished \\\n--trigger-resource YOUR_TOPIC_NAME\n The Pub/Sub message should include a JSON payload with a command field specifying the type of deployment action to execute. Examples of the command types include:
CreateRelease: Creates a new release for deployment.CreateRollout: Initiates a rollout of the release.ApproveRollout: Approves a pending rollout.The message should follow this structure:
{\n \"message\": {\n \"data\": \"<base64-encoded JSON containing command data>\"\n }\n}\n The JSON inside data should follow the format for DeployCommand:
{\n \"command\": \"CreateRelease\",\n \"createReleaseRequest\": {\n // Release creation parameters\n },\n \"createRolloutRequest\": {\n // Rollout creation parameters\n },\n \"approveRolloutRequest\": {\n // Rollout approval parameters\n }\n}\n"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#code-structure","title":"Code Structure","text":"DeployCommand struct: Defines the command to be executed and the parameters for each deploy action (create release, create rollout, or approve rollout).
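Decoding and dispatching on the command field might look like the following. This is an illustrative Python sketch (the function is written in Go); the handler bodies are placeholders standing in for the Cloud Deploy API calls.

```python
import base64
import json


def dispatch(envelope: dict) -> str:
    """Decode message.data and route on the DeployCommand command field."""
    payload = json.loads(base64.b64decode(envelope["message"]["data"]))
    handlers = {
        "CreateRelease": lambda p: "creating release",    # placeholder for the API call
        "CreateRollout": lambda p: "creating rollout",
        "ApproveRollout": lambda p: "approving rollout",
    }
    command = payload.get("command")
    if command not in handlers:
        raise ValueError(f"unknown command: {command}")
    return handlers[command](payload)
```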
cloudDeployInteractions function: Main function triggered by Pub/Sub messages. It parses the message and calls the respective deployment function based on the command.
cdCreateRelease: Creates a release in Google Cloud Deploy.
Each function logs key steps, from initialization to message handling and completion of deployments, helping in troubleshooting and monitoring.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/","title":"Cloud Deploy Operations Function","text":"This project contains a Google Cloud Run Function written in Go, designed to interact with Google Cloud Deploy. The function listens for deployment events on a Pub/Sub topic, processes those events, and triggers specific deployment operations based on the event details. For instance, when a deployment release succeeds, it triggers a rollout creation and sends the relevant command to another Pub/Sub topic.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#usage-note","title":"Usage Note","text":"This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#requirements","title":"Requirements","text":"The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:
Variable Name Description RequiredPROJECTID Google Cloud project ID Yes LOCATION The deployment location (region) Yes SENDTOPICID Pub/Sub topic ID for sending commands Yes"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#structure","title":"Structure","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#main-components","title":"Main Components","text":"Data payload and Attributes metadata.CreateRollout.CommandMessage to a specified Pub/Sub topic, which triggers deployment operations.cloudDeployOperations is triggered by a deployment event, specifically a CloudEvent.Message struct, checking for deployment success events.CommandMessage for a rollout and calls sendCommandPubSub.sendCommandPubSub function publishes the CommandMessage to a designated Pub/Sub topic to initiate the rollout.functions-framework --target=cloudDeployOperations\n"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#deployment-to-google-cloud-run-functions","title":"Deployment to Google Cloud Run Functions","text":"Set up your Google Cloud environment and enable the necessary APIs:
gcloud services enable cloudfunctions.googleapis.com pubsub.googleapis.com \\\nclouddeploy.googleapis.com\n Deploy the function to Google Cloud:
gcloud functions deploy cloudDeployOperations \\\n --runtime go120 \\\n --trigger-topic <YOUR_TRIGGER_TOPIC> \\\n --set-env-vars PROJECTID=<YOUR_PROJECT_ID>,LOCATION=<YOUR_LOCATION>,SENDTOPICID=<YOUR_SEND_TOPIC_ID>\n This project is licensed under the MIT License. See the LICENSE file for details.
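The required environment variables listed earlier can be validated once at startup so a misconfigured deployment fails fast. A minimal sketch, in Python for illustration (the deployed function is Go, and this validation helper is an assumption, not the function's actual code):

```python
import os

# Required configuration, per the variables table above.
REQUIRED_VARS = ("PROJECTID", "LOCATION", "SENDTOPICID")

def load_config() -> dict:
    """Read required configuration from the environment, failing fast when unset."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError("missing required environment variables: " + ", ".join(missing))
    return {name: os.environ[name] for name in REQUIRED_VARS}
```

Failing at startup surfaces configuration problems in deployment logs instead of at message-handling time.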
TargetId within CommandMessage is dynamically populated based on actual Pub/Sub message data. A Pub/Sub client is created with pubsub.NewClient; its connection management should be monitored carefully in production. This project demonstrates a Google Cloud Run Function that triggers deployments based on Pub/Sub messages. The function listens for build notifications from Google Cloud Build and initiates a release in Google Cloud Deploy when a build succeeds.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#usage-note","title":"Usage Note","text":"This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#table-of-contents","title":"Table of Contents","text":"The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:
Variable Name Description RequiredPROJECTID Google Cloud project ID Yes LOCATION The deployment location (region) Yes PIPELINE The name of the delivery pipeline in Cloud Deploy. Yes TRIGGER The ID of the build trigger in Cloud Build. Yes SENDTOPICID Pub/Sub topic ID for sending commands Yes"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#function-overview","title":"Function Overview","text":"The deployTrigger function is invoked by Pub/Sub events. Here's a breakdown of its key components:
Initialization:
Message Handling:
Release Creation:
CreateReleaseRequest for Cloud Deploy. Random ID Generation:
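Random ID generation of this kind can be sketched as follows — Python for illustration; the prefix and suffix length are assumptions, not the Go function's exact scheme:

```python
import secrets

def release_name(prefix: str = "rel") -> str:
    """Build a release name with a short random hex suffix so repeated
    builds produce unique Cloud Deploy release names."""
    return f"{prefix}-{secrets.token_hex(4)}"
```

A unique suffix avoids name collisions when the same pipeline is released repeatedly.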
To deploy the function, follow these steps:
gcloud functions deploy deployTrigger \\\n --runtime go120 \\\n --trigger-topic YOUR_TOPIC_NAME \\\n --env-vars-file .env\n"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/","title":"Random Date Service","text":"This repository contains a sample application designed to demonstrate how deployments can work through Google Cloud Deploy and Cloud Build. Instead of a traditional \"Hello World\" application, this project generates and serves a random date, showcasing how to set up a cloud-based service.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#usage-note","title":"Usage Note","text":"This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#overview","title":"Overview","text":"The Random Date Service is built to illustrate the process of deploying an application using Cloud Run and Cloud Deploy. The application serves a random date formatted as a string. This simple service allows you to explore key concepts in cloud deployment without the complexity of a full-fledged application.
This is the core of the application, where the HTTP server is defined. It handles requests and responds with a randomly generated date.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#2-dockerfile","title":"2. Dockerfile","text":"The Dockerfile specifies how to build a container image for the application. This image will be used in Cloud Run for deploying the service.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#3-skaffoldyaml","title":"3. skaffold.yaml","text":"This file is configured for Google Cloud Deploy, facilitating the deployment process by managing builds and configurations in a single file.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#4-runyaml","title":"4. run.yaml","text":"The run.yaml file defines the configuration for Cloud Run and Cloud Deploy. Key aspects to note include:
random-date-service.image field under spec is set to pizza. This is crucial, as it indicates to Cloud Deploy where to substitute the image. This substitution occurs based on the createRelease function in main.go, specifically noted on line 122.To deploy and test this application:
run.yaml configuration to deploy the service.This sample application serves as a foundational example of how to leverage cloud services for deploying applications. By utilizing Google Cloud Deploy and Cloud Build, you can understand the deployment lifecycle and how cloud-native applications can be effectively managed and served.
Feel free to explore the code and configurations provided in this repository to get a better grasp of the deployment process.
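The placeholder substitution described above — the pizza value in run.yaml being swapped for the freshly built image at release time — can be sketched like this. Python for illustration, with a deliberately simplified manifest shape (the real substitution happens inside the Go createRelease function):

```python
def substitute_image(manifest: dict, image: str, placeholder: str = "pizza") -> dict:
    """Return a copy of a simplified service manifest with the placeholder
    image replaced by the image built for this release."""
    containers = []
    for container in manifest.get("containers", []):
        container = dict(container)  # shallow copy so the input is untouched
        if container.get("image") == placeholder:
            container["image"] = image
        containers.append(container)
    return {**manifest, "containers": containers}
```

This is why the placeholder value matters: Cloud Deploy needs a marker it can find and replace.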
"},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/","title":"Pub/Sub Local Demo","text":"This project is a simple demonstration of a Pub/Sub system using Google Cloud Pub/Sub and a basic Express.js server. It is designed to visually understand how messages are sent to and from Pub/Sub queues. The code provided is primarily for demonstration purposes and is not intended for production use.
"},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#features","title":"Features","text":"Install the required dependencies:
npm install
Create a .env file and populate it with the environment variables found in .env.sample
Start the server:
node index.js
Open your web browser and go to http://localhost:8080 to access the demo.
This code is intended for educational and demonstration purposes only. It may not be suitable for production environments due to its lack of error handling, security hardening, and scalability.
"},{"location":"reference-architectures/github-runners-gke/","title":"Reference Guide: Deploy and use GitHub Actions Runners on GKE","text":""},{"location":"reference-architectures/github-runners-gke/#overview","title":"Overview","text":"This guide walks you through the process of setting up self-hosted GitHub Actions Runners on Google Kubernetes Engine (GKE) using the Terraform module terraform-google-github-actions-runners. It then provides instructions on how to create a basic GitHub Actions workflow to leverage these runners.
cloudresourcemanager.googleapis.comiam.googleapis.comcontainer.googleapis.comserviceusage.googleapis.comRun the following command to enable the prerequisite APIs:
gcloud services enable \\\n cloudresourcemanager.googleapis.com \\\n iam.googleapis.com \\\n container.googleapis.com \\\n serviceusage.googleapis.com \\\n --project <YOUR_PROJECT_ID>\n"},{"location":"reference-architectures/github-runners-gke/#register-a-github-app-for-authenticating-arc","title":"Register a GitHub App for Authenticating ARC","text":"Using a GitHub App for authentication allows you to make your self-hosted runners available to a GitHub organization that you own or have administrative access to. For more details on registering GitHub Apps, see GitHub\u2019s documentation.
You will need 3 values from this section to use as inputs in the Terraform module:
https://github.com/actions/actions-runner-controllergh_app_id in the Terraform module.pem file for later.gh_app_private_key in the Terraform modulehttps://github.com/organizations/ORGANIZATION/settings/installations/INSTALLATION_IDgh_app_installation_id in the Terraform module.Open the Terraform module repository in Cloud Shell automatically by clicking the button:
Clicking this button will clone the repository into Cloud Shell, change into the example directory, and open the main.tf file in the Cloud Shell Editor.
project_idgh_app_id: insert the value of the App ID from the GitHub App pagegh_app_installation_id: insert the value from the URL of the app installation pagegh_app_private_key:.pem file to example directory, alongside the main.tf file.pem filename you downloaded after generating the private key for the app, like so:gh_app_private_key = file(\"example.private-key.pem\")gh_config_url with the URL of your GitHub organization. It will be in the format of https://github.com/ORGANIZATIONterraform init to download the required providers.terraform plan to preview the changes that will be made.terraform apply and confirm to create the resources.You will see the runners become available in your GitHub Organization:
You should see the runners appear as \u201carc-runners\u201d
"},{"location":"reference-architectures/github-runners-gke/#creating-a-github-actions-workflow","title":"Creating a GitHub Actions Workflow","text":"Paste the following configuration into the text editor:
name: Actions Runner Controller Demo\non:\n  workflow_dispatch:\njobs:\n  Explore-GitHub-Actions:\n    runs-on: arc-runners\n    steps:\n      - run: echo \"This job uses runner scale set runners!\"\n Click Commit changes to save the workflow to your repository.
Navigate back into the example directory where you previously ran terraform apply
cd terraform-google-github-actions-runners/examples/gh-runner-gke-simple/\n Destroy Terraform-managed infrastructure
terraform destroy\n Warning: this will destroy the GKE cluster, example VPC, service accounts, and the Helm-managed workloads previously deployed by this example.
"},{"location":"reference-architectures/github-runners-gke/#delete-github-resources","title":"Delete GitHub resources","text":"If you created a new GitHub App for testing purposes of this walkthrough, you can delete it via the following instructions. Note that any services authenticating via this GitHub App will lose access.
This architecture demonstrates how you can automate the provisioning of sandbox projects and automatically apply sensible guardrails and constraints. A sandbox project allows engineers to experiment with new technologies. Sandboxes are provisioned for a short period of time and with budget constraints.
"},{"location":"reference-architectures/sandboxes/#architecture","title":"Architecture","text":"The following diagram is the high-level architecture for enabling self-service creation of sandbox projects.
onCreate and onModify. The functions contain the logic to decide if a sandbox should be created or deleted.infraManagerProcessor is a Cloud Run service that works with Infrastructure Manager to kick off and monitor the infrastructure management. This is handled in a Cloud Run service because the execution of Terraform is a long running process.This repository contains the code to stand up the reference architecture and also create difference sandbox templates in the catalog. This section describes the structure of the repository so you can better navigate the code.
"},{"location":"reference-architectures/sandboxes/#examples","title":"Examples","text":"The /examples directory contains a sample Terraform deployment for deploying the reference architecture and command-line tool to exercise the automated creation of developer sandboxes. The examples are intended to provide you a starting point so you can incorporate the reference architecure into your infrastructure.
This example uses the Terraform modules from /sandbox-modules to deploy the reference architecture and includes instructions on how to get started.
The workflows and lifecycle of the sandboxes deployed via the reference architecture are managed through the document model stored in Cloud Firestore. This abstraction has the benefit of separating the core logic included in the reference archiecture from the user experience (UX). As such the example command line interface lets you experiment with the reference architecture and learn about the object model.
"},{"location":"reference-architectures/sandboxes/#catalog","title":"Catalog","text":"This directory contains a collection (catalog) of templates that you can use to deploy sandboxes. The reference architecture includes one for an empty project, but others could be added to support more specialized roles such as database admins, AI engineers, etc.
"},{"location":"reference-architectures/sandboxes/#sandbox-modules","title":"Sandbox Modules","text":"These modules use the fabric modules to create the system project. Each module represents a large component of the overall reference architecture and each component can be combined into the one system project or spread across different projects to help with separation of duties.
"},{"location":"reference-architectures/sandboxes/#fabric-modules","title":"Fabric Modules","text":"These are the base Terraform modules adopted from the Cloud Fabric Foundation. The fabric foundation is intended to be vendored, so we have copied them here for repeatbility of the overall deployment of the reference architecture.
We recommend that as you need additional modules for templates in the catalog that you start with and vendor the modules from the Cloud Foundation Fabric into this directory.
"},{"location":"reference-architectures/sandboxes/examples/cli/","title":"Example Command Line Interface","text":""},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/","title":"Overview","text":"This directory contains Terraform configuration files that let you deploy the system project. This example is a good entry point for testing the reference architecture and learning how it can be incorportated into your own infrastructure as code processes.
"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#architecture","title":"Architecture","text":"For an explanation of the components of the sandboxes reference architecture and the interaction flow, read the main Architecture section.
"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#before-you-begin","title":"Before you begin","text":"In this section you prepare a folder for deployment.
Activate Cloud Shell \\ At the bottom of the Cloud Console, a Cloud Shell session starts and displays a command-line prompt.
In Cloud Shell, clone this repository
git clone https://github.com/GoogleCloudPlatform/platform-engineering.git\n Export variables for the working directories
export SANDBOXES_DIR=\"$(pwd)/reference-architectures/examples/gcp-sandboxes\"\nexport SANDBOXES_CLI=\"$(pwd)/reference-architectures/examples/cli\"\n In this section you prepare your environment for deploying the system project.
Go to the Manage Resources page in the Cloud Console in the IAM & Admin menu.
Click Create folder, then choose Folder.
Enter a name for your folder. This folder will be used to contain the system and sandbox projects.
Click Create
Copy the folder ID from the Manage resources page, you will need this value later for use as Terraform variable.
Set the project ID and region in the corresponding Terraform environment variables
export TF_VAR_billing_account=\"<your billing account id>\"\nexport TF_VAR_sandboxes_folder=\"folders/<folder id from step 5>\"\nexport TF_VAR_system_project_name=\"<name for the system project>\"\n Change directory into the Terraform example directory and initialize Terraform.
cd \"${SANDBOXES_DIR}\"\nterraform init\n Apply the configuration. Answer yes when prompted, after reviewing the resources that Terraform intends to create.
terraform apply\n Now that the system project has been deployed, create a sandbox using the example cli.
Change directory into the example command-line tool directory
cd \"${SANDBOXES_CLI}\"\n Install the required Python libraries
pip install -r requirements.txt\n Create a Sandbox using the cli
python ./sandbox.py create \\\n--system=\"<name of your system project>\" \\\n--project_id=\"<name of the sandbox to create>\"\n Your sandbox infrastructure is ready; you can continue to use the example cli to create and delete sandboxes. At this point it is recommended that you:
Each document stored in Cloud Firestore represents a sandbox. The following sections document the fields and structure of those documents.
"},{"location":"reference-architectures/sandboxes/sandbox-modules/#deployment","title":"Deployment","text":"Field Type Description_updateSource string This describes the last process or tool used to update or create the deployment document. For example, the example python cli _updateSource is set to python and when the firestore-processor Cloud Run updates the document it is set to cloudrun. status string Status of the sandbox, this changes create and delete operations progress. Refer to Key Statuses for detailed definitions of the values. projectId string The project ID of the sandbox. templateName string The name of the Terraform template from the catalog that the sandbox is based on. deploymentState object<DeploymentState> State object for the sandbox deployment. Contains data such as budget, current spend, expiration date, etc.The state object is updated by and used by the various lifecycle functions. infraManagerDeploymentId string ID returned by Infrastructure Manager for the deployment. infraManagerResult object<DeploymentResponse> This is the response object returned from Infrastructure Manager deployment operation. userId string Unique identifier for the user which owns the sandbox deployment. createdAt string Timestamp that the sandbox record was created at. updatedAt string Timestamp that the sandbox record was last updated. variables object<Variables> List of variable supplied by the user, which are in turned used by the template to create the sandbox. auditLog array[string] List of messages that the system can add as an audit log."},{"location":"reference-architectures/sandboxes/sandbox-modules/#deploymentstate","title":"DeploymentState","text":"Field Type Description budgetLimit number Spend limit for the sandbox. currentSpend number Current spend for the sandbox. 
expiresAt string Time base expiration for the sandbox."},{"location":"reference-architectures/sandboxes/sandbox-modules/#variables","title":"Variables","text":"Collection of key-value pairs that are used in the Infrastructure Manager request, for use as the Terraform variable values.
"},{"location":"reference-architectures/sandboxes/sandbox-modules/#key-statuses","title":"Key Statuses","text":"The following table describes important statuses that are used during the lifecycle of a deployment.
Status Set By Handled By Meaningprovision_requested User Interface firestore-functions The user has requested that a sandbox be provisioned. provision_pending infra-manager-processor infra-manager-processor Indicates the request was received by the infra-manager-processor but the request hasn\u2019t yet been made to Infrastructure Manager. provision_inprogress infra-manager-processor infra-manager-processor Indicates that the request has been submitted to Infrastructure Manager and it is in progress with Infrastructure Manager. provision_error infra-manager-processor infra-manager-processor The deployment process has failed with an error. provision_successful infra-manager-processor infra-manager-processor The deployment process has succeeded and the sandbox is available and running. delete_requested User Interface firestore-functions The user or lifecycle process has requested that a sandbox be deleted. delete_pending infra-manager-processor infra-manager-processor Indicates the delete request was received by the infra-manager-processor but the request hasn\u2019t yet been made to Infrastructure Manager. delete_inprogress infra-manager-processor infra-manager-processor Indicates that the delete request has been submitted to Infrastructure Manager and it is in progress with Infrastructure Manager. delete_error infra-manager-processor infra-manager-processor The delete process has failed with an error. delete_successful infra-manager-processor infra-manager-processor The delete process has succeeded."}]}
\ No newline at end of file
diff --git a/reference-architectures/backstage/backstage-quickstart/README.md b/reference-architectures/backstage/backstage-quickstart/README.md
index 90417bf..35620fc 100644
--- a/reference-architectures/backstage/backstage-quickstart/README.md
+++ b/reference-architectures/backstage/backstage-quickstart/README.md
@@ -373,7 +373,8 @@ permissions.
xargs) && \
cp backend.tf.local backend.tf && \
terraform init -force-copy -lock=false -migrate-state && \
- gsutil -m rm -rf gs://${TERRAFORM_BUCKET_NAME}/* && \
+ gcloud storage rm --recursive \
+ --continue-on-error gs://${TERRAFORM_BUCKET_NAME}/* && \
terraform init && \
terraform destroy -auto-approve && \
rm -rf .terraform .terraform.lock.hcl state/