diff --git a/docs/search/search_index.json b/docs/search/search_index.json index c22113e..8f662c3 100644 --- a/docs/search/search_index.json +++ b/docs/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Platform Engineering on Google Cloud","text":"

Platform engineering is an emerging practice that organizations adopt to enable cross-functional collaboration and deliver business value faster. It treats internal groups, such as application developers, operators, security teams, and infrastructure administrators, as customers, and provides them with foundational platforms to accelerate their work. The key goals of platform engineering are self-service for everything, golden paths, improved collaboration, and abstraction of technical complexity, all of which simplify the software development lifecycle and contribute to delivering business value to consumers. Platform engineering is especially effective in cloud computing because it helps realize cloud benefits such as automation, security, productivity, and faster time-to-market.

"},{"location":"#overview","title":"Overview","text":"

Google Cloud offers decomposable, elastic, secure, scalable, and cost-efficient tools built on the guiding principles of platform engineering. With a focus on developer experience and innovation, and with practices like SRE embedded into the tools, they are a good place to begin your platform journey and to empower developers to improve their experience and increase their productivity.

This repository contains a collection of guides, examples, and design patterns spanning Google Cloud products and best-in-class OSS tools that you can use to help build an internal developer platform.

For more information, see Platform Engineering on Google Cloud.

"},{"location":"#resources","title":"Resources","text":""},{"location":"#design-patterns","title":"Design Patterns","text":""},{"location":"#research-papers-and-white-papers","title":"Research papers and white papers","text":""},{"location":"#guides-and-building-blocks","title":"Guides and Building Blocks","text":""},{"location":"#manage-developer-environments-at-scale","title":"Manage Developer Environments at Scale","text":""},{"location":"#self-service-and-automation-patterns","title":"Self-service and Automation patterns","text":""},{"location":"#run-third-party-cicd-tools-on-google-cloud-infrastructure","title":"Run third-party CI/CD tools on Google Cloud infrastructure","text":""},{"location":"#enterprise-change-management","title":"Enterprise change management","text":""},{"location":"#application-migrations-and-modernization","title":"Application migrations and modernization","text":""},{"location":"#end-to-end-examples","title":"End-to-end Examples","text":""},{"location":"#usage-disclaimer","title":"Usage Disclaimer","text":"

Copy any code you need from this repository into your own project.

Warning: Do not depend directly on the samples in this repository. Breaking changes may be made at any time without warning.

"},{"location":"#contributing-changes","title":"Contributing changes","text":"

Entirely new samples are not accepted. Bugfixes are welcome, either as pull requests or as GitHub issues.

See CONTRIBUTING.md for details on how to contribute.

"},{"location":"#licensing","title":"Licensing","text":"

Copyright 2024 Google LLC. Code in this repository is licensed under the Apache License 2.0. See LICENSE.

"},{"location":"code-of-conduct/","title":"Code of Conduct","text":""},{"location":"code-of-conduct/#our-pledge","title":"Our Pledge","text":"

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

"},{"location":"code-of-conduct/#our-standards","title":"Our Standards","text":"

Examples of behavior that contributes to creating a positive environment include:

Examples of unacceptable behavior by participants include:

"},{"location":"code-of-conduct/#our-responsibilities","title":"Our Responsibilities","text":"

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

"},{"location":"code-of-conduct/#scope","title":"Scope","text":"

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project email address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

This Code of Conduct also applies outside the project spaces when the Project Steward has a reasonable belief that an individual's behavior may have a negative impact on the project or its community.

"},{"location":"code-of-conduct/#conflict-resolution","title":"Conflict Resolution","text":"

We do not believe that all conflict is bad; healthy debate and disagreement often yield positive results. However, it is never okay to be disrespectful or to engage in behavior that violates the project\u2019s code of conduct.

If you see someone violating the code of conduct, you are encouraged to address the behavior directly with those involved. Many issues can be resolved quickly and easily, and this gives people more control over the outcome of their dispute. If you are unable to resolve the matter for any reason, or if the behavior is threatening or harassing, report it. We are dedicated to providing an environment where participants feel welcome and safe.

Reports should be directed to [PROJECT STEWARD NAME(s) AND EMAIL(s)], the Project Steward(s) for [PROJECT NAME]. It is the Project Steward\u2019s duty to receive and address reported violations of the code of conduct. They will then work with a committee consisting of representatives from the Open Source Programs Office and the Google Open Source Strategy team. If for any reason you are uncomfortable reaching out to the Project Steward, please email opensource@google.com.

We will investigate every complaint, but you may not receive a direct response. We will use our discretion in determining when and how to follow up on reported incidents, which may range from not taking action to permanent expulsion from the project and project-sponsored spaces. We will notify the accused of the report and provide them an opportunity to discuss it before any action is taken. The identity of the reporter will be omitted from the details of the report supplied to the accused. In potentially harmful situations, such as ongoing harassment or threats to anyone's safety, we may take action without notice.

"},{"location":"code-of-conduct/#attribution","title":"Attribution","text":"

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

"},{"location":"contributing/","title":"How to Contribute","text":"

We'd love to accept your patches and contributions to this project.

"},{"location":"contributing/#before-you-begin","title":"Before you begin","text":""},{"location":"contributing/#sign-our-contributor-license-agreement","title":"Sign our Contributor License Agreement","text":"

Contributions to this project must be accompanied by a Contributor License Agreement (CLA). You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project.

If you or your current employer have already signed the Google CLA (even if it was for a different project), you probably don't need to do it again.

Visit https://cla.developers.google.com/ to see your current agreements or to sign a new one.

"},{"location":"contributing/#review-our-community-guidelines","title":"Review our Community Guidelines","text":"

This project follows Google's Open Source Community Guidelines.

"},{"location":"contributing/#contribution-process","title":"Contribution process","text":""},{"location":"contributing/#code-reviews","title":"Code Reviews","text":"

All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.

"},{"location":"contributing/#development-guide","title":"Development guide","text":"

This document contains technical information about contributing to this repository.

"},{"location":"contributing/#site","title":"Site","text":"

This repository includes scripts and configuration to build a site using Material for MkDocs:

"},{"location":"contributing/#build-the-site","title":"Build the site","text":"

To build the site, run the following command from the root of the repository:

scripts/run-mkdocs.sh\n
"},{"location":"contributing/#preview-the-site","title":"Preview the site","text":"

To preview the site, run the following command from the root of the repository:

scripts/run-mkdocs.sh \"serve\"\n
"},{"location":"contributing/#linting-and-formatting","title":"Linting and formatting","text":"

We configured several linters and formatters for code and documentation in this repository. Linting and formatting checks run as part of CI workflows.

Linting and formatting checks are configured to check changed files only by default. If you change the configuration of any linter or formatter, these checks run against the entire repository.

To run linting and formatting checks locally, run the following command:

scripts/lint.sh\n

To automatically fix certain linting and formatting errors, run the following command:

LINTER_CONTAINER_FIX_MODE=\"true\" scripts/lint.sh\n
"},{"location":"reference-architectures/accelerating-migrations/","title":"Accelerate migrations through platform engineering golden paths","text":"

This document helps you adopt platform engineering by designing a process to onboard and migrate your existing applications to use your internal developer platform (IDP). It also provides guidance to help you evaluate the opportunity to design a platform engineering process, and to explore how it might function. Google Cloud provides tools, products, guidance, and professional services to help you adopt platform engineering in your environments.

This document is aimed at the following personas:

The Cloud Native Computing Foundation defines a golden path as an integrated bundle of templates and documentation for rapid project development. Designing and developing golden paths can help facilitate the onboarding and the migration of existing applications to your IDP. When you use a golden path, your development and operations teams can take advantage of benefits like the following:

Onboarding and migrating existing applications to the IDP lets you experience the benefits of adopting platform engineering gradually and incrementally in your organization, without spending effort on large-scale migration projects.

To migrate applications and onboard them to the IDP, we recommend that you design an application onboarding and migration process. This document describes a reference application onboarding and migration process. We recommend that you tailor the process to your requirements and your IDP.

If you're migrating your applications from your on-premises environment or from another cloud provider to Google Cloud, the application onboarding and migration process can help you to accelerate your migration. In that scenario, the teams that are managing the migration can refer to well-established golden paths, instead of having to design their own migration processes and project templates.

"},{"location":"reference-architectures/accelerating-migrations/#application-onboarding-and-migration-process","title":"Application onboarding and migration process","text":"

The goal of the application onboarding and migration process is to get an application onto the IDP. After you onboard and migrate the application, your teams can benefit from using the IDP. When you use an IDP, you can focus on providing business value for the application, rather than spending effort on ad hoc processes and operations.

To manage the complexity of the application onboarding and migration process, we recommend that you design the process in the following phases:

  1. Intake the application onboarding and migration request.
  2. Assess the application to onboard and migrate.
  3. Set up and, if necessary, extend the IDP to accommodate the needs of the application that you're onboarding and migrating.
  4. Onboard and migrate the application.
  5. Optimize the application.

The high-level structure of this process matches the Google Cloud migration path. In this case, you follow the migration path to onboard and migrate existing applications to the IDP.

To ensure that the application onboarding and migration is on the right track, we recommend that you design validation checkpoints for each phase of the process, rather than having a single acceptance testing task. Having validation checkpoints for each phase helps you to promptly detect issues as they arise, rather than when you are close to the end of the migration.

Even when you follow a phased process, onboarding and migrating complex applications to the IDP might require significant effort, and it might pose risks. To manage the effort and the risks, you can follow the onboarding and migration process iteratively, migrating parts of the application in each iteration. For example, if an application is composed of multiple components, you can onboard and migrate one component in each iteration of the process.

To reduce toil, we recommend that you thoroughly document the application onboarding and migration process, and make it as self-service as possible, in line with platform-engineering principles.

In this document, we assume that the onboarding and migration process involves three teams:

The following sections describe each phase of the application onboarding and migration process.

"},{"location":"reference-architectures/accelerating-migrations/#intake-the-onboarding-and-migration-request","title":"Intake the onboarding and migration request","text":"

The first phase of the application onboarding and migration process is to intake the request to onboard and migrate the application. The request process is as follows:

  1. The application onboarding and migration team files the onboarding and migration request.
  2. The IDP receives the request, and it recommends existing golden paths.
  3. If the IDP can't suggest an existing golden path, the IDP forwards the request to the team that manages the IDP for further evaluation.

We recommend that you keep this phase as light as possible by using a form or a guided, self-service process. For example, you can include migration guidance in the IDP documentation so that development teams can review it and prepare for the migration. You can also implement automated checks in your IDP to give initial feedback to development teams about potential migration blockers and issues.

To assist and offer consultation to the teams that filed or intend to file an application onboarding and migration request, we recommend that the team that manages the IDP establish communication channels to offer assistance to other teams. For example, the team that manages the IDP might set up dedicated discussion groups, chat rooms, and office hours where they can offer help and answer questions about the IDP. To help with onboarding and migration of complex applications and to facilitate communications, you can also attach a member of the team that manages the IDP to the application team while the migration is in progress.

"},{"location":"reference-architectures/accelerating-migrations/#plan-application-onboarding-and-migration","title":"Plan application onboarding and migration","text":"

As part of this phase, we recommend that the application onboarding and migration team start drafting an onboarding and migration plan, even if the team doesn't yet have all of the data points to fully define it. As the team progresses through the assessment phase, they gather the information that they need to finalize and validate the plan.

To manage the complexity of the migration plan, we recommend that you decompose it across the following sub-tasks:

Developing a comprehensive onboarding and migration plan is crucial to the success of the application onboarding and migration process. Having a plan helps you to define clear deadlines, assign responsibilities, and deal with unanticipated issues.

"},{"location":"reference-architectures/accelerating-migrations/#assess-the-application","title":"Assess the application","text":"

The second phase of the application onboarding and migration process is to follow up on the intake request by assessing the application to onboard and migrate to the IDP. The goal of this assessment phase is to produce the following artifacts:

These outputs of the assessment phase help you to plan and complete the migration. The outputs also help you to scope the enhancements that the IDP needs to support the application, and to increase the velocity of future migrations.

To manage the complexity of the assessment phase, we recommend that you decompose it into the following steps:

  1. Review the application design.
  2. Review application dependencies.
  3. Review continuous integration and continuous deployment (CI/CD) processes.
  4. Review data persistence and data management requirements.
  5. Review FinOps requirements.
  6. Review compliance requirements.
  7. Review the application team practices.
  8. Assess application refactoring and the IDP.
  9. Finalize the application onboarding and migration plan.

The preceding steps are described in the following sections. For more information about assessing applications and defining migration plans, see Migrate to Google Cloud: Assess and discover your workloads.

"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-design","title":"Review the application design","text":"

To gather a comprehensive understanding about the design of the application, we recommend that you complete a thorough assessment of the following aspects of the application:

Understanding the application architecture helps you to design and implement an effective onboarding and migration process for your application. It also helps you anticipate issues and potential problems that might arise during the migration. For example, if the architecture of your application to onboard and migrate to the IDP isn't compatible with your IDP, you might need to spend additional effort to refactor the application and enhance the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#review-application-dependencies","title":"Review application dependencies","text":"

The application to onboard and migrate to the IDP might have dependencies on systems and data that are outside the scope of the application. To understand these dependencies, we recommend that you gather information about any reliance of your application on external systems and data, such as databases, datasets, and APIs. After you gather information, you classify the dependencies in order of importance and criticality. For example, your application might need access to a database to store persistent data, and to external APIs to integrate with to provide critical functionality to users, while it might have an optional dependency on a caching system.

Understanding the dependencies of your application on external systems and data is crucial to plan for continued access to these dependencies during and after the migration.

"},{"location":"reference-architectures/accelerating-migrations/#review-cicd-processes","title":"Review CI/CD processes","text":"

After you review the application design and its dependencies, we recommend that you refine your assessment of the application's deployable artifacts by reviewing the application's CI/CD processes. These processes typically build the artifacts that deploy the application, and deploy those artifacts to your runtime environments. For example, you can refine the assessment by answering questions about the CI/CD processes, such as the following:

Understanding how the application's CI/CD processes work helps you evaluate whether your IDP can support these CI/CD processes as is, or if you need to enhance your IDP to support them. For example, if your application has a business-critical requirement on a canary deployment process and your IDP doesn't support it, you might need to factor in additional effort to enhance the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#review-data-persistence-and-data-management-requirements","title":"Review data persistence and data management requirements","text":"

By completing the previous tasks of the assessment phase, you gathered information about the statefulness of the application and about the systems that the application uses to store persistent and transient data. In this section, you refine the assessment to develop a deeper understanding of the systems that the application uses to store stateful data. We recommend that you gather information on data persistence and data management requirements of your application. For example, you refine the assessment by answering questions such as the following:

Understanding your application's data persistence and data management requirements helps you to ensure that your IDP and your production environment can effectively support the application. This understanding can also help you determine whether you need to enhance the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#review-finops-requirements","title":"Review FinOps requirements","text":"

As part of the assessment of your application, we recommend that you gather data about the FinOps requirements of your application, such as budget control and cost management, and evaluate whether your IDP supports them. For example, the application might require mechanisms to control spending, manage costs, and send alerts. The application might also require mechanisms to stop spending completely when it reaches a certain budget limit.

Understanding your application's FinOps requirements helps you to ensure that you keep your application costs under control. It also helps you to establish proper cost attribution and cost optimization practices.

"},{"location":"reference-architectures/accelerating-migrations/#review-compliance-requirements","title":"Review compliance requirements","text":"

The application to onboard and migrate to the IDP and its runtime environment might have to meet compliance requirements, especially in regulated industries. We recommend that you assess the compliance requirements of the application, and evaluate if the IDP already supports them. For example, the application might require isolation from other workloads, or it might have data locality requirements.

Understanding your application's compliance requirements helps you to scope the necessary refactoring and enhancements for your application and for the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-team-practices","title":"Review the application team practices","text":"

After you review the application, we recommend that you gather information about team practices and the methodologies for developing and operating the application. For example, the team might have already adopted DevOps principles, might already be implementing Site Reliability Engineering (SRE), or might already be familiar with platform engineering and with the IDP.

By gathering information about the team that develops and operates the application to migrate, you gain insights about the experience and the maturity of that team. You also learn whether there's a need to spend effort to train team members to proficiently use the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#assess-application-refactoring-and-the-idp","title":"Assess application refactoring and the IDP","text":"

After you gather information about the application, its development and operation teams, and its requirements, you evaluate the following:

The goal of this task is to answer the following questions:

  1. Does the application need any refactoring to onboard and migrate it to the IDP?
  2. Are there any new services or processes that the IDP should offer to migrate the application?
  3. Does the IDP meet the compliance and regulatory requirements that the application requires?

By answering these questions, you focus on evaluating potential onboarding and migration blockers. For example, you might experience the following onboarding and migration blockers:

The application development and operations team is responsible for the application refactoring tasks.

When you scope the enhancements that the IDP needs to support the application, we recommend that you frame those enhancements within the broader vision that you have for the IDP, rather than as a standalone exercise. We also recommend that you treat your IDP as a product for which you should develop a path to success. For example, if you're considering adding a new service to the IDP, we recommend that you evaluate how that service fits in the path to success for your IDP, in addition to the technical feasibility of the initiative.

By assessing the refactoring effort that's required to onboard and migrate the application, you develop a comprehensive understanding of the tasks that you need to complete to refactor the application and how you need to enhance the IDP to support the application.

"},{"location":"reference-architectures/accelerating-migrations/#finalize-the-application-onboarding-and-migration-plan","title":"Finalize the application onboarding and migration plan","text":"

To complete the assessment phase, you finalize the application onboarding and migration plan based on the data that you gathered. To finalize the plan, you do the following:

"},{"location":"reference-architectures/accelerating-migrations/#set-up-the-idp","title":"Set up the IDP","text":"

After you complete the assessment phase, you use its outputs to:

  1. Enhance the IDP by adding missing features and services.
  2. Configure the IDP to support the application.
"},{"location":"reference-architectures/accelerating-migrations/#enhance-the-idp","title":"Enhance the IDP","text":"

During the assessment phase, you scoped the enhancements that the IDP needs to support the application and determined how those enhancements fit into your plans for the IDP. In this task, you design and implement those enhancements. For example, you might need to enhance the IDP as follows:

By enhancing the IDP to support the application, you unblock the migration. You also help streamline processes for onboarding and migration projects for other applications that might need those IDP enhancements.

"},{"location":"reference-architectures/accelerating-migrations/#configure-the-idp","title":"Configure the IDP","text":"

After you enhance the IDP, if needed, you configure it to provide the resources that the application needs. For example, you configure the following IDP services for the application, or a subset of services:

By configuring the IDP, you prepare it to host the application that you want to onboard and migrate.

"},{"location":"reference-architectures/accelerating-migrations/#onboard-and-migrate-the-application","title":"Onboard and migrate the application","text":"

In this phase, you onboard and migrate the application to the IDP by completing the following tasks:

  1. Refactor the application to apply the changes that are necessary to onboard and migrate it to the IDP.
  2. Configure CI/CD workflows for the application and deploy the application in the development environment.
  3. Promote the application from the development environment to the staging environment.
  4. Perform acceptance testing.
  5. Migrate data from the source environment to the production environment.
  6. Promote the application from the staging environment to the production environment and ensure the application's operational readiness.
  7. Perform the cutover from the source environment.

By completing the preceding tasks, you onboard and migrate the application to the IDP. The following sections describe these tasks in more detail.

"},{"location":"reference-architectures/accelerating-migrations/#refactor-the-application","title":"Refactor the application","text":"

In the assessment phase, you scoped the refactoring that your application needs in order to onboard and migrate it to the IDP. In this task, you design and implement that refactoring. For example, you might need to refactor your application in the following ways to meet the IDP's requirements:

By refactoring the application, you prepare it for onboarding and migration to the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#configure-cicd-workflows","title":"Configure CI/CD workflows","text":"

After you refactor the application, you do the following:

  1. Configure CI/CD workflows to deploy the application.
  2. Optionally migrate deployable artifacts from the source environment.
  3. Deploy the application in the development environment.
"},{"location":"reference-architectures/accelerating-migrations/#configure-cicd-workflows-to-deploy-the-application","title":"Configure CI/CD workflows to deploy the application","text":"

To build deployable artifacts and deploy them to your runtime environments, we recommend that you avoid manual processes. Instead, configure CI/CD workflows by using the application delivery services that the IDP provides, and store deployable artifacts in IDP-managed artifact repositories. For example, you can configure CI/CD workflows by using the following methods:

  1. Configure Cloud Build to build container images and store them in Artifact Registry.
  2. Configure a Cloud Deploy pipeline to automate delivery of your application.
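
The two preceding steps might look like the following gcloud invocations. This is a sketch only: the project ID, region, repository, image, and pipeline names are placeholders that you replace with your own values.

```shell
# Build a container image with Cloud Build and push it to Artifact Registry.
gcloud builds submit \
  --tag "us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-app:v1.0.0" .

# Create a Cloud Deploy release to start the delivery pipeline.
gcloud deploy releases create "release-v1-0-0" \
  --delivery-pipeline "my-app-pipeline" \
  --region "us-central1" \
  --images "my-app=us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-app:v1.0.0"
```

The release records which image digests it contains, so later promotions reuse the same artifacts that the release was created with.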

When you build the CI/CD workflows for your environment, consider how many runtime environments the IDP supports. For example, the IDP might support different runtime environments that are isolated from each other such as the following:

If the IDP supports multiple runtime environments for the application, you need to configure the application's CI/CD workflows to support promoting its deployable artifacts. Plan to promote the application from development to staging, and then from staging to production.

When you promote the application from one environment to the next environment, we recommend that you avoid rebuilding the application's deployable artifacts. Rebuilding creates new artifacts, which means that you would be deploying something different than what you tested and validated.
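
For example, with Cloud Deploy you promote the existing release rather than creating a new one. This is a sketch only: the release, pipeline, and region names are placeholders.

```shell
# Promote the existing release (and its already-built artifacts) to the next
# target in the delivery pipeline, instead of rebuilding the application.
gcloud deploy releases promote \
  --release "release-v1-0-0" \
  --delivery-pipeline "my-app-pipeline" \
  --region "us-central1"
```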

"},{"location":"reference-architectures/accelerating-migrations/#migrate-deployable-artifacts-from-the-source-environment","title":"Migrate deployable artifacts from the source environment","text":"

If you need to support rolling back to previous versions of the application, you can migrate previous versions of the deployable artifacts that you built for the application from the source environment to an IDP-managed artifact repository. For example, if your application is containerized, you can migrate the container images that you built to deploy the application to Artifact Registry.
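
One way to migrate existing container images is to copy them registry-to-registry with a tool such as crane (from go-containerregistry), which preserves image digests and needs no local container daemon. The source and destination paths below are hypothetical, and the command is echoed rather than executed:

```shell
# Hypothetical source (Amazon ECR) and destination (Artifact Registry) paths.
SRC="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:v0.9.0"
DST="us-central1-docker.pkg.dev/example-project/app-images/my-app:v0.9.0"
# crane cp copies the image between registries, digest-for-digest.
echo crane cp "$SRC" "$DST"
```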

"},{"location":"reference-architectures/accelerating-migrations/#deploy-the-application-in-the-development-environment","title":"Deploy the application in the development environment","text":"

After configuring CI/CD workflows to build deployable artifacts for the application and to promote them from one environment to another, you deploy the application in the development environment using the CI/CD workflows that you configured.

By using CI/CD workflows to build deployable artifacts and deploy the application, you avoid manual processes that are less repeatable and more prone to errors. You also validate that the CI/CD workflows work as expected.

"},{"location":"reference-architectures/accelerating-migrations/#promote-from-development-to-staging","title":"Promote from development to staging","text":"

To promote your application from the development environment to the staging environment, you do the following:

  1. Test the application and verify that it works as expected.
  2. Fix any unanticipated issues.
  3. Promote the application from the development environment to the staging environment.

By promoting the application from the development environment to the staging environment, you accomplish the following:

"},{"location":"reference-architectures/accelerating-migrations/#perform-acceptance-testing","title":"Perform acceptance testing","text":"

After you promote the application to your staging environment, you perform extensive acceptance testing for both functional and non-functional requirements. When you perform acceptance testing, we recommend that you validate that the user journeys and the business processes that the application implements are working properly in situations that resemble real-world usage scenarios. For example, when you perform acceptance testing, you can do the following:

Acceptance testing helps you ensure that the application works as expected in an environment that resembles the production environment, and helps you identify unanticipated issues.

"},{"location":"reference-architectures/accelerating-migrations/#migrate-data","title":"Migrate data","text":"

After you complete acceptance testing for the application, you migrate data from the source environment to IDP-managed services such as the following:

To migrate data from your source environment to IDP-managed services, you can choose approaches like the following, depending on your requirements:

Each of the preceding approaches focuses on solving specific issues, and there's no approach that's inherently better than the others. For more information about migrating data to Google Cloud and choosing the best data migration approach for your application, see Migrate to Google Cloud: Transfer your large datasets.

If your data is stored in services managed by other cloud providers, see the following resources:

Migrating data from one environment to another is a complex task. If you think that the data migration is too complex to handle as part of the application onboarding and migration process, you might consider migrating data as part of a dedicated migration project.

"},{"location":"reference-architectures/accelerating-migrations/#promote-from-staging-to-production","title":"Promote from staging to production","text":"

After you complete data migration and acceptance testing, you promote the application to the production environment. To complete this task, you do the following:

  1. Promote the application from the staging environment to the production environment. The process is similar to when you promoted the application from the development environment to the staging environment: you use the IDP-managed CI/CD workflows that you configured for the application to promote it from the staging environment to the production environment.
  2. Ensure the application's operational readiness. For example, to help you avoid performance issues if the application requires a cache, ensure that the cache is properly initialized.
  3. Fix any unanticipated issues.
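
The cache-readiness check mentioned in step 2 can be sketched as a small warm-up loop. Both helpers below are hypothetical stubs standing in for a real cache loader and readiness probe:

```shell
# Hypothetical stubs: warm_cache loads entries, is_cache_warm checks a threshold.
warmed_keys=0
warm_cache() { warmed_keys=$((warmed_keys + 100)); }
is_cache_warm() { [ "$warmed_keys" -ge 300 ]; }

# Keep loading until the cache reaches the readiness threshold.
until is_cache_warm; do
  warm_cache
done
echo "cache initialized with $warmed_keys entries"
```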

When you check the application's operational readiness before you promote it from the staging environment to the production environment, you ensure that the application is ready for the production environment.

"},{"location":"reference-architectures/accelerating-migrations/#perform-the-cutover","title":"Perform the cutover","text":"

After you promote the application to the production environment and ensure that it works as expected, you configure the production environment to gradually route requests for the application to the newly promoted application release. For example, you can implement a canary deployment strategy that uses Cloud Deploy.

After you validate that the application continues to work as expected while the number of requests to the newly promoted application increases, you do the following:

  1. Configure your production environment to route all of the requests to your newly promoted application.
  2. Retire the application in the source environment.

Before you retire the application in the source environment, we recommend that you prepare backups and a rollback plan. Doing so will help you handle unanticipated issues that might force you to go back to using the source environment.

"},{"location":"reference-architectures/accelerating-migrations/#optimize-the-application","title":"Optimize the application","text":"

Optimization is the last phase of the onboarding and migration process. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. For each iteration, you do the following:

  1. Assess your current environment, teams, and optimization loop.
  2. Establish your optimization requirements and goals.
  3. Optimize your environment and your teams.
  4. Tune the optimization loop.

You repeat the preceding sequence until you achieve your optimization goals.

For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization.

The following sections integrate the considerations in Migrate to Google Cloud: Optimize your environment.

"},{"location":"reference-architectures/accelerating-migrations/#establish-your-optimization-requirements","title":"Establish your optimization requirements","text":"

Optimization requirements help you to narrow the scope of the current optimization iteration. To establish your optimization requirements for the application, start by considering the following aspects:

For each aspect, we recommend that you establish your optimization requirements for the application. Then, you set measurable optimization goals to meet those requirements. For more information about optimization requirements and goals, see Establish your optimization requirements and goals.

After you meet the optimization requirements for the application, you have completed the onboarding and migration process for the application.

"},{"location":"reference-architectures/accelerating-migrations/#optimize-the-onboarding-and-migration-process-and-the-idp","title":"Optimize the onboarding and migration process and the IDP","text":"

After you onboard and migrate the application, you use the data that you gathered about the process and about the IDP to refine and optimize the process. Similarly to the optimization phase for your application, you complete the tasks that are described in the optimization phase, but with a focus on the onboarding and migration process and on the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#establish-your-optimization-requirements-for-the-idp","title":"Establish your optimization requirements for the IDP","text":"

To narrow down the scope to optimize the onboarding and migration process, and the IDP, you establish optimization requirements according to data you gather while running through the process. For example, during the onboarding and migration of an application, you might face unanticipated issues that involve the process and the IDP, such as:

To address the issues that arise while you're onboarding and migrating an application, you establish optimization requirements. For example, you might establish the following optimization requirements to address the example issues described above:

After establishing optimization requirements, you set measurable optimization goals to meet those requirements. For more information about optimization requirements and goals, see Establish your optimization requirements and goals.

"},{"location":"reference-architectures/accelerating-migrations/#application-onboarding-and-migration-example","title":"Application onboarding and migration example","text":"

In this section, you explore what the onboarding and migration process looks like for an example application. The example that we describe in this section doesn't represent a real production application.

To reduce the scope of the example, we focus the example on the following environments:

This document focuses on the onboarding and migration process. For more information about migrating from Amazon EKS to GKE, see Migrate from AWS to Google Cloud: Migrate from Amazon EKS to GKE.

To onboard and migrate the application on the IDP, you follow the onboarding and migration process.

"},{"location":"reference-architectures/accelerating-migrations/#intake-the-onboarding-and-migration-request-example","title":"Intake the onboarding and migration request (example)","text":"

In this example, the application onboarding and migration team files a request to onboard and migrate the application on the IDP. To fully present the onboarding and migration process, we assume that the IDP cannot find an existing golden path to suggest for onboarding and migrating the application, so it forwards the request to the team that manages the IDP for further evaluation.

"},{"location":"reference-architectures/accelerating-migrations/#plan-application-onboarding-and-migration-example","title":"Plan application onboarding and migration (example)","text":"

To define timelines and milestones to onboard and migrate the application on the IDP, the application onboarding and migration team prepares a countdown plan:

| Phase | Task | Countdown [days] | Status |
| --- | --- | --- | --- |
| Assess the application | Review the application design | -27 | Not started |
| | Review application dependencies | -23 | Not started |
| | Review CI/CD processes | -21 | Not started |
| | Review data persistence and data management requirements | -21 | Not started |
| | Review FinOps requirements | -20 | Not started |
| | Review compliance requirements | -20 | Not started |
| | Review the application's team practices | -19 | Not started |
| | Assess application refactoring and the IDP | -19 | Not started |
| | Finalize the application onboarding and migration plan | -18 | Not started |
| Set up the IDP | Enhance the IDP | N/A | Not necessary |
| | Configure the IDP | -17 | Not started |
| Onboard and migrate the application | Refactor the application | -15 | Not started |
| | Configure CI/CD workflows | -9 | Not started |
| | Promote from development to staging | -6 | Not started |
| | Perform acceptance testing | -5 | Not started |
| | Migrate data | -3 | Not started |
| | Promote from staging to production | -1 | Not started |
| | Perform the cutover | 0 | Not started |
| Optimize the application | Assess your current environment, teams, and optimization loop | 1 | Not started |
| | Establish your optimization requirements and goals | 1 | Not started |
| | Optimize your environment and your teams | 3 | Not started |
| | Tune the optimization loop | 5 | Not started |

To clearly outline responsibility assignments, the application onboarding and migration team defines the following RACI matrix for each phase and task of the process:

| Phase | Task | Application onboarding and migration team | Application development and operations team | IDP team |
| --- | --- | --- | --- | --- |
| Assess the application | Review the application design | Responsible | Accountable | Informed |
| | Review application dependencies | Responsible | Accountable | Informed |
| | Review CI/CD processes | Responsible | Accountable | Informed |
| | Review data persistence and data management requirements | Responsible | Accountable | Informed |
| | Review FinOps requirements | Responsible | Accountable | Informed |
| | Review compliance requirements | Responsible | Accountable | Informed |
| | Review the application's team practices | Responsible | Accountable | Informed |
| | Assess application refactoring and the IDP | Responsible | Accountable | Consulted |
| Plan application onboarding and migration | | Responsible | Accountable | Consulted |
| Set up the IDP | Enhance the IDP | Accountable | Consulted | Responsible |
| | Configure the IDP | Responsible, Accountable | Consulted | Consulted |
| Onboard and migrate the application | Refactor the application | Accountable | Responsible | Consulted |
| | Configure CI/CD workflows | Responsible, Accountable | Consulted | Consulted |
| | Promote from development to staging | Responsible, Accountable | Consulted | Informed |
| | Perform acceptance testing | Responsible, Accountable | Consulted | Informed |
| | Migrate data | Responsible, Accountable | Consulted | Consulted |
| | Promote from staging to production | Responsible, Accountable | Consulted | Informed |
| | Perform the cutover | Responsible, Accountable | Consulted | Informed |
| Optimize the application | Assess your current environment, teams, and optimization loop | Informed | Responsible, Accountable | Informed |
| | Establish your optimization requirements and goals | Informed | Responsible, Accountable | Informed |
| | Optimize your environment and your teams | Informed | Responsible, Accountable | Informed |
| | Tune the optimization loop | Informed | Responsible, Accountable | Informed |
"},{"location":"reference-architectures/accelerating-migrations/#assess-the-application-example","title":"Assess the application (example)","text":"

In the assessment phase, the application onboarding and migration team assesses the application by completing the assessment phase tasks.

"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-design-example","title":"Review the application design (example)","text":"

The application onboarding and migration team reviews the application design, and gathers the following information:

  1. Application source code. The application source code is available on the company source code management and hosting solution.
  2. Deployable artifacts. The application is fully containerized using a single Open Container Initiative (OCI) container image. The container image uses Debian as a base image.
  3. Configuration injection. The application supports injecting configuration using environment variables and configuration files. Environment variables take precedence over configuration files. The application reads runtime- and environment-specific configuration from a Kubernetes ConfigMap.
  4. Security requirements. Container images need to be scanned for vulnerabilities. Also, container images need to be verified for authenticity and bills of materials. The application requires periodic secret rotation. The application doesn't allow direct access to its production runtime environment.
  5. Identity and access management. The application requires a dedicated service account with the minimum set of permissions to work correctly.
  6. Observability requirements. The application logs messages to the stdout and stderr streams, and exposes metrics and traces in OpenTelemetry format. The application requires SLO monitoring for uptime and request error rates.
  7. Availability and reliability requirements. The application is not business critical, and can afford two hours of downtime at maximum. The application is designed to shed load under degraded conditions, and is capable of automated recovery after a loss of connectivity.
  8. Network and connectivity requirements. The application needs:

    The application doesn't require any specific service mesh configuration.

  9. Statefulness. The application stores persistent data on Amazon Relational Database Service (Amazon RDS) for PostgreSQL and on Amazon Simple Storage Service (Amazon S3).

  10. Runtime environment requirements. The application doesn't depend on any preview Kubernetes features, and doesn't need dependencies outside what is packaged in its container image.
  11. Development tools and environments. The application doesn't have any dependency on specific IDEs or development hardware.
  12. Multi-tenancy requirements. The application doesn't have any multi-tenancy requirements.
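
The automated-recovery behavior noted in the availability and reliability requirements can be illustrated with a minimal retry-with-exponential-backoff sketch. Here `check_connectivity` is a hypothetical stub standing in for the application's real health probe:

```shell
# Stub probe: for this sketch, assume connectivity returns after three retries.
attempt=0
check_connectivity() { [ "$attempt" -ge 3 ]; }

backoff=1
until check_connectivity; do
  attempt=$((attempt + 1))
  # A real implementation would 'sleep "$backoff"' between probes.
  backoff=$((backoff * 2))
done
echo "recovered after $attempt retries (final backoff ${backoff}s)"
```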
"},{"location":"reference-architectures/accelerating-migrations/#review-application-dependencies-example","title":"Review application dependencies (example)","text":"

The application onboarding and migration team reviews dependencies on systems that are outside the scope of the application, and gathers the following information:

"},{"location":"reference-architectures/accelerating-migrations/#review-cicd-processes-example","title":"Review CI/CD processes (example)","text":"

The application onboarding and migration team reviews the application's CI/CD processes, and gathers the following information:

"},{"location":"reference-architectures/accelerating-migrations/#review-data-persistence-and-data-management-requirements-example","title":"Review data persistence and data management requirements (example)","text":"

The application onboarding and migration team reviews data persistence and data management requirements, and gathers the following information:

The application onboarding and migration team is also tasked to migrate data from Amazon RDS for PostgreSQL and Amazon S3 to database and object storage services offered by the IDP. In this example, the IDP offers Cloud SQL for PostgreSQL as a database service, and Cloud Storage as an object storage service.

As part of this application dependency review, the application onboarding and migration team assesses the application's Amazon RDS database and the Amazon S3 buckets. For simplicity, we omit details about those assessments from this example. For more information about assessing Amazon RDS and Amazon S3, see the Assess the source environment sections in the following documents:

"},{"location":"reference-architectures/accelerating-migrations/#review-finops-requirements-example","title":"Review FinOps requirements (example)","text":"

The application onboarding and migration team reviews FinOps requirements, and gathers the following information:

"},{"location":"reference-architectures/accelerating-migrations/#review-compliance-requirements-example","title":"Review compliance requirements (example)","text":"

The application onboarding and migration team reviews compliance requirements, and gathers the following information:

"},{"location":"reference-architectures/accelerating-migrations/#review-the-applications-team-practices","title":"Review the application's team practices","text":"

The application onboarding and migration team reviews development and operational practices that the application development and operations team has in place, and gathers the following information:

The application onboarding and migration team suggests the following:

"},{"location":"reference-architectures/accelerating-migrations/#assess-application-refactoring-and-the-idp-example","title":"Assess application refactoring and the IDP (example)","text":"

After reviewing the application and its related CI/CD processes, the application onboarding and migration team assesses the refactoring that the application needs so that it can be onboarded and migrated on the IDP, and scopes the following refactoring tasks:

The application onboarding and migration team evaluates the IDP against the application's requirements, and concludes that:

"},{"location":"reference-architectures/accelerating-migrations/#finalize-the-application-onboarding-and-migration-plan-example","title":"Finalize the application onboarding and migration plan (example)","text":"

After completing the application review, the application onboarding and migration team refines the onboarding and migration plan, and validates it in collaboration with technical and non-technical stakeholders.

"},{"location":"reference-architectures/accelerating-migrations/#set-up-the-idp-example","title":"Set up the IDP (example)","text":"

After you assess the application and plan the onboarding and migration process, you set up the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#enhance-the-idp-example","title":"Enhance the IDP (example)","text":"

The IDP team doesn't need to enhance the IDP to onboard and migrate the application because:

"},{"location":"reference-architectures/accelerating-migrations/#configure-the-idp-example","title":"Configure the IDP (example)","text":"

The application onboarding and migration team configures the runtime environments for the application using the IDP: a development environment, a staging environment, and a production environment. For each environment, the application onboarding and migration team completes the following tasks:

  1. Configures foundational services:

    1. Creates a new Google Cloud project.
    2. Configures IAM roles and service accounts.
    3. Configures a VPC and a subnet.
    4. Creates DNS records in the DNS zone.
  2. Provisions and configures a GKE cluster for the application.

  3. Provisions and configures a Cloud SQL for PostgreSQL instance.
  4. Provisions and configures two Cloud Storage buckets.
  5. Provisions and configures an Artifact Registry repository for container images.
  6. Instruments Cloud Operations Suite to observe the application.
  7. Configures Cloud Billing budget and budget alerts for the application.
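
A hedged sketch of that per-environment setup might look like the following sequence. Every name, IP range, and machine tier below is an assumption, and the commands are echoed rather than executed so they can be reviewed before running:

```shell
ENV="dev"                          # repeat for staging and production
PROJECT_ID="example-app-${ENV}"    # hypothetical project naming scheme
REGION="us-central1"

echo gcloud projects create "$PROJECT_ID"
echo gcloud compute networks create app-vpc --project="$PROJECT_ID" --subnet-mode=custom
echo gcloud compute networks subnets create app-subnet --project="$PROJECT_ID" \
  --network=app-vpc --region="$REGION" --range=10.0.0.0/24
echo gcloud container clusters create-auto app-cluster --project="$PROJECT_ID" --region="$REGION"
echo gcloud sql instances create app-pg --project="$PROJECT_ID" \
  --database-version=POSTGRES_15 --region="$REGION" --tier=db-custom-2-8192
echo gcloud storage buckets create "gs://${PROJECT_ID}-data" --project="$PROJECT_ID" --location="$REGION"
echo gcloud artifacts repositories create app-images --project="$PROJECT_ID" \
  --repository-format=docker --location="$REGION"
```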
"},{"location":"reference-architectures/accelerating-migrations/#onboard-and-migrate-the-application-example","title":"Onboard and migrate the application (example)","text":"

To onboard and migrate the application, the application development and operations team refactors the application and then the application onboarding and migration team proceeds with the onboarding and migration process.

"},{"location":"reference-architectures/accelerating-migrations/#refactor-the-application-example","title":"Refactor the application (example)","text":"

The application development and operations team refactors the application as follows:

  1. Refactors the application to read from and write objects to Cloud Storage, instead of Amazon S3.
  2. Updates the application configuration to use the Cloud SQL for PostgreSQL instance instead of the Amazon RDS for PostgreSQL instance.
  3. Exposes the metrics that the IDP needs to observe the application.
  4. Updates application dependencies that are affected by known vulnerabilities.
"},{"location":"reference-architectures/accelerating-migrations/#configure-cicd-workflows-example","title":"Configure CI/CD workflows (example)","text":"

To configure CI/CD workflows, the application onboarding and migration team does the following:

  1. Refactors the application CI workflow to push container images to the Artifact Registry repository, in addition to Amazon ECR.
  2. Implements a Cloud Deploy pipeline to automatically deploy the application, and promote it across runtime environments.
  3. Deploys the application in the development environment using the Cloud Deploy pipeline.
"},{"location":"reference-architectures/accelerating-migrations/#promote-the-application-from-development-to-staging","title":"Promote the application from development to staging","text":"

After deploying the application in the development environment, the application onboarding and migration team:

  1. Tests the application, and verifies that it works as expected.
  2. Promotes the application from the development environment to the staging environment.
"},{"location":"reference-architectures/accelerating-migrations/#perform-acceptance-testing-example","title":"Perform acceptance testing (example)","text":"

After promoting the application from the development environment to the staging environment, the application onboarding and migration team performs acceptance testing.

To perform acceptance testing to validate the application's real-world user journeys and business processes, the application onboarding and migration team consults with the application development and operations team.

The application onboarding and migration team performs acceptance testing as follows:

  1. Ensures that the application works as expected when dealing with amounts of data and traffic that are similar to production ones.
  2. Validates that the application works as designed under degraded conditions, and that it recovers once the issues are resolved. The application onboarding and migration team tests the following scenarios:

  3. Verifies that observability and alerting for the application are correctly configured.

"},{"location":"reference-architectures/accelerating-migrations/#migrate-data-example","title":"Migrate data (example)","text":"

After completing acceptance testing for the application, the application onboarding and migration team migrates data from the source environment to the Google Cloud environment as follows:

  1. Migrates data from Amazon RDS for PostgreSQL to Cloud SQL for PostgreSQL.
  2. Migrates data from Amazon S3 to Cloud Storage.

For simplicity, this document doesn't describe the details of migrating from Amazon RDS and Amazon S3 to Google Cloud. For more information about migrating from Amazon RDS and Amazon S3 to Google Cloud, see:

"},{"location":"reference-architectures/accelerating-migrations/#promote-from-staging-to-production-example","title":"Promote from staging to production (example)","text":"

After performing acceptance testing and after migrating data to the Google Cloud environment, the application onboarding and migration team:

  1. Promotes the application from the staging environment to the production environment using the Cloud Deploy pipeline.
  2. Ensures the application's operational readiness by verifying that the application:

  3. Correctly connects to the Cloud SQL for PostgreSQL instance

"},{"location":"reference-architectures/accelerating-migrations/#perform-the-cutover-example","title":"Perform the cutover (example)","text":"

After promoting the application to the production environment, and ensuring that the application is operationally ready, the application onboarding and migration team:

  1. Configures the production environment to gradually route requests to the application in 5% increments, until all the requests are routed to the Google Cloud environment.
  2. Refactors the CI workflow to push container images to Artifact Registry only.
  3. Takes backups to ensure that a rollback is possible, in case of unanticipated issues.
  4. Retires the application in the source environment.
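
The gradual 5% ramp in step 1 can be sketched as a simple schedule. The `update_traffic_split` call below is hypothetical and shown only as a comment, because the actual mechanism depends on your load balancer or deployment tool:

```shell
schedule=""
for pct in $(seq 5 5 100); do
  # update_traffic_split --new="$pct" --source="$((100 - pct))"   # hypothetical
  schedule="${schedule}${pct}% "
done
echo "cutover schedule: ${schedule}"
```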
"},{"location":"reference-architectures/accelerating-migrations/#optimize-the-application-example","title":"Optimize the application (example)","text":"

After performing the cutover, the application development and operations team takes over the maintenance of the application, and establishes the following optimization requirements:

After establishing optimization requirements, the application development and operations team completes the rest of the tasks of the optimization phase.

"},{"location":"reference-architectures/accelerating-migrations/#whats-next","title":"What's next","text":""},{"location":"reference-architectures/accelerating-migrations/#contributors","title":"Contributors","text":"

Authors:

Other contributors:

"},{"location":"reference-architectures/automated-password-rotation/","title":"Overview","text":"

Secrets rotation is a broadly accepted best practice across the information technology industry. However, it is often a cumbersome and disruptive process. In this guide, you use Google Cloud tools to automate the process of rotating passwords for a Cloud SQL instance. This method can easily be extended to other tools and types of secrets.

"},{"location":"reference-architectures/automated-password-rotation/#storing-passwords-in-google-cloud","title":"Storing passwords in Google Cloud","text":"

In Google Cloud, secrets, including passwords, can be stored using many different tools, including common open source tools such as Vault. In this guide, however, you use Secret Manager, Google Cloud's fully managed product for securely storing secrets. Regardless of the tool you use, stored passwords should be further secured. When using Secret Manager, the following are some of the ways you can further secure your secrets:

  1. Limiting access: Secrets should be readable and writable only through service accounts via IAM roles. Follow the principle of least privilege when granting roles to service accounts.

  2. Encryption: Secrets should be encrypted. Secret Manager encrypts secrets at rest using AES-256 by default, but you can instead use your own customer-managed encryption keys (CMEK). For details, see Enable customer-managed encryption keys for Secret Manager.

  3. Password rotation: Passwords stored in Secret Manager should be rotated on a regular basis to reduce the risk of a security incident.
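
As a hedged illustration of the access-limiting and encryption recommendations above: the service account, KMS key, and secret names below are hypothetical, and the commands are echoed rather than executed so they can be inspected first.

```shell
SA="app-sa@example-project.iam.gserviceaccount.com"   # hypothetical service account
KMS_KEY="projects/example-project/locations/us-central1/keyRings/app-ring/cryptoKeys/secrets-key"

# Create a CMEK-protected secret with user-managed replication.
echo gcloud secrets create db-password --replication-policy=user-managed \
  --locations=us-central1 --kms-key-name="$KMS_KEY"

# Grant only read access to the service account (least privilege).
echo gcloud secrets add-iam-policy-binding db-password \
  --member="serviceAccount:${SA}" --role=roles/secretmanager.secretAccessor
```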

"},{"location":"reference-architectures/automated-password-rotation/#why-password-rotation","title":"Why password rotation","text":"

Security best practices require regularly rotating the passwords in your stack. Changing a password mitigates the risk in the event that it is compromised.

"},{"location":"reference-architectures/automated-password-rotation/#how-to-rotate-passwords","title":"How to rotate passwords","text":"

Manually rotating passwords is an antipattern: it exposes the password to the human rotating it and can result in security and system incidents. Manual rotation processes also introduce the risk that the rotation isn't actually performed due to human error, such as a forgotten step or a typo.

This necessitates a workflow that automates password rotation. The password could belong to an application, a database, a third-party service, or a SaaS vendor.

"},{"location":"reference-architectures/automated-password-rotation/#automatic-password-rotation","title":"Automatic password rotation","text":"

Typically, rotating a password requires these steps:

  1. Generate a new password.
  2. Change the password on the underlying system (such as applications, databases, SaaS).
  3. Update the stored secret, and restart or notify consumers so that each application sources the latest passwords.
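
A minimal sketch of the generation-and-store portion of this rotation flow, assuming a hypothetical secret named `db-password`: the password is generated locally with openssl, and the Secret Manager call is left commented because it requires an authenticated project.

```shell
# Generate a 24-byte random password, base64-encoded (32 characters).
NEW_PASSWORD="$(openssl rand -base64 24 | tr -d '\n')"

# Store it as a new secret version (hypothetical secret name; uncomment to run):
# printf '%s' "$NEW_PASSWORD" | gcloud secrets versions add db-password --data-file=-

echo "generated a ${#NEW_PASSWORD}-character password"
```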

The following architecture represents a general design for a system that can rotate passwords for any underlying software or system.

"},{"location":"reference-architectures/automated-password-rotation/#workflow","title":"Workflow","text":""},{"location":"reference-architectures/automated-password-rotation/#example-deployment-for-automatic-password-rotation-in-cloudsql","title":"Example deployment for automatic password rotation in CloudSQL","text":"

The following architecture demonstrates a way to automatically rotate a Cloud SQL password.

"},{"location":"reference-architectures/automated-password-rotation/#workflow-of-the-example-deployment","title":"Workflow of the example deployment","text":"

Note: The architecture doesn't show the flow that restarts the application after the password rotation, as shown in the generic architecture, but it can be added easily with minimal changes to the Terraform code.

"},{"location":"reference-architectures/automated-password-rotation/#deploy-the-architecture","title":"Deploy the architecture","text":"

The code to build the architecture is provided in this repository. Follow these instructions to create the architecture and use it:

  1. Open Cloud Shell on Google Cloud Console and log in with your credentials.

  2. If you want to use an existing project, get the roles/owner role on the project and set the environment variable in Cloud Shell as shown below. Then, move to step 4.

     #set shell environment variable\n export PROJECT_ID=<PROJECT_ID>\n

    Replace <PROJECT_ID> with the ID of the existing project.

  3. If you want to create a new Google Cloud project, run the following commands in Cloud Shell.

     #set shell environment variable\n export PROJECT_ID=<PROJECT_ID>\n #create project\n gcloud projects create ${PROJECT_ID} --folder=<FOLDER_ID>\n #associate the project with billing account\n gcloud billing projects link ${PROJECT_ID} --billing-account=<BILLING_ACCOUNT_ID>\n

    Replace <PROJECT_ID> with the ID of the new project, <FOLDER_ID> with the ID of the folder in which the project should be created, and <BILLING_ACCOUNT_ID> with the billing account ID that the project should be associated with.

  4. Set the project ID in Cloud Shell and enable APIs in the project:

     gcloud config set project ${PROJECT_ID}\n gcloud services enable \\\n  cloudresourcemanager.googleapis.com \\\n  serviceusage.googleapis.com \\\n  --project ${PROJECT_ID}\n
  5. Download the Git repository containing the code to build the example architecture:

     cd ~\n git clone https://github.com/GoogleCloudPlatform/platform-engineering\n cd platform-engineering/reference-architectures/automated-password-rotation/terraform\n\n terraform init\n terraform plan -var \"project_id=$PROJECT_ID\"\n terraform apply -var \"project_id=$PROJECT_ID\" --auto-approve\n

    Note: It takes around 30 minutes for the entire architecture to be deployed.

"},{"location":"reference-architectures/automated-password-rotation/#review-the-deployed-architecture","title":"Review the deployed architecture","text":"

Once the Terraform apply has successfully finished, the example architecture will be deployed in your Google Cloud project. Before exercising the rotation process, review and verify the deployment in the Google Cloud Console.

"},{"location":"reference-architectures/automated-password-rotation/#review-cloud-sql-database","title":"Review Cloud SQL database","text":"
  1. In the Cloud Console, using the navigation menu, select Databases > SQL. Confirm that cloudsql-for-pg is present in the instance list.
  2. Click on cloudsql-for-pg to open the instance details page.
  3. In the left hand menu select Users. Confirm you see a user with the name user1.
  4. In the left hand menu select Databases. Confirm you see a database named test.
  5. In the left hand menu select Overview.
  6. In the Connect to this instance section, note that only a private IP address is present and no public IP address. This restricts access to the instance over the public network.
"},{"location":"reference-architectures/automated-password-rotation/#review-secret-manager","title":"Review Secret Manager","text":"
  1. In the Cloud Console, using the navigation menu, select Security > Secret Manager. Confirm that cloudsql-pswd is present in the list.
  2. Click on cloudsql-pswd.
  3. Click the three dots icon and select View secret value to view the password for the Cloud SQL database.
  4. Copy the secret value; you will use it in the next section to confirm access to the Cloud SQL instance.
"},{"location":"reference-architectures/automated-password-rotation/#review-cloud-scheduler-job","title":"Review Cloud Scheduler job","text":"
  1. In the Cloud Console, using the navigation menu, select Integration Services > Cloud Scheduler. Confirm that password-rotator-job is present in the Scheduler Jobs list.
  2. Click on password-rotator-job and confirm it is configured to run on the 1st of every month.
  3. Click Continue to see the execution configuration. Confirm the following settings:

  4. Click Cancel, to exit the Cloud Scheduler job details.

"},{"location":"reference-architectures/automated-password-rotation/#review-pubsub-topic-configuration","title":"Review Pub/Sub topic configuration","text":"
  1. In the Cloud Console, using the navigation menu, select Analytics > Pub/Sub.
  2. In the left hand menu select Topics. Confirm that pswd-rotation-topic is present in the topics list.
  3. Click on pswd-rotation-topic.
  4. In the Subscriptions tab, click on the Subscription ID for the rotator Cloud Function.
  5. Click on the Details tab. Confirm that the Audience tag shows the rotator Cloud Function.
  6. In the left hand menu select Topics.
  7. Click on pswd-rotation-topic.
  8. Click on the Details tab.
  9. Click on the schema in the Schema name field.
  10. In the Details, confirm that the schema contains these keys: secretid, instance_name, db_user, db_name and db_location. These keys identify which database and user's password is to be rotated.
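A rotation request conforming to this schema might look like the following. The field values are illustrative examples matching this deployment (the region is an assumption), and Pub/Sub base64-encodes message data in transit:

```python
import base64
import json

# Keys match the Pub/Sub topic schema reviewed above; values mirror the
# example deployment's resources (db_location is an assumed region).
rotation_request = {
    "secretid": "cloudsql-pswd",
    "instance_name": "cloudsql-for-pg",
    "db_user": "user1",
    "db_name": "test",
    "db_location": "us-central1",
}

# Pub/Sub delivers message data base64-encoded; the rotator function
# decodes it to learn which database and user to rotate.
encoded = base64.b64encode(json.dumps(rotation_request).encode("utf-8"))
decoded = json.loads(base64.b64decode(encoded))
print(decoded["db_user"])  # user1
```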
"},{"location":"reference-architectures/automated-password-rotation/#review-cloud-run-function","title":"Review Cloud Run Function","text":"
  1. In the Cloud Console, using the navigation menu, select Serverless > Cloud Run Functions. Confirm that pswd_rotator_function is present in the list.
  2. Click on pswd_rotator_function.
  3. Click on the Trigger tab. Confirm that the field Receive events from shows the Pub/Sub topic pswd-rotation-topic. This indicates that the function runs when a message arrives on that topic.
  4. Click on the Details tab. Confirm that under Network Settings the VPC connector is set to connector-for-sql. This allows the function to connect to the Cloud SQL instance over its private IP.
  5. Click on the Source tab to see the Python code that the function executes.

Note: For the purpose of this tutorial, the secret is accessible to human users and is not additionally restricted. For production hardening, see the Secret Manager best practices.

"},{"location":"reference-architectures/automated-password-rotation/#verify-that-you-are-able-to-connect-to-the-cloud-sql-instance","title":"Verify that you are able to connect to the Cloud SQL instance","text":"
  1. In the Cloud Console, using the navigation menu, select Databases > SQL.
  2. Click on cloudsql-for-pg.
  3. In the left hand menu select Cloud SQL Studio.
  4. In Database dropdown, choose test.
  5. In User dropdown, choose user1.
  6. In Password textbox paste the password copied from the cloudsql-pswd secret.
  7. Click Authenticate. Confirm you were able to log in to the database.
"},{"location":"reference-architectures/automated-password-rotation/#rotate-the-cloud-sql-password","title":"Rotate the Cloud SQL password","text":"

Typically, the Cloud Scheduler job will automatically run on the 1st day of every month, triggering password rotation. However, for this tutorial you will run the Cloud Scheduler job manually, which causes the Cloud Run Function to generate a new password, update it in Cloud SQL, and store it in Secret Manager.

  1. In the Cloud Console, using the navigation menu, select Integration Services > Cloud Scheduler.
  2. For the scheduler job password-rotator-job, click the three dots icon and select Force run.
  3. Verify that the Status of last execution shows Success.
  4. In the Cloud Console, using the navigation menu, select Serverless > Cloud Run Functions.
  5. Click function named pswd_rotator_function.
  6. Select the Logs tab.
  7. Review the logs and verify the function has run and completed without errors. Successful completion will be noted with log entries containing Secret cloudsql-pswd changed in Secret Manager!, DB password changed successfully! and DB password verified successfully!.
"},{"location":"reference-architectures/automated-password-rotation/#test-the-new-password","title":"Test the new password","text":"
  1. In the Cloud Console, using the navigation menu, select Security > Secret Manager. Confirm that cloudsql-pswd is present in the list.
  2. Click on cloudsql-pswd. Note that you should now see a new version, version 2, of the secret.
  3. Click the three dots icon and select View secret value to view the password for the Cloud SQL database.
  4. Copy the secret value.
  5. In the Cloud Console, using the navigation menu, select Databases > SQL.
  6. Click on cloudsql-for-pg.
  7. In the left hand menu select Cloud SQL Studio.
  8. In Database dropdown, choose test.
  9. In User dropdown, choose user1.
  10. In Password textbox paste the password copied from the cloudsql-pswd secret.
  11. Click Authenticate. Confirm you were able to log in to the database.
"},{"location":"reference-architectures/automated-password-rotation/#destroy-the-architecture","title":"Destroy the architecture","text":"
  cd platform-engineering/reference-architectures/automated-password-rotation/terraform\n\n  terraform init\n  terraform plan -var \"project_id=$PROJECT_ID\"\n  terraform destroy -var \"project_id=$PROJECT_ID\" --auto-approve\n
"},{"location":"reference-architectures/automated-password-rotation/#conclusion","title":"Conclusion","text":"

In this tutorial, you saw a way to automate password rotation on Google Cloud. First, you saw a generic reference architecture that can be used to automate password rotation in any password management system. Then, you saw an example deployment that uses Google Cloud services to rotate the password of a Cloud SQL database and store it in Secret Manager.

Implementing an automatic flow to rotate passwords removes manual overhead and provides a seamless way to tighten your password security. It is recommended to create an automation flow that runs on a regular schedule but can also be easily triggered manually when needed. Many variations of this architecture can be adopted. For example, you can trigger a Cloud Run Function directly from a Cloud Scheduler job, without sending a message to Pub/Sub, if you don't want to broadcast the password rotation. You should identify a flow that fits your organization's requirements and modify the reference architecture to implement it.

"},{"location":"reference-architectures/backstage/","title":"Backstage on Google Cloud","text":"

A collection of resources related to utilizing Backstage on Google Cloud.

"},{"location":"reference-architectures/backstage/#backstage-plugins-for-google-cloud","title":"Backstage Plugins for Google Cloud","text":"

A repository for various plugins can be found here -> google-cloud-backstage-plugins

"},{"location":"reference-architectures/backstage/#backstage-quickstart","title":"Backstage Quickstart","text":"

This is an example deployment of Backstage on Google Cloud with various Google Cloud services providing the infrastructure.

"},{"location":"reference-architectures/backstage/backstage-quickstart/","title":"Backstage on Google Cloud Quickstart","text":"

This quick-start deployment guide can be used to set up an environment to familiarize yourself with the architecture and get an understanding of the concepts related to hosting Backstage on Google Cloud.

NOTE: This environment is not intended to be a long lived environment. It is intended for temporary demonstration and learning purposes. You will need to modify the configurations provided to align with your organization's needs. Along the way, the guide makes callouts to tasks or areas that should be productionized for long lived deployments.

"},{"location":"reference-architectures/backstage/backstage-quickstart/#architecture","title":"Architecture","text":"

The following diagram depicts the high level architecture of the infrastructure that will be deployed.

"},{"location":"reference-architectures/backstage/backstage-quickstart/#requirements-and-assumptions","title":"Requirements and Assumptions","text":"

To keep this guide simple, it makes a few assumptions. Where there are alternatives, we have linked to some additional documentation.

  1. The Backstage quick start will be deployed in a new project that you will manually create. If you want to use a project managed through Terraform refer to the Terraform managed project section.
  2. Identity Aware Proxy (IAP) will be used for controlling access to Backstage.
"},{"location":"reference-architectures/backstage/backstage-quickstart/#before-you-begin","title":"Before you begin","text":"

In this section you prepare a folder for deployment.

  1. Open the Cloud Console
  2. Activate Cloud Shell \\ At the bottom of the Cloud Console, a Cloud Shell session starts and displays a command-line prompt.
"},{"location":"reference-architectures/backstage/backstage-quickstart/#project-creation","title":"Project Creation","text":"

In this section you prepare your project for deployment.

  1. Go to the project selector page in the Cloud Console. Select or create a Cloud project.

  2. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.

  3. In Cloud Shell, set environment variables with the ID of your project:

    export PROJECT_ID=<INSERT_YOUR_PROJECT_ID>\ngcloud config set project \"${PROJECT_ID}\"\n
  4. Clone the repository and change directory to the guide directory

    git clone https://github.com/GoogleCloudPlatform/platform-engineering && \\\ncd platform-engineering/reference-architectures/backstage/backstage-quickstart\n
  5. Set environment variables

    export BACKSTAGE_QS_BASE_DIR=$(pwd) && \\\nsed -n -i -e '/^export BACKSTAGE_QS_BASE_DIR=/!p' -i -e '$aexport  \\\nBACKSTAGE_QS_BASE_DIR=\"'\"${BACKSTAGE_QS_BASE_DIR}\"'\"' ${HOME}/.bashrc\n
"},{"location":"reference-architectures/backstage/backstage-quickstart/#project-configuration","title":"Project Configuration","text":"
  1. Set the project environment variables in Cloud Shell

    export BACKSTAGE_QS_STATE_BUCKET=\"${PROJECT_ID}-terraform\"\nexport IAP_USER_DOMAIN=\"<your org's domain>\"\nexport IAP_SUPPORT_EMAIL=\"<your org's support email>\"\n
  2. Create a Cloud Storage bucket to store the Terraform state

    gcloud storage buckets create gs://${BACKSTAGE_QS_STATE_BUCKET} --project ${PROJECT_ID}\n
"},{"location":"reference-architectures/backstage/backstage-quickstart/#deploy-backstage","title":"Deploy Backstage","text":"

Before running Terraform, make sure that the Service Usage API and Service Management API are enabled.

  1. Enable Service Usage API and Service Management API

    gcloud services enable \\\n  cloudresourcemanager.googleapis.com \\\n  iap.googleapis.com \\\n  serviceusage.googleapis.com \\\n  servicemanagement.googleapis.com\n
  2. Setup the Identity Aware Proxy brand

    gcloud iap oauth-brands create \\\n  --application_title=\"IAP Secured Backstage\" \\\n  --project=\"${PROJECT_ID}\" \\\n  --support_email=\"${IAP_SUPPORT_EMAIL}\"\n

    Capture the brand name in an environment variable; it will be in the format: projects/[your_project_number]/brands/[your_project_number].

    export IAP_BRAND=<your_brand_name>\n
  3. Using the brand name create the IAP client.

    gcloud iap oauth-clients create \\\n  ${IAP_BRAND} \\\n  --display_name=\"IAP Secured Backstage\"\n

    Capture the client_id and client_secret in environment variables. For the client_id, we only need the final segment of the returned resource name; it will be in the format: 549085115274-ksi3n9n41tp1vif79dda5ofauk0ebes9.apps.googleusercontent.com

    export IAP_CLIENT_ID=\"<your_client_id>\"\nexport IAP_SECRET=\"<your_iap_secret>\"\n
  4. Set the configuration variables

    sed -i \"s/YOUR_STATE_BUCKET/${BACKSTAGE_QS_STATE_BUCKET}/g\" ${BACKSTAGE_QS_BASE_DIR}/backend.tf\nsed -i \"s/YOUR_PROJECT_ID/${PROJECT_ID}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_USER_DOMAIN/${IAP_USER_DOMAIN}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_SUPPORT_EMAIL/${IAP_SUPPORT_EMAIL}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_CLIENT_ID/${IAP_CLIENT_ID}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_SECRET/${IAP_SECRET}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\n
  5. Create the resources

    cd ${BACKSTAGE_QS_BASE_DIR} && \\\nterraform init && \\\nterraform plan -input=false -out=tfplan && \\\nterraform apply -input=false tfplan && \\\nrm tfplan\n

    The initial run of Terraform may result in errors due to the way the API services are asynchronously enabled. Re-running Terraform usually resolves the errors.

    This will take a while to create all of the required resources; expect somewhere between 15 and 20 minutes.

  6. Build the container image for Backstage

    cd manifests/cloudbuild\ngcloud builds submit .\n

    The output of that command will include a fully qualified image path similar to: us-central1-docker.pkg.dev/[your_project]/backstage-qs/backstage-quickstart:d747db2a-deef-4783-8a0e-3b36e568f6fc. Using that value, create a new environment variable.

    export IMAGE_PATH=\"<your_image_path>\"\n

    This will take approximately 10 minutes to build and push the image.

  7. Configure Cloud SQL postgres user for password authentication.

    gcloud sql users set-password postgres --instance=backstage-qs --prompt-for-password\n
  8. Grant the backstage workload service account create database permissions.

    a. In the Cloud Console, navigate to SQL

    b. Select the database instance

    c. In the left menu select Cloud SQL Studio

    d. Choose the postgres database and log in with the postgres user and the password you created in step 7.

    e. Run the following sql commands, to grant create database permissions

    ALTER USER \"backstage-qs-workload@[your_project_id].iam\" CREATEDB;\n
  9. Perform an initial deployment of Kubernetes resources.

    cd ../k8s\nsed -i \"s%CONTAINER_IMAGE%${IMAGE_PATH}%g\" deployment.yaml\ngcloud container clusters get-credentials backstage-qs --region us-central1 --dns-endpoint\nkubectl apply -f .\n
  10. Capture the IAP audience, the Backend Service may take a few minutes to appear.

    a. In the Cloud Console, navigate to Security > Identity-Aware Proxy

    b. Verify the IAP option is set to enabled. If not, enable it now.

    c. Choose Get JWT audience code from the three dot menu on the right side of your Backend Service.

    d. The value will be in the format of: /projects/<your_project_number>/global/backendServices/<numeric_id>. Using that value, create a new environment variable.

    export IAP_AUDIENCE_VALUE=\"<your_iap_audience_value>\"\n
  11. Redeploy the Kubernetes manifests with the IAP audience

    sed -i \"s%IAP_AUDIENCE_VALUE%${IAP_AUDIENCE_VALUE}%g\" deployment.yaml\nkubectl apply -f .\n
  12. In a browser, navigate to your Backstage endpoint. The URL will be similar to https://qs.endpoints.[your_project_id].cloud.goog

"},{"location":"reference-architectures/backstage/backstage-quickstart/#cleanup","title":"Cleanup","text":"
  1. Destroy the resources using Terraform destroy

    cd ${BACKSTAGE_QS_BASE_DIR} && \\\nterraform init && \\\nterraform destroy -auto-approve && \\\nrm -rf .terraform .terraform.lock.hcl\n
  2. Delete the project

    gcloud projects delete ${PROJECT_ID}\n
  3. Remove Terraform files and temporary files

    cd ${BACKSTAGE_QS_BASE_DIR} && \\\nrm -rf \\\n.terraform \\\n.terraform.lock.hcl \\\ninitialize/.terraform \\\ninitialize/.terraform.lock.hcl \\\ninitialize/backend.tf.local \\\ninitialize/state\n
  4. Reset the TF variables file

    cd ${BACKSTAGE_QS_BASE_DIR} && \\\ncp backstage-qs-auto.tfvars.local backstage-qs.auto.tfvars\n
  5. Remove the environment variables

    sed \\\n-i -e '/^export BACKSTAGE_QS_BASE_DIR=/d' \\\n${HOME}/.bashrc\n
"},{"location":"reference-architectures/backstage/backstage-quickstart/#advanced-options","title":"Advanced Options","text":""},{"location":"reference-architectures/backstage/backstage-quickstart/#terraform-managed-project","title":"Terraform managed project","text":"

In some instances you will need to create and manage the project through Terraform. This quickstart provides a sample process and Terraform code to create and destroy the project via Terraform.

To run this part of the quick start you will need the following information and permissions.

"},{"location":"reference-architectures/backstage/backstage-quickstart/#creating-a-terraform-managed-project","title":"Creating a Terraform managed project","text":"
  1. Set the configuration variables

    nano ${BACKSTAGE_QS_BASE_DIR}/initialize/initialize.auto.tfvars\n
    environment_name  = \"qs\"\niapUserDomain = \"\"\niapSupportEmail = \"\"\nproject = {\n  billing_account_id = \"XXXXXX-XXXXXX-XXXXXX\"\n  folder_id          = \"############\"\n  name               = \"backstage\"\n  org_id             = \"############\"\n}\n

    Values required :

  2. Authorize gcloud

    gcloud auth login --activate --no-launch-browser --quiet --update-adc\n
  3. Create a new project

    cd ${BACKSTAGE_QS_BASE_DIR}/initialize\nterraform init && \\\nterraform plan -input=false -out=tfplan && \\\nterraform apply -input=false tfplan && \\\nrm tfplan && \\\nterraform init -force-copy -migrate-state && \\\nrm -rf state\n
  4. Set the project environment variables in Cloud Shell

    PROJECT_ID=$(grep environment_project_id \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars |\nawk -F\"=\" '{print $2}' | xargs)\n
"},{"location":"reference-architectures/backstage/backstage-quickstart/#cleaning-up-a-terraform-managed-project","title":"Cleaning up a Terraform managed project","text":"
  1. Destroy the project

    cd ${BACKSTAGE_QS_BASE_DIR}/initialize && \\\nTERRAFORM_BUCKET_NAME=$(grep bucket backend.tf | awk -F\"=\" '{print $2}' |\nxargs) && \\\ncp backend.tf.local backend.tf && \\\nterraform init -force-copy -lock=false -migrate-state && \\\ngsutil -m rm -rf gs://${TERRAFORM_BUCKET_NAME}/* && \\\nterraform init && \\\nterraform destroy -auto-approve  && \\\nrm -rf .terraform .terraform.lock.hcl state/\n
"},{"location":"reference-architectures/backstage/backstage-quickstart/#re-using-an-existing-project","title":"Re-using an Existing Project","text":"

In situations where you have run this quickstart before and then cleaned up the resources but are re-using the project, it might be necessary to restore the endpoints service from its deleted state first.

BACKSTAGE_QS_PREFIX=$(grep environment_name \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars | awk -F\"=\" '{print $2}' | xargs)\nBACKSTAGE_QS_PROJECT_ID=$(grep environment_project_id \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars | awk -F\"=\" '{print $2}' | xargs)\ngcloud endpoints services undelete \\\n${BACKSTAGE_QS_PREFIX}.endpoints.${BACKSTAGE_QS_PROJECT_ID}.cloud.goog \\\n--quiet 2>/dev/null\n
"},{"location":"reference-architectures/cloud_deploy_flow/","title":"Platform Engineering Deployment Demo","text":""},{"location":"reference-architectures/cloud_deploy_flow/#background","title":"Background","text":"

Platform engineering focuses on providing a robust framework for managing the deployment of applications across various environments. One of the critical components in this field is the automation of application deployments, which streamlines the entire process from development to production.

Most organizations have predefined rules around security, privacy, deployment, and change management to ensure consistency and compliance across environments. These rules often include automated security scans, privacy checks, and controlled release protocols that track all changes in both production and pre-production environments.

In this demo, the architecture is designed to show how a deployment tool like Cloud Deploy can integrate smoothly into such workflows, supporting both automation and oversight. The process starts with release validation, ensuring that only compliant builds reach the release stage. Rollout approvals then offer flexibility, allowing teams to implement either manual checks or automated responses depending on specific requirements.

This setup provides a blueprint for organizations to streamline deployment cycles while maintaining robust governance. By using this demo, you can see how these components work together, from container build through deployment, in a way that minimizes disruption to existing processes and aligns with typical organizational change management practices.

This demo showcases a complete workflow that begins with the build of a container and progresses through various stages, ultimately resulting in the deployment of a new application.

"},{"location":"reference-architectures/cloud_deploy_flow/#overview-of-the-demo","title":"Overview of the Demo","text":"

This demo illustrates the end-to-end deployment process, starting from the container build phase. Here's a high-level overview of the workflow:

  1. Container Build Process: The demo begins when a container is built in Cloud Build. Upon completion, a notification is sent to a Pub/Sub message queue.

  2. Release Logic: A Cloud Run Function subscribes to this message queue, assessing whether a release should be created. If a release is warranted, a message is sent to a \"Command Queue\" (another Pub/Sub topic).

  3. Creating a Release: A dedicated function listens to the \"Command Queue\" and communicates with Cloud Deploy to create a new release. Once the release is created, a notification is dispatched to the Pub/Sub Operations topic.

  4. Rollout Process: Another Cloud Function picks up this notification and initiates the rollout process by sending a createRolloutRequest to the \"Command Queue.\"

  5. Approval Process: Since rollouts typically require approval, a notification is sent to the cloud-deploy-approvals Pub/Sub queue. An approval function then picks up this message, allowing you to implement your custom logic or utilize the provided Demo site to return JSON, such as { \"manualApproval\": \"true\" }.

  6. Deployment: Once approved, the rollout proceeds, and the new application is deployed.
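The "Release Logic" in step 2 above can be sketched as follows. This is an illustrative Python sketch, not code from the repository; the Cloud Build notification fields and the decision criterion (a successful build) are assumptions:

```python
import base64
import json

def should_create_release(pubsub_message: dict) -> bool:
    """Decide whether a Cloud Build notification warrants a new release.

    Sketch of the demo's "Release Logic" step: decode the Pub/Sub payload
    and require a successful build. The real criteria live in the demo's
    Cloud Run Function; this condition is illustrative.
    """
    build = json.loads(base64.b64decode(pubsub_message["data"]))
    return build.get("status") == "SUCCESS"

# A stand-in for the notification Cloud Build publishes after a build.
notification = {
    "data": base64.b64encode(
        json.dumps({"status": "SUCCESS", "id": "build-1"}).encode("utf-8")
    )
}

if should_create_release(notification):
    # In the demo this would publish a createRelease command to the
    # "Command Queue" Pub/Sub topic rather than print.
    print("publish createRelease command to the Command Queue")
```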

"},{"location":"reference-architectures/cloud_deploy_flow/#prerequisites","title":"Prerequisites","text":""},{"location":"reference-architectures/cloud_deploy_flow/#iam-roles-used-by-terraform","title":"IAM Roles used by Terraform","text":"

To run this demo, the following IAM roles will be granted to the service account created by the Terraform configuration:

"},{"location":"reference-architectures/cloud_deploy_flow/#gcp-services-enabled-by-terraform","title":"GCP Services enabled by Terraform","text":"

The following Google Cloud services must be enabled in your project to run this demo:

"},{"location":"reference-architectures/cloud_deploy_flow/#getting-started","title":"Getting Started","text":"

To run this demo, follow these steps:

  1. Fork and Clone the Repository: Start by forking this repository to your GitHub account (So you can connect GCP to this repository), then clone it to your local environment. After cloning, change your directory to the deployment demo:

    cd platform-engineering/reference-architectures/cloud_deploy_flow\n

    Note: you can't use a repository inside a GitHub Organization; use your personal account for this demo.

  2. Set Up Environment Variables or Variables File: You can set the necessary variables either by exporting them as environment variables or by creating a terraform.tfvars file. Refer to variables.tf for more details on each variable. Ensure the values match your Google Cloud project and GitHub configuration.

    For the repo-name and repo-owner here, use the repository you just cloned above.

  3. Initialize and Apply Terraform: With the environment variables set, initialize and apply the Terraform configuration:

    terraform init\nterraform apply\n

    Note: Applying Terraform may take a few minutes as it creates the necessary resources.

  4. Connect GitHub Repository to Cloud Build: Due to occasional issues with automatic connections, you may need to manually attach your GitHub repository to Cloud Build in the Google Cloud Console.

    If you get the following error you will need to manually connect your repository to the project:

    Error: Error creating Trigger: googleapi: Error 400: Repository mapping does\nnot exist.\n

    Re-run step 3 to ensure all resources are deployed

  5. Navigate to the Demo site: Once the Terraform setup is complete, switch to the Demo site directory:

    cd platform-engineering/reference-architectures/cloud_deploy_flow/WebsiteDemo\n
  6. Authenticate and Run the Demo site:

  7. Trigger a Build in Cloud Build:

  8. Approve the Rollout: When an approval message is received, you\u2019ll need to send a response to complete the deployment. Use the message data provided and add a ManualApproval field:

    {\n    \"message\": {\n    \"data\": \"<base64-encoded data>\",\n    \"attributes\": {\n        \"Action\": \"Required\",\n        \"Rollout\": \"rollout-123\",\n        \"ReleaseId\": \"release-456\",\n        \"ManualApproval\": \"true\"\n    }\n    }\n}\n
  9. Verify the Deployment: Once the approval is processed, the deployment should finish rolling out. Check the Cloud Deploy dashboard in the Google Cloud Console to confirm the deployment status.

"},{"location":"reference-architectures/cloud_deploy_flow/#conclusion","title":"Conclusion","text":"

This demo encapsulates the essential components and workflow for deploying applications using platform engineering practices. It illustrates how various services interact to ensure a smooth deployment process.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/","title":"Cloud Deployment Approvals with Pub/Sub","text":"

This project provides a Google Cloud Run Function to automate deployment approvals based on messages received via Google Cloud Pub/Sub. The function processes deployment requests, checks conditions for rollout approval, and publishes an approval command if the requirements are met.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#usage-note","title":"Usage Note","text":"

This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#features","title":"Features","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#setup","title":"Setup","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#requirements","title":"Requirements","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#installation","title":"Installation","text":"
  1. Clone the repository:

    git clone <repository-url>\ncd <repository-folder>\n
  2. Enable APIs: Enable the Google Cloud Pub/Sub and Deploy APIs for your project:

    gcloud services enable pubsub.googleapis.com deploy.googleapis.com\n
  3. Deploy the Function: Use Google Cloud SDK to deploy the function:

    gcloud functions deploy cloudDeployApprovals --runtime go116 \\\n--trigger-event-type google.cloud.pubsub.topic.v1.messagePublished \\\n--trigger-resource YOUR_SUBSCRIBE_TOPIC\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#environment-variables","title":"Environment Variables","text":"

The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:

Variable Name Description Required PROJECTID Google Cloud project ID Yes LOCATION The deployment location (region) Yes SENDTOPICID Pub/Sub topic ID for sending commands Yes"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#code-structure","title":"Code Structure","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#usage","title":"Usage","text":"

The function cloudDeployApprovals is invoked whenever a message is published to the configured Pub/Sub topic. Upon receiving a message, the function will:

  1. Parse and validate the message.
  2. Check if the action is Required, if a rollout ID is provided, and if manual approval is marked as \"true.\"
  3. If conditions are met, it will publish an approval command to the SENDTOPICID topic.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#sample-pubsub-message","title":"Sample Pub/Sub Message","text":"

A message sent to the function should resemble this JSON structure:

{\n  \"message\": {\n    \"data\": \"<base64-encoded data>\",\n    \"attributes\": {\n      \"Action\": \"Required\",\n      \"Rollout\": \"rollout-123\",\n      \"ReleaseId\": \"release-456\",\n      \"ManualApproval\": \"true\"\n    }\n  }\n}\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#custom-manual-approval-field","title":"Custom Manual Approval Field","text":"

In the ApprovalsData struct, there is a ManualApproval field. This field is a custom addition, not provided by Google Cloud Deploy, and serves as a placeholder for an external approval system.

To integrate the approval system, you can replace or adapt this field to suit your existing change process workflow. For instance, you could link this field to an external ticketing or project management system to track and verify approvals. Implementing an approval system allows greater control over deployment rollouts, ensuring they align with your organization\u2019s policies.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#logging","title":"Logging","text":"

The function logs each major step, from invocation to message processing and condition checking, to facilitate debugging and monitoring.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/","title":"Cloud Deploy Interactions with Pub/Sub","text":"

This project demonstrates a Google Cloud Run Function to manage deployments by creating releases, rollouts, or approving rollouts based on incoming Pub/Sub messages. The function leverages Google Cloud Deploy and listens for deployment-related commands sent via Pub/Sub, executing appropriate actions based on the command type.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#usage-note","title":"Usage Note","text":"

This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#features","title":"Features","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#setup","title":"Setup","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#requirements","title":"Requirements","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#installation","title":"Installation","text":"
  1. Clone the repository:

    git clone <repository-url>\ncd <repository-folder>\n
  2. Set up Google Cloud: Ensure you have enabled the Google Cloud Deploy and Pub/Sub APIs in your project.

  3. Deploy the Function: Deploy the function using Google Cloud SDK:

    gcloud functions deploy cloudDeployInteractions --runtime go116 \\\n--trigger-topic YOUR_TOPIC_NAME\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#pubsub-message-format","title":"Pub/Sub Message Format","text":"

The Pub/Sub message should include a JSON payload with a command field specifying the type of deployment action to execute. Examples of the command types include:

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#sample-pubsub-message","title":"Sample Pub/Sub Message","text":"

The message should follow this structure:

{\n  \"message\": {\n    \"data\": \"<base64-encoded JSON containing command data>\"\n  }\n}\n

The JSON inside data should follow the format for DeployCommand:

{\n  \"command\": \"CreateRelease\",\n  \"createReleaseRequest\": {\n    // Release creation parameters\n  },\n  \"createRolloutRequest\": {\n    // Rollout creation parameters\n  },\n  \"approveRolloutRequest\": {\n    // Rollout approval parameters\n  }\n}\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#code-structure","title":"Code Structure","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#logging","title":"Logging","text":"

Each function logs key steps, from initialization to message handling and completion of deployments, helping in troubleshooting and monitoring.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/","title":"Cloud Deploy Operations Function","text":"

This project contains a Google Cloud Run Function written in Go, designed to interact with Google Cloud Deploy. The function listens for deployment events on a Pub/Sub topic, processes those events, and triggers specific deployment operations based on the event details. For instance, when a deployment release succeeds, it triggers a rollout creation and sends the relevant command to another Pub/Sub topic.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#usage-note","title":"Usage Note","text":"

This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#requirements","title":"Requirements","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#environment-variables","title":"Environment Variables","text":"

The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:

Variable Name Description Required PROJECTID Google Cloud project ID Yes LOCATION The deployment location (region) Yes SENDTOPICID Pub/Sub topic ID for sending commands Yes"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#structure","title":"Structure","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#main-components","title":"Main Components","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#function-workflow","title":"Function Workflow","text":"
  1. Trigger: The function cloudDeployOperations is triggered by a deployment event, specifically a CloudEvent.
  2. Event Parsing: The function parses the event data into a Message struct, checking for deployment success events.
  3. Rollout Creation: If a release success is detected, it creates a CommandMessage for a rollout and calls sendCommandPubSub.
  4. Command Publish: The sendCommandPubSub function publishes the CommandMessage to a designated Pub/Sub topic to initiate the rollout.
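The four steps above can be sketched in Python for illustration (the real function is written in Go; the Action, ResourceType, and Resource field names are assumptions, and the real function publishes rather than returns the command):

```python
import json

def on_deploy_event(message):
    # Step 2: parse the event data and look for a release success signal.
    if message.get('Action') != 'Succeed' or message.get('ResourceType') != 'Release':
        return None
    # Step 3: build a CommandMessage requesting a rollout of that release.
    command = {
        'command': 'CreateRollout',
        'createRolloutRequest': {'release': message.get('Resource')},
    }
    # Step 4: in the real function this JSON is published to SENDTOPICID.
    return json.dumps(command)

result = on_deploy_event({'Action': 'Succeed',
                          'ResourceType': 'Release',
                          'Resource': 'release-456'})
```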
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#setup-and-deployment","title":"Setup and Deployment","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#local-development","title":"Local Development","text":"
  1. Clone the repository and set up your local environment with the necessary environment variables.
  2. Run the Functions Framework locally to test the function:
functions-framework --target=cloudDeployOperations\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#deployment-to-google-cloud-run-functions","title":"Deployment to Google Cloud Run Functions","text":"
  1. Set up your Google Cloud environment and enable the necessary APIs:

    gcloud services enable cloudfunctions.googleapis.com \\\npubsub.googleapis.com clouddeploy.googleapis.com\n
  2. Deploy the function to Google Cloud:

    gcloud functions deploy cloudDeployOperations \\\n   --runtime go120 \\\n   --trigger-topic <YOUR_TRIGGER_TOPIC> \\\n   --set-env-vars PROJECTID=<YOUR_PROJECT_ID>,LOCATION=<YOUR_LOCATION>,SENDTOPICID=<YOUR_SEND_TOPIC_ID>\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#error-handling","title":"Error Handling","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#license","title":"License","text":"

This project is licensed under the MIT License. See the LICENSE file for details.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#notes","title":"Notes","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/","title":"Example Cloud Run Function","text":"

This project demonstrates a Google Cloud Run Function that triggers deployments based on Pub/Sub messages. The function listens for build notifications from Google Cloud Build and initiates a release in Google Cloud Deploy when a build succeeds.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#usage-note","title":"Usage Note","text":"

This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#table-of-contents","title":"Table of Contents","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#prerequisites","title":"Prerequisites","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#environment-variables","title":"Environment Variables","text":"

The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:

Variable Name Description Required PROJECTID Google Cloud project ID Yes LOCATION The deployment location (region) Yes PIPELINE The name of the delivery pipeline in Cloud Deploy. Yes TRIGGER The ID of the build trigger in Cloud Build. Yes SENDTOPICID Pub/Sub topic ID for sending commands Yes"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#function-overview","title":"Function Overview","text":"

The deployTrigger function is invoked by Pub/Sub events. Here's a breakdown of its key components:

  1. Initialization:

  2. Message Handling:

  3. Release Creation:

  4. Random ID Generation:

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#deploying-the-function","title":"Deploying the Function","text":"

To deploy the function, follow these steps:

  1. Ensure that your Google Cloud SDK is authenticated and configured with the correct project.
  2. Use the following command to deploy the function:
gcloud functions deploy deployTrigger \\\n    --runtime go113 \\\n    --trigger-topic YOUR_TOPIC_NAME \\\n    --env-vars-file .env\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/","title":"Random Date Service","text":"

This repository contains a sample application designed to demonstrate how deployments can work through Google Cloud Deploy and Cloud Build. Instead of a traditional \"Hello World\" application, this project generates and serves a random date, showcasing how to set up a cloud-based service.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#usage-note","title":"Usage Note","text":"

This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#overview","title":"Overview","text":"

The Random Date Service is built to illustrate the process of deploying an application using Cloud Run and Cloud Deploy. The application serves a random date formatted as a string. This simple service allows you to explore key concepts in cloud deployment without the complexity of a full-fledged application.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#components","title":"Components","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#1-maingo","title":"1. main.go","text":"

This is the core of the application, where the HTTP server is defined. It handles requests and responds with a randomly generated date.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#2-dockerfile","title":"2. Dockerfile","text":"

The Dockerfile specifies how to build a container image for the application. This image will be used in Cloud Run for deploying the service.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#3-skaffoldyaml","title":"3. skaffold.yaml","text":"

This file is configured for Google Cloud Deploy, facilitating the deployment process by managing builds and configurations in a single file.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#4-runyaml","title":"4. run.yaml","text":"

The run.yaml file defines the configuration for Cloud Run and Cloud Deploy. Key aspects to note include:

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#usage","title":"Usage","text":"

To deploy and test this application:

  1. Build the Docker Image: Use the provided Dockerfile to create a container image.
  2. Deploy to Cloud Run: Utilize the run.yaml configuration to deploy the service.
  3. Monitor Deployments: Use Cloud Deploy to observe the deployment pipeline and ensure the service is running as expected.
  4. Access the Service: After deployment, access the service through its endpoint to receive a random date.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#conclusion","title":"Conclusion","text":"

This sample application serves as a foundational example of how to leverage cloud services for deploying applications. By utilizing Google Cloud Deploy and Cloud Build, you can understand the deployment lifecycle and how cloud-native applications can be effectively managed and served.

Feel free to explore the code and configurations provided in this repository to get a better grasp of the deployment process.

"},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/","title":"Pub/Sub Local Demo","text":"

This project is a simple demonstration of a Pub/Sub system using Google Cloud Pub/Sub and a basic Express.js server. It is designed to help you visually understand how messages are sent to and from Pub/Sub queues. The code provided is primarily for demonstration purposes and is not intended for production use.

"},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#features","title":"Features","text":""},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#project-structure","title":"Project Structure","text":""},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#installation","title":"Installation","text":"
  1. Install the required dependencies:

    npm install

  2. Create a .env file and populate it with the environment variables found in .env.sample

  3. Start the server:

    node index.js

  4. Open your web browser and go to http://localhost:8080 to access the demo.

"},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#usage","title":"Usage","text":""},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#disclaimer","title":"Disclaimer","text":"

This code is intended for educational and demonstration purposes only. It may not be suitable for production environments due to lack of error handling, security considerations, and scalability.

"},{"location":"reference-architectures/github-runners-gke/","title":"Reference Guide: Deploy and use GitHub Actions Runners on GKE","text":""},{"location":"reference-architectures/github-runners-gke/#overview","title":"Overview","text":"

This guide walks you through the process of setting up self-hosted GitHub Actions Runners on Google Kubernetes Engine (GKE) using the Terraform module terraform-google-github-actions-runners. It then provides instructions on how to create a basic GitHub Actions workflow to leverage these runners.

"},{"location":"reference-architectures/github-runners-gke/#prerequisites","title":"Prerequisites","text":"

Run the following command to enable the prerequisite APIs:

gcloud services enable \\\n  cloudresourcemanager.googleapis.com \\\n  iam.googleapis.com \\\n  container.googleapis.com \\\n  serviceusage.googleapis.com \\\n  --project <YOUR_PROJECT_ID>\n
"},{"location":"reference-architectures/github-runners-gke/#register-a-github-app-for-authenticating-arc","title":"Register a GitHub App for Authenticating ARC","text":"

Using a GitHub App for authentication allows you to make your self-hosted runners available to a GitHub organization that you own or have administrative access to. For more details on registering GitHub Apps, see GitHub\u2019s documentation.

You will need 3 values from this section to use as inputs in the Terraform module:

"},{"location":"reference-architectures/github-runners-gke/#navigate-to-your-organization-github-app-settings","title":"Navigate to your Organization GitHub App settings","text":"
  1. Click your profile picture in the top-right
  2. Click Your organizations
  3. Select the organization you want to use for this walkthrough
  4. Click Settings
  5. Click \\<> Developer settings
  6. Click GitHub Apps
"},{"location":"reference-architectures/github-runners-gke/#create-a-new-github-app","title":"Create a new GitHub App","text":"
  1. Click New GitHub App
  2. Under \u201cGitHub App name\u201d, choose a unique name such as \u201cmy-gke-arc-app\u201d
  3. Under \u201cHomepage URL\u201d enter https://github.com/actions/actions-runner-controller
  4. Under \u201cWebhook,\u201d uncheck Active.
  5. Under \u201cPermissions,\u201d click Repository permissions and use the dropdown menu to select the following permissions:
    1. Metadata: Read-only
  6. Under \u201cPermissions,\u201d click Organization permissions and use the dropdown menu to select the following permissions:
    1. Self-hosted runners: Read and write
  7. Click the Create GitHub App button
"},{"location":"reference-architectures/github-runners-gke/#gather-required-ids-and-keys","title":"Gather required IDs and keys","text":"
  1. On the GitHub App\u2019s page, save the value for \u201cApp ID\u201d
    1. You will use this as the value for gh_app_id in the Terraform module
  2. Under \u201cPrivate keys\u201d click Generate a private key. Save the .pem file for later.
    1. You will use this as the value for gh_app_private_key in the Terraform module
  3. In the menu at the top-left corner of the page, click Install App, and next to your organization, click Install to install the app on your organization.
    1. Choose All repositories to allow any repository in your org to have access to your runners
    2. Choose Only select repositories to allow specific repos to have access to your runners
  4. Note the app installation ID, which you can find on the app installation page, which has the following URL format: https://github.com/organizations/ORGANIZATION/settings/installations/INSTALLATION_ID
    1. You will use this as the value for gh_app_installation_id in the Terraform module.
"},{"location":"reference-architectures/github-runners-gke/#configure-terraform-example","title":"Configure Terraform example","text":""},{"location":"reference-architectures/github-runners-gke/#open-the-terraform-example","title":"Open the Terraform example","text":"

Open the Terraform module repository in Cloud Shell automatically by clicking the button:

Clicking this button will clone the repository into Cloud Shell, change into the example directory, and open the main.tf file in the Cloud Shell Editor.

"},{"location":"reference-architectures/github-runners-gke/#modify-terraform-example-variables","title":"Modify Terraform example variables","text":"
  1. Insert your Google Cloud Project ID as the value of project_id
  2. Modify the sample values of the following variables with the values you saved from earlier.
    1. gh_app_id: insert the value of the App ID from the GitHub App page
    2. gh_app_installation_id: insert the value from the URL of the app installation page
    3. gh_app_private_key:
      1. Copy the .pem file to example directory, alongside the main.tf file
      2. Insert the .pem filename you downloaded after generating the private key for the app, like so:
        1. gh_app_private_key = file(\"example.private-key.pem\")
      3. Warning: Terraform will store the private key in state as plaintext. It\u2019s recommended to secure your state file by using a backend such as a GCS bucket with encryption. You can do so by following these instructions.
  3. Modify the value of gh_config_url with the URL of your GitHub organization. It will be in the format of https://github.com/ORGANIZATION
  4. (Optional) Specify any other parameters that you wish. For a full list of variables you can modify, refer to the module documentation.
"},{"location":"reference-architectures/github-runners-gke/#deploy-the-example","title":"Deploy the example","text":"
  1. Initialize Terraform: Run terraform init to download the required providers.
  2. Plan: Run terraform plan to preview the changes that will be made.
  3. Apply: Run terraform apply and confirm to create the resources.

You will see the runners become available in your GitHub Organization:

  1. Go to your GitHub organization page
  2. Click Settings
  3. Open the \u201cActions\u201d drop-down in the left menu and choose Runners

You should see the runners appear as \u201carc-runners\u201d

"},{"location":"reference-architectures/github-runners-gke/#creating-a-github-actions-workflow","title":"Creating a GitHub Actions Workflow","text":"
  1. Create a new GitHub repository within your organization.
  2. In your GitHub repository, click the Actions tab.
  3. Click New workflow
  4. Under \u201cChoose workflow\u201d click set up a workflow yourself
  5. Paste the following configuration into the text editor:

    name: Actions Runner Controller Demo\non:\n  workflow_dispatch:\njobs:\n  Explore-GitHub-Actions:\n    runs-on: arc-runners\n    steps:\n      - run: echo \"This job uses runner scale set runners!\"\n
  6. Click Commit changes to save the workflow to your repository.

"},{"location":"reference-architectures/github-runners-gke/#test-the-github-actions-workflow","title":"Test the GitHub Actions Workflow","text":"
  1. Go back to the Actions tab in your repository.
  2. In the left menu, select the name of your workflow. This should be \u201cActions Runner Controller Demo\u201d if you left the above configuration unchanged
  3. Click Run workflow to open the drop-down menu, and click Run workflow
  4. The sample workflow executes on your GKE-hosted ARC runner set. You can view the output within the GitHub Actions run history.
"},{"location":"reference-architectures/github-runners-gke/#cleanup","title":"Cleanup","text":""},{"location":"reference-architectures/github-runners-gke/#teardown-terraform-managed-infrastructure","title":"Teardown Terraform-managed infrastructure","text":"
  1. Navigate back into the example directory where you previously ran terraform apply:

    cd terraform-google-github-actions-runners/examples/gh-runner-gke-simple/\n
  2. Destroy Terraform-managed infrastructure

    terraform destroy\n

Warning: this will destroy the GKE cluster, example VPC, service accounts, and the Helm-managed workloads previously deployed by this example.

"},{"location":"reference-architectures/github-runners-gke/#delete-github-resources","title":"Delete GitHub resources","text":"

If you created a new GitHub App for the purposes of this walkthrough, you can delete it via the following instructions. Note that any services authenticating via this GitHub App will lose access.

  1. Navigate to your Organization GitHub App settings
    1. Click your profile picture in the top-right
    2. Click Your organizations
    3. Select the organization you used for this walkthrough
    4. Click Settings
    5. Click the \\<> Developer settings drop-down
    6. Click GitHub Apps
  2. In the row where your GitHub App is listed, click Edit
  3. In the left-side menu, click Advanced
  4. Click Delete GitHub App
  5. Type the name of the GitHub App to confirm and delete.
"},{"location":"reference-architectures/sandboxes/","title":"Sandbox Projects Reference Architecture","text":"

This architecture demonstrates how you can automate the provisioning of sandbox projects and automatically apply sensible guardrails and constraints. A sandbox project allows engineers to experiment with new technologies. Sandboxes are provisioned for a short period of time and with budget constraints.

"},{"location":"reference-architectures/sandboxes/#architecture","title":"Architecture","text":"

The following diagram is the high-level architecture for enabling self-service creation of sandbox projects.

  1. The system project contains the state database and infrastructure required to create, delete and manage the lifecycle of the sandboxes.
  2. User interface that engineers use to request and manage the sandboxes they own.
  3. Firestore stores the state of the overall environment. Documents in the database represent all the active and inactive sandboxes. The document model is detailed in the sandbox-modules readme.
  4. Firestore triggers invoke Cloud Run functions whenever a document is created or updated. Create and update events are handled by the Cloud Run functions onCreate and onModify. The functions contain the logic to decide if a sandbox should be created or deleted.
  5. infraManagerProcessor is a Cloud Run service that works with Infrastructure Manager to kick off and monitor the infrastructure management. This is handled in a Cloud Run service because the execution of Terraform is a long running process.
  6. Cloud Storage contains the Terraform templates and state used by Infrastructure Manager.
  7. Cloud Scheduler triggers the execution of sandbox lifecycle management processes, for example a function that checks for the expiration of sandboxes and marks them for deletion.
"},{"location":"reference-architectures/sandboxes/#structure-of-the-repository","title":"Structure of the Repository","text":"

This repository contains the code to stand up the reference architecture and also create the different sandbox templates in the catalog. This section describes the structure of the repository so you can better navigate the code.

"},{"location":"reference-architectures/sandboxes/#examples","title":"Examples","text":"

The /examples directory contains a sample Terraform deployment for deploying the reference architecture and a command-line tool to exercise the automated creation of developer sandboxes. The examples are intended to provide a starting point so you can incorporate the reference architecture into your own infrastructure.

"},{"location":"reference-architectures/sandboxes/#gcp-sandboxes","title":"GCP Sandboxes","text":"

This example uses the Terraform modules from /sandbox-modules to deploy the reference architecture and includes instructions on how to get started.

"},{"location":"reference-architectures/sandboxes/#command-line-interface-cli","title":"Command Line Interface (CLI)","text":"

The workflows and lifecycle of the sandboxes deployed via the reference architecture are managed through the document model stored in Cloud Firestore. This abstraction has the benefit of separating the core logic included in the reference architecture from the user experience (UX). As such, the example command-line interface lets you experiment with the reference architecture and learn about the object model.

"},{"location":"reference-architectures/sandboxes/#catalog","title":"Catalog","text":"

This directory contains a collection (catalog) of templates that you can use to deploy sandboxes. The reference architecture includes one for an empty project, but others could be added to support more specialized roles such as database admins, AI engineers, etc.

"},{"location":"reference-architectures/sandboxes/#sandbox-modules","title":"Sandbox Modules","text":"

These modules use the fabric modules to create the system project. Each module represents a large component of the overall reference architecture, and the components can either be combined into a single system project or spread across different projects to help with separation of duties.

"},{"location":"reference-architectures/sandboxes/#fabric-modules","title":"Fabric Modules","text":"

These are the base Terraform modules adapted from the Cloud Foundation Fabric. The fabric modules are intended to be vendored, so we have copied them here for repeatability of the overall deployment of the reference architecture.

We recommend that, as you need additional modules for templates in the catalog, you vendor them from the Cloud Foundation Fabric into this directory.

"},{"location":"reference-architectures/sandboxes/examples/cli/","title":"Example Command Line Interface","text":""},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/","title":"Overview","text":"

This directory contains Terraform configuration files that let you deploy the system project. This example is a good entry point for testing the reference architecture and learning how it can be incorporated into your own infrastructure-as-code processes.

"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#architecture","title":"Architecture","text":"

For an explanation of the components of the sandboxes reference architecture and the interaction flow, read the main Architecture section.

"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#before-you-begin","title":"Before you begin","text":"

In this section you prepare a folder for deployment.

  1. Open the Cloud Console
  2. Activate Cloud Shell. At the bottom of the Cloud Console, a Cloud Shell session starts and displays a command-line prompt.

  3. In Cloud Shell, clone this repository

    git clone https://github.com/GoogleCloudPlatform/platform-engineering.git\n
  4. Export variables for the working directories

    export SANDBOXES_DIR=\"$(pwd)/reference-architectures/examples/gcp-sandboxes\"\nexport SANDBOXES_CLI=\"$(pwd)/reference-architectures/examples/cli\"\n
"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#preparing-the-sandboxes-folder","title":"Preparing the Sandboxes Folder","text":"

In this section you prepare your environment for deploying the system project.

  1. Go to the Manage Resources page in the Cloud Console in the IAM & Admin menu.

  2. Click Create folder, then choose Folder.

  3. Enter a name for your folder. This folder will be used to contain the system and sandbox projects.

  4. Click Create

  5. Copy the folder ID from the Manage resources page; you will need this value later as a Terraform variable.

"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#deploying-the-reference-architecture","title":"Deploying the reference architecture","text":"
  1. Set the billing account ID, sandboxes folder, and system project name in the corresponding Terraform environment variables

    export TF_VAR_billing_account=\"<your billing account id>\"\nexport TF_VAR_sandboxes_folder=\"folders/<folder id from step 5>\"\nexport TF_VAR_system_project_name=\"<name for the system project>\"\n
  2. Change directory into the Terraform example directory and initialize Terraform.

    cd \"${SANDBOXES_DIR}\"\nterraform init\n
  3. Apply the configuration. Review the resources that Terraform intends to create, then answer yes when prompted.

    terraform apply\n
"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#creating-a-sandbox","title":"Creating a sandbox","text":"

Now that the system project has been deployed, create a sandbox using the example CLI.

  1. Change directory into the example command-line tool directory

    cd \"${SANDBOXES_CLI}\"\n
  2. Install the required Python libraries

    pip install -r requirements.txt\n
  3. Create a sandbox using the CLI

    python ./sandbox.py create \\\n--system=\"<name of your system project>\" \\\n--project_id=\"<name of the sandbox to create>\"\n
"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#next-steps","title":"Next steps","text":"

Your sandboxes infrastructure is ready; you can continue to use the example CLI to create and delete sandboxes. At this point, we recommend that you:

"},{"location":"reference-architectures/sandboxes/sandbox-modules/","title":"Sandbox Projects","text":""},{"location":"reference-architectures/sandboxes/sandbox-modules/#data-model","title":"Data Model","text":"

Each document stored in Cloud Firestore represents a sandbox. The following sections document the fields and structure of those documents.

"},{"location":"reference-architectures/sandboxes/sandbox-modules/#deployment","title":"Deployment","text":"Field Type Description _updateSource string The last process or tool used to update or create the deployment document. For example, the example Python CLI sets _updateSource to python, and when the firestore-processor Cloud Run service updates the document it is set to cloudrun. status string Status of the sandbox; this changes as create and delete operations progress. Refer to Key Statuses for detailed definitions of the values. projectId string The project ID of the sandbox. templateName string The name of the Terraform template from the catalog that the sandbox is based on. deploymentState object<DeploymentState> State object for the sandbox deployment. Contains data such as budget, current spend, expiration date, etc. The state object is updated and used by the various lifecycle functions. infraManagerDeploymentId string ID returned by Infrastructure Manager for the deployment. infraManagerResult object<DeploymentResponse> The response object returned from the Infrastructure Manager deployment operation. userId string Unique identifier for the user who owns the sandbox deployment. createdAt string Timestamp at which the sandbox record was created. updatedAt string Timestamp at which the sandbox record was last updated. variables object<Variables> List of variables supplied by the user, which are in turn used by the template to create the sandbox. auditLog array[string] List of messages that the system can add as an audit log."},{"location":"reference-architectures/sandboxes/sandbox-modules/#deploymentstate","title":"DeploymentState","text":"Field Type Description budgetLimit number Spend limit for the sandbox. currentSpend number Current spend for the sandbox. expiresAt string Time-based expiration for the sandbox."},{"location":"reference-architectures/sandboxes/sandbox-modules/#variables","title":"Variables","text":"

Collection of key-value pairs that are used in the Infrastructure Manager request, for use as the Terraform variable values.
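To make the data model above concrete, the following is a minimal sketch of a sandbox document with a small validation helper. The field names follow the Deployment, DeploymentState, and Variables sections; all field values, and the choice of which fields to treat as required, are hypothetical examples, not part of the reference architecture.

```python
# Fields treated as required here are an illustrative assumption.
REQUIRED_FIELDS = {"status", "projectId", "templateName", "userId"}

def validate_sandbox_doc(doc: dict) -> list:
    """Return a sorted list of required fields missing from a sandbox document."""
    return sorted(REQUIRED_FIELDS - doc.keys())

# Hypothetical example of a sandbox document, shaped like the tables above.
example_doc = {
    "_updateSource": "python",        # last tool to update the document
    "status": "provision_requested",  # see Key Statuses
    "projectId": "sandbox-example-001",
    "templateName": "basic-sandbox",
    "deploymentState": {
        "budgetLimit": 100,
        "currentSpend": 0,
        "expiresAt": "2024-12-31T00:00:00Z",
    },
    "userId": "user@example.com",
    "variables": {"region": "us-central1"},  # passed through to Terraform
    "auditLog": [],
}

print(validate_sandbox_doc(example_doc))  # → []
```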

"},{"location":"reference-architectures/sandboxes/sandbox-modules/#key-statuses","title":"Key Statuses","text":"

The following table describes important statuses that are used during the lifecycle of a deployment.

Status Set By Handled By Meaning provision_requested User Interface firestore-functions The user has requested that a sandbox be provisioned. provision_pending infra-manager-processor infra-manager-processor Indicates the request was received by the infra-manager-processor but the request hasn\u2019t yet been made to Infrastructure Manager. provision_inprogress infra-manager-processor infra-manager-processor Indicates that the request has been submitted to Infrastructure Manager and it is in progress with Infrastructure Manager. provision_error infra-manager-processor infra-manager-processor The deployment process has failed with an error. provision_successful infra-manager-processor infra-manager-processor The deployment process has succeeded and the sandbox is available and running. delete_requested User Interface firestore-functions The user or lifecycle process has requested that a sandbox be deleted. delete_pending infra-manager-processor infra-manager-processor Indicates the delete request was received by the infra-manager-processor but the request hasn\u2019t yet been made to Infrastructure Manager. delete_inprogress infra-manager-processor infra-manager-processor Indicates that the delete request has been submitted to Infrastructure Manager and it is in progress with Infrastructure Manager. delete_error infra-manager-processor infra-manager-processor The delete process has failed with an error. delete_successful infra-manager-processor infra-manager-processor The delete process has succeeded."}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Platform Engineering on Google Cloud","text":"
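The statuses in the table above can be read as a simple state machine. The following sketch encodes the transitions that the table implies; the transition pairs are inferred, and the real processors may allow additional paths, so treat this as an illustration rather than the implemented logic.

```python
# Transition pairs inferred from the Key Statuses table (an assumption,
# not the reference implementation).
ALLOWED_TRANSITIONS = {
    "provision_requested": {"provision_pending"},
    "provision_pending": {"provision_inprogress"},
    "provision_inprogress": {"provision_successful", "provision_error"},
    "provision_successful": {"delete_requested"},
    "delete_requested": {"delete_pending"},
    "delete_pending": {"delete_inprogress"},
    "delete_inprogress": {"delete_successful", "delete_error"},
}

def can_transition(current: str, new: str) -> bool:
    """Check whether a status change matches the documented lifecycle."""
    return new in ALLOWED_TRANSITIONS.get(current, set())
```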

Platform engineering is an emerging practice that enables cross-functional collaboration in organizations to deliver business value faster. It treats internal groups, such as application developers, operators, security teams, and infrastructure admins, as customers and provides them with foundational platforms to accelerate their work. The key goals of platform engineering are self-service for everything, golden paths, improved collaboration, and abstraction of technical complexity, all of which simplify the software development lifecycle and contribute to delivering business value to consumers. Platform engineering is especially effective in cloud computing because it helps realize the benefits the cloud makes possible, such as automation, security, productivity, and faster time-to-market.

"},{"location":"#overview","title":"Overview","text":"

Google Cloud offers decomposable, elastic, secure, scalable, and cost-efficient tools built on the guiding principles of platform engineering. With a focus on developer experience and innovation, and with practices like SRE embedded into the tools, they are a good place to begin your platform journey and to empower developers to improve their experience and productivity.

This repository contains a collection of guides, examples, and design patterns spanning Google Cloud products and best-in-class OSS tools, which you can use to help build an internal developer platform.

For more information, see Platform Engineering on Google Cloud.

"},{"location":"#resources","title":"Resources","text":""},{"location":"#design-patterns","title":"Design Patterns","text":""},{"location":"#research-papers-and-white-papers","title":"Research papers and white papers","text":""},{"location":"#guides-and-building-blocks","title":"Guides and Building Blocks","text":""},{"location":"#manage-developer-environments-at-scale","title":"Manage Developer Environments at Scale","text":""},{"location":"#self-service-and-automation-patterns","title":"Self-service and Automation patterns","text":""},{"location":"#run-third-party-cicd-tools-on-google-cloud-infrastructure","title":"Run third-party CI/CD tools on Google Cloud infrastructure","text":""},{"location":"#enterprise-change-management","title":"Enterprise change management","text":""},{"location":"#application-migrations-and-modernization","title":"Application migrations and modernization","text":""},{"location":"#end-to-end-examples","title":"End-to-end Examples","text":""},{"location":"#usage-disclaimer","title":"Usage Disclaimer","text":"

Copy any code you need from this repository into your own project.

Warning: Do not depend directly on the samples in this repository. Breaking changes may be made at any time without warning.

"},{"location":"#contributing-changes","title":"Contributing changes","text":"

Entirely new samples are not accepted. Bugfixes are welcome, either as pull requests or as GitHub issues.

See CONTRIBUTING.md for details on how to contribute.

"},{"location":"#licensing","title":"Licensing","text":"

Copyright 2024 Google LLC. Code in this repository is licensed under the Apache License 2.0. See LICENSE.

"},{"location":"code-of-conduct/","title":"Code of Conduct","text":""},{"location":"code-of-conduct/#our-pledge","title":"Our Pledge","text":"

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

"},{"location":"code-of-conduct/#our-standards","title":"Our Standards","text":"

Examples of behavior that contributes to creating a positive environment include:

Examples of unacceptable behavior by participants include:

"},{"location":"code-of-conduct/#our-responsibilities","title":"Our Responsibilities","text":"

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

"},{"location":"code-of-conduct/#scope","title":"Scope","text":"

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project email address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

This Code of Conduct also applies outside the project spaces when the Project Steward has a reasonable belief that an individual's behavior may have a negative impact on the project or its community.

"},{"location":"code-of-conduct/#conflict-resolution","title":"Conflict Resolution","text":"

We do not believe that all conflict is bad; healthy debate and disagreement often yield positive results. However, it is never okay to be disrespectful or to engage in behavior that violates the project\u2019s code of conduct.

If you see someone violating the code of conduct, you are encouraged to address the behavior directly with those involved. Many issues can be resolved quickly and easily, and this gives people more control over the outcome of their dispute. If you are unable to resolve the matter for any reason, or if the behavior is threatening or harassing, report it. We are dedicated to providing an environment where participants feel welcome and safe.

Reports should be directed to [PROJECT STEWARD NAME(s) AND EMAIL(s)], the Project Steward(s) for [PROJECT NAME]. It is the Project Steward\u2019s duty to receive and address reported violations of the code of conduct. They will then work with a committee consisting of representatives from the Open Source Programs Office and the Google Open Source Strategy team. If for any reason you are uncomfortable reaching out to the Project Steward, please email opensource@google.com.

We will investigate every complaint, but you may not receive a direct response. We will use our discretion in determining when and how to follow up on reported incidents, which may range from not taking action to permanent expulsion from the project and project-sponsored spaces. We will notify the accused of the report and provide them an opportunity to discuss it before any action is taken. The identity of the reporter will be omitted from the details of the report supplied to the accused. In potentially harmful situations, such as ongoing harassment or threats to anyone's safety, we may take action without notice.

"},{"location":"code-of-conduct/#attribution","title":"Attribution","text":"

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

"},{"location":"contributing/","title":"How to Contribute","text":"

We'd love to accept your patches and contributions to this project.

"},{"location":"contributing/#before-you-begin","title":"Before you begin","text":""},{"location":"contributing/#sign-our-contributor-license-agreement","title":"Sign our Contributor License Agreement","text":"

Contributions to this project must be accompanied by a Contributor License Agreement (CLA). You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project.

If you or your current employer have already signed the Google CLA (even if it was for a different project), you probably don't need to do it again.

Visit https://cla.developers.google.com/ to see your current agreements or to sign a new one.

"},{"location":"contributing/#review-our-community-guidelines","title":"Review our Community Guidelines","text":"

This project follows Google's Open Source Community Guidelines.

"},{"location":"contributing/#contribution-process","title":"Contribution process","text":""},{"location":"contributing/#code-reviews","title":"Code Reviews","text":"

All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.

"},{"location":"contributing/#development-guide","title":"Development guide","text":"

This document contains technical information to contribute to this repository.

"},{"location":"contributing/#site","title":"Site","text":"

This repository includes scripts and configuration to build a site using Material for MkDocs:

"},{"location":"contributing/#build-the-site","title":"Build the site","text":"

To build the site, run the following command from the root of the repository:

scripts/run-mkdocs.sh\n
"},{"location":"contributing/#preview-the-site","title":"Preview the site","text":"

To preview the site, run the following command from the root of the repository:

scripts/run-mkdocs.sh \"serve\"\n
"},{"location":"contributing/#linting-and-formatting","title":"Linting and formatting","text":"

We configured several linters and formatters for code and documentation in this repository. Linting and formatting checks run as part of CI workflows.

Linting and formatting checks are configured to check changed files only by default. If you change the configuration of any linter or formatter, these checks run against the entire repository.

To run linting and formatting checks locally, run the following:

scripts/lint.sh\n

To automatically fix certain linting and formatting errors, run the following:

LINTER_CONTAINER_FIX_MODE=\"true\" scripts/lint.sh\n
"},{"location":"reference-architectures/accelerating-migrations/","title":"Accelerate migrations through platform engineering golden paths","text":"

This document helps you adopt platform engineering by designing a process to onboard and migrate your existing applications to use your internal developer platform (IDP). It also provides guidance to help you evaluate the opportunity to design a platform engineering process, and to explore how it might function. Google Cloud provides tools, products, guidance, and professional services to help you adopt platform engineering in your environments.

This document is aimed at the following personas:

The Cloud Native Computing Foundation defines a golden path as an integrated bundle of templates and documentation for rapid project development. Designing and developing golden paths can help facilitate the onboarding and the migration of existing applications to your IDP. When you use a golden path, your development and operations teams can take advantage of benefits like the following:

Onboarding and migrating existing applications to the IDP can let you experience the benefits of adopting platform engineering gradually and incrementally in your organization, without spending effort on large-scale migration projects.

To migrate applications and onboard them to the IDP, we recommend that you design an application onboarding and migration process. This document describes a reference application onboarding and migration process. We recommend that you tailor the process to your requirements and your IDP.

If you're migrating your applications from your on-premises environment or from another cloud provider to Google Cloud, the application onboarding and migration process can help you to accelerate your migration. In that scenario, the teams that are managing the migration can refer to well-established golden paths, instead of having to design their own migration processes and project templates.

"},{"location":"reference-architectures/accelerating-migrations/#application-onboarding-and-migration-process","title":"Application onboarding and migration process","text":"

The goal of the application onboarding and migration process is to get an application on the IDP. After you onboard and migrate the application to the IDP, your teams can benefit from using the IDP. When you use an IDP, you can focus on providing business value for the application, rather than spending effort on ad-hoc processes and operations.

To manage the complexity of the application onboarding and migration process, we recommend that you design the process in the following phases:

  1. Intake the application onboarding and migration request.
  2. Assess the application to onboard and migrate.
  3. Set up and, if necessary, extend the IDP to accommodate the needs of the application to onboard and migrate.
  4. Onboard and migrate the application.
  5. Optimize the application.

The high-level structure of this process matches the Google Cloud migration path. In this case, you follow the migration path to onboard and migrate existing applications on the IDP.

To ensure that the application onboarding and migration is on the right track, we recommend that you design validation checkpoints for each phase of the process, rather than having a single acceptance testing task. Having validation checkpoints for each phase helps you to promptly detect issues as they arise, rather than when you are close to the end of the migration.
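The per-phase checkpoint recommendation above can be sketched as a small driver that validates each phase before moving on, instead of running a single acceptance test at the end. The phase names come from the list earlier in this section; the checkpoint functions themselves are hypothetical placeholders for whatever validation your organization defines.

```python
# Phase names from this document; the checkpoint callables are assumed
# placeholders for organization-specific validation.
PHASES = [
    "intake",
    "assess",
    "set_up_idp",
    "onboard_and_migrate",
    "optimize",
]

def run_with_checkpoints(checkpoints: dict) -> str:
    """Run phases in order; stop at the first checkpoint that fails.

    Phases without an explicit checkpoint are treated as passing.
    """
    for phase in PHASES:
        check = checkpoints.get(phase, lambda: True)
        if not check():
            return f"blocked at {phase}"
    return "completed"
```

The benefit is the same as described above: a failed checkpoint surfaces the problem at the phase where it arises, not at the end of the migration.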

Even when following a phased process, onboarding and migrating complex applications to the IDP might require a significant effort, and it might pose risks. To manage the effort and the risks of onboarding and migrating complex applications to the IDP, you can follow the onboarding and migration process iteratively, by migrating parts of the application on each iteration. For example, if an application is composed of multiple components, you can onboard and migrate one component for each iteration of the process.

To reduce toil, we recommend that you thoroughly document the application onboarding and migration process, and make it as self-service as possible, in line with platform-engineering principles.

In this document, we assume that the onboarding and migration process involves three teams:

The following sections describe each phase of the application onboarding and migration process.

"},{"location":"reference-architectures/accelerating-migrations/#intake-the-onboarding-and-migration-request","title":"Intake the onboarding and migration request","text":"

The first phase of the application onboarding and migration process is to intake the request to onboard and migrate the application. The request process is as follows:

  1. The application onboarding and migration team files the onboarding and migration request.
  2. The IDP receives the request, and it recommends existing golden paths.
  3. If the IDP can't suggest an existing golden path, the IDP forwards the request to the team that manages the IDP for further evaluation.

We recommend that you keep this phase as light as possible by using a form or a guided, self-service process. For example, you can include migration guidance in the IDP documentation so that development teams can review it and prepare for the migration. You can also implement automated checks in your IDP to give initial feedback to development teams about potential migration blockers and issues.
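The automated checks mentioned above might look like the following sketch: inspect an intake request and report potential migration blockers early. The request fields and blocker rules here are purely illustrative assumptions, since the actual checks depend on your IDP.

```python
# Hypothetical intake-check sketch; field names and rules are illustrative.
def intake_feedback(request: dict) -> list:
    """Return a list of potential blockers for an onboarding request."""
    blockers = []
    if not request.get("golden_path"):
        blockers.append("no matching golden path; escalate to the IDP team")
    if request.get("requires_data_locality") and not request.get("idp_supports_locality"):
        blockers.append("data locality requirement not yet supported by the IDP")
    return blockers
```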

To assist and offer consultation to the teams that filed or intend to file an application onboarding and migration request, we recommend that the team that manages the IDP establish communication channels to offer assistance to other teams. For example, the team that manages the IDP might set up dedicated discussion groups, chat rooms, and office hours where they can offer help and answer questions about the IDP. To help with onboarding and migration of complex applications and to facilitate communications, you can also attach a member of the team that manages the IDP to the application team while the migration is in progress.

"},{"location":"reference-architectures/accelerating-migrations/#plan-application-onboarding-and-migration","title":"Plan application onboarding and migration","text":"

As part of this phase, we recommend that the application onboarding and migration team start drafting an onboarding and migration plan, even if the team doesn't have all of the data points to fully define it. As the team progresses through the assessment phase, they gather information to finalize and validate the plan.

To manage the complexity of the migration plan, we recommend that you decompose it across the following sub-tasks:

Developing a comprehensive onboarding and migration plan is crucial to the success of the application onboarding and migration process. Having a plan helps you to define clear deadlines, assign responsibilities, and deal with unanticipated issues.

"},{"location":"reference-architectures/accelerating-migrations/#assess-the-application","title":"Assess the application","text":"

The second phase of the application onboarding and migration process is to follow up on the intake request by assessing the application to onboard and migrate to the IDP. The goal of this assessment phase is to produce the following artifacts:

These outputs of the assessment phase help you to plan and complete the migration. The outputs also help you to scope the enhancements that the IDP needs to support the application, and to increase the velocity of future migrations.

To manage the complexity of the assessment phase, we recommend that you decompose it into the following steps:

  1. Review the application design.
  2. Review application dependencies.
  3. Review continuous integration and continuous deployment (CI/CD) processes.
  4. Review data persistence and data management requirements.
  5. Review FinOps requirements.
  6. Review compliance requirements.
  7. Review the application team practices.
  8. Assess application refactoring and the IDP.
  9. Finalize the application onboarding and migration plan.

The preceding steps are described in the following sections. For more information about assessing applications and defining migration plans, see Migrate to Google Cloud: Assess and discover your workloads.

"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-design","title":"Review the application design","text":"

To gather a comprehensive understanding about the design of the application, we recommend that you complete a thorough assessment of the following aspects of the application:

Understanding the application architecture helps you to design and implement an effective onboarding and migration process for your application. It also helps you anticipate issues and potential problems that might arise during the migration. For example, if the architecture of your application to onboard and migrate to the IDP isn't compatible with your IDP, you might need to spend additional effort to refactor the application and enhance the IDP.

The application to onboard and migrate to the IDP might have dependencies on systems and data that are outside the scope of the application. To understand these dependencies, we recommend that you gather information about any reliance of your application on external systems and data, such as databases, datasets, and APIs. After you gather information, you classify the dependencies in order of importance and criticality. For example, your application might need access to a database to store persistent data and to external APIs that provide critical functionality to users, while it might have only an optional dependency on a caching system.

Understanding the dependencies of your application on external systems and data is crucial to plan for continued access to these dependencies during and after the migration.

"},{"location":"reference-architectures/accelerating-migrations/#review-application-dependencies","title":"Review application dependencies","text":""},{"location":"reference-architectures/accelerating-migrations/#review-cicd-processes","title":"Review CI/CD processes","text":"

After you review the application design and its dependencies, we recommend that you refine the assessment about your application's deployable artifacts by reviewing your application's CI/CD processes. These processes usually let you build the artifacts to deploy the application and let you deploy them in your runtime environments. For example, you refine the assessment by answering questions about these CI/CD processes, such as the following:

Understanding how the application's CI/CD processes work helps you evaluate whether your IDP can support these CI/CD processes as is, or if you need to enhance your IDP to support them. For example, if your application has a business-critical requirement on a canary deployment process and your IDP doesn't support it, you might need to factor in additional effort to enhance the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#review-data-persistence-and-data-management-requirements","title":"Review data persistence and data management requirements","text":"

By completing the previous tasks of the assessment phase, you gathered information about the statefulness of the application and about the systems that the application uses to store persistent and transient data. In this section, you refine the assessment to develop a deeper understanding of the systems that the application uses to store stateful data. We recommend that you gather information on data persistence and data management requirements of your application. For example, you refine the assessment by answering questions such as the following:

Understanding your application's data persistence and data management requirements helps you to ensure that your IDP and your production environment can effectively support the application. This understanding can also help you determine whether you need to enhance the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#review-finops-requirements","title":"Review FinOps requirements","text":"

As part of the assessment of your application, we recommend that you gather data about the FinOps requirements of your application, such as budget control and cost management, and evaluate whether your IDP supports them. For example, the application might require mechanisms to control spending and manage costs, such as sending alerts, and mechanisms to stop spending entirely when it reaches a certain budget limit.

Understanding your application's FinOps requirements helps you to ensure that you keep your application costs under control. It also helps you to establish proper cost attribution and cost optimization practices.

"},{"location":"reference-architectures/accelerating-migrations/#review-compliance-requirements","title":"Review compliance requirements","text":"

The application to onboard and migrate to the IDP and its runtime environment might have to meet compliance requirements, especially in regulated industries. We recommend that you assess the compliance requirements of the application, and evaluate if the IDP already supports them. For example, the application might require isolation from other workloads, or it might have data locality requirements.

Understanding your application's compliance requirements helps you to scope the necessary refactoring and enhancements for your application and for the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-team-practices","title":"Review the application team practices","text":"

After you review the application, we recommend that you gather information about team practices and the methodologies for developing and operating the application. For example, the team might already have adopted DevOps principles, they might be already implementing Site Reliability Engineering (SRE), or they might be already familiar with platform engineering and with the IDP.

By gathering information about the team that develops and operates the application to migrate, you gain insights about the experience and the maturity of that team. You also learn whether there's a need to spend effort to train team members to proficiently use the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#assess-application-refactoring-and-the-idp","title":"Assess application refactoring and the IDP","text":"

After you gather information about the application, its development and operation teams, and its requirements, you evaluate the following:

The goal of this task is to answer the following questions:

  1. Does the application need any refactoring to onboard and migrate it to the IDP?
  2. Are there any new services or processes that the IDP should offer to migrate the application?
  3. Does the IDP meet the compliance and regulatory requirements that the application requires?

By answering these questions, you focus on evaluating potential onboarding and migration blockers. For example, you might experience the following onboarding and migration blockers:

The application development and operations team is responsible for the application refactoring tasks.

When you scope any enhancements that the IDP needs to support the application, we recommend that you frame these enhancements within the broader vision that you have for the IDP, not as a standalone exercise. We also recommend that you consider your IDP as a product for which you should develop a path to success. For example, if you're considering adding a new service to the IDP, we recommend that you evaluate how that service fits in the path to success for your IDP, in addition to the technical feasibility of the initiative.

By assessing the refactoring effort that's required to onboard and migrate the application, you develop a comprehensive understanding of the tasks that you need to complete to refactor the application and how you need to enhance the IDP to support the application.

"},{"location":"reference-architectures/accelerating-migrations/#finalize-the-application-onboarding-and-migration-plan","title":"Finalize the application onboarding and migration plan","text":"

To complete the assessment phase, you finalize the application onboarding and migration plan with consideration of the data that you gathered. To finalize the plan, you do the following:

"},{"location":"reference-architectures/accelerating-migrations/#set-up-the-idp","title":"Set up the IDP","text":"

After you complete the assessment phase, you use its outputs to:

  1. Enhance the IDP by adding missing features and services.
  2. Configure the IDP to support the application.
"},{"location":"reference-architectures/accelerating-migrations/#enhance-the-idp","title":"Enhance the IDP","text":"

During the assessment phase, you scoped the enhancements that the IDP needs to support the application and how those enhancements fit in your plans for the IDP. In this task, you design and implement those enhancements. For example, you might need to enhance the IDP as follows:

By enhancing the IDP to support the application, you unblock the migration. You also help streamline processes for onboarding and migration projects for other applications that might need those IDP enhancements.

"},{"location":"reference-architectures/accelerating-migrations/#configure-the-idp","title":"Configure the IDP","text":"

After you enhance the IDP, if needed, you configure it to provide the resources that the application needs. For example, you configure the following IDP services for the application, or a subset of services:

By configuring the IDP, you prepare it to host the application that you want to onboard and migrate.

"},{"location":"reference-architectures/accelerating-migrations/#onboard-and-migrate-the-application","title":"Onboard and migrate the application","text":"

In this phase, you onboard and migrate the application to the IDP by completing the following tasks:

  1. Refactor the application to apply the changes that are necessary to onboard and migrate it on the IDP.
  2. Configure CI/CD workflows for the application and deploy the application in the development environment.
  3. Promote the application from the development environment to the staging environment.
  4. Perform acceptance testing.
  5. Migrate data from the source environment to the production environment.
  6. Promote the application from the staging environment to the production environment and ensure the application's operational readiness.
  7. Perform the cutover from the source environment.

By completing the preceding tasks, you onboard and migrate the application to the IDP. The following sections describe these tasks in more detail.

"},{"location":"reference-architectures/accelerating-migrations/#refactor-the-application","title":"Refactor the application","text":"

In the assessment phase, you scoped the refactoring that your application needs in order to onboard and migrate it to the IDP. In this task, you design and implement that refactoring. For example, you might need to refactor your application in the following ways in order to meet the IDP's requirements:

By refactoring the application, you prepare it for onboarding and migration to the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#configure-cicd-workflows","title":"Configure CI/CD workflows","text":"

After you refactor the application, you do the following:

  1. Configure CI/CD workflows to deploy the application.
  2. Optionally migrate deployable artifacts from the source environment.
  3. Deploy the application in the development environment.
"},{"location":"reference-architectures/accelerating-migrations/#configure-cicd-workflows-to-deploy-the-application","title":"Configure CI/CD workflows to deploy the application","text":"

To build deployable artifacts and deploy them in your runtime environments, we recommend that you avoid manual processes. Instead of manual processes, configure CI/CD workflows by using the application delivery services that the IDP provides and store deployable artifacts in IDP-managed artifact repositories. For example, you can configure CI/CD workflows by using the following methods:

  1. Configure Cloud Build to build container images and store them in Artifact Registry.
  2. Configure a Cloud Deploy pipeline to automate delivery of your application.

When you build the CI/CD workflows for your environment, consider how many runtime environments the IDP supports. For example, the IDP might support different runtime environments that are isolated from each other such as the following:

If the IDP supports multiple runtime environments for the application, you need to configure the CI/CD workflows for the application to support promoting the application's deployable artifact. You should plan for promoting the application from development to staging, and then from staging to production.

When you promote the application from one environment to the next environment, we recommend that you avoid rebuilding the application's deployable artifacts. Rebuilding creates new artifacts, which means that you would be deploying something different than what you tested and validated.
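The build-once, promote-everywhere principle described above can be sketched in Python. The environment names, registry path, and digest are illustrative assumptions, not part of any real IDP:

```python
# Sketch: build one immutable artifact, pinned by digest, and promote
# that same reference across environments instead of rebuilding it.
# Registry path, app name, and digest below are illustrative.

def build_once(registry: str, app: str, digest: str) -> str:
    """The build happens exactly once; the result is addressed by digest."""
    return f"{registry}/{app}@{digest}"

def promote(artifact: str, deployments: dict, env: str) -> None:
    """Deploy the already-built artifact reference; never rebuild."""
    deployments[env] = artifact

deployments: dict = {}
artifact = build_once("registry.example/team", "app", "sha256:abc123")

for env in ("development", "staging", "production"):
    promote(artifact, deployments, env)

# Every environment runs the exact artifact that was tested earlier.
assert len(set(deployments.values())) == 1
```

Because the artifact is addressed by an immutable digest rather than rebuilt per environment, what reaches production is byte-for-byte what passed testing in staging.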

"},{"location":"reference-architectures/accelerating-migrations/#migrate-deployable-artifacts-from-the-source-environment","title":"Migrate deployable artifacts from the source environment","text":"

If you need to support rolling back to previous versions of the application, you can migrate previous versions of the deployable artifacts that you built for the application from the source environment to an IDP-managed artifact repository. For example, if your application is containerized, you can migrate the container images that you built to deploy the application to Artifact Registry.

"},{"location":"reference-architectures/accelerating-migrations/#deploy-the-application-in-the-development-environment","title":"Deploy the application in the development environment","text":"

After configuring CI/CD workflows to build deployable artifacts for the application and to promote them from one environment to another, you deploy the application in the development environment using the CI/CD workflows that you configured.

By using CI/CD workflows to build deployable artifacts and deploy the application, you avoid manual processes that are less repeatable and more prone to errors. You also validate that the CI/CD workflows work as expected.

"},{"location":"reference-architectures/accelerating-migrations/#promote-from-development-to-staging","title":"Promote from development to staging","text":"

To promote your application from the development environment to the staging environment, you do the following:

  1. Test the application and verify that it works as expected.
  2. Fix any unanticipated issues.
  3. Promote the application from the development environment to the staging environment.

By promoting the application from the development environment to the staging environment, you accomplish the following:

"},{"location":"reference-architectures/accelerating-migrations/#perform-acceptance-testing","title":"Perform acceptance testing","text":"

After you promote the application to your staging environment, you perform extensive acceptance testing for both functional and non-functional requirements. When you perform acceptance testing, we recommend that you validate that the user journeys and the business processes that the application implements are working properly in situations that resemble real-world usage scenarios. For example, when you perform acceptance testing, you can do the following:

Acceptance testing helps you ensure that the application works as expected in an environment that resembles the production environment, and helps you identify unanticipated issues.

"},{"location":"reference-architectures/accelerating-migrations/#migrate-data","title":"Migrate data","text":"

After you complete acceptance testing for the application, you migrate data from the source environment to IDP-managed services such as the following:

To migrate data from your source environment to IDP-managed services, you can choose approaches like the following, depending on your requirements:

Each of the preceding approaches focuses on solving specific issues, and there's no approach that's inherently better than the others. For more information about migrating data to Google Cloud and choosing the best data migration approach for your application, see Migrate to Google Cloud: Transfer your large datasets.

If your data is stored in services managed by other cloud providers, see the following resources:

Migrating data from one environment to another is a complex task. If you think that the data migration is too complex to handle as part of the application onboarding and migration process, consider migrating the data as part of a dedicated migration project.

"},{"location":"reference-architectures/accelerating-migrations/#promote-from-staging-to-production","title":"Promote from staging to production","text":"

After you complete data migration and acceptance testing, you promote the application to the production environment. To complete this task, you do the following:

  1. Promote the application from the staging environment to the production environment. The process is similar to when you promoted the application from the development environment to the staging environment: you use the IDP-managed CI/CD workflows that you configured for the application to promote it from the staging environment to the production environment.
  2. Ensure the application's operational readiness. For example, to help you avoid performance issues if the application requires a cache, ensure that the cache is properly initialized.
  3. Fix any unanticipated issues.

When you check the application's operational readiness before you promote it from the staging environment to the production environment, you ensure that the application is ready for the production environment.
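The operational-readiness gate described above can be sketched as a set of checks that must all pass before promotion proceeds. The specific checks (a warmed cache, a reachable database) are illustrative assumptions about what an application might verify:

```python
# Sketch: gate the production promotion behind operational-readiness
# checks. The check names and probes below are illustrative.

from typing import Callable

def ready_for_production(checks: dict) -> tuple:
    """Run every check; return overall readiness plus the list of failures."""
    failures = [name for name, check in checks.items() if not check()]
    return (not failures, failures)

cache = {"warmed": True}
checks = {
    "cache_initialized": lambda: cache["warmed"],
    "database_reachable": lambda: True,  # stand-in for a real probe
}

ok, failures = ready_for_production(checks)
assert ok and not failures
```

Running every check and reporting all failures, rather than stopping at the first, gives the team a complete picture of what still blocks the promotion.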

"},{"location":"reference-architectures/accelerating-migrations/#perform-the-cutover","title":"Perform the cutover","text":"

After you promote the application to the production environment and ensure that it works as expected, you configure the production environment to gradually route requests for the application to the newly promoted application release. For example, you can implement a canary deployment strategy that uses Cloud Deploy.

After you validate that the application continues to work as expected while the number of requests to the newly promoted application increases, you do the following:

  1. Configure your production environment to route all of the requests to your newly promoted application.
  2. Retire the application in the source environment.
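The gradual routing step above can be sketched as an incremental traffic shift that verifies health before each increase. The step size and the health probe are illustrative assumptions; in practice a tool like Cloud Deploy manages this for you:

```python
# Sketch: shift traffic to the newly promoted release in fixed
# increments, checking health at each step before continuing.
# The step size and health probe below are illustrative.

from typing import Callable

def shift_traffic(step: int, healthy: Callable[[int], bool]) -> int:
    """Raise the new release's traffic share step by step.

    Returns the final percentage routed to the new release; holds at
    the last healthy level if a health check fails mid-rollout.
    """
    percent = 0
    while percent < 100:
        percent = min(percent + step, 100)
        if not healthy(percent):
            return percent - step  # hold at the last healthy level
    return percent

# A release that stays healthy ends up receiving all traffic.
assert shift_traffic(20, lambda p: True) == 100
```

Holding at the last healthy level, instead of continuing the rollout, leaves most traffic on the known-good release while the team investigates.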

Before you retire the application in the source environment, we recommend that you prepare backups and a rollback plan. Doing so will help you handle unanticipated issues that might force you to go back to using the source environment.

"},{"location":"reference-architectures/accelerating-migrations/#optimize-the-application","title":"Optimize the application","text":"

Optimization is the last phase of the onboarding and migration process. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. For each iteration, you do the following:

  1. Assess your current environment, teams, and optimization loop.
  2. Establish your optimization requirements and goals.
  3. Optimize your environment and your teams.
  4. Tune the optimization loop.

You repeat the preceding sequence until you achieve your optimization goals.

For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization.

The following sections integrate the considerations in Migrate to Google Cloud: Optimize your environment.

"},{"location":"reference-architectures/accelerating-migrations/#establish-your-optimization-requirements","title":"Establish your optimization requirements","text":"

Optimization requirements help you to narrow the scope of the current optimization iteration. To establish your optimization requirements for the application, start by considering the following aspects:

For each aspect, we recommend that you establish your optimization requirements for the application. Then, you set measurable optimization goals to meet those requirements. For more information about optimization requirements and goals, see Establish your optimization requirements and goals.

After you meet the optimization requirements for the application, you have completed the onboarding and migration process for the application.

"},{"location":"reference-architectures/accelerating-migrations/#optimize-the-onboarding-and-migration-process-and-the-idp","title":"Optimize the onboarding and migration process and the IDP","text":"

After you onboard and migrate the application, you use the data that you gathered about the process and about the IDP to refine and optimize both. As in the optimization phase for your application, you complete the same optimization tasks, but with a focus on the onboarding and migration process and on the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#establish-your-optimization-requirements-for-the-idp","title":"Establish your optimization requirements for the IDP","text":"

To narrow down the scope of optimizing the onboarding and migration process and the IDP, you establish optimization requirements based on the data that you gather while running through the process. For example, during the onboarding and migration of an application, you might face unanticipated issues that involve the process and the IDP, such as the following:

To address the issues that arise while you're onboarding and migrating an application, you establish optimization requirements. For example, you might establish the following optimization requirements to address the example issues described above:

After establishing optimization requirements, you set measurable optimization goals to meet those requirements. For more information about optimization requirements and goals, see Establish your optimization requirements and goals.

"},{"location":"reference-architectures/accelerating-migrations/#application-onboarding-and-migration-example","title":"Application onboarding and migration example","text":"

In this section, you explore what the onboarding and migration process looks like for an example application. The example that we describe in this section doesn't represent a real production application.

To reduce the scope of the example, we focus the example on the following environments:

This document focuses on the onboarding and migration process. For more information about migrating from Amazon EKS to GKE, see Migrate from AWS to Google Cloud: Migrate from Amazon EKS to GKE.

To onboard and migrate the application on the IDP, you follow the onboarding and migration process.

"},{"location":"reference-architectures/accelerating-migrations/#intake-the-onboarding-and-migration-request-example","title":"Intake the onboarding and migration request (example)","text":"

In this example, the application onboarding and migration team files a request to onboard and migrate the application on the IDP. To fully present the onboarding and migration process, we assume that the IDP cannot find an existing golden path to suggest for onboarding and migrating the application, so it forwards the request to the team that manages the IDP for further evaluation.

"},{"location":"reference-architectures/accelerating-migrations/#plan-application-onboarding-and-migration-example","title":"Plan application onboarding and migration (example)","text":"

To define timelines and milestones to onboard and migrate the application on the IDP, the application onboarding and migration team prepares a countdown plan:

| Phase | Task | Countdown [days] | Status |
|---|---|---|---|
| Assess the application | Review the application design | -27 | Not started |
| | Review application dependencies | -23 | Not started |
| | Review CI/CD processes | -21 | Not started |
| | Review data persistence and data management requirements | -21 | Not started |
| | Review FinOps requirements | -20 | Not started |
| | Review compliance requirements | -20 | Not started |
| | Review the application's team practices | -19 | Not started |
| | Assess application refactoring and the IDP | -19 | Not started |
| | Finalize the application onboarding and migration plan | -18 | Not started |
| Set up the IDP | Enhance the IDP | N/A | Not necessary |
| | Configure the IDP | -17 | Not started |
| Onboard and migrate the application | Refactor the application | -15 | Not started |
| | Configure CI/CD workflows | -9 | Not started |
| | Promote from development to staging | -6 | Not started |
| | Perform acceptance testing | -5 | Not started |
| | Migrate data | -3 | Not started |
| | Promote from staging to production | -1 | Not started |
| | Perform the cutover | 0 | Not started |
| Optimize the application | Assess your current environment, teams, and optimization loop | 1 | Not started |
| | Establish your optimization requirements and goals | 1 | Not started |
| | Optimize your environment and your teams | 3 | Not started |
| | Tune the optimization loop | 5 | Not started |

To clearly outline responsibility assignments, the application onboarding and migration team defines the following RACI matrix for each phase and task of the process:

| Phase | Task | Application onboarding and migration team | Application development and operations team | IDP team |
|---|---|---|---|---|
| Assess the application | Review the application design | Responsible | Accountable | Informed |
| | Review application dependencies | Responsible | Accountable | Informed |
| | Review CI/CD processes | Responsible | Accountable | Informed |
| | Review data persistence and data management requirements | Responsible | Accountable | Informed |
| | Review FinOps requirements | Responsible | Accountable | Informed |
| | Review compliance requirements | Responsible | Accountable | Informed |
| | Review the application's team practices | Responsible | Accountable | Informed |
| | Assess application refactoring and the IDP | Responsible | Accountable | Consulted |
| | Plan application onboarding and migration | Responsible | Accountable | Consulted |
| Set up the IDP | Enhance the IDP | Accountable | Consulted | Responsible |
| | Configure the IDP | Responsible, Accountable | Consulted | Consulted |
| Onboard and migrate the application | Refactor the application | Accountable | Responsible | Consulted |
| | Configure CI/CD workflows | Responsible, Accountable | Consulted | Consulted |
| | Promote from development to staging | Responsible, Accountable | Consulted | Informed |
| | Perform acceptance testing | Responsible, Accountable | Consulted | Informed |
| | Migrate data | Responsible, Accountable | Consulted | Consulted |
| | Promote from staging to production | Responsible, Accountable | Consulted | Informed |
| | Perform the cutover | Responsible, Accountable | Consulted | Informed |
| Optimize the application | Assess your current environment, teams, and optimization loop | Informed | Responsible, Accountable | Informed |
| | Establish your optimization requirements and goals | Informed | Responsible, Accountable | Informed |
| | Optimize your environment and your teams | Informed | Responsible, Accountable | Informed |
| | Tune the optimization loop | Informed | Responsible, Accountable | Informed |

"},{"location":"reference-architectures/accelerating-migrations/#assess-the-application-example","title":"Assess the application (example)","text":"

In the assessment phase, the application onboarding and migration team assesses the application by completing the assessment phase tasks.

"},{"location":"reference-architectures/accelerating-migrations/#review-the-application-design-example","title":"Review the application design (example)","text":"

The application onboarding and migration team reviews the application design, and gathers the following information:

  1. Application source code. The application source code is available on the company source code management and hosting solution.
  2. Deployable artifacts. The application is fully containerized using a single Open Container Initiative (OCI) container image. The container image uses Debian as a base image.
  3. Configuration injection. The application supports injecting configuration using environment variables and configuration files. Environment variables take precedence over configuration files. The application reads runtime- and environment-specific configuration from a Kubernetes ConfigMap.
  4. Security requirements. Container images need to be scanned for vulnerabilities. Also, container images need to be verified for authenticity and bills of materials. The application requires periodic secret rotation. The application doesn't allow direct access to its production runtime environment.
  5. Identity and access management. The application requires a dedicated service account with the minimum set of permissions to work correctly.
  6. Observability requirements. The application logs messages to the stdout and stderr streams, and exposes metrics and traces in OpenTelemetry format. The application requires SLO monitoring for uptime and request error rates.
  7. Availability and reliability requirements. The application is not business critical, and can afford two hours of downtime at maximum. The application is designed to shed load under degraded conditions, and is capable of automated recovery after a loss of connectivity.
  8. Network and connectivity requirements. The application needs:

    The application doesn't require any specific service mesh configuration.

  9. Statefulness. The application stores persistent data on Amazon Relational Database Service (Amazon RDS) for PostgreSQL and on Amazon Simple Storage Service (Amazon S3).

  10. Runtime environment requirements. The application doesn't depend on any preview Kubernetes features, and doesn't need dependencies outside what is packaged in its container image.
  11. Development tools and environments. The application doesn't have any dependency on specific IDEs or development hardware.
  12. Multi-tenancy requirements. The application doesn't have any multi-tenancy requirements.
"},{"location":"reference-architectures/accelerating-migrations/#review-application-dependencies-example","title":"Review application dependencies (example)","text":"

The application onboarding and migration team reviews dependencies on systems that are outside the scope of the application, and gathers the following information:

"},{"location":"reference-architectures/accelerating-migrations/#review-cicd-processes-example","title":"Review CI/CD processes (example)","text":"

The application onboarding and migration team reviews the application's CI/CD processes, and gathers the following information:

"},{"location":"reference-architectures/accelerating-migrations/#review-data-persistence-and-data-management-requirements-example","title":"Review data persistence and data management requirements (example)","text":"

The application onboarding and migration team reviews data persistence and data management requirements, and gathers the following information:

The application onboarding and migration team is also tasked with migrating data from Amazon RDS for PostgreSQL and Amazon S3 to the database and object storage services offered by the IDP. In this example, the IDP offers Cloud SQL for PostgreSQL as a database service, and Cloud Storage as an object storage service.

As part of this application dependency review, the application onboarding and migration team assesses the application's Amazon RDS database and the Amazon S3 buckets. For simplicity, we omit details about those assessments from this example. For more information about assessing Amazon RDS and Amazon S3, see the Assess the source environment sections in the following documents:

"},{"location":"reference-architectures/accelerating-migrations/#review-finops-requirements-example","title":"Review FinOps requirements (example)","text":"

The application onboarding and migration team reviews FinOps requirements, and gathers the following information:

"},{"location":"reference-architectures/accelerating-migrations/#review-compliance-requirements-example","title":"Review compliance requirements (example)","text":"

The application onboarding and migration team reviews compliance requirements, and gathers the following information:

"},{"location":"reference-architectures/accelerating-migrations/#review-the-applications-team-practices","title":"Review the application's team practices","text":"

The application onboarding and migration team reviews development and operational practices that the application development and operations team has in place, and gathers the following information:

The application onboarding and migration team suggests the following:

"},{"location":"reference-architectures/accelerating-migrations/#assess-application-refactoring-and-the-idp-example","title":"Assess application refactoring and the IDP (example)","text":"

After reviewing the application and its CI/CD processes, the application onboarding and migration team assesses the refactoring that the application needs in order to onboard and migrate it to the IDP, and scopes the following refactoring tasks:

The application onboarding and migration team evaluates the IDP against the application's requirements, and concludes that:

"},{"location":"reference-architectures/accelerating-migrations/#finalize-the-application-onboarding-and-migration-plan-example","title":"Finalize the application onboarding and migration plan (example)","text":"

After completing the application review, the application onboarding and migration team refines the onboarding and migration plan, and validates it in collaboration with technical and non-technical stakeholders.

"},{"location":"reference-architectures/accelerating-migrations/#set-up-the-idp-example","title":"Set up the IDP (example)","text":"

After the teams assess the application and plan the onboarding and migration process, they set up the IDP.

"},{"location":"reference-architectures/accelerating-migrations/#enhance-the-idp-example","title":"Enhance the IDP (example)","text":"

The IDP team doesn't need to enhance the IDP to onboard and migrate the application because:

"},{"location":"reference-architectures/accelerating-migrations/#configure-the-idp-example","title":"Configure the IDP (example)","text":"

The application onboarding and migration team configures the runtime environments for the application using the IDP: a development environment, a staging environment, and a production environment. For each environment, the application onboarding and migration team completes the following tasks:

  1. Configures foundational services:

    1. Creates a new Google Cloud project.
    2. Configures IAM roles and service accounts.
    3. Configures a VPC and a subnet.
    4. Creates DNS records in the DNS zone.
  2. Provisions and configures a GKE cluster for the application.

  3. Provisions and configures a Cloud SQL for PostgreSQL instance.
  4. Provisions and configures two Cloud Storage buckets.
  5. Provisions and configures an Artifact Registry repository for container images.
  6. Sets up Cloud Operations Suite to observe the application.
  7. Configures Cloud Billing budget and budget alerts for the application.
"},{"location":"reference-architectures/accelerating-migrations/#onboard-and-migrate-the-application-example","title":"Onboard and migrate the application (example)","text":"

To onboard and migrate the application, the application development and operations team refactors the application and then the application onboarding and migration team proceeds with the onboarding and migration process.

"},{"location":"reference-architectures/accelerating-migrations/#refactor-the-application-example","title":"Refactor the application (example)","text":"

The application development and operations team refactors the application as follows:

  1. Refactors the application to read from and write objects to Cloud Storage, instead of Amazon S3.
  2. Updates the application configuration to use the Cloud SQL for PostgreSQL instance instead of the Amazon RDS for PostgreSQL instance.
  3. Exposes the metrics that the IDP needs to observe the application.
  4. Updates application dependencies that are affected by known vulnerabilities.
"},{"location":"reference-architectures/accelerating-migrations/#configure-cicd-workflows-example","title":"Configure CI/CD workflows (example)","text":"

To configure CI/CD workflows, the application onboarding and migration team does the following:

  1. Refactors the application CI workflow to push container images to the Artifact Registry repository, in addition to Amazon ECR.
  2. Implements a Cloud Deploy pipeline to automatically deploy the application, and promote it across runtime environments.
  3. Deploys the application in the development environment using the Cloud Deploy pipeline.
"},{"location":"reference-architectures/accelerating-migrations/#promote-the-application-from-development-to-staging","title":"Promote the application from development to staging","text":"

After deploying the application in the development environment, the application onboarding and migration team:

  1. Tests the application, and verifies that it works as expected.
  2. Promotes the application from the development environment to the staging environment.
"},{"location":"reference-architectures/accelerating-migrations/#perform-acceptance-testing-example","title":"Perform acceptance testing (example)","text":"

After promoting the application from the development environment to the staging environment, the application onboarding and migration team performs acceptance testing.

To perform acceptance testing to validate the application's real-world user journeys and business processes, the application onboarding and migration team consults with the application development and operations team.

The application onboarding and migration team performs acceptance testing as follows:

  1. Ensures that the application works as expected when dealing with amounts of data and traffic that are similar to production ones.
  2. Validates that the application works as designed under degraded conditions, and that it recovers once the issues are resolved. The application onboarding and migration team tests the following scenarios:

  3. Verifies that observability and alerting for the application are correctly configured.

"},{"location":"reference-architectures/accelerating-migrations/#migrate-data-example","title":"Migrate data (example)","text":"

After completing acceptance testing for the application, the application onboarding and migration team migrates data from the source environment to the Google Cloud environment as follows:

  1. Migrate data from Amazon RDS for PostgreSQL to Cloud SQL for PostgreSQL.
  2. Migrate data from Amazon S3 to Cloud Storage.

For simplicity, this document doesn't describe the details of migrating from Amazon RDS and Amazon S3 to Google Cloud. For more information about migrating from Amazon RDS and Amazon S3 to Google Cloud, see:

"},{"location":"reference-architectures/accelerating-migrations/#promote-from-staging-to-production-example","title":"Promote from staging to production (example)","text":"

After performing acceptance testing and after migrating data to the Google Cloud environment, the application onboarding and migration team:

  1. Promotes the application from the staging environment to the production environment using the Cloud Deploy pipeline.
  2. Ensures the application's operational readiness by verifying that the application:

  3. Correctly connects to the Cloud SQL for PostgreSQL instance

"},{"location":"reference-architectures/accelerating-migrations/#perform-the-cutover-example","title":"Perform the cutover (example)","text":"

After promoting the application to the production environment, and ensuring that the application is operationally ready, the application onboarding and migration team:

  1. Configures the production environment to gradually route requests to the application in 5% increments, until all the requests are routed to the Google Cloud environment.
  2. Refactors the CI workflow to push container images to Artifact Registry only.
  3. Takes backups to ensure that a rollback is possible, in case of unanticipated issues.
  4. Retires the application in the source environment.
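The gradual 5% ramp described in step 1 can be scripted. The following is a minimal sketch that only prints the ramp-up schedule; the actual routing change (for example, updating load balancer weights) is environment-specific and not shown:

```shell
# Print a 5% ramp-up schedule for shifting traffic to the new environment.
# Applying each weight to the routing layer is environment-specific.
weight=0
while [ "$weight" -lt 100 ]; do
  weight=$((weight + 5))
  echo "route ${weight}% of requests to the Google Cloud environment"
done
```

In practice, each increment would be followed by a soak period and a health check before moving to the next weight.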
"},{"location":"reference-architectures/accelerating-migrations/#optimize-the-application-example","title":"Optimize the application (example)","text":"

After performing the cutover, the application development and operations team takes over the maintenance of the application, and establishes the following optimization requirements:

After establishing optimization requirements, the application development and operations team completes the rest of the tasks of the optimization phase.

"},{"location":"reference-architectures/accelerating-migrations/#whats-next","title":"What's next","text":""},{"location":"reference-architectures/accelerating-migrations/#contributors","title":"Contributors","text":"

Authors:

Other contributors:

"},{"location":"reference-architectures/automated-password-rotation/","title":"Overview","text":"

Secrets rotation is a broadly accepted best practice across the information technology industry. However, it is often a cumbersome and disruptive process. In this guide, you will use Google Cloud tools to automate the process of rotating passwords for a Cloud SQL instance. This method can easily be extended to other tools and types of secrets.

"},{"location":"reference-architectures/automated-password-rotation/#storing-passwords-in-google-cloud","title":"Storing passwords in Google Cloud","text":"

In Google Cloud, secrets, including passwords, can be stored using many different tools, including common open source tools such as Vault. In this guide, however, you will use Secret Manager, Google Cloud's fully managed product for securely storing secrets. Regardless of the tool you use, stored passwords should be further secured. When using Secret Manager, the following are some of the ways you can further secure your secrets:

  1. Limiting access : The secrets should be readable and writable only through service accounts via IAM roles. The principle of least privilege must be followed when granting roles to the service accounts.

  2. Encryption : The secrets should be encrypted. Secret Manager encrypts secrets at rest using AES-256 by default, but you can use your own encryption keys, known as customer-managed encryption keys (CMEK), to encrypt your secrets at rest. For details, see Enable customer-managed encryption keys for Secret Manager.

  3. Password rotation : The passwords stored in Secret Manager should be rotated on a regular basis to reduce the risk of a security incident.

"},{"location":"reference-architectures/automated-password-rotation/#why-password-rotation","title":"Why password rotation","text":"

Security best practices require us to regularly rotate the passwords in our stack. Regularly changing a password mitigates the risk in the event that the password is compromised.

"},{"location":"reference-architectures/automated-password-rotation/#how-to-rotate-passwords","title":"How to rotate passwords","text":"

Manually rotating passwords is an antipattern and should be avoided: it exposes the password to the human rotating it and may result in security and system incidents. Manual rotation processes also introduce the risk that the rotation isn't actually performed due to human error, for example forgetting the rotation or making typos.

This necessitates a workflow that automates password rotation. The password could belong to an application, a database, a third-party service, a SaaS vendor, and so on.

"},{"location":"reference-architectures/automated-password-rotation/#automatic-password-rotation","title":"Automatic password rotation","text":"

Typically, rotating a password requires these steps: generating a new password, updating the password in the underlying system (such as applications, databases, SaaS), storing the new password in the secret store, and having the application source the latest password.

The following architecture represents a general design for a system that can rotate passwords for any underlying software or system.
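As a minimal illustration of the password-generation step, assuming a POSIX shell with access to /dev/urandom (the 24-character length and alphanumeric character set are arbitrary choices for this sketch):

```shell
# Generate a 24-character alphanumeric password from /dev/urandom.
# Updating the target system and storing the value in the secret store
# are separate steps and are environment-specific.
NEW_PASSWORD="$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)"
echo "${#NEW_PASSWORD}"
```

A real rotation function would follow this with an update of the target system's credential and a new secret version in the secret store.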

"},{"location":"reference-architectures/automated-password-rotation/#workflow","title":"Workflow","text":""},{"location":"reference-architectures/automated-password-rotation/#example-deployment-for-automatic-password-rotation-in-cloudsql","title":"Example deployment for automatic password rotation in CloudSQL","text":"

The following architecture demonstrates a way to automatically rotate a Cloud SQL password.

"},{"location":"reference-architectures/automated-password-rotation/#workflow-of-the-example-deployment","title":"Workflow of the example deployment","text":"

Note : The architecture doesn't show the flow that restarts the application after the password rotation, as shown in the generic architecture, but it can be added easily with minimal changes to the Terraform code.

"},{"location":"reference-architectures/automated-password-rotation/#deploy-the-architecture","title":"Deploy the architecture","text":"

The code to build the architecture is provided in this repository. Follow these instructions to create the architecture and use it:

  1. Open Cloud Shell on Google Cloud Console and log in with your credentials.

  2. If you want to use an existing project, get the roles/owner role on the project and set the environment in Cloud Shell as shown below. Then, move to step 4.

     #set shell environment variable\n export PROJECT_ID=<PROJECT_ID>\n

    Replace <PROJECT_ID> with the ID of the existing project.

  3. If you want to create a new Google Cloud project, run the following commands in Cloud Shell.

     #set shell environment variable\n export PROJECT_ID=<PROJECT_ID>\n #create project\n gcloud projects create ${PROJECT_ID} --folder=<FOLDER_ID>\n #associate the project with billing account\n gcloud billing projects link ${PROJECT_ID} --billing-account=<BILLING_ACCOUNT_ID>\n

    Replace <PROJECT_ID> with the ID of the new project, <FOLDER_ID> with the ID of the folder to create the project in, and <BILLING_ACCOUNT_ID> with the billing account ID that the project should be associated with.

  4. Set the project ID in Cloud Shell and enable APIs in the project:

     gcloud config set project ${PROJECT_ID}\n gcloud services enable \\\n  cloudresourcemanager.googleapis.com \\\n  serviceusage.googleapis.com \\\n  --project ${PROJECT_ID}\n
  5. Download the Git repository containing the code to build the example architecture:

     cd ~\n git clone https://github.com/GoogleCloudPlatform/platform-engineering\n cd platform-engineering/reference-architectures/automated-password-rotation/terraform\n\n terraform init\n terraform plan -var \"project_id=$PROJECT_ID\"\n terraform apply -var \"project_id=$PROJECT_ID\" --auto-approve\n

    Note: It takes around 30 minutes for the entire architecture to be deployed.

"},{"location":"reference-architectures/automated-password-rotation/#review-the-deployed-architecture","title":"Review the deployed architecture","text":"

Once the Terraform apply has successfully finished, the example architecture will be deployed in your Google Cloud project. Before exercising the rotation process, review and verify the deployment in the Google Cloud Console.

"},{"location":"reference-architectures/automated-password-rotation/#review-cloud-sql-database","title":"Review Cloud SQL database","text":"
  1. In the Cloud Console, using the navigation menu select Databases > SQL. Confirm that cloudsql-for-pg is present in the instance list.
  2. Click on cloudsql-for-pg to open the instance details page.
  3. In the left hand menu select Users. Confirm you see a user with the name user1.
  4. In the left hand menu select Databases. Confirm you see a database named test.
  5. In the left hand menu select Overview.
  6. In the Connect to this instance section, note that only a private IP address is present and no public IP address. This restricts access to the instance over the public network.
"},{"location":"reference-architectures/automated-password-rotation/#review-secret-manager","title":"Review Secret Manager","text":"
  1. In the Cloud Console, using the navigation menu select Security > Secret Manager. Confirm that cloudsql-pswd is present in the list.
  2. Click on cloudsql-pswd.
  3. Click the three dots icon and select View secret value to view the password for the Cloud SQL database.
  4. Copy the secret value; you will use it in the next section to confirm access to the Cloud SQL instance.
"},{"location":"reference-architectures/automated-password-rotation/#review-cloud-scheduler-job","title":"Review Cloud Scheduler job","text":"
  1. In the Cloud Console, using the navigation menu select Integration Services > Cloud Scheduler. Confirm that password-rotator-job is present in the Scheduler Jobs list.
  2. Click on password-rotator-job and confirm it is configured to run on the 1st of every month.
  3. Click Continue to see the execution configuration. Confirm the following settings:

  4. Click Cancel to exit the Cloud Scheduler job details.

"},{"location":"reference-architectures/automated-password-rotation/#review-pubsub-topic-configuration","title":"Review Pub/Sub topic configuration","text":"
  1. In the Cloud Console, using the navigation menu select Analytics > Pub/Sub.
  2. In the left hand menu select Topics. Confirm that pswd-rotation-topic is present in the topics list.
  3. Click on pswd-rotation-topic.
  4. In the Subscriptions tab, click on the Subscription ID for the rotator Cloud Function.
  5. Click on the Details tab. Confirm that the Audience tag shows the rotator Cloud Function.
  6. In the left hand menu select Topics.
  7. Click on pswd-rotation-topic.
  8. Click on the Details tab.
  9. Click on the schema in the Schema name field.
  10. In the Details, confirm that the schema contains these keys: secretid, instance_name, db_user, db_name and db_location. These keys are used to identify which database and user password is to be rotated.
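A rotation request message matching this schema might look as follows. The values shown are hypothetical examples, and the exact message format expected by the deployed function may differ:

```shell
# Construct a rotation request with the keys the schema expects.
# The values below are hypothetical examples.
MESSAGE='{"secretid":"cloudsql-pswd","instance_name":"cloudsql-for-pg","db_user":"user1","db_name":"test","db_location":"us-central1"}'
for key in secretid instance_name db_user db_name db_location; do
  case "$MESSAGE" in *"\"$key\""*) echo "has $key" ;; esac
done
# In practice such a message would be published with:
#   gcloud pubsub topics publish pswd-rotation-topic --message "$MESSAGE"
```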
"},{"location":"reference-architectures/automated-password-rotation/#review-cloud-run-function","title":"Review Cloud Run Function","text":"
  1. In the Cloud Console, using the navigation menu select Serverless > Cloud Run Functions. Confirm that pswd_rotator_function is present in the list.
  2. Click on pswd_rotator_function.
  3. Click on the Trigger tab. Confirm that the field Receive events from has the Pub/Sub topic pswd-rotation-topic. This indicates that the function will run when a message arrives on that topic.
  4. Click on the Details tab. Confirm that under Network Settings, VPC connector is set to connector-for-sql. This allows the function to connect to Cloud SQL over private IP.
  5. Click on the Source tab to see the Python code that the function executes.

Note: For the purposes of this tutorial, the secret is accessible to human users and is not encrypted with customer-managed keys. See the Storing passwords in Google Cloud section and the Secret Manager best practices.

"},{"location":"reference-architectures/automated-password-rotation/#verify-that-you-are-able-to-connect-to-the-cloud-sql-instance","title":"Verify that you are able to connect to the Cloud SQL instance","text":"
  1. In the Cloud Console, using the navigation menu select Databases > SQL.
  2. Click on cloudsql-for-pg.
  3. In the left hand menu select Cloud SQL Studio.
  4. In the Database dropdown, choose test.
  5. In the User dropdown, choose user1.
  6. In the Password textbox, paste the password copied from the cloudsql-pswd secret.
  7. Click Authenticate. Confirm you were able to log in to the database.
"},{"location":"reference-architectures/automated-password-rotation/#rotate-the-cloud-sql-password","title":"Rotate the Cloud SQL password","text":"

Typically, the Cloud Scheduler job will automatically run on the 1st day of every month, triggering password rotation. However, for this tutorial you will run the Cloud Scheduler job manually, which causes the Cloud Run Function to generate a new password, update it in Cloud SQL, and store it in Secret Manager.

  1. In the Cloud Console, using the navigation menu select Integration Services > Cloud Scheduler.
  2. For the scheduler job password-rotator-job, click the three dots icon and select Force run.
  3. Verify that the Status of last execution shows Success.
  4. In the Cloud Console, using the navigation menu select Serverless > Cloud Run Functions.
  5. Click the function named pswd_rotator_function.
  6. Select the Logs tab.
  7. Review the logs and verify the function has run and completed without errors. Successful completion is indicated by log entries containing Secret cloudsql-pswd changed in Secret Manager!, DB password changed successfully! and DB password verified successfully!.
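These success markers can also be checked mechanically. A sketch that scans a log excerpt for the three expected entries; the excerpt is inlined here for illustration, whereas in practice you would fetch the logs, for example with gcloud functions logs read:

```shell
# Hypothetical log excerpt; in a real run, fetch logs from Cloud Logging.
LOGS='... Secret cloudsql-pswd changed in Secret Manager!
... DB password changed successfully!
... DB password verified successfully!'
for marker in \
  'Secret cloudsql-pswd changed in Secret Manager!' \
  'DB password changed successfully!' \
  'DB password verified successfully!'; do
  printf '%s\n' "$LOGS" | grep -qF "$marker" && echo "found: $marker"
done
```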
"},{"location":"reference-architectures/automated-password-rotation/#test-the-new-password","title":"Test the new password","text":"
  1. In the Cloud Console, using the navigation menu select Security > Secret Manager. Confirm that cloudsql-pswd is present in the list.
  2. Click on cloudsql-pswd. Note that you should now see a new version, version 2, of the secret.
  3. Click the three dots icon and select View secret value to view the password for the Cloud SQL database.
  4. Copy the secret value.
  5. In the Cloud Console, using the navigation menu select Databases > SQL.
  6. Click on cloudsql-for-pg.
  7. In the left hand menu select Cloud SQL Studio.
  8. In the Database dropdown, choose test.
  9. In the User dropdown, choose user1.
  10. In the Password textbox, paste the password copied from the cloudsql-pswd secret.
  11. Click Authenticate. Confirm you were able to log in to the database.
"},{"location":"reference-architectures/automated-password-rotation/#destroy-the-architecture","title":"Destroy the architecture","text":"
  cd platform-engineering/reference-architectures/automated-password-rotation/terraform\n\n  terraform init\n  terraform plan -var \"project_id=$PROJECT_ID\"\n  terraform destroy -var \"project_id=$PROJECT_ID\" --auto-approve\n
"},{"location":"reference-architectures/automated-password-rotation/#conclusion","title":"Conclusion","text":"

In this tutorial, you saw a way to automate password rotation on Google Cloud. First, you saw a generic reference architecture that can be used to automate password rotation in any password management system. Then, you saw an example deployment that uses Google Cloud services to rotate the password of a Cloud SQL database and store it in Secret Manager.

Implementing an automatic flow to rotate passwords removes manual overhead and provides a seamless way to tighten your password security. It is recommended to create an automation flow that runs on a regular schedule but can also be easily triggered manually when needed. There are many variations of this architecture that can be adopted. For example, you can trigger a Cloud Run Function directly from a Cloud Scheduler job, without sending a message to Pub/Sub, if you don't want to broadcast the password rotation. You should identify a flow that fits your organization's requirements and modify the reference architecture to implement it.

"},{"location":"reference-architectures/backstage/","title":"Backstage on Google Cloud","text":"

A collection of resources related to utilizing Backstage on Google Cloud.

"},{"location":"reference-architectures/backstage/#backstage-plugins-for-google-cloud","title":"Backstage Plugins for Google Cloud","text":"

A repository for various plugins can be found here -> google-cloud-backstage-plugins

"},{"location":"reference-architectures/backstage/#backstage-quickstart","title":"Backstage Quickstart","text":"

This is an example deployment of Backstage on Google Cloud with various Google Cloud services providing the infrastructure.

"},{"location":"reference-architectures/backstage/backstage-quickstart/","title":"Backstage on Google Cloud Quickstart","text":"

This quick-start deployment guide can be used to set up an environment to familiarize yourself with the architecture and get an understanding of the concepts related to hosting Backstage on Google Cloud.

NOTE: This environment is not intended to be long lived. It is intended for temporary demonstration and learning purposes. You will need to modify the configurations provided to align with your organization's needs. Along the way, the guide will call out tasks or areas that should be productionized for long-lived deployments.

"},{"location":"reference-architectures/backstage/backstage-quickstart/#architecture","title":"Architecture","text":"

The following diagram depicts the high-level architecture of the infrastructure that will be deployed.

"},{"location":"reference-architectures/backstage/backstage-quickstart/#requirements-and-assumptions","title":"Requirements and Assumptions","text":"

To keep this guide simple, it makes a few assumptions. Where there are alternatives, we have linked to some additional documentation.

  1. The Backstage quick start will be deployed in a new project that you will manually create. If you want to use a project managed through Terraform refer to the Terraform managed project section.
  2. Identity Aware Proxy (IAP) will be used for controlling access to Backstage.
"},{"location":"reference-architectures/backstage/backstage-quickstart/#before-you-begin","title":"Before you begin","text":"

In this section you prepare a folder for deployment.

  1. Open the Cloud Console
  2. Activate Cloud Shell \\ At the bottom of the Cloud Console, a Cloud Shell session starts and displays a command-line prompt.
"},{"location":"reference-architectures/backstage/backstage-quickstart/#project-creation","title":"Project Creation","text":"

In this section you prepare your project for deployment.

  1. Go to the project selector page in the Cloud Console. Select or create a Cloud project.

  2. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.

  3. In Cloud Shell, set environment variables with the ID of your project:

    export PROJECT_ID=<INSERT_YOUR_PROJECT_ID>\ngcloud config set project \"${PROJECT_ID}\"\n
  4. Clone the repository and change directory to the guide directory

    git clone https://github.com/GoogleCloudPlatform/platform-engineering && \\\ncd platform-engineering/reference-architectures/backstage/backstage-quickstart\n
  5. Set environment variables

    export BACKSTAGE_QS_BASE_DIR=$(pwd) && \\\nsed -n -i -e '/^export BACKSTAGE_QS_BASE_DIR=/!p' -i -e '$aexport  \\\nBACKSTAGE_QS_BASE_DIR=\"'\"${BACKSTAGE_QS_BASE_DIR}\"'\"' ${HOME}/.bashrc\n
"},{"location":"reference-architectures/backstage/backstage-quickstart/#project-configuration","title":"Project Configuration","text":"
  1. Set the project environment variables in Cloud Shell

    export BACKSTAGE_QS_STATE_BUCKET=\"${PROJECT_ID}-terraform\"\nexport IAP_USER_DOMAIN=\"<your org's domain>\"\nexport IAP_SUPPORT_EMAIL=\"<your org's support email>\"\n
  2. Create a Cloud Storage bucket to store the Terraform state

    gcloud storage buckets create gs://${BACKSTAGE_QS_STATE_BUCKET} --project ${PROJECT_ID}\n
"},{"location":"reference-architectures/backstage/backstage-quickstart/#deploy-backstage","title":"Deploy Backstage","text":"

Before running Terraform, make sure that the Service Usage API and Service Management API are enabled.

  1. Enable Service Usage API and Service Management API

    gcloud services enable \\\n  cloudresourcemanager.googleapis.com \\\n  iap.googleapis.com \\\n  serviceusage.googleapis.com \\\n  servicemanagement.googleapis.com\n
  2. Setup the Identity Aware Proxy brand

    gcloud iap oauth-brands create \\\n  --application_title=\"IAP Secured Backstage\" \\\n  --project=\"${PROJECT_ID}\" \\\n  --support_email=\"${IAP_SUPPORT_EMAIL}\"\n

    Capture the brand name in an environment variable; it will be in the format projects/[your_project_number]/brands/[your_project_number].

    export IAP_BRAND=<your_brand_name>\n
  3. Using the brand name create the IAP client.

    gcloud iap oauth-clients create \\\n  ${IAP_BRAND} \\\n  --display_name=\"IAP Secured Backstage\"\n

    Capture the client_id and client_secret in environment variables. For the client_id we only need the last segment of the string; it will be in the format 549085115274-ksi3n9n41tp1vif79dda5ofauk0ebes9.apps.googleusercontent.com

    export IAP_CLIENT_ID=\"<your_client_id>\"\nexport IAP_SECRET=\"<your_iap_secret>\"\n
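    Extracting that trailing segment can be done with shell parameter expansion. A sketch, using a hypothetical full resource name standing in for the value returned by the previous command:

```shell
# Hypothetical full OAuth client resource name; only the trailing
# segment (after the last '/') is the client ID.
FULL_CLIENT_NAME="projects/111111111111/brands/111111111111/identityAwareProxyClients/549085115274-ksi3n9n41tp1vif79dda5ofauk0ebes9.apps.googleusercontent.com"
IAP_CLIENT_ID="${FULL_CLIENT_NAME##*/}"
echo "$IAP_CLIENT_ID"
```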
  4. Set the configuration variables

    sed -i \"s/YOUR_STATE_BUCKET/${BACKSTAGE_QS_STATE_BUCKET}/g\" ${BACKSTAGE_QS_BASE_DIR}/backend.tf\nsed -i \"s/YOUR_PROJECT_ID/${PROJECT_ID}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_USER_DOMAIN/${IAP_USER_DOMAIN}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_SUPPORT_EMAIL/${IAP_SUPPORT_EMAIL}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_CLIENT_ID/${IAP_CLIENT_ID}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\nsed -i \"s/YOUR_IAP_SECRET/${IAP_SECRET}/g\" ${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars\n
  5. Create the resources

    cd ${BACKSTAGE_QS_BASE_DIR} && \\\nterraform init && \\\nterraform plan -input=false -out=tfplan && \\\nterraform apply -input=false tfplan && \\\nrm tfplan\n

    The initial run of Terraform may result in errors due to the way the API services are asynchronously enabled. Re-running Terraform usually resolves the errors.

    This will take a while to create all of the required resources; expect somewhere between 15 and 20 minutes.

  6. Build the container image for Backstage

    cd manifests/cloudbuild\ngcloud builds submit .\n

    The output of that command will include a fully qualified image path similar to: us-central1-docker.pkg.dev/[your_project]/backstage-qs/backstage-quickstart:d747db2a-deef-4783-8a0e-3b36e568f6fc. Using that value, create a new environment variable.

    export IMAGE_PATH=\"<your_image_path>\"\n

    This will take approximately 10 minutes to build and push the image.

  7. Configure Cloud SQL postgres user for password authentication.

    gcloud sql users set-password postgres --instance=backstage-qs --prompt-for-password\n
  8. Grant the backstage workload service account create database permissions.

    a. In the Cloud Console, navigate to SQL

    b. Select the database instance

    c. In the left menu select Cloud SQL Studio

    d. Choose the postgres database and log in with the postgres user and the password you created in step 7.

    e. Run the following SQL command to grant create database permissions

    ALTER USER \"backstage-qs-workload@[your_project_id].iam\" CREATEDB;\n
  9. Perform an initial deployment of Kubernetes resources.

    cd ../k8s\nsed -i \"s%CONTAINER_IMAGE%${IMAGE_PATH}%g\" deployment.yaml\ngcloud container clusters get-credentials backstage-qs --region us-central1 --dns-endpoint\nkubectl apply -f .\n
  10. Capture the IAP audience, the Backend Service may take a few minutes to appear.

    a. In the Cloud Console, navigate to Security > Identity-Aware Proxy

    b. Verify the IAP option is set to enabled. If not, enable it now.

    c. Choose Get JWT audience code from the three dot menu on the right side of your Backend Service.

    d. The value will be in the format: /projects/<your_project_number>/global/backendServices/<numeric_id>. Using that value, create a new environment variable.

    export IAP_AUDIENCE_VALUE=\"<your_iap_audience_value>\"\n
  11. Redeploy the Kubernetes manifests with the IAP audience

    sed -i \"s%IAP_AUDIENCE_VALUE%${IAP_AUDIENCE_VALUE}%g\" deployment.yaml\nkubectl apply -f .\n
  12. In a browser, navigate to your Backstage endpoint. The URL will be similar to https://qs.endpoints.[your_project_id].cloud.goog

"},{"location":"reference-architectures/backstage/backstage-quickstart/#cleanup","title":"Cleanup","text":"
  1. Destroy the resources using Terraform destroy

    cd ${BACKSTAGE_QS_BASE_DIR} && \\\nterraform init && \\\nterraform destroy -auto-approve && \\\nrm -rf .terraform .terraform.lock.hcl\n
  2. Delete the project

    gcloud projects delete ${PROJECT_ID}\n
  3. Remove Terraform files and temporary files

    cd ${BACKSTAGE_QS_BASE_DIR} && \\\nrm -rf \\\n.terraform \\\n.terraform.lock.hcl \\\ninitialize/.terraform \\\ninitialize/.terraform.lock.hcl \\\ninitialize/backend.tf.local \\\ninitialize/state\n
  4. Reset the TF variables file

    cd ${BACKSTAGE_QS_BASE_DIR} && \\\ncp backstage-qs-auto.tfvars.local backstage-qs.auto.tfvars\n
  5. Remove the environment variables

    sed \\\n-i -e '/^export BACKSTAGE_QS_BASE_DIR=/d' \\\n${HOME}/.bashrc\n
"},{"location":"reference-architectures/backstage/backstage-quickstart/#advanced-options","title":"Advanced Options","text":""},{"location":"reference-architectures/backstage/backstage-quickstart/#terraform-managed-project","title":"Terraform managed project","text":"

In some instances you will need to create and manage the project through Terraform. This quickstart provides a sample process and Terraform configuration to create and destroy the project via Terraform.

To run this part of the quick start you will need the following information and permissions.

"},{"location":"reference-architectures/backstage/backstage-quickstart/#creating-a-terraform-managed-project","title":"Creating a Terraform managed project","text":"
  1. Set the configuration variables

    nano ${BACKSTAGE_QS_BASE_DIR}/initialize/initialize.auto.tfvars\n
    environment_name  = \"qs\"\niapUserDomain = \"\"\niapSupportEmail = \"\"\nproject = {\n  billing_account_id = \"XXXXXX-XXXXXX-XXXXXX\"\n  folder_id          = \"############\"\n  name               = \"backstage\"\n  org_id             = \"############\"\n}\n

    Values required :

  2. Authorize gcloud

    gcloud auth login --activate --no-launch-browser --quiet --update-adc\n
  3. Create a new project

    cd ${BACKSTAGE_QS_BASE_DIR}/initialize\nterraform init && \\\nterraform plan -input=false -out=tfplan && \\\nterraform apply -input=false tfplan && \\\nrm tfplan && \\\nterraform init -force-copy -migrate-state && \\\nrm -rf state\n
  4. Set the project environment variables in Cloud Shell

    PROJECT_ID=$(grep environment_project_id \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars |\nawk -F\"=\" '{print $2}' | xargs)\n
"},{"location":"reference-architectures/backstage/backstage-quickstart/#cleaning-up-a-terraform-managed-project","title":"Cleaning up a Terraform managed project","text":"
  1. Destroy the project

    cd ${BACKSTAGE_QS_BASE_DIR}/initialize && \\\nTERRAFORM_BUCKET_NAME=$(grep bucket backend.tf | awk -F\"=\" '{print $2}' |\nxargs) && \\\ncp backend.tf.local backend.tf && \\\nterraform init -force-copy -lock=false -migrate-state && \\\ngcloud storage rm --recursive --continue-on-error gs://${TERRAFORM_BUCKET_NAME}/* && \\\nterraform init && \\\nterraform destroy -auto-approve  && \\\nrm -rf .terraform .terraform.lock.hcl state/\n
"},{"location":"reference-architectures/backstage/backstage-quickstart/#re-using-an-existing-project","title":"Re-using an Existing Project","text":"

In situations where you have run this quickstart before and then cleaned up the resources but are re-using the project, it might be necessary to restore the endpoints from a deleted state first.

BACKSTAGE_QS_PREFIX=$(grep environment_name \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars | awk -F\"=\" '{print $2}' | xargs)\nBACKSTAGE_QS_PROJECT_ID=$(grep environment_project_id \\\n${BACKSTAGE_QS_BASE_DIR}/backstage-qs.auto.tfvars | awk -F\"=\" '{print $2}' | xargs)\ngcloud endpoints services undelete \\\n${BACKSTAGE_QS_PREFIX}.endpoints.${BACKSTAGE_QS_PROJECT_ID}.cloud.goog \\\n--quiet 2>/dev/null\n
"},{"location":"reference-architectures/cloud_deploy_flow/","title":"Platform Engineering Deployment Demo","text":""},{"location":"reference-architectures/cloud_deploy_flow/#background","title":"Background","text":"

Platform engineering focuses on providing a robust framework for managing the deployment of applications across various environments. One of the critical components in this field is the automation of application deployments, which streamlines the entire process from development to production.

Most organizations have predefined rules around security, privacy, deployment, and change management to ensure consistency and compliance across environments. These rules often include automated security scans, privacy checks, and controlled release protocols that track all changes in both production and pre-production environments.

In this demo, the architecture is designed to show how a deployment tool like Cloud Deploy can integrate smoothly into such workflows, supporting both automation and oversight. The process starts with release validation, ensuring that only compliant builds reach the release stage. Rollout approvals then offer flexibility, allowing teams to implement either manual checks or automated responses depending on specific requirements.

This setup provides a blueprint for organizations to streamline deployment cycles while maintaining robust governance. By using this demo, you can see how these components work together, from container build through deployment, in a way that minimizes disruption to existing processes and aligns with typical organizational change management practices.

This demo showcases a complete workflow that begins with the build of a container and progresses through various stages, ultimately resulting in the deployment of a new application.

"},{"location":"reference-architectures/cloud_deploy_flow/#overview-of-the-demo","title":"Overview of the Demo","text":"

This demo illustrates the end-to-end deployment process, starting from the container build phase. Here's a high-level overview of the workflow:

  1. Container Build Process: The demo begins when a container is built in Cloud Build. Upon completion, a notification is sent to a Pub/Sub message queue.

  2. Release Logic: A Cloud Run Function subscribes to this message queue, assessing whether a release should be created. If a release is warranted, a message is sent to a \"Command Queue\" (another Pub/Sub topic).

  3. Creating a Release: A dedicated function listens to the \"Command Queue\" and communicates with Cloud Deploy to create a new release. Once the release is created, a notification is dispatched to the Pub/Sub Operations topic.

  4. Rollout Process: Another Cloud Function picks up this notification and initiates the rollout process by sending a createRolloutRequest to the \"Command Queue.\"

  5. Approval Process: Since rollouts typically require approval, a notification is sent to the cloud-deploy-approvals Pub/Sub queue. An approval function then picks up this message, allowing you to implement your custom logic or use the provided Demo site to return JSON, such as { \"manualApproval\": \"true\" }.

  6. Deployment: Once approved, the rollout proceeds, and the new application is deployed.

"},{"location":"reference-architectures/cloud_deploy_flow/#prerequisites","title":"Prerequisites","text":""},{"location":"reference-architectures/cloud_deploy_flow/#iam-roles-used-by-terraform","title":"IAM Roles used by Terraform","text":"

To run this demo, the following IAM roles will be granted to the service account created by the Terraform configuration:

"},{"location":"reference-architectures/cloud_deploy_flow/#gcp-services-enabled-by-terraform","title":"GCP Services enabled by Terraform","text":"

The following Google Cloud services must be enabled in your project to run this demo:

"},{"location":"reference-architectures/cloud_deploy_flow/#getting-started","title":"Getting Started","text":"

To run this demo, follow these steps:

  1. Fork and Clone the Repository: Start by forking this repository to your GitHub account (So you can connect GCP to this repository), then clone it to your local environment. After cloning, change your directory to the deployment demo:

    cd platform-engineering/reference-architectures/cloud_deploy_flow\n

    Note: you can't use a repository that belongs to a GitHub organization for this demo; use your personal account instead.

  2. Set Up Environment Variables or Variables File: You can set the necessary variables either by exporting them as environment variables or by creating a terraform.tfvars file. Refer to variables.tf for more details on each variable. Ensure the values match your Google Cloud project and GitHub configuration.

    For the repo-name and repo-owner variables, use the repository you forked and cloned above.

  3. Initialize and Apply Terraform: With the environment variables set, initialize and apply the Terraform configuration:

    terraform init\nterraform apply\n

    Note: Applying Terraform may take a few minutes as it creates the necessary resources.

  4. Connect GitHub Repository to Cloud Build: Due to occasional issues with automatic connections, you may need to manually attach your GitHub repository to Cloud Build in the Google Cloud Console.

    If you get the following error, you will need to manually connect your repository to the project:

    Error: Error creating Trigger: googleapi: Error 400: Repository mapping does\nnot exist.\n

    After connecting the repository, re-run step 3 to ensure all resources are deployed.

  5. Navigate to the Demo site: Once the Terraform setup is complete, switch to the Demo site directory:

    cd platform-engineering/reference-architectures/cloud_deploy_flow/WebsiteDemo\n
  6. Authenticate and Run the Demo site:

  7. Trigger a Build in Cloud Build:

  8. Approve the Rollout: When an approval message is received, you\u2019ll need to send a response to complete the deployment. Use the message data provided and add a ManualApproval field:

    {\n    \"message\": {\n    \"data\": \"<base64-encoded data>\",\n    \"attributes\": {\n        \"Action\": \"Required\",\n        \"Rollout\": \"rollout-123\",\n        \"ReleaseId\": \"release-456\",\n        \"ManualApproval\": \"true\"\n    }\n    }\n}\n
  9. Verify the Deployment: Once the approval is processed, the deployment should finish rolling out. Check the Cloud Deploy dashboard in the Google Cloud Console to confirm the deployment status.

"},{"location":"reference-architectures/cloud_deploy_flow/#conclusion","title":"Conclusion","text":"

This demo encapsulates the essential components and workflow for deploying applications using platform engineering practices. It illustrates how various services interact to ensure a smooth deployment process.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/","title":"Cloud Deployment Approvals with Pub/Sub","text":"

This project provides a Google Cloud Run Function to automate deployment approvals based on messages received via Google Cloud Pub/Sub. The function processes deployment requests, checks conditions for rollout approval, and publishes an approval command if the requirements are met.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#usage-note","title":"Usage Note","text":"

This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#features","title":"Features","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#setup","title":"Setup","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#requirements","title":"Requirements","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#installation","title":"Installation","text":"
  1. Clone the repository:

    git clone <repository-url>\ncd <repository-folder>\n
  2. Enable APIs: Enable the Google Cloud Pub/Sub and Deploy APIs for your project:

    gcloud services enable pubsub.googleapis.com deploy.googleapis.com\n
  3. Deploy the Function: Use Google Cloud SDK to deploy the function:

    gcloud functions deploy cloudDeployApprovals --runtime go116 \\\n--trigger-topic YOUR_SUBSCRIBE_TOPIC\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#environment-variables","title":"Environment Variables","text":"

The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:

Variable Name    Description                              Required
PROJECTID        Google Cloud project ID                  Yes
LOCATION         The deployment location (region)         Yes
SENDTOPICID      Pub/Sub topic ID for sending commands    Yes
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#code-structure","title":"Code Structure","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#usage","title":"Usage","text":"

The function cloudDeployApprovals is invoked whenever a message is published to the configured Pub/Sub topic. Upon receiving a message, the function will:

  1. Parse and validate the message.
  2. Check if the action is Required, if a rollout ID is provided, and if manual approval is marked as \"true.\"
  3. If conditions are met, it will publish an approval command to the SENDTOPICID topic.
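
The three checks above can be sketched against a Pub/Sub message envelope like the sample shown in the next section. This is an illustrative Python sketch (the deployed function is written in Go), assuming the attributes travel in the standard `message.attributes` map:

```python
# Sketch of the approval gate: all three conditions must hold before an
# approval command is published. Envelope shape assumed from the sample
# Pub/Sub message; attribute names mirror the checks described above.

def should_approve(envelope: dict) -> bool:
    """Apply the three checks to a Pub/Sub message envelope."""
    attrs = envelope.get("message", {}).get("attributes", {})
    return (
        attrs.get("Action") == "Required"          # 1. action is Required
        and bool(attrs.get("Rollout"))             # 2. a rollout ID is present
        and attrs.get("ManualApproval") == "true"  # 3. manual approval granted
    )
```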
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#sample-pubsub-message","title":"Sample Pub/Sub Message","text":"

A message sent to the function should resemble this JSON structure:

{\n  \"message\": {\n    \"data\": \"<base64-encoded data>\",\n    \"attributes\": {\n      \"Action\": \"Required\",\n      \"Rollout\": \"rollout-123\",\n      \"ReleaseId\": \"release-456\",\n      \"ManualApproval\": \"true\"\n    }\n  }\n}\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#custom-manual-approval-field","title":"Custom Manual Approval Field","text":"

In the ApprovalsData struct, there is a ManualApproval field. This field is a custom addition, not provided by Google Cloud Deploy, and serves as a placeholder for an external approval system.

To integrate the approval system, you can replace or adapt this field to suit your existing change process workflow. For instance, you could link this field to an external ticketing or project management system to track and verify approvals. Implementing an approval system allows greater control over deployment rollouts, ensuring they align with your organization\u2019s policies.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployApprovals/#logging","title":"Logging","text":"

The function logs each major step, from invocation to message processing and condition checking, to facilitate debugging and monitoring.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/","title":"Cloud Deploy Interactions with Pub/Sub","text":"

This project demonstrates a Google Cloud Run Function to manage deployments by creating releases, rollouts, or approving rollouts based on incoming Pub/Sub messages. The function leverages Google Cloud Deploy and listens for deployment-related commands sent via Pub/Sub, executing appropriate actions based on the command type.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#usage-note","title":"Usage Note","text":"

This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#features","title":"Features","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#setup","title":"Setup","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#requirements","title":"Requirements","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#installation","title":"Installation","text":"
  1. Clone the repository:

    git clone <repository-url>\ncd <repository-folder>\n
  2. Set up Google Cloud: Ensure you have enabled the Google Cloud Deploy and Pub/Sub APIs in your project.

  3. Deploy the Function: Deploy the function using Google Cloud SDK:

    gcloud functions deploy cloudDeployInteractions --runtime go116 \\\n--trigger-topic YOUR_TOPIC_NAME\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#pubsub-message-format","title":"Pub/Sub Message Format","text":"

The Pub/Sub message should include a JSON payload with a command field specifying the type of deployment action to execute. Examples of the command types include:

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#sample-pubsub-message","title":"Sample Pub/Sub Message","text":"

The message should follow this structure:

{\n  \"message\": {\n    \"data\": \"<base64-encoded JSON containing command data>\"\n  }\n}\n

The JSON inside data should follow the format for DeployCommand:

{\n  \"command\": \"CreateRelease\",\n  \"createReleaseRequest\": {\n    // Release creation parameters\n  },\n  \"createRolloutRequest\": {\n    // Rollout creation parameters\n  },\n  \"approveRolloutRequest\": {\n    // Rollout approval parameters\n  }\n}\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#code-structure","title":"Code Structure","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployInteractions/#logging","title":"Logging","text":"

Each function logs key steps, from initialization to message handling and completion of deployments, helping in troubleshooting and monitoring.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/","title":"Cloud Deploy Operations Function","text":"

This project contains a Google Cloud Run Function written in Go, designed to interact with Google Cloud Deploy. The function listens for deployment events on a Pub/Sub topic, processes those events, and triggers specific deployment operations based on the event details. For instance, when a deployment release succeeds, it triggers a rollout creation and sends the relevant command to another Pub/Sub topic.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#usage-note","title":"Usage Note","text":"

This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#requirements","title":"Requirements","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#environment-variables","title":"Environment Variables","text":"

The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:

Variable Name    Description                              Required
PROJECTID        Google Cloud project ID                  Yes
LOCATION         The deployment location (region)         Yes
SENDTOPICID      Pub/Sub topic ID for sending commands    Yes
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#structure","title":"Structure","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#main-components","title":"Main Components","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#function-workflow","title":"Function Workflow","text":"
  1. Trigger: The function cloudDeployOperations is triggered by a deployment event, specifically a CloudEvent.
  2. Event Parsing: The function parses the event data into a Message struct, checking for deployment success events.
  3. Rollout Creation: If a release success is detected, it creates a CommandMessage for a rollout and calls sendCommandPubSub.
  4. Command Publish: The sendCommandPubSub function publishes the CommandMessage to a designated Pub/Sub topic to initiate the rollout.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#setup-and-deployment","title":"Setup and Deployment","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#local-development","title":"Local Development","text":"
  1. Clone the repository and set up your local environment with the necessary environment variables.
  2. Run the Cloud Run Functions framework locally to test the function:
functions-framework --target=cloudDeployOperations\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#deployment-to-google-cloud-run-functions","title":"Deployment to Google Cloud Run Functions","text":"
  1. Set up your Google Cloud environment and enable the necessary APIs:

    gcloud services enable cloudfunctions.googleapis.com pubsub.googleapis.com \\\nclouddeploy.googleapis.com\n
  2. Deploy the function to Google Cloud:

    gcloud functions deploy cloudDeployOperations \\\n   --runtime go120 \\\n   --trigger-topic <YOUR_TRIGGER_TOPIC> \\\n   --set-env-vars PROJECTID=<YOUR_PROJECT_ID>,LOCATION=<YOUR_LOCATION>,SENDTOPICID=<YOUR_SEND_TOPIC_ID>\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#error-handling","title":"Error Handling","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#license","title":"License","text":"

This project is licensed under the MIT License. See the LICENSE file for details.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/cloudDeployOperations/#notes","title":"Notes","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/","title":"Example Cloud Run Function","text":"

This project demonstrates a Google Cloud Run Function that triggers deployments based on Pub/Sub messages. The function listens for build notifications from Google Cloud Build and initiates a release in Google Cloud Deploy when a build succeeds.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#usage-note","title":"Usage Note","text":"

This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#table-of-contents","title":"Table of Contents","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#prerequisites","title":"Prerequisites","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#environment-variables","title":"Environment Variables","text":"

The function relies on environment variables to specify project configuration. Ensure these are set before deploying the function:

Variable Name    Description                                        Required
PROJECTID        Google Cloud project ID                            Yes
LOCATION         The deployment location (region)                   Yes
PIPELINE         The name of the delivery pipeline in Cloud Deploy  Yes
TRIGGER          The ID of the build trigger in Cloud Build         Yes
SENDTOPICID      Pub/Sub topic ID for sending commands              Yes
"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#function-overview","title":"Function Overview","text":"

The deployTrigger function is invoked by Pub/Sub events. Here's a breakdown of its key components:

  1. Initialization:

  2. Message Handling:

  3. Release Creation:

  4. Random ID Generation:

"},{"location":"reference-architectures/cloud_deploy_flow/CloudFunctions/createRelease/#deploying-the-function","title":"Deploying the Function","text":"

To deploy the function, follow these steps:

  1. Ensure that your Google Cloud SDK is authenticated and configured with the correct project.
  2. Use the following command to deploy the function:
gcloud functions deploy deployTrigger \\\n    --runtime go113 \\\n    --trigger-topic YOUR_TOPIC_NAME \\\n    --env-vars-file .env\n
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/","title":"Random Date Service","text":"

This repository contains a sample application designed to demonstrate how deployments can work through Google Cloud Deploy and Cloud Build. Instead of a traditional \"Hello World\" application, this project generates and serves a random date, showcasing how to set up a cloud-based service.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#usage-note","title":"Usage Note","text":"

This code is designed to integrate with the Terraform configuration for the cloud_deploy_flow demo. While you can deploy this component individually, it's primarily intended to be used as part of the full Terraform-managed workflow. Please note that this section of the readme may be less actively maintained, as the preferred deployment method relies on the Terraform setup.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#overview","title":"Overview","text":"

The Random Date Service is built to illustrate the process of deploying an application using Cloud Run and Cloud Deploy. The application serves a random date formatted as a string. This simple service allows you to explore key concepts in cloud deployment without the complexity of a full-fledged application.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#components","title":"Components","text":""},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#1-maingo","title":"1. main.go","text":"

This is the core of the application, where the HTTP server is defined. It handles requests and responds with a randomly generated date.
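
For reference, the date-generation logic at the heart of the service can be sketched in a few lines. This is a Python illustration only (the actual service is implemented in Go in main.go), and the date range is an arbitrary choice for the example:

```python
import random
from datetime import date, timedelta

# Illustrative sketch of the service's core behavior: pick a uniformly
# random day and format it as a string. Range bounds are arbitrary.

def random_date(rng: random.Random,
                start: date = date(1970, 1, 1),
                end: date = date(2030, 12, 31)) -> str:
    """Return a uniformly random day in [start, end] as an ISO-8601 string."""
    offset = rng.randrange((end - start).days + 1)
    return (start + timedelta(days=offset)).isoformat()
```

In the real service this value is what the HTTP handler writes into the response body.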

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#2-dockerfile","title":"2. Dockerfile","text":"

The Dockerfile specifies how to build a container image for the application. This image will be used in Cloud Run for deploying the service.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#3-skaffoldyaml","title":"3. skaffold.yaml","text":"

This file is configured for Google Cloud Deploy, facilitating the deployment process by managing builds and configurations in a single file.

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#4-runyaml","title":"4. run.yaml","text":"

The run.yaml file defines the configuration for Cloud Run and Cloud Deploy. Key aspects to note include:

"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#usage","title":"Usage","text":"

To deploy and test this application:

  1. Build the Docker Image: Use the provided Dockerfile to create a container image.
  2. Deploy to Cloud Run: Utilize the run.yaml configuration to deploy the service.
  3. Monitor Deployments: Use Cloud Deploy to observe the deployment pipeline and ensure the service is running as expected.
  4. Access the Service: After deployment, access the service through its endpoint to receive a random date.
"},{"location":"reference-architectures/cloud_deploy_flow/CloudRun/#conclusion","title":"Conclusion","text":"

This sample application serves as a foundational example of how to leverage cloud services for deploying applications. By utilizing Google Cloud Deploy and Cloud Build, you can understand the deployment lifecycle and how cloud-native applications can be effectively managed and served.

Feel free to explore the code and configurations provided in this repository to get a better grasp of the deployment process.

"},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/","title":"Pub/Sub Local Demo","text":"

This project is a simple demonstration of a Pub/Sub system using Google Cloud Pub/Sub and a basic Express.js server. It is designed to help you visually understand how messages flow to and from Pub/Sub queues. The code provided is primarily for demonstration purposes and is not intended for production use.

"},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#features","title":"Features","text":""},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#project-structure","title":"Project Structure","text":""},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#installation","title":"Installation","text":"
  1. Install the required dependencies:

    npm install

  2. Create a .env file and populate it with the environment variables found in .env.sample

  3. Start the server:

    node index.js

  4. Open your web browser and go to http://localhost:8080 to access the demo.

"},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#usage","title":"Usage","text":""},{"location":"reference-architectures/cloud_deploy_flow/WebsiteDemo/#disclaimer","title":"Disclaimer","text":"

This code is intended for educational and demonstration purposes only. It may not be suitable for production environments due to lack of error handling, security considerations, and scalability.

"},{"location":"reference-architectures/github-runners-gke/","title":"Reference Guide: Deploy and use GitHub Actions Runners on GKE","text":""},{"location":"reference-architectures/github-runners-gke/#overview","title":"Overview","text":"

This guide walks you through the process of setting up self-hosted GitHub Actions Runners on Google Kubernetes Engine (GKE) using the Terraform module terraform-google-github-actions-runners. It then provides instructions on how to create a basic GitHub Actions workflow to leverage these runners.

"},{"location":"reference-architectures/github-runners-gke/#prerequisites","title":"Prerequisites","text":"

Run the following command to enable the prerequisite APIs:

gcloud services enable \\\n  cloudresourcemanager.googleapis.com \\\n  iam.googleapis.com \\\n  container.googleapis.com \\\n  serviceusage.googleapis.com \\\n  --project <YOUR_PROJECT_ID>\n
"},{"location":"reference-architectures/github-runners-gke/#register-a-github-app-for-authenticating-arc","title":"Register a GitHub App for Authenticating ARC","text":"

Using a GitHub App for authentication allows you to make your self-hosted runners available to a GitHub organization that you own or have administrative access to. For more details on registering GitHub Apps, see GitHub\u2019s documentation.

You will need 3 values from this section to use as inputs in the Terraform module:

"},{"location":"reference-architectures/github-runners-gke/#navigate-to-your-organization-github-app-settings","title":"Navigate to your Organization GitHub App settings","text":"
  1. Click your profile picture in the top-right
  2. Click Your organizations
  3. Select the organization you want to use for this walkthrough
  4. Click Settings
  5. Click \\<> Developer settings
  6. Click GitHub Apps
"},{"location":"reference-architectures/github-runners-gke/#create-a-new-github-app","title":"Create a new GitHub App","text":"
  1. Click New GitHub App
  2. Under \u201cGitHub App name\u201d, choose a unique name such as \u201cmy-gke-arc-app\u201d
  3. Under \u201cHomepage URL\u201d enter https://github.com/actions/actions-runner-controller
  4. Under \u201cWebhook,\u201d uncheck Active.
  5. Under \u201cPermissions,\u201d click Repository permissions and use the dropdown menu to select the following permissions:
    1. Metadata: Read-only
  6. Under \u201cPermissions,\u201d click Organization permissions and use the dropdown menu to select the following permissions:
    1. Self-hosted runners: Read and write
  7. Click the Create GitHub App button
"},{"location":"reference-architectures/github-runners-gke/#gather-required-ids-and-keys","title":"Gather required IDs and keys","text":"
  1. On the GitHub App\u2019s page, save the value for \u201cApp ID\u201d
    1. You will use this as the value for gh_app_id in the Terraform module
  2. Under \u201cPrivate keys\u201d click Generate a private key. Save the .pem file for later.
    1. You will use this as the value for gh_app_private_key in the Terraform module
  3. In the menu at the top-left corner of the page, click Install App, and next to your organization, click Install to install the app on your organization.
    1. Choose All repositories to allow any repository in your org to have access to your runners
    2. Choose Only select repositories to allow specific repos to have access to your runners
  4. Note the app installation ID, which you can find in the URL of the app installation page; the URL has the following format: https://github.com/organizations/ORGANIZATION/settings/installations/INSTALLATION_ID
    1. You will use this as the value for gh_app_installation_id in the Terraform module.
"},{"location":"reference-architectures/github-runners-gke/#configure-terraform-example","title":"Configure Terraform example","text":""},{"location":"reference-architectures/github-runners-gke/#open-the-terraform-example","title":"Open the Terraform example","text":"

Open the Terraform module repository in Cloud Shell automatically by clicking the button:

Clicking this button will clone the repository into Cloud Shell, change into the example directory, and open the main.tf file in the Cloud Shell Editor.

"},{"location":"reference-architectures/github-runners-gke/#modify-terraform-example-variables","title":"Modify Terraform example variables","text":"
  1. Insert your Google Cloud Project ID as the value of project_id
  2. Modify the sample values of the following variables with the values you saved from earlier.
    1. gh_app_id: insert the value of the App ID from the GitHub App page
    2. gh_app_installation_id: insert the value from the URL of the app installation page
    3. gh_app_private_key:
      1. Copy the .pem file to example directory, alongside the main.tf file
      2. Insert the .pem filename you downloaded after generating the private key for the app, like so:
        1. gh_app_private_key = file(\"example.private-key.pem\")
      3. Warning: Terraform will store the private key in state as plaintext. It\u2019s recommended to secure your state file by using a backend such as a GCS bucket with encryption. You can do so by following these instructions.
  3. Modify the value of gh_config_url with the URL of your GitHub organization. It will be in the format of https://github.com/ORGANIZATION
  4. (Optional) Specify any other parameters that you wish. For a full list of variables you can modify, refer to the module documentation.
"},{"location":"reference-architectures/github-runners-gke/#deploy-the-example","title":"Deploy the example","text":"
  1. Initialize Terraform: Run terraform init to download the required providers.
  2. Plan: Run terraform plan to preview the changes that will be made.
  3. Apply: Run terraform apply and confirm to create the resources.

You will see the runners become available in your GitHub Organization:

  1. Go to your GitHub organization page
  2. Click Settings
  3. Open the \u201cActions\u201d drop-down in the left menu and choose Runners

You should see the runners appear as \u201carc-runners\u201d

"},{"location":"reference-architectures/github-runners-gke/#creating-a-github-actions-workflow","title":"Creating a GitHub Actions Workflow","text":"
  1. Create a new GitHub repository within your organization.
  2. In your GitHub repository, click the Actions tab.
  3. Click New workflow
  4. Under \u201cChoose workflow\u201d click set up a workflow yourself
  5. Paste the following configuration into the text editor:

    name: Actions Runner Controller Demo\non:\nworkflow_dispatch:\njobs:\nExplore-GitHub-Actions:\n   runs-on: arc-runners\n   steps:\n   - run: echo \"This job uses runner scale set runners!\"\n
  6. Click Commit changes to save the workflow to your repository.

"},{"location":"reference-architectures/github-runners-gke/#test-the-github-actions-workflow","title":"Test the GitHub Actions Workflow","text":"
  1. Go back to the Actions tab in your repository.
  2. In the left menu, select the name of your workflow. This should be \u201cActions Runner Controller Demo\u201d if you left the above configuration unchanged.
  3. Click Run workflow to open the drop-down menu, and click Run workflow
  4. The sample workflow executes on your GKE-hosted ARC runner set. You can view the output within the GitHub Actions run history.
"},{"location":"reference-architectures/github-runners-gke/#cleanup","title":"Cleanup","text":""},{"location":"reference-architectures/github-runners-gke/#teardown-terraform-managed-infrastructure","title":"Teardown Terraform-managed infrastructure","text":"
  1. Navigate back into the example directory where you previously ran terraform apply

    cd terraform-google-github-actions-runners/examples/gh-runner-gke-simple/\n
  2. Destroy Terraform-managed infrastructure

    terraform destroy\n

Warning: this will destroy the GKE cluster, example VPC, service accounts, and the Helm-managed workloads previously deployed by this example.

"},{"location":"reference-architectures/github-runners-gke/#delete-github-resources","title":"Delete GitHub resources","text":"

If you created a new GitHub App for testing purposes of this walkthrough, you can delete it via the following instructions. Note that any services authenticating via this GitHub App will lose access.

  1. Navigate to your Organization GitHub App settings
    1. Click your profile picture in the top-right
    2. Click Your organizations
    3. Select the organization you used for this walkthrough
    4. Click Settings
    5. Click the \\<> Developer settings drop-down
    6. Click GitHub Apps
  2. In the row where your GitHub App is listed, click Edit
  3. In the left-side menu, click Advanced
  4. Click Delete GitHub App
  5. Type the name of the GitHub App to confirm and delete.
"},{"location":"reference-architectures/sandboxes/","title":"Sandbox Projects Reference Architecture","text":"

This architecture demonstrates how you can automate the provisioning of sandbox projects and automatically apply sensible guardrails and constraints. A sandbox project allows engineers to experiment with new technologies. Sandboxes are provisioned for a short period of time and with budget constraints.

"},{"location":"reference-architectures/sandboxes/#architecture","title":"Architecture","text":"

The following diagram is the high-level architecture for enabling self-service creation of sandbox projects.

  1. The system project contains the state database and infrastructure required to create, delete and manage the lifecycle of the sandboxes.
  2. User interface that engineers use to request and manage the sandboxes they own.
  3. Firestore stores the state of the overall environment. Documents in the database represent all the active and inactive sandboxes. The document model is detailed in the sandbox-modules readme.
  4. Firestore triggers invoke Cloud Run functions whenever a document is created or updated. Create and update events are handled by the onCreate and onModify Cloud Run functions, which contain the logic to decide whether a sandbox should be created or deleted.
  5. infraManagerProcessor is a Cloud Run service that works with Infrastructure Manager to kick off and monitor infrastructure management. This is handled in a Cloud Run service because the execution of Terraform is a long-running process.
  6. Cloud Storage contains the Terraform templates and state used by Infrastructure Manager.
  7. Cloud Scheduler triggers the execution of sandbox lifecycle management processes, for example, a function that checks for expired sandboxes and marks them for deletion.
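The trigger logic described above can be sketched as a small pure function. The following is a hypothetical illustration of the decision the onCreate/onModify functions make, using the status values documented in the data model; it is not the actual implementation.

```python
# Hypothetical sketch of the decision inside the onCreate/onModify
# Cloud Run functions: map a sandbox document's status to the action
# the infraManagerProcessor should take. Status values follow the
# Key Statuses table in the sandbox-modules readme.
from typing import Optional

def decide_action(doc: dict) -> Optional[str]:
    status = doc.get("status")
    if status == "provision_requested":
        return "provision"  # hand off to infraManagerProcessor
    if status == "delete_requested":
        return "delete"
    return None  # in-progress or terminal states require no new action
```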
"},{"location":"reference-architectures/sandboxes/#structure-of-the-repository","title":"Structure of the Repository","text":"

This repository contains the code to stand up the reference architecture and also create the different sandbox templates in the catalog. This section describes the structure of the repository so you can better navigate the code.

"},{"location":"reference-architectures/sandboxes/#examples","title":"Examples","text":"

The /examples directory contains a sample Terraform deployment of the reference architecture and a command-line tool to exercise the automated creation of developer sandboxes. The examples are intended to give you a starting point so you can incorporate the reference architecture into your infrastructure.

"},{"location":"reference-architectures/sandboxes/#gcp-sandboxes","title":"GCP Sandboxes","text":"

This example uses the Terraform modules from /sandbox-modules to deploy the reference architecture and includes instructions on how to get started.

"},{"location":"reference-architectures/sandboxes/#command-line-interface-cli","title":"Command Line Interface (CLI)","text":"

The workflows and lifecycle of the sandboxes deployed via the reference architecture are managed through the document model stored in Cloud Firestore. This abstraction has the benefit of separating the core logic included in the reference architecture from the user experience (UX). As such, the example command-line interface lets you experiment with the reference architecture and learn about the object model.
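Because the document model is the interface, any custom UX can drive the same workflows the CLI does. The following is a hedged sketch of the request document such a client might write; the field names follow the data model in the sandbox-modules readme, while the collection name "sandboxes" is an assumption for illustration.

```python
# Hypothetical sketch of the document a custom UX (or the example CLI)
# writes to Firestore to request a sandbox. Field names follow the data
# model in the sandbox-modules readme; the collection name is assumed.
from datetime import datetime, timezone

def build_request(project_id: str, user_id: str, template: str) -> dict:
    now = datetime.now(timezone.utc).isoformat()
    return {
        "_updateSource": "python",        # mirrors the example CLI
        "status": "provision_requested",  # picked up by firestore-functions
        "projectId": project_id,
        "templateName": template,
        "userId": user_id,
        "createdAt": now,
        "updatedAt": now,
        "auditLog": [],
    }

# Writing the document (requires google-cloud-firestore and credentials):
#   from google.cloud import firestore
#   doc = build_request("my-sandbox-1", "eng@example.com", "empty-project")
#   firestore.Client().collection("sandboxes").document(doc["projectId"]).set(doc)
```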

"},{"location":"reference-architectures/sandboxes/#catalog","title":"Catalog","text":"

This directory contains a collection (catalog) of templates that you can use to deploy sandboxes. The reference architecture includes one for an empty project, but others could be added to support more specialized roles such as database admins, AI engineers, etc.

"},{"location":"reference-architectures/sandboxes/#sandbox-modules","title":"Sandbox Modules","text":"

These modules use the fabric modules to create the system project. Each module represents a large component of the overall reference architecture, and the components can be combined into one system project or spread across different projects to help with separation of duties.

"},{"location":"reference-architectures/sandboxes/#fabric-modules","title":"Fabric Modules","text":"

These are the base Terraform modules adapted from the Cloud Foundation Fabric. The Fabric modules are intended to be vendored, so we have copied them here for repeatability of the overall deployment of the reference architecture.

As you need additional modules for templates in the catalog, we recommend that you start with the Cloud Foundation Fabric modules and vendor them into this directory.

"},{"location":"reference-architectures/sandboxes/examples/cli/","title":"Example Command Line Interface","text":""},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/","title":"Overview","text":"

This directory contains Terraform configuration files that let you deploy the system project. This example is a good entry point for testing the reference architecture and learning how it can be incorporated into your own infrastructure-as-code processes.

"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#architecture","title":"Architecture","text":"

For an explanation of the components of the sandboxes reference architecture and the interaction flow, read the main Architecture section.

"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#before-you-begin","title":"Before you begin","text":"

In this section you prepare a folder for deployment.

  1. Open the Cloud Console
  2. Activate Cloud Shell. At the bottom of the Cloud Console, a Cloud Shell session starts and displays a command-line prompt.

  3. In Cloud Shell, clone this repository

    git clone https://github.com/GoogleCloudPlatform/platform-engineering.git\n
  4. Export variables for the working directories

    export SANDBOXES_DIR=\"$(pwd)/platform-engineering/reference-architectures/sandboxes/examples/gcp-sandboxes\"\nexport SANDBOXES_CLI=\"$(pwd)/platform-engineering/reference-architectures/sandboxes/examples/cli\"\n
"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#preparing-the-sandboxes-folder","title":"Preparing the Sandboxes Folder","text":"

In this section you prepare your environment for deploying the system project.

  1. In the Cloud Console, go to the Manage Resources page under the IAM & Admin menu.

  2. Click Create folder, then choose Folder.

  3. Enter a name for your folder. This folder will be used to contain the system and sandbox projects.

  4. Click Create

  5. Copy the folder ID from the Manage resources page; you will need this value later as a Terraform variable.

"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#deploying-the-reference-architecture","title":"Deploying the reference architecture","text":"
  1. Set the project ID and region in the corresponding Terraform environment variables

    export TF_VAR_billing_account=\"<your billing account id>\"\nexport TF_VAR_sandboxes_folder=\"folders/<folder id from step 5>\"\nexport TF_VAR_system_project_name=\"<name for the system project>\"\n
  2. Change directory into the Terraform example directory and initialize Terraform.

    cd \"${SANDBOXES_DIR}\"\nterraform init\n
  3. Apply the configuration. Review the resources that Terraform intends to create, then answer yes when prompted.

    terraform apply\n
"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#creating-a-sandbox","title":"Creating a sandbox","text":"

Now that the system project has been deployed, create a sandbox using the example CLI.

  1. Change directory into the example command-line tool directory

    cd \"${SANDBOXES_CLI}\"\n
  2. Install the required Python libraries

    pip install -r requirements.txt\n
  3. Create a sandbox using the CLI

    python ./sandbox.py create \\\n--system=\"<name of your system project>\" \\\n--project_id=\"<name of the sandbox to create>\"\n
"},{"location":"reference-architectures/sandboxes/examples/gcp-sandboxes/#next-steps","title":"Next steps","text":"

Your sandbox infrastructure is now ready; you can continue to use the example CLI to create and delete sandboxes. At this point, we recommend that you:

"},{"location":"reference-architectures/sandboxes/sandbox-modules/","title":"Sandbox Projects","text":""},{"location":"reference-architectures/sandboxes/sandbox-modules/#data-model","title":"Data Model","text":"

Each document stored in Cloud Firestore represents a sandbox. The following sections document the fields and structure of those documents.

"},{"location":"reference-architectures/sandboxes/sandbox-modules/#deployment","title":"Deployment","text":"Field Type Description _updateSource string The last process or tool used to update or create the deployment document. For example, the example python cli sets _updateSource to python, and when the firestore-processor Cloud Run service updates the document it is set to cloudrun. status string Status of the sandbox; this changes as create and delete operations progress. Refer to Key Statuses for detailed definitions of the values. projectId string The project ID of the sandbox. templateName string The name of the Terraform template from the catalog that the sandbox is based on. deploymentState object<DeploymentState> State object for the sandbox deployment. Contains data such as budget, current spend, expiration date, etc. The state object is updated and used by the various lifecycle functions. infraManagerDeploymentId string ID returned by Infrastructure Manager for the deployment. infraManagerResult object<DeploymentResponse> The response object returned from the Infrastructure Manager deployment operation. userId string Unique identifier for the user who owns the sandbox deployment. createdAt string Timestamp at which the sandbox record was created. updatedAt string Timestamp at which the sandbox record was last updated. variables object<Variables> List of variables supplied by the user, which are in turn used by the template to create the sandbox. auditLog array[string] List of messages that the system can add as an audit log."},{"location":"reference-architectures/sandboxes/sandbox-modules/#deploymentstate","title":"DeploymentState","text":"Field Type Description budgetLimit number Spend limit for the sandbox. currentSpend number Current spend for the sandbox. expiresAt string Time-based expiration for the sandbox."},{"location":"reference-architectures/sandboxes/sandbox-modules/#variables","title":"Variables","text":"

Collection of key-value pairs that are used in the Infrastructure Manager request, for use as the Terraform variable values.
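As a hypothetical illustration, a variables object for an empty-project template might look like the following; the actual keys are defined by each template in the catalog, so these names are assumptions:

```json
{
  "variables": {
    "billing_account": "XXXXXX-XXXXXX-XXXXXX",
    "folder_id": "folders/1234567890",
    "owner": "engineer@example.com"
  }
}
```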

"},{"location":"reference-architectures/sandboxes/sandbox-modules/#key-statuses","title":"Key Statuses","text":"

The following table describes important statuses that are used during the lifecycle of a deployment.
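Read as a state machine, the statuses imply roughly the following transitions. This is a hedged sketch derived from the table, not from the code; the actual services may permit additional transitions (for example, retries after an error).

```python
# Hypothetical transition map implied by the Key Statuses table.
TRANSITIONS = {
    "provision_requested": {"provision_pending"},
    "provision_pending": {"provision_inprogress"},
    "provision_inprogress": {"provision_successful", "provision_error"},
    "provision_successful": {"delete_requested"},
    "delete_requested": {"delete_pending"},
    "delete_pending": {"delete_inprogress"},
    "delete_inprogress": {"delete_successful", "delete_error"},
}

def is_valid_transition(current: str, new: str) -> bool:
    """Return True if the status change matches the documented lifecycle."""
    return new in TRANSITIONS.get(current, set())
```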

Status Set By Handled By Meaning provision_requested User Interface firestore-functions The user has requested that a sandbox be provisioned. provision_pending infra-manager-processor infra-manager-processor Indicates the request was received by the infra-manager-processor but the request hasn\u2019t yet been made to Infrastructure Manager. provision_inprogress infra-manager-processor infra-manager-processor Indicates that the request has been submitted to Infrastructure Manager and it is in progress with Infrastructure Manager. provision_error infra-manager-processor infra-manager-processor The deployment process has failed with an error. provision_successful infra-manager-processor infra-manager-processor The deployment process has succeeded and the sandbox is available and running. delete_requested User Interface firestore-functions The user or lifecycle process has requested that a sandbox be deleted. delete_pending infra-manager-processor infra-manager-processor Indicates the delete request was received by the infra-manager-processor but the request hasn\u2019t yet been made to Infrastructure Manager. delete_inprogress infra-manager-processor infra-manager-processor Indicates that the delete request has been submitted to Infrastructure Manager and it is in progress with Infrastructure Manager. delete_error infra-manager-processor infra-manager-processor The delete process has failed with an error. delete_successful infra-manager-processor infra-manager-processor The delete process has succeeded."}]} \ No newline at end of file diff --git a/reference-architectures/backstage/backstage-quickstart/README.md b/reference-architectures/backstage/backstage-quickstart/README.md index 90417bf..35620fc 100644 --- a/reference-architectures/backstage/backstage-quickstart/README.md +++ b/reference-architectures/backstage/backstage-quickstart/README.md @@ -373,7 +373,8 @@ permissions. 
xargs) && \ cp backend.tf.local backend.tf && \ terraform init -force-copy -lock=false -migrate-state && \ - gsutil -m rm -rf gs://${TERRAFORM_BUCKET_NAME}/* && \ + gcloud storage rm --recursive \ + --continue-on-error gs://${TERRAFORM_BUCKET_NAME}/* && \ terraform init && \ terraform destroy -auto-approve && \ rm -rf .terraform .terraform.lock.hcl state/