We would like to inform you that the ${{ env.accelerator_name }} accelerator test automation has failed to complete successfully.

The pipeline executed against the specified Target URL, but the test automation encountered issues and did not finish.

Failure Details:
• Target URL: ${EXISTING_URL}
${TEST_REPORT_URL:+• Test Report: View Report}
• Test Suite: ${TEST_SUITE_NAME}
• Deployment: Skipped
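The `${VAR:+text}` expansion used for the Test Report line emits its text only when the variable is set and non-empty, which is how the bullet disappears when no report URL exists. A quick illustration (hypothetical value):

```bash
TEST_REPORT_URL="https://example.com/report"
echo "${TEST_REPORT_URL:+• Test Report: View Report}"   # bullet is emitted

unset TEST_REPORT_URL
echo "[${TEST_REPORT_URL:+• Test Report: View Report}]"   # prints "[]"
```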
-
-
-**Note:** With any AI solutions you create using these templates, you are responsible for assessing all associated risks and for complying with all applicable laws and safety standards. Learn more in the transparency documents for [Agent Service](https://learn.microsoft.com/en-us/azure/ai-foundry/responsible-ai/agents/transparency-note) and [Agent Framework](https://github.com/microsoft/agent-framework/blob/main/TRANSPARENCY_FAQ.md).
-
-
-
-## Solution overview
-
-
-This solution leverages Azure OpenAI Service and Azure AI Search to identify relevant documents, summarize unstructured information, and generate document templates.
-
-The sample data is sourced from generic AI-generated promissory notes. The documents are intended for use as sample data only.
-
-### Solution architecture
-
-
-
-
-### Additional resources
-
-[Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/)
-
-[Azure AI Search](https://learn.microsoft.com/en-us/azure/search/)
-
-[Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-studio/)
-
-
-
-
-### Key features
-
- Click to learn more about the key features this solution enables
-
- - **Semantic search**
- Azure AI Search to enable RAG and grounding of the application on the processed dataset.
-
- - **Summarization**
- Azure OpenAI Service and GPT models to help summarize the search content and answer questions.
-
- - **Content generation**
- Azure OpenAI Service and GPT models to help generate relevant content with Prompt Flow.
-
-
-
-
-
-
-
-## Quick deploy
-
-
-### How to install or deploy
-Follow the quick deploy steps in the deployment guide to deploy this solution to your own Azure subscription.
-
-> **Note:** This solution accelerator requires **Azure Developer CLI (azd) version 1.18.0 or higher**. Please ensure you have the latest version installed before proceeding with deployment. [Download azd here](https://learn.microsoft.com/en-us/azure/developer/azure-developer-cli/install-azd).
-
-[Click here to launch the deployment guide](./docs/DeploymentGuide.md)
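Before deploying, you can verify the azd version requirement from a shell. A minimal sketch (assumes `sort -V` from GNU/BSD coreutils; the expected `azd version` output format is an assumption):

```bash
#!/bin/sh
# Succeeds (exit 0) when dotted version $1 >= version $2.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

REQUIRED="1.18.0"
# `azd version` prints something like "azd version 1.18.0 (commit ...)".
CURRENT="$(azd version 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)"

if [ -n "$CURRENT" ] && version_ge "$CURRENT" "$REQUIRED"; then
  echo "azd $CURRENT meets the $REQUIRED minimum"
else
  echo "azd is missing or older than $REQUIRED; update it before running azd up"
fi
```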
-
-
-**For Local Development**
-- [Local Development Setup Guide](docs/LocalDevelopmentSetup.md) - Comprehensive setup instructions for Windows, Linux, and macOS
-
-| [](https://codespaces.new/microsoft/document-generation-solution-accelerator) | [](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/microsoft/document-generation-solution-accelerator) | [&message=Open&color=blue&logo=visualstudiocode&logoColor=white)](https://vscode.dev/azure/?vscode-azure-exp=foundry&agentPayload=eyJiYXNlVXJsIjogImh0dHBzOi8vcmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbS9taWNyb3NvZnQvZG9jdW1lbnQtZ2VuZXJhdGlvbi1zb2x1dGlvbi1hY2NlbGVyYXRvci9yZWZzL2hlYWRzL21haW4vaW5mcmEvdnNjb2RlX3dlYiIsICJpbmRleFVybCI6ICIvaW5kZXguanNvbiIsICJ2YXJpYWJsZXMiOiB7ImFnZW50SWQiOiAiIiwgImNvbm5lY3Rpb25TdHJpbmciOiAiIiwgInRocmVhZElkIjogIiIsICJ1c2VyTWVzc2FnZSI6ICIiLCAicGxheWdyb3VuZE5hbWUiOiAiIiwgImxvY2F0aW9uIjogIiIsICJzdWJzY3JpcHRpb25JZCI6ICIiLCAicmVzb3VyY2VJZCI6ICIiLCAicHJvamVjdFJlc291cmNlSWQiOiAiIiwgImVuZHBvaW50IjogIiJ9LCAiY29kZVJvdXRlIjogWyJhaS1wcm9qZWN0cy1zZGsiLCAicHl0aG9uIiwgImRlZmF1bHQtYXp1cmUtYXV0aCIsICJlbmRwb2ludCJdfQ==) |
-|---|---|---|
-
-
-
-> ⚠️ **Important: Check Azure OpenAI Quota Availability**
- To ensure sufficient quota is available in your subscription, please follow [quota check instructions guide](./docs/QuotaCheck.md) before you deploy the solution.
-
-
-
-### Prerequisites and costs
-
-To deploy this solution accelerator, ensure you have access to an [Azure subscription](https://azure.microsoft.com/free/) with the necessary permissions to create **resource groups, resources, app registrations, and assign roles at the resource group level**. This should include the Contributor role at the subscription level and the Role Based Access Control Administrator role at the subscription and/or resource group level. Follow the steps in [Azure Account Set Up](./docs/AzureAccountSetUp.md).
-
-Check the [Azure Products by Region](https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/?products=all&regions=all) page and select a **region** where the following services are available.
-
-Pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. The majority of the Azure resources used in this infrastructure are on usage-based pricing tiers. However, Azure Container Registry has a fixed cost per registry per day.
-
-Use the [Azure pricing calculator](https://azure.microsoft.com/en-us/pricing/calculator) to calculate the cost of this solution in your subscription.
-
-Review a [sample pricing sheet](https://azure.com/e/2402502429fc46429e395e0bb93d0711) in the event you want to customize and scale usage.
-
-_Note: This is not meant to outline all costs as selected SKUs, scaled use, customizations, and integrations into your own tenant can affect the total consumption of this sample solution. The sample pricing sheet is meant to give you a starting point to customize the estimate for your specific needs._
-
-
-
-| Product | Description | Cost |
-|---|---|---|
-| [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/) | Free tier. Build generative AI applications on an enterprise-grade platform. | [Pricing](https://azure.microsoft.com/pricing/details/ai-studio/) |
-| [Azure AI Search](https://learn.microsoft.com/en-us/azure/search/) | Standard tier, S1. Pricing is based on the number of documents and operations. Information retrieval at scale for vector and text content in traditional or generative search scenarios. | [Pricing](https://azure.microsoft.com/pricing/details/search/) |
-| [Azure Storage Account](https://learn.microsoft.com/en-us/azure/storage/blobs/) | Standard tier, LRS. Pricing is based on storage and operations. Blob storage in the cloud, optimized for storing massive amounts of unstructured data. | [Pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) |
-| [Azure Key Vault](https://learn.microsoft.com/en-us/azure/key-vault/) | Standard tier. Pricing is based on the number of operations. Maintain keys that access and encrypt your cloud resources, apps, and solutions. | [Pricing](https://azure.microsoft.com/pricing/details/key-vault/) |
-| [Azure AI Services](https://learn.microsoft.com/en-us/azure/ai-services/) | S0 tier, defaults to gpt-4.1 and text-embedding-ada-002 models. Pricing is based on token count. | [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/) |
-| [Azure Container App](https://learn.microsoft.com/en-us/azure/container-apps/) | Consumption tier with 0.5 CPU, 1GiB memory/storage. Pricing is based on resource allocation, and each month allows for a certain amount of free usage. Allows you to run containerized applications without worrying about orchestration or infrastructure. | [Pricing](https://azure.microsoft.com/pricing/details/container-apps/) |
-| [Azure Container Registry](https://learn.microsoft.com/en-us/azure/container-registry/) | Basic tier. Build, store, and manage container images and artifacts in a private registry for all types of container deployments. | [Pricing](https://azure.microsoft.com/pricing/details/container-registry/) |
-| [Log analytics](https://learn.microsoft.com/en-us/azure/azure-monitor/) | Pay-as-you-go tier. Costs based on data ingested. Collect and analyze telemetry data generated by Azure. | [Pricing](https://azure.microsoft.com/pricing/details/monitor/) |
-| [Azure Cosmos DB](https://learn.microsoft.com/en-us/azure/cosmos-db/) | Fully managed, distributed NoSQL, relational, and vector database for modern app development. | [Pricing](https://azure.microsoft.com/en-us/pricing/details/cosmos-db/autoscale-provisioned/) |
-
-
-
-
-
->⚠️ **Important:** To avoid unnecessary costs, remember to take down your app if it's no longer in use,
-either by deleting the resource group in the Portal or running `azd down`.
-
-
-
-## Business Scenario
-
-
-
-
-
-
-Put your data to work by reducing blank page anxiety, speeding up document drafting, improving draft document quality, and referencing information quickly, keeping experts focused on their expertise. Draft document templates for your organization, including Invoices, End-user Contracts, Purchase Orders, Investment Proposals, and Grant Submissions.
-
-⚠️ The sample data used in this repository is synthetic and generated using Azure OpenAI Service. The data is intended for use as sample data only.
-
-
-### Business value
-
- Click to learn more about what value this solution provides
-
- - **Draft templates quickly**
- Put your data to work to create any kind of document that is supported by a large data library.
-
- - **Share**
- Share with co-authors, contributors and approvers quickly.
-
- - **Contextualize information**
- Provide context using natural language. Primary and secondary queries allow for access to supplemental detail – reducing cognitive load, increasing efficiency, and enabling focus on higher value work.
-
- - **Gain confidence in responses**
- Trust responses to queries by customizing how data is referenced and returned to users, reducing the risk of hallucinated responses.
-    Access reference documents in the same chat window to get more detail and confirm accuracy.
-
- - **Secure data and responsible AI for innovation**
-    Improve data security to minimize breaches, foster a culture of responsible AI adoption, maximize innovation opportunities, and sustain a competitive edge.
-
-
-
-
-
-
-
-## Supporting documentation
-
-
-### Security guidelines
-
-This template uses Azure Key Vault to store the connection secrets that resources use to communicate with each other.
-
-This template also uses [Managed Identity](https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/overview) for local development and deployment.
-
-To ensure continued best practices in your own repository, we recommend that anyone creating solutions based on our templates enable the [GitHub secret scanning](https://docs.github.com/code-security/secret-scanning/about-secret-scanning) setting.
-
-You may want to consider additional security measures, such as:
-
-* Enabling Microsoft Defender for Cloud to [secure your Azure resources](https://learn.microsoft.com/azure/defender-for-cloud).
-* Protecting the Azure Container Apps instance with a [firewall](https://learn.microsoft.com/azure/container-apps/waf-app-gateway) and/or [Virtual Network](https://learn.microsoft.com/azure/container-apps/networking?tabs=workload-profiles-env%2Cazure-cli).
-
-
-
-### Cross references
-Check out similar solution accelerators
-
-| Solution Accelerator | Description |
-|---|---|
-| [Chat with your data](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) | Enables users to chat with their own data by combining Azure Cognitive Search and large language models (LLMs) to create a conversational search experience. It increases user efficiency by minimizing the endpoints required to access internal company knowledge bases. |
-| [Document knowledge mining](https://github.com/microsoft/Document-Knowledge-Mining-Solution-Accelerator) | Built on Azure OpenAI Service and Azure AI Document Intelligence to process and extract summaries, entities, and metadata from unstructured, multi-modal documents and enable searching and chatting over this data. |
-| [Build your own copilot](https://github.com/microsoft/Build-your-own-copilot-Solution-Accelerator) | Helps client advisors to save time and prepare relevant discussion topics for scheduled meetings with overviews, client profile views, and chatting with structured data. |
-
-
-
-
-
-## Provide feedback
-
-Have questions, find a bug, or want to request a feature? [Submit a new issue](https://github.com/microsoft/document-generation-solution-accelerator/issues) on this repo and we'll connect.
-
-
-
-## Responsible AI Transparency FAQ
-Please refer to [Transparency FAQ](./docs/TRANSPARENCY_FAQ.md) for responsible AI transparency details of this solution accelerator.
-
-
-
-## Disclaimers
-
-This release is an artificial intelligence (AI) system that generates text based on user input. The text generated by this system may include ungrounded content, meaning that it is not verified by any reliable source or based on any factual data. The data included in this release is synthetic, meaning that it is artificially created by the system and may contain factual errors or inconsistencies. Users of this release are responsible for determining the accuracy, validity, and suitability of any content generated by the system for their intended purposes. Users should not rely on the system output as a source of truth or as a substitute for human judgment or expertise.
-
-This release only supports English language input and output. Users should not attempt to use the system with any other language or format. The system output may not be compatible with any translation tools or services, and may lose its meaning or coherence if translated.
-
-This release does not reflect the opinions, views, or values of Microsoft Corporation or any of its affiliates, subsidiaries, or partners. The system output is solely based on the system's own logic and algorithms, and does not represent any endorsement, recommendation, or advice from Microsoft or any other entity. Microsoft disclaims any liability or responsibility for any damages, losses, or harms arising from the use of this release or its output by any user or third party.
-
-This release does not provide any financial advice, and is not designed to replace the role of qualified client advisors in appropriately advising clients. Users should not use the system output for any financial decisions or transactions, and should consult with a professional financial advisor before taking any action based on the system output. Microsoft is not a financial institution or a fiduciary, and does not offer any financial products or services through this release or its output.
-
-This release is intended as a proof of concept only, and is not a finished or polished product. It is not intended for commercial use or distribution, and is subject to change or discontinuation without notice. Any planned deployment of this release or its output should include comprehensive testing and evaluation to ensure it is fit for purpose and meets the user's requirements and expectations. Microsoft does not guarantee the quality, performance, reliability, or availability of this release or its output, and does not provide any warranty or support for it.
-
-This Software requires the use of third-party components which are governed by separate proprietary or open-source licenses as identified below, and you must comply with the terms of each applicable license in order to use the Software. You acknowledge and agree that this license does not grant you a license or other right to use any such third-party proprietary or open-source components.
-
-To the extent that the Software includes components or code used in or derived from Microsoft products or services, including without limitation Microsoft Azure Services (collectively, “Microsoft Products and Services”), you must also comply with the Product Terms applicable to such Microsoft Products and Services. You acknowledge and agree that the license governing the Software does not grant you a license or other right to use Microsoft Products and Services. Nothing in the license or this ReadMe file will serve to supersede, amend, terminate or modify any terms in the Product Terms for any Microsoft Products and Services.
-
-You must also comply with all domestic and international export laws and regulations that apply to the Software, which include restrictions on destinations, end users, and end use. For further information on export restrictions, visit https://aka.ms/exporting.
-
-You acknowledge that the Software and Microsoft Products and Services (1) are not designed, intended or made available as a medical device(s), and (2) are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used to replace or as a substitute for professional medical advice, diagnosis, treatment, or judgment. Customer is solely responsible for displaying and/or obtaining appropriate consents, warnings, disclaimers, and acknowledgements to end users of Customer’s implementation of the Online Services.
-
-You acknowledge the Software is not subject to SOC 1 and SOC 2 compliance audits. No Microsoft technology, nor any of its component technologies, including the Software, is intended or made available as a substitute for the professional advice, opinion, or judgment of a certified financial services professional. Do not use the Software to replace, substitute, or provide professional financial advice or judgment.
-
-BY ACCESSING OR USING THE SOFTWARE, YOU ACKNOWLEDGE THAT THE SOFTWARE IS NOT DESIGNED OR INTENDED TO SUPPORT ANY USE IN WHICH A SERVICE INTERRUPTION, DEFECT, ERROR, OR OTHER FAILURE OF THE SOFTWARE COULD RESULT IN THE DEATH OR SERIOUS BODILY INJURY OF ANY PERSON OR IN PHYSICAL OR ENVIRONMENTAL DAMAGE (COLLECTIVELY, “HIGH-RISK USE”), AND THAT YOU WILL ENSURE THAT, IN THE EVENT OF ANY INTERRUPTION, DEFECT, ERROR, OR OTHER FAILURE OF THE SOFTWARE, THE SAFETY OF PEOPLE, PROPERTY, AND THE ENVIRONMENT IS NOT REDUCED BELOW A LEVEL THAT IS REASONABLE, APPROPRIATE, AND LEGAL, WHETHER IN GENERAL OR IN A SPECIFIC INDUSTRY. BY ACCESSING THE SOFTWARE, YOU FURTHER ACKNOWLEDGE THAT YOUR HIGH-RISK USE OF THE SOFTWARE IS AT YOUR OWN RISK.
diff --git a/archive-doc-gen/SECURITY.md b/archive-doc-gen/SECURITY.md
deleted file mode 100644
index 96d73bc27..000000000
--- a/archive-doc-gen/SECURITY.md
+++ /dev/null
@@ -1,41 +0,0 @@
-
-
-## Security
-
-Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet) and [Xamarin](https://github.com/xamarin).
-
-If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/security.md/definition), please report it to us as described below.
-
-## Reporting Security Issues
-
-**Please do not report security vulnerabilities through public GitHub issues.**
-
-Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/security.md/msrc/create-report).
-
-If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/security.md/msrc/pgp).
-
-You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
-
-Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
-
- * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
- * Full paths of source file(s) related to the manifestation of the issue
- * The location of the affected source code (tag/branch/commit or direct URL)
- * Any special configuration required to reproduce the issue
- * Step-by-step instructions to reproduce the issue
- * Proof-of-concept or exploit code (if possible)
- * Impact of the issue, including how an attacker might exploit the issue
-
-This information will help us triage your report more quickly.
-
-If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/security.md/msrc/bounty) page for more details about our active programs.
-
-## Preferred Languages
-
-We prefer all communications to be in English.
-
-## Policy
-
-Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/security.md/cvd).
-
-
\ No newline at end of file
diff --git a/archive-doc-gen/SUPPORT.md b/archive-doc-gen/SUPPORT.md
deleted file mode 100644
index 2c42db0f8..000000000
--- a/archive-doc-gen/SUPPORT.md
+++ /dev/null
@@ -1,13 +0,0 @@
-# Support
-
-## How to file issues and get help
-
-This project uses GitHub Issues to track bugs and feature requests. Please search the existing
-issues before filing new issues to avoid duplicates. For new issues, file your bug or
-feature request as a new Issue.
-
-For help and questions about using this project, please submit an issue on this repository.
-
-## Microsoft Support Policy
-
-Support for this repository is limited to the resources listed above.
\ No newline at end of file
diff --git a/archive-doc-gen/app-azure.yaml b/archive-doc-gen/app-azure.yaml
deleted file mode 100644
index a4f96371a..000000000
--- a/archive-doc-gen/app-azure.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
-# yaml-language-server: $schema=https://raw.githubusercontent.com/Azure/azure-dev/main/schemas/v1.0/azure.yaml.json
-
-name: sample-app-aoai-chatgpt
-metadata:
- template: sample-app-aoai-chatgpt@0.0.1-beta
-services:
- backend:
- project: .
- language: py
- host: appservice
- hooks:
- prepackage:
- windows:
- shell: pwsh
- run: cd ./frontend;npm install;npm run build
- interactive: true
- continueOnError: false
- posix:
- shell: sh
- run: cd ./frontend;npm install;npm run build
- interactive: true
- continueOnError: false
-hooks:
- preprovision:
- windows:
- shell: pwsh
- run: ./scripts/auth_init.ps1
- interactive: true
- continueOnError: false
- posix:
- shell: sh
- run: ./scripts/auth_init.sh
- interactive: true
- continueOnError: false
- postprovision:
- windows:
- shell: pwsh
- run: ./scripts/auth_update.ps1;
- interactive: true
- continueOnError: false
- posix:
- shell: sh
- run: ./scripts/auth_update.sh;
- interactive: true
- continueOnError: false
diff --git a/archive-doc-gen/azure.yaml b/archive-doc-gen/azure.yaml
deleted file mode 100644
index f38189ef6..000000000
--- a/archive-doc-gen/azure.yaml
+++ /dev/null
@@ -1,51 +0,0 @@
-environment:
- name: document-generation
- location: eastus
-
-name: document-generation
-metadata:
- template: document-generation@1.0
-
-requiredVersions:
- azd: '>= 1.18.0'
-
-parameters:
- solutionPrefix:
- type: string
- default: bs-azdtest
- otherLocation:
- type: string
- default: eastus2
- baseUrl:
- type: string
- default: 'https://github.com/microsoft/document-generation-solution-accelerator'
-
-deployment:
- mode: Incremental
- template: ./infra/main.bicep # Path to the main.bicep file inside the 'deployment' folder
- parameters:
- solutionPrefix: ${parameters.solutionPrefix}
- otherLocation: ${parameters.otherLocation}
- baseUrl: ${parameters.baseUrl}
-
-hooks:
- postprovision:
- windows:
- run: |
- Write-Host "Web app URL: "
- Write-Host "$env:WEB_APP_URL" -ForegroundColor Cyan
- Write-Host "`nIf you want to use the Sample Data, run the following command in the Bash terminal to process it:"
- Write-Host "bash ./infra/scripts/process_sample_data.sh" -ForegroundColor Cyan
- shell: pwsh
- continueOnError: false
- interactive: true
- posix:
- run: |
- echo "Web app URL: "
- echo $WEB_APP_URL
- echo ""
- echo "If you want to use the Sample Data, run the following command in the bash terminal to process it:"
- echo "bash ./infra/scripts/process_sample_data.sh"
- shell: sh
- continueOnError: false
- interactive: true
\ No newline at end of file
diff --git a/archive-doc-gen/azure_custom.yaml b/archive-doc-gen/azure_custom.yaml
deleted file mode 100644
index af8bae654..000000000
--- a/archive-doc-gen/azure_custom.yaml
+++ /dev/null
@@ -1,48 +0,0 @@
-environment:
- name: document-generation
- location: eastus
-
-name: document-generation
-metadata:
- template: document-generation@1.0
-
-requiredVersions:
- azd: '>= 1.18.0'
-
-parameters:
- solutionPrefix:
- type: string
- default: bs-azdtest
- otherLocation:
- type: string
- default: eastus2
- baseUrl:
- type: string
- default: 'https://github.com/microsoft/document-generation-solution-accelerator'
-
-services:
- webapp:
- project: ./src
- language: py
- host: appservice
- dist: ./dist
- hooks:
- prepackage:
- windows:
- shell: pwsh
- run: ../infra/scripts/package_webapp.ps1
- interactive: true
- continueOnError: false
- posix:
- shell: sh
- run: bash ../infra/scripts/package_webapp.sh
- interactive: true
- continueOnError: false
-
-deployment:
- mode: Incremental
- template: ./infra/main.bicep # Path to the main.bicep file inside the 'deployment' folder
- parameters:
- solutionPrefix: ${parameters.solutionPrefix}
- otherLocation: ${parameters.otherLocation}
- baseUrl: ${parameters.baseUrl}
diff --git a/archive-doc-gen/docs/ACRBuildAndPushGuide.md b/archive-doc-gen/docs/ACRBuildAndPushGuide.md
deleted file mode 100644
index e37889ea9..000000000
--- a/archive-doc-gen/docs/ACRBuildAndPushGuide.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# Azure Container Registry (ACR) – Build & Push Guide
-
-This guide provides step-by-step instructions to build and push Docker images for **WebApp** and **Backend** services into Azure Container Registry (ACR).
-
-## 📋 Prerequisites
-Before starting, ensure you have:
-- An active [Azure Subscription](https://portal.azure.com/)
-- [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) installed and logged in
-- [Docker Desktop](https://docs.docker.com/get-docker/) installed and running
-- Access to your Azure Container Registry (ACR)
-- To create an Azure Container Registry (ACR), you can refer to the following guides:
-
- - [Create Container Registry using Azure CLI](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli)
-
- - [Create Container Registry using Azure Portal](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal?tabs=azure-cli)
-
- - [Create Container Registry using PowerShell](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-powershell)
-
- - [Create Container Registry using ARM Template](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-geo-replication-template)
-
- - [Create Container Registry using Bicep](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-bicep?tabs=CLI)
-
----
-
-Login to ACR :
-``` bash
-az acr login --name $ACR_NAME
-```
-
-## 🚀 Build and Push Images
-
-**Backend** / **WebApp :**
-
- ```bash
-az acr login --name <ACR_NAME>
-docker build --no-cache -f WebApp.Dockerfile -t <ACR_NAME>.azurecr.io/<IMAGE_NAME>:<TAG> .
-docker push <ACR_NAME>.azurecr.io/<IMAGE_NAME>:<TAG>
- ```
-
-If you want to update the image and tag manually, you can follow the steps below:
-- Go to your App Service in the [Azure Portal](https://portal.azure.com/#home).
-- In the left menu, select Deployment → Deployment Center
-- Under Registry settings, you can configure:
-
- - Image Source → (e.g., Azure Container Registry / Docker Hub / Other).
-
- - Image Name → e.g., myapp/backend.
-
- - Tag → e.g., v1.2.3.
-
-
-
-## ✅ Verification
-
-Run the following command to verify that images were pushed successfully:
-```bash
-az acr repository list --name $ACR_NAME --output table
-```
-
-You should see repositories in the output.
-
-## 📝 Notes
-
-- Always use meaningful tags (`v1.0.0`, `staging`, `prod`) instead of just `latest`.
-
-- If you are pushing from a CI/CD pipeline, make sure the pipeline agent has access to Docker and ACR.
-
-- For private images, ensure your services (e.g., Azure Container Apps, AKS, App Service) are configured with appropriate ACR pull permissions.
-
-
-
diff --git a/archive-doc-gen/docs/AppAuthentication.md b/archive-doc-gen/docs/AppAuthentication.md
deleted file mode 100644
index 34ab4533a..000000000
--- a/archive-doc-gen/docs/AppAuthentication.md
+++ /dev/null
@@ -1,32 +0,0 @@
-# Set Up Authentication in Azure App Service
-
-This document provides step-by-step instructions to configure Azure App Registrations for a front-end application.
-
-## Prerequisites
-
-- Access to **Microsoft Entra ID**
-- Necessary permissions to create and manage **App Registrations**
-
-## Step 1: Add Authentication in Azure App Service configuration
-1. Click on `Authentication` from left menu.
-
- 
-
-2. Click on `+ Add identity provider` to see a list of identity providers.
-
- 
-
-3. Click on `Identity Provider` dropdown to see a list of identity providers.
-
- 
-
-4. Select the first option `Microsoft Entra ID` from the drop-down list and select `client secret expiration` under App registration.
-> NOTE: If `Create new app registration` is disabled, then go to [Create new app registration](/docs/create_new_app_registration.md) and come back to this step to complete the app authentication.
-
- 
-
-5. Accept the default values and click on `Add` button to go back to the previous page with the identity provider added.
-
- 
-
-6. You have successfully added app authentication, and now required to log in to access the application.
diff --git a/archive-doc-gen/docs/AzureAccountSetUp.md b/archive-doc-gen/docs/AzureAccountSetUp.md
deleted file mode 100644
index 22ffa836f..000000000
--- a/archive-doc-gen/docs/AzureAccountSetUp.md
+++ /dev/null
@@ -1,14 +0,0 @@
-## Azure account setup
-
-1. Sign up for a [free Azure account](https://azure.microsoft.com/free/) and create an Azure Subscription.
-2. Check that you have the necessary permissions:
- * Your Azure account must have `Microsoft.Authorization/roleAssignments/write` permissions, such as [Role Based Access Control Administrator](https://learn.microsoft.com/azure/role-based-access-control/built-in-roles#role-based-access-control-administrator-preview), [User Access Administrator](https://learn.microsoft.com/azure/role-based-access-control/built-in-roles#user-access-administrator), or [Owner](https://learn.microsoft.com/azure/role-based-access-control/built-in-roles#owner).
- * Your Azure account also needs `Microsoft.Resources/deployments/write` permissions on the subscription level.
-
-You can view the permissions for your account and subscription by following the steps below:
-- Navigate to the [Azure Portal](https://portal.azure.com/) and click on `Subscriptions` under 'Navigation'
-- Select the subscription you are using for this accelerator from the list.
- - If you try to search for your subscription and it does not come up, make sure no filters are selected.
-- Select `Access control (IAM)` and you can see the roles that are assigned to your account for this subscription.
- - If you want to see more information about the roles, you can go to the `Role assignments`
- tab and search by your account name and then click the role you want to view more information about.
\ No newline at end of file
diff --git a/archive-doc-gen/docs/AzureGPTQuotaSettings.md b/archive-doc-gen/docs/AzureGPTQuotaSettings.md
deleted file mode 100644
index a91be396a..000000000
--- a/archive-doc-gen/docs/AzureGPTQuotaSettings.md
+++ /dev/null
@@ -1,10 +0,0 @@
-## How to Check & Update Quota
-
-1. **Navigate** to the [Azure AI Foundry portal](https://ai.azure.com/).
-2. **Select** the AI Project associated with this accelerator.
-3. **Go to** the `Management Center` from the bottom-left navigation menu.
-4. Select `Quota`
- - Click on the `GlobalStandard` dropdown.
- - Select the required **GPT model** (`GPT-4.1`) or **Embeddings model** (`text-embedding-ada-002`).
- - Choose the **region** where the deployment is hosted.
-5. Request More Quota or delete any unused model deployments as needed.
diff --git a/archive-doc-gen/docs/AzureSemanticSearchRegion.md b/archive-doc-gen/docs/AzureSemanticSearchRegion.md
deleted file mode 100644
index 6016dcd51..000000000
--- a/archive-doc-gen/docs/AzureSemanticSearchRegion.md
+++ /dev/null
@@ -1,7 +0,0 @@
-## Select a region where Semantic Search is available before proceeding with the deployment
-
-Steps to Check Semantic Search Availability
-1. Open the [Semantic Search Availability](https://learn.microsoft.com/en-us/azure/search/search-region-support) page.
-2. Scroll down to the **"Azure Public regions"** section.
-3. Use the table to find supported regions for **Azure AI Search** and its **Semantic ranker** feature.
-4. If your target region is not listed, choose a supported region for deployment.
diff --git a/archive-doc-gen/docs/CustomizingAzdParameters.md b/archive-doc-gen/docs/CustomizingAzdParameters.md
deleted file mode 100644
index 671848c5a..000000000
--- a/archive-doc-gen/docs/CustomizingAzdParameters.md
+++ /dev/null
@@ -1,42 +0,0 @@
-## [Optional]: Customizing resource names
-
-By default, this template uses the environment name as a prefix to prevent naming collisions within Azure. The parameters below show the default values; you only need to run the commands below if you want to change them.
-
-
-> To override any of the parameters, run `azd env set <PARAMETER_NAME> <VALUE>` before running `azd up`. The first azd command will prompt you for the environment name; be sure to choose a unique 3-20 character alphanumeric name.
-
-## Parameters
-
-| Name | Type | Example Value | Purpose |
-| -------------------------------------- | ------- | ---------------------------- | ----------------------------------------------------------------------------- |
-| `AZURE_LOCATION` | string | `` | Sets the Azure region for resource deployment. |
-| `AZURE_ENV_NAME` | string | `docgen` | Sets the environment name prefix for all Azure resources. |
-| `AZURE_ENV_SECONDARY_LOCATION` | string | `eastus2` | Specifies a secondary Azure region. |
-| `AZURE_ENV_MODEL_DEPLOYMENT_TYPE` | string | `Standard` | Defines the model deployment type (allowed: `Standard`, `GlobalStandard`). |
-| `AZURE_ENV_MODEL_NAME` | string | `gpt-4.1` | Specifies the GPT model name (allowed: `gpt-4.1`). |
-| `AZURE_ENV_MODEL_VERSION`               | string  | `2025-04-14`                 | Sets the Azure model version.                                                   |
-| `AZURE_ENV_OPENAI_API_VERSION` | string | `2025-01-01-preview` | Specifies the API version for Azure OpenAI. |
-| `AZURE_ENV_MODEL_CAPACITY` | integer | `30` | Sets the GPT model capacity (based on what's available in your subscription). |
-| `AZURE_ENV_EMBEDDING_MODEL_NAME` | string | `text-embedding-ada-002` | Sets the name of the embedding model to use. |
-| `AZURE_ENV_ACR_NAME`                    | string  | `byocgacontainerreg`         | Sets the Azure Container Registry name (allowed value: `byocgacontainerreg`).   |
-| `AZURE_ENV_IMAGETAG`                    | string  | `latest_waf`                 | Sets the image tag (allowed values: `latest_waf`, `dev`, `hotfix`).             |
-| `AZURE_ENV_EMBEDDING_MODEL_CAPACITY` | integer | `80` | Sets the capacity for the embedding model deployment. |
-| `AZURE_ENV_LOG_ANALYTICS_WORKSPACE_ID` | string | Guide to get your [Existing Workspace ID](/docs/re-use-log-analytics.md) | Reuses an existing Log Analytics Workspace instead of creating a new one. |
-| `AZURE_EXISTING_AI_PROJECT_RESOURCE_ID` | string  | Guide to get your existing AI Foundry Project resource ID | Reuses an existing AI Foundry and AI Foundry Project instead of creating a new one. |
-| `AZURE_ENV_OPENAI_LOCATION` | string | `` | Sets the Azure region for OpenAI resource deployment. |
-
-
-## How to Set a Parameter
-
-
-To customize any of the above values, run the following command **before** `azd up`:
-
-```bash
-azd env set <PARAMETER_NAME> <VALUE>
-```
-
-**Example:**
-
-```bash
-azd env set AZURE_LOCATION westus2
-```
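When you have several overrides, a small loop keeps them together. This is a sketch only: it echoes each `azd env set` command for review (drop the `echo` to execute them), and the values shown are examples, not recommendations.

```shell
# Sketch: stage several azd parameter overrides before `azd up`.
# Echoes each command for review; remove `echo` to run them for real.
for kv in \
  "AZURE_LOCATION=westus2" \
  "AZURE_ENV_MODEL_DEPLOYMENT_TYPE=GlobalStandard" \
  "AZURE_ENV_MODEL_CAPACITY=150"; do
  key="${kv%%=*}"     # text before the first '='
  value="${kv#*=}"    # text after the first '='
  echo "azd env set ${key} ${value}"
done
```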
diff --git a/archive-doc-gen/docs/DeleteResourceGroup.md b/archive-doc-gen/docs/DeleteResourceGroup.md
deleted file mode 100644
index aebe0adb6..000000000
--- a/archive-doc-gen/docs/DeleteResourceGroup.md
+++ /dev/null
@@ -1,53 +0,0 @@
-# Deleting Resources After a Failed Deployment in Azure Portal
-
-If your deployment fails and you need to clean up the resources manually, follow these steps in the Azure Portal.
-
----
-
-## **1. Navigate to the Azure Portal**
-1. Open [Azure Portal](https://portal.azure.com/).
-2. Sign in with your Azure account.
-
----
-
-## **2. Find the Resource Group**
-1. In the search bar at the top, type **"Resource groups"** and select it.
-2. Locate the **resource group** associated with the failed deployment.
-
-
-
-
-
----
-
-## **3. Delete the Resource Group**
-1. Click on the **resource group name** to open it.
-2. Click the **Delete resource group** button at the top.
-
-
-
-3. Type the resource group name in the confirmation box and click **Delete**.
-
-📌 **Note:** Deleting a resource group will remove all resources inside it.
-
----
-
-## **4. Delete Individual Resources (If Needed)**
-If you don’t want to delete the entire resource group, follow these steps:
-
-1. Open **Azure Portal** and go to the **Resource groups** section.
-2. Click on the specific **resource group**.
-3. Select the **resource** you want to delete (e.g., App Service, Storage Account).
-4. Click **Delete** at the top.
-
-
-
----
-
-## **5. Verify Deletion**
-- After a few minutes, refresh the **Resource groups** page.
-- Ensure the deleted resource or group no longer appears.
-
-📌 **Tip:** If a resource fails to delete, check if it's **locked** under the **Locks** section and remove the lock.
-
-
diff --git a/archive-doc-gen/docs/DeploymentGuide.md b/archive-doc-gen/docs/DeploymentGuide.md
deleted file mode 100644
index a6b4c2789..000000000
--- a/archive-doc-gen/docs/DeploymentGuide.md
+++ /dev/null
@@ -1,509 +0,0 @@
-# Deployment Guide
-
-## Overview
-
-This guide walks you through deploying the Document Generation Solution Accelerator to Azure. The deployment process takes approximately 7-10 minutes for the default Development/Testing configuration and includes both infrastructure provisioning and application setup.
-
-🆘 **Need Help?** If you encounter any issues during deployment, check our [Troubleshooting Guide](./TroubleShootingSteps.md) for solutions to common problems.
-
-## Step 1: Prerequisites & Setup
-
-### 1.1 Azure Account Requirements
-
-Ensure you have access to an [Azure subscription](https://azure.microsoft.com/free/) with the following permissions:
-
-| **Required Permission/Role** | **Scope** | **Purpose** |
-|------------------------------|-----------|-------------|
-| **Contributor** | Subscription level | Create and manage Azure resources |
-| **User Access Administrator** | Subscription level | Manage user access and role assignments |
-| **Role Based Access Control Administrator** | Subscription/Resource Group level | Configure RBAC permissions |
-| **App Registration Creation** | Azure Active Directory | Create and configure authentication |
-
-**🔍 How to Check Your Permissions:**
-
-1. Go to [Azure Portal](https://portal.azure.com/)
-2. Navigate to **Subscriptions** (search for "subscriptions" in the top search bar)
-3. Click on your target subscription
-4. In the left menu, click **Access control (IAM)**
-5. Scroll down to see the table with your assigned roles - you should see:
- - **Contributor**
- - **User Access Administrator**
- - **Role Based Access Control Administrator** (or similar RBAC role)
-
-**For App Registration permissions:**
-1. Go to **Microsoft Entra ID** → **Manage** → **App registrations**
-2. Try clicking **New registration**
-3. If you can access this page, you have the required permissions
-4. Cancel without creating an app registration
-
-📖 **Detailed Setup:** Follow [Azure Account Set Up](./AzureAccountSetUp.md) for complete configuration.
-
-### 1.2 Check Service Availability & Quota
-
-⚠️ **CRITICAL:** Before proceeding, ensure your chosen region has all required services available:
-
-**Required Azure Services:**
-- [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/)
-- [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/)
-- [Azure Cosmos DB](https://learn.microsoft.com/en-us/azure/cosmos-db/)
-- [Azure AI Search](https://learn.microsoft.com/en-us/azure/search/)
-- [Azure Semantic Search](./AzureSemanticSearchRegion.md)
-
-**Recommended Regions:** East US, East US2, Australia East, UK South, France Central.
-
-🔍 **Check Availability:** Use [Azure Products by Region](https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/) to verify service availability.
-
-### 1.3 Quota Check (Optional)
-
-💡 **RECOMMENDED:** Check your Azure OpenAI quota availability before deployment for optimal planning.
-
-📖 **Follow:** [Quota Check Instructions](./QuotaCheck.md) to ensure sufficient capacity.
-
-**Recommended Configuration:**
-
-- **Minimum:** 150k tokens for Global Standard GPT-4.1
-- **Optimal:** More than 150k tokens (for best performance)
-
-> **Note:** When you run `azd up`, the deployment will automatically show you regions with available quota, so this pre-check is optional but helpful for planning purposes. You can customize these settings later in [Step 3.3: Advanced Configuration](#33-advanced-configuration-optional).
-
-📖 **Adjust Quota:** Follow [Azure AI Model Quota Settings](./AzureGPTQuotaSettings.md) if needed.
-
-## Step 2: Choose Your Deployment Environment
-
-Select one of the following options to deploy the Document Generation Solution Accelerator:
-
-### Environment Comparison
-
-| **Option** | **Best For** | **Prerequisites** | **Setup Time** |
-|------------|--------------|-------------------|----------------|
-| **GitHub Codespaces** | Quick deployment, no local setup required | GitHub account | ~5-7 minutes |
-| **VS Code Dev Containers** | Fast deployment with local tools | Docker Desktop, VS Code | ~6-10 minutes |
-| **VS Code Web** | Quick deployment, no local setup required | Azure account | ~6-8 minutes |
-| **Local Environment** | Enterprise environments, full control | All tools individually | ~7-10 minutes |
-
-**💡 Recommendation:** For fastest deployment, start with **GitHub Codespaces** - no local installation required.
-
----
-
-
-Option A: GitHub Codespaces (Easiest)
-
-[Open in GitHub Codespaces](https://codespaces.new/microsoft/document-generation-solution-accelerator)
-
-1. Click the badge above (may take several minutes to load)
-2. Accept default values on the Codespaces creation page
-3. Wait for the environment to initialize (includes all deployment tools)
-4. Proceed to [Step 3: Configure Deployment Settings](#step-3-configure-deployment-settings)
-
-
-
-
-Option B: VS Code Dev Containers
-
-[Open in VS Code Dev Containers](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/microsoft/document-generation-solution-accelerator)
-
-**Prerequisites:**
-- [Docker Desktop](https://www.docker.com/products/docker-desktop/) installed and running
-- [VS Code](https://code.visualstudio.com/) with [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
-
-**Steps:**
-1. Start Docker Desktop
-2. Click the badge above to open in Dev Containers
-3. Wait for the container to build and start (includes all deployment tools)
-4. Proceed to [Step 3: Configure Deployment Settings](#step-3-configure-deployment-settings)
-
-
-
-
-Option C: Visual Studio Code Web
-
-[Open in VS Code Web](https://vscode.dev/azure/?vscode-azure-exp=foundry&agentPayload=eyJiYXNlVXJsIjogImh0dHBzOi8vcmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbS9taWNyb3NvZnQvZG9jdW1lbnQtZ2VuZXJhdGlvbi1zb2x1dGlvbi1hY2NlbGVyYXRvci9yZWZzL2hlYWRzL21haW4vaW5mcmEvdnNjb2RlX3dlYiIsICJpbmRleFVybCI6ICIvaW5kZXguanNvbiIsICJ2YXJpYWJsZXMiOiB7ImFnZW50SWQiOiAiIiwgImNvbm5lY3Rpb25TdHJpbmciOiAiIiwgInRocmVhZElkIjogIiIsICJ1c2VyTWVzc2FnZSI6ICIiLCAicGxheWdyb3VuZE5hbWUiOiAiIiwgImxvY2F0aW9uIjogIiIsICJzdWJzY3JpcHRpb25JZCI6ICIiLCAicmVzb3VyY2VJZCI6ICIiLCAicHJvamVjdFJlc291cmNlSWQiOiAiIiwgImVuZHBvaW50IjogIiJ9LCAiY29kZVJvdXRlIjogWyJhaS1wcm9qZWN0cy1zZGsiLCAicHl0aG9uIiwgImRlZmF1bHQtYXp1cmUtYXV0aCIsICJlbmRwb2ludCJdfQ==)
-
-1. Click the badge above (may take a few minutes to load)
-2. Sign in with your Azure account when prompted
-3. Select the subscription where you want to deploy the solution
-4. Wait for the environment to initialize (includes all deployment tools)
-5. Once the solution opens, the **AI Foundry terminal** will automatically start running the following command to install the required dependencies:
-
- ```shell
- sh install.sh
- ```
- During this process, you’ll be prompted with the message:
- ```
- What would you like to do with these files?
- - Overwrite with versions from template
- - Keep my existing files unchanged
- ```
- Choose “**Overwrite with versions from template**” and provide a unique environment name when prompted.
-
-6. **Authenticate with Azure** (VS Code Web requires device code authentication):
-
- ```shell
- az login --use-device-code
- ```
- > **Note:** In VS Code Web environment, the regular `az login` command may fail. Use the `--use-device-code` flag to authenticate via device code flow. Follow the prompts in the terminal to complete authentication.
-
-7. Proceed to [Step 3: Configure Deployment Settings](#step-3-configure-deployment-settings)
-
-
-
-
-Option D: Local Environment
-
-**Required Tools:**
-- [PowerShell 7.0+](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell)
-- [Azure Developer CLI (azd) 1.18.0+](https://aka.ms/install-azd)
-- [Python 3.9+](https://www.python.org/downloads/)
-- [Docker Desktop](https://www.docker.com/products/docker-desktop/)
-- [Git](https://git-scm.com/downloads)
-
-**Setup Steps:**
-1. Install all required deployment tools listed above
-2. Clone the repository:
- ```shell
- azd init -t microsoft/document-generation-solution-accelerator/
- ```
-3. Open the project folder in your terminal
-4. Proceed to [Step 3: Configure Deployment Settings](#step-3-configure-deployment-settings)
-
-**PowerShell Users:** If you encounter script execution issues, run:
-```powershell
-Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
-```
-
-
-
-## Step 3: Configure Deployment Settings
-
-Review the configuration options below. You can customize any settings that meet your needs, or leave them as defaults to proceed with a standard deployment.
-
-### 3.1 Choose Deployment Type (Optional)
-
-| **Aspect** | **Development/Testing (Default)** | **Production** |
-|------------|-----------------------------------|----------------|
-| **Configuration File** | `main.parameters.json` (sandbox) | Copy `main.waf.parameters.json` to `main.parameters.json` |
-| **Security Controls** | Minimal (for rapid iteration) | Enhanced (production best practices) |
-| **Cost** | Lower costs | Cost optimized |
-| **Use Case** | POCs, development, testing | Production workloads |
-| **Framework** | Basic configuration | [Well-Architected Framework](https://learn.microsoft.com/en-us/azure/well-architected/) |
-| **Features** | Core functionality | Reliability, security, operational excellence |
-
-**To use production configuration:**
-
-Copy the contents from the production configuration file to your main parameters file:
-
-1. Navigate to the `infra` folder in your project
-2. Open `main.waf.parameters.json` in a text editor (like Notepad, VS Code, etc.)
-3. Select all content (Ctrl+A) and copy it (Ctrl+C)
-4. Open `main.parameters.json` in the same text editor
-5. Select all existing content (Ctrl+A) and paste the copied content (Ctrl+V)
-6. Save the file (Ctrl+S)
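Alternatively, from a terminal at the repository root, the copy is a single `cp`. The stand-in file created below only makes the example self-contained; in the real repo `infra/main.waf.parameters.json` already exists, so only the final `cp` is needed.

```shell
# Stand-in file so the example runs anywhere; the real repo already has it.
mkdir -p infra
echo '{"parameters": {}}' > infra/main.waf.parameters.json

# The actual step: overwrite the sandbox parameters with the WAF-aligned ones.
cp infra/main.waf.parameters.json infra/main.parameters.json
```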
-
-### 3.2 Set VM Credentials (Optional - Production Deployment Only)
-
-> **Note:** This section only applies if you selected **Production** deployment type in section 3.1. VMs are not deployed in the default Development/Testing configuration.
-
-By default, random GUIDs are generated for VM credentials. To set custom credentials:
-
-```shell
-azd env set AZURE_ENV_VM_ADMIN_USERNAME <your-username>
-azd env set AZURE_ENV_VM_ADMIN_PASSWORD <your-password>
-```
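If you'd rather generate throwaway credentials than pick them by hand, something like the following works locally. The variable names are illustrative, and the `azd env set` calls are left commented out so nothing is applied until you're ready.

```shell
# Generate throwaway VM credentials locally (illustrative sketch only).
VM_ADMIN_USERNAME="vmadmin$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 8)"
VM_ADMIN_PASSWORD="$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)Aa1!"
echo "Generated username: ${VM_ADMIN_USERNAME}"
# azd env set AZURE_ENV_VM_ADMIN_USERNAME "$VM_ADMIN_USERNAME"
# azd env set AZURE_ENV_VM_ADMIN_PASSWORD "$VM_ADMIN_PASSWORD"
```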
-
-### 3.3 Advanced Configuration (Optional)
-
-
-Configurable Parameters
-
-You can customize various deployment settings before running `azd up`, including Azure regions, AI model configurations (deployment type, version, capacity), container registry settings, and resource names.
-
-📖 **Complete Guide:** See [Parameter Customization Guide](./CustomizingAzdParameters.md) for the full list of available parameters and their usage.
-
-
-
-
-Reuse Existing Resources
-
-To optimize costs and integrate with your existing Azure infrastructure, you can configure the solution to reuse compatible resources already deployed in your subscription.
-
-**Supported Resources for Reuse:**
-
-- **Log Analytics Workspace:** Integrate with your existing monitoring infrastructure by reusing an established Log Analytics workspace for centralized logging and monitoring. [Configuration Guide](./re-use-log-analytics.md)
-
-- **Azure AI Foundry Project:** Leverage your existing AI Foundry project and deployed models to avoid duplication and reduce provisioning time. [Configuration Guide](./re-use-foundry-project.md)
-
-**Key Benefits:**
-- **Cost Optimization:** Eliminate duplicate resource charges
-- **Operational Consistency:** Maintain unified monitoring and AI infrastructure
-- **Faster Deployment:** Skip resource creation for existing compatible services
-- **Simplified Management:** Reduce the number of resources to manage and monitor
-
-**Important Considerations:**
-- Ensure existing resources meet the solution's requirements and are in compatible regions
-- Review access permissions and configurations before reusing resources
-- Consider the impact on existing workloads when sharing resources
-
-
-
-## Step 4: Deploy the Solution
-
-💡 **Before You Start:** If you encounter any issues during deployment, check our [Troubleshooting Guide](./TroubleShootingSteps.md) for common solutions.
-
-⚠️ **Critical: Redeployment Warning:** If you have previously run `azd up` in this folder (i.e., a `.azure` folder exists), you must [create a fresh environment](#creating-a-new-environment) to avoid conflicts and deployment failures.
-
-### 4.1 Authenticate with Azure
-
-```shell
-azd auth login
-```
-
-**For specific tenants:**
-```shell
-azd auth login --tenant-id <tenant-id>
-```
-
-> **Finding Tenant ID:**
- > 1. Open the [Azure Portal](https://portal.azure.com/).
- > 2. Navigate to **Microsoft Entra ID** from the left-hand menu.
- > 3. Under the **Overview** section, locate the **Tenant ID** field. Copy the value displayed.
-
-### 4.2 Start Deployment
-
-```shell
-azd up
-```
-
-**During deployment, you'll be prompted for:**
-1. **Environment name** (e.g., "docgen") - Must be 3-16 characters long, alphanumeric only
-2. **Azure subscription** selection
-3. **Azure AI Foundry deployment region** - Select a region with available OpenAI model quota for AI operations
-4. **Primary location** - Select the region where your infrastructure resources will be deployed
-5. **Resource group** selection (create new or use existing)
-
-**Expected Duration:** 6-8 minutes for default configuration
-
-**⚠️ Deployment Issues:** If you encounter errors or timeouts, try a different region as there may be capacity constraints. For detailed error solutions, see our [Troubleshooting Guide](./TroubleShootingSteps.md).
-
-### 4.3 Get Application URL
-
-After successful deployment:
-
-1. Open [Azure Portal](https://portal.azure.com/)
-2. Navigate to your resource group
-3. Find the App Service with "app" in the name
-4. Copy the **Application URI**
-
-⚠️ **Important:** Complete [Post-Deployment Steps](#step-5-post-deployment-configuration) before accessing the application.
-
-## Step 5: Post-Deployment Configuration
-
-### 5.1 Sample Data Import
-
-1. Once the deployment has completed successfully, and if you would like to use the sample data, open a **Git Bash** terminal and run the bash command printed at the end of the deployment. It will look like the following:
- ```shell
- bash ./infra/scripts/process_sample_data.sh
- ```
-   If you don't have an azd environment, you need to pass the required parameters along with the command:
- ```shell
- bash ./infra/scripts/process_sample_data.sh
- ```
-
-### 5.2 Configure Authentication (Optional)
-
-1. Follow [App Authentication Configuration](./AppAuthentication.md)
-2. Wait up to 10 minutes for authentication changes to take effect
-
-### 5.3 Verify Deployment
-
-1. Access your application using the URL from Step 4.3
-2. Confirm the application loads successfully
-3. Verify you can sign in with your authenticated account
-
-## Step 6: Clean Up (Optional)
-
-### Remove All Resources
-
-```shell
-azd down
-```
-
-> **Note:** To purge resources and clean up after deployment, use the `azd down` command or follow the [Delete Resource Group Guide](./DeleteResourceGroup.md) for manual cleanup through the Azure Portal. If you deployed with `enableRedundancy=true` and Log Analytics workspace replication is enabled, you must disable replication before running `azd down`, otherwise the resource group deletion will fail. Follow the steps in [Handling Log Analytics Workspace Deletion with Replication Enabled](./LogAnalyticsReplicationDisable.md), wait until replication returns `false`, then run `azd down`.
-
-### Manual Cleanup (if needed)
-
-If deployment fails or you need to clean up manually:
-
-- Follow [Delete Resource Group Guide](./DeleteResourceGroup.md)
-- See section below for "Deleting Resources After a Failed Deployment"
-
-## Local Development & Debugging
-
-After deploying the solution to Azure, you can run and debug the application locally by connecting to your deployed Azure resources.
-
-### Configure Environment Variables
-
-1. Create a `.env` file in the `src` directory of your project
-2. Set the `APP_ENV` variable to match your deployed environment name:
- ```
   APP_ENV=<your-environment-name>
- ```
-3. Authenticate with Azure CLI to access deployed resources:
- ```shell
- az login
- ```
-
-The application will use the Azure CLI credentials to connect to the deployed Azure resources (Azure AI Search, Cosmos DB, etc.) using the environment name specified in `APP_ENV`.
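The configuration steps above can be scripted as a quick sketch; `docgendev` below is a placeholder for your actual azd environment name.

```shell
# Create src/.env with the APP_ENV setting (placeholder value shown).
APP_ENV_NAME="docgendev"   # replace with your azd environment name
mkdir -p src
printf 'APP_ENV=%s\n' "$APP_ENV_NAME" > src/.env
cat src/.env
```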
-
-For complete local development setup instructions, see the [Local Development Setup Guide](./LocalDevelopmentSetup.md).
-
-## 🛠️ Troubleshooting
-
-If you encounter issues during deployment, see our comprehensive [Troubleshooting Guide](./TroubleShootingSteps.md) for solutions to common problems.
-
-## Managing Multiple Environments
-
-### Recover from Failed Deployment
-
-If your deployment failed or encountered errors, here are the steps to recover:
-
-
-Recover from Failed Deployment
-
-
-1. **Try a different region:** Create a new environment and select a different Azure region during deployment
-2. **Clean up and retry:** Use `azd down` to remove failed resources, then `azd up` to redeploy
-3. **Check troubleshooting:** Review [Troubleshooting Guide](./TroubleShootingSteps.md) for specific error solutions
-4. **Fresh start:** Create a completely new environment with a different name
-
-**Example Recovery Workflow:**
-```shell
-# Remove failed deployment (optional)
-azd down
-
-# Create new environment (3-16 chars, alphanumeric only)
-azd env new docgenretry
-
-# Deploy with different settings/region
-azd up
-```
-
-
-
-### Creating a New Environment
-
-If you need to deploy to a different region, test different configurations, or create additional environments:
-
-
-Create a New Environment
-
-**Create Environment Explicitly:**
-```shell
-# Create a new named environment (3-16 characters, alphanumeric only)
-azd env new <environment-name>
-
-# Select the new environment
-azd env select <environment-name>
-
-# Deploy to the new environment
-azd up
-```
-
-**Example:**
-```shell
-# Create a new environment for production (valid: 3-16 chars)
-azd env new docgenprod
-
-# Switch to the new environment
-azd env select docgenprod
-
-# Deploy with fresh settings
-azd up
-```
-
-> **Environment Name Requirements:**
-> - **Length:** 3-16 characters
-> - **Characters:** Alphanumeric only (letters and numbers)
-> - **Valid examples:** `docgen`, `test123`, `myappdev`, `prod2024`
-> - **Invalid examples:** `co` (too short), `my-very-long-environment-name` (too long), `test_env` (underscore not allowed), `myapp-dev` (hyphen not allowed)
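These naming rules can be checked locally before calling `azd env new`. The helper below mirrors the stated rule (3-16 alphanumeric characters); it's an approximation for convenience, not azd's actual validator.

```shell
# Returns success if the name is 3-16 alphanumeric characters.
is_valid_env_name() {
  case "$1" in
    *[!A-Za-z0-9]*) return 1 ;;               # reject hyphens, underscores, etc.
  esac
  [ "${#1}" -ge 3 ] && [ "${#1}" -le 16 ]     # enforce length bounds
}

is_valid_env_name "docgen"     && echo "docgen: valid"
is_valid_env_name "my-app-dev" || echo "my-app-dev: invalid (hyphen)"
```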
-
-
-
-
-Switch Between Environments
-
-**List Available Environments:**
-```shell
-azd env list
-```
-
-**Switch to Different Environment:**
-```shell
-azd env select <environment-name>
-```
-
-**View Current Environment:**
-```shell
-azd env get-values
-```
-
-
-
-### Best Practices for Multiple Environments
-
-- **Use descriptive names:** `docgendev`, `docgenprod`, `docgentest` (remember: 3-16 chars, alphanumeric only)
-- **Different regions:** Deploy to multiple regions for testing quota availability
-- **Separate configurations:** Each environment can have different parameter settings
-- **Clean up unused environments:** Use `azd down` to remove environments you no longer need
-
-## Next Steps
-
-Now that your deployment is complete and tested, explore these resources to enhance your experience:
-
-🚀 **Get Started:**
-
-- [Sample Questions](./SampleQuestions.md) - Try these sample questions to explore the solution's capabilities
-- Test the application with your own documents and queries
-
-🔧 **Development & Customization:**
-
-- [Local Development Setup](./LocalDevelopmentSetup.md) - Set up your local development environment
-- Review [Test Case Flows](../src/TEST_CASE_FLOWS.md) for detailed testing scenarios
-
-📚 **Learn More:**
-
-- Explore the architecture and design principles
-- Understand the solution's components and workflows
-
-## Need Help?
-
-- 🐛 **Issues:** Check [Troubleshooting Guide](./TroubleShootingSteps.md)
-- 💬 **Support:** Review [Support Guidelines](../SUPPORT.md)
-- 🔧 **Development:** See [Contributing Guide](../CONTRIBUTING.md)
-
-## Advanced: Deploy Local Changes
-
-If you've made local modifications to the code and want to deploy them to Azure, follow these steps to swap the configuration files:
-
-> **Note:** To set up and run the application locally for development, see the [Local Development Setup Guide](./LocalDevelopmentSetup.md).
-
-### Step 1: Rename Azure Configuration Files
-
-**In the root directory:**
-1. Rename `azure.yaml` to `azure_custom2.yaml`
-2. Rename `azure_custom.yaml` to `azure.yaml`
-
-### Step 2: Rename Infrastructure Files
-
-**In the `infra` directory:**
-1. Rename `main.bicep` to `main_custom2.bicep`
-2. Rename `main_custom.bicep` to `main.bicep`
-
-### Step 3: Deploy Changes
-
-Run the deployment command:
-```shell
-azd up
-```
-
-> **Note:** These custom files are configured to deploy your local code changes instead of pulling from the GitHub repository.
diff --git a/archive-doc-gen/docs/LocalDevelopmentSetup.md b/archive-doc-gen/docs/LocalDevelopmentSetup.md
deleted file mode 100644
index 4635b89e8..000000000
--- a/archive-doc-gen/docs/LocalDevelopmentSetup.md
+++ /dev/null
@@ -1,506 +0,0 @@
-# Local Development Setup Guide
-
-This guide provides comprehensive instructions for setting up the Document Generation Solution Accelerator for local development across Windows and Linux platforms.
-
-## Important Setup Notes
-
-### Multi-Service Architecture
-
-This application consists of **two separate services** that run independently:
-
-1. **Backend API** - REST API server for the frontend
-2. **Frontend** - React-based user interface
-
-> **⚠️ Critical: Each service must run in its own terminal/console window**
->
-> - **Do NOT close terminals** while services are running
-> - Open **2 separate terminal windows** for local development
-> - Each service will occupy its terminal and show live logs
-
-
-### Path Conventions
-
-**All paths in this guide are relative to the repository root directory:**
-
-```bash
-document-generation-solution-accelerator/ ← Repository root (start here)
-├── src/
-│ ├── backend/
-│ │ ├── api/ ← API endpoints and routes
-│ │ ├── auth/ ← Authentication modules
-│ │ ├── helpers/ ← Utility and helper functions
-│ │ ├── history/ ← Chat/session history management
-│ │ ├── security/ ← Security-related modules
-│ │ └── settings.py ← Backend configuration
-│ ├── frontend/
-│ │ ├── src/ ← React/TypeScript source
-│ │ └── package.json ← Frontend dependencies
-│ ├── static/ ← Static web assets
-│ ├── tests/ ← Unit and integration tests
-│ ├── app.py ← Main Flask application entry point
-│ ├── .env ← Main application config file
-│ └── requirements.txt ← Python dependencies
-├── scripts/
-│ ├── prepdocs.py ← Document processing script
-│ ├── auth_init.py ← Authentication setup
-│ ├── data_preparation.py ← Data pipeline scripts
-│ └── config.json ← Scripts configuration
-├── infra/
-│ ├── main.bicep ← Main infrastructure template
-│ ├── scripts/ ← Infrastructure scripts
-│ └── main.parameters.json ← Deployment parameters
-├── docs/ ← Documentation (you are here)
-└── tests/ ← End-to-end tests
- └── e2e-test/
-```
-
-**Before starting any step, ensure you are in the repository root directory:**
-
-```bash
-# Verify you're in the correct location
-pwd # Linux/macOS - should show: .../document-generation-solution-accelerator
-Get-Location # Windows PowerShell - should show: ...\document-generation-solution-accelerator
-
-# If not, navigate to repository root
-cd path/to/document-generation-solution-accelerator
-```
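If you prefer a programmatic check, a small helper (assuming the top-level layout sketched above) can confirm you're at the repository root before running any scripts.

```shell
# Succeeds only when the expected top-level folders are present.
in_repo_root() {
  [ -d src ] && [ -d infra ] && [ -d docs ]
}

if in_repo_root; then
  echo "OK: repository root"
else
  echo "Not at repository root - cd to document-generation-solution-accelerator first"
fi
```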
-
-## Step 1: Prerequisites - Install Required Tools
-
-Install these tools before you start:
-- [Visual Studio Code](https://code.visualstudio.com/) with the following extensions:
- - [Azure Tools](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack)
- - [Bicep](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep)
- - [Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
-- [Python 3.11](https://www.python.org/downloads/). **Important:** Check "Add Python to PATH" during installation.
-- [PowerShell 7.0+](https://github.com/PowerShell/PowerShell#get-powershell).
-- [Node.js (LTS)](https://nodejs.org/en).
-- [Git](https://git-scm.com/downloads).
-- [Azure Developer CLI (azd) v1.18.0+](https://learn.microsoft.com/en-us/azure/developer/azure-developer-cli/install-azd).
-- [Microsoft ODBC Driver 17](https://learn.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver16) for SQL Server.
-
-
-### Windows Development
-
-#### Option 1: Native Windows (PowerShell)
-
-```powershell
-# Install Python 3.11+ and Git
-winget install Python.Python.3.11
-winget install Git.Git
-
-# Install Node.js for frontend
-winget install OpenJS.NodeJS.LTS
-
-# Install uv package manager
-py -3.11 -m pip install uv
-```
-
-**Note**: On Windows, use `py -3.11 -m uv` instead of `uv` for all commands to ensure you're using Python 3.11.
-
-#### Option 2: Windows with WSL2 (Recommended)
-
-```bash
-# Install WSL2 first (run in PowerShell as Administrator):
-# wsl --install -d Ubuntu
-
-# Then in WSL2 Ubuntu terminal:
-sudo apt update && sudo apt install python3.11 python3.11-venv git curl nodejs npm -y
-
-# Install uv
-curl -LsSf https://astral.sh/uv/install.sh | sh
-source ~/.bashrc
-```
-
-### Linux Development
-
-#### Ubuntu/Debian
-
-```bash
-# Install prerequisites
-sudo apt update && sudo apt install python3.11 python3.11-venv git curl nodejs npm -y
-
-# Install uv package manager
-curl -LsSf https://astral.sh/uv/install.sh | sh
-source ~/.bashrc
-```
-
-#### RHEL/CentOS/Fedora
-
-```bash
-# Install prerequisites
-sudo dnf install python3.11 python3.11-devel git curl gcc nodejs npm -y
-
-# Install uv
-curl -LsSf https://astral.sh/uv/install.sh | sh
-source ~/.bashrc
-```
-
-
-## Step 2: Clone the Repository
-
-Choose a location on your local machine where you want to store the project files. We recommend creating a dedicated folder for your development projects.
-
-#### Using Command Line/Terminal
-
-1. **Open your terminal or command prompt, navigate to your desired directory, and clone the repository:**
- ```bash
- git clone https://github.com/microsoft/document-generation-solution-accelerator.git
- ```
-
-2. **Navigate to the project directory:**
- ```bash
- cd document-generation-solution-accelerator
- ```
-
-3. **Open the project in Visual Studio Code:**
- ```bash
- code .
- ```
-
-
-## Step 3: Development Tools Setup
-
-### Visual Studio Code (Recommended)
-
-#### Required Extensions
-
-Create `.vscode/extensions.json` in the workspace root and copy the following JSON:
-
-```json
-{
- "recommendations": [
- "ms-python.python",
- "ms-python.pylint",
- "ms-python.black-formatter",
- "ms-python.isort",
- "ms-vscode-remote.remote-wsl",
- "ms-vscode-remote.remote-containers",
- "redhat.vscode-yaml",
- "ms-vscode.azure-account",
- "ms-python.mypy-type-checker"
- ]
-}
-```
-
-VS Code will prompt you to install these recommended extensions when you open the workspace.
-
-#### Settings Configuration
-
-Create `.vscode/settings.json` and copy the following JSON:
-
-```json
-{
- "python.defaultInterpreterPath": "./.venv/bin/python",
- "python.terminal.activateEnvironment": true,
- "python.formatting.provider": "black",
- "python.linting.enabled": true,
- "python.linting.pylintEnabled": true,
- "python.testing.pytestEnabled": true,
- "python.testing.unittestEnabled": false,
- "files.associations": {
- "*.yaml": "yaml",
- "*.yml": "yaml"
- }
-}
-```
-
-## Step 4: Azure Authentication Setup
-
-Before configuring services, authenticate with Azure:
-
-```bash
-# Login to Azure CLI
-az login
-
-# Set your subscription
-az account set --subscription "your-subscription-id"
-
-# Verify authentication
-az account show
-```
-
-## Step 5: Local Setup/Deployment
-
-Follow these steps to set up and run the application locally:
-
-### Local Deployment
-
-You can refer to the local deployment guide here: [Local Deployment Guide](https://github.com/microsoft/document-generation-solution-accelerator/blob/main/docs/DeploymentGuide.md)
-
-### 5.1. Open the App Folder
-Navigate to the `src` directory of the repository using Visual Studio Code.
-
-### 5.2. Configure Environment Variables
-- Copy the `.env.sample` file to a new file named `.env`.
-- Update the `.env` file with the required values. You can find all of these values in the Azure Portal: open your deployed resource group, select the App Service, and view its environment variables.
-
-- Alternatively, if resources were provisioned using `azd provision` or `azd up`, a `.env` file is automatically generated at `.azure/<environment-name>/.env`. To find your `<environment-name>`, run `azd env list` and note which environment is the default.
-
-> **Note**: After adding all environment variables to the `.env` file, change the value of **`APP_ENV`** from:
-```
-APP_ENV="Prod"
-```
-**to:**
-```
-APP_ENV="Dev"
-```
-
-This change is required for running the application in local development mode.
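-If you prefer the command line, the same switch can be made with `sed`. The snippet below demonstrates the edit on a scratch copy; in the repo you would point it at `src/.env` instead (note that macOS `sed` requires `-i ''` rather than `-i`):
-
-```bash
-# Demo on a scratch copy; in the repo, target src/.env instead.
-printf 'APP_ENV="Prod"\nOTHER_SETTING="x"\n' > /tmp/demo.env
-
-# Flip APP_ENV from Prod to Dev in place (GNU sed syntax)
-sed -i 's/^APP_ENV="Prod"/APP_ENV="Dev"/' /tmp/demo.env
-
-# Confirm the change
-grep '^APP_ENV=' /tmp/demo.env
-```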
-
-
-### 5.3. Required Azure RBAC Permissions
-
-To run the application locally, your Azure account needs the following role assignments on the deployed resources:
-
-#### 5.3.1. App Configuration Access
-```bash
-# Get your principal ID
-PRINCIPAL_ID=$(az ad signed-in-user show --query id -o tsv)
-
-# Assign App Configuration Data Reader role
-az role assignment create \
-  --assignee $PRINCIPAL_ID \
-  --role "App Configuration Data Reader" \
-  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AppConfiguration/configurationStores/<app-config-store-name>"
-```
-
-#### 5.3.2. Cosmos DB Access
-```bash
-# Assign Cosmos DB Built-in Data Contributor role
-az cosmosdb sql role assignment create \
-  --account-name <cosmosdb-account-name> \
-  --resource-group <resource-group-name> \
-  --role-definition-name "Cosmos DB Built-in Data Contributor" \
-  --principal-id $PRINCIPAL_ID \
-  --scope "/"
-```
-> **Note**: After local deployment is complete, run the post-deployment script so that all required roles are assigned automatically.
-
-### 5.4. Running with Automated Script
-
-For convenience, you can use the provided startup scripts that handle environment setup and start both services:
-
-**Windows:**
-```cmd
-cd src
-.\start.cmd
-```
-
-**macOS/Linux:**
-```bash
-cd src
-chmod +x start.sh
-./start.sh
-```
-### 5.5. Start the Application
-- Run `start.cmd` (Windows) or `start.sh` (Linux/Mac) to:
- - Install backend dependencies.
- - Install frontend dependencies.
- - Build the frontend.
- - Start the backend server.
-- Alternatively, you can run the backend in debug mode using the VS Code debug configuration defined in `.vscode/launch.json`.
-
-
-## Step 6: Running Backend and Frontend Separately
-
-> **📋 Terminal Reminder**: This section requires **two separate terminal windows** - one for the Backend API and one for the Frontend. Keep both terminals open while running. All commands assume you start from the **repository root directory**.
-
-### 6.1. Create Virtual Environment (Recommended)
-
-Open your terminal and navigate to the root folder of the project, then create the virtual environment:
-
-```bash
-# Navigate to the project root folder
-cd document-generation-solution-accelerator
-
-# Create virtual environment in the root folder
-python -m venv .venv
-
-# Activate virtual environment (Windows, Git Bash; in PowerShell use .\.venv\Scripts\Activate.ps1)
-source .venv/Scripts/activate
-
-# Activate virtual environment (macOS/Linux)
-source .venv/bin/activate
-```
-
-> **Note**: After activation, you should see `(.venv)` in your terminal prompt indicating the virtual environment is active.
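-To double-check which interpreter is active, you can print `sys.prefix`; inside an activated virtual environment the printed path ends in `.venv`:
-
-```bash
-# Show the active interpreter's prefix; inside an activated venv this
-# points at the .venv directory rather than the system Python install.
-python -c "import sys; print(sys.prefix)"
-```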
-
-### 6.2. Install Dependencies and Run
-
-To develop and run the backend API locally:
-
-```bash
-# Navigate to the API folder (while virtual environment is activated)
-cd src/
-
-# Upgrade pip
-python -m pip install --upgrade pip
-
-# Install Python dependencies
-pip install -r requirements.txt
-
-# Install Frontend Packages
-cd frontend
-
-npm install
-npm run build
-
-# Run the backend API (Windows)
-cd ..
-
-start http://127.0.0.1:50505
-call python -m uvicorn app:app --port 50505 --reload
-
-# Run the backend API (macOS)
-cd ..
-
-open http://127.0.0.1:50505
-python -m uvicorn app:app --port 50505 --reload
-
-# Run the backend API (Linux)
-cd ..
-
-xdg-open http://127.0.0.1:50505
-python -m uvicorn app:app --port 50505 --reload
-
-```
-
-> **Note**: Make sure your virtual environment is activated before running these commands. You should see `(.venv)` in your terminal prompt when the virtual environment is active.
-
-The app runs at `http://127.0.0.1:50505/#/` by default.
-
-## Step 7: Verify All Services Are Running
-
-Before using the application, confirm all services are running correctly:
-
-### 7.1. Terminal Status Checklist
-
-| Terminal | Service | Command | Expected Output | URL |
-|----------|---------|---------|-----------------|-----|
-| **Terminal 1** | Backend API | `python -m uvicorn app:app --port 50505 --reload` | `INFO: Application startup complete` | http://127.0.0.1:50505 |
-| **Terminal 2** | Frontend (Dev) | `npm run dev` | `Local: http://localhost:5173/` | http://localhost:5173 |
-
-### 7.2. Quick Verification
-
-**1. Check Backend API:**
-```bash
-# In a new terminal
-curl http://127.0.0.1:50505/health
-# Expected: {"status":"healthy"} or similar JSON response
-```
-
-**2. Check Frontend:**
-- Open browser to http://127.0.0.1:50505 (production build) or http://localhost:5173 (dev server)
-- Should see the Document Generation UI
-- If authentication is configured, you'll be redirected to Azure AD login
-
-### 7.3. Common Issues
-
-**Service not starting?**
-- Ensure you're in the correct directory (`src/` for backend)
-- Verify virtual environment is activated (you should see `(.venv)` in prompt)
-- Check that port is not already in use (50505 for API, 5173 for frontend dev)
-- Review error messages in the terminal
-
-**Can't access services?**
-- Verify firewall isn't blocking ports 50505 or 5173
-- Try `http://localhost:port` instead of `http://127.0.0.1:port`
-- Ensure services show "startup complete" messages
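-The port-in-use check can be done without extra tools, using the `python3` interpreter already installed for the backend (shown here for port 50505; substitute 5173 for the frontend dev server):
-
-```bash
-# Try to bind the backend port; failure to bind means something is already using it.
-python3 -c "
-import socket
-s = socket.socket()
-try:
-    s.bind(('127.0.0.1', 50505))
-    print('port 50505 is free')
-except OSError:
-    print('port 50505 is in use')
-finally:
-    s.close()
-"
-```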
-
-## Step 8: Next Steps
-
-Once all services are running (as confirmed in Step 7), you can:
-
-1. **Access the Application**: Open `http://127.0.0.1:50505` in your browser to explore the Document Generation UI
-2. **Explore Sample Questions**: Follow [SampleQuestions.md](SampleQuestions.md) for example prompts and use cases
-3. **Understand the Architecture**: Review the codebase starting with `src/backend/` directory
-
-## Troubleshooting
-
-### Common Issues
-
-#### Python Version Issues
-
-```bash
-# Check available Python versions
-python3 --version
-python3.11 --version
-
-# If python3.11 not found, install it:
-# Ubuntu: sudo apt install python3.11
-# macOS: brew install python@3.11
-# Windows: winget install Python.Python.3.11
-```
-
-#### Virtual Environment Issues
-
-```bash
-# Recreate virtual environment
-rm -rf .venv # Linux/macOS
-# or Remove-Item -Recurse .venv # Windows PowerShell
-
-uv venv .venv
-# Activate and reinstall
-source .venv/bin/activate # Linux/macOS
-# or .\.venv\Scripts\Activate.ps1 # Windows
-uv sync --python 3.11
-```
-
-#### Permission Issues (Linux/macOS)
-
-```bash
-# Fix ownership of files
-sudo chown -R $USER:$USER .
-
-# Fix uv permissions
-chmod +x ~/.local/bin/uv
-```
-
-#### Windows-Specific Issues
-
-```powershell
-# PowerShell execution policy
-Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
-
-# Long path support (Windows 10 1607+, run as Administrator)
-New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "LongPathsEnabled" -Value 1 -PropertyType DWORD -Force
-
-# SSL certificate issues
-python -m pip install uv
-```
-
-### Azure Authentication Issues
-
-```bash
-# Login to Azure CLI
-az login
-
-# Set subscription
-az account set --subscription "your-subscription-id"
-
-# Test authentication
-az account show
-```
-
-### Environment Variable Issues
-
-```bash
-# Check environment variables are loaded
-env | grep AZURE # Linux/macOS
-Get-ChildItem Env:AZURE* # Windows PowerShell
-
-# Validate .env file format
-cat .env | grep -v '^#' | grep '=' # Should show key=value pairs
-```
-
-## Related Documentation
-
-- [Deployment Guide](DeploymentGuide.md) - Instructions for production deployment.
-- [Delete Resource Group](DeleteResourceGroup.md) - Steps to safely delete the Azure resource group created for the solution.
-- [App Authentication Setup](AppAuthentication.md) - Guide to configure application authentication and add support for additional platforms.
-- [PowerShell Setup](PowershellSetup.md) - Instructions for setting up PowerShell and required scripts.
-- [Quota Check](QuotaCheck.md) - Steps to verify Azure quotas and ensure required limits before deployment.
diff --git a/archive-doc-gen/docs/LogAnalyticsReplicationDisable.md b/archive-doc-gen/docs/LogAnalyticsReplicationDisable.md
deleted file mode 100644
index f4379a84a..000000000
--- a/archive-doc-gen/docs/LogAnalyticsReplicationDisable.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# 🛠 Handling Log Analytics Workspace Deletion with Replication Enabled
-
-If redundancy (replication) is enabled for your Log Analytics workspace, you must disable it before deleting the workspace or resource group. Otherwise, deletion will fail.
-
-## ✅ Steps to Disable Replication Before Deletion
-Run the following Azure CLI command. Note: This operation may take about 5 minutes to complete.
-
-```bash
-az resource update --ids "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{logAnalyticsName}" --set properties.replication.enabled=false
-```
-
-Replace:
-- `{subscriptionId}` → Your Azure subscription ID
-- `{resourceGroupName}` → The name of your resource group
-- `{logAnalyticsName}` → The name of your Log Analytics workspace
-
-Optional: Verify replication disabled (should output `false`):
-```bash
-az resource show --ids "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{logAnalyticsName}" --query properties.replication.enabled -o tsv
-```
-
-## ✅ After Disabling Replication
-You can safely delete:
-- The Log Analytics workspace (manual)
-- The resource group (manual), or
-- All provisioned resources via `azd down`
-
-Return to: [Deployment Guide](./DeploymentGuide.md)
diff --git a/archive-doc-gen/docs/PowershellSetup.md b/archive-doc-gen/docs/PowershellSetup.md
deleted file mode 100644
index 76d3de4c1..000000000
--- a/archive-doc-gen/docs/PowershellSetup.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Add PowerShell 7 to PATH in Windows
-
-This guide will help you add **PowerShell 7** (PowerShell Core) to your system’s PATH variable on Windows, so you can easily run it from any Command Prompt or Run dialog.
-
-## Prerequisites
-
-- You should have **PowerShell 7** installed on your machine. If you haven’t installed it yet, you can download it following the guide here: [Installing PowerShell on Windows | Microsoft Learn](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.5).
-- **Administrative privileges are not required** unless you're modifying system-wide environment variables. You can modify your **user-specific PATH** without admin rights.
-
-## Steps to Add PowerShell 7 to PATH
-
-### 1. Open **System Properties**
- - Press `Win + X` and choose **System**.
- - Click on **Advanced system settings** on the left sidebar. This will open the **System Properties** window.
- - In the **System Properties** window, click on the **Environment Variables** button at the bottom.
-
-### 2. Edit User Environment Variables
- - In the **Environment Variables** window, under **User variables**, find the `Path` variable.
- - Select the `Path` variable and click **Edit**. (If the `Path` variable doesn’t exist, click **New** and name it `Path`.)
-
-### 3. Check if PowerShell 7 Path is Already in PATH
- - Before adding the path, make sure the following path is not already present in the list:
- ```
- C:\Program Files\PowerShell\7\
- ```
- - If the path is already there, you don't need to add it again.
-### 4. Add PowerShell 7 Path
- - If the path is not already in the list, click **New** in the **Edit Environment Variable** window.
- - Add the following path to the list:
- ```
- C:\Program Files\PowerShell\7\
- ```
- > **Note:** If you installed PowerShell 7 in a custom location, replace the above path with the correct one.
-### 5. Save Changes
- - After adding the path, click **OK** to close the **Edit Environment Variable** window.
- - Click **OK** again to close the **Environment Variables** window.
- - Finally, click **OK** to exit the **System Properties** window.
-### 6. Verify PowerShell 7 in PATH
- - Open **Command Prompt** or **Run** (press `Win + R`).
- - Type `pwsh` and press Enter.
- - If PowerShell 7 opens, you've successfully added it to your PATH!
----
-## Troubleshooting
-- **PowerShell 7 not opening:** Ensure the path to PowerShell 7 is entered correctly. If you're using a custom installation folder, check that the correct path is added to the `Path` variable.
-- **Changes not taking effect:** Try restarting your computer or logging out and logging back in for the changes to apply.
\ No newline at end of file
diff --git a/archive-doc-gen/docs/QuotaCheck.md b/archive-doc-gen/docs/QuotaCheck.md
deleted file mode 100644
index 7cde62681..000000000
--- a/archive-doc-gen/docs/QuotaCheck.md
+++ /dev/null
@@ -1,103 +0,0 @@
-## Check Quota Availability Before Deployment
-
-Before deploying the accelerator, **ensure sufficient quota availability** for the required model.
-
-> **For Global Standard | GPT-4.1: ensure capacity of at least 150k tokens post-deployment for optimal performance.**
-
-> **For Standard | GPT-4: ensure a minimum of 30k–40k tokens for best results.**
-
-### Login if you have not done so already
-```
-azd auth login
-```
-
-
-### 📌 Default Models & Capacities:
-```
-gpt4.1:150, text-embedding-ada-002:80, gpt-4:150
-```
-### 📌 Default Regions:
-```
-francecentral, australiaeast, uksouth, eastus2, northcentralus, swedencentral, westus, westus2, southcentralus
-```
-### Usage Scenarios:
-- No parameters passed → Default models and capacities will be checked in default regions.
-- Only model(s) provided → The script will check for those models in the default regions.
-- Only region(s) provided → The script will check default models in the specified regions.
-- Both models and regions provided → The script will check those models in the specified regions.
-- `--verbose` passed → Enables detailed logging output for debugging and traceability.
-
-### **Input Formats**
-> Use the --models, --regions, and --verbose options for parameter handling:
-
-✔️ Run without parameters to check default models & regions without verbose logging:
- ```
- ./quota_check_params.sh
- ```
-✔️ Enable verbose logging:
- ```
- ./quota_check_params.sh --verbose
- ```
-✔️ Check specific model(s) in default regions:
- ```
- ./quota_check_params.sh --models gpt4.1:150,text-embedding-ada-002:80
- ```
-✔️ Check default models in specific region(s):
- ```
- ./quota_check_params.sh --regions eastus2,westus
- ```
-✔️ Pass both models and regions:
- ```
- ./quota_check_params.sh --models gpt4.1:150 --regions eastus2,westus2
- ```
-✔️ All parameters combined:
- ```
- ./quota_check_params.sh --models gpt-4:150,text-embedding-ada-002:80 --regions eastus2,westus --verbose
- ```
-
-### **Sample Output**
-The final table lists regions with available quota. You can select any of these regions for deployment.
-
-
-
----
-### **If using Azure Portal and Cloud Shell**
-
-1. Navigate to the [Azure Portal](https://portal.azure.com).
-2. Click on **Azure Cloud Shell** in the top right navigation menu.
-3. Run the appropriate command based on your requirement:
-
- **To check quota for the deployment**
-
- ```sh
- curl -L -o quota_check_params.sh "https://raw.githubusercontent.com/microsoft/document-generation-solution-accelerator/main/scripts/quota_check_params.sh"
- chmod +x quota_check_params.sh
- ./quota_check_params.sh
- ```
- - Refer to [Input Formats](#input-formats) for detailed commands.
-
-### **If using VS Code or Codespaces**
-1. Open the terminal in VS Code or Codespaces.
-2. If you're using VS Code, click the dropdown on the right side of the terminal window, and select `Git Bash`.
- 
-3. Navigate to the `scripts` folder where the script files are located and make the script executable:
- ```sh
- cd scripts
- chmod +x quota_check_params.sh
- ```
-4. Run the appropriate script based on your requirement:
-
- **To check quota for the deployment**
-
- ```sh
- ./quota_check_params.sh
- ```
- - Refer to [Input Formats](#input-formats) for detailed commands.
-
-5. If you see the error `bash: az: command not found`, install Azure CLI:
-
- ```sh
- curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
- az login
- ```
-6. Rerun the script after installing Azure CLI.
diff --git a/archive-doc-gen/docs/README_LOCAL.md b/archive-doc-gen/docs/README_LOCAL.md
deleted file mode 100644
index 26def2e2c..000000000
--- a/archive-doc-gen/docs/README_LOCAL.md
+++ /dev/null
@@ -1,218 +0,0 @@
-### Deploy from your local machine
-
-#### Local Setup: Basic Chat Experience
-1. Copy `.env.sample` present in `src` folder to a new file called `.env` and configure the settings as described in the [Environment variables](#environment-variables) section.
-
- These variables are required:
- - `AZURE_OPENAI_RESOURCE`
- - `AZURE_OPENAI_MODEL`
-
- These variables are optional:
- - `AZURE_OPENAI_TEMPERATURE`
- - `AZURE_OPENAI_TOP_P`
- - `AZURE_OPENAI_MAX_TOKENS`
- - `AZURE_OPENAI_STOP_SEQUENCE`
- - `AZURE_OPENAI_SYSTEM_MESSAGE`
-
- See the [documentation](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference#example-response-2) for more information on these parameters.
-
-2. Start the app with `start.cmd` or `start.sh`. This will build the frontend, install backend dependencies, and then start the app. Or, just run the backend in debug mode using the VSCode debug configuration in `.vscode/launch.json`.
-
-3. You can see the local running app at http://127.0.0.1:50505. If you experience a port conflict and the app does not load, stop the application in the terminal (CTRL-C on Windows), edit the `start.cmd` file, and change the port to a value not in use (e.g., 5000).
-
-NOTE: You may need to increase Node's memory limit to avoid running out of memory when building the frontend. macOS: `export NODE_OPTIONS="--max-old-space-size=8192"`; Windows: `set NODE_OPTIONS=--max-old-space-size=8192`.
-
-#### Local Setup: Chat with your data using Azure Cognitive Search
-[More information about Azure OpenAI on your data](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/use-your-data)
-
-1. Update the `AZURE_OPENAI_*` environment variables as described above.
-2. To connect to your data, you need to specify an Azure Cognitive Search index to use. You can [create this index yourself](https://learn.microsoft.com/en-us/azure/search/search-get-started-portal) or use the [Azure AI Foundry](https://oai.azure.com/portal/chat) to create the index for you.
-
- These variables are required when adding your data with Azure AI Search:
- - `DATASOURCE_TYPE` (should be set to `AzureCognitiveSearch`)
- - `AZURE_SEARCH_SERVICE`
- - `AZURE_SEARCH_INDEX`
-
- These variables are optional:
- - `AZURE_SEARCH_USE_SEMANTIC_SEARCH`
- - `AZURE_SEARCH_SEMANTIC_SEARCH_CONFIG`
- - `AZURE_SEARCH_INDEX_TOP_K`
- - `AZURE_SEARCH_ENABLE_IN_DOMAIN`
- - `AZURE_SEARCH_CONTENT_COLUMNS`
- - `AZURE_SEARCH_FILENAME_COLUMN`
- - `AZURE_SEARCH_TITLE_COLUMN`
- - `AZURE_SEARCH_URL_COLUMN`
- - `AZURE_SEARCH_VECTOR_COLUMNS`
- - `AZURE_SEARCH_QUERY_TYPE`
- - `AZURE_SEARCH_PERMITTED_GROUPS_COLUMN`
- - `AZURE_SEARCH_STRICTNESS`
- - `AZURE_OPENAI_EMBEDDING_NAME`
-
-3. Start the app with `start.cmd` or `start.sh`. This will build the frontend, install backend dependencies, and then start the app. Or, just run the backend in debug mode using the VSCode debug configuration in `.vscode/launch.json`.
-4. You can see the local running app at http://127.0.0.1:50505. If you experience a port conflict and the app does not load, stop the application in the terminal (CTRL-C on Windows), edit the `start.cmd` file, and change the port to a value not in use (e.g., 5000).
-
-NOTE: You may need to increase Node's memory limit to avoid running out of memory when building the frontend. macOS: `export NODE_OPTIONS="--max-old-space-size=8192"`; Windows: `set NODE_OPTIONS=--max-old-space-size=8192`.
-
-#### Local Setup: Enable Chat History
-To enable chat history, you will need to set up CosmosDB resources. The ARM template in the `infrastructure` folder can be used to deploy an app service and a CosmosDB with the database and container configured. Then specify these additional environment variables:
-- `AZURE_COSMOSDB_ACCOUNT`
-- `AZURE_COSMOSDB_DATABASE`
-- `AZURE_COSMOSDB_CONVERSATIONS_CONTAINER`
-- `AZURE_COSMOSDB_ACCOUNT_KEY`
-
-As above, start the app with `start.cmd` or `start.sh`, then visit the local running app at http://127.0.0.1:50505. Or, just run the backend in debug mode using the VSCode debug configuration in `.vscode/launch.json`. If you experience a port conflict and the app does not load, stop the application in the terminal (CTRL-C on Windows), edit the `start.cmd` file, and change the port to a value not in use (e.g., 5000).
-
-#### Local Setup: Enable Message Feedback
-To enable message feedback, you will need to set up CosmosDB resources. Then specify this additional environment variable in `.env`:
-
-- `AZURE_COSMOSDB_ENABLE_FEEDBACK=True`
-
-#### Deploy with the Azure CLI
-**NOTE**: If you've made code changes, be sure to **build the app code** with `start.cmd` or `start.sh` before you deploy, otherwise your changes will not be picked up. If you've updated any files in the `frontend` folder, make sure you see updates to the files in the `static` folder before you deploy.
-
-You can use the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) to deploy the app from your local machine. Make sure you have version 2.48.1 or later.
-
-If this is your first time deploying the app, you can use [az webapp up](https://learn.microsoft.com/en-us/cli/azure/webapp?view=azure-cli-latest#az-webapp-up). Run the following two commands from the `src` folder of the repo, updating the placeholder values to your desired app name, resource group, location, and subscription. You can also change the SKU if desired.
-
-1. `az webapp up --runtime PYTHON:3.11 --sku B1 --name <app-name> --resource-group <resource-group> --location <location> --subscription <subscription-id>`
-1. `az webapp config set --startup-file "python3 -m gunicorn app:app" --name <app-name> --resource-group <resource-group>`
-
-If you've deployed the app previously, first run this command to update the appsettings to allow local code deployment:
-
-`az webapp config appsettings set -g <resource-group> -n <app-name> --settings WEBSITE_WEBDEPLOY_USE_SCM=false`
-
-Check the runtime stack for your app by viewing the app service resource in the Azure Portal. If it shows "Python - 3.10", use `PYTHON:3.10` in the runtime argument below. If it shows "Python - 3.11", use `PYTHON:3.11` in the runtime argument below.
-
-Check the SKU in the same way. Use the abbreviated SKU name in the argument below, e.g. for "Basic (B1)" the SKU is `B1`.
-
-Then, use these commands from the `src` folder to deploy your local code to the existing app:
-
-1. `az webapp up --runtime <runtime> --sku <sku> --name <app-name> --resource-group <resource-group>`
-1. `az webapp config set --startup-file "python3 -m gunicorn app:app" --name <app-name> --resource-group <resource-group>`
-
-Make sure that the app name and resource group match exactly for the app that was previously deployed.
-
-Deployment will take several minutes. When it completes, you should be able to navigate to your app at {app-name}.azurewebsites.net.
-
-### Add an identity provider
-After deployment, you will need to add an identity provider to provide authentication support in your app. See [this tutorial](https://learn.microsoft.com/en-us/azure/app-service/scenario-secure-app-authentication-app-service) for more information.
-
-If you don't add an identity provider, the chat functionality of your app will be blocked to prevent unauthorized access to your resources and data.
-
-To remove this restriction, you can add `AUTH_ENABLED=False` to the environment variables. This will disable authentication and allow anyone to access the chat functionality of your app. **This is not recommended for production apps.**
-
-To add further access controls, update the logic in `getUserInfoList` in `frontend/src/pages/chat/Chat.tsx`.
-
-### Common Customization Scenarios (e.g. updating the default chat logo and headers)
-
-The interface allows for easy adaptation of the UI by modifying certain elements, such as the title and logo, through the use of [environment variables](#environment-variables).
-
-- `UI_TITLE`
-- `UI_LOGO`
-- `UI_CHAT_TITLE`
-- `UI_CHAT_LOGO`
-- `UI_CHAT_DESCRIPTION`
-- `UI_FAVICON`
-- `UI_SHOW_SHARE_BUTTON`
-
-Feel free to fork this repository and make your own modifications to the UX or backend logic. You can modify the source (`frontend/src`). For example, you may want to change aspects of the chat display, or expose some of the settings in `app.py` in the UI for users to try out different behaviors. After your code changes, you will need to rebuild the front-end via `start.sh` or `start.cmd`.
-
-### Scalability
-You can configure the number of threads and workers in `gunicorn.conf.py`. After making a change, redeploy your app using the commands listed above.
-
-See the [Oryx documentation](https://github.com/microsoft/Oryx/blob/main/doc/configuration.md) for more details on these settings.
-
-### Debugging your deployed app
-First, add an environment variable on the app service resource called "DEBUG". Set this to "true".
-
-Next, enable logging on the app service. Go to "App Service logs" under Monitoring, and change Application logging to File System. Save the change.
-
-Now, you should be able to see logs from your app by viewing "Log stream" under Monitoring.
-
-### Configuring vector search
-When using your own data with a vector index, ensure these settings are configured on your app:
-- `AZURE_SEARCH_QUERY_TYPE`: can be `vector`, `vectorSimpleHybrid`, or `vectorSemanticHybrid`.
-- `AZURE_OPENAI_EMBEDDING_NAME`: the name of your Ada (text-embedding-ada-002) model deployment on your Azure OpenAI resource.
-- `AZURE_SEARCH_VECTOR_COLUMNS`: the vector columns in your index to use when searching. Join them with `|` like `contentVector|titleVector`.
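-For example, app settings for hybrid vector search might look like the following (the deployment name and vector column names are illustrative; substitute the ones from your own resources):
-
-```
-AZURE_SEARCH_QUERY_TYPE=vectorSemanticHybrid
-AZURE_OPENAI_EMBEDDING_NAME=text-embedding-ada-002
-AZURE_SEARCH_VECTOR_COLUMNS=contentVector|titleVector
-```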
-
-### Changing Citation Display
-The Citation panel is defined at the end of `frontend/src/pages/chat/Chat.tsx`. The citations returned from Azure OpenAI On Your Data will include `content`, `title`, `filepath`, and in some cases `url`. You can customize the Citation section to use and display these as you like. For example, the title element is a clickable hyperlink if `url` is not a blob URL.
-
-```typescript
-const onViewSource = (citation: Citation) => {
-  if (citation.url && !citation.url.includes("blob.core")) {
-    window.open(citation.url, "_blank");
-  }
-};
-```
-
-
-### Best Practices
-We recommend keeping these best practices in mind:
-
-- Reset the chat session (clear chat) if the user changes any settings. Notify the user that their chat history will be lost.
-- Clearly communicate to the user what impact each setting will have on their experience.
-- When you rotate API keys for your AOAI or ACS resource, be sure to update the app settings for each of your deployed apps to use the new key.
-- Pull in changes from `main` frequently to ensure you have the latest bug fixes and improvements, especially when using Azure OpenAI on your data.
-
-**A note on Azure OpenAI API versions**: The application code in this repo will implement the request and response contracts for the most recent preview API version supported for Azure OpenAI. To keep your application up-to-date as the Azure OpenAI API evolves with time, be sure to merge the latest API version update into your own application code and redeploy using the methods described in this document.
-
-## Environment variables
-
-Note: settings starting with `AZURE_SEARCH` are only needed when using Azure OpenAI on your data with Azure AI Search. If not connecting to your data, you only need to specify `AZURE_OPENAI` settings.
-
-| App Setting | Value | Note |
-| --- | --- | ------------- |
-|AZURE_AI_AGENT_API_VERSION|2025-01-01-preview| API version when using the Azure Foundry agent on your data.|
-|AZURE_AI_AGENT_ENDPOINT||The endpoint of the Azure AI foundry project|
-|AZURE_AI_AGENT_MODEL_DEPLOYMENT_NAME||The name of the gpt model|
-|AZURE_SEARCH_SERVICE||The name of your Azure AI Search resource|
-|AZURE_SEARCH_INDEX||The name of your Azure AI Search Index|
-|AZURE_SEARCH_USE_SEMANTIC_SEARCH|False|Whether or not to use semantic search|
-|AZURE_SEARCH_QUERY_TYPE|simple|Query type: simple, semantic, vector, vectorSimpleHybrid, or vectorSemanticHybrid. Takes precedence over AZURE_SEARCH_USE_SEMANTIC_SEARCH|
-|AZURE_SEARCH_SEMANTIC_SEARCH_CONFIG||The name of the semantic search configuration to use if using semantic search.|
-|AZURE_SEARCH_TOP_K|5|The number of documents to retrieve from Azure AI Search.|
-|AZURE_SEARCH_ENABLE_IN_DOMAIN|True|Limits responses to only queries relating to your data.|
-|AZURE_SEARCH_CONTENT_COLUMNS||List of fields in your Azure AI Search index that contains the text content of your documents to use when formulating a bot response. Represent these as a string joined with "|", e.g. `"product_description|product_manual"`|
-|AZURE_SEARCH_FILENAME_COLUMN|| Field from your Azure AI Search index that gives a unique identifier of the source of your data to display in the UI.|
-|AZURE_SEARCH_TITLE_COLUMN||Field from your Azure AI Search index that gives a relevant title or header for your data content to display in the UI.|
-|AZURE_SEARCH_URL_COLUMN||Field from your Azure AI Search index that contains a URL for the document, e.g. an Azure Blob Storage URI. This value is not currently used.|
-|AZURE_SEARCH_VECTOR_COLUMNS||List of fields in your Azure AI Search index that contain vector embeddings of your documents to use when formulating a bot response. Represent these as a string joined with "|", e.g. `"product_description|product_manual"`|
-|AZURE_SEARCH_PERMITTED_GROUPS_COLUMN||Field from your Azure AI Search index that contains AAD group IDs that determine document-level access control.|
-|AZURE_SEARCH_STRICTNESS|3|Integer from 1 to 5 specifying the strictness for the model limiting responses to your data.|
-|AZURE_OPENAI_RESOURCE||the name of your Azure OpenAI resource|
-|AZURE_OPENAI_MODEL||The name of your model deployment|
-|AZURE_OPENAI_ENDPOINT||The endpoint of your Azure OpenAI resource.|
-|AZURE_OPENAI_MODEL_NAME|gpt-35-turbo-16k|The name of the model|
-|AZURE_OPENAI_TEMPERATURE|0|What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. A value of 0 is recommended when using your data.|
-|AZURE_OPENAI_TOP_P|1.0|An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. We recommend setting this to 1.0 when using your data.|
-|AZURE_OPENAI_MAX_TOKENS|1000|The maximum number of tokens allowed for the generated answer.|
-|AZURE_OPENAI_STOP_SEQUENCE||Up to 4 sequences where the API will stop generating further tokens. Represent these as a string joined with "|", e.g. `"stop1|stop2|stop3"`|
-|AZURE_OPENAI_SYSTEM_MESSAGE|You are an AI assistant that helps people find information.|A brief description of the role and tone the model should use|
-|AZURE_OPENAI_PREVIEW_API_VERSION|2024-02-15-preview|API version when using Azure OpenAI on your data|
-|AZURE_OPENAI_STREAM|True|Whether or not to use streaming for the response|
-|AZURE_OPENAI_EMBEDDING_NAME||The name of your embedding model deployment if using vector search.|
-|UI_TITLE|Contoso|Chat title (top left) and page title (HTML)|
-|UI_LOGO||Logo (top left). Defaults to the Contoso logo. Set the URL of your logo image to modify.|
-|UI_CHAT_LOGO||Logo (chat window). Defaults to the Contoso logo. Set the URL of your logo image to modify.|
-|UI_CHAT_TITLE|Start chatting|Title (chat window)|
-|UI_CHAT_DESCRIPTION|This chatbot is configured to answer your questions|Description (chat window)|
-|UI_FAVICON||Defaults to the Contoso favicon. Set the URL of your favicon to modify.|
-|UI_SHOW_SHARE_BUTTON|True|Share button (top right)|
-|SANITIZE_ANSWER|False|Whether to sanitize the answer from Azure OpenAI. Set to True to remove any HTML tags from the response.|
-|USE_PROMPTFLOW|False|Use existing Promptflow deployed endpoint. If set to `True` then both `PROMPTFLOW_ENDPOINT` and `PROMPTFLOW_API_KEY` also need to be set.|
-|PROMPTFLOW_ENDPOINT||URL of the deployed Promptflow endpoint e.g. https://pf-deployment-name.region.inference.ml.azure.com/score|
-|PROMPTFLOW_API_KEY||Auth key for deployed Promptflow endpoint. Note: only Key-based authentication is supported.|
-|PROMPTFLOW_RESPONSE_TIMEOUT|120|Timeout value in seconds for the Promptflow endpoint to respond.|
-|PROMPTFLOW_REQUEST_FIELD_NAME|query|Default field name used to construct the Promptflow request. Note: chat_history is auto-constructed based on the interaction; if your API expects other mandatory fields, you will need to change the request parameters under the `promptflow_request` function.|
-|PROMPTFLOW_RESPONSE_FIELD_NAME|reply|Default field name to process the response from Promptflow request.|
-|PROMPTFLOW_CITATIONS_FIELD_NAME|documents|Default field name to process the citations output from Promptflow request.|
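Several of the settings above (`AZURE_SEARCH_CONTENT_COLUMNS`, `AZURE_SEARCH_VECTOR_COLUMNS`, `AZURE_OPENAI_STOP_SEQUENCE`) are lists represented as pipe-joined strings. A minimal Python sketch of how such a setting can be split into a list; the `split_setting` helper is illustrative only, not code from this repo:

```python
import os

def split_setting(name: str) -> list[str]:
    """Split a pipe-joined app setting into a list of values.

    Returns an empty list when the setting is unset or empty, so callers
    can treat "not configured" and "configured but blank" the same way.
    """
    raw = os.environ.get(name, "")
    return [part for part in raw.split("|") if part]

# Example: the format documented for AZURE_SEARCH_CONTENT_COLUMNS.
os.environ["AZURE_SEARCH_CONTENT_COLUMNS"] = "product_description|product_manual"
print(split_setting("AZURE_SEARCH_CONTENT_COLUMNS"))  # ['product_description', 'product_manual']
```

The same helper would apply to the stop-sequence setting, e.g. `"stop1|stop2|stop3"`.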
diff --git a/archive-doc-gen/docs/SampleQuestions.md b/archive-doc-gen/docs/SampleQuestions.md
deleted file mode 100644
index 6c569607b..000000000
--- a/archive-doc-gen/docs/SampleQuestions.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# Sample Questions
-
-To help you get started, here are some **Sample Prompts** you can ask in the app:
-
-> _Note: Average response time is 7–16 seconds._
-
-## **Sections**
-
-### **Browse**
-The Browse section allows users to explore and retrieve information related to promissory notes. Key functionalities include:
-
-_Sample Questions:_
-
-- What are typical sections in a promissory note?
-- List the details of two promissory notes governed by the laws of the state of California.
-
-### **Generate**
-The Generate section enables users to create new promissory notes with customizable options. Key features include:
-
-_Sample Questions:_
-
-- Generate a promissory note with a proposed $100,000 for Washington State.
-- Remove (section) (you can specify any displayed section).
-- Add a Payment acceleration clause after the payment terms section.
-- Click on Generate Draft.
-
-
-
-### **Draft**
-The Draft section ensures accuracy and completeness of the generated promissory notes. Key tasks include:
-
-_Sample operation:_
-
-- Task: Re-generate text boxes if they did not populate for any section.
-- Task: Re-generate text box for Borrower with the name: Jane Smith.
-
-This structured approach ensures that users can efficiently browse, create, and refine promissory notes while maintaining legal compliance and document accuracy.
diff --git a/archive-doc-gen/docs/TRANSPARENCY_FAQ.md b/archive-doc-gen/docs/TRANSPARENCY_FAQ.md
deleted file mode 100644
index ace333547..000000000
--- a/archive-doc-gen/docs/TRANSPARENCY_FAQ.md
+++ /dev/null
@@ -1,17 +0,0 @@
-## Document Generation Solution Accelerator: Responsible AI FAQ
-- ### What is Build your own copilot - Generic Solution Accelerator?
- This solution accelerator is an open-source GitHub Repository to help create AI assistants using Azure OpenAI Service and Azure AI Search. This can be used by anyone looking for reusable architecture and code snippets to build AI assistants with their own enterprise data. The repository showcases a generic scenario of a user who wants to generate a document template based on a sample set of data.
-
-- ### What can Document Generation Solution Accelerator do?
- The sample solution included focuses on a generic use case - chat with your own data, generate a document template using your own data, and export the document in docx format. The sample data is sourced from generic AI-generated promissory notes; the documents are intended for use as sample data only. The sample solution takes user input in text format and returns LLM responses in text format up to 800 tokens. It uses Prompt flow to search data from the AI Search vector store and summarize the retrieved documents with Azure OpenAI.
-
-- ### What is/are Document Generation Solution Accelerator’s intended use(s)?
- This repository is to be used only as a solution accelerator following the open-source license terms listed in the GitHub repository. The example scenario’s intended purpose is to help users generate a document template to perform their work more efficiently.
-
-- ### How was Document Generation Solution Accelerator evaluated? What metrics are used to measure performance?
- We have used AI Foundry Prompt flow evaluation SDK to test for harmful content, groundedness, and potential security risks.
-
-- ### What are the limitations of Document Generation Solution Accelerator? How can users minimize the impact of Document Generation Solution Accelerator’s limitations when using the system?
- This solution accelerator can only be used as a sample to accelerate the creation of AI assistants. The repository showcases a sample scenario of a user generating a document template. Users should review the system prompts provided and update as per their organizational guidance. Users should run their own evaluation flow either using the guidance provided in the GitHub repository or their choice of evaluation methods. AI-generated content may be inaccurate and should be manually reviewed. Currently, the sample repo is available in English only.
-- ### What operational factors and settings allow for effective and responsible use of Document Generation Solution Accelerator?
- Users can try different values for some parameters like system prompt, temperature, max tokens etc., shared as configurable environment variables, while running evaluations for AI assistants. Please note that these parameters are only provided as guidance to start the configuration, not as a complete list of ways to adjust the system behavior. Please always refer to the latest product documentation for these details or reach out to your Microsoft account team if you need assistance.
diff --git a/archive-doc-gen/docs/TroubleShootingSteps.md b/archive-doc-gen/docs/TroubleShootingSteps.md
deleted file mode 100644
index 28eb59885..000000000
--- a/archive-doc-gen/docs/TroubleShootingSteps.md
+++ /dev/null
@@ -1,157 +0,0 @@
-# 🛠️ Troubleshooting
-
-When deploying Azure resources, you may come across different error codes that stop or delay the deployment process. This section lists some of the most common errors along with possible causes and step-by-step resolutions.
-
-Use these as quick reference guides to unblock your deployments.
-
-## ⚡ Most Frequently Encountered Errors
-
-| Error Code | Common Cause | Full Details |
-|------------|--------------|--------------|
-| **InsufficientQuota** | Not enough quota available in subscription | [View Solution](#quota--capacity-limitations) |
-| **MissingSubscriptionRegistration** | Required feature not registered in subscription | [View Solution](#subscription--access-issues) |
-| **ResourceGroupNotFound** | RG doesn't exist or using old .env file | [View Solution](#resource-group--deployment-management) |
-| **DeploymentModelNotSupported** | Model not available in selected region | [View Solution](#regional--location-issues) |
-| **DeploymentNotFound** | Deployment record not found or was deleted | [View Solution](#resource-group--deployment-management) |
-| **ResourceNotFound** | Resource does not exist or cannot be found | [View Solution](#resource-identification--references) |
-| **SpecialFeatureOrQuotaIdRequired** | Subscription lacks access to specific model | [View Solution](#subscription--access-issues) |
-| **ContainerAppOperationError** | Improperly built container image | [View Solution](#miscellaneous) |
-| **ServiceUnavailable** | Service not available in selected region | [View Solution](#regional--location-issues) |
-| **BadRequest - DatabaseAccount is in a failed provisioning state** | Previous deployment failed | [View Solution](#resource-state--provisioning) |
-| **Unauthorized - Operation cannot be completed without additional quota** | Insufficient quota for requested operation | [View Solution](#subscription--access-issues) |
-| **ResourceGroupBeingDeleted** | Resource group deletion in progress | [View Solution](#resource-group--deployment-management) |
-| **FlagMustBeSetForRestore** | Soft-deleted resource requires restore flag or purge | [View Solution](#miscellaneous) |
-| **ParentResourceNotFound** | Parent resource does not exist or cannot be found | [View Solution](#resource-identification--references) |
-| **AccountProvisioningStateInvalid** | Resource used before provisioning completed | [View Solution](#resource-state--provisioning) |
-| **InternalSubscriptionIsOverQuotaForSku** | Subscription quota exceeded for the requested SKU | [View Solution](#quota--capacity-limitations) |
-| **InvalidResourceGroup** | Invalid resource group configuration | [View Solution](#resource-group--deployment-management) |
-| **RequestDisallowedByPolicy** | Azure Policy blocking the requested operation | [View Solution](#subscription--access-issues) |
-
-## 📖 Table of Contents
-
-- [Subscription & Access Issues](#subscription--access-issues)
-- [Quota & Capacity Limitations](#quota--capacity-limitations)
-- [Regional & Location Issues](#regional--location-issues)
-- [Resource Naming & Validation](#resource-naming--validation)
-- [Resource Identification & References](#resource-identification--references)
-- [Network & Infrastructure Configuration](#network--infrastructure-configuration)
-- [Configuration & Property Errors](#configuration--property-errors)
-- [Resource State & Provisioning](#resource-state--provisioning)
-- [Miscellaneous](#miscellaneous)
-
-## Subscription & Access Issues
-
-| Issue/Error Code | Description | Steps to Resolve |
-|-----------|-------------|------------------|
-| **ReadOnlyDisabledSubscription** | Subscription is disabled or in read-only state | Check that you have an active subscription before starting the deployment.<br/>Depending on the type of Azure subscription, its expiration date may have been reached.<br/>Reactivate the Azure subscription before creating any Azure resource.<br/>See [Reactivate a disabled Azure subscription](https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/subscription-disabled). |
-| **MissingSubscriptionRegistration / AllowBringYourOwnPublicIpAddress** | Required feature not registered in subscription | **Enable the `AllowBringYourOwnPublicIpAddress` feature.** Before deploying the resources, you may need to enable the **Bring Your Own Public IP Address** feature in Azure. This is required only once per subscription.<br/>**Steps:**<br/>1. Register the feature: `az feature register --namespace Microsoft.Network --name AllowBringYourOwnPublicIpAddress`<br/>2. Wait for registration to complete and check the status: `az feature show --namespace Microsoft.Network --name AllowBringYourOwnPublicIpAddress --query properties.state` (the output should show "Registered")<br/>3. Once the feature is registered, refresh the provider: `az provider register --namespace Microsoft.Network`<br/>💡 Note: feature registration may take several minutes to complete and needs to be done only once per Azure subscription. |
-| **Unauthorized - Operation cannot be completed without additional quota** | Insufficient quota for requested operation | Check your quota usage: `az vm list-usage --location "<region>" -o table`<br/>To request more quota, see [VM Quota Request](https://techcommunity.microsoft.com/blog/startupsatmicrosoftblog/how-to-increase-quota-for-specific-types-of-azure-virtual-machines/3792394). |
-| **CrossTenantDeploymentNotPermitted** | Deployment across different Azure AD tenants not allowed | **Check tenant match:** ensure your deployment identity (user or service principal) and the target resource group are in the same tenant: `az account show`, `az group show --name <resource-group>`<br/>**Verify pipeline/service principal:** if using CI/CD, confirm the service principal belongs to the same tenant and has permissions on the resource group.<br/>**Avoid cross-tenant references:** make sure your Bicep doesn't reference subscriptions, resource groups, or resources in another tenant.<br/>**Test a minimal deployment:** deploy a simple resource to the same resource group to confirm the identity and tenant are correct.<br/>**Guest/external accounts:** avoid using guest users from other tenants; use native accounts or service principals in the tenant. |
-| **RequestDisallowedByPolicy** | Azure Policy blocking the requested operation | This typically indicates that an Azure Policy is preventing the requested action due to policy restrictions in your subscription.<br/>For more details and guidance on resolving this issue, see [RequestDisallowedByPolicy](https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/create-upgrade-delete/error-code-requestdisallowedbypolicy). |
-| **SpecialFeatureOrQuotaIdRequired** | Subscription lacks access to specific Azure OpenAI models | This error occurs when your subscription does not have access to certain Azure OpenAI models.<br/>**Example error message:** `SpecialFeatureOrQuotaIdRequired: The current subscription does not have access to this model 'Format:OpenAI,Name:o3,Version:2025-04-16'.`<br/>**Resolution:** to gain access, submit a request using the official form: 👉 [Azure OpenAI Model Access Request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQ1VGQUEzRlBIMVU2UFlHSFpSNkpOR0paRSQlQCN0PWcu)<br/>You'll need this form if you require access to the following restricted models: gpt-5, o3, o3-pro, deep research, reasoning summary, gpt-image-1.<br/>Once your request is approved, redeploy your resource. |
-| **ResourceProviderError** | Resource provider not registered in subscription | This error occurs when the resource provider is not registered in your subscription.<br/>To register it, see the [Register Resource Provider](https://learn.microsoft.com/en-us/azure/azure-resource-manager/troubleshooting/error-register-resource-provider?tabs=azure-cli) documentation. |
-
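The feature-registration steps above boil down to polling `az feature show` until its state reads "Registered". A generic poll-until-true helper, sketched in Python (the `wait_until` helper is hypothetical, not part of this repo):

```python
import time
from typing import Callable

def wait_until(check: Callable[[], bool],
               interval_s: float = 10.0,
               timeout_s: float = 600.0) -> bool:
    """Poll check() until it returns True or the timeout elapses.

    Returns True on success, False if the deadline passed first.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False
```

In practice, `check()` would shell out to `az feature show ... --query properties.state -o tsv` and compare the result against the string "Registered".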
---------------------------------
-
-## Quota & Capacity Limitations
-
-| Issue/Error Code | Description | Steps to Resolve |
-|-----------------|-------------|------------------|
-| **InternalSubscriptionIsOverQuotaForSku / ManagedEnvironmentProvisioningError** | Subscription quota exceeded for the requested SKU | Quotas are applied per resource group, subscription, account, and other scopes. For example, your subscription might be configured to limit the number of vCPUs for a region; if you attempt to deploy a virtual machine with more vCPUs than the permitted amount, you receive an error that the quota was exceeded.<br/>For PowerShell, use the `Get-AzVMUsage` cmdlet to find virtual machine quotas: `Get-AzVMUsage -Location "West US"`<br/>If quota is available you can deploy the application; otherwise, request more quota. |
-| **InsufficientQuota** | Not enough quota available in subscription | Check that you have sufficient quota available in your subscription before deployment.<br/>To verify, see the [quota_check](../docs/QuotaCheck.md) file for details. |
-| **MaxNumberOfRegionalEnvironmentsInSubExceeded** | Maximum Container App Environments limit reached for region | This error occurs when you attempt to create more **Azure Container App Environments** than the regional quota allows for your subscription. Each Azure region has a specific limit on the number of Container App Environments that can be created per subscription.<br/>**Common causes:** deploying to regions with low quota limits (e.g., Sweden Central allows only 1 environment); multiple deployments without cleaning up previous environments; exceeding the standard limit of 15 environments in most major regions.<br/>**Resolution:** delete unused environments in the target region, OR deploy to a different region with available capacity, OR request a quota increase via [Azure Support](https://go.microsoft.com/fwlink/?linkid=2208872).<br/>See [Azure subscription and service limits](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits). |
-| **SkuNotAvailable** | Requested SKU not available in selected location or zone | You receive this error when the resource SKU you've selected (such as VM size) isn't available for a location or zone, or when you're deploying an Azure Spot VM or Spot scale set instance and there isn't any capacity for Azure Spot in this location. For more information, see the Spot error messages documentation. |
-| **Conflict - No available instances to satisfy this request** | Azure App Service has insufficient capacity in the region | This error occurs when Azure App Service doesn't have enough available compute instances in the selected region to provision or scale your app.<br/>**Common causes:** high demand in the selected region (e.g., East US, West Europe); specific SKUs experiencing capacity constraints (Free, Shared, or certain Premium tiers); multiple rapid deployments in the same region.<br/>**Resolution:**<br/>**Wait and retry** (15–30 minutes): `azd up`<br/>**Deploy to a new resource group** (recommended for urgent cases): `azd down --force --purge`, then `azd up`<br/>**Try a different region:** update the region in `main.bicep` or `azure.yaml` to a less congested region (e.g., `westus2`, `centralus`, `northeurope`)<br/>**Use a different SKU/tier:** if using the Free/Shared tier, upgrade to Basic or Standard; check SKU availability with `az appservice list-locations --sku <sku>`<br/>**Reference:** [Azure App Service Plans](https://learn.microsoft.com/en-us/azure/app-service/overview-hosting-plans) |
-
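Before deploying, the `az vm list-usage` output mentioned above can be checked programmatically: its JSON entries carry `currentValue`, `limit`, and a `name.value` key. A small sketch of a headroom check (the `has_quota_headroom` helper is illustrative, not part of this repo):

```python
def has_quota_headroom(usage: list[dict], name: str, requested: int) -> bool:
    """Check whether `requested` more units of quota `name` fit under its limit.

    `usage` is the parsed output of `az vm list-usage --location <region> -o json`,
    whose entries look like {"currentValue": 10, "limit": 20, "name": {"value": "cores", ...}}.
    """
    for item in usage:
        if item["name"]["value"] == name:
            return item["currentValue"] + requested <= item["limit"]
    return False  # unknown quota name: be conservative

sample = [{"currentValue": 10, "limit": 20, "name": {"value": "cores"}}]
print(has_quota_headroom(sample, "cores", 4))   # True: 10 + 4 <= 20
print(has_quota_headroom(sample, "cores", 16))  # False: 10 + 16 > 20
```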
---------------------------------
-
-## Resource Group & Deployment Management
-
-| Issue/Error Code | Description | Steps to Resolve |
-|-----------------|-------------|------------------|
-| **ResourceGroupNotFound** | Specified resource group does not exist | **Option 1:** go to the [Azure Portal](https://portal.azure.com/#home), click **Resource groups**, and search for the resource group in the search bar. If it exists, you can proceed.<br/>**Option 2:** this error can occur if you deploy using the same .env file from a previous deployment. Create a new environment before redeploying: `azd env new <env-name>` |
-| **ResourceGroupBeingDeleted** | Resource group is currently being deleted | Go to the [Azure Portal](https://portal.azure.com/#home), open **Resource groups**, and search for the targeted resource group.<br/>If the resource group is being deleted, you cannot use it; create a new one or use a different resource group. |
-| **DeploymentActive** | Another deployment is already in progress in this resource group | This occurs when a deployment is already in progress and another deployment is triggered in the same resource group.<br/>Cancel the ongoing deployment before starting a new one, and do not initiate a new deployment until the previous one has completed. |
-| **DeploymentCanceled** | Deployment was canceled before completion | **Check deployment history:** Azure Portal → Resource Group → Deployments; review the detailed error message.<br/>**Identify the root cause:** a dependent resource failed to deploy, a validation error occurred, or a manual cancellation was triggered.<br/>**Validate the template:** `az deployment group validate --resource-group <resource-group> --template-file main.bicep`<br/>**Check resource limits/quotas**, fix the failed dependency, then **retry the deployment:** `az deployment group create --resource-group <resource-group> --template-file main.bicep`<br/>💡 **Note:** DeploymentCanceled is a wrapper error; check the inner errors in the deployment logs. |
-| **DeploymentCanceled (user.canceled)** | User manually canceled the deployment | The deployment was manually canceled by a user (Portal, CLI, or pipeline).<br/>Check the deployment history and logs to confirm who canceled it and when.<br/>If accidental, retry the deployment.<br/>For pipelines, ensure no automation or timeout is triggering the cancellation.<br/>Use deployment locks or retry logic to prevent accidental cancellations. |
-| **DeploymentNotFound** | Deployment record not found or was deleted | This occurs when a previous deployment is deleted along with its resource group, and the same RG is then redeployed with the same environment name but in a different location.<br/>Do not change the location when redeploying a deleted RG, OR use new names for the RG and environment during redeployment.<br/>Some resources may be stuck deleting or have dependencies; check the RG's resources and status.<br/>Ensure no resource locks or Azure Policies are blocking deletion.<br/>Retry deletion via CLI/PowerShell: `az group delete --name <resource-group> --yes --no-wait`<br/>Check the Activity Log to identify failing resources.<br/>Escalate to Azure Support if deletion is stuck. |
-
---------------------------------
-
-## Regional & Location Issues
-
-| Issue/Error Code | Description | Steps to Resolve |
-|-----------------|-------------|------------------|
-| **LocationNotAvailableForResourceType** | Resource type not supported in selected region | This error occurs when you attempt to deploy a resource to a region that does not support that specific resource type or SKU.<br/>**Resolution:**<br/>**Verify resource availability by region:** `az provider show --namespace <namespace> --query "resourceTypes[?resourceType=='<resource-type>'].locations" -o table`<br/>**Check Azure Products by Region:** [Azure Products by Region](https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/)<br/>**Supported regions for this deployment:** `australiaeast`, `centralus`, `eastasia`, `eastus2`, `japaneast`, `northeurope`, `southeastasia`, `uksouth`<br/>**Redeploy:** `azd up` |
-| **InvalidResourceLocation** | Cannot change region for already deployed resources | This error occurs when you attempt to modify the location/region of a resource that has already been deployed. Azure resources **cannot change regions** after creation.<br/>**Resolution:**<br/>**Option 1 - delete and redeploy:** `azd down --force --purge`, then after the purge redeploy the app with `azd up`<br/>**Option 2 - create a new environment with a different region:** `azd env new <env-name>`, `azd env set AZURE_LOCATION <region>`, `azd up`<br/>**Option 3 - keep the existing deployment:** revert the configuration files to use the original region.<br/>⚠️ **Important:** back up critical data before deleting resources.<br/>**Reference:** [Move Azure resources across regions](https://learn.microsoft.com/en-us/azure/resource-mover/overview) |
-| **ServiceUnavailable / ResourceNotFound** | Service unavailable or restricted in selected region | Regions are restricted to guarantee compatibility with paired regions and replica locations for data redundancy and failover scenarios; see [Azure regions list](https://learn.microsoft.com/en-us/azure/reliability/regions-list) and [Azure Database for MySQL Flexible Server - Azure Regions](https://learn.microsoft.com/azure/mysql/flexible-server/overview#azure-regions).<br/>You can request more quota; see the [Quota Request](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/create-support-request-quota-increase) documentation. |
-| **ResourceOperationFailure / ProvisioningDisabled** | Resource provisioning restricted or disabled in region | This error occurs when provisioning of a resource is restricted in the selected region, usually because the service is not available there or provisioning has been temporarily disabled.<br/>Regions are restricted to guarantee compatibility with paired regions and replica locations for data redundancy and failover scenarios; see [Azure regions list](https://learn.microsoft.com/en-us/azure/reliability/regions-list) and [Azure Database for MySQL Flexible Server - Azure Regions](https://learn.microsoft.com/azure/mysql/flexible-server/overview#azure-regions).<br/>If you need to use the same region, you can request a quota or provisioning exception; see [Quota Request](https://docs.microsoft.com/en-us/azure/sql-database/quota-increase-request) for more details. |
-| **RedundancyConfigurationNotAvailableInRegion** | Redundancy configuration not supported in selected region | This happens when you try to create a **Storage Account** with a redundancy configuration (e.g., `Standard_GRS`) that is **not supported in the selected Azure region**.<br/>Example: creating a storage account with **GRS** in **italynorth** fails: `az storage account create -n mystorageacct123 -g myResourceGroup -l italynorth --sku Standard_GRS --kind StorageV2`<br/>To check supported SKUs for your region: `az storage account list-skus -l italynorth -o table`<br/>Use a supported redundancy option (e.g., `Standard_LRS`) in the same region, or deploy the Storage Account in a region that supports your chosen redundancy.<br/>For more details, see the [Azure Storage redundancy documentation](https://learn.microsoft.com/en-us/azure/storage/common/storage-redundancy).<br/>Also ensure the resource name is within the allowed length and naming rules for that resource type; see the [Resource Naming Convention](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/resource-name-rules) document. |
-| **Workspace Name - InvalidParameter** | Workspace name does not meet required format | To avoid this error, the workspace name must follow these rules:<br/>Must start and end with an alphanumeric character (letter or number)<br/>Allowed characters: `a-z`, `0-9`, `-` (hyphen)<br/>Cannot start or end with a hyphen<br/>No spaces, underscores (`_`), periods (`.`), or special characters<br/>Must be unique within the Azure region and subscription<br/>Length: 3–33 characters (for AML workspaces) |
-| **VaultNameNotValid** | Key Vault name does not meet naming requirements | In this template the vault name is unique every time, but if you hardcode the name, check the following:<br/>**Name length:** between 3 and 24 characters<br/>**Allowed characters:** letters (a–z, A–Z) and numbers (0–9); hyphens are allowed, but not at the beginning or end, and not consecutive (`--`)<br/>**Start and end:** the name must start with a letter and end with a letter or digit (not a hyphen)<br/>**Examples of valid vault names:** ✅ `cartersaikeyvault1`, ✅ `securevaultdemo`, ✅ `kv-project123` |
-| **BadRequest: Dns record under zone Document is already taken** | DNS record name already in use | This error can occur only when the Cosmos DB service name is hardcoded. To avoid it:<br/>Verify resource names are globally unique.<br/>If you already created an account/resource with the same name in another subscription or resource group, delete it before reusing the name.<br/>By default this template uses a unique prefix with every resource/account name to avoid this kind of error. |
-
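The Key Vault naming rules in the VaultNameNotValid row can be checked locally before deployment. A minimal sketch, assuming only the rules stated above (the `is_valid_key_vault_name` helper is hypothetical, not part of this repo):

```python
import re

def is_valid_key_vault_name(name: str) -> bool:
    """Validate a candidate Key Vault name against the documented rules:
    3-24 characters, letters/digits/hyphens only, starts with a letter,
    ends with a letter or digit, and no consecutive hyphens."""
    if not 3 <= len(name) <= 24:
        return False
    if "--" in name:  # consecutive hyphens are not allowed
        return False
    # First char: letter; middle: letters/digits/hyphens; last: letter or digit.
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9-]*[A-Za-z0-9]", name) is not None

for candidate in ("kv-project123", "-bad-start", "double--hyphen"):
    print(candidate, is_valid_key_vault_name(candidate))
```

Running the same kind of check for workspace names would only need the regex and length bounds adjusted to the workspace rules (3–33 characters, may start with a digit).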
----------------------------------
-
-## Resource Identification & References
-
-| Issue/Error Code | Description | Steps to Resolve |
-|-----------------|-------------|------------------|
-| **LinkedInvalidPropertyId/ ResourceNotFound/ DeploymentOutputEvaluationFailed/ CanNotRestoreANonExistingResource/ The language expression property array index is out of bounds** | Invalid or non-existent resource ID reference |
Before using any resource ID, ensure it follows the correct format
Verify that the resource ID you are passing actually exists
Make sure there are no typos in the resource ID
Verify that the provisioning state of the existing resource is `Succeeded` by running the following command to avoid this error while deployment or restoring the resource: `az resource show --ids --query "properties.provisioningState"`
You may encounter the error `The language expression property array index '8' is out of bounds` if the resource ID is incomplete. Please ensure your resource ID is correct and contains all required information, as shown in sample resource IDs
For more information refer [Resource Not Found errors solutions](https://learn.microsoft.com/en-us/azure/azure-resource-manager/troubleshooting/error-not-found?tabs=bicep)
|
-| **ParentResourceNotFound** | Parent resource does not exist or cannot be found |
You can refer to the [Parent Resource Not found](https://learn.microsoft.com/en-us/azure/azure-resource-manager/troubleshooting/error-parent-resource?tabs=bicep) documentation if you encounter this error
|
-| **PrincipalNotFound** | Principal ID does not exist in Azure AD tenant | This error occurs when the **principal ID** (Service Principal, User, or Group) specified in a role assignment or deployment does not exist in the Azure Active Directory tenant. It can also happen due to **replication delays** right after creating a new principal.
**Example causes:**
The specified **Object ID** is invalid or belongs to another tenant
The principal was recently created but Azure AD has not yet replicated it
Attempting to assign a role to a non-existing or deleted Service Principal/User/Group
**How to fix:**
Verify that the **principal ID is correct** and exists in the same directory/tenant: `az ad sp show --id `
If the principal was just created, wait a few minutes and retry
Explicitly set the principalType property (ServicePrincipal, User, or Group) in your ARM/Bicep template to avoid replication delays
If the principal does not exist, create it again before assigning roles
For more details, see [Azure PrincipalType documentation](https://learn.microsoft.com/en-us/azure/role-based-access-control/troubleshooting?tabs=bicep)
|
-| **SubscriptionDoesNotHaveServer** | Referenced SQL Server does not exist in subscription | This issue happens when you try to reference an **Azure SQL Server** (`Microsoft.Sql/servers`) that does not exist in the selected subscription.
**It can occur if:**
The SQL server name is typed incorrectly
The SQL server was **deleted** but is still being referenced
You are working in the **wrong subscription context**
The server exists in a **different subscription/tenant** where you don't have access
**Reproduce:** Run an Azure CLI command with a non-existent server name: `az sql db list --server sql-doesnotexist --resource-group myResourceGroup` or `az sql server show --name sql-caqfrhxr4i3hyj --resource-group myResourceGroup`
**Resolution:**
Verify the SQL Server name exists in your subscription: `az sql server list --output table`
Make sure you are targeting the correct subscription: `az account show` and `az account set --subscription <subscription-id>`
If the server was deleted, either restore it (if possible) or update references to use a valid existing server
|
-
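The resource-ID checks above can be sketched as a small shell helper. This is a minimal sketch: the pattern only covers the common `/subscriptions/<sub>/resourceGroups/<rg>/providers/<namespace>/<type>/<name>` shape, not every valid ID form.

```shell
# Sanity-check the overall shape of an Azure resource ID before passing it
# to az. This is a simplified pattern, not a full validator; child resources
# (extra segments after the name) are also accepted.
is_valid_resource_id() {
  printf '%s' "$1" | grep -Eq \
    '^/subscriptions/[^/]+/resourceGroups/[^/]+/providers/[^/]+/[^/]+/[^/]+'
}

# Example: reject an obviously truncated ID before az ever sees it.
if is_valid_resource_id "/subscriptions/0000/resourceGroups/rg1/providers/Microsoft.Sql/servers/sql1"; then
  echo "resource ID looks well formed"
fi
```

A truncated ID such as `/subscriptions/0000/resourceGroups/rg1` fails the check, which is exactly the kind of input that produces the "array index out of bounds" error above.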
----------------------------------
-
-## Network & Infrastructure Configuration
-
-| Issue/Error Code | Description | Steps to Resolve |
-|-----------------|-------------|------------------|
-| **NetcfgSubnetRangeOutsideVnet** | Subnet IP range outside virtual network address space |
Ensure the subnet's IP address range falls within the virtual network's address space
Always validate that the subnet CIDR block is a subset of the VNet range
For Azure Bastion, the AzureBastionSubnet must be at least /27
Confirm that the AzureBastionSubnet is deployed inside the VNet
|
-| **DisableExport_PublicNetworkAccessMustBeDisabled** | Public network access must be disabled when export is disabled |
**Check container source:** Confirm whether the deployment is using a Docker image or Azure Container Registry (ACR)
**Verify ACR configuration:** If ACR is included, review its settings to ensure they comply with Azure requirements
**Check export settings:** If export is disabled in ACR, make sure public network access is also disabled
**Redeploy after fix:** Correct the configuration and redeploy. This will prevent the Conflict error during deployment
For more information, refer to the [ACR Data Loss Prevention](https://learn.microsoft.com/en-us/azure/container-registry/data-loss-prevention) document
The deployment values either include values that aren't recognized, or required values are missing; confirm the values for your resource type
You can refer to the [Invalid Request Content error](https://learn.microsoft.com/en-us/azure/azure-resource-manager/troubleshooting/common-deployment-errors#:~:text=InvalidRequestContent,Template%20reference) documentation
|
-| **Conflict - Cannot use the SKU Basic with File Change Audit for site** | File Change Audit not supported on Basic SKU |
This error happens because File Change Audit logs aren't supported on Basic SKU App Service Plans. Resolve it by either:
Upgrading to a Premium/Isolated SKU (which supports File Change Audit), or
Disabling File Change Audit in Diagnostic Settings if you must stay on Basic
Always cross-check the [supported log types](https://aka.ms/supported-log-types) before adding diagnostic logs to your Bicep templates
|
-| **AccountPropertyCannotBeUpdated** | Read-only property cannot be modified after creation | The property **`isHnsEnabled`** (Hierarchical Namespace for Data Lake Gen2) is **read-only** and can only be set during **storage account creation**. Once a storage account is created, this property **cannot be updated**. Trying to update it via ARM template, Bicep, CLI, or Portal will fail.
**Resolution:**
Create a **new storage account** with `isHnsEnabled=true` if you require hierarchical namespace
Migration may be needed if you already have data
Refer to [Storage Account Update Restrictions](https://aka.ms/storageaccountupdate) for more details
|
-
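The `/27` requirement for AzureBastionSubnet is easy to check mechanically once the CIDR string is in hand (e.g. from `az network vnet subnet show`). A minimal sketch:

```shell
# A /27 or larger subnet means a prefix length of 27 or *smaller*
# (e.g. /26 and /25 also qualify; /28 does not).
bastion_prefix_ok() {
  prefix=${1##*/}          # "10.0.1.0/27" -> "27"
  [ "$prefix" -le 27 ]
}

bastion_prefix_ok "10.0.1.0/27" && echo "AzureBastionSubnet size OK"
```

Checking that the subnet CIDR is actually a subset of the VNet's address space needs real IP arithmetic; `az network vnet subnet list` plus manual review, or a language with an IP library, is better suited for that part.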
-
-----------------------------------
-
-## Resource State & Provisioning
-
-| Issue/Error Code | Description | Steps to Resolve |
-|-----------------|-------------|------------------|
-| **AccountProvisioningStateInvalid** | Resource used before provisioning completed |
The AccountProvisioningStateInvalid error occurs when you try to use resources while they are still in the Accepted provisioning state
This means the deployment has not yet fully completed
To avoid this error, wait until the provisioning state changes to Succeeded
Only use the resources once the deployment is fully completed
|
-| **BadRequest - DatabaseAccount is in a failed provisioning state because the previous attempt to create it was not successful** | Database account failed to provision previously |
This error occurs when a user attempts to redeploy a resource that previously failed to provision
To resolve the issue, delete the failed deployment first, then start a new deployment
For guidance on deleting a resource from a Resource Group, refer to the following link: [Delete an Azure Cosmos DB account](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/manage-with-powershell#delete-account:~:text=%3A%24enableMultiMaster-,Delete%20an%20Azure%20Cosmos%20DB%20account,-This%20command%20deletes)
|
-| **ServiceDeleting** | Cannot provision service because deletion is still in progress | This error occurs when you attempt to create an Azure Search service with the same name as one that is currently being deleted. Azure Search services have a **soft-delete period** during which the service name remains reserved.
**Common causes:**
Deleting a Search service and immediately trying to recreate it with the same name
Rapid redeployments using the same service name in Bicep/ARM templates
The deletion operation is asynchronous and takes several minutes to complete
**Resolution:**
**Wait for deletion to complete** (10-15 minutes) before redeploying
**Use a different service name** - append timestamp or unique identifier to the name
**Implement retry logic** with exponential backoff as suggested in the error message
**Check deletion status** before recreating: `az search service show --name <service-name> --resource-group <resource-group>`
For Bicep deployments, ensure your naming strategy includes unique suffixes to avoid conflicts
For more details, refer to [Azure Search service limits](https://learn.microsoft.com/en-us/azure/search/search-limits-quotas-capacity)
|
-
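The retry-with-exponential-backoff advice for `ServiceDeleting` can be wrapped around any deployment command. A minimal POSIX-shell sketch, where `deploy_cmd` is a placeholder for your real `az` call:

```shell
# Retry a command up to max_attempts times, doubling the delay each time.
retry_with_backoff() {
  max_attempts=$1; shift
  delay=1
  attempt=1
  while :; do
    "$@" && return 0
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "Giving up after $attempt attempts" >&2
      return 1
    fi
    echo "Attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}

# Usage sketch (deploy_cmd is a placeholder for your real deployment command):
# retry_with_backoff 5 deploy_cmd
```

Since soft-delete name reservation can last 10-15 minutes, a larger starting delay (e.g. 60s) is more realistic for this particular error than the 1s used in the sketch.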
----------------------------------
-
-## Miscellaneous
-
-| Issue/Error Code | Description | Steps to Resolve |
-|-------------|-------------|------------------|
-| **DeploymentModelNotSupported/ ServiceModelDeprecated/ InvalidResourceProperties** | Model not supported or deprecated in selected region |
The updated model may not be supported in the selected region. Please verify its availability in the [Azure AI Foundry models](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions) document
|
-| **FlagMustBeSetForRestore/ NameUnavailable/ CustomDomainInUse** | Soft-deleted resource requires restore flag or purge | This error occurs when you try to deploy a Cognitive Services resource that was **soft-deleted** earlier. Azure requires you to explicitly set the **`restore` flag** to `true` if you want to recover the soft-deleted resource. If you don't want to restore the resource, you must **purge the deleted resource** first before redeploying.
**Example causes:**
Trying to redeploy a Cognitive Services account with the same name as a previously deleted one
The deleted resource still exists in a **soft-delete retention state**
**How to fix:**
If you want to restore → add `"restore": true` in your template properties
If you want a fresh deployment → purge the resource using: `az cognitiveservices account purge --name <account-name> --resource-group <resource-group> --location <location>`
For more details, refer to [Soft delete and resource restore](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/delete-resource-group?tabs=azure-powershell)
The error is likely due to an improperly built container image. For resolution steps, refer to the [Azure Container Registry (ACR) – Build & Push Guide](./ACRBuildAndPushGuide.md)
|
-
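For the soft-delete case, recent Azure CLI versions can list and then purge deleted Cognitive Services accounts. A sketch — the account, resource group, and location values below are placeholders, not values from this solution:

```shell
# Placeholders -- substitute your own values.
ACCOUNT_NAME=my-openai-account
RESOURCE_GROUP=my-resource-group
LOCATION=eastus

echo "Will purge $ACCOUNT_NAME ($LOCATION)"
# List soft-deleted accounts, then purge the one blocking redeployment:
# az cognitiveservices account list-deleted --output table
# az cognitiveservices account purge \
#   --name "$ACCOUNT_NAME" --resource-group "$RESOURCE_GROUP" --location "$LOCATION"
```

Purging is irreversible; if you instead want the old account back, set `"restore": true` in the template as described above rather than purging.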
----------------------------------
-
-💡 Note: If you encounter any other issues, you can refer to the [Common Deployment Errors](https://learn.microsoft.com/en-us/azure/azure-resource-manager/troubleshooting/common-deployment-errors) documentation.
-If the problem persists, you can also raise a bug in our [Document Generation GitHub Issues](https://github.com/microsoft/document-generation-solution-accelerator/issues) for further support.
\ No newline at end of file
diff --git a/archive-doc-gen/docs/container_registry_migration.md b/archive-doc-gen/docs/container_registry_migration.md
deleted file mode 100644
index f78784716..000000000
--- a/archive-doc-gen/docs/container_registry_migration.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# Guide: Migrating Azure Web App Service to a New Container Registry
-
-## Overview
-
-### Current Problem:
-- The **Document Generator Container Image** is being published in the **External ACR** (Azure Container Registry).
-
-### Goal:
-- The goal is to **migrate container images** from various applications to a common **CSA CTO Production Azure Container Registry**, ensuring all the different images are consolidated in one centralized location.
-
----
-
-## Step-by-Step Guide: Migrating Azure Web App Service to a New Container Registry
-
-This guide will help you seamlessly switch the container registry for your **Azure Web App Service** from Azure Container Registry (ACR) to the new registry **`byocgacontainerreg`**.
-
-Follow the steps below to ensure a smooth migration.
-
-### Prerequisites:
-Before you begin, ensure you have the following:
-- Access to the **Azure Portal**.
-- The **container image** in the new registry is ready and accessible.
-
----
-
-### Step 1: Obtain Details for the New Registry
-
-Before you begin, ensure you have the following information:
-- **Registry URL**: The URL of the new registry (`https://byocgacontainerreg.azurecr.io`).
-- **Image Name and Tag**: The full name and tag of the image you want to use:
- - **Web App Image**: `webapp:latest`
----
-
-### Step 2: Update Azure Web App Service Configuration Using Azure Portal
-
-1. **Log in to Azure Portal**:
- - Open [Azure Portal](https://portal.azure.com/).
-
-2. **Locate Your Resource Group and Web App Service**:
-   - Navigate to the resource group that you created for Document Generator.
- - Navigate to **Web App Service**: From the list of resources, find and select **App Service**
-
-3. **Go to the Deployment Center**:
-   - In the left-hand menu, click on **Deployment Center**.
-
- 
-
-
-4. **Update Image Source**:
- - Change the **Registry Source** to **Private**.
- - Set the **Server URL** to the new container registry (`https://byocgacontainerreg.azurecr.io`), as shown in the screenshot below.
- - Set the **Full Image name** to the relevant image name and tag:
- - For Web App: `webapp:latest`
-
- 
-
-5. **Save Changes**:
- - Click **Save** to save the configuration.
-
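The portal steps above also have an Azure CLI equivalent. A sketch — the resource group and app names are assumptions (substitute your own), and the `--docker-*` flag names may vary across CLI versions:

```shell
# Placeholders -- substitute your own resource names.
RESOURCE_GROUP=my-docgen-rg
APP_NAME=my-docgen-webapp
REGISTRY_URL=https://byocgacontainerreg.azurecr.io
FULL_IMAGE=byocgacontainerreg.azurecr.io/webapp:latest

echo "Pointing $APP_NAME at $FULL_IMAGE"
# az webapp config container set \
#   --resource-group "$RESOURCE_GROUP" --name "$APP_NAME" \
#   --docker-registry-server-url "$REGISTRY_URL" \
#   --docker-custom-image-name "$FULL_IMAGE"
```

The CLI route is useful when migrating many apps at once, since the same two variables change per app while the registry values stay fixed.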
----
-
-### Step 3: Restart the Web App Service
-
-After updating the configuration, restart your **Web App Service** to apply the changes:
-
-1. In the **Web App Service overview page**, click on **Restart**.
-2. Confirm the restart operation.
-
----
-
-### Step 4: Validate the Deployment
-
-1. **Access Your Web App**:
- - Open the **Web App URL** in a browser to ensure it’s running correctly.
----
-
-By following these steps, your **Azure Web App Service** will now pull its container image from the new **byocgacontainerreg** registry.
-
-For further assistance, feel free to reach out to your support team or log an issue on GitHub.
-
----
diff --git a/archive-doc-gen/docs/create_new_app_registration.md b/archive-doc-gen/docs/create_new_app_registration.md
deleted file mode 100644
index 5de59f879..000000000
--- a/archive-doc-gen/docs/create_new_app_registration.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Creating a new App Registration
-
-1. Click on `Home` and select `Microsoft Entra ID`.
-
-
-
-2. Click on `App registrations`.
-
-
-
-3. Click on `+ New registration`.
-
-
-
-4. Provide the `Name`, select supported account types as `Accounts in this organizational directory only (Contoso only - Single tenant)`, select platform as `Web`, enter/select the `URL`, and register.
-
-
-
-5. After the application is created successfully, click on `Add a Redirect URL`.
-
-
-
-6. Click on `+ Add a platform`.
-
-
-
-7. Click on `Web`.
-
-
-
-8. Enter the `web app URL` (provide the App Service name in place of XXXX) and save. Then go back to the [Set Up Authentication in Azure App Service](/docs/AppAuthentication.md) Step 1 page, follow from _Point 4_, choose `Pick an existing app registration in this directory` on the Add an Identity Provider page, and provide the newly registered app name.
-
-E.g. https://XXXX.azurewebsites.net/.auth/login/aad/callback
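The redirect URL follows a fixed pattern, so it can be composed mechanically. A sketch (the App Service name is the same XXXX placeholder used above):

```shell
# Placeholder -- substitute your App Service name for XXXX.
APP_SERVICE_NAME=XXXX
REDIRECT_URI="https://${APP_SERVICE_NAME}.azurewebsites.net/.auth/login/aad/callback"
echo "$REDIRECT_URI"
```

The `/.auth/login/aad/callback` path is the fixed callback endpoint App Service Authentication exposes; only the host part changes per deployment.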
-
-
diff --git a/archive-doc-gen/docs/images/AddDetails.png b/archive-doc-gen/docs/images/AddDetails.png
deleted file mode 100644
index f36b596f208c376d8dbe5dd70e947a0edbaa728a..0000000000000000000000000000000000000000