Combined (01) (PEXT) Practice Exam Test #30
MohamedRadwan-DevOps
announced in
Documentation
Replies: 1 comment
@MohamedRadwan-DevOps Really appreciate the massive effort you put into this 🙏 The questions, explanations, and tips are incredibly helpful, well organized, and clearly structured. Thank you for sharing such valuable material with us 🌷
Combined (01): (PEXT) Practice Exam Test
Document Type: PEXT (Practice Exam Test)
Scope: This document provides the first combined practice exam set, containing exam-style questions spanning multiple modules of the learning path. Each question includes the correct answer and a concise explanation, designed to mirror real exam scenarios, reinforce key concepts, and highlight common pitfalls and misunderstandings.
Question: [001]
Which Microsoft ethical AI principle is aimed at ensuring AI systems treat all people equally?
Options:
A. Inclusiveness
B. Privacy and Security
C. Fairness
D. Reliability and Safety
Correct Answer(s): C
Explanation:
Fairness requires AI systems to treat people equitably, avoid discriminatory outcomes, and minimize bias in data, modeling, and evaluation. Microsoft guidance stresses diverse datasets, bias measurement, mitigation techniques, and continuous monitoring to uphold Fairness.
Tips and Tricks:
Important
Fairness addresses harms from biased data and uneven model performance across groups. Use practices like dataset balance checks, bias metrics (for example, demographic parity and equal opportunity), and mitigation (for example, rebalancing and threshold tuning), combined with human review and ongoing monitoring in production.
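For instance, a minimal sketch (plain NumPy, with hypothetical labels, predictions, and group membership) of how demographic parity and equal opportunity gaps could be measured across two groups:

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute gap in true-positive rates between two groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

# Hypothetical labels, predictions, and group membership for illustration only
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_diff(y_pred, group))          # gap in selection rates
print(equal_opportunity_diff(y_true, y_pred, group))   # gap in true-positive rates
```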
Source:
Responsible AI principles, overview (Microsoft Learn)
Fairness in machine learning (Microsoft Learn)
Responsible AI in Azure workloads (Microsoft Learn)
Question: [002]
Which types of prompts or code snippets might be flagged by the GitHub Copilot toxicity filter? (Choose two)
Options:
A. Sexually explicit content
B. Code with logical errors
C. Hate speech or discriminatory language
D. Strong personal opinions in comments
Correct Answer(s): A, C
Explanation:
Copilot applies content filters to reduce harmful or inappropriate material. Prompts or completions containing sexually explicit content or hate speech/discriminatory language may be blocked, while logical errors and personal opinions are not toxicity categories.
Tips and Tricks:
Important
Content filters focus on blocking harmful language categories; they are separate from content exclusion, code scanning, and license policy, and they do not correct logic or style.
Source:
Responsible use of GitHub Copilot inline suggestions (GitHub Docs)
Responsible use of GitHub Copilot Chat in GitHub (GitHub Docs)
How we evaluate models for GitHub Copilot (GitHub Blog)
What is Responsible AI (Microsoft Learn)
Question: [003]
Which Responsible AI principle is focused on ensuring people remain accountable for the deployment and impact of AI systems?
Options:
A. Transparency
B. Accountability
C. Inclusiveness
D. Reliability and Safety
Correct Answer(s): B
Explanation:
Accountability keeps humans and organizations responsible for AI outcomes, with governance, oversight, and escalation so people can intervene, correct errors, and own decisions rather than shifting responsibility to technology.
Tips and Tricks:
Important
Put Accountability into practice with clear ownership, review boards, audit trails, and incident response procedures that enable people to override or roll back harmful AI behavior.
Source:
Responsible AI principles, overview (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Responsible AI in Azure workloads (Microsoft Learn)
Question: [004]
Which principle requires AI systems to be dependable, consistent, and work as intended under expected conditions?
Options:
A. Inclusiveness
B. Privacy and Security
C. Reliability and Safety
D. Fairness
Correct Answer(s): C
Explanation:
Reliability and Safety requires AI systems to perform as intended, be dependable and consistent, and operate safely under expected and challenging conditions. Microsoft guidance emphasizes rigorous testing, validation, monitoring, and risk mitigation to prevent harmful or unstable behavior.
Tips and Tricks:
Important
Operationalizing Reliability and Safety involves evaluation before deployment, guardrails and fallback behaviors, live monitoring, and incident response to handle failures without causing harm.
Source:
Responsible AI principles, overview (Microsoft Learn)
Reliability and safety guidance in Responsible AI (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [005]
Which of the following is not a Microsoft Responsible AI principle?
Options:
A. Fairness
B. Reliability and Safety
C. Transparency
D. Maximizing Profit
Correct Answer(s): D
Explanation:
Microsoft’s framework centers on six principles: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability. Maximizing Profit is not part of the Responsible AI principles.
Tips and Tricks:
Important
Remember the six: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability. Any answer outside these is incorrect.
Source:
Responsible AI principles, overview (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Responsible AI in Azure workloads (Microsoft Learn)
Question: [006]
Which principle emphasizes protecting data, ensuring confidentiality, and maintaining security in AI systems?
Options:
A. Reliability and Safety
B. Privacy and Security
C. Fairness
D. Accountability
Correct Answer(s): B
Explanation:
Privacy and Security focuses on protecting data, preserving confidentiality, and preventing unauthorized access or misuse. Guidance includes data minimization, access controls, encryption, secure development practices, and compliance with relevant privacy regulations.
Tips and Tricks:
Important
Operationalizing Privacy and Security typically involves privacy-by-design, threat modeling, role-based access, key management and encryption in transit/at rest, logging and monitoring, and data retention/deletion policies.
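As one illustration of encryption at rest, a minimal sketch using the `cryptography` package (an assumption; any vetted library or managed service could be substituted, and the key would normally live in a key vault rather than in code):

```python
from cryptography.fernet import Fernet

# In practice the key is generated once, stored in a key vault, and rotated on a schedule.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"user@example.com"              # hypothetical personal data
ciphertext = fernet.encrypt(record)       # value persisted to storage
assert fernet.decrypt(ciphertext) == record
```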
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [007]
What is the primary goal of the Inclusiveness principle?
Options:
A. Ensure AI systems are reliable under stress
B. Ensure AI systems are only used in enterprise settings
C. Ensure AI systems are accessible and usable by diverse groups of people
D. Ensure AI systems provide maximum profit
Correct Answer(s): C
Explanation:
Inclusiveness focuses on designing AI so that people of diverse abilities, backgrounds, and cultures can access and use it. Guidance emphasizes accessible experiences, representative data, and feedback from diverse users.
Tips and Tricks:
Important
Operationalizing Inclusiveness typically involves accessibility-by-design, inclusive user research, and localization to support different contexts and needs.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [008]
What is a key responsibility under the Transparency principle?
Options:
A. Allowing unrestricted public access to AI data
B. Making AI system operations understandable to people
C. Guaranteeing AI systems never malfunction
D. Ensuring AI systems produce open-source code
Correct Answer(s): B
Explanation:
Transparency means providing clear, understandable information about how an AI system works, the data it uses, and its limitations and risks, so stakeholders can assess appropriateness and trustworthiness.
Tips and Tricks:
Important
Good practice includes model cards or documentation, user-facing explanations, and disclosure when users are interacting with AI.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [009]
Which principle addresses the risk of biased or unrepresentative training data?
Options:
A. Accountability
B. Transparency
C. Reliability and Safety
D. Fairness
Correct Answer(s): D
Explanation:
Fairness targets bias in data and models, requiring representative datasets, bias measurement, and mitigation to avoid discriminatory outcomes across groups.
Tips and Tricks:
Important
Common practices include dataset balance checks, fairness metrics such as equal opportunity, threshold tuning, and human review.
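A minimal sketch (hypothetical scores and groups) of one mitigation named above, per-group threshold tuning so that selection rates are roughly aligned:

```python
import numpy as np

def tune_threshold(scores, target_rate):
    """Pick a threshold so the selection rate approximates target_rate."""
    return np.quantile(scores, 1 - target_rate)

# Hypothetical model scores for two groups with different score distributions
scores_a = np.array([0.2, 0.4, 0.6, 0.8, 0.9])
scores_b = np.array([0.1, 0.2, 0.3, 0.5, 0.7])

target = 0.4  # desired positive rate for both groups
thresh_a = tune_threshold(scores_a, target)
thresh_b = tune_threshold(scores_b, target)

# Both groups now have approximately the target selection rate
print((scores_a >= thresh_a).mean(), (scores_b >= thresh_b).mean())
```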
Source:
Fairness in machine learning (Microsoft Learn)
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads (Microsoft Learn)
Question: [010]
Why is human oversight critical under the Accountability principle?
Options:
A. To maintain responsibility for outcomes and correct harmful results
B. To replace organizational governance with automated systems
C. To ensure AI systems run without any human involvement
D. To guarantee all AI code is bug-free
Correct Answer(s): A
Explanation:
Accountability requires people to own outcomes and intervene when AI behaves harmfully or unexpectedly, using governance, escalation, and remediation processes.
Tips and Tricks:
Important
Implement with clear ownership, review boards, audit logs, incident response, and kill switches to pause or roll back problematic behavior.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [011]
Which principle requires proactively informing users when AI systems are making decisions that affect them?
Options:
A. Transparency
B. Inclusiveness
C. Accountability
D. Privacy and Security
Correct Answer(s): A
Explanation:
Transparency requires that users are informed when AI influences decisions that affect them, and that they receive clear information about how the system works and its limitations and risks.
Tips and Tricks:
Important
Good practice includes clear user notifications when AI assists decisions, plain language explanations, and routes to question or contest outcomes.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, transparency (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [012]
A team wants to deploy GitHub Copilot but needs assurance that personal data will not be exposed in completions. Which principle directly addresses this concern?
Options:
A. Transparency
B. Privacy and Security
C. Reliability and Safety
D. Accountability
Correct Answer(s): B
Explanation:
Privacy and Security focuses on protecting personal and sensitive data, including confidentiality, access controls, and secure handling to prevent exposure in AI outputs.
Tips and Tricks:
Important
Implement privacy-by-design, role-based access, and monitoring to reduce risks of data leakage in generated content.
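As a hedged illustration of the monitoring point, a small sketch that scans generated text for obvious personal identifiers or credentials before it is stored or shared (the patterns are illustrative assumptions, not an exhaustive scanner):

```python
import re

# Illustrative patterns only; real deployments use dedicated secret/PII scanners.
LEAK_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def find_leaks(text: str) -> list[str]:
    """Return the names of any leak patterns found in generated content."""
    return [name for name, pattern in LEAK_PATTERNS.items() if pattern.search(text)]

completion = "Contact admin@example.com and use key AKIAABCDEFGHIJKLMNOP"
print(find_leaks(completion))  # ['email', 'aws_access_key']
```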
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, privacy and security (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [013]
Which principle requires that AI systems clearly communicate their limitations and potential risks?
Options:
A. Inclusiveness
B. Transparency
C. Reliability and Safety
D. Accountability
Correct Answer(s): B
Explanation:
Transparency includes explaining how the AI works, what data it uses, and its limitations and risks, so people can make informed decisions about reliance and additional human verification.
Tips and Tricks:
Important
Provide user-facing explanations, known failure modes, and uncertainty cues to avoid overconfidence and enable informed adoption.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, transparency (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [014]
Which principle helps ensure AI solutions do not unintentionally exclude people with disabilities?
Options:
A. Reliability and Safety
B. Privacy and Security
C. Accountability
D. Inclusiveness
Correct Answer(s): D
Explanation:
Inclusiveness ensures AI technologies are designed to support accessibility and the needs of people with disabilities, using inclusive design, accessible interfaces, and feedback from diverse users.
Tips and Tricks:
Important
Apply accessibility-by-design, inclusive user research, alternative input and output methods, and WCAG-aligned practices to reduce exclusion risk.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, inclusiveness (Microsoft Learn)
Inclusive design, overview (Microsoft Learn)
Question: [015]
Which principle requires organizations to establish processes for identifying, assessing, and mitigating risks before AI deployment?
Options:
A. Accountability
B. Privacy and Security
C. Fairness
D. Reliability and Safety
Correct Answer(s): D
Explanation:
Reliability and Safety calls for risk assessment, testing, and validation prior to deployment to ensure safe, consistent performance and to minimize potential harm.
Tips and Tricks:
Important
Use pre-deployment evaluations, guardrails, fallback behaviors, and live monitoring to mitigate risks across expected conditions.
Source:
Responsible AI principles, overview (Microsoft Learn)
Reliability and Safety guidance (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [016]
Which Microsoft Responsible AI principle emphasizes that people, not machines, are ultimately answerable for AI outcomes?
Options:
A. Transparency
B. Fairness
C. Accountability
D. Privacy and Security
Correct Answer(s): C
Explanation:
Accountability states that people and organizations remain responsible for AI outcomes, with governance, oversight, and escalation processes to correct issues and own decisions.
Tips and Tricks:
Important
Put Accountability into practice with clear ownership, review boards, audit logs, incident response, and mechanisms to pause or roll back harmful behavior.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, accountability (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [017]
Which principle requires that AI systems are tested for accuracy, security, and reliability before widespread release?
Options:
A. Transparency
B. Accountability
C. Reliability and Safety
D. Inclusiveness
Correct Answer(s): C
Explanation:
Reliability and Safety demands pre-release testing, validation, and risk assessment so systems perform consistently and safely, including security checks to reduce vulnerabilities.
Tips and Tricks:
Important
Use evaluation before deployment, guardrails, fallback behaviors, and live monitoring to maintain dependable operation.
Source:
Responsible AI principles, overview (Microsoft Learn)
Reliability and Safety guidance (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [018]
A development team wants to explain to users when GitHub Copilot is generating suggestions in their IDE. Which principle supports this communication?
Options:
A. Privacy and Security
B. Inclusiveness
C. Accountability
D. Transparency
Correct Answer(s): D
Explanation:
Transparency requires informing users about when and how AI influences their work, including disclosure that Copilot is generating or assisting with suggestions.
Tips and Tricks:
Important
Provide clear prompts or indicators in the IDE that suggestions are AI-assisted, and link to explanations of capabilities and limits.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, transparency (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [019]
Which principle focuses on preventing discrimination by ensuring AI systems are trained and evaluated with representative datasets?
Options:
A. Transparency
B. Privacy and Security
C. Inclusiveness
D. Fairness
Correct Answer(s): D
Explanation:
Fairness emphasizes representative, diverse, and unbiased training and evaluation data, plus bias measurement and mitigation to prevent discriminatory outcomes.
Tips and Tricks:
Important
Practices include dataset balance checks, fairness metrics such as equal opportunity, threshold tuning, and human review.
Source:
Fairness in machine learning (Microsoft Learn)
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, fairness (Microsoft Learn)
Question: [020]
Which Responsible AI principle is violated if an AI system provides recommendations with no way for users to understand its decision-making process?
Options:
A. Reliability and Safety
B. Privacy and Security
C. Fairness
D. Transparency
Correct Answer(s): D
Explanation:
Transparency is violated when users cannot understand how AI produces outputs, which undermines trust, accountability, and appropriate reliance.
Tips and Tricks:
Important
Provide user-facing explanations, disclose limitations and risks, and document rationale so users can evaluate and contest outcomes when needed.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, transparency (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [021]
A team notices two issues with their AI tools: GitHub Copilot suggestions work much better for English comments than for non-English ones, and an AI system that auto-assigns code reviews is overloading a few developers while others get very few. Which approach and principles best address both problems?
Options:
A. Apply Fairness and Inclusiveness by expanding non-English training data, testing performance across languages, and adjusting the assignment logic so review workloads are more evenly distributed.
B. Focus only on Transparency by documenting that English and certain developers are favored.
C. Rely solely on Reliability and Safety by adding more automated tests, without changing data or assignment logic.
D. Use Accountability only, by asking managers to manually fix any unfair assignments when they notice them.
Correct Answer(s): A
Explanation:
Fairness addresses performance gaps and uneven outcomes, and Inclusiveness ensures systems work for diverse languages and users. Expanding non-English examples, validating parity across languages, and rebalancing assignment logic reduce bias in functionality and workload distribution.
Tips and Tricks:
Important
Use group-wise evaluation, bias metrics, data augmentation for under-served groups, and algorithmic constraints to avoid workload skew.
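As an example of the "algorithmic constraints" point, a minimal sketch (hypothetical data) of least-loaded reviewer assignment that avoids concentrating reviews on a few developers:

```python
import heapq

def assign_reviews(pull_requests, reviewers):
    """Assign each PR to the reviewer with the lowest current load."""
    heap = [(0, r) for r in reviewers]   # (current load, reviewer)
    heapq.heapify(heap)
    assignments = {}
    for pr in pull_requests:
        load, reviewer = heapq.heappop(heap)
        assignments[pr] = reviewer
        heapq.heappush(heap, (load + 1, reviewer))
    return assignments

# Hypothetical PRs and reviewers
print(assign_reviews(["PR-1", "PR-2", "PR-3", "PR-4"], ["ana", "bo", "chen"]))
```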
Source:
Responsible AI principles, overview (Microsoft Learn)
Fairness in machine learning (Microsoft Learn)
Responsible AI in Azure workloads, inclusiveness (Microsoft Learn)
Question: [022]
A team is building an internal GitHub Copilot extension. They want to publish transparency documentation for internal users and enable later audits to understand data sources, model scope, and validation tests. Which option best aligns with the Transparency principle?
Options:
A. Providing a detailed revenue forecast and exact run-costs of the AI system.
B. Publishing how the AI system works, its intended use, known limitations, and high-level information about data sources and validation methods, while protecting sensitive or proprietary details.
C. Listing every user’s personal data and identifiers to be “fully transparent.”
D. Releasing all source code and full training datasets, even if this exposes private data or trade secrets.
Correct Answer(s): B
Explanation:
Transparency means providing clear information about purpose, operation, limitations, and high-level data and testing details that support trust and auditing, while protecting sensitive data and IP. It does not require exposing personal data or proprietary assets.
Tips and Tricks:
Important
Provide user-facing docs, model-card-style summaries of data and evaluation at a high level, and change logs for updates.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, transparency (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [023]
During a project, a developer uses Copilot and the generated code accidentally includes personal identifiers from a public dataset; later, an audit reveals a custom Copilot extension was trained on real customer data without consent. Which actions best demonstrate Accountability while addressing the Privacy and Security violation? Choose two.
Options:
A. Ignore the issues because the data was public or eventually anonymized.
B. Remove personal identifiers from the codebase, report the incident, and document technical and process changes to prevent recurrence.
C. Quietly retrain the model on a different dataset without recording what happened.
D. Escalate the misuse of customer data, document transparently what went wrong, enforce or update policies to require consent before using real customer data, and maintain an auditable record.
Correct Answer(s): B, D
Explanation:
Accountability requires owning outcomes, escalating incidents, and documenting corrective actions. Privacy and Security is violated when real customer data is used without consent; remediation includes removing sensitive data, reporting the incident, and enforcing policies to prevent recurrence.
Tips and Tricks:
Important
Enforce consent and data minimization policies, maintain incident logs, and use privacy-preserving approaches for future training.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, privacy and security (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [024]
A Copilot for Business customer wants to ensure employees’ private repositories, prompts, and completions are not used to train the Copilot model. A developer proposes adding sensitive user attributes to training data to improve fairness, but this would exceed user consent. Which statement best reflects the correct Responsible AI priority and product behavior?
Options:
A. Privacy and Security should take precedence: private repositories, prompts, and completions in Copilot Business/Enterprise are not used to retrain the base models, and fairness should be improved using synthetic or privacy-preserving methods that respect consent.
B. Fairness should take precedence; equal performance matters more than consent about data use.
C. Transparency is most important; documenting the data use is sufficient.
D. Inclusiveness should override privacy so more groups can be represented in training.
Correct Answer(s): A
Explanation:
Privacy and Security requires using data only for consented, legitimate purposes and protecting confidentiality. With Copilot Business/Enterprise, private code, prompts, and completions are not used to train the base models, preserving IP and privacy. When privacy and fairness goals are in tension, pursue fairness via privacy-preserving techniques rather than expanding sensitive data without consent.
Tips and Tricks:
Important
Prioritize consent and data minimization, then address fairness with techniques that do not expand access to sensitive attributes.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, privacy and security (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Responsible use of GitHub Copilot inline suggestions (GitHub Docs)
Question: [025]
A company adopts GitHub Copilot with two strict policies: all Copilot-generated code must pass secure-coding review and validation tests before merge, and developers must fix accessibility issues in generated UI code and document that they have done so. Which combination of principles is most clearly reflected?
Options:
A. Reliability and Safety, Inclusiveness, and Accountability
B. Fairness only
C. Privacy and Security only
D. Transparency only
Correct Answer(s): A
Explanation:
Reliability and Safety: secure-coding reviews and tests ensure robust, safe code before deployment. Inclusiveness: addressing accessibility ensures diverse users can use the UI. Accountability: developers are responsible for reviewing AI outputs, fixing issues, and documenting their actions.
Tips and Tricks:
Important
Treat AI outputs like third-party code: require review, security checks, accessibility conformance, and recorded sign-off.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, inclusiveness (Microsoft Learn)
Reliability and safety guidance (Microsoft Learn)
Inclusive design, overview (Microsoft Learn)
Question: [026]
Which principle requires systems to maintain confidentiality of personal information and prevent misuse of data?
Options:
A. Privacy and Security
B. Fairness
C. Transparency
D. Accountability
Correct Answer(s): A
Explanation:
Privacy and Security mandates strong protections for personal and sensitive data, including secure storage, access controls, and compliance with privacy laws to prevent exposure or misuse.
Tips and Tricks:
Important
Use encryption, least privilege access, monitoring, and clear consent policies to uphold Privacy and Security in AI systems.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, privacy and security (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [027]
An AI model generates offensive or unsafe content during testing. Which principle is most directly violated?
Options:
A. Reliability and Safety
B. Transparency
C. Inclusiveness
D. Privacy and Security
Correct Answer(s): A
Explanation:
Reliability and Safety requires AI systems to avoid harmful or unsafe outputs and to behave predictably under expected conditions. Offensive or unsafe content indicates insufficient safeguards, evaluation, or risk mitigation before release.
Tips and Tricks:
Important
Address with pre-deployment evaluations, safety guardrails, content filters, and live monitoring to reduce risk in production.
Source:
Responsible AI principles, overview (Microsoft Learn)
Reliability and Safety guidance (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [028]
Why is Inclusiveness critical in AI adoption across global teams?
Options:
A. It guarantees AI will never malfunction
B. It eliminates the need for transparency
C. It allows AI systems to serve people of different cultures, abilities, and languages
D. It ensures consistent profitability
Correct Answer(s): C
Explanation:
Inclusiveness ensures AI is accessible and usable by people with diverse abilities, languages, and cultural contexts, reducing exclusion and enabling broad adoption.
Tips and Tricks:
Important
Implement with accessibility-by-design, inclusive research, localized UX, and assistive technology support.
Source:
Responsible AI principles, overview (Microsoft Learn)
Responsible AI in Azure workloads, inclusiveness (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [029]
A team discovers that GitHub Copilot occasionally generates outdated or insecure code patterns. Which Responsible AI principle should guide their response?
Options:
A. Reliability and Safety
B. Fairness
C. Inclusiveness
D. Accountability
Correct Answer(s): A
Explanation:
Reliability and Safety requires ensuring outputs are safe and robust. Teams should validate Copilot suggestions, enforce secure-coding checks, and update review processes to prevent unsafe recommendations.
Tips and Tricks:
Important
Use policy gates, security tooling, and fallback behaviors to block unsafe patterns and maintain dependable operation.
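A minimal sketch of a pre-merge "policy gate" in the spirit of this tip (the flagged patterns are illustrative assumptions; real pipelines rely on tools such as CodeQL, dependency review, and secret scanning):

```python
import re

# Illustrative deny-list of risky patterns that might appear in generated code.
RISKY_PATTERNS = {
    "eval on input": re.compile(r"\beval\s*\("),
    "unsafe deserialization": re.compile(r"\bpickle\.loads\s*\("),
    "hard-coded secret": re.compile(r"(password|api_key)\s*=\s*['\"]"),
}

def review_gate(diff_text: str) -> list[str]:
    """Return reasons to block a merge; empty if no risky pattern is found."""
    return [name for name, pattern in RISKY_PATTERNS.items() if pattern.search(diff_text)]

diff = 'api_key = "sk-123"\nresult = eval(user_input)'
print(review_gate(diff))  # ['eval on input', 'hard-coded secret']
```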
Source:
Responsible AI principles, overview (Microsoft Learn)
Reliability and Safety guidance (Microsoft Learn)
Apply responsible AI principles, module (Microsoft Learn)
Question: [030]
An organization plans to store encrypted API keys used by its AI assistant. Which approach aligns best with Privacy and Security?
Options:
A. Use a local text file accessible to all developers
B. Store keys in Azure Key Vault or Managed HSMs with controlled access
C. Share keys through email with the development team
D. Keep keys in plaintext within version control for easy access
Correct Answer(s): B
Explanation:
Privacy and Security emphasizes protecting sensitive credentials with encryption and least-privilege access. Azure Key Vault and Managed HSM provide key management, access policies, and auditability.
Tips and Tricks:
Important
Enforce role-based access, key rotation, secrets scanning, and monitoring to prevent credential exposure.
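A minimal sketch of reading a secret from Azure Key Vault with the Python SDK (the vault URL and secret name are hypothetical placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential resolves managed identity, environment, or developer credentials.
credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",  # hypothetical vault
    credential=credential,
)

# Retrieve the key at runtime instead of storing it in code or version control.
api_key = client.get_secret("ai-assistant-api-key").value
```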
Source:
Azure Key Vault overview (Microsoft Learn)
Azure Managed HSM overview (Microsoft Learn)
Responsible AI principles, overview (Microsoft Learn)