This control verifies that all Amazon Bedrock Agent aliases point to Agent versions with an Amazon Guardrail policy that has the Prompt Attack filter enabled and configured to block prompt attacks at high sensitivity, for both text and image inputs.
Amazon Bedrock Agents can support multiple aliases, each pointing to a different immutable version with its own guardrail configuration. Guardrails are crucial for maintaining the integrity and security of AI/ML environments: they detect and block prompt injection attacks that could manipulate model behavior or output.
Failing to implement these guardrail settings increases the risk of model exploitation, unauthorized access to confidential data, and security breaches, undermining the integrity and reliability of your AI/ML workflows.
For comprehensive guidance on implementing and connecting guardrail policies with the required configurations, refer to the Create a guardrail documentation.
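The check described above can be sketched as a small evaluation function. This is a minimal sketch, not the control's actual implementation: it assumes a guardrail description shaped like the `bedrock-agent` `GetGuardrail` response (a `contentPolicy` with a `filters` list whose entries carry `type`, `inputStrength`, and `inputModalities`); those field names and the `PROMPT_ATTACK` filter type are assumptions based on that API, and in practice you would fetch each alias's agent version and its attached guardrail before calling this.

```python
def prompt_attack_filter_compliant(guardrail: dict) -> bool:
    """Return True if the guardrail blocks prompt attacks at HIGH
    sensitivity for both TEXT and IMAGE inputs.

    `guardrail` is assumed to follow the GetGuardrail response shape:
    {"contentPolicy": {"filters": [{"type": ..., "inputStrength": ...,
    "inputModalities": [...]}, ...]}} -- field names are illustrative.
    """
    for f in guardrail.get("contentPolicy", {}).get("filters", []):
        if f.get("type") != "PROMPT_ATTACK":
            continue
        strength_ok = f.get("inputStrength") == "HIGH"
        modalities_ok = {"TEXT", "IMAGE"} <= set(f.get("inputModalities", []))
        return strength_ok and modalities_ok
    # No Prompt Attack filter configured at all: non-compliant.
    return False


# Hypothetical compliant configuration for illustration.
compliant = {
    "contentPolicy": {
        "filters": [
            {
                "type": "PROMPT_ATTACK",
                "inputStrength": "HIGH",
                "inputModalities": ["TEXT", "IMAGE"],
            }
        ]
    }
}
print(prompt_attack_filter_compliant(compliant))  # True
print(prompt_attack_filter_compliant({}))         # False
```

A version that omits the `IMAGE` modality or sets `inputStrength` to `MEDIUM` would fail the check, mirroring the control's requirement of high sensitivity across both modalities.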