
Usage Scenarios

This document provides step-by-step tutorials for common Decision Control workflows, from creating your first DMN model through promoting it to production. Each scenario includes detailed instructions, screenshot descriptions, code examples, and best practices for enterprise deployment.

Scenario 1: Creating and Publishing a DMN Model

Learn how to create a new decision model, test it, and publish it for use in Decision Control.

Overview

This tutorial walks through creating a credit scoring decision model that evaluates loan applicants based on age, income, and credit history. You'll use the Decision Control Authoring UI to build the model, test it with sample data, and publish it for execution.

Time to Complete: 30 minutes

Prerequisites:

  • Access to Decision Control Development environment
  • User account with Business Analyst role (decision-control-dev-users)
  • Basic understanding of DMN concepts

Step 1: Access the Authoring UI

  1. Navigate to Decision Control Dev:

     https://decision-control-dev.example.com

  2. Log in with Keycloak: You'll be redirected to the Keycloak login page. Enter your credentials:

     • Username: sarah@demo.local
     • Password: (your assigned password)

  3. Click "Authoring UI": From the Decision Control landing page, select the Authoring UI option.

First-Time Login

If this is your first time accessing Decision Control, you'll see a welcome screen. Click "Get Started" to proceed to the model authoring interface.

Step 2: Create a New Unit

Units organize related decision models. Create a unit for financial services models:

  1. Click "Create Unit": In the top navigation, click the "+" button next to Units.

  2. Enter Unit Details:

     • Name: financial-services
     • Description: Financial services decision models including credit scoring and risk assessment
     • Status: ENABLED

  3. Click "Create": The system creates the unit and navigates to its detail page.

API Equivalent:

curl -X POST https://decision-control-dev.example.com/api/management/units \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "financial-services",
    "description": "Financial services decision models",
    "status": "ENABLED"
  }'

Step 3: Create a Version

Versions enable you to maintain multiple releases of your models:

  1. Click "Create Version": From the unit detail page, click the "Create Version" button.

  2. Enter Version Details:

     • Version Number: 1.0.0
     • Change Log: Initial release with credit scoring model
     • Status: DRAFT

  3. Click "Create": The version is created in DRAFT status, allowing model uploads.

Semantic Versioning

Use semantic versioning (MAJOR.MINOR.PATCH) for clarity:

  • MAJOR: Breaking changes to model interface
  • MINOR: New features, backward compatible
  • PATCH: Bug fixes, no interface changes
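The bump rules above can be captured in a small helper for release scripts. This is an illustrative stand-alone sketch, not part of Decision Control:

```python
from typing import Tuple

def parse_version(version: str) -> Tuple[int, int, int]:
    """Parse a MAJOR.MINOR.PATCH string into a comparable tuple."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def bump(version: str, change: str) -> str:
    """Return the next version number for a given kind of change."""
    major, minor, patch = parse_version(version)
    if change == "major":   # breaking change to the model interface
        return f"{major + 1}.0.0"
    if change == "minor":   # new feature, backward compatible
        return f"{major}.{minor + 1}.0"
    if change == "patch":   # bug fix, no interface change
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")
```

Because the parsed tuples compare numerically, `1.10.0` correctly sorts after `1.2.0`, which plain string comparison would get wrong.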

Step 4: Create the DMN Model

Now create the actual decision model:

  1. Click "Upload Model" or "Create New Model": Choose "Create New Model" to use the visual editor.

  2. Name the Model: CreditScoring

  3. Create Input Data Nodes: Create three input nodes by dragging "Input Data" shapes from the palette:

     • Applicant Age (type: number)
     • Annual Income (type: number)
     • Credit History Length (type: number)

  4. Create the Risk Score Decision: Drag a "Decision" node onto the canvas:

     • Name: Risk Score
     • Type: number

     Connect information requirements from all three input nodes to the Risk Score decision by dragging arrows from inputs to the decision node.

  5. Define the Decision Logic:

     Click "Edit" on the Risk Score decision node, then select "Decision Table" as the expression type.

     Create a decision table with the following rules:

     | Applicant Age | Annual Income | Credit History Length | Risk Score |
     |---------------|---------------|-----------------------|------------|
     | < 25          | < 30000       | < 2                   | 500        |
     | < 25          | >= 30000      | >= 2                  | 600        |
     | 25..40        | < 50000       | < 5                   | 620        |
     | 25..40        | >= 50000      | >= 5                  | 720        |
     | > 40          | < 60000       | < 10                  | 680        |
     | > 40          | >= 60000      | >= 10                 | 780        |
     | -             | -             | -                     | 650        |

Hit Policy

Use the "FIRST" hit policy (F) for this table. The system evaluates rules top-to-bottom and returns the first match.

  6. Add a Risk Category Decision:

     Create another decision node that depends on Risk Score:

     • Name: Risk Category
     • Type: string
     • Expression Type: Decision Table

     | Risk Score | Risk Category |
     |------------|---------------|
     | < 600      | "HIGH"        |
     | 600..700   | "MEDIUM"      |
     | > 700      | "LOW"         |
  7. Add an Approval Decision:

     Final decision that recommends approval or rejection:

     • Name: Approval Recommended
     • Type: boolean
     • Expression Type: Decision Table

     | Risk Score | Annual Income | Approval Recommended |
     |------------|---------------|----------------------|
     | >= 700     | >= 50000      | true                 |
     | >= 650     | >= 75000      | true                 |
     | < 600      | -             | false                |
     | -          | -             | false                |

  8. Save the Model: Click "Save" in the top toolbar. The DMN model is now part of version 1.0.0.
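The three decision tables above can be read as ordinary top-to-bottom conditionals. The following Python sketch mirrors that logic for offline sanity checks; it is illustrative only (not how the DMN engine evaluates the model) and assumes the 600..700 range is inclusive at both ends:

```python
def risk_score(age: float, income: float, history: float) -> int:
    """FIRST hit policy: rules are checked top-to-bottom, first match wins."""
    rules = [
        (age < 25 and income < 30000 and history < 2, 500),
        (age < 25 and income >= 30000 and history >= 2, 600),
        (25 <= age <= 40 and income < 50000 and history < 5, 620),
        (25 <= age <= 40 and income >= 50000 and history >= 5, 720),
        (age > 40 and income < 60000 and history < 10, 680),
        (age > 40 and income >= 60000 and history >= 10, 780),
    ]
    for matched, score in rules:
        if matched:
            return score
    return 650  # catch-all rule ("-", "-", "-")

def risk_category(score: int) -> str:
    if score < 600:
        return "HIGH"
    if score <= 700:   # the 600..700 range, assumed inclusive
        return "MEDIUM"
    return "LOW"

def approval_recommended(score: int, income: float) -> bool:
    if score >= 700 and income >= 50000:
        return True
    if score >= 650 and income >= 75000:
        return True
    return False       # covers both the "< 600" rule and the catch-all
```

Note that an applicant can match none of the six Risk Score rules (for example, age 30 with high income but a short credit history) and fall through to the 650 default — a useful case to include in your tests.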

Step 5: Test the Model

Before publishing, test the model with sample data:

  1. Click "Test" Tab: Switch to the Test view in the Authoring UI.

  2. Enter Test Inputs:

     • Applicant Age: 35
     • Annual Income: 75000
     • Credit History Length: 10

  3. Click "Execute Decision": The system runs all decisions in the model.

  4. Review Results:

    {
      "Risk Score": 720,
      "Risk Category": "LOW",
      "Approval Recommended": true
    }

  5. Test Edge Cases: Try additional test scenarios:

     • Young applicant with low income: Age 22, Income 25000, History 1
     • High-risk applicant: Age 28, Income 40000, History 3
     • Ideal applicant: Age 45, Income 100000, History 15

Validation Required

Always test at least 5-10 scenarios covering edge cases, boundary conditions, and typical cases before publishing.

Step 6: Publish the Version

Once testing is complete, publish the version to make it available for execution:

  1. Navigate to Versions: Return to the unit detail page and select version 1.0.0.

  2. Click "Publish Version": This marks the version as ready for use.

  3. Confirm Publication: A dialog confirms publication. The version status changes to PUBLISHED.

API Equivalent:

curl -X POST https://decision-control-dev.example.com/api/management/units/1/versions/1/publish \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"publishedBy": "sarah@demo.local"}'

Published Versions are Immutable

Once published, a version cannot be modified. To make changes, create a new version (e.g., 1.0.1 or 1.1.0).

Step 7: Execute the Decision via API

Now that the model is published, execute it via REST API:

curl -X POST https://decision-control-dev.example.com/api/runtime/units/financial-services/versions/1.0.0/execute \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "modelName": "CreditScoring",
    "decisionName": "Approval Recommended",
    "context": {
      "Applicant Age": 35,
      "Annual Income": 75000,
      "Credit History Length": 10
    }
  }'

Response:

{
  "executionId": "exec-a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "timestamp": "2025-01-25T14:30:00.000Z",
  "modelName": "CreditScoring",
  "decisionName": "Approval Recommended",
  "result": {
    "Risk Score": 720,
    "Risk Category": "LOW",
    "Approval Recommended": true
  },
  "executionTimeMs": 42,
  "status": "SUCCESS"
}
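From application code, the execute call above can be wrapped in a small helper. This is a sketch using only the Python standard library; the base URL and token handling are placeholder assumptions you would adapt to your environment:

```python
import json
import urllib.request

BASE_URL = "https://decision-control-dev.example.com"  # adjust per environment

def build_execute_request(unit: str, version: str, model: str,
                          decision: str, context: dict,
                          token: str) -> urllib.request.Request:
    """Assemble the POST request for the runtime execute endpoint."""
    url = f"{BASE_URL}/api/runtime/units/{unit}/versions/{version}/execute"
    body = json.dumps({
        "modelName": model,
        "decisionName": decision,
        "context": context,
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually execute, send the request and decode the JSON response:
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```

Separating request construction from transport makes the payload easy to unit-test without network access.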

Best Practices

Model Design:

  • Keep decision tables focused on a single concern
  • Use descriptive names for inputs, decisions, and outputs
  • Document complex logic with annotations in the DMN model
  • Limit decision tables to 20-30 rules for maintainability

Testing:

  • Test all decision paths before publishing
  • Create a test suite with expected inputs and outputs
  • Include boundary conditions (min/max values, empty strings)
  • Test with production-like data volumes

Versioning:

  • Use semantic versioning consistently
  • Document all changes in the version changelog
  • Maintain backward compatibility when possible
  • Archive old versions but keep them available for audit

Scenario 2: Testing a Model with Prompt UI

Use natural language to test decision models without knowing technical details.

Overview

The Prompt UI allows business users to test DMN models using conversational queries. This tutorial demonstrates testing the credit scoring model from Scenario 1 using natural language.

Time to Complete: 15 minutes

Prerequisites:

  • Completed Scenario 1 (published CreditScoring model)
  • Access to Decision Control with Innovator edition or higher
  • User account with testing permissions

Step 1: Access Prompt UI

  1. Navigate to Decision Control Dev:

     https://decision-control-dev.example.com

  2. Click "Prompt UI": From the Decision Control landing page, select Prompt UI.

  3. Select Your Model: From the model selector dropdown:

     • Unit: financial-services
     • Version: 1.0.0
     • Model: CreditScoring

Step 2: Basic Natural Language Query

Use conversational language to test the model:

  1. Enter a Natural Language Query:

     What is the approval recommendation for a 35-year-old applicant
     with annual income of $75,000 and 10 years of credit history?

  2. Click "Execute" or Press Enter: The system:

     • Parses the natural language query
     • Extracts input values (Age: 35, Income: 75000, History: 10)
     • Executes the decision model
     • Returns results in natural language

  3. Review the Response:

Based on the credit scoring model:

Risk Score: 720
Risk Category: LOW
Approval Recommended: Yes

This applicant qualifies for approval with a low-risk profile.
The strong credit history (10 years) and solid income level
contribute to a favorable risk assessment.
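The Prompt UI's actual parsing is internal to the product, but the extraction step it performs can be illustrated with a simple regex-based sketch. The patterns below are hypothetical and tuned only to queries phrased like the one above:

```python
import re

def extract_inputs(query: str) -> dict:
    """Illustrative input extraction from a credit-scoring question."""
    patterns = {
        "Applicant Age": r"(\d+)[ -]year[- ]old",
        "Annual Income": r"income of \$?([\d,]+)",
        "Credit History Length": r"(\d+) years? of credit history",
    }
    context = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, query, re.IGNORECASE)
        if match:
            # Strip thousands separators before converting to a number
            context[name] = int(match.group(1).replace(",", ""))
    return context
```

A real natural-language layer would handle far more phrasings; the point is simply that the conversational query is reduced to the same context object the REST API accepts.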

Step 3: Test Multiple Scenarios

Try variations to understand model behavior:

High-Risk Scenario:

Test a 22-year-old with $25,000 income and 1 year credit history

Response:

Risk Score: 500
Risk Category: HIGH
Approval Recommended: No

This applicant does not qualify for approval due to high risk.
Limited credit history and lower income contribute to elevated risk.

Boundary Test:

What happens with exactly $50,000 income, age 25, and 5 years history?

Edge Case:

Evaluate someone who is 65 years old with $150,000 income and 30 years credit history

Step 4: Compare Results

The Prompt UI allows side-by-side comparisons:

  1. Click "Compare Mode": Enable comparison view.

  2. Enter Two Scenarios:

Scenario A:

Age 30, Income $60,000, History 7 years

Scenario B:

Age 30, Income $62,000, History 7 years

  3. View Side-by-Side Results: The system highlights differences in risk scores and approval decisions.

Step 5: Export Test Results

Save test results for documentation:

  1. Click "Export Results": Choose export format (CSV, JSON, or PDF).

  2. Select Test Cases: Check the scenarios you want to export.

  3. Download: Results include inputs, outputs, timestamps, and model version.

Example JSON Export:

{
  "testSuite": "Credit Scoring Validation",
  "modelName": "CreditScoring",
  "version": "1.0.0",
  "executedAt": "2025-01-25T14:30:00.000Z",
  "executedBy": "sarah@demo.local",
  "testCases": [
    {
      "caseId": 1,
      "description": "Standard approval case",
      "inputs": {
        "Applicant Age": 35,
        "Annual Income": 75000,
        "Credit History Length": 10
      },
      "expectedOutputs": {
        "Approval Recommended": true
      },
      "actualOutputs": {
        "Risk Score": 720,
        "Risk Category": "LOW",
        "Approval Recommended": true
      },
      "status": "PASS"
    }
  ]
}
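An exported suite in this format can be re-graded offline by comparing each case's expectedOutputs against its actualOutputs. A minimal sketch:

```python
def grade_test_cases(test_suite: dict) -> dict:
    """Mark each test case PASS/FAIL by comparing expected vs. actual outputs."""
    results = {}
    for case in test_suite["testCases"]:
        expected = case["expectedOutputs"]
        actual = case["actualOutputs"]
        # Only keys listed in expectedOutputs are asserted; extra actual
        # outputs (e.g. intermediate decisions like Risk Score) are ignored.
        passed = all(actual.get(key) == value
                     for key, value in expected.items())
        results[case["caseId"]] = "PASS" if passed else "FAIL"
    return results
```

Re-running this check against a saved export gives you a cheap regression test when a new model version is published.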

Best Practices

Query Construction:

  • Use clear, specific language
  • Include all required input values
  • State units clearly (dollars, years, etc.)
  • Ask follow-up questions to explore edge cases

Testing Strategy:

  • Start with typical scenarios
  • Test boundary conditions (minimum/maximum values)
  • Verify error handling (missing inputs, invalid values)
  • Compare similar scenarios to understand sensitivity

Documentation:

  • Export test results for audit trails
  • Save test suites for regression testing
  • Include test cases in version changelogs
  • Share test results with stakeholders

Scenario 3: Promoting a Model Through Governance

Submit a model for review and navigate the approval workflow from dev to test to production.

Overview

This scenario demonstrates the complete governance workflow for promoting the CreditScoring model from Development through Testing to Production, including multiple approvals and audit trail generation.

Time to Complete: 45 minutes (depends on approver availability)

Prerequisites:

  • Published model in Development environment (from Scenario 1)
  • Access to Aletyx Decision Control Tower landing page
  • Multiple user accounts for different roles:
  • Business Analyst: sarah@demo.local
  • Risk Manager: tom@demo.local
  • Compliance Officer: maria@demo.local
  • Administrator: admin@demo.local

Step 1: Submit Model for Review (Business Analyst)

  1. Log in as Business Analyst (sarah@demo.local):

     Navigate to Aletyx Decision Control Tower:

     https://your-app.example.com

  2. Navigate to Models View: Click "Models" in the sidebar.

  3. Find Your Model:

     • Expand the financial-services unit
     • Expand version 1.0.0
     • Locate CreditScoring model

  4. Click "Submit for Review": A dialog appears with workflow options.

  5. Complete the Submission Form:

     • Workflow Type: Standard Dev → Test
     • Target Environment: Test (UAT)
     • Justification:

       Initial deployment of credit scoring model to UAT environment.
       Model has been tested with 15 scenarios covering typical,
       boundary, and edge cases. All tests passed successfully.
       Ready for UAT validation before production deployment.

     • Additional Notes:

       Test results attached. Model uses industry-standard credit
       scoring factors. No PII is stored in decision logic.

  6. Click "Submit Request": The system creates governance request #42.

API Equivalent:

curl -X POST https://governance-api.example.com/api/governance/requests \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "modelName": "CreditScoring",
    "modelVersion": "1.0.0",
    "unitName": "financial-services",
    "sourceEnv": "dev",
    "targetEnv": "test",
    "workflowType": "standard-dev-to-test",
    "submittedBy": "sarah@demo.local",
    "justification": "Initial deployment of credit scoring model to UAT..."
  }'
  7. Confirmation: You receive confirmation with request ID 42 and current status.

sequenceDiagram
    participant Sarah as Sarah (BA)
    participant System as Governance API
    participant Tom as Tom (Risk Manager)

    Sarah->>System: Submit Request #42
    System->>System: Create workflow
    System->>System: Assign to Risk Manager
    System->>Sarah: Confirmation (PENDING_REVIEW)
    Note over Tom: Notification sent
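Client code that automates submissions can validate the payload before calling the governance API. The helper below is an illustrative sketch; the set of required fields is inferred from the curl example above, not from a published schema:

```python
# Field names taken from the governance submission example; treat the
# "required" set as an assumption to verify against your deployment.
REQUIRED_FIELDS = {
    "modelName", "modelVersion", "unitName",
    "sourceEnv", "targetEnv", "workflowType",
    "submittedBy", "justification",
}

def build_governance_request(**fields) -> dict:
    """Validate and assemble the submission payload for the governance API."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return fields
```

Failing fast on an incomplete payload is friendlier than waiting for a 400 response from the server.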

Step 2: Business Review Approval (Risk Manager)

Four-Eyes Principle

Sarah cannot approve her own request. A different user with the Risk Manager role must perform this approval.

  1. Log out as Sarah and log in as Tom (tom@demo.local).

  2. Navigate to Tasks View: Click "Tasks" in the sidebar.

  3. View Pending Tasks: The table shows all requests awaiting Risk Manager approval:

     | Request ID | Model         | Version | Submitted By     | Submitted At     | Current Step |
     |------------|---------------|---------|------------------|------------------|--------------|
     | 42         | CreditScoring | 1.0.0   | sarah@demo.local | 2025-01-25 10:00 | Risk Review  |
  4. Click on Request #42: The detail view shows:

     • Model information
     • Justification from Sarah
     • Timeline of events
     • Test results (if attached)

  5. Review the Model:

     • Check the justification
     • Review test coverage
     • Verify model logic aligns with risk policies
     • Confirm no regulatory concerns

  6. Approve the Request:

     • Click "✓ Approve"
     • Enter approval comment:

       Risk assessment complete. Credit scoring logic aligns with
       our risk management policies. No high-risk factors identified.
       Model appropriately considers applicant age, income, and
       credit history. Approved for UAT deployment.

     • Click "Submit Approval"

API Equivalent:

curl -X POST https://governance-api.example.com/api/governance/requests/42/approve \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "approvedBy": "tom@demo.local",
    "comment": "Risk assessment complete. Model aligns with policies..."
  }'
  7. Next Step Assignment: The system automatically advances to the next workflow step and deploys to the Test environment.

Step 3: Verify Deployment to Test

After approval, the model is automatically deployed:

  1. View Deployment Status: The request detail page shows:

     • Status: DEPLOYED
     • Deployment Time: 2025-01-25 14:01:00Z
     • Target Environment: Test

  2. Verify in Test Environment:

# Check that the model is available in Test
curl -X GET https://decision-control-test.example.com/api/management/units \
  -H "Authorization: Bearer $TOKEN" \
  | jq '.[] | select(.name == "financial-services")'

  3. Execute Test Decision:

curl -X POST https://decision-control-test.example.com/api/runtime/units/financial-services/versions/1.0.0/execute \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "modelName": "CreditScoring",
    "decisionName": "Approval Recommended",
    "context": {
      "Applicant Age": 35,
      "Annual Income": 75000,
      "Credit History Length": 10
    }
  }'

  4. Verify Result: Confirms the model is executing correctly in the Test environment.

Step 4: UAT Testing Phase

Perform user acceptance testing in the Test environment:

  1. Run UAT Test Suite: Execute comprehensive tests with realistic data.

  2. Document Results: Record test outcomes, performance metrics, and any issues.

  3. Stakeholder Sign-Off: Obtain approval from business stakeholders.

UAT Best Practices

  • Test with production-like data volumes
  • Include end-to-end integration tests
  • Verify decision outcomes match business expectations
  • Measure performance (response time, throughput)
  • Document all issues and resolutions

Step 5: Submit for Production Deployment (Operations Manager)

After successful UAT, promote to production:

  1. Log in as Operations Manager (ops@demo.local).

  2. Navigate to Models View → Test Environment → Find CreditScoring 1.0.0.

  3. Click "Submit for Review" → Select "Standard Test → Prod" workflow.

  4. Complete Submission:

     • Justification:

       UAT testing completed successfully with 100+ test cases.
       All scenarios passed. Performance benchmarks met (avg 45ms).
       Stakeholder sign-off obtained from business and risk teams.
       Ready for production deployment.

  5. Attach UAT Report: Include test results and performance data.

  6. Submit Request: Creates request #43 for Test → Prod promotion.

Step 6: Multi-Stage Production Approval

Production deployments require multiple approvals:

Step 6a: Business Review (Different Business Analyst):

  1. Log in as Maria (maria@demo.local, Compliance Officer).

  2. Navigate to Tasks → Find Request #43.

  3. Review UAT Results: Examine test coverage and outcomes.

  4. Approve:

    Business validation complete. UAT results demonstrate model
    accuracy and reliability. Decision logic matches business
    requirements. Approved for compliance review.
    

Step 6b: Risk Review (Tom, Risk Manager):

  1. Log in as Tom (tom@demo.local).

  2. Review Production Risk Assessment: Evaluate production deployment risks.

  3. Approve:

    Production risk assessment complete. Model poses no elevated
    risk to production systems. Rollback procedures documented.
    Approved for compliance review.
    

Step 6c: Compliance Review (Maria, Compliance Officer):

  1. Log in as Maria (maria@demo.local).

  2. Verify Regulatory Compliance:

  3. Check model doesn't violate fair lending laws
  4. Verify audit trail completeness
  5. Confirm model explainability

  6. Approve:

    Compliance review complete. Model adheres to fair lending
    regulations. Audit trail is comprehensive and immutable.
    Decision logic is transparent and explainable. Approved
    for production deployment.
    

Step 6d: Final Administrator Approval (Admin):

  1. Log in as Administrator (admin@demo.local).

  2. Review Complete Approval Chain: Verify all previous approvals.

  3. Final Approval:

    All approvals obtained. Change control requirements met.
    Deployment window scheduled for 2025-01-26 02:00 UTC.
    Final approval granted for production deployment.
    

Step 6e: Automated Production Deployment:

The system automatically deploys to production after final approval:

stateDiagram-v2
    [*] --> Submit: Ops submits
    Submit --> BusinessReview: Auto
    BusinessReview --> RiskReview: Maria approves
    RiskReview --> ComplianceReview: Tom approves
    ComplianceReview --> FinalApproval: Maria approves
    FinalApproval --> Deploy: Admin approves
    Deploy --> [*]: Auto-deploy

Step 7: View Complete Audit Trail

Review the full governance history:

  1. Navigate to Tasks View → Click on Request #43.

  2. View Timeline Tab: Shows complete event history:

{
  "requestId": 43,
  "timeline": [
    {
      "event": "SUBMITTED",
      "timestamp": "2025-01-25T16:00:00Z",
      "user": "ops@demo.local",
      "details": "Request created for production deployment"
    },
    {
      "event": "BUSINESS_REVIEW_APPROVED",
      "timestamp": "2025-01-25T17:15:00Z",
      "user": "maria@demo.local",
      "comment": "Business validation complete..."
    },
    {
      "event": "RISK_REVIEW_APPROVED",
      "timestamp": "2025-01-25T18:30:00Z",
      "user": "tom@demo.local",
      "comment": "Production risk assessment complete..."
    },
    {
      "event": "COMPLIANCE_REVIEW_APPROVED",
      "timestamp": "2025-01-25T20:00:00Z",
      "user": "maria@demo.local",
      "comment": "Compliance review complete..."
    },
    {
      "event": "FINAL_APPROVAL_GRANTED",
      "timestamp": "2025-01-26T01:30:00Z",
      "user": "admin@demo.local",
      "comment": "All approvals obtained..."
    },
    {
      "event": "DEPLOYED_TO_PRODUCTION",
      "timestamp": "2025-01-26T02:00:00Z",
      "system": "governance-api",
      "targetEnv": "prod",
      "deploymentId": "deploy-abc123"
    }
  ]
}
  3. Export Audit Report: Click "Export Audit Trail" for compliance documentation.
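Metrics in the style of the Audit dashboard's "average approval time" can be derived from timelines like this one. A small sketch that computes the hours between consecutive events, assuming the ISO-8601 Z-suffixed timestamps shown:

```python
from datetime import datetime

def step_durations_hours(timeline: list) -> list:
    """Hours elapsed between consecutive audit timeline events."""
    times = [
        # fromisoformat in older Pythons doesn't accept the "Z" suffix,
        # so normalize it to an explicit UTC offset first.
        datetime.fromisoformat(event["timestamp"].replace("Z", "+00:00"))
        for event in timeline
    ]
    return [
        round((later - earlier).total_seconds() / 3600, 2)
        for earlier, later in zip(times, times[1:])
    ]
```

Summing or averaging these gaps across many requests surfaces bottlenecks such as a consistently slow approval step.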

Best Practices

Submission:

  • Provide detailed, clear justifications
  • Include test results and metrics
  • Link to related tickets or documentation
  • Specify deployment windows for production

Approvals:

  • Review thoroughly before approving
  • Provide substantive comments (not just "approved")
  • Ask clarifying questions if justification is unclear
  • Reject requests that don't meet standards

Audit Trail:

  • Export audit trails for regulatory reviews
  • Include governance history in change documentation
  • Review patterns (frequent rejections, slow approvals)
  • Use audit data for process improvement

Scenario 4: Viewing Audit Trails

Access and analyze comprehensive audit logs for compliance and troubleshooting.

Overview

Audit trails provide complete history of model changes, approvals, and deployments. This scenario demonstrates accessing audit data through the UI and API.

Time to Complete: 15 minutes

Prerequisites:

  • Completed governance workflows (Scenario 3)
  • Access to Aletyx Decision Control Tower with appropriate role
  • Compliance or Administrator permissions

Step 1: Access Audit View

  1. Log in to Aletyx Decision Control Tower: Use credentials with audit access.

  2. Navigate to Audit View: Click "Audit" in the sidebar.

  3. View Audit Dashboard: Shows summary metrics:

     • Total governance requests (last 30 days)
     • Average approval time
     • Approval rate (approved vs rejected)
     • Emergency deployments

Step 2: Filter Audit Records

Use filters to find specific events:

  1. Filter by Date Range:

     • From: 2025-01-01
     • To: 2025-01-31

  2. Filter by Event Type:

     • Select: DEPLOYED_TO_PRODUCTION

  3. Filter by User:

     • Enter: sarah@demo.local

  4. Apply Filters: View filtered results table.

Step 3: View Request Details

  1. Click on Request ID: Opens detailed timeline view.

  2. Review Event Details: Each event includes:

  3. Timestamp (with timezone)
  4. User email and roles
  5. IP address and user agent
  6. Action taken
  7. Comments or justification
  8. System-generated metadata

  9. View Approval Chain: Visualize approval flow:

graph LR
    A[Sarah submits] --> B[Tom approves Risk]
    B --> C[Maria approves Compliance]
    C --> D[Admin final approval]
    D --> E[Deployed to Prod]

Step 4: Export Audit Data

Generate compliance reports:

  1. Select Export Format:

     • PDF: Human-readable report
     • CSV: Spreadsheet analysis
     • JSON: Programmatic processing

  2. Choose Date Range and Filters: Specify scope of export.

  3. Download Report: Audit trail with all metadata.

Example CSV Export:

Request ID,Model,Version,Event Type,Timestamp,User,User Roles,IP Address,Comment
42,CreditScoring,1.0.0,SUBMITTED,2025-01-25T10:00:00Z,sarah@demo.local,decision-control-dev-users,192.168.1.100,Initial submission
42,CreditScoring,1.0.0,RISK_REVIEW_APPROVED,2025-01-25T14:00:00Z,tom@demo.local,decision-control-risk-manager,192.168.1.105,Risk assessment complete
43,CreditScoring,1.0.0,DEPLOYED_TO_PRODUCTION,2025-01-26T02:00:00Z,governance-api,system,10.0.0.5,Automated deployment
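Exports in this format are easy to post-process with standard tooling. For example, a sketch that filters rows by event type using Python's csv module (assuming the header row shown above):

```python
import csv
import io

def filter_events(csv_text: str, event_type: str) -> list:
    """Return audit rows whose Event Type column matches event_type."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["Event Type"] == event_type]
```

The same DictReader loop extends naturally to grouping by user or counting events per model for ad-hoc compliance summaries.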

Step 5: API Access to Audit Data

Retrieve audit trails programmatically:

# Get audit trail for specific request
curl -X GET "https://governance-api.example.com/api/governance/audit?requestId=42" \
  -H "Authorization: Bearer $TOKEN" \
  | jq '.[] | {timestamp, eventType, user: .userEmail, comment: .details.comment}'

Filter by date range:

curl -X GET "https://governance-api.example.com/api/governance/audit?startDate=2025-01-01&endDate=2025-01-31&eventType=DEPLOYED_TO_PRODUCTION" \
  -H "Authorization: Bearer $TOKEN"

Get user activity summary:

curl -X GET "https://governance-api.example.com/api/governance/audit/summary?user=sarah@demo.local" \
  -H "Authorization: Bearer $TOKEN"

Best Practices

Audit Review:

  • Schedule regular audit reviews (monthly/quarterly)
  • Look for patterns in rejections or delays
  • Verify four-eyes principle compliance
  • Monitor emergency workflow usage

Compliance:

  • Export audit trails for regulatory audits
  • Retain audit data per compliance requirements
  • Document audit procedures in compliance manuals
  • Train staff on audit trail access and interpretation

Troubleshooting:

  • Use audit trails to diagnose workflow issues
  • Identify bottlenecks (slow approvals)
  • Track deployment failures
  • Correlate events across systems

Next Steps