Usage Scenarios¶
This document provides step-by-step tutorials for common Decision Control workflows, from creating your first DMN model through promoting it to production. Each scenario includes detailed instructions, screenshot descriptions, code examples, and best practices for enterprise deployment.
Scenario 1: Creating and Publishing a DMN Model¶
Learn how to create a new decision model, test it, and publish it for use in Decision Control.
Overview¶
This tutorial walks through creating a credit scoring decision model that evaluates loan applicants based on age, income, and credit history. You'll use the Decision Control Authoring UI to build the model, test it with sample data, and publish it for execution.
Time to Complete: 30 minutes
Prerequisites:

- Access to Decision Control Development environment
- User account with Business Analyst role (`decision-control-dev-users`)
- Basic understanding of DMN concepts
Step 1: Access the Authoring UI¶
1. Navigate to Decision Control Dev:
2. Log in with Keycloak: You'll be redirected to the Keycloak login page. Enter your credentials:
    - Username: `sarah@demo.local`
    - Password: (your assigned password)
3. Click "Authoring UI": From the Decision Control landing page, select the Authoring UI option.
!!! note "First-Time Login"

    If this is your first time accessing Decision Control, you'll see a welcome screen. Click "Get Started" to proceed to the model authoring interface.
Step 2: Create a New Unit¶
Units organize related decision models. Create a unit for financial services models:
1. Click "Create Unit": In the top navigation, click the "+" button next to Units.
2. Enter Unit Details:
    - Name: `financial-services`
    - Description: `Financial services decision models including credit scoring and risk assessment`
    - Status: `ENABLED`
3. Click "Create": The system creates the unit and navigates to its detail page.
API Equivalent:
```bash
curl -X POST https://decision-control-dev.example.com/api/management/units \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "financial-services",
    "description": "Financial services decision models",
    "status": "ENABLED"
  }'
```
Step 3: Create a Version¶
Versions enable you to maintain multiple releases of your models:
1. Click "New Version": From the unit detail page, click "Create Version".
2. Enter Version Details:
    - Version Number: `1.0.0`
    - Change Log: `Initial release with credit scoring model`
    - Status: `DRAFT`
3. Click "Create": The version is created in DRAFT status, allowing model uploads.
!!! tip "Semantic Versioning"

    Use semantic versioning (MAJOR.MINOR.PATCH) for clarity:

    - MAJOR: Breaking changes to model interface
    - MINOR: New features, backward compatible
    - PATCH: Bug fixes, no interface changes
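As a quick illustration of these bump rules, here is a small helper in plain Python (illustrative only, not part of Decision Control):

```python
def bump(version: str, change: str) -> str:
    """Suggest the next model version number.

    change: "major" for breaking interface changes, "minor" for
    backward-compatible features, "patch" for bug fixes.
    """
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

print(bump("1.0.0", "patch"))  # 1.0.1
print(bump("1.0.1", "minor"))  # 1.1.0
```

For example, a bug fix to the 1.0.0 credit scoring model would become 1.0.1, while adding a new output decision would warrant 1.1.0.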
Step 4: Create the DMN Model¶
Now create the actual decision model:
1. Click "Upload Model" or "Create New Model": Choose "Create New Model" to use the visual editor.

2. Name the Model: `CreditScoring`

3. Create Input Data Nodes: Create three input nodes by dragging "Input Data" shapes from the palette:

    - Applicant Age (type: `number`)
    - Annual Income (type: `number`)
    - Credit History Length (type: `number`)

4. Create the Risk Score Decision: Drag a "Decision" node onto the canvas:

    - Name: `Risk Score`
    - Type: `number`

    Connect information requirements from all three input nodes to the Risk Score decision by dragging arrows from inputs to the decision node.

5. Define the Decision Logic: Click "Edit" on the Risk Score decision node, then select "Decision Table" as the expression type. Create a decision table with the following rules:

    | Applicant Age | Annual Income | Credit History Length | Risk Score |
    |---|---|---|---|
    | < 25 | < 30000 | < 2 | 500 |
    | < 25 | >= 30000 | >= 2 | 600 |
    | 25..40 | < 50000 | < 5 | 620 |
    | 25..40 | >= 50000 | >= 5 | 720 |
    | > 40 | < 60000 | < 10 | 680 |
    | > 40 | >= 60000 | >= 10 | 780 |
    | - | - | - | 650 |

    !!! tip "Hit Policy"

        Use the "FIRST" hit policy (F) for this table. The system evaluates rules top-to-bottom and returns the first match, so the final catch-all rule (650) applies only when no earlier rule matches.

6. Add a Risk Category Decision: Create another decision node that depends on Risk Score:

    - Name: `Risk Category`
    - Type: `string`
    - Expression Type: Decision Table

    | Risk Score | Risk Category |
    |---|---|
    | < 600 | "HIGH" |
    | 600..700 | "MEDIUM" |
    | > 700 | "LOW" |

7. Add an Approval Decision: Create the final decision, which recommends approval or rejection:

    - Name: `Approval Recommended`
    - Type: `boolean`
    - Expression Type: Decision Table

    | Risk Score | Annual Income | Approval Recommended |
    |---|---|---|
    | >= 700 | >= 50000 | true |
    | >= 650 | >= 75000 | true |
    | < 600 | - | false |
    | - | - | false |

8. Save the Model: Click "Save" in the top toolbar. The DMN model is now part of version 1.0.0.
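To sanity-check the table logic before exercising it in the engine, the three decisions above can be mirrored in plain Python. This is illustrative only; the real evaluation is performed by the DMN engine:

```python
def risk_score(age: float, income: float, history: float) -> int:
    # FIRST hit policy: rules are checked top-to-bottom, first match wins.
    rules = [
        (age < 25 and income < 30000 and history < 2, 500),
        (age < 25 and income >= 30000 and history >= 2, 600),
        (25 <= age <= 40 and income < 50000 and history < 5, 620),
        (25 <= age <= 40 and income >= 50000 and history >= 5, 720),
        (age > 40 and income < 60000 and history < 10, 680),
        (age > 40 and income >= 60000 and history >= 10, 780),
    ]
    for matched, score in rules:
        if matched:
            return score
    return 650  # catch-all rule

def risk_category(score: int) -> str:
    if score < 600:
        return "HIGH"
    if score <= 700:
        return "MEDIUM"
    return "LOW"

def approval_recommended(score: int, income: float) -> bool:
    if score >= 700 and income >= 50000:
        return True
    if score >= 650 and income >= 75000:
        return True
    return False  # covers both the "< 600" rule and the catch-all

score = risk_score(35, 75000, 10)
print(score, risk_category(score), approval_recommended(score, 75000))
# 720 LOW True
```

This matches the result you should see when testing the same inputs in the Authoring UI.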
Step 5: Test the Model¶
Before publishing, test the model with sample data:
1. Click the "Test" Tab: Switch to the Test view in the Authoring UI.
2. Enter Test Inputs:
    - Applicant Age: `35`
    - Annual Income: `75000`
    - Credit History Length: `10`
3. Click "Execute Decision": The system runs all decisions in the model.
4. Review Results:
5. Test Edge Cases: Try additional test scenarios:
    - Young applicant with low income: Age 22, Income 25000, History 1
    - High-risk applicant: Age 28, Income 40000, History 3
    - Ideal applicant: Age 45, Income 100000, History 15
!!! warning "Validation Required"

    Always test at least 5-10 scenarios covering edge cases, boundary conditions, and typical cases before publishing.
Step 6: Publish the Version¶
Once testing is complete, publish the version to make it available for execution:
1. Navigate to Versions: Return to the unit detail page and select version 1.0.0.
2. Click "Publish Version": This marks the version as ready for use.
3. Confirm Publication: A dialog confirms publication. The version status changes to `PUBLISHED`.
API Equivalent:
```bash
curl -X POST https://decision-control-dev.example.com/api/management/units/1/versions/1/publish \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"publishedBy": "sarah@demo.local"}'
```
!!! warning "Published Versions are Immutable"

    Once published, a version cannot be modified. To make changes, create a new version (e.g., 1.0.1 or 1.1.0).
Step 7: Execute the Decision via API¶
Now that the model is published, execute it via REST API:
```bash
curl -X POST https://decision-control-dev.example.com/api/runtime/units/financial-services/versions/1.0.0/execute \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "modelName": "CreditScoring",
    "decisionName": "Approval Recommended",
    "context": {
      "Applicant Age": 35,
      "Annual Income": 75000,
      "Credit History Length": 10
    }
  }'
```
Response:
```json
{
  "executionId": "exec-a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "timestamp": "2025-01-25T14:30:00.000Z",
  "modelName": "CreditScoring",
  "decisionName": "Approval Recommended",
  "result": {
    "Risk Score": 720,
    "Risk Category": "LOW",
    "Approval Recommended": true
  },
  "executionTimeMs": 42,
  "status": "SUCCESS"
}
```
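If you prefer to script the call, here is a minimal sketch using only Python's standard library. The endpoint path and payload fields are taken from the curl example above; the host and token are placeholders for your environment:

```python
import json
from urllib.request import Request

# Placeholder host; substitute your environment's Decision Control URL.
BASE_URL = "https://decision-control-dev.example.com"

def build_execution_request(unit: str, version: str, model: str,
                            decision: str, context: dict, token: str) -> Request:
    """Build a POST request matching the execute endpoint shown above."""
    url = f"{BASE_URL}/api/runtime/units/{unit}/versions/{version}/execute"
    payload = {"modelName": model, "decisionName": decision, "context": context}
    return Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_execution_request(
    "financial-services", "1.0.0", "CreditScoring", "Approval Recommended",
    {"Applicant Age": 35, "Annual Income": 75000, "Credit History Length": 10},
    token="<your-token>",
)
print(req.full_url)
```

Passing `req` to `urllib.request.urlopen` would send the request and return the JSON response shown above.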
Best Practices¶
Model Design:
- Keep decision tables focused on a single concern
- Use descriptive names for inputs, decisions, and outputs
- Document complex logic with annotations in the DMN model
- Limit decision tables to 20-30 rules for maintainability
Testing:
- Test all decision paths before publishing
- Create a test suite with expected inputs and outputs
- Include boundary conditions (min/max values, empty strings)
- Test with production-like data volumes
Versioning:
- Use semantic versioning consistently
- Document all changes in the version changelog
- Maintain backward compatibility when possible
- Archive old versions but keep them available for audit
Scenario 2: Testing a Model with Prompt UI¶
Use natural language to test decision models without needing to know their technical details.
Overview¶
The Prompt UI allows business users to test DMN models using conversational queries. This tutorial demonstrates testing the credit scoring model from Scenario 1 using natural language.
Time to Complete: 15 minutes
Prerequisites:
- Completed Scenario 1 (published CreditScoring model)
- Access to Decision Control with Innovator edition or higher
- User account with testing permissions
Step 1: Access Prompt UI¶
1. Navigate to Decision Control Dev:
2. Click "Prompt UI": From the Decision Control landing page, select Prompt UI.
3. Select Your Model: From the model selector dropdown:
    - Unit: `financial-services`
    - Version: `1.0.0`
    - Model: `CreditScoring`
Step 2: Basic Natural Language Query¶
Use conversational language to test the model:
1. Enter a Natural Language Query:

    ```
    What is the approval recommendation for a 35-year-old applicant
    with annual income of $75,000 and 10 years of credit history?
    ```

2. Click "Execute" or Press Enter: The system:
    - Parses the natural language query
    - Extracts the input values (Age: 35, Income: 75000, History: 10)
    - Executes the decision model
    - Returns the results in natural language

3. Review the Response:

    ```
    Based on the credit scoring model:
    Risk Score: 720
    Risk Category: LOW
    Approval Recommended: Yes

    This applicant qualifies for approval with a low-risk profile.
    The strong credit history (10 years) and solid income level
    contribute to a favorable risk assessment.
    ```
Step 3: Test Multiple Scenarios¶
Try variations to understand model behavior:
High-Risk Scenario:
Response:
```
Risk Score: 500
Risk Category: HIGH
Approval Recommended: No

This applicant does not qualify for approval due to high risk.
Limited credit history and lower income contribute to elevated risk.
```
Boundary Test:
Edge Case:
Step 4: Compare Results¶
The Prompt UI allows side-by-side comparisons:
1. Click "Compare Mode": Enable comparison view.
2. Enter Two Scenarios:

    Scenario A:

    Scenario B:

3. View Side-by-Side Results: The system highlights differences in risk scores and approval decisions.
Step 5: Export Test Results¶
Save test results for documentation:
1. Click "Export Results": Choose an export format (CSV, JSON, or PDF).
2. Select Test Cases: Check the scenarios you want to export.
3. Download: Results include inputs, outputs, timestamps, and model version.
Example JSON Export:
```json
{
  "testSuite": "Credit Scoring Validation",
  "modelName": "CreditScoring",
  "version": "1.0.0",
  "executedAt": "2025-01-25T14:30:00.000Z",
  "executedBy": "sarah@demo.local",
  "testCases": [
    {
      "caseId": 1,
      "description": "Standard approval case",
      "inputs": {
        "Applicant Age": 35,
        "Annual Income": 75000,
        "Credit History Length": 10
      },
      "expectedOutputs": {
        "Approval Recommended": true
      },
      "actualOutputs": {
        "Risk Score": 720,
        "Risk Category": "LOW",
        "Approval Recommended": true
      },
      "status": "PASS"
    }
  ]
}
```
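The PASS/FAIL status in this export follows a simple rule: a test case passes when every expected output matches the corresponding actual output. A small sketch of that comparison, using the field names from the example above:

```python
def grade_test_case(case: dict) -> str:
    """Return PASS if every expectedOutputs entry matches actualOutputs."""
    expected = case["expectedOutputs"]
    actual = case["actualOutputs"]
    ok = all(actual.get(key) == value for key, value in expected.items())
    return "PASS" if ok else "FAIL"

case = {
    "expectedOutputs": {"Approval Recommended": True},
    "actualOutputs": {"Risk Score": 720, "Risk Category": "LOW",
                      "Approval Recommended": True},
}
print(grade_test_case(case))  # PASS
```

Note that extra actual outputs (such as Risk Score) do not affect the grade; only the expected fields are compared.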
Best Practices¶
Query Construction:
- Use clear, specific language
- Include all required input values
- State units clearly (dollars, years, etc.)
- Ask follow-up questions to explore edge cases
Testing Strategy:
- Start with typical scenarios
- Test boundary conditions (minimum/maximum values)
- Verify error handling (missing inputs, invalid values)
- Compare similar scenarios to understand sensitivity
Documentation:
- Export test results for audit trails
- Save test suites for regression testing
- Include test cases in version changelogs
- Share test results with stakeholders
Scenario 3: Promoting a Model Through Governance¶
Submit a model for review and navigate the approval workflow from dev to test to production.
Overview¶
This scenario demonstrates the complete governance workflow for promoting the CreditScoring model from Development through Testing to Production, including multiple approvals and audit trail generation.
Time to Complete: 45 minutes (depends on approver availability)
Prerequisites:
- Published model in Development environment (from Scenario 1)
- Access to Aletyx Decision Control Tower landing page
- Multiple user accounts for different roles:
    - Business Analyst: `sarah@demo.local`
    - Risk Manager: `tom@demo.local`
    - Compliance Officer: `maria@demo.local`
    - Administrator: `admin@demo.local`
Step 1: Submit Model for Review (Business Analyst)¶
1. Log in as Business Analyst (`sarah@demo.local`): Navigate to Aletyx Decision Control Tower:
2. Navigate to the Models View: Click "Models" in the sidebar.
3. Find Your Model:
    - Expand the `financial-services` unit
    - Expand version `1.0.0`
    - Locate the `CreditScoring` model
4. Click "Submit for Review": A dialog appears with workflow options.
5. Complete the Submission Form:
    - Workflow Type: `Standard Dev → Test`
    - Target Environment: `Test (UAT)`
    - Justification:
    - Additional Notes:
6. Click "Submit Request": The system creates governance request #42.
API Equivalent:
```bash
curl -X POST https://governance-api.example.com/api/governance/requests \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "modelName": "CreditScoring",
    "modelVersion": "1.0.0",
    "unitName": "financial-services",
    "sourceEnv": "dev",
    "targetEnv": "test",
    "workflowType": "standard-dev-to-test",
    "submittedBy": "sarah@demo.local",
    "justification": "Initial deployment of credit scoring model to UAT..."
  }'
```
- Confirmation: You receive confirmation with request ID 42 and current status.
```mermaid
sequenceDiagram
    participant Sarah as Sarah (BA)
    participant System as Governance API
    participant Tom as Tom (Risk Manager)
    Sarah->>System: Submit Request #42
    System->>System: Create workflow
    System->>System: Assign to Risk Manager
    System->>Sarah: Confirmation (PENDING_REVIEW)
    Note over Tom: Notification sent
```
Step 2: Business Review Approval (Risk Manager)¶
!!! warning "Four-Eyes Principle"

    Sarah cannot approve her own request. A different user with the Risk Manager role must perform this approval.
1. Log out as Sarah, then log in as Tom (`tom@demo.local`).
2. Navigate to the Tasks View: Click "Tasks" in the sidebar.
3. View Pending Tasks: The table shows all requests awaiting Risk Manager approval:

    | Request ID | Model | Version | Submitted By | Submitted At | Current Step |
    |---|---|---|---|---|---|
    | 42 | CreditScoring | 1.0.0 | sarah@demo.local | 2025-01-25 10:00 | Risk Review |

4. Click on Request #42: The detail view shows:
    - Model information
    - Justification from Sarah
    - Timeline of events
    - Test results (if attached)
5. Review the Model:
    - Check the justification
    - Review test coverage
    - Verify model logic aligns with risk policies
    - Confirm no regulatory concerns
6. Approve the Request:
    - Click "✓ Approve"
    - Enter an approval comment:
    - Click "Submit Approval"
API Equivalent:
```bash
curl -X POST https://governance-api.example.com/api/governance/requests/42/approve \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "approvedBy": "tom@demo.local",
    "comment": "Risk assessment complete. Model aligns with policies..."
  }'
```
- Next Step Assignment: The system automatically advances to the next workflow step and deploys to Test environment.
Step 3: Verify Deployment to Test¶
After approval, the model is automatically deployed:
1. View Deployment Status: The request detail page shows:
    - Status: `DEPLOYED`
    - Deployment Time: `2025-01-25 14:01:00Z`
    - Target Environment: `Test`

2. Verify in Test Environment:

    ```bash
    # Check that the model is available in Test
    curl -X GET https://decision-control-test.example.com/api/management/units \
      -H "Authorization: Bearer $TOKEN" \
      | jq '.[] | select(.name == "financial-services")'
    ```

3. Execute a Test Decision:

    ```bash
    curl -X POST https://decision-control-test.example.com/api/runtime/units/financial-services/versions/1.0.0/execute \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
        "modelName": "CreditScoring",
        "decisionName": "Approval Recommended",
        "context": {
          "Applicant Age": 35,
          "Annual Income": 75000,
          "Credit History Length": 10
        }
      }'
    ```

4. Verify the Result: Confirms the model is executing correctly in the Test environment.
Step 4: UAT Testing Phase¶
Perform user acceptance testing in the Test environment:
1. Run the UAT Test Suite: Execute comprehensive tests with realistic data.
2. Document Results: Record test outcomes, performance metrics, and any issues.
3. Obtain Stakeholder Sign-Off: Secure approval from business stakeholders.
!!! tip "UAT Best Practices"

    - Test with production-like data volumes
    - Include end-to-end integration tests
    - Verify decision outcomes match business expectations
    - Measure performance (response time, throughput)
    - Document all issues and resolutions
Step 5: Submit for Production Deployment (Operations Manager)¶
After successful UAT, promote to production:
1. Log in as Operations Manager (`ops@demo.local`).
2. Navigate to Models View → Test Environment → find CreditScoring 1.0.0.
3. Click "Submit for Review" → select the "Standard Test → Prod" workflow.
4. Complete the Submission:
    - Justification:
    - Attach the UAT Report: Include test results and performance data.
5. Submit the Request: Creates request #43 for the Test → Prod promotion.
Step 6: Multi-Stage Production Approval¶
Production deployments require multiple approvals:
Step 6a: Business Review (Different Business Analyst):

1. Log in as Maria (`maria@demo.local`, Compliance Officer).
2. Navigate to Tasks → find Request #43.
3. Review UAT Results: Examine test coverage and outcomes.
4. Approve:

Step 6b: Risk Review (Tom, Risk Manager):

1. Log in as Tom (`tom@demo.local`).
2. Review the Production Risk Assessment: Evaluate production deployment risks.
3. Approve:

Step 6c: Compliance Review (Maria, Compliance Officer):

1. Log in as Maria (`maria@demo.local`).
2. Verify Regulatory Compliance:
    - Check that the model doesn't violate fair lending laws
    - Verify audit trail completeness
    - Confirm model explainability
3. Approve:

Step 6d: Final Administrator Approval (Admin):

1. Log in as Administrator (`admin@demo.local`).
2. Review the Complete Approval Chain: Verify all previous approvals.
3. Final Approval:
Step 6e: Automated Production Deployment:
The system automatically deploys to production after final approval:
```mermaid
stateDiagram-v2
    [*] --> Submit: Ops submits
    Submit --> BusinessReview: Auto
    BusinessReview --> RiskReview: Maria approves
    RiskReview --> ComplianceReview: Tom approves
    ComplianceReview --> FinalApproval: Maria approves
    FinalApproval --> Deploy: Admin approves
    Deploy --> [*]: Auto-deploy
```
Step 7: View Complete Audit Trail¶
Review the full governance history:
1. Navigate to the Tasks View → click on Request #43.

2. View the Timeline Tab: Shows the complete event history:

    ```json
    {
      "requestId": 43,
      "timeline": [
        {
          "event": "SUBMITTED",
          "timestamp": "2025-01-25T16:00:00Z",
          "user": "ops@demo.local",
          "details": "Request created for production deployment"
        },
        {
          "event": "BUSINESS_REVIEW_APPROVED",
          "timestamp": "2025-01-25T17:15:00Z",
          "user": "maria@demo.local",
          "comment": "Business validation complete..."
        },
        {
          "event": "RISK_REVIEW_APPROVED",
          "timestamp": "2025-01-25T18:30:00Z",
          "user": "tom@demo.local",
          "comment": "Production risk assessment complete..."
        },
        {
          "event": "COMPLIANCE_REVIEW_APPROVED",
          "timestamp": "2025-01-25T20:00:00Z",
          "user": "maria@demo.local",
          "comment": "Compliance review complete..."
        },
        {
          "event": "FINAL_APPROVAL_GRANTED",
          "timestamp": "2025-01-26T01:30:00Z",
          "user": "admin@demo.local",
          "comment": "All approvals obtained..."
        },
        {
          "event": "DEPLOYED_TO_PRODUCTION",
          "timestamp": "2025-01-26T02:00:00Z",
          "system": "governance-api",
          "targetEnv": "prod",
          "deploymentId": "deploy-abc123"
        }
      ]
    }
    ```

3. Export the Audit Report: Click "Export Audit Trail" for compliance documentation.
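Exported timelines lend themselves to simple analysis. For instance, this sketch computes the end-to-end approval time from the timeline format above (event names as shown; the trailing "Z" is normalized so `datetime.fromisoformat` accepts it on older Python versions):

```python
from datetime import datetime

def approval_duration_hours(timeline: list) -> float:
    """Hours elapsed from SUBMITTED to DEPLOYED_TO_PRODUCTION."""
    stamps = {
        event["event"]: datetime.fromisoformat(
            event["timestamp"].replace("Z", "+00:00"))
        for event in timeline
    }
    delta = stamps["DEPLOYED_TO_PRODUCTION"] - stamps["SUBMITTED"]
    return delta.total_seconds() / 3600

# Trimmed copy of the timeline shown above.
timeline = [
    {"event": "SUBMITTED", "timestamp": "2025-01-25T16:00:00Z"},
    {"event": "DEPLOYED_TO_PRODUCTION", "timestamp": "2025-01-26T02:00:00Z"},
]
print(approval_duration_hours(timeline))  # 10.0
```

The same approach works for measuring time spent in each individual review step.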
Best Practices¶
Submission:
- Provide detailed, clear justifications
- Include test results and metrics
- Link to related tickets or documentation
- Specify deployment windows for production
Approvals:
- Review thoroughly before approving
- Provide substantive comments (not just "approved")
- Ask clarifying questions if justification is unclear
- Reject requests that don't meet standards
Audit Trail:
- Export audit trails for regulatory reviews
- Include governance history in change documentation
- Review patterns (frequent rejections, slow approvals)
- Use audit data for process improvement
Scenario 4: Viewing Audit Trails¶
Access and analyze comprehensive audit logs for compliance and troubleshooting.
Overview¶
Audit trails provide complete history of model changes, approvals, and deployments. This scenario demonstrates accessing audit data through the UI and API.
Time to Complete: 15 minutes
Prerequisites:
- Completed governance workflows (Scenario 3)
- Access to Aletyx Decision Control Tower with appropriate role
- Compliance or Administrator permissions
Step 1: Access Audit View¶
1. Log in to Aletyx Decision Control Tower: Use credentials with audit access.
2. Navigate to the Audit View: Click "Audit" in the sidebar.
3. View the Audit Dashboard: Shows summary metrics:
    - Total governance requests (last 30 days)
    - Average approval time
    - Approval rate (approved vs. rejected)
    - Emergency deployments
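These dashboard metrics can also be reproduced from raw request data. The sketch below uses assumed field names (`status`, `approvalHours` are illustrative, not a documented schema):

```python
def audit_summary(requests: list) -> dict:
    """Compute dashboard-style metrics from a list of governance requests."""
    decided = [r for r in requests if r["status"] in ("APPROVED", "REJECTED")]
    approved = [r for r in decided if r["status"] == "APPROVED"]
    avg_hours = (sum(r["approvalHours"] for r in approved) / len(approved)
                 if approved else 0.0)
    return {
        "total": len(requests),
        "approvalRate": len(approved) / len(decided) if decided else 0.0,
        "avgApprovalHours": avg_hours,
    }

requests = [
    {"status": "APPROVED", "approvalHours": 4.0},
    {"status": "APPROVED", "approvalHours": 10.0},
    {"status": "REJECTED"},
    {"status": "PENDING_REVIEW"},
]
print(audit_summary(requests))
```

Pending requests count toward the total but are excluded from the approval rate, which only compares decided requests.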
Step 2: Filter Audit Records¶
Use filters to find specific events:
1. Filter by Date Range:
    - From: `2025-01-01`
    - To: `2025-01-31`
2. Filter by Event Type:
    - Select: `DEPLOYED_TO_PRODUCTION`
3. Filter by User:
    - Enter: `sarah@demo.local`
4. Apply Filters: View the filtered results table.
Step 3: View Request Details¶
1. Click on a Request ID: Opens the detailed timeline view.
2. Review Event Details: Each event includes:
    - Timestamp (with timezone)
    - User email and roles
    - IP address and user agent
    - Action taken
    - Comments or justification
    - System-generated metadata
3. View the Approval Chain: Visualize the approval flow:

    ```mermaid
    graph LR
        A[Sarah submits] --> B[Tom approves Risk]
        B --> C[Maria approves Compliance]
        C --> D[Admin final approval]
        D --> E[Deployed to Prod]
    ```
Step 4: Export Audit Data¶
Generate compliance reports:
1. Select an Export Format:
    - PDF: Human-readable report
    - CSV: Spreadsheet analysis
    - JSON: Programmatic processing
2. Choose Date Range and Filters: Specify the scope of the export.
3. Download the Report: The audit trail is exported with all metadata.
Example CSV Export:
```csv
Request ID,Model,Version,Event Type,Timestamp,User,User Roles,IP Address,Comment
42,CreditScoring,1.0.0,SUBMITTED,2025-01-25T10:00:00Z,sarah@demo.local,decision-control-dev-users,192.168.1.100,Initial submission
42,CreditScoring,1.0.0,RISK_REVIEW_APPROVED,2025-01-25T14:00:00Z,tom@demo.local,decision-control-risk-manager,192.168.1.105,Risk assessment complete
43,CreditScoring,1.0.0,DEPLOYED_TO_PRODUCTION,2025-01-26T02:00:00Z,governance-api,system,10.0.0.5,Automated deployment
```
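CSV exports are easy to post-process. For example, extracting production deployments with Python's standard `csv` module (the data here is a trimmed copy of the example export):

```python
import csv
import io

# Trimmed copy of the CSV export; header names match the example above.
export = """Request ID,Model,Version,Event Type,Timestamp,User,User Roles,IP Address,Comment
42,CreditScoring,1.0.0,SUBMITTED,2025-01-25T10:00:00Z,sarah@demo.local,decision-control-dev-users,192.168.1.100,Initial submission
43,CreditScoring,1.0.0,DEPLOYED_TO_PRODUCTION,2025-01-26T02:00:00Z,governance-api,system,10.0.0.5,Automated deployment
"""

rows = list(csv.DictReader(io.StringIO(export)))
deployments = [r for r in rows if r["Event Type"] == "DEPLOYED_TO_PRODUCTION"]
for r in deployments:
    print(r["Request ID"], r["Model"], r["Timestamp"])
# 43 CreditScoring 2025-01-26T02:00:00Z
```

When processing a downloaded file, replace the `io.StringIO` wrapper with an open file handle.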
Step 5: API Access to Audit Data¶
Retrieve audit trails programmatically:
```bash
# Get the audit trail for a specific request
curl -X GET "https://governance-api.example.com/api/governance/audit?requestId=42" \
  -H "Authorization: Bearer $TOKEN" \
  | jq '.[] | {timestamp, eventType, user: .userEmail, comment: .details.comment}'
```

Filter by date range:

```bash
curl -X GET "https://governance-api.example.com/api/governance/audit?startDate=2025-01-01&endDate=2025-01-31&eventType=DEPLOYED_TO_PRODUCTION" \
  -H "Authorization: Bearer $TOKEN"
```

Get a user activity summary:

```bash
curl -X GET "https://governance-api.example.com/api/governance/audit/summary?user=sarah@demo.local" \
  -H "Authorization: Bearer $TOKEN"
```
Best Practices¶
Audit Review:
- Schedule regular audit reviews (monthly/quarterly)
- Look for patterns in rejections or delays
- Verify four-eyes principle compliance
- Monitor emergency workflow usage
Compliance:
- Export audit trails for regulatory audits
- Retain audit data per compliance requirements
- Document audit procedures in compliance manuals
- Train staff on audit trail access and interpretation
Troubleshooting:
- Use audit trails to diagnose workflow issues
- Identify bottlenecks (slow approvals)
- Track deployment failures
- Correlate events across systems
Next Steps¶
- Integration and APIs: Integrate Decision Control with your systems
- Governance Workflow: Configure custom workflows
- FAQ and Troubleshooting: Common issues and solutions