Part Two: How to Test Your Sanctions Name Screening System: A Step-by-Step Guide
9th February 2026
Practical Sanctions Screening Testing
In Part One of this series, we explored why testing your sanctions name screening system is no longer optional. Regulatory expectations from the FCA and EBA are clear: firms must be able to demonstrate that their systems are effective, efficient, and aligned with their risk appetite. But how do you actually go about testing your system in a way that satisfies regulators and strengthens your internal controls?
In this article, we’ll walk through a practical, in-house testing methodology that any financial crime prevention team can adopt. This approach is designed to be repeatable, auditable, and scalable, and it doesn’t require external consultants or expensive tools. What it does require is structure, documentation, and a commitment to continuous improvement.
Step 1: Review Your Current Setup
Start by understanding what you’re working with:
- What systems are in scope? (e.g., customer onboarding, payments, trade finance)
- What sanctions lists are being used and how are they updated?
- What matching logic is in place? (e.g., fuzzy matching thresholds, tokenization, phonetic matching)
- Who owns the system? Who is responsible for tuning and oversight?
- Are there known issues or audit findings that should inform your testing?
This baseline will help you identify where to focus your efforts and provide a reference point for measuring improvements.
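To make the matching-logic questions above concrete: fuzzy matching typically reduces to a similarity score compared against a tuning threshold. The sketch below uses Python's standard-library `difflib` purely as a stand-in; real screening engines use proprietary and more sophisticated techniques (tokenization, phonetic encoding, transliteration handling), and the threshold value shown is hypothetical.

```python
from difflib import SequenceMatcher

def match_score(screened_name: str, list_entry: str) -> float:
    """Crude similarity score between a screened name and a sanctions-list
    entry, from 0.0 (no similarity) to 1.0 (identical)."""
    return SequenceMatcher(None, screened_name.lower(), list_entry.lower()).ratio()

# A tuning threshold determines which scores raise an alert.
THRESHOLD = 0.85  # hypothetical value; in practice set from your risk appetite
score = match_score("Jon Smyth", "John Smith")
raises_alert = score >= THRESHOLD
```

Knowing where this threshold sits, and who is allowed to change it, is exactly the kind of baseline information this step should capture.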
Step 2: Define Your Testing Objectives
Before you begin testing, be clear about what you’re trying to achieve:
- Are you validating that the system detects sanctioned names accurately?
- Are you trying to reduce false positives without increasing risk?
- Are you testing the impact of a recent configuration change or list update?
Align your objectives with regulatory expectations and your internal risk appetite.
- What level of variation is acceptable?
- Should name reversal make a difference?
- How should a partial name match be handled?
Document these goals as they’ll form the foundation of your audit trail.
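One lightweight way to make these decisions auditable is to record them as a reviewable artefact rather than leaving them in prose. The field names and values below are purely illustrative, not a standard schema:

```python
# Hypothetical example of capturing testing objectives as reviewable data.
testing_objectives = {
    "detect_exact_matches": True,
    "detect_reversed_names": True,        # should "Smith John" match "John Smith"?
    "max_name_variation": "one misspelled character",  # acceptable variation level
    "partial_match_policy": "alert_and_review",
    "goal": "reduce false positives without increasing risk",
}
```

A record like this can be version-controlled and signed off, which makes later audit questions ("why was this tolerance chosen?") much easier to answer.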
Step 3: Build a Robust Test Dataset
The content of your test data will be driven by the defined testing objectives from step 2.
If you are looking to validate that the system detects sanctioned names, your approach may be:
- Download a copy of a relevant sanctions list.
- Extract all names and create a dataset where matching should identify 100%.
- Add entries to test your conditions, such as name reversal.
- Create iterations of your dataset, each with an increasing level of variation in the names.
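The steps above can be sketched in code. A minimal variant generator might look like the following; real programmes would draw on richer alias, transliteration, and misspelling data, so treat this as an illustrative starting point only:

```python
import random

def name_variants(full_name: str, seed: int = 0) -> list:
    """Generate simple test variants of a sanctions-list name:
    the exact name, the name with word order reversed, and a
    one-character-deletion misspelling."""
    rng = random.Random(seed)  # seeded so the dataset is reproducible
    parts = full_name.split()
    variants = [full_name, " ".join(reversed(parts))]
    if len(full_name) > 3:
        i = rng.randrange(1, len(full_name) - 1)
        variants.append(full_name[:i] + full_name[i + 1:])  # drop one character
    return variants
```

Running the generator over every extracted list entry gives you the "iterations with increasing variation" described above, with each variant traceable back to its source name.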
For reducing false positives, test data should reflect real-world scenarios, including:
- True positives: Known sanctioned names and their variants (e.g., aliases, transliterations, misspellings)
- False positives: Common names or near-matches that should not trigger alerts
- Edge cases: Complex or manipulated data that tests the limits of your system
Use publicly available sources (e.g., OFSI, EU, UN, OFAC lists) and synthetic data where appropriate. For PEPs try the CIA World Leaders directory.
- OFAC SDN
- UK Sanctions
- UN Sanctions
- EU Sanctions
- CIA World Leaders
Clearly document each test case, its source, and the expected outcome.
Step 4: Execute the Tests
Run your test data through the screening system in a controlled environment. Capture:
- Which names triggered alerts
- Which didn’t and whether they should have
- Matching scores or logic used
- Any anomalies or unexpected results
Avoid making changes during this phase. Focus on collecting clean, unbiased results.
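Capturing each of these points in a structured record makes the later analysis and audit trail straightforward. A minimal sketch, with hypothetical field names, might be:

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class ScreeningTestResult:
    """One row of evidence per executed test case (illustrative fields)."""
    test_id: str
    input_name: str
    expected_alert: bool
    actual_alert: bool
    match_score: float
    notes: str = ""  # anomalies or unexpected behaviour observed

def save_results(results: list, path: str) -> None:
    """Persist a test run to CSV so it can be evidenced and re-analysed."""
    fields = list(ScreeningTestResult.__dataclass_fields__)
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(asdict(r) for r in results)
```

The key design point is that expected and actual outcomes sit side by side in the same record, so discrepancies fall out of a simple comparison rather than a manual reconciliation.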
Step 5: Analyse the Results
Compare actual outcomes to your expectations:
- Did all true positives trigger alerts?
- Were any sanctioned names missed (false negatives)?
- How many false positives were generated and why?
Look for patterns. Are certain types of names consistently missed? Are false positives clustered around specific name structures or nationalities? This analysis will guide your tuning decisions.
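The headline comparison can be automated. A simple summary over labelled results, assuming each result records whether an alert was expected and whether one fired, might be:

```python
def screening_metrics(results: list) -> dict:
    """Summarise detection performance against expected outcomes.
    Each result is a dict with boolean keys 'expected' and 'alerted'."""
    tp = sum(1 for r in results if r["expected"] and r["alerted"])
    fn = sum(1 for r in results if r["expected"] and not r["alerted"])
    fp = sum(1 for r in results if not r["expected"] and r["alerted"])
    tn = sum(1 for r in results if not r["expected"] and not r["alerted"])
    detection_rate = tp / (tp + fn) if (tp + fn) else None  # share of sanctioned names caught
    precision = tp / (tp + fp) if (tp + fp) else None       # share of alerts that were genuine
    return {"true_positives": tp, "false_negatives": fn,
            "false_positives": fp, "true_negatives": tn,
            "detection_rate": detection_rate, "alert_precision": precision}
```

Any false negative is a red flag to investigate individually; the aggregate rates are what you track across tuning cycles.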
Step 6: Tune and Re-Test
Based on your findings:
- Adjust matching thresholds, logic, or list configurations
- Document every change and the rationale behind it
- Re-run your tests to confirm improvements
- Watch for unintended consequences (e.g., new false positives)
Calibration is an iterative process. Don’t expect perfection on the first try, but do expect to learn and improve with each cycle.
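The trade-off at the heart of tuning is that raising the match threshold cuts false positives but risks missing sanctioned names. A self-contained sketch of a threshold sweep over labelled test cases (again using stdlib `difflib` as a stand-in for the real matching engine) shows how each candidate setting can be evaluated before it goes live:

```python
from difflib import SequenceMatcher

def sweep_thresholds(cases: list, thresholds: list) -> dict:
    """For each candidate threshold, count missed sanctioned names (false
    negatives) and spurious alerts (false positives).
    `cases` is a list of (screened_name, list_entry, should_alert) tuples."""
    out = {}
    for t in thresholds:
        fn = fp = 0
        for name, entry, should_alert in cases:
            score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
            alert = score >= t
            if should_alert and not alert:
                fn += 1
            if not should_alert and alert:
                fp += 1
        out[t] = {"false_negatives": fn, "false_positives": fp}
    return out
```

Documenting the sweep output alongside the chosen threshold gives you exactly the "change plus rationale" evidence the next step calls for.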
Step 7: Document Everything
This is where many firms fall short and where regulators are increasingly focused. Your documentation should include:
- Test plans and objectives
- Test data and expected outcomes
- Actual results and analysis
- Configuration changes and approvals
- Final outcomes and lessons learned
A well-maintained audit trail not only satisfies regulatory scrutiny, it also helps your team build institutional knowledge and resilience.
What’s Coming Next
In Part Three, we’ll explore how to embed this testing process into your broader governance framework, including how to assign roles, manage change, and report results to senior management and regulators.