
3 Simple Tests for Better Matching and More Efficient Name Screening

Taking another look at your name matching logic is probably worthwhile, especially now.

Meeting regulatory expectations - particularly with the explosion in sanctions and watchlists over the past several years - is more important than ever. 

While not exhaustive, the areas covered below represent quick checks on how well your name screening engine is performing (hopefully you have already looked into this as part of above- and below-the-line testing). It is well worth 15 minutes to run them and compare against your current OFAC and other list screening operations.

Name & transaction screening operations for sanctions, watchlists and other risks are more nuanced than they appear (we've written extensively on this). Exact matching is one thing (and absolutely must be nailed from a performance perspective), but it is the edge cases that create many of the issues for risk operations teams. Smart matching algorithms help with transliteration and the other data-related problems we commonly see across organizations.

Three edge cases and tests of note:

1. Vladimir Putin vs. Vlad Putin (the nickname test)

U.S. Treasury OFAC's online search tool returns Vladimir Putin at 100%; however, it does not return Vlad Putin unless you lower the match threshold to 73%. Many screening systems fail this test, which makes your algorithm choice, pre-optimization of your data and level of partnership with your vendor critical to getting tuning right.

As a bonus, in this case the common nickname for Vladimir is not typically Vlad - it's Vova, among others - and capturing this is part of the pre-optimization of list data that can put you on the front foot.
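As an illustration of what that kind of list pre-optimization could look like, below is a minimal Python sketch (standard library only) that expands a list entry into nickname variants before scoring. The nickname table and the similarity function are assumptions for demonstration, not OFAC's or any vendor's actual logic.

```python
from difflib import SequenceMatcher

# Hypothetical nickname table used to pre-expand list entries before matching.
NICKNAMES = {
    "vladimir": ["vlad", "vova", "volodya"],
}

def expand_variants(list_name: str) -> list:
    """Return the list entry plus variants with known nicknames substituted."""
    tokens = list_name.lower().split()
    variants = [" ".join(tokens)]
    for i, token in enumerate(tokens):
        for nick in NICKNAMES.get(token, []):
            variants.append(" ".join(tokens[:i] + [nick] + tokens[i + 1:]))
    return variants

def best_score(query: str, list_name: str) -> float:
    """Score a query against a list entry and all of its nickname variants."""
    query = query.lower()
    return max(SequenceMatcher(None, query, v).ratio() for v in expand_variants(list_name))

print(SequenceMatcher(None, "vlad putin", "vladimir putin").ratio())  # ~0.83 without expansion
print(best_score("Vlad Putin", "Vladimir Putin"))                     # 1.0 once "vlad" is a known variant
```

The point is not the scoring function itself, but that nickname expansion happens on the list side, before any match threshold is applied.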

2. Name and birthday merges 

Sometimes your data is not as clean as you would like, and you may see the combination of a name and a birthday presented for screening. In this case, let's again use Vladimir Putin.

Vladimir Putin vs. Vladimir Putin 1952 

Matching algorithms should universally catch the first example, but what about the second? U.S. Treasury OFAC's online search tool returns it at 67%, while many other commercially available tools we tested fail to return it even at 50% (a threshold that is not realistic to run in practice if you also want efficient operations).
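One way to handle this, sketched below under the assumption that a trailing four-digit token is a birth year rather than part of the name, is to split the record into name and date-of-birth components before scoring. This is illustrative pre-processing, not a description of any particular tool's pipeline.

```python
import re
from difflib import SequenceMatcher

def split_name_and_year(raw: str):
    """Split a trailing four-digit year (assumed to be a birth year) off a screening input."""
    match = re.fullmatch(r"(.*?)[,\s]+((?:19|20)\d{2})", raw.strip())
    if match:
        return match.group(1).strip(), match.group(2)
    return raw.strip(), None

def score(query: str, list_name: str) -> float:
    """Score only the name component; the year can be compared against the list DOB separately."""
    name, _year = split_name_and_year(query)
    return SequenceMatcher(None, name.lower(), list_name.lower()).ratio()

print(SequenceMatcher(None, "vladimir putin 1952", "vladimir putin").ratio())  # ~0.85, diluted by the year
print(score("Vladimir Putin 1952", "Vladimir Putin"))                          # 1.0 once the year is split off
```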

3. Fat fingers (i.e., user error)

Everyone makes mistakes. The chances that your data is completely accurate are near zero, based on the hundreds of examples we have looked at over the years.

With this in mind, let's again take Vladimir Putin and present a data-entry-error version for screening: Vl@dimir P^tin. Will your system match this name at an acceptable threshold?

In this example, the OFAC online search tool matches at 96% (which is great); however, other commercially available tools we tested did not match above 80% and would have missed this user error in the data.
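One way to harden against this kind of input, sketched below with a small, assumed substitution table, is to normalize common symbol-for-letter swaps before computing a similarity score. The substitutions shown are illustrative and deliberately tuned to the example above; a production table would be broader.

```python
from difflib import SequenceMatcher

# Hypothetical symbol-to-letter substitutions seen in noisy input data.
SUBSTITUTIONS = str.maketrans({"@": "a", "^": "u", "0": "o", "$": "s"})

def normalize(name: str) -> str:
    """Lowercase the name and undo common symbol-for-letter swaps."""
    return name.lower().translate(SUBSTITUTIONS)

def score(query: str, list_name: str) -> float:
    return SequenceMatcher(None, normalize(query), normalize(list_name)).ratio()

print(SequenceMatcher(None, "vl@dimir p^tin", "vladimir putin").ratio())  # ~0.86 on the raw strings
print(score("Vl@dimir P^tin", "Vladimir Putin"))                          # 1.0 after normalization
```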

Summary 

Getting your watchlist and name screening system dialed in is more critical than ever - since 2022, OFAC alone has added thousands of names to its sanctions lists. This is doubly true if you operate in, transact in or facilitate transactions through high-risk jurisdictions and/or high-risk industries, and wherever transliteration may be an issue.

There are hundreds of scenarios that you can - and should - test, including those listed above.

To find out more about Sigma360’s approach to advanced adverse media capabilities, please request your demo today.

How can Sigma360 help me stay ahead? 

Using Sigma360's risk decisioning software platform, organizations can not only get ahead of risk, but also leverage messy and unstructured data - like the screening examples highlighted in this article.


To see Sigma360 in action, book a demo.