Nov 25, 2013

Pricing Model Validation – Regulation & Best Practices

In today’s video blog, Dr. David Eliezer, Vice President and Head of Model Validation at Numerix, sits down with CMO Jim Jockle to discuss the increased importance of model validation. Jim and David discuss the primary regulatory and market drivers behind the resurgence of model validation, and touch on a number of issues, including the importance of transparency and the five key components that comprise the validation process for derivatives. David also expands upon model implementation, focusing on best practices to address fundamental model issues.

Weigh in and continue the conversation on Twitter @nxanalytics, LinkedIn, or in the comments section.


Video Transcript: Pricing Model Validation - Regulation & Best Practices

Jim Jockle (Host): Hi, welcome to the Numerix Video Blog, I’m your host, Jim Jockle. Joining me today is Dr. David Eliezer, Vice President of Model Validation here at Numerix. David, how are you?

David Eliezer (Guest): I’m okay. How are you Jim?  

Jockle: Thank you for joining us. Following up on model validation, the area of oversight you’re in charge of: it really has come back into the limelight yet again. We’ve seen some significant trading losses in the market, new reflections on pricing models, and, as recently as this past August, further guidance from the OCC as it relates to supervisory observations and guidance around model validation, building off the 2004 MiFID directive and the 2006 CRD directive.

One of the key elements I really want to dive into around validation is this concept of transparency. Can you give us a solid definition? When it comes to pricing models, how would you define transparency?

Eliezer: A transparent model is one where I don’t need to look at your code: if you tell me, in just a few words, what you have done, what equation you’ve solved, what expectation you’re evaluating, then I can go and implement it. I don’t need to see your code, and we can get the same number. Maybe my code performs more slowly than yours, or uses different numerical methods entirely, but we should get the same number. And then the testing of that model is straightforward, because we can check whether you’ve solved the equation and so on.

So even the test can be made transparent. I don’t need to say that I’ve moved some number and the result looked reasonable. Very often people do reasonability tests, and exactly what they consider reasonable is not transparent. What is transparent is: I test a mathematical identity, for example, or I make it very clear exactly what criteria I’ve used to pass or fail.
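
As a concrete illustration of the kind of identity-based, transparent test David describes, here is a minimal sketch in Python. The Black-Scholes pricer, the put-call parity check, and the tolerance are illustrative assumptions for this post, not Numerix code.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(spot, strike, rate, vol, expiry, is_call=True):
    """Black-Scholes price of a European option (illustrative pricer)."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * expiry) / (vol * math.sqrt(expiry))
    d2 = d1 - vol * math.sqrt(expiry)
    if is_call:
        return spot * norm_cdf(d1) - strike * math.exp(-rate * expiry) * norm_cdf(d2)
    return strike * math.exp(-rate * expiry) * norm_cdf(-d2) - spot * norm_cdf(-d1)

def test_put_call_parity():
    """Transparent pass/fail criterion: C - P must equal S - K * exp(-r * T)."""
    spot, strike, rate, vol, expiry = 100.0, 95.0, 0.03, 0.2, 1.5
    call = bs_price(spot, strike, rate, vol, expiry, is_call=True)
    put = bs_price(spot, strike, rate, vol, expiry, is_call=False)
    parity = spot - strike * math.exp(-rate * expiry)
    assert abs((call - put) - parity) < 1e-10

test_put_call_parity()
```

The test passes or fails against a stated identity and a stated tolerance, so anyone can rerun it and reach the same conclusion, which is the sense of transparency described above.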

Jockle: So within that regard, one could argue that there are probably five basic elements that go into a validation: the market parameters, the configuration parameters, the implementation of the model itself, the assumptions and approximations, the deal representation, and then finally, of course, the source of data. Many issues around market data have been well documented and discussed, and we could talk about all of these different areas in different segments. But I really want to talk about the model implementation at this point in time.

In a recent presentation I watched you give, you were talking about examinations of code, and best practices around code in the model validation process. Perhaps you could give us a bit of insight into the way you recommend organizations look at code implementation and testing within that environment.

Eliezer: Okay. Well, I think the first thing to think about with a code base, and we’re talking about mathematical code bases specifically now, is that code bases like these have a kind of life, and the state of the code can deteriorate. You can see when code starts to decay and deteriorate by various symptoms: programmers are afraid to go into the code; they spend more time fixing bugs than they do developing new code; it takes a very long time to add a new feature because programmers have to be so careful, because nobody knows what is going on inside the code; there are longstanding bugs that nobody knows how to fix, so they just work around them. If you use workarounds on top of workarounds on top of workarounds, you know eventually that code base is going to drown in bugs.

So what you need to do to maintain this, and this is before the process of model validation, is really bug prevention rather than bug catching. Bug prevention means that you have segmented layers of code. The most basic code sits in one library, and it has tests, unit tests, that guarantee that every component functions the way it’s supposed to function. Each one should have a very clear interface so that no mistake can be made as to what the behavior of that function is going to be. No one should be surprised if there is a side effect, because it’s very well documented and very obvious that it’s going to produce a side effect. Functions that don’t have side effects should somehow be marked as such.
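
To make the idea of a well-tested, clearly specified low-level component concrete, here is a hypothetical sketch in Python; the function and its tests are illustrative assumptions, not a description of any particular library.

```python
import math

def discount_factor(rate: float, time: float) -> float:
    """Continuously compounded discount factor exp(-rate * time).

    Pure function: no hidden state and no side effects, and the interface
    states exactly what goes in and what comes out.
    """
    if time < 0.0:
        raise ValueError("time must be non-negative")
    return math.exp(-rate * time)

def test_discount_factor():
    """Unit tests that pin down the component's behavior."""
    assert discount_factor(0.0, 5.0) == 1.0                          # zero rate discounts to par
    assert abs(discount_factor(0.05, 1.0) - math.exp(-0.05)) < 1e-15
    assert discount_factor(0.05, 2.0) < discount_factor(0.05, 1.0)   # longer time, smaller factor

test_discount_factor()
```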

And mathematical code in particular needs to take advantage of this, because mathematical code is the most difficult type of code to debug: the symptoms of bugs can be extremely subtle. A number could be off just slightly, and that could be indicating a serious bug. But it’s also the easiest code to test, because there are all sorts of identities, and the identities are so strong that I can guarantee you that if the code passes a certain test, it can’t be wrong. So if you build that lowest layer and give it this layer of protecting tests, then the upper layer should make sure that it never inlines that functionality but always calls the lower-layer functions. Even if there is some optimization you could make use of to make the code faster, one that would let you inline this call to the cumnorm or this matrix inverse, don’t do it.
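
A hedged sketch of what such identity tests might look like for the routines David mentions, using Python and NumPy; the tolerances and the test matrix are assumptions chosen for illustration.

```python
import math
import numpy as np

def norm_cdf(x):
    """Standard normal CDF via the error function (illustrative cumnorm)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def test_matrix_inverse_identity():
    """If inv(A) is right, A @ inv(A) must reproduce the identity matrix."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # well-conditioned test matrix
    max_error = np.abs(a @ np.linalg.inv(a) - np.eye(5)).max()
    assert max_error < 1e-10

def test_cumnorm_symmetry():
    """The cumulative normal must satisfy N(x) + N(-x) = 1 for every x."""
    for x in (-3.0, -0.5, 0.0, 1.25, 4.0):
        assert abs(norm_cdf(x) + norm_cdf(-x) - 1.0) < 1e-12

test_matrix_inverse_identity()
test_cumnorm_symmetry()
```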

Jockle: So let me ask you a question now. Right, so that sounds all well and good in a very non-dynamic environment. How are institutions supposed to manage the demands and needs of the front office, the traders, where perhaps there aren’t rigorous elements of documentation within the code? Or there is leakage and rot within the code over time, yet that process has been ongoing for many years, where things are being built upon or worked around. What are some ways institutions can better manage that kind of time to market and competitive edge that…

Eliezer: Well, this is sort of what I was trying to get at. There are parts of the code, even option pricing code, that have to be gotten out the door in ten minutes. There are parts of it that are eternal. You know, a Cholesky decomposition does not change next year. Get that into the lower library, get it reliable, make sure that it has a clean interface and that it’s always working. And then your guy who needs to produce this price in ten minutes has something he can call; he doesn’t need to inline it or write it himself or anything like that.

Make sure that he has a library of functions he can call, a library that is absolutely reliable and whose interface is clear. Now, when I come to the code that I need to turn around rapidly, I want as little of the mathematical logic in that layer as I can. I want him to be cobbling together a string of these lower-level library functions whose performance I know is correct. If I give him a good library of components that he can use, then instead of having to write this much code [indicates a large amount], he writes this much code [indicates a small amount], and it’s known to work.
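
As an illustration of composing fast-turnaround pricing code from trusted lower-level calls rather than inlining the math, here is a hypothetical Python sketch that reuses NumPy’s Cholesky routine; the model, payoff, and parameters are assumptions for the example.

```python
import numpy as np

def correlated_normals(corr, n_paths, seed=0):
    """Draw correlated standard normals by calling the library Cholesky
    rather than re-implementing the decomposition in the pricing layer."""
    chol = np.linalg.cholesky(corr)                  # reliable low-level component
    z = np.random.default_rng(seed).standard_normal((n_paths, corr.shape[0]))
    return z @ chol.T

def basket_call_price(spots, vols, corr, rate, expiry, strike, n_paths=100_000):
    """Quick Monte Carlo basket-call pricer cobbled together from library calls."""
    spots, vols = np.asarray(spots), np.asarray(vols)
    z = correlated_normals(corr, n_paths)
    drift = (rate - 0.5 * vols ** 2) * expiry
    terminal = spots * np.exp(drift + vols * np.sqrt(expiry) * z)
    payoff = np.maximum(terminal.mean(axis=1) - strike, 0.0)
    return np.exp(-rate * expiry) * payoff.mean()

corr = np.array([[1.0, 0.3], [0.3, 1.0]])
print(basket_call_price([100.0, 100.0], [0.2, 0.25], corr, 0.03, 1.0, 100.0))
```

The high-level pricer contains almost none of the mathematical logic itself; it strings together calls whose behavior is already protected by the lower layer’s tests.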

Jockle: Thank you, David. And clearly, as we’re seeing institutions bring their pricing models alongside their risk models and move to single models from front to middle to back office, this is going to become much more challenging, and much more important in terms of overall risk measures and the overall profitability of the firm. So David, I want to thank you. I know we’re out of time, and we still have four more areas to discuss.

I hope you’ll join us again so we can dig a little bit deeper into your thoughts on market parameters, configuration, model assumptions, approximation, and representation of data. And we want to hear what you have to say as well. Please join the conversation: feel free to follow us on Twitter @nxanalytics, or stay in touch with us on LinkedIn to catch all of our updates and the ongoing conversations. We want to hear back from you and make sure we’re talking about the topics that you want to talk about. With that, David, thank you. And we’ll see you next time.
