Response to consultation on supervisory handbook on the validation of rating systems under the Internal Ratings Based approach
1a) How is the split between the first and the subsequent validation implemented in your institution?

See attachment.

1b) Do you see any constraints in implementing the proposed expectations (i) as described in section 4 for the first validation of a) newly developed models and b) model changes; and (ii) as described in section 5 for the subsequent validation of unchanged models?

See attachment.

Question 2: For rating systems that are used and validated across different entities, do you have a particular process in place to share the findings of all relevant validation functions? Do you apply a single set of remedial actions across all the entities, or are there cases where remedial actions are tailor-made to each level of application?

See attachment.

3a) Do you deem it preferable to split the review of the definition of default between IRB-related topics and other topics?

See attachment.

3b) If you do prefer a split in question 3a, which topics of the definition of default would you consider to be IRB-related, and hence to be covered by the internal validation function?

See attachment.

Question 4: Which approach to factoring the rating philosophy of a model into the back-testing analyses should be considered best practice?

See attachment.

Question 5: What analyses do you consider best practice for empirically assessing the modelling choices in paragraph [76] and, more generally, the performance of the slotting approach used (i.e. its discriminatory power and homogeneity)?

See attachment.

6a) Which of the above-mentioned approaches do you consider best practice for assessing the performance of the model in the context of data scarcity?

See attachment.

6b) More generally, which validation approaches do you consider best practice for assessing the performance of the model in the context of data scarcity?

See attachment.

Name of the organization: AFME