Q: How do you define IT GRC?
Other than the three broad definitions contained in the research report (2008 Annual Report: IT Governance, Risk and Compliance), all revolving around IT governance, risk management and compliance, we aren’t defining it. Rather, we’re letting the findings from the primary benchmark research, going back almost two years, define what IT GRC is.
Q: How do you define maturity?
We really aren’t defining maturity either, other than to map business results against an existing, well-accepted and widely employed maturity scale. In this sense, we’re simply standing on the shoulders of giants, especially ISACA, the IT Governance Institute, The IIA, and our other supporting members, CSI, Protiviti and Symantec. The only thing we did was to map business reward and risk results from the benchmarks against a capability maturity scale that is already widely accepted and used.
Q: What motivated you to do this report?
About a year ago we were being tasked by our supporting members, who are our board of directors, to identify the business impact of the improvements that organizations are making in audit, regulatory compliance, IT assurance and security. In addition, members and advisors of the Group were asking for more insight into the practices that were working to improve results at other organizations.
At about the same time, we noticed that the firms experiencing poor results for regulatory compliance were the exact same firms with the highest loss or theft of customer data and the largest financial loss from these events. The benchmark data also revealed the converse was true: firms with the best regulatory compliance results were the same firms with the least loss of sensitive data and the lowest financial losses from these events. Similarly, we found that the majority of organizations operating at the norm for regulatory compliance were the same ones with moderate levels of customer data loss and moderate financial loss from these events.
We were a little surprised at the time because the benchmarks were not designed to uncover this. It simply resulted from asking the question: is there a relationship between the two sets of outcomes, and which organizations are experiencing these results?
To find the relationship, we first employed logical tests on the raw data, which revealed very high levels of correlation. For example, we simply added up all the firms with the poorest results for one set of business outcomes (high levels of regulatory audit problems) and tested to find out whether these were the same firms experiencing the poorest results for other outcomes (high levels of customer data loss or theft). The data showed they are the same firms. We also found the opposite to be true: the firms with the best results were almost identical across all of the business metrics measured by the benchmarks. We also conducted statistical variance testing to make sure we weren’t missing anything.
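The cross-tabulation described above can be sketched in a few lines. This is an illustrative mock-up only: the firm names and figures below are invented, and the report's actual methodology and data are not reproduced here. It shows the logical test (do the worst firms on one metric overlap the worst firms on another?) plus a simple Pearson correlation as a second check.

```python
# Hypothetical illustration of the logical test described above.
# Firm names and all figures are invented for this sketch.
firms = {
    # firm: (regulatory audit deficiencies, customer records lost)
    "A": (2, 100), "B": (3, 250), "C": (15, 90000),
    "D": (18, 120000), "E": (8, 9000), "F": (9, 11000),
}

def worst(metric_index, k=2):
    """Return the k firms with the highest (worst) value on a metric."""
    ranked = sorted(firms, key=lambda f: firms[f][metric_index], reverse=True)
    return set(ranked[:k])

# Logical test: do the worst firms for audit problems overlap
# the worst firms for customer data loss?
overlap = worst(0) & worst(1)
print(sorted(overlap))  # the same two firms appear in both worst groups

# Second check: Pearson correlation between the two outcome metrics.
xs = [v[0] for v in firms.values()]
ys = [v[1] for v in firms.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sx = sum((x - mx) ** 2 for x in xs) ** 0.5
sy = sum((y - my) ** 2 for y in ys) ** 0.5
r = cov / (sx * sy)
print(round(r, 2))
```

With the invented numbers, the worst two firms on audit deficiencies are exactly the worst two on data loss, and the correlation is strongly positive, mirroring the pattern the benchmarks reported across real participants.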
We expanded the effort to business disruptions and their financial impact, as well as revenue, profit, and customer-related metrics for the improvements being made to compliance and data protection. We went back through nearly two years of identical questions and came up with nearly identical population distributions for all these results, even though different firms participated in the research.
Q: Do highly regulated industries do better?
We originally thought that industry segments might influence results and that highly regulated industries would be doing better. But we were wrong: industry segment and high levels of regulation do not influence results. The best example I can give is the banking industry, where there is more of a disposition – and population distribution – toward lower maturity and poorer results than overall industry results. And, if you’ve been in IT shops in banking, you know it’s highly regulated with lots of audits.
Q: What’s responsible for better results, then?
The inescapable conclusion we came to – especially after seeing the results for the actions being taken by firms to respond to audits, regulations, and losses and theft of customer data, and the differences in the practices being implemented to respond to these pressures – is that it is the practices, and the capabilities firms have to take action, that are driving both better and worse results.
We continued testing different business outcomes and purposely used seed questions to try to disprove the connection between practices and business outcomes. The additional tests confirmed earlier results. As they say, and as we’ve documented from the research, the correct practices do make perfect.
Q: So, how did you come up with the practices?
We didn’t. The practices and capabilities shown in the report are a direct result of the research. After mapping the business results to the well-accepted maturity scale, we simply let the practices fall out at each level of business results. Each of the practices cited in the report, and especially in the Tables in Appendix A, is directly related to the business outcomes at each level. The practices are a direct reflection of the business outcomes. The key finding is: if you implement the practices, the organization is going to retain customers better, increase revenue, be more profitable, and be much less likely to experience downside business risk and loss – as these relate to things that can – and do – go wrong from the use of IT.
Q: What advice would you have for someone looking to improve their results?
Take a look at the report (2008 Annual Report on IT Governance, Risk and Compliance). Or, if you don’t want to wade through an entire 80-page report – and who can blame you – take a quick drive by the site (www.itpolicycompliance.com) and check out the Interactive Tools.
These provide, in a quick five minutes, the essence of what the report is about. After that, I’d recommend downloading the GRC CMM tables. Once you have the information from the tools and the tables, you can quickly identify the gaps and shortfalls and what needs to be done to improve results.