For most firms, risk management tends to be complicated and therefore difficult to administer on a day-to-day basis. For a start, the financial services sector continues to rely on complex spreadsheets, which are not only prone to human error but in many cases also contain a confusing mix of manual and system-sourced data. ICAEW research supports this, finding structural defects in 90% of spreadsheets, with the majority of companies unaware of how many spreadsheets they produce, or whether users understand them.
As a result, switching to automated processes may seem like a favourable alternative, but the tighter controls and more formal processes involved can also have a number of drawbacks if not applied correctly. For example, automated processes that are not sufficiently agile will not cope adequately with business change. A key symptom of this is the proliferation of manual workarounds for processes that had previously been automated. In this case, efforts to reduce risk can often end up increasing it.
Despite these risks, automation can be an indispensable tool if it is used effectively. The challenges associated with de-risking automation are well covered within manufacturing. Monitoring areas like Quality Assurance (QA) and customer feedback can go a long way towards solving existing problems. Not only will a well-designed data production line lead to minimal product recalls of everyday output, but it will also appeal to auditors and other regulatory functions.
Defining process roles
A good start to reducing process risk is to clarify the roles that exist within a process. Broadly speaking, these break down into three areas: design, where the designer is responsible for process design and delivery; execution, since a well-designed process should run smoothly without any design changes; and configuration, since processes often need a configuration layer so that their behaviour can be altered without changing the underlying design.
From the above, one can infer that where design and execution are not separated, as is typically the case with spreadsheets, there is a control problem. Likewise, where configuration is ill-defined, or changes cannot be easily applied and tracked, there is an agility problem. Both challenges inevitably lead to higher process risk.
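The separation described above can be made concrete in a short sketch. This is a minimal, hypothetical example: the trade records, column names, and limit are illustrative assumptions, not part of any real system. The point is that the design (the function body) stays fixed while behaviour changes only through configuration.

```python
# Hypothetical configuration layer: behaviour changes here, not in the code.
CONFIG = {
    "required_columns": ["trade_id", "amount", "currency"],
    "max_amount": 1_000_000,
}

def run_process(rows, config):
    """Execution of a fixed design; only `config` alters behaviour."""
    errors = []
    for i, row in enumerate(rows):
        # Configuration drives which columns are mandatory...
        missing = [c for c in config["required_columns"] if c not in row]
        if missing:
            errors.append((i, f"missing columns: {missing}"))
        # ...and what limit applies, without any design change.
        elif row["amount"] > config["max_amount"]:
            errors.append((i, "amount exceeds configured limit"))
    return errors

rows = [
    {"trade_id": 1, "amount": 500, "currency": "GBP"},
    {"trade_id": 2, "amount": 2_000_000, "currency": "USD"},
]
print(run_process(rows, CONFIG))  # only the second record breaches the limit
```

Because the limit lives in configuration, tightening it is a tracked, auditable change rather than an edit to the process design itself.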
Capturing process intellectual property
Around every business data process there exists a body of knowledge and experience. In practice, we find that this information is seldom documented, let alone embedded into the process itself.
Many of the problems in this area concern data validation controls. Within spreadsheets, for example, these controls are often not obvious to anyone apart from the original designer. As a result, these key-person dependencies create a situation where the outcome of a process varies with the person running it.
Larger system-based automations often leave validation up to the end users. Although users can become adept at spotting process flaws, this is too accidental to form the basis of any reasonable control. As users we ask: “why didn’t the system tell me?”
When validations are built into processes, and new knowledge about potential errors is captured and embedded quickly, we see some interesting changes. Suddenly processes are easier to manage, fail less often, and, most importantly, we tend not to see the same error twice. Whereas many processes deteriorate over time, processes built in this way actually improve.
Obviously we need tools that can capture this experience in a structured way, and this is where rules-based processing comes into play. If, for example, relevant staff are empowered to configure new validations in a user-friendly way, the time-to-market of new process experience can be extremely short, and another spreadsheet workaround can be averted.
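A rules-based approach can be sketched in a few lines. This is an illustrative toy, not a real rules engine: the rule names, predicates, and messages are hypothetical. The idea it shows is that each validation is a piece of data registered against the process, so newly discovered errors are encoded once and caught thereafter.

```python
# Each rule is a (name, predicate, message) triple held as data,
# so new process experience can be added without redesigning the process.
rules = []

def add_rule(name, predicate, message):
    """Register a validation rule; captured knowledge becomes part of
    the process rather than remaining in one designer's head."""
    rules.append((name, predicate, message))

def validate(record):
    """Run every registered rule; return the messages of any failures."""
    return [msg for name, pred, msg in rules if not pred(record)]

# Hypothetical rules, configured once a class of error is discovered --
# after which the same error should not be seen twice.
add_rule("positive_amount",
         lambda r: r.get("amount", 0) > 0,
         "amount must be positive")
add_rule("known_currency",
         lambda r: r.get("currency") in {"GBP", "EUR", "USD"},
         "unrecognised currency code")

print(validate({"amount": -10, "currency": "ZZZ"}))  # both rules fail here
```

In a production setting the rule definitions would live in configuration rather than code, so that relevant staff can add them in a user-friendly way, with each addition tracked and auditable.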
Moreover, organisations benefit because this knowledge is now carefully documented and carries no key-person dependencies. In turn, they can free up valuable “designer” skills to focus on other areas. This leverage motivates good staff and helps to retain skills that can be difficult to replace.
Reporting vs Presentation
All of the above ideas were formulated with one goal in mind: to improve the quality of data presentation. This implies taking a wider look at information requirements across the enterprise and, again, providing more information about how data was processed, rather than just the end result.
Taking a more lateral approach to presentation, by doing process walk-throughs for example, encourages more critical thought about what the numbers actually mean. This inevitably translates into a positive feedback loop and more buy-in from users. Capturing this information (process intellectual property) once again strengthens the process and therefore delivers a better service to the end customer.