Month End: Find the Errors Before They Find You

Love it or hate it — and most businesses hate it — the month end process is very predictable. We know what we must produce and where our raw materials are, and we can spot the usual potholes. Even so, month end never seems to be executed as simply as it could be. How can this single business process deliver such despair time after time?

Regulations, systems and business requirements are constantly changing, so it can feel as though each month differs from the last. However, there is no reason to see the month end process as an isolated event. Instead, it should be viewed rather like a manufacturing process, complete with defect tracking, quality control and customer feedback — all of which can help to avoid a costly product recall.

Quality assurance

As with any production line, we must begin with the raw materials, but we must also recognise the need for constant quality assurance. At the moment, however, this only tends to happen when reviewing the results. The familiar month-end cycle involves the CFO examining the numbers, spotting an issue, making a change and then resubmitting the data. Problems like these can often be traced back to data issues in the General Ledger (GL) or other upstream systems, which can be time consuming to correct. As such, resolution is often left until the following month, which creates a wildly inefficient system of clearing up.

Nip it in the bud

By comparison, the new production line mind-set involves detecting and tackling issues before they surface in the results, and avoiding the recurrence of avoidable defects. In order to achieve this goal, a tracking system must be implemented to ascertain how effective any remedial action is. Such systems will need to be ongoing, both to account for any new issues that arise and to spot others that have crept back after initial removal.

Here are a few areas that warrant continual testing to keep the month end process comprehensive and free of backlog (a simple sketch of such checks follows this list):

1) Posting errors

Within some GL systems it can be difficult to prevent posting to old (previously used) profit or cost centres, particularly when data is fed from upstream business systems. We can find that the same errors recur month after month. A formal testing process can identify these amounts with certainty, allowing them to be corrected quickly and easily.

2) Dimension relationship issues

We may identify combinations of dimensions (for example, cost / account) that are invalid (even temporarily). These subtleties are the product of in-depth business knowledge, but without formalising this knowledge into a set of rules, there is a good chance they may not be detected.

3) Account relationship issues

In some sectors it makes sense to perform “referential integrity tests” for monthly and quarterly submissions. We might consider, for example, whether relationships between Income Statement and Balance Sheet account movements are in line — movements in provisions and FX balances are two examples of this.

4) Items for review

In every Finance Controller’s mind, there is a checklist of transaction groups that warrant ongoing review. These usually indicate a possibility of error: certain accounts (particularly reserves), manual journals to cash accounts, unusually large transactions, etc. A formal diagnostic process presents these transactions to the right person at the right time (in other words, not during month end).
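To make the idea concrete, here is a minimal sketch of how checks like those above might be expressed as rules and run against GL postings during the month. The field names, cost centre codes, thresholds and sample rules are purely illustrative assumptions, not a description of any particular GL system.

```python
# Minimal sketch of rule-based GL diagnostics (illustrative assumptions only).

CLOSED_COST_CENTRES = {"CC-1040", "CC-2210"}      # previously used, now retired
INVALID_COMBINATIONS = {("CC-3000", "700100")}    # cost centre / account pairs that should never occur
REVIEW_THRESHOLD = 250_000                        # manual journals above this warrant review

def run_diagnostics(postings):
    """Classify each GL posting as a hard defect, a review item, or clean."""
    defects, review_items = [], []
    for p in postings:
        if p["cost_centre"] in CLOSED_COST_CENTRES:
            defects.append((p, "posting to closed cost centre"))
        if (p["cost_centre"], p["account"]) in INVALID_COMBINATIONS:
            defects.append((p, "invalid cost centre / account combination"))
        if p["journal_type"] == "manual" and abs(p["amount"]) > REVIEW_THRESHOLD:
            review_items.append((p, "large manual journal"))
    return defects, review_items

postings = [
    {"cost_centre": "CC-1040", "account": "510200", "amount": 1200.0,   "journal_type": "feed"},
    {"cost_centre": "CC-4100", "account": "100100", "amount": 900000.0, "journal_type": "manual"},
]
defects, review_items = run_diagnostics(postings)
for posting, reason in defects + review_items:
    print(reason, posting)
```

The point is less the specific rules than the split between hard defects and items merely flagged for review, which mirrors the four areas above.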

Tests should be split up into logical units within firms so that specific employees own the checks that are required for their areas. After all, they will generally be the people that “own” the data quality challenge, and therefore need to know what the errors are in order to fix them.

By paying attention to these areas, firms can run detailed diagnostics at regular intervals throughout the month. This creates a comprehensive system that eliminates the build-up of issues and time-consuming correction at the month end. Adding workflow into the scheme enables GL diagnostics to become a continuous practice, and allows problems to be pre-empted rather than corrected after the damage is done. By the end of the month, the pathway should be clear of all known defects, resulting in fewer reporting iterations and allowing the company’s focus to be re-directed onto more valuable activity.

Financial Director, The rise of the CIO and its impact on the CFO

TECHNOLOGY is no longer about ‘keeping the lights on’; organisations require new forms of IT to enable their business, and to give them a competitive edge. And as the years go by, the importance of the person managing IT – the chief information officer (CIO) – has steadily grown.

CIOs are increasingly reporting to CEOs, and becoming part of their company’s executive board.

This year’s Harvey Nash/KPMG CIO survey found that more than a third (34%) of CIOs now report in to the CEO, up from 13% in 2009. This is mainly at the expense of the CFO, who has seen CIO reports fall from 19% to 12% in the past seven years.

For some CIOs, this is a critical move for organisations to make.

“Digital transformation has brought the CIO role and contribution into sharp focus; businesses who continue to hide IT under a CFO and treat it as a cost to be reduced are missing the point and will ultimately miss the boat,” Phil Jordan, the global CIO of telecoms firm Telefonica, tells Financial Director.

“If a CEO isn’t investing in digital IT and feeling the need to bring a CIO who can lead digital transformation into the executive team directly – my guess is their business won’t be here in ten years,” he adds.

A changing dynamic

Robert Gothan, CEO and founder of Accountagility, believes that these changes – particularly within the financial services industry – have led to growing tension between the CFO and the CIO, especially as CIOs are given larger budgets, which he claims they use for ‘technical vanity projects’ of questionable value.

He advises CFOs to “consider business partnering approaches in order to raise the profile of finance and tilt the boardroom in their favour”.

And indeed, this is a shift that Capital One Europe plc CIO Rob Harding has seen first-hand in the last five years.

“From my experience, the CFO is now looking for a partnership with the CIO to help reimagine cost and growth opportunities across the whole business,” he says.

However, while Harding acknowledges that the change in reporting lines could be a part of this change, he doesn’t think that is the only factor behind this switch.

“I think it is much more about technology teams being positioned to see an integrated picture across all aspects of a business,” he states.

Data isn’t just for IT

At software company Advanced, the IT department has switched from reporting to FD Andrew Hicks to reporting directly to the CEO via chief technology officer (CTO) Jon Wrennall.

Hicks explains that his role now involves being a ‘data champion’.

“The best person to drive our business forward operationally using data is the CFO, as [the CTO] Jon is focused on the products that we go on to sell…although other businesses may instead use a specialist CIO who can champion that connected view of the business,” he says.

But Hicks isn’t the only FD taking on the ‘data champion’ role; according to a recent CFO survey from Adaptive Insights, nearly half of CFOs see data ownership shifting to finance.

“The CFO is no longer just a recipient of data with the aim of determining historic performance, but they can now use predictive and prescriptive analytics to drive future results,” Steve Treagust, global industry director of finance at IFS, explains.

Hicks suggests that since the IT team and its CTO have reported to the CEO, he has been more involved in the IT application side of the business and less involved on a day-to-day basis in the IT infrastructure side.

This is because the applications contain the business’s core data; data which is required to get a connected view of the entire business.

Procurement process

While the CIO may have more responsibilities than in years gone by, there has also been a corresponding increase in pressure on CFOs.

“CFOs cannot sit on an investment for a year – they need to demonstrate results in nine to 12 months following implementation, or they run the risk of missing yearly targets,” says Andy Bottrill, regional VP of BlackLine.

“They’re now demanding technologies or recommendations of technologies that have much quicker and visible benefits than they used to,” he adds.

There are exceptions, however; for example, Advanced has spent a significant sum on a new CRM system – which has exceeded the firm’s anticipated IT spend. Hicks explains that the insight the company will get into its customer base, and the added benefits it will give to the firm’s lead generation activity, mean that Advanced is confident it will get a higher return on investment.

When the firm is procuring for other areas in IT, such as compliance or day-to-day IT running capabilities, it is always looking to minimise costs, says Hicks.

Hicks works alongside Wrennall for procurement of all IT-related projects.

He signs off on projects that go “beyond any meaningful size”, while the day-to-day budget is handled by the CTO and his team. In most organisations, it is likely that both IT and finance need to be involved when procuring technology. Recent research by Computing and CRN found that the FD’s influence grows in the final two stages of procurement: when shortlisting a vendor, and when the final decision is made. By the fourth stage, the FD is the third-most influential party involved – still behind the IT director and IT manager.

A positive push

Not so long ago, technology was seen as a hindrance to the business, and CFOs may have been reluctant to sign off on many IT products, particularly as many IT departments of old may not have made the best of their budgets.

But times are changing; the best CFOs are learning more about technology and using data to help them make better decisions on procurement and for other areas of the business. They are partnering with the CIO – rather than battling for control over technology, and they are seeing positivity in the change of reporting lines.

“The impact on CFOs is largely positive; removing the responsibility of managing technology means they’re freed up to concentrate on what they’re good at and can therefore take complete ownership of their position,” says Clare Eades, associate director at Venquis.

While the general trend is toward the CIO reporting to the CEO, there are plenty of examples of CIOs reporting to CFOs.

Ironically, as CFOs get more tech-savvy, becoming ‘data champions’ of their organisations and taking on an expanded remit beyond the traditional finance function, CIOs reporting to the CFO may make more sense in the years to come than they do now. For now, however, it is a natural step for organisations to ensure their CIO is on the executive board, to emphasise the importance of technology and data to the organisation.

But reporting lines don’t necessarily matter – what matters most is that the CFO and CIO can work effectively together, putting any lingering tension aside.

Global Banking and Finance Review, 1 in 3 Insurance Firms is Not Happy With Its Financial Planning Tool

Over one-third (37%) of firms in the insurance sector are unhappy with their financial planning tools, according to research from Accountagility, a leading solutions provider for the finance function.

The research also revealed that over half (57%) of insurance companies thought their planning tool did not have enough features to cope with the volume of data sets produced by the business, including those from internal functions like expense allocation, with four in five (80%) firms listing this as their biggest challenge.

The results show that firms need a planning tool that can accommodate multiple functions and different sets of data. With changing regulations, and businesses processing ever-increasing volumes of data, it is crucial that insurance firms have planning tools that are agile and can cope with more than one task. However, more than half (51%) of the firms surveyed felt that their current planning tool is inflexible.

To thrive in this competitive market, insurers are being urged to choose a planning tool that is flexible enough to process many different sets of data in one place. Having this functionality will not only ensure that the finance team’s planning and reporting reflect real time accounts and forecasts, but will also unlock accurate business insights that can help to drive performance more effectively.

Robert Gothan, CEO and Founder of Accountagility, comments:

“The insurance sector is no stranger to the need to process and plan for large quantities of data. With the sector constantly affected by changes in regulation, it is vital that firms are using planning tools that can help them adapt to new policies, including any changes to Solvency II following the government’s inquiry. With a third of firms in this sector unhappy with their current planning tools, many companies will be feeling the need to consider their options and look for more innovative and automated options.

“When it comes to planning, there are a large number of businesses still relying on spreadsheets, which are only really suitable for a singular purpose, since they cannot handle multiple users or large amounts of information. We are seeing firms wanting to move beyond the status quo and embrace the latest technology in this area, in order to stay one step ahead of the competition.”

Reducing the Risk Factor

For most firms, risk management tends to be very complicated, and therefore difficult to administrate on a day-to-day basis. For a start, the financial services sector continues to rely on complex spreadsheets, which are not only prone to human error, but in many cases also include a confusing mix of manual and system-sourced data. ICAEW research supports this, finding structural defects in 90% of spreadsheets, with the majority of companies unaware of how many spreadsheets they produce, or whether those spreadsheets are understood by their users.

As a result, switching to automated processes may seem like a favourable alternative, but the tighter controls and more formal processes involved with this approach can also have a number of drawbacks if not applied correctly. For example, automated processes that are not sufficiently agile will not cope adequately with business change. A key symptom of this is the proliferation of manual workarounds for processes that have previously been automated. In this case, efforts to reduce risk can often end up increasing it.

Despite these risks, automation can be an indispensable tool if it is used effectively. The challenges associated with de-risking automation are well covered within manufacturing. Monitoring areas like Quality Assurance (QA) and customer feedback can go a long way towards solving existing problems. Not only will a well-designed data production line lead to minimal product recalls of everyday output, but it will also appeal to auditors and other regulatory functions.

Defining process roles

A good start to reducing process risk is to clarify the roles that exist within a process. Broadly speaking, these can be broken into three main areas: design, as the designer is responsible for process design and delivery; execution, since a good process should run smoothly without any design changes; and finally, configuration, since processes often need a layer of configuration, so that their behaviour can be altered without any underlying design change.

From the above, one might infer that where design and execution are not separate, we have a control problem, for example, with spreadsheets. Likewise, where configuration is ill-defined or changes cannot be easily applied and tracked, there is an agility problem. Both of these challenges inevitably lead to a higher process risk.
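As a rough illustration of this separation, the sketch below assumes a hypothetical expense-allocation process: the design is a function written once, the configuration is a set of ratios that can be changed and tracked independently, and execution simply runs the design against the current configuration. The process, field names and ratios are illustrative assumptions only.

```python
# Illustrative sketch of separating design, configuration and execution.

# Design: defined once by the process designer.
def allocate_expenses(expenses, config):
    """Allocate each expense line to business units using configured ratios."""
    allocations = []
    for line in expenses:
        for unit, ratio in config["allocation_ratios"].items():
            allocations.append({"unit": unit, "amount": round(line["amount"] * ratio, 2)})
    return allocations

# Configuration: can be changed (and tracked) without touching the design.
config = {"allocation_ratios": {"UK": 0.6, "DE": 0.3, "FR": 0.1}}

# Execution: running the process requires no design change at all.
print(allocate_expenses([{"amount": 10_000.0}], config))
```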

Capturing process intellectual property

Around all business data processes there exists a body of knowledge and experience. In our experience, we find that this information is seldom documented, or more importantly, embedded into the process itself.

Many of the problems in this area concern data validation controls. Within spreadsheets, for example, these controls are often not obvious to anyone apart from the original designer. As a result, these key dependencies create a situation where the process outcome varies with the person running it.

Larger system-based automations often leave validation up to the end users. Although users can become adept at spotting process flaws, this is too accidental to form the basis of any reasonable control. As users we ask: “why didn’t the system tell me?”

When validations are built into processes, and new knowledge about potential errors is captured and embedded quickly, we see some interesting changes. Suddenly processes are easier to manage, fail less often, and, most importantly, we tend not to see the same error twice. Whereas many processes deteriorate over time, processes built in this way actually improve.

Empowering users

Obviously we need tools that can capture this experience in a structured way, and this is where rules-based processing comes into play. If, for example, relevant staff are empowered to configure new validations in a user-friendly way, the time-to-market of new process experience can be extremely short, and another spreadsheet workaround can be averted.
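One way to picture this is with validation rules held as data rather than code, so that a relevant user (or a front end acting on their behalf) can add a new check without touching the underlying engine. The rule format, field names and sample rules below are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch of user-configurable validation rules held as data.

VALIDATION_RULES = [
    # A new rule can be appended here (or via a config file / front end)
    # without any change to the engine below.
    {"name": "no postings to suspense",      "field": "account",  "not_in": {"999999"}},
    {"name": "FX journals need a currency",  "field": "currency", "required": True},
]

def validate(record, rules=VALIDATION_RULES):
    """Return the names of any rules the record breaks."""
    failures = []
    for rule in rules:
        value = record.get(rule["field"])
        if rule.get("required") and not value:
            failures.append(rule["name"])
        if "not_in" in rule and value in rule["not_in"]:
            failures.append(rule["name"])
    return failures

print(validate({"account": "999999", "currency": ""}))
```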

Moreover, organisations can benefit because this knowledge is now carefully documented and carries no key person dependencies. At the same time, they can free up valuable “designer” skills to focus on other areas. This leverage incentivises good staff and helps to retain skills that can be difficult to replace.

Reporting vs Presentation

All of the above ideas were formulated with one goal in mind — to improve the quality of data presentation. This implies taking a wider look at information requirements around the enterprise and, again, providing more information about how data was processed, rather than just the end result.

Taking a more lateral approach to presentation, by doing process walk-throughs for example, encourages more critical thought around what the numbers actually mean. This will inevitably translate into a positive feedback loop and more buy-in from users. Capturing this information (process intellectual property) once again strengthens process and therefore delivers a better service to the end-customer.

Month End Mania: Taking the pain away

As an event, the month end process is entirely predictable. We know what we have to produce, and we generally know where the raw materials are. Predictability and practice give rise to experience, so we also tend to understand where any potholes lie. It sounds simple, yet month-end, to all but a relative few, remains an enigma. How does a single business process deliver such misery month on month?

It is arguable that the finance function suffers from change overload. With constantly shifting regulation, systems and business requirements, it is easy for every month to feel different from the last. As a result, we risk seeing our processes as isolated events, and running them as if they had never been done before (re-inventing the wheel each time).

There is nothing to prevent us from seeing a month end as a manufacturing process, complete with a production line, defect tracking, quality control and formal customer feedback. Ultimately, we all want the same thing – to avoid a “costly” product recall.

So, like every production line, we have to start with the raw materials. The principal feed for our month end is our general ledger (GL), but we will no doubt require detail from other systems.

Quality assurance

In manufacturing, there is always a bad cop with a clipboard who is responsible for quality assurance. If we were to apply manufacturing techniques to our month end, we would always test our process inputs before we relied on them.

As it happens, our month end actually does have quality assurance, but this only tends to happen to the result. The cycle goes a bit like this: the CFO reviews the numbers and spots an issue, the team tries to find the answer, locates the problem, makes a change, and then resubmits. Imagine if a car was developed this way – would you want to drive it?

Some of our reporting problems can be traced to data issues in our GL or other upstream systems. Obviously it can be time consuming to find and correct these issues during critical path month end time, so we often leave the problem for next month. But having had this experience, the question is: what do we do next?

Some of these issues will recur naturally, while others will disappear. For the latter, we breathe a sigh of relief, and move on. For the former, it can be months before we identify and fix the root cause.

In both cases, however, we’ve missed the point. Our new production line mind-set requires that, instead of waiting for defects to find us, we go and find the defects. Not only that, but we need to track and measure them to determine whether our remedial action is working. It is important to note that a data defect could come back a year later, so these tests must be on-going.

Intellectual Property

Let’s consider that our (considerable) knowledge about the business puts a great deal of intellectual property in our hands. Naturally, we put this to good use during our month end process, but gradually we see the burden of this begin to grow on our able staff, both during information production and review.

For instance, when I review process output, I need to consider whether we checked for all data issues that occurred within the past year (at least). What happens when we take on a new member of staff – what does this mean for the review process?

Consolidations

We should also bear in mind that many consolidation systems now contain “referential integrity tests” for monthly and quarterly submissions, where, for example, relationships between Income Statement and Balance Sheet account movements are tested.
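As a simplified illustration, a test of this kind might compare the movement on a balance sheet provision account with the charge recognised in the income statement. The sketch below ignores utilisation, discount unwind and FX, and the figures and tolerance are illustrative assumptions only.

```python
# Simplified sketch of a referential integrity test between Income Statement
# and Balance Sheet movements (provisions, ignoring utilisation and FX).

def provision_movement_check(bs_provision_open, bs_provision_close,
                             pl_provision_charge, tolerance=1.0):
    """Flag a submission where the balance sheet provision movement does not
    reconcile to the charge recognised in the income statement."""
    movement = bs_provision_close - bs_provision_open
    difference = movement - pl_provision_charge
    return abs(difference) <= tolerance, difference

ok, diff = provision_movement_check(bs_provision_open=1_000_000.0,
                                    bs_provision_close=1_150_000.0,
                                    pl_provision_charge=120_000.0)
print(ok, diff)   # False, 30000.0 -> raise for review before submission
```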

Our manufacturing process, being completely focused on customer experience, would also aim to detect any defects that may affect downstream dependencies. Fundamentally, the solution to these challenges lies in having an automated diagnosis process that tests your GL data for all previously experienced errors.

Introducing General Ledger diagnostics

A GL diagnostic process should be able to test for a number of standard defects: invalid general ledger codes (for example, product codes, which may originate from upstream systems), relationships between GL accounts (in other words, did a journal post to the correct accounts?), dimensional combinations (for example, business unit / profit centre combinations) and so on.

There are two groups of tests that are advisable: defect testing and review testing. Defect testing is absolute – it either fails or succeeds. Review testing involves identifying transactions that may not be wrong, yet almost definitely warrant a review. Good examples of this might be postings to other reserves, manual reserve adjustments over a certain threshold, and so on.

Ownership

Tests should be split up into logical units, and should be “owned” and executed by the staff that own the underlying problems, for example expenses, technical processing, intercompany, and so on. We have had considerable success with these processes where they were user-friendly enough for non-technical users to be able to add new tests quickly and easily.

Adding workflow into the mix means that our GL diagnostics can be made an integral part of the month end process. It also goes without saying that if we can automatically spot errors, we can certainly create automated processes to correct them.
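As a rough sketch of that last point, the example below assumes a hypothetical mapping from retired cost centres to their replacements: once a diagnostic has spotted the defect, the correction, together with an audit trail of what changed, can be applied automatically. The mapping and field names are illustrative assumptions.

```python
# Minimal sketch of automated correction alongside detection: postings to
# retired cost centres are re-pointed to their configured replacements.

COST_CENTRE_REMAP = {"CC-1040": "CC-1045", "CC-2210": "CC-2300"}

def correct_postings(postings):
    """Return corrected postings plus an audit trail of what was changed."""
    corrected, audit = [], []
    for p in postings:
        new_cc = COST_CENTRE_REMAP.get(p["cost_centre"])
        if new_cc:
            audit.append((p["cost_centre"], new_cc, p["amount"]))
            p = {**p, "cost_centre": new_cc}
        corrected.append(p)
    return corrected, audit

postings = [{"cost_centre": "CC-1040", "account": "510200", "amount": 1200.0}]
print(correct_postings(postings))
```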

An easier month end

With all of this in place, firms are able to run diagnostics at key intervals during the month. By month end, the GL is mostly clear of all known defects, focusing month end critical path time on more valuable activity and reducing the number of reporting iterations.