One of the most commonly encountered formats for presenting data is a table of monthly totals, often accompanied by a graph.
One reason for the extensive use of this approach is that it is very familiar and is practically self-perpetuating. It almost suggests itself.
Another reason is that it is relatively easy to produce month-based counts using commonly available tools such as Excel, Access, or SQL. Producing weekly aggregates from raw data is a slightly higher level of difficulty (a sketch follows below). Similarly, producing a graph in Excel from a table of monthly totals is almost effortless, while producing an SPC (Statistical Process Control) run chart is a little less easy.
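As a concrete illustration of that difference, here is a minimal sketch, assuming pandas and a hypothetical `events` table with a `timestamp` column, of aggregating raw event records by month and by week:

```python
import pandas as pd

# Hypothetical raw data: one row per event, with an event timestamp.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-01-02 09:15", "2023-01-09 14:30",
        "2023-02-01 11:05", "2023-02-15 16:45",
    ])
})

indexed = events.set_index("timestamp")
monthly_counts = indexed.resample("MS").size()  # one total per calendar month
weekly_counts = indexed.resample("W").size()    # one total per week (ending Sunday)
```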
A monthly review makes perfectly good sense. It is a reasonable time interval against which to assess progress. Much less than a month and we risk over-reacting to short-term fluctuations; much more than a month and we may not give ourselves enough time to act.
Where things can go wrong is when a good time interval for review gets confused with a good time interval for analysis.
Unfortunately there are many situations where a month-by-month view masks, rather than highlights, the real patterns in the data. People presented with a month-by-month graph in which the numbers go up and down will feel that they are seeing real changes in activity. They may be reassured if the numbers go the 'right' way and panic if the numbers seem to go in the opposite direction.
The following illustration shows a series of monthly counts of events. The monthly totals time series shows numbers going up and down, suggesting a pattern of changing activity.

Most readers will quickly spot that the pattern in the graph is nothing more than a representation of the number of days in each month. It has been built up from exactly 10 events each day throughout the entire period being studied. Nothing is changing at all. The appearance of changing activity has been exaggerated by the default behaviour of Excel in truncating the vertical axis when creating graphs.

A way of compensating for this is to divide the total number of events in the month by the number of days in the month, i.e. to calculate a daily rate. This is added to the illustration above. It shows that the monthly pattern of activity is actually static.
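This effect is easy to reproduce. Here is a minimal sketch, again assuming pandas, that synthesises exactly 10 events per day for a year, totals them by month, and then recovers the flat daily rate:

```python
import pandas as pd

# Exactly 10 events every day for a year: nothing is changing.
days = pd.date_range("2023-01-01", "2023-12-31", freq="D")
daily = pd.Series(10, index=days)

monthly_total = daily.resample("MS").sum()   # varies with month length
days_in_month = daily.resample("MS").size()  # 31, 28, 31, 30, ...
daily_rate = monthly_total / days_in_month   # 10.0 in every month

print(monthly_total.iloc[:3].tolist())  # [310, 280, 310]
print(daily_rate.unique())              # [10.]
```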
For analysis, the rule should be that where the period being studied is divided up into unequal time periods (such as months), it is better to use a rate than a simple period total. It is surprising how often this simple rule is not followed.
In many situations, this compensating technique (translating totals into rates) may not be sufficient. The next illustration shows another series of monthly counts of events, with the daily rate calculated. The rate goes up and down, suggesting change:
However, here again, the impression of changing levels of activity is an illusion. The underlying pattern of activity is really static: a constant 28 events a week throughout the study period. For the purposes of illustration, the synthesised weekly pattern has been strongly polarised (all Mondays have 7 events, Tuesdays 6, and so on down to Sundays with 1).

The spurious pattern in the illustration results from the interplay between the consistent but uneven distribution of events within each week and the way weeks fall across month-end boundaries.
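This, too, can be reproduced with synthetic data. The sketch below builds the polarised weekly pattern and shows the daily rate drifting purely because of where the month boundaries fall:

```python
import pandas as pd

# A constant 28 events per week, strongly polarised by day of week:
# Monday = 7 events, Tuesday = 6, ... Sunday = 1.
days = pd.date_range("2023-01-01", "2023-12-31", freq="D")
events_per_day = {0: 7, 1: 6, 2: 5, 3: 4, 4: 3, 5: 2, 6: 1}  # Monday = 0
daily = pd.Series([events_per_day[d.dayofweek] for d in days], index=days)

monthly_rate = daily.resample("MS").sum() / daily.resample("MS").size()
print(monthly_rate.round(3))
# The rate wanders above and below 4.0 (i.e. 28 / 7) purely because of
# how whole and partial weeks fall across the month boundaries.
```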
Another regularly used 'compensatory' technique is to show a monthly total alongside the equivalent month from the previous year:
Surely this approach is valid as, with the exception of some Februaries, we are comparing the activity in exactly equivalent time periods, side by side?
Unfortunately, again, the appearance of changing activity is an illusion. The graph is derived from exactly the same static pattern of 28 events per week as was used in the previous illustration.
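Extending the same synthetic series across two years makes the point: January 2023 and January 2024 contain the same number of days but a different mix of weekdays, so their totals differ even though nothing has changed.

```python
# Same static pattern as above, extended across two years.
days = pd.date_range("2023-01-01", "2024-12-31", freq="D")
daily = pd.Series([events_per_day[d.dayofweek] for d in days], index=days)

print(daily.loc["2023-01"].sum())  # 126 (January 2023 starts on a Sunday)
print(daily.loc["2024-01"].sum())  # 130 (January 2024 starts on a Monday)
# An apparent 3% year-on-year 'increase' from entirely static data.
```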
If this is what static data looks like through this framework, imagine how hard it would be to make sense of any real patterns of change.
In areas where there are strong weekly patterns, such as healthcare, viewing data through the distortion of a monthly framework makes it very difficult to see what is really going on.
So while monthly views work well for appropriate uses, such as showing cumulative income year-to-date, they will often be a poor choice for displaying and understanding patterns of event activity.
The week, unlike the month, is a 'natural' time period which aligns well with patterns of behaviour. It is worth the extra effort of aggregating event data by week and looking at it in weekly run charts. The SPC method can be used to assess whether the variation seen reflects any real changes; a sketch of one such chart follows.
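Here is a minimal sketch of that approach, assuming an XmR (individuals) chart, one common SPC choice, and reusing the synthetic series from above:

```python
# Weekly totals with XmR-style control limits.
weekly = daily.resample("W").sum()  # weeks ending Sunday
weekly = weekly.iloc[1:-1]          # discard partial first and last weeks

centre = weekly.mean()
avg_moving_range = weekly.diff().abs().mean()
upper_limit = centre + 2.66 * avg_moving_range  # 2.66 is the standard XmR factor
lower_limit = centre - 2.66 * avg_moving_range

# For the static pattern above, every complete week is exactly 28 events,
# so the weekly chart is a flat line: no spurious 'changes' to explain.
```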
At the very least, a preliminary analysis will be needed to assess the strength of weekly patterns in the data before a decision can be made on how much 'noise' this is likely to create in a monthly time series.
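One simple form such a preliminary check could take (a sketch, again reusing the synthetic series) is a day-of-week profile:

```python
# Mean daily count by day of week (0 = Monday ... 6 = Sunday).
by_weekday = daily.groupby(daily.index.dayofweek).mean()
print(by_weekday)  # 7, 6, 5, 4, 3, 2, 1 for the synthetic pattern
# The wider the spread across weekdays, the more 'noise' a monthly
# time series will inherit from the weekly cycle.
```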
Another technique used in reports and dashboards is to divide monthly totals of events by the number of 'working days' in the month. This can go some way towards making comparisons better, but it still does not fully compensate for a strong weekly pattern.
It would also be necessary to review each measure being reported on and assign it an individual number of working days, since this can vary from measure to measure. Some events would be expected to occur on weekends and bank holidays; some services may operate for fewer than five days a week. It cannot be assumed that the number of 'working days' worked by the analyst in the month is the correct denominator in every case, or indeed in any case.
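Counting the working days themselves is straightforward once the operating pattern is known. Here is a sketch assuming numpy, with hypothetical bank holidays, showing how the denominator changes with the weekmask:

```python
import numpy as np

# Hypothetical bank holidays; these must be supplied, not assumed.
holidays = ["2023-01-02", "2023-04-07"]

# A Monday-to-Friday service: 21 working days in January 2023.
mon_fri = np.busday_count("2023-01", "2023-02",
                          weekmask="Mon Tue Wed Thu Fri", holidays=holidays)

# A six-day-a-week service has a different denominator: 25 days.
mon_sat = np.busday_count("2023-01", "2023-02",
                          weekmask="Mon Tue Wed Thu Fri Sat", holidays=holidays)
```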