Origin of Algorithms

In 2003-04 the Cooperative Research Centre (CRC) for Catchment Hydrology gathered costing information on urban stormwater treatment measures from 46 stormwater managers. This information originated from all six Australian States, including major cities and regional areas. It included descriptions of the treatment measure’s design / type, unusual characteristics (e.g. unusual construction costs or disposal costs), expected life cycle / span, catchment area, area of the treatment zone (e.g. for vegetated treatment measures and infiltration measures), cost elements, data quality, and how cost elements vary over time (e.g. maintenance costs). Data were sought for all types of stormwater treatment measure that MUSIC version 3 can model.

Two forms of analysis were undertaken on the data. Firstly, regression analysis was undertaken to relate the size of specific types of stormwater treatment measures to their total acquisition cost and typical annual maintenance cost. Secondly, statistical analysis occurred on the data to generate estimates for typical renewal / adaptation costs (and the renewal period), decommissioning costs and the life cycle for each type of stormwater treatment measure.

Regression analysis

For the regression analysis, the data were firstly assessed for quality. Poor quality data sets were excluded based on an assessment of:

the extent to which the data represent ‘best practice’ treatment measures (i.e. data that were not associated with a ‘best practice’ design were excluded); and/or

the likely accuracy of the data (e.g. rough estimates of costs were excluded in preference to detailed and itemised records of actual costs).

Some gaps in the data set were then interpolated. For example, some stormwater managers supplied high quality data for the ‘construction cost’ of an asset, but not the required ‘total acquisition cost’. Average ratios between related cost-elements were developed from all complete data sets to fill these gaps (e.g. on average, the construction cost is ~92% of the total acquisition cost for greenfield wetlands).
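This gap-filling step can be sketched as follows. The record structure and field names are illustrative only; the 0.92 ratio is the greenfield-wetland figure quoted above.

```python
def fill_total_acquisition_cost(records, ratio=0.92):
    """Fill a missing total acquisition cost from the construction cost.

    Uses the average ratio of construction cost to total acquisition cost
    derived from complete records (0.92 is the greenfield-wetland figure
    quoted above; the record/field names are illustrative).
    """
    for r in records:
        if r.get("total_acquisition_cost") is None and r.get("construction_cost") is not None:
            # construction cost ~ ratio x total acquisition cost, so invert:
            r["total_acquisition_cost"] = r["construction_cost"] / ratio
    return records

records = [
    {"construction_cost": 460_000, "total_acquisition_cost": None},    # gap to fill
    {"construction_cost": 200_000, "total_acquisition_cost": 230_000}, # complete
]
records = fill_total_acquisition_cost(records)
# records[0]["total_acquisition_cost"] is now 460,000 / 0.92 = 500,000
```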

For each relationship between a treatment measure’s size and major cost elements (i.e. total acquisition cost and typical annual maintenance cost), regression curves (using the statistical software package SPSS version 11.5.0) were fitted, with the best fit chosen based on its explanation of variance (R²) and its significance (p-value). Only regressions with a significance < 0.05 were accepted. Consistency with the assumption of normality was tested using the Kolmogorov-Smirnov test (with rejection at p < 0.05). The resulting equations were then transformed into a linear form, to allow 68% prediction intervals to be calculated and used as upper and lower estimates.

For each type of treatment measure, the regression equation and the ± 1 standard error prediction interval equations were calculated, forming an "expected" relationship together with its lower and upper estimates (refer to the Prediction Interval section below).
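The fitting procedure described above can be sketched as follows. The size/cost figures are illustrative only (not CRC data); the power-law form, the hardcoded t critical value (2.776, the two-sided 5% value for n − 2 = 4 degrees of freedom), and the ± 1 SE band (which omits the leverage term of a full prediction interval) are simplifying assumptions for this sketch.

```python
import numpy as np

# Hypothetical size (m^2) / total acquisition cost ($) pairs for one
# treatment-measure type; values are illustrative only.
size = np.array([50.0, 120.0, 300.0, 800.0, 1500.0, 4000.0])
cost = np.array([22e3, 35e3, 80e3, 150e3, 290e3, 520e3])

# Fit a power law cost = a * size^b by linear regression in log10 space
# (the "transformed into a linear form" step described above).
x, y = np.log10(size), np.log10(cost)
n = len(x)
b, log_a = np.polyfit(x, y, 1)  # slope and intercept in log space

yhat = log_a + b * x
ss_res = np.sum((y - yhat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# Residual standard error and slope significance: compare the slope's
# t-statistic with ~2.776 (two-sided 5% critical value, 4 df).
s = np.sqrt(ss_res / (n - 2))
se_b = s / np.sqrt(np.sum((x - x.mean()) ** 2))
significant = abs(b / se_b) > 2.776

def cost_estimates(size_m2):
    """Lower, expected and upper cost estimates (+/- 1 SE band in log space)."""
    yh = log_a + b * np.log10(size_m2)
    return 10 ** (yh - s), 10 ** yh, 10 ** (yh + s)
```

Because the band is constructed in log space, back-transforming it gives multiplicative (asymmetric) upper and lower estimates around the expected cost.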

Estimates of remaining cost elements and analysis variables

The statistical analysis to generate estimates for typical renewal / adaptation cost (and the typical renewal period), decommissioning costs, and life cycle for each type of stormwater treatment measure involved the following protocol:

Median values from the interrogated data set were used to estimate typical values for renewal/adaptation costs (RC), renewal period, decommissioning costs (DC) and life cycles (LC). LC and renewal period values were expressed in years, while RC and DC values were expressed as a percentage of the treatment measure’s total acquisition cost.

Where the sample size of the interrogated data set (n) was >5, the RC, DC and LC data were transformed to satisfy assumptions of normality (log10 transformation with normality accepted for Kolmogorov-Smirnov test p > 0.05), and 68% confidence intervals around the mean (i.e. ± 1 standard deviation) were generated to provide upper and lower estimates.

For data sets with n > 5 that failed the Kolmogorov-Smirnov test for normality in the log10 domain, the 16th and 84th percentiles were used to generate the lower and upper estimates, respectively.

For smaller data sets (n ≤ 5), the 16th and 84th percentiles were used to generate the lower and upper estimates, respectively, as there were inadequate data from which to reliably estimate the standard deviation.
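The branching protocol above can be sketched as follows. The data values are illustrative, and for a dependency-free sketch the Kolmogorov-Smirnov decision is passed in as a flag (in practice it would come from the test on the log10-transformed data).

```python
import numpy as np

def typical_and_bounds(values, lognormal_ok):
    """Typical value (median) plus lower/upper estimates per the protocol above."""
    v = np.asarray(values, dtype=float)
    typical = np.median(v)
    if len(v) > 5 and lognormal_ok:
        # 68% interval: mean +/- 1 standard deviation in the log10 domain
        logv = np.log10(v)
        m, sd = logv.mean(), logv.std(ddof=1)
        lower, upper = 10 ** (m - sd), 10 ** (m + sd)
    else:
        # small samples (n <= 5) or non-normal data: 16th / 84th percentiles
        lower, upper = np.percentile(v, [16, 84])
    return lower, typical, upper

# Illustrative renewal costs as % of total acquisition cost (n = 7):
lo, typ, hi = typical_and_bounds([10, 12, 15, 20, 25, 30, 40], lognormal_ok=True)
```

The 16th and 84th percentiles are the natural fallback because, for a normal distribution, they coincide with the mean ± 1 standard deviation.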

For most types of treatment measure (i.e. sediment basins and ponds, constructed wetlands, bioretention systems and vegetated swales, and infiltration systems), the life cycle (in years) was derived using expert judgment. For example, it could be argued that constructed wetlands have an infinite life cycle if typical annual maintenance is undertaken and the macrophyte zone is reset (i.e. sediment removal and replanting) every 20 years or so (as part of the renewal / adaptation cost). However, to calculate a ‘life cycle cost’ using the Australian Standard for life cycle costing (Standards Australia, 1999), the length of the life cycle must be finite. Consequently, the life cycle can be set at a figure such as 50 years, by which point the discounting of future costs makes costs incurred this far after construction insignificant in the calculation of the life cycle cost.
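A quick worked check of the discounting argument above: with an assumed 6% real discount rate (an illustrative figure, not one prescribed by the source), a cost incurred 50 years after construction contributes only about 5% of its nominal value to the life cycle cost, so truncating the life cycle at 50 years changes the result very little.

```python
def present_value(cost, years, discount_rate=0.06):
    """Discount a future cost to present-value terms (assumed 6% rate)."""
    return cost / (1 + discount_rate) ** years

# A $100,000 renewal cost incurred 50 years after construction:
pv_50 = present_value(100_000, 50)
discount_factor = pv_50 / 100_000  # fraction of nominal value remaining
```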

Prediction Interval

The ± 1 standard error prediction interval defines a band around a cost / size regression (e.g. like the one shown in Introduction To Life Cycle Costing) within which 68% of individual data points are expected to fall. The ± 2 standard error prediction interval is a more commonly used statistic, as it defines a band within which 95% of individual data points will fall, but it was not used in MUSIC because the resulting band was considered too wide for most regressions to be of practical assistance to users.
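To illustrate why the wider band matters in practice: because the regressions are fitted in log10 space, a residual standard error s back-transforms into multiplicative bounds of 10^s (68%) versus 10^(2s) (95%). The s = 0.15 below is an assumed, illustrative value.

```python
s = 0.15                 # assumed residual standard error in log10 space
band_68 = 10 ** s        # ~1.41: expected cost multiplied/divided by ~1.4
band_95 = 10 ** (2 * s)  # ~2.00: expected cost multiplied/divided by ~2.0
```

For this illustrative s, the 95% band spans roughly a factor of four (×2 to ÷2 around the expected cost), which is the kind of width judged too broad to be of practical assistance.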