Geotech '92 Conference Proceedings, Lakewood, Colorado, September 1992.

Examining Common Problems Associated with Various Contouring Methods, Particularly Inverse-Distance Methods, Using Shaded Relief Surfaces

William L. Wingle
Department of Geology and Geological Engineering
Colorado School of Mines
Golden, Colorado 80401


A common problem in geologic data analysis is incorrectly creating and interpreting (or failing to interpret) gridded contour maps. Too often, once data are contoured, the resultant map is considered to be a true representation of the surface with only minor errors. This is a poor assumption, but by using shaded-relief surface maps, many potential problems can be quickly and graphically identified. Because the visual appearance of contour maps can vary significantly with changes to parameters that have nothing to do with the data being contoured, great care must be taken when using gridded maps. This paper acts as a reminder of some of the problems in contouring, and suggests the use of shaded-relief surface maps as a complementary method for viewing and checking gridded model data. Although shaded-relief surfaces are difficult to evaluate for specific values, they show the texture of the gridded surface in ways not easily seen in plan-view contour maps or even three-dimensional contoured surfaces. Because contour maps are so widely used, it is important that those using them understand the problems, and have tools to help them identify when the gridding algorithm is creating noise due not to the field data, but to the mathematics.


The methods used to contour gridded data have been strongly debated for several reasons. To mention a few: 1) different gridding techniques (inverse-distance, kriging, minimum curvature) yield different results given the same data (Banks, 1991), though under specific conditions each method has strengths over the others; 2) using the same method, minor changes in how the data are displayed (e.g. different contour intervals) can significantly modify the interpretation; 3) changing the dimensions of the model grid can significantly modify the interpretation (Banks, 1991); 4) depending on the algorithm used to contour the data, the raw data are not necessarily honored (Banks, 1991); and 5) using gridded data, it is not possible to show vertical cliffs, and in general sharp surface transitions are difficult to describe without using very fine grids. The object of this paper is not to debate which method should be used, but to suggest some tools and observations that are useful in analyzing the accuracy of the results, or at least in indicating potential problems. To do this, comments will be made about examining the contour map itself, the importance of overlaying the contour map with the raw field data, and using shaded-relief surface maps to evaluate the texture of the gridded surface.

In this analysis, only regularly gridded data were used. Contouring field data using triangulation algorithms was not considered. Inverse-distance was used as the principal gridding algorithm, because some error types are obvious and easily created with this method.

To demonstrate the inaccuracies in the contouring method, posted data and shaded-relief surface maps are used as an aid. Shaded relief has been used effectively to identify geologic structures such as lineaments in ways similar to low-sun-angle photography and side-looking airborne radar imagery (Lillesand and Kiefer, 1987). The attributes which make these methods useful are applicable to highlighting errors on gridded maps.

The problems addressed relate to the 1) grid density, 2) search pattern for nearest neighbors, 3) number of neighbors used to calculate the grid value, 4) tolerance in honoring data (to be explained later), and 5) effects of the search radius. The errors will be compared against a kriged solution of the data set, though the kriged solution is also not ideal.


In this study, both inverse-distance and ordinary kriging (with a linear semivariogram) were used to contour the raw field data. The data set used was a hypothetical valley, and 659 data points were digitized along contour lines (Figure 1). The area was 40 feet by 50 feet with 13 feet of relief. The site was gridded with 75 x 75 and 200 x 200 grids using all data points, and using the nearest ten data points (in one case, a 200 x 200 grid used a quadrant search). Inverse-distance squared (1/d^2, where d = distance) was used at both grid sizes, and kriging was used only for the 200 x 200 grid. The results of the 200 x 200 kriged map are shown first, to demonstrate what the surface should look like (there are some inaccuracies, but these will be addressed later). As can be seen in the contour map and the shaded-relief surface in Figure 2, the contours are quite smooth, there are few circular features indicating pits or peaks, and the original data are exactly honored (Figure 3). When the shaded-relief surface is examined, the surface has a smooth texture and appears much as expected.

Figure 1: The 659 digitized contour data points used for the data set.

Figure 2: The kriged solutions for the data set are considered to be accurate interpretations against which the following gridded contour and surface maps will be judged. Note that there are several problems even with these maps. If one examines either map carefully, it is possible to identify the location of several digitized data points.

Figure 3: This is a kriged map with a one foot contour interval. Note that the contour lines exactly honor the digitized data points. Note, though, that there are unusual arcs between many points. On the shaded-relief surface map these will appear as minor pits or peaks. This contour map is reasonable, but because incorrect assumptions were made about the semivariogram, there is room for improvement.

When using the inverse-distance method there are a number of assumptions that have to be made which can lead to some unexpected, or at least undesired, results. The general formula used in inverse-distance is (Burrough, 1986):

    g = [ sum_i ( z_i / d_i^p ) ] / [ sum_i ( 1 / d_i^p ) ]

where g is the estimated value at the grid location, z_i is the value of the i-th sample data point, d_i is the distance between the grid location and that sample, and p is the power to which the distance is raised. The basis of this technique is that nearby data are most similar to the actual field conditions at the grid location, and when a grid point lies between several data points the best estimate is a weighted interpolation. Depending on the site conditions, the distance may be weighted in different ways. If p = 1, this is a simple linear interpolation between points. Many investigators have found that using p = 2 produces reasonable results; thus p = 2 is used throughout this paper. In this case, close points are heavily weighted, and more distant points are lightly weighted (points are weighted by 1/d^2). In other instances, p has been set to powers other than 2 and yielded reasonable results. It has also been suggested that the weights should be further modified based on the distance of the data point from the point being interpolated (Jones et al., 1986; Harbaugh et al., 1977; Ripley, 1981). This helps prevent "spikes," but it is not a commonly applied approach.
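As an illustration only (this is not the original study's code, and the names `idw_estimate`, `xy`, and `values` are hypothetical), the weighting scheme can be sketched in a few lines:

```python
import numpy as np

def idw_estimate(xy, values, grid_pt, p=2.0):
    """Inverse-distance weighted estimate at one grid location.

    xy      : (n, 2) array of sample coordinates
    values  : (n,) array of sample values
    grid_pt : (2,) coordinates of the grid location
    p       : distance power (p = 2 gives the 1/d^2 weighting used here)
    """
    d = np.hypot(*(xy - grid_pt).T)   # distances d_i to each sample
    exact = d == 0.0
    if exact.any():                   # sample coincides with the grid node
        return float(values[exact].mean())
    w = 1.0 / d**p                    # weights 1/d_i^p
    return float(np.sum(w * values) / np.sum(w))
```

With p = 2 and two samples of 0 and 10 equidistant from the grid location, the estimate is their simple average, 5.0; as one sample is approached, its weight dominates.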

When d = 0, or approximately 0
A problem with inverse-distance methods occurs when the distance d equals 0.0. Under these conditions, a division-by-zero math error occurs. This is not a major problem, however, since the value at that point is known: the field data value can simply be assigned to the grid location. From a practical viewpoint, however, it is unlikely a data point will exactly coincide with a grid location. This raises an interesting question: how close is close? Should the field data only be used if the points exactly coincide, or, if a point is merely close to the grid location, should its value also be directly assigned? By varying how this assumption is treated, unexpected results can occur.

Nearest Neighbor

When gridding, the number of surrounding data points used to interpolate the grid value can significantly alter the map. The inverse-distance method is fundamentally a function that averages the surrounding points (Jones et al., 1986). Initially it may seem reasonable to use all the data points, since this makes maximum use of the available information. There are, however, drawbacks: 1) this can be computationally very expensive (particularly with large data sets); 2) what controls the nature of the values in one region may not apply to the entire map area; and 3) although more distant points are only lightly weighted, because of the number of points involved they tend to force the grid value toward the mean value of the data set. Commonly the nearest 10 data points are used (this is an arbitrary number, and under differing circumstances different values can be used). By using only a few points there is less averaging and only local features affect the grid value. The method is also computationally faster. The surfaces, however, tend to be rougher. Some fairly obvious errors can be created as a result of the spatial distribution of the data. In Figure 4, a grid point is placed between two contour lines, but because of the data density all the nearest neighbors come from one contour line; thus the interpolated value is inappropriate. This tends to give a stair-step look to the surface map (Figure 5). Also, in pits or on peaks, a point completely surrounded by one contour will often take the value of that contour; the method will not interpolate above or below the contour. If multiple grid points fall within the contour, these two problems tend to create flat spots or tables.
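A minimal sketch of the nearest-neighbor variant (names are hypothetical, not from the original study) makes the failure mode concrete: if every one of the k neighbors lies on the same contour line, the weighted average of identical values is simply that contour's elevation.

```python
import numpy as np

def nearest_k_estimate(xy, values, grid_pt, k=10, p=2.0):
    """1/d^p estimate using only the k nearest data points."""
    d = np.hypot(*(xy - grid_pt).T)
    idx = np.argsort(d)[:k]          # indices of the k nearest neighbors
    if d[idx[0]] == 0.0:             # grid node coincides with a data point
        return float(values[idx[0]])
    w = 1.0 / d[idx]**p
    return float(np.sum(w * values[idx]) / np.sum(w))
```

When the k selected neighbors all carry one contour's value, the estimate equals that value exactly; repeated over adjacent grid nodes, this produces the benches and flat tables described above.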

Figure 4: By choosing only the eight nearest neighbors to estimate the value for a grid location, incorrect interpolation is possible. The nearest neighbors can be dominated by a single contour line which has closely spaced data. This tends to generate benches on slopes. Grid points surrounded by a contour are assigned the value of the contour. Because inverse-distance is an averaging technique, it cannot interpolate above or below the surrounding data. This condition tends to generate flat tables.

Figure 5: Because of the search method used by inverse-distance and the mathematics of the method, pits in the valley bottoms, benches on the valley slopes, and flat tables on the mountain peaks are to be expected.

The problems of these flat spots or tables can, however, be partially eliminated by using quadrant or octant searches (Figure 6) (Jones et al., 1986; Harbaugh et al., 1977; Ripley, 1981). The quadrant search method requires that an equal number of neighboring points come from four different quadrants (labeled in Figure 6). This reduces the possibility that a nearby contour line or cluster of data will dominate local grid values. In some cases octants are used, where the surrounding area is divided into eight equal radial sections. Unfortunately, the method does not eliminate the problem entirely, and because points from farther away are used, the averaging tends to be more severe. Although the problem with flat spots is reduced, "spikes" can be more extreme.
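The neighbor selection itself can be sketched as follows (an illustrative sketch only; `quadrant_neighbors` and its arguments are hypothetical names, and ties on the axes are arbitrarily assigned to the non-negative side):

```python
import numpy as np

def quadrant_neighbors(xy, grid_pt, per_quadrant=2):
    """Select the nearest `per_quadrant` data points from each of the
    four quadrants around the grid location, returning their indices."""
    rel = xy - grid_pt
    d = np.hypot(rel[:, 0], rel[:, 1])
    # quadrant code: 0 = +x+y, 1 = -x+y, 2 = +x-y, 3 = -x-y
    quad = (rel[:, 0] < 0).astype(int) + 2 * (rel[:, 1] < 0).astype(int)
    chosen = []
    for q in range(4):
        idx = np.flatnonzero(quad == q)
        idx = idx[np.argsort(d[idx])][:per_quadrant]  # nearest in this quadrant
        chosen.extend(idx.tolist())
    return chosen
```

The selected indices would then feed the same 1/d^p average; the balancing across quadrants is what suppresses domination by one densely sampled contour line.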

Figure 6: By choosing a quadrant (or octant) search, the inverse-distance method can be forced to select neighbors from more than a local concentration of data points. This tends to reduce benching on slopes.


Raw field data should always be posted (overlaid) on the contour map, particularly when kriging is not used. Kriging will exactly honor the field data (Figure 2 and Figure 3), and it is more appropriate to evaluate estimation error maps. If inverse-distance is used, contour lines may be a significant distance from where the field data would suggest (Figure 7) due to the way in which the method tends to smooth the data. Inverse-distance gridding methods do not ensure that the data are honored.

Figure 7: This inverse-distance map (contour interval = 1.0 foot) clearly shows that the method does not honor the field data points. When using these maps it is important for the user to evaluate whether the errors are acceptable for the desired use.

This process of posting data points is fast and simple, and can quickly identify areas within the contour map which are questionable, or are based only on highly scattered data, and should be considered suspect. In areas of dense data, the map can become too complex to interpret easily, and some data may have to be removed, or the map printed at a larger scale, to evaluate the contour accuracy. This is generally a minor problem.

Although posting data helps identify how well the map honors the field data, honoring field data is not a complete measure of the quality or accuracy of the gridded map.
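Posting is a visual check; a simple numeric complement (a sketch, not part of the original study, with hypothetical names) is to evaluate the interpolated surface at each field location and list the residuals:

```python
import numpy as np

def honoring_residuals(xy, values, estimate):
    """Residual (estimate minus field value) at each data location.
    `estimate` is any callable mapping a coordinate pair to a surface
    value; large residuals flag places where the data are not honored."""
    pred = np.array([estimate(pt) for pt in xy])
    return pred - values
```

An exact interpolator such as kriging returns all zeros here; an inverse-distance surface generally does not, which is exactly the point made above.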


Using shaded-relief 2-1/2-dimensional isometric surface maps can greatly aid the modeler in evaluating the accuracy of contour maps. Though shaded-relief maps are only a visualization tool and do not give detailed information about absolute values, they are useful for evaluating the texture of the gridded surface. The modeler can quickly find unreasonable features, given prior expectations about the nature of the surface. For example, a common problem that quickly stands out is sharp peaks and pits (spikes) around digitized data points. Clearly these features are not real, but by examining only a contour map, they are difficult to identify, particularly to the untrained eye. An extreme example of a poorly gridded map is shown in Figure 11.
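One common way to compute such a shaded surface (a Lambertian hillshade from the grid's gradients; this sketch is illustrative, not the rendering used in the paper, and it assumes rows of the grid increase northward) is:

```python
import numpy as np

def hillshade(z, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Lambertian shaded relief of a gridded surface z (rows = y, cols = x).
    Returns brightness in [0, 1]; spikes and benches appear as abrupt
    brightness changes that are hard to see on a contour map."""
    dz_dy, dz_dx = np.gradient(z, cellsize)
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    # unit vector pointing toward the light source (x east, y north)
    lx = np.cos(alt) * np.sin(az)
    ly = np.cos(alt) * np.cos(az)
    lz = np.sin(alt)
    # dot product of surface normal (-dz/dx, -dz/dy, 1)/|n| with the light
    norm = np.sqrt(dz_dx**2 + dz_dy**2 + 1.0)
    shade = (-dz_dx * lx - dz_dy * ly + lz) / norm
    return np.clip(shade, 0.0, 1.0)
```

Because brightness depends on the local gradient rather than the absolute value, a single spiked grid node perturbs many pixels around it, which is why the texture defects discussed below are so conspicuous in this view.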


A series of contour maps and shaded relief surfaces are presented with commentary as to visible features on each type of map, and what may be the cause of their occurrence.

Grid Density

Many investigators assume a denser grid is a more accurate grid. The first set of maps progresses from a relatively coarse grid to a finer grid, demonstrating the problems of too fine a grid. Figure 8a is an inverse-distance gridded (75 x 75) map of the data set. Although the contour lines are a bit rough and there are some circular pits and peaks in the valley bottom, the map is fairly reasonable. In Figure 8b, a shaded-relief map, the surface texture is somewhat rough, but for the grid density, this is probably acceptable.

Figure 8: This is an inverse-distance solution for the map area based on a 75 x 75 grid. This is a reasonable solution, but there are pits in the valley bottom, and benches on the valley slopes are clearly visible in the shaded-relief map.

When the grid is increased to 200 x 200, there is a substantial increase in the number of small circles, and the roughness of the contour lines increases (Figure 9a). In Figure 9b the problems are easy to identify: the digitized data points are forming pits and peaks as nearby grid locations are substantially smoothed and averaged. These peaks and pits coincide with the digitized data points. By increasing the grid density, one can attain smoother surfaces overall, but locally produce unacceptable anomalies.

Figure 9: This is an inverse-distance solution for the map area based on a 200 x 200 grid. The results here are disturbing. Grid density was refined to increase map accuracy, but the contour lines became rougher, and the spikes at digitized data points are severe.

Search Method

It might be thought that using a quadrant search technique would reduce this problem, but because data points from even farther away are used, the results are in some ways even worse. The contour map in Figure 10a has even more circular features, and these are deeper and higher than those seen in the previous figures. The shaded-relief map (Figure 10b) does show a somewhat smoother texture between data points, but the problems at data point locations are still quite clear. However, the "stair stepping" on the gentle slopes was reduced. This illustrates that using quadrant (or octant) searches may result in a trade-off between eliminating flat spots and increasing the local averaging and enhancement of spikes.

Figure 10: This is an inverse-distance solution for the map area based on a 200 x 200 grid. Using a quadrant search helped reduce the benching, but because the method averages over greater distances, the spikes are even more severe than in Figure 9.

Gridding Algorithm

When the data are gridded using ordinary kriging with a linear semivariogram, the results are much better, though there are still some problems. One look at the contour map in Figure 3 and it is obvious this map better represents the data than the previous examples. The locations of some of the data points can still be identified by looking at the contour lines, however. In a number of locations the contour lines make a series of short connected arcs (Figure 3); where the arcs join, there is usually a data point. There are also still a few unexplained circles in the valley bottom. This, however, is a common problem with all gridding algorithms in valley bottoms. The shaded surface map in Figure 2b confirms the improved quality of this gridded interpretation. The surface has a quite smooth texture, though there are still a few small pits and peaks surrounding the digitized data points.

Nearest Neighbors, Search Radius, and Tolerance

The previous maps concentrated on the differences between grid sizing, search technique, and the gridding algorithm. In the following figures, the concentration is on the number of nearest-neighbor data points used in creating the grid, setting a maximum search radius, and defining how close is close (the tolerance) when deciding whether a grid location falls near enough to a data point to assign it the exact value (if multiple data points fall within the range, a straight average is used). The following figures show a number of features; these are summarized below:


Figures 11 - 14 are gridded using all 659 data points
Figures 15 - 16 use only the nearest 10 data points

Figure 11: This is an inverse-distance solution for the map area based on a 200 x 200 grid. By making poor decisions about the search radius, the number of nearest neighbors, and the tolerance of grid points to data points, very poor results are possible.

Figure 12: This is an inverse-distance solution for the map area based on a 200 x 200 grid. Although the results are still poor, increasing the tolerance has the effect of reducing spikes. It also has the effect of pulling down actual high points or pulling up actual low points.

Figure 13: This is an inverse-distance solution for the map area based on a 200 x 200 grid. Taking the tolerance to the extreme severely smoothes the data set, and although the contours appear very smooth (a good indicator) this map is a very poor representation of the site.

Figure 14: This is an inverse-distance solution for the map area based on a 200 x 200 grid. By reducing the search distance smoothing is not as severe; grid points are only averaged by local data, and are not averaged towards the mean of the data set.

Figure 15: This is an inverse-distance solution for the map area based on a 200 x 200 grid. By limiting the number of nearest neighbor data points the overall features of the surface are preserved, but spikes near digitized points occur. Also benching and flat spots are common.

Figure 16: This is an inverse-distance solution for the map area based on a 200 x 200 grid. By balancing the positive and negative effects of the tolerance, search radius, and the number of nearest neighbors, it is possible to achieve reasonable results with a dense grid.


Figure 14 has a maximum search range of 10.0 feet
Figures 11 - 13 have a maximum search range of 30.0 feet
Figures 15 - 16 have a maximum search range of 65.0 feet (this is effectively negated, however, by the fact that only the nearest 10 points are used).


Figures 11 & 15 have a tolerance of 0.1 feet
Figures 12 & 16 have a tolerance of 1.0 feet
Figures 13 & 14 have a tolerance of 3.0 feet

The results of the gridded shaded-relief surfaces can be summarized as follows: 1) using more data points to evaluate each grid location will smooth the surface texture of the map, but will also smooth the actual peaks and valleys in the data. Ten data points seemed to produce reasonable results with this data set, whereas using all the points severely smoothed major features; 2) a smaller maximum search distance yields less regional smoothing; this, however, tends to produce a rougher surface texture; 3) where very small tolerances are used (0.1 feet), individual data points produce large pits and peaks (Figure 11); where larger tolerances are used (3.0 feet), there is significant regional smoothing which is not appropriate (Figure 13). There is a combination of gridding parameters that yields an adequate representation of this data set. A 1.0 foot tolerance appeared reasonable, but determining a general rule of thumb for this behavior may prove difficult.
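The three controls discussed in this section can be combined in a single estimator sketch (illustrative only; `idw_full` and its argument names are hypothetical, with defaults echoing the parameters listed for Figure 16, and a tolerance greater than zero is assumed so division by zero cannot occur):

```python
import numpy as np

def idw_full(xy, values, grid_pt, k=10, radius=65.0, tol=1.0, p=2.0):
    """Inverse-distance estimate with the three controls discussed above:
    k nearest neighbors, a maximum search radius, and a tolerance within
    which field values are assigned directly (tol > 0 assumed)."""
    d = np.hypot(*(xy - grid_pt).T)
    near = d <= tol
    if near.any():
        return float(values[near].mean())  # within tolerance: honor the data
    keep = d <= radius                     # enforce the maximum search radius
    if not keep.any():
        return float("nan")                # no data within range
    d, v = d[keep], values[keep]
    idx = np.argsort(d)[:k]                # k nearest of the in-range points
    w = 1.0 / d[idx]**p
    return float(np.sum(w * v[idx]) / np.sum(w))
```

Sweeping `tol`, `radius`, and `k` over the values quoted for Figures 11 - 16 is one way to reproduce the trade-offs summarized above: small tolerances spike, large tolerances oversmooth, and a bounded radius limits averaging toward the data-set mean.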

Finally, a problem visible in all the shaded-relief maps of inverse-distance gridded surfaces where only the 10 nearest data points are evaluated is the occurrence of flat spots. These occur on peaks, in the valley bottom, and to a slight extent on valley slopes. These problems relate to the search method, and it is possible a quadrant search would have improved the results.


Before using a contour map, it is critical to verify that it accurately represents the field conditions of concern. If the contoured data are inappropriate and are used in further analysis, the error is propagated to later results, making those results at best questionable. Two techniques were presented to aid those developing contour maps in evaluating their accuracy. In addition to examining the contour map for obvious problems, it is suggested that field data values be overlaid with the contour map at some stage before the contoured data are used for analysis. This will highlight areas where the gridded contour map clearly does not honor the field data. Also, shaded-relief isometric surface maps are useful in identifying unusually rough surfaces and errors surrounding digitized data points.

By using these techniques, many of the problems commonly associated with using contoured maps can be identified. These techniques can identify areas within a contour map which are clearly unreasonable, and will emphasize that the raw data need to be more carefully processed before they are used.


This work was funded by the U.S. Bureau of Reclamation, cooperative agreement 1-FC-81-17500.


Banks, R., 1991, Contouring Algorithms, Geobyte, October, pp. 15-23.

Burrough, P.A., 1986, Principles of Geographical Information Systems for Land Resource Assessment, Monographs on Soil and Resources Survey No. 12, Oxford Science Publications - Clarendon Press, Oxford, p. 153.

Harbaugh, J.W., J.H. Doveton, and J.C. Davis, 1977, Probability Methods in Oil Exploration, John Wiley & Sons, New York.

Jones, T.A., D.E. Hamilton, and C.R. Johnson, 1986, Contouring Geologic Surfaces With the Computer, Van Nostrand Reinhold, London, England.

Lillesand, T.M. and R.W. Kiefer, 1987, Remote Sensing and Image Interpretation, Second Edition, John Wiley and Sons, New York.

Ripley, B.D., 1981, Spatial Statistics, John Wiley & Sons, Inc., New York.