Re-evaluation of early instrumental earthquake locations: methodology and examples

The difficulties of locating earthquakes in the early instrumental period are not always fully appreciated. The networks were sparse, and the instruments themselves were of low gain, often had inappropriate frequency response and recording resolution, and their timing could be unreliable and inaccurate. Additionally, knowledge of earth structure was limited, with consequent difficulties in phase identification and in modelling propagation. The primitive Zöppritz tables for P and S, with no allowance for the core, did not come into use until 1907, and remained the main model until the adoption of the Jeffreys-Bullen tables in the mid-1930s. It was not until the early 1920s that studies of Hindu Kush earthquakes revealed that earthquake foci could have significant depth. Although many early locations are creditably accurate, others can be improved by use of more modern techniques. Early earthquakes in unusual places often repay closer investigation. Many events after about 1910 are well enough recorded to be re-located by computer techniques, but earlier locations can still be improved by using more recent knowledge and simpler techniques, such as phase re-identification and graphical re-location. One technique that helps with early events is to locate events using the time of the maximum phase of surface waves, which is often well reported. Macroseismic information is also valuable in giving confirmation of earthquake positions or helping to re-assess them, including giving indications of focal depth. For many events in the early instrumental period macroseismic locations are to be preferred to the poorly-controlled instrumental ones. Macroseismic locations can also make useful trial origins for computer re-location. Even more recent events, which appear to be well located, may be grossly in error due to mis-interpretation of phases and inadequate instrumental coverage.
A well converging mathematical solution does not always put the earthquake in the right place, and computer location programs may give unrealistically small estimates of error. Examples are given of improvements in locations of particular earthquakes in various parts of the world and in different time periods.

Mailing address: Dr. Robin D. Adams, International Seismological Centre, Pipers Lane, Thatcham, Berkshire, RG19 4NS, U.K.; e-mail: robin.adams1@btopenworld.com

in a gradual refinement. Particularly in the early days the locations could be unreliable, and instrumental information was used to refine macroseismic studies rather than replace them. Even at the present time instrumental and macroseismic investigations can be used to supplement each other, and their combined use produces the best results.
This paper summarises various location techniques and points out some difficulties associated with them that are not always appreciated.

Macroseismic locations
Macroseismic location was the only technique available until the beginning of the twentieth century, and in some parts of the world is still the best available. Some general points are obvious.
For small events, say of magnitude 2 1/2 or less, the felt area will be small, and the epicentre must be close to anywhere where it was reported felt.
For larger events the pattern of isoseismals may give the epicentre, as being close to the centre of the isoseismal of highest intensity. In some instances epicentres are very well controlled by felt effects and may be preferred to poor instrumental determinations. There can be complications, however, in areas above subduction zones, where lateral velocity variations can cause focussing of energy and consequent distortion of the felt pattern. In New Zealand, for example, because of the shallowing of the zone of deep earthquakes southwards, intermediate depth earthquakes beneath the centre of the North Island are often felt only lightly at the epicentre, with the highest intensities displaced southwards.
There is an additional complication with the very largest earthquakes, say of magnitude 7 and greater, where the source dimension can be more than several tens of kilometres in extent. Unless the source rupture is symmetrical the epicentre, above the point of initiation of rupture, may be displaced from the region of highest energy release identified by macroseismic reports. In such cases the epicentre has less significance.
The spacing of isoseismals may also give an indication of focal depth. In general isoseismals from a shallow event will have a peak near the epicentre and fall off evenly with distance; those from deeper events will have inner isoseismals that are wider, with the outer isoseismals more closely spaced. Many attempts have been made to quantify this effect, an early one being by Kövesligethy (1906), with more recent relations given, for example, by Musson and Cecic (2002).
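The idea can be made concrete with the absorption-free form of the Kövesligethy relation, rearranged to give a depth estimate from a single isoseismal radius. The sketch below is illustrative only: it keeps the classical coefficient of 3 and drops the anelastic absorption term that working relations (such as those discussed by Musson and Cecic, 2002) retain, so the numbers should not be taken as a recommended formula.

```python
import math

# Depth from the radius of one isoseismal, using the absorption-free
# Kövesligethy form  I0 - I = 3 * log10(r / h),  with r = sqrt(d^2 + h^2).
# Solving for focal depth gives h = d / sqrt(10**(2*(I0 - I)/3) - 1).
# The coefficient 3 and the neglect of absorption are simplifying
# assumptions for illustration only.

def depth_from_isoseismal(d_km, i0, i):
    """Focal depth (km) from an isoseismal of intensity i at radius d_km,
    given epicentral intensity i0."""
    ratio = 10.0 ** ((i0 - i) / 3.0)          # the ratio r / h
    return d_km / math.sqrt(ratio ** 2 - 1.0)

# Intensity falling from I0 = 8 at the epicentre to I = 6 at a radius of
# 30 km implies a shallow focus of roughly 6-7 km under these assumptions.
print(round(depth_from_isoseismal(30.0, 8, 6), 1))
```

The qualitative behaviour matches the text: for a deep focus the inner isoseismals are broad relative to the depth, so a given radius maps to a larger h.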
Epicentral intensity may give an indication of magnitude. Various formulae applicable in different regions have been developed for this; see, for example, Ambraseys et al. (1994).
It must be remembered that for many centuries these were the only techniques available for earthquake location. In some areas macroseismic techniques may now be calibrated by well-controlled instrumental locations, enabling some re-evaluation of earlier results.

Location with networks - Basic principles
Some information about an earthquake's location may be found from readings at a single station, such as its distance, estimated from the interval between different phases, and its approximate azimuth, from analysis of horizontal motion. The character of the recording and possibly the identification of depth phases may also give an indication of focal depth. Recent advances in digital processing allow these parameters to be determined more accurately from modern digital stations.
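The single-station distance estimate follows directly from the interval between the P and S arrivals once velocities are assumed. The sketch below uses illustrative crustal velocities and a homogeneous straight-ray approximation; real work uses travel-time tables.

```python
# Distance from a single station using the S-P interval, assuming
# straight-ray travel in a homogeneous medium. The velocities below are
# illustrative crustal values, not those of any particular earth model.

def distance_from_sp(sp_seconds, vp=6.0, vs=3.5):
    """Epicentral distance (km) from the S-P time (s):
    d/vs - d/vp = tS - tP, so d = (tS - tP) * vp * vs / (vp - vs)."""
    return sp_seconds * vp * vs / (vp - vs)

# A 10 s S-P interval corresponds to 84 km with these velocities.
print(round(distance_from_sp(10.0)))
```

One distance alone leaves the azimuth undetermined, which is why the horizontal-motion analysis mentioned above, or a network of stations, is needed for a full location.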
For reliable location, however, it is desirable to compare arrival times of different phases at a network of stations. To determine the origin time and the three spatial co-ordinates of the focus at least four independent arrival times are needed from at least three stations, and for a good solution many stations should be used.
Computers were not generally available before the early 1960s, so before then it was usual to use graphical methods to locate earthquakes. In this method, the origin time is found from the differences between phases at individual stations, and the travel times of observed phases are then used to determine distances to the stations. Arcs corresponding to these distances are drawn on a map or globe, and adjustments made to give the best fit. Graphical methods demonstrate well the principles involved, and in particular the necessity of having stations well distributed around the epicentre to obtain a reliable solution.
Mathematical analysis generally uses the method of least squares, proposed by Geiger (1910). A trial origin is chosen and for each phase the time difference between the observed arrival and that expected from the adopted velocity model is calculated, and the origin parameters adjusted to minimise the sum of the squares of these «residuals». Before the advent of computers this procedure could be carried out on mechanical calculators, but this was a laborious task and usually only a single iteration was performed. The difficulties of graphical solutions still remain when computer location is used, and the use of computers cannot overcome bad station geometry. If the station distribution is bad, the solution will still be bad and the formal errors large. Depending on station geometry and choice of trial origin, the program might find a false location or even fail to converge. This method will do exactly what it is asked to do - it will minimise the sum of squares of residuals for the given velocity model. This is not always helpful. For example the program will try to overcome any deficiencies in the velocity model. In subduction zones with large lateral velocity variations, the computer may put the event in quite the wrong place, with an obligingly small but unrealistic error, with no indication that the event is misplaced. This is often the case in the South Pacific.
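The Geiger procedure can be sketched in a few lines. The example below is a simplified illustration, not any agency's routine: it solves only for epicentre and origin time (depth fixed), replaces a travel-time table with a uniform velocity, and uses synthetic arrivals.

```python
import numpy as np

# Sketch of Geiger's (1910) iterative least-squares location, simplified
# to two spatial unknowns (x, y) plus origin time t0, with a uniform
# velocity in place of a travel-time table. All numbers are synthetic.

def geiger_locate(stations, arrivals, trial, v=6.0, iterations=10):
    """stations: (n, 2) coordinates in km; arrivals: (n,) times in s;
    trial: starting [x, y, t0]. Returns the adjusted [x, y, t0]."""
    x = np.asarray(trial, dtype=float)
    for _ in range(iterations):
        d = np.linalg.norm(stations - x[:2], axis=1)    # distances, km
        residuals = arrivals - (x[2] + d / v)           # observed - predicted
        # Partial derivatives of the predicted arrival w.r.t. (x, y, t0)
        G = np.column_stack([-(stations - x[:2]) / (v * d[:, None]),
                             np.ones(len(d))])
        dx, *_ = np.linalg.lstsq(G, residuals, rcond=None)
        x += dx                                         # adjust the origin
    return x

stations = np.array([[0, 50], [40, -30], [-45, 10], [20, 60]], float)
true = np.array([5.0, 8.0, 0.0])                        # x, y, t0
arrivals = true[2] + np.linalg.norm(stations - true[:2], axis=1) / 6.0
print(geiger_locate(stations, arrivals, trial=[0, 0, 0]))
```

With noise-free synthetic arrivals the iteration recovers the true origin; with real data the same machinery simply minimises the residuals for whatever velocity model it is given, which is exactly the limitation the text warns about.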
Figure 2 shows an example from New Zealand in which an intermediate-depth earthquake in the centre of a network of more than 20 stations was given an apparently well-controlled solution using the laterally homogeneous Jeffreys-Bullen velocity model (k = 1.0 in fig. 2). This origin did not fit reported arrival times from stations outside New Zealand, however, nor the felt effects. Assuming a velocity in the deep earthquake zone 10% higher than normal gave a solution in which the epicentre was moved 60 km (k = 0.9) and the focal depth reduced by 50 km. Adopting the new position removed all discrepancies and gave what was clearly a position closer to the true one, but from the mathematical point of view the initial determination was well controlled, with small errors, and there was no reason to doubt its validity (Adams and Ware, 1977).
It must be stressed that the formal standard errors given by the least squares procedure are a measure of the consistency of the data with respect to the specific model used, and do not necessarily reflect physical errors.

General difficulties
Some sources of error are common to all earthquake catalogues and listings, including those from the pre-instrumental era. Errors may be divided into omissions, spurious events and mis-location.

Omissions
In pre-instrumental times earthquakes were reported only from populated areas; note for example the lack of earthquakes in oceanic areas in early compilations of global seismicity (e.g., Mallet, 1858). The increasing sensitivity of instruments now means that global coverage of detection is generally down to magnitude 4 1/2 at the International Seismological Centre (Willemann, 1999), but detection thresholds are much lower in areas with close local networks. Nevertheless, earthquakes may remain undetected if they occur at times of instrumental failure or excessive microseismic noise, or if their record is confused with that of another event. Little can be done to rectify such omissions. Earthquakes reported with errors in time or position may result in the omission of the true event.

Spurious events
Errors in timing are a common source of spurious events; sometimes the event is also reported at the correct time, resulting in duplication. Errors of minute, day, month, year or even century occur. Minute and day errors are particularly common in recent instrumental listings and ISC uses a program to seek these. A felt report can sometimes help resolve an ambiguity, but these too may be subject to error.
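One simple way such timing duplicates can be screened for is to flag pairs of events whose origin times differ by almost exactly one minute and whose locations nearly coincide. The sketch below is a hypothetical illustration of the idea, not the ISC's actual program; the thresholds and the event list are invented.

```python
from datetime import datetime

# Flag possible duplicate events caused by whole-minute timing errors:
# pairs whose origin times differ by close to 60 s and whose epicentres
# nearly coincide. Thresholds and events are invented for illustration;
# this is not the ISC's procedure.

def minute_error_pairs(events, tol_s=2.0, max_deg=0.5):
    flagged = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            dt = abs((a["time"] - b["time"]).total_seconds())
            near = (abs(a["lat"] - b["lat"]) < max_deg and
                    abs(a["lon"] - b["lon"]) < max_deg)
            if near and abs(dt - 60.0) < tol_s:
                flagged.append((a["id"], b["id"]))
    return flagged

events = [
    {"id": "A", "time": datetime(1932, 5, 4, 12, 10, 3), "lat": 10.2, "lon": -85.1},
    {"id": "B", "time": datetime(1932, 5, 4, 12, 11, 4), "lat": 10.3, "lon": -85.0},
    {"id": "C", "time": datetime(1932, 5, 4, 18, 0, 0), "lat": 40.0, "lon": 20.0},
]
print(minute_error_pairs(events))   # [('A', 'B')]
```

Flagged pairs still need an analyst's judgement, since two genuine events a minute apart in an aftershock sequence would trip the same test.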
Different agencies can also locate the same event far enough apart to cause a «split» event.
An example of such a three-way split is shown in fig. 3. Three national agencies each used their own network to locate a small earthquake off Central America at widely different positions; ISC was able to combine the readings into a single event (Adams and Richardson, 1996). On the global scale spurious events can be formed by chance mis-association of unrelated readings. The automatic «search» procedure at ISC regularly found several hundred of these spurious events each month, which had to be removed from the files; occasionally, however, such spurious events would remain in the listings. Improved procedures now reduce the chance of such mis-associations.
Mis-interpretation of phases can also result in spurious events. There is a tendency for small local networks to interpret arrivals from teleseisms as a local event. Core phases from Pacific earthquakes have been interpreted in Europe as local readings, and a false local earthquake postulated. In an extreme case the Large Aperture Seismic Array in Montana (LASA) during the early 1970s mis-interpreted steeply arriving core phases as direct P arrivals from earthquakes near the far limit of allowable distance, resulting in a ring of false events at distances near 110º (Ambraseys and Adams, 1986). Figure 4 shows these events in Cameroon, where by chance they could be associated with a line of volcanic activity.
Catalogues may also be contaminated with non-seismic events such as explosions and other disturbances.
A final source of false events arises simply from mistakes in copying information from other sources. Such mistakes may propagate through many generations of catalogues. Cases have occurred of transposition of latitude and longitude, and north-south and east-west confusion. There can also be confusion between the order of day and month.

Difficulties in the early period of earthquake location
It is sometimes difficult for present-day seismologists to appreciate the difficulties our early colleagues faced. There are fundamental recording difficulties arising from several sources.

Sparse networks
Early networks developed slowly. In some areas of strong local activity, such as Japan and California, regional networks grew reasonably rapidly after the development of instruments, but on a global scale the coverage was initially sparse, enabling only the largest events to be detected and located. Nevertheless, the early global network of about 30 Milne instruments set up by the British Association for the Advancement of Science in the late 1890s was the first attempt to provide global coverage (fig. 5; Milne, 1900), and provided early locations, albeit with limited sensitivity and precision.

Instrumental difficulties
Early instruments were mainly insensitive and had inappropriate characteristics. For example, the Milne seismographs were undamped, of low gain and slow recording speed. They were of intermediate period (about 12 s) and were not good for recording body waves.

Timing difficulties
Again, it is hard for present-day seismologists to appreciate the difficulty in obtaining accurate timing in the era before crystal clocks and radio transmission. It is not by chance that many early seismological stations were installed at astronomical observatories, for example, Mount Wilson in California and Wellington in New Zealand. Here the observatory clocks could be rated by astronomical observations, but large errors could accumulate during cloudy periods. Up till the 1960s marine chronometers remained one of the most reliable timing sources, in later periods checked against radio time signals.

Lack of knowledge of seismic phases and travel times
In early seismology knowledge of earth structure and earthquake location developed together, each helping to improve the other. The earliest travel-time tables in common use were those of Zöppritz (1907), for P and S phases, with no allowance for the core. The existence of the core was proposed in 1910, and the possibility of deep events recognised in the early 1920s, but the Zöppritz tables remained in common use until Jeffreys and Bullen developed their tables in the 1930s. The inner core was not discovered until 1936. Thus early seismologists, even if they could pick arrivals from their records, lacked the knowledge of earth structure to enable them to interpret them with certainty.

Data available
There are several sources of data for re-evaluating early earthquake locations. For global coverage the publications of the British Association (BAAS) provide the fullest source. With their support Milne published lists of phases recorded at his global network from 1899 onwards. These are generally referred to by the name of his home town in the Isle of Wight off the south coast of England as the «Shide Circulars». BAAS later published epicentral estimates for the period 1899-1917, after which this work was taken over by the International Seismological Summary (ISS), originally set up in 1921 by the newly-formed International Union of Geodesy and Geophysics. ISS was reconstituted as the present International Seismological Centre (ISC) in 1964. Between them the bulletins of ISS and ISC remain the most complete source of seismic readings available for re-evaluation of global seismology (Adams, 2002).
Other agencies also contributed to the collection and analysis of global earthquake information. The Bureau Central de Séismologie (BCIS) in Strasbourg published global bulletins for 1903 to 1963, after which it concentrated mainly on European seismicity. Successive governmental agencies in the United States have carried out global earthquake location since 1928; at present this is undertaken by the National Earthquake Information Center of the US Geological Survey. Many regional agencies also undertake some global analysis as well as the detailed study of the seismicity of their own region. The Institute of Physics of the Earth in Moscow and the Japanese Meteorological Agency are foremost among these.
An extremely valuable source of information on early earthquakes is the bulletins regularly published by networks and individual stations throughout the world. These often contain much more information than was submitted to international agencies, including later phases and details of amplitude and period of recorded phases that are invaluable in the estimation of magnitude. The bulletins of the Swedish network published by Uppsala University are a particularly rich source of information. Sadly, with the growth of modern technology and automated data exchange, such bulletins have now almost totally disappeared.

Re-evaluation techniques
Some experience is required in identifying poor solutions in catalogues. Obvious clues that suggest that an event warrants closer investigation are unusual positions, unusual groupings of stations, unsatisfactory residuals and discrepancy with felt reports.
The first step is to re-assess the data. This involves looking at the given station readings to see if they could have been mis-interpreted, or if they might have been mis-associated from another event, or even if they contain systematic timing errors. Phase mis-identification is a common error, sometimes simply confusion between P and S phases, but also gross mis-identifications, such as interpreting core phases as P arrivals from a fictitious event. It is also worthwhile searching for additional readings from station bulletins or other sources, and checking for any available felt information.
If enough readings are available a computer re-location may be attempted; this may be especially relevant for earthquakes previously located only by graphical means. Often, however, the quality and quantity of data are not enough for a computer solution to converge, and in such cases simple graphical methods can improve a poor solution.
A technique that is useful in the re-interpretation of early earthquakes, particularly those recorded by Milne instruments, is to make use of the reported time of arrival of the maximum phase M of surface waves (Ambraseys and Adams, 1986). Assuming that this travels at a velocity of about 3 km/s enables distances from stations to be calculated and locations estimated by graphical means. Although the timing may not be known accurately, the slow velocity reduces corresponding errors in distance. An example is shown in fig. 6 for an earthquake in 1906, originally located by BAAS in the Mediterranean off the coast of Egypt. Re-interpreting later arrivals at ten stations ranging in distance from 15º (Helwan) to 69º (Batavia) showed the event to be at a more usual location in the Red Sea. The arcs drawn in this figure show that such locations are not well determined by present standards, but a gross mis-location has been corrected.
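The M-phase technique lends itself to a simple numerical analogue of the graphical construction: each station's reported M time, with an assumed 3 km/s group velocity, defines a distance arc, and a coarse grid search over trial epicentres stands in for drawing arcs on a globe. The sketch below uses invented station coordinates and a synthetic event; it illustrates the principle and is not the procedure used in the cited studies.

```python
import math

R = 6371.0  # mean Earth radius, km

def gc_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) by the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def locate_from_m(stations, m_times, t0, v=3.0):
    """Grid-search epicentre minimising the misfit between distances
    implied by M-phase travel times, v * (tM - t0), and trial
    great-circle distances. The half-degree search window below is
    assumed to be centred on a rough first guess."""
    best, best_pos = float("inf"), None
    for ilat in range(20, 61):                 # 10N..30N in 0.5-degree steps
        for ilon in range(60, 101):            # 30E..50E
            lat, lon = ilat / 2.0, ilon / 2.0
            misfit = sum((gc_km(lat, lon, slat, slon) - v * (tm - t0)) ** 2
                         for (slat, slon), tm in zip(stations, m_times))
            if misfit < best:
                best, best_pos = misfit, (lat, lon)
    return best_pos

# Synthetic event at 20N, 38E recorded at three invented station positions.
stations = [(30.0, 31.0), (-6.0, 107.0), (52.0, 13.0)]
m_times = [gc_km(20.0, 38.0, slat, slon) / 3.0 for slat, slon in stations]
print(locate_from_m(stations, m_times, t0=0.0))
```

As the text notes, the low group velocity is what makes this usable: a 10 s timing error corresponds to only about 30 km of distance, small compared with the arcs involved at teleseismic ranges.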
An example of systematic re-location of early events in a given region is found in Ambraseys and Adams (2001) for earthquakes in Central America. Here a variety of techniques was used, with great reliance being given to macroseismic reports for early events. Some instrumental information was available from 1898 onwards; at the beginning of the period this could only be shown to be consistent with the felt information, but later could more confidently confirm the macroseismic position. The earliest event for which a reliable instrumental position could be determined was on 1 July 1907, in Honduras. Figure 7 shows events for which our re-determinations in this region were at least 500 km from earlier solutions. The quality of published instrumental locations improved with time, and particularly after the advent of the Worldwide Standard Seismograph Network in 1964 there were only a very few major discrepancies.
An example of useful combination of instrumental and macroseismic information is given by Ambraseys and Adams (1993) for a damaging earthquake of magnitude 6 in Cyprus on 10 September 1953. It was well recorded by stations worldwide and given a well-determined position some 15 km offshore to the west of the island. The macroseismic information, however, placed highest intensities well inland. Careful scrutiny of reported phase arrivals then revealed that many stations reported a second arrival about 10 s after the initial onset. According to distance these had been variously interpreted as P*, PP, pP or PcP. When these later arrivals were analysed separately they established the existence of a second event of approximately the same size about 50 km from the first, in the area of highest reported intensity.
Failure to correctly identify crustal phases in local earthquakes can also result in significant mis-locations. Figure 8 shows two solutions for an earthquake near the south coast of France. The first, obtained by NEIC, was calculated without the benefit of readings from the closest station, Cadarache, and treating arrivals from the remaining stations as simple P. Reinterpreting arrivals at the closest four stations as the crustal phase Pg and that at the most distant as P* gave a much improved solution at a location some 50 km away.

Future work
An experienced analyst will learn to recognise signs that a particular solution may be in error, and to re-assess the data to give an improved result, but there is no easy way to improve early locations if the data are not adequate.
In the earliest period each earthquake needs to be looked at individually, bearing in mind the limitations of knowledge available to the contemporary seismologists who carried out the original location.
For the period when recordings are routinely better, it may be possible to undertake routine computer re-evaluation, but this will not necessarily reveal all deficiencies.
The correct assessment of macroseismic information can also be used as an additional tool to control poorly determined instrumental locations and to resolve ambiguities.
A combination of these techniques may be used to improve the reliability of early earthquake catalogues for use in tectonic studies and hazard analysis.

Fig. 2. New Zealand earthquake of 4 January 1975. Stars show epicentres determined by USGS and by New Zealand procedures using standard (k = 1.0) and modified (k = 0.9) velocity models.

Fig. 3. Small stars show the locations given by three national agencies for a Central American earthquake on 3 September 1992. Large star shows position obtained by ISC by combining all readings.

Fig. 4. False events in Africa mis-located by LASA by interpreting core phases from Pacific earthquakes. Numerals give the proposed magnitudes.

Fig. 5. Global network of seismograph stations installed by Milne about 1900. Numbers give approximate locations of detected earthquakes.

Fig. 6. Solid star shows position of earthquake originally located off coast of Egypt; arcs show re-location in Red Sea, using Milne readings from stations shown by small stars.

Fig. 7. Earthquakes in Central American region for which positions re-located by Ambraseys and Adams (2001) were at least 500 km from those originally assigned. Triangles show original positions, circles relocations.

Fig. 8. Solutions of earthquake near south of France in 1990 with improved ISC position about 50 km from that given by NEIC after re-interpretation of crustal phases.