Abstract
Technological progress is providing seismic tomographers with computers
of rapidly increasing speed and RAM, which are not always properly taken
advantage of. Large computers with both shared-memory and distributed-memory
architectures have made it possible to approach the tomographic
inverse problem more accurately. For example, resolution can be quantified
from the resolution matrix rather than from checkerboard tests; the covariance
matrix can be calculated to evaluate the propagation of errors from data to
model parameters; the L-curve method can be applied to determine a range
of acceptable regularization schemes. We show how these exercises can be
implemented efficiently on different hardware architectures.
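
The three exercises named above all reduce to damped least-squares algebra. As a minimal sketch, not code from the paper itself: for a toy linear system G m = d (the matrix G, data d, damping eps, and noise level sigma below are illustrative assumptions), the resolution matrix, model covariance, and L-curve can be computed as follows.

# Hedged sketch: resolution matrix, model covariance, and L-curve
# for a small damped least-squares problem G m = d. All names here
# (G, d, eps, sigma) are illustrative; real tomographic systems are
# far too large for these dense operations.
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((200, 50))      # toy sensitivity matrix
m_true = rng.standard_normal(50)        # toy "true" model
sigma = 0.1                             # assumed data noise level
d = G @ m_true + sigma * rng.standard_normal(200)

def damped_inverse(G, eps):
    """Generalized inverse (G^T G + eps^2 I)^-1 G^T of the damped normal equations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + eps**2 * np.eye(n), G.T)

Gi = damped_inverse(G, eps=1.0)
R = Gi @ G                   # resolution matrix: m_est = R @ m_true + mapped noise
C_m = sigma**2 * Gi @ Gi.T   # model covariance propagated from uncorrelated data errors

# L-curve: trade-off between residual norm and model norm over a range
# of damping values; the "corner" suggests acceptable regularization.
for e in np.logspace(-2, 2, 9):
    m = damped_inverse(G, e) @ d
    print(f"eps={e:8.3f}  |Gm-d|={np.linalg.norm(G @ m - d):8.3f}  "
          f"|m|={np.linalg.norm(m):8.3f}")

At tomographic scale, the dense factorizations in this sketch must be replaced by the parallel strategies discussed in the paper for shared-memory and distributed-memory architectures.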
Keywords
Numerical inverse theory; seismology; global tomography; seismic resolution; Earth's mantle
DOI: https://doi.org/10.4401/ag-4407
Published by INGV, Istituto Nazionale di Geofisica e Vulcanologia - ISSN: 2037-416X