A new approach for residual gravity anomaly profile interpretations: Forced Neural Network (FNN)

This paper presents a new approach for the interpretation of residual gravity anomaly profiles, assuming horizontal cylinders as sources. The new method, called Forced Neural Network (FNN), is introduced to determine the parameters of the underground structures that cause the anomalies and to detect the borders of geological bodies in a reliable way. In a first phase, a single neuron is used to model the system and a back-propagation algorithm is applied to find the density difference. In a second phase, the density differences are quantized and a mean square error is computed. This process is iterated until the mean square error is small enough. After obtaining reliable results on synthetic data, the method is tested on real data: the Gulf of Mexico gravity anomaly map, which has the form of an anticline structure, is examined. Gravity anomaly values along a cross section of this real case turn out to be very close to those obtained with the proposed method.

Mailing address: Dr. Onur Osman, Istanbul Commerce University, Ragip Gumuspala Cad. No. 84 Eminonu, 34378 Istanbul, Turkey; e-mail: oosman@iticu.edu.tr


Introduction
A classical problem in gravity and magnetic exploration is the computation of theoretical anomalies caused by idealized models of known shapes. Many workers have published different methods for carrying out such computations, and textbooks on potential theory, e.g. Routh (1908), provide various formulas for these models. Early publications like Barton (1929) dealt with the computation of the gradients of the gravity field. Hubbert (1948) used a line-integral approach for the computation of the gravitational attraction of two-dimensional masses. Bhattacharyya (1964), Nagy (1966), and Plouff (1976) presented closed-form analytical solutions for prism-shaped bodies. Talwani and Ewing (1960) and Talwani (1965) used numerical integration techniques for the computation of the fields due to models of arbitrary shape by dividing them into polygonal prisms or laminas. Parker (1974) tried to find depth and density values using gravity data. Green (1975) studied an inverse solution of gravity profiles. Last and Kubik (1983) estimated underground density distributions with recursive inverse solution techniques. Lines and Treitel (1984) applied a Singular Value Decomposition (SVD) approach to problems in the evaluation of gravity and seismic projections. Mareschal (1985) used the Fourier transform for the inverse solution of gravity density distributions. Murty et al. (1990) focused on density differences of 2D and 3D gravity models. Murthy and Rao (1993a,b) calculated inverse solutions of gravity and magnetic anomalies of polygonal structures using the Marquardt algorithm, and proposed methods for the inverse solution of gravity anomalies of circular, cylindrical, and vertical discs. Mosegaard and Tarantola (1995) applied the Monte Carlo method. Tsokas and Hansen (1997) studied crustal thickness in Greece using gravity anomalies.
Artificial neural networks are part of a much wider field called artificial intelligence, which can be defined as the study of mental faculties through the use of computational models (Charniak and McDermott, 1985). They encompass computer algorithms that solve classification, parameter estimation, parameter prediction, pattern recognition, completion, association, filtering, and optimization problems (Brown and Poulton, 1996). They have gained popularity in geophysics during the last decade because these tools can approximate any continuous function with arbitrary precision (Van der Baan and Jutten, 2000). The location of buried steel drums has been estimated from magnetic dipole sources using a supervised artificial neural network (Salem et al., 2001). Neural networks have been used to speed up the detection of ferro-metallic objects (Salem and Ushijima, 2001). The depth and radius of subsurface cavities have been determined from microgravity data using back-propagation neural networks (Eslam et al., 2001). Neural networks have also been studied to solve 1D and 2D resistivity inverse problems (El-Qady and Ushijima, 2001). For 2D modeling, Cellular Neural Networks (CNN) have been applied to the separation of regional/residual potential sources in geophysics by Albora et al. (2001a,b).
Artificial neural networks can be divided into two main categories: unsupervised recurrent networks and supervised feed-forward networks. In the unsupervised recurrent type, the networks allow information to flow in both directions. These models are called unsupervised because there is no teacher to set the input-output mapping relation during the learning phase. The supervised type is so called because, through a set of correct input-output pairs called the training set, the network learns the relation between the inputs and the outputs.
In this paper, a new algorithm, denoted «Forced Neural Network (FNN)», is proposed. The aim of the FNN is to estimate the physical parameters of buried objects. It is first applied to synthetic examples and then to real data. We have found satisfactory results in both cases.

Forced Neural Network
The artificial neural network is composed of many simple processing elements, which are massively interconnected and operate in parallel. The processing elements, commonly known as neurons, receive input from previous elements and send their output to other elements through synaptic connections. These connections have different weights; to find the effective values of the inputs and outputs, these values are multiplied by the weights. The main purpose of a neural network is to compute the weights giving the best output. To obtain eligible values for the weights, the back-propagation method, the most popular learning algorithm for neural networks, is used in this study.
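As an illustration of the weighted-sum behaviour described above, a minimal sketch of a single processing element follows; the function names and example numbers are ours, not from the paper:

```python
import numpy as np

def neuron_output(inputs, weights, bias=0.0, activation=lambda v: v):
    """Weighted sum of inputs passed through an activation function."""
    v = np.dot(weights, inputs) + bias  # local field of the neuron
    return activation(v)

# Example: three inputs arriving over synapses with unequal weights
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -0.2, 0.1])
print(neuron_output(x, w))  # 0.5*1 - 0.2*2 + 0.1*3 = 0.4
```

The activation function defaults to the identity here; the FNN described below uses a piecewise-linear one.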

Back propagation algorithm
The error signal at the output of neuron j at iteration n is defined by

e_j(n) = d_j(n) - y_j(n)   (2.1)

where neuron j is an output node, d_j(n) is the desired output and y_j(n) is the actual output of the Neural Network (NN). The instantaneous value of the error energy for neuron j can be defined as \frac{1}{2} e_j^2(n). Correspondingly, the instantaneous value E(n) of the total error energy is obtained by summing over all neurons in the output layer; these «visible» neurons are the only ones for which error signals can be calculated directly. We may thus write

E(n) = \frac{1}{2} \sum_{j \in C} e_j^2(n)   (2.2)

where the set C includes all the neurons in the output layer of the network (Haykin, 1999). Let N denote the total number of patterns (examples) contained in the training set. The average squared error energy is obtained by summing E(n) over all n and then normalizing with respect to the set size N, as shown by

E_{av} = \frac{1}{N} \sum_{n=1}^{N} E(n)   (2.3)

The correction \Delta w_{ji}(n) applied to the synaptic weight w_{ji}(n) is given by the delta rule

\Delta w_{ji}(n) = \eta \, \delta_j(n) \, y_i(n)   (2.4)

where \delta_j(n) is the local gradient and \eta is the learning rate (Haykin, 1999). The local gradient points to the required changes in the synaptic weights, and we obtain the Back-Propagation (BP) formula for the local gradient \delta_j(n) when neuron j is hidden:

\delta_j(n) = \varphi_j'(v_j(n)) \sum_k \delta_k(n) \, w_{kj}(n)   (2.5)
Figure 1 shows the signal-flow graph representation of eq. (2.5), assuming that the output layer consists of m_L neurons.
The factor \varphi_j'(v_j(n)) involved in the computation of the local gradient \delta_j(n) in eq. (2.5) depends solely on the activation function associated with hidden neuron j. The remaining factor involved in this computation, namely the summation over k, depends on two sets of terms. The first set of terms, \delta_k(n), requires knowledge of the error signals e_k(n) for all neurons that lie in the layer to the immediate right of hidden neuron j and that are directly connected to neuron j, as shown in fig. 1. The second set of terms, w_{kj}(n), consists of the synaptic weights associated with these connections.
We may redefine the local gradient \delta_j(n) for hidden neuron j as

\delta_j(n) = -\frac{\partial E(n)}{\partial y_j(n)} \frac{\partial y_j(n)}{\partial v_j(n)}   (2.6)

\delta_j(n) = -\frac{\partial E(n)}{\partial y_j(n)} \varphi_j'(v_j(n))   (2.7)

when neuron j is hidden. The local field v_j(n) produced at the input of the activation function associated with neuron j is

v_j(n) = \sum_{i=0}^{m} w_{ji}(n) \, y_i(n)   (2.8)

where m is the total number of inputs (excluding the bias) applied to neuron j (Haykin, 1999). The synaptic weight w_{j0} (corresponding to the fixed input y_0 = +1) equals the bias b_j applied to neuron j. Hence the function signal y_j(n) appearing at the output of neuron j at iteration n is

y_j(n) = \varphi_j(v_j(n))   (2.9)

Next, differentiating eq. (2.9) with respect to v_j(n), we get

\frac{\partial y_j(n)}{\partial v_j(n)} = \varphi_j'(v_j(n))   (2.10)

where the prime (on the right-hand side) signifies differentiation with respect to the argument (Haykin, 1999).
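The local-gradient computations of eqs. (2.4)-(2.10) can be sketched numerically as follows; a logistic activation is assumed here purely for concreteness, and all function names are ours:

```python
import numpy as np

def phi(v):
    """Logistic activation function."""
    return 1.0 / (1.0 + np.exp(-v))

def phi_prime(v):
    """Derivative of the logistic: phi'(v) = phi(v) * (1 - phi(v))."""
    s = phi(v)
    return s * (1.0 - s)

def delta_output(d_j, v_j):
    """Output-layer local gradient: delta_j = e_j * phi'(v_j), e_j = d_j - y_j."""
    e_j = d_j - phi(v_j)
    return e_j * phi_prime(v_j)

def delta_hidden(v_j, deltas_next, w_next_j):
    """Hidden-layer local gradient, eq. (2.5):
    delta_j = phi'(v_j) * sum_k delta_k * w_kj."""
    return phi_prime(v_j) * np.dot(deltas_next, w_next_j)

def weight_update(eta, delta_j, y_i):
    """Delta rule, eq. (2.4): correction to synaptic weight w_ji."""
    return eta * delta_j * y_i
```

For a hidden neuron the gradients of the next layer are combined with the connecting weights before being scaled by the local activation derivative, exactly as the summation over k in eq. (2.5) prescribes.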

Forced Neural Network for gravity anomaly
This method can be used to model arbitrary subsurface body geometries and density contrasts. We begin with a horizontal cylindrical structure, whose gravity anomaly at the observation point x_{ref} is the sum of the contributions of all cells of the cross section:

A(x_{ref}) = \sum_{i=1}^{H} \sum_{j=1}^{X} 2\pi G R^2 \Delta\rho_{i,j} \frac{h_i}{h_i^2 + (x_{ref} - x_j)^2}   (2.11)

where G is the gravitational constant, R is the cylinder radius, \Delta\rho is the density difference, H and X are the depth and the total length of the cross section respectively, i and j are the depth level and the horizontal distance of the cylinder from the starting point, and x_{ref} is the distance point where the anomaly value is observed.
The per-cell term of eq. (2.11) is used as an input of the neuron, which is shown in fig. 2; there are thus (H×X) inputs, and these inputs are constant for every A(x_{ref}). In fig. 2, \varphi(.) is an activation function. We use a piecewise-linear activation function (Haykin, 1999), which gives linear output values between zero and \Delta\rho depending on its input.
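The neuron inputs can be generated from the classical closed-form anomaly of an infinite horizontal cylinder, g(x) = 2πGΔρR²h/(h² + (x − x₀)²); the sketch below, with parameter values chosen only for illustration, computes such a profile:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def cylinder_anomaly(x_obs, x0, depth, radius, drho):
    """Gravity anomaly (m/s^2) of an infinite horizontal cylinder.

    Closed form: g(x) = 2*pi*G*drho*R^2 * h / (h^2 + (x - x0)^2),
    with the cylinder axis at horizontal position x0 and depth h.
    """
    x = np.asarray(x_obs, dtype=float)
    return 2.0 * np.pi * G * drho * radius**2 * depth / (depth**2 + (x - x0)**2)

# Anomaly profile over a 200 m line for a cylinder at 10 m depth
profile = cylinder_anomaly(np.linspace(-100.0, 100.0, 201), x0=0.0,
                           depth=10.0, radius=2.0, drho=500.0)
# The peak lies directly above the cylinder axis (x = x0)
```

Evaluating this kernel once per cell, for a unit density contrast, yields the constant inputs that the neuron weights \Delta\rho_{i,j} multiply.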
The neuron can be modeled as follows: the weights of the neuron are assigned as \Delta\rho_{i,j} for each pixel, and a linear function is assumed as the activation function. After back propagation, the \Delta\rho_{i,j} are updated and the output of the neuron gives the gravity anomaly. Although the density differences are found, the results of this system alone are not sufficient because of non-uniqueness and because the horizontal locations are constrained. Therefore, the value of \Delta\rho_{i,j} is set to zero if it is very close to zero relative to the density difference obtained from the geological features of the region; otherwise, it is set to the density difference of the geological region after back propagation.
«Forced» neural network means that, after a sufficient number of epochs has been applied, fixed values are assigned to the output of the neuron according to the density difference \Delta\rho, and this process is continued until the mean square error of the output A(x_{ref}), shown in fig. 2, becomes sufficiently small.
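The two-phase loop described above (back-propagation epochs followed by two-level quantization of the density-contrast weights) can be sketched as follows; the grid setup, learning rate, and the 0.5·Δρ quantization threshold are our assumptions for illustration, not values from the paper:

```python
import numpy as np

def fnn_invert(kernel, observed, drho_geo, cycles=10, epochs=350, eta=1e-3):
    """Sketch of the FNN idea: a single linear neuron whose weights are the
    cell density contrasts, trained by gradient descent on the squared error,
    then "forced" by two-level quantization (0 or drho_geo) each cycle.

    kernel:   (n_obs, n_cells) matrix; kernel[r, c] is the anomaly at
              observation point r caused by a unit-contrast cell c.
    observed: (n_obs,) observed anomaly values.
    """
    w = np.zeros(kernel.shape[1])          # density contrasts (the weights)
    for _ in range(cycles):
        for _ in range(epochs):            # back-propagation epochs
            y = kernel @ w                 # neuron output A(x_ref)
            e = observed - y               # error signal, eq. (2.1)
            w += eta * kernel.T @ e        # delta rule for a linear neuron
        # Force the weights: quantize to 0 or the geological contrast
        w = np.where(w > 0.5 * drho_geo, drho_geo, 0.0)
    mse = np.mean((observed - kernel @ w) ** 2)
    return w, mse
```

With a well-conditioned kernel the quantized weights converge to a binary image of the buried body; the learning rate and threshold control the stability of the loop.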

Performance of the algorithm in synthetic data
Our synthetic data are obtained from a cylindrical structure having a depth of 1 m and a radius of 2 m for \Delta\rho = 1 mGal, as shown in fig. 3. The anomalies of this model are the input data provided to the FNN. In the synthetic examples, every learning cycle comprises 350 epochs, and two-level quantization (\Delta\rho or zero) is applied after every 10 learning cycles, which was found to be optimum through experiments. The estimated geological structure obtained via the FNN results in an anomaly profile (dashed line) that is similar to the observed anomaly (solid line), as shown in fig. 3.
For a second synthetic model, we choose a T-type prismatic structure with \Delta\rho = 1 mGal and use the Talwani and Ewing (1960) 2D method to compute its anomaly. The estimated geological structure obtained via the FNN results in an anomaly profile (dashed line) that is similar to the observed anomaly (solid line), as shown in fig. 4. In both examples, satisfactory results are obtained.

Example of application on real data
As an example of the application of the FNN to real data, we use the Bouguer anomaly reported by Nettleton (1943), whose reproduction is shown in fig. 5. The anomaly was recorded in the Gulf of Mexico, about 241 km away from Galveston and a small distance inside the edge of the continental shelf. The importance of basement architecture for hydrocarbon exploration in the Gulf of Mexico Basin has been debated for years. Alexander (1999) studied the tectonics and stratigraphy of the Gulf Basin.
The origin of the topographic feature was not established until the gravity survey indicated a large closed minimum, coincident with the contours of the elevated mound, that could be accounted for only by the assumption of a salt dome. The survey was not extensive enough to define the gravity anomaly completely, but judicious extrapolation indicated the maximum negative anomaly to be about 9 mGal. The gravity anomaly map given in fig. 5 is taken from Dobrin and Savit (1988). Figure 6 is composed from the AB cross section of this map and shows Nettleton's interpretation of the salt structure giving rise to the anomaly. The solid line shows the observed anomaly and the dotted line shows the anomaly derived from the FNN. The results of the proposed method are very close to the observed anomaly.

Conclusions
The Forced Neural Network (FNN) presented in this paper shows that the gravity field at any point due to a solid body with uniform volume density can be computed as the field due to a fictitious distribution of surface mass density on the same body. First, we applied the FNN technique to two synthetic data sets; these tests provided successful results in fitting the calculated data to the observed data. As a real-data application, a salt dome gravity anomaly map taken from the NW part of the Gulf of Mexico was considered. This anomaly shows a negative closure from 1060 mGal to 990 mGal, mostly because of the geological properties of the salt dome: the density of the salt dome is lower than that of the surrounding rock formations. The anomaly of the AB cross section was modeled using the FNN, and the anomaly of this model is very close to the observed one. Comparing the methods of Nettleton and the FNN, we can see that the FNN model fits the observed anomaly better. The determination of the depth of the buried body from the gravity anomaly was also achieved with the Forced Neural Network: our model has its upper surface at about 1 km depth and its lower surface at about 9 km, with an approximate width of 15 km. The advantage of the proposed FNN algorithm over classical inversion techniques is that no initial information on the parameters of the buried structure, such as depth and width, is needed.

Fig. 1. Signal-flow graph of a part of the adjoint system pertaining to back-propagation of error signals.

Fig. 5. Gravity map observed over the inferred salt dome causing an anomaly in the water-bottom topography in the Gulf of Mexico (contour interval is 5 mGal) (modified from Dobrin and Savit, 1988).

Fig. 6. Inferred structure of the salt dome believed to be causing the offshore gravity anomaly of fig. 5. The agreement between the observed and calculated gravity profiles supports the choice of model for the structure (AB cross section; • observed anomaly; + FNN output; – Nettleton output).