Moving least-squares corrections for smoothed particle hydrodynamics

First-order moving least-squares are typically used in conjunction with smoothed particle hydrodynamics in the form of post-processing filters for density fields, to smooth out noise that develops in most applications of smoothed particle hydrodynamics. We show how an approach based on higher-order moving least-squares can be used to correct some of the main limitations in gradient and second-order derivative computation in classic smoothed particle hydrodynamics formulations. With a small increase in computational cost, we manage to achieve smooth density distributions without the need for post-processing and with higher accuracy in the computation of the viscous term of the Navier–Stokes equations, thereby reducing the formation of spurious shockwaves or other streaming effects in the evolution of fluid flow. Numerical tests on a classic two-dimensional dam-break problem confirm the improvement of the new approach.


Introduction
Smoothed particle hydrodynamics (SPH) is a meshless Lagrangian method in which a fluid is discretized using virtual interpolation nodes (particles) that are not fixed in any predefined mesh and whose evolution is described by the equations of motion of the fluid itself. The advantages of SPH include direct computation of all required physical quantities and their gradients, implicit tracking of surfaces (such as the free surface of the flow, or internal solidification fronts in the case of thermal evolution with phase transition), and the ability to cope with large deformations without any need for remeshing (which is typically needed, for example, with the finite element method).
SPH was originally developed in the late 1970s by Gingold and Monaghan [1977] and Lucy [1977], with applications relating to astrophysics. Recently, SPH has gained a lot of attention in other fields of computational fluid dynamics, with important applications ranging from coastal engineering [Hérault et al. 2009, Dalrymple and Hérault 2009] to geophysics and volcanology [Hérault et al. 2010b, 2011]. These applications rely heavily on the ease with which SPH can model both the mechanical and thermal evolution of free-surface fluids. These extensive applications have also highlighted some of the limitations that standard SPH formulations encounter, in terms of accuracy or when fluids with high viscosity are modeled.
The purpose of this paper is to present an alternative approach that, while keeping close to the spirit of SPH, improves on the general accuracy of the method and provides a stronger foundation for the evaluation of the higher-order derivatives that are necessary for the viscous term of classic fluid-modeling equations.

Standard smoothed particle hydrodynamics interpolation for fields
The mathematical foundation for SPH lies in the properties of convolutions. For any scalar field u in a domain Ω ⊆ R^d, by definition of the Dirac delta distribution δ we can write, with a typical abuse of notation:

u(x) = ∫_Ω u(y) δ(y − x) dy.

We can approximate δ with a family of smoothing kernels W(•, h), parameterized by their smoothing length h, satisfying:

lim_{h→0} W(•, h) = δ,

where the limit is intended in the sense of distributions. We then have:

u(x) ≈ ∫_Ω u(y) W(y − x, h) dy.

Physically, the domain Ω represents the body of the fluid, which we want to discretize with a finite set of particles with given masses m_i, densities ρ_i and volumes V_i = m_i/ρ_i. Formally, let a density function ρ: Ω → R and a set of points

Subject classification:
Algorithms and implementation, Smoothed particle hydrodynamics, Moving least-squares, Mesh-free.
at locations {x_i} ⊂ Ω be given. We approximate ρ in the weak sense with a discretization:

ρ(x) ≈ Σ_i m_i W(x − x_i, h),

where the summation is extended to all of the particles and the values m_i are called the masses of the particles. The weak approximation is read to mean that for any field u defined in Ω, we can write:

∫_Ω u(y) ρ(y) dy ≈ Σ_i m_i u(x_i).

Hence:

u(x) ≈ Σ_i u_i V_i W(x − x_i, h),

where u_i = u(x_i), ρ_i = ρ(x_i) and V_i = m_i/ρ_i. This gives us the standard SPH interpolation for field values. The smoothing kernels are usually chosen with compact support, so that the summation is extended only to particles in a neighborhood of x.
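As a concrete illustration, the standard SPH interpolation above can be sketched in a few lines of NumPy. The Wendland C2 kernel, the grid spacing and the smoothing length below are illustrative choices for a 2D setting, not the paper's exact configuration.

```python
import numpy as np

def wendland_w(r, h):
    # 2D Wendland C2 kernel with compact support r < 2h (a common SPH choice)
    q = r / h
    w = np.where(q < 2.0, (1.0 - q / 2.0) ** 4 * (2.0 * q + 1.0), 0.0)
    return 7.0 / (4.0 * np.pi * h ** 2) * w

def sph_value(x, pos, u, vol, h):
    # standard SPH interpolation: u(x) ~ sum_i u_i V_i W(|x - x_i|, h)
    r = np.linalg.norm(pos - x, axis=1)
    return np.sum(u * vol * wendland_w(r, h))
```

On a uniform particle lattice the interpolated value of a constant field is close to, but not exactly, the constant, which is precisely the lack of order-0 consistency discussed below.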
We note that there are two errors that contribute to the SPH interpolation. The first is related to the approximation of δ with the smoothing kernel W with length h, and goes to zero as h → 0. The second source of error is given by the space discretization, and therefore depends on the particle density, or equivalently on the average inter-particle spacing H. Again, the continuous limit must be obtained for H → 0, and to ensure that the entire SPH method is consistent, we should have H/h → 0 as h → 0, although some approaches have been devised to circumvent this issue, such as the renormalizing mesh-free scheme of Lanson and Vila [2008]. In applications, the average inter-particle distance H is usually a fixed fraction of the smoothing length h; typically h = aH with 2 < a < 4, the actual value depending on the smoothing kernel used.
The accuracy of the interpolation is related to the number of moments of W that are zero; this is always guaranteed for the first moments as long as the kernel is chosen center-symmetric, i.e. such that W(x − x_i, h) depends only on |x − x_i|.
To simplify the notation in what follows, we will write r_ij = |x_i − x_j| and W_ij as a short form for W(r_ij, h). Kernel symmetry ensures that W_ij = W_ji. We observe that for center-symmetric kernels, we can compute their gradient as:

∇_i W_ij = F(r_ij, h) (x_i − x_j).    (1)

When choosing an SPH kernel, it is often convenient to select one for which the factor:

F(r, h) = (1/r) ∂W(r, h)/∂r    (2)

can be computed analytically without an actual division by r. This ensures that the kernel gradient does not suffer from singularities from overlapping particles.
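The singularity-free factor F can be made concrete with the 2D Wendland C2 kernel used above; the closed form for F below follows from differentiating that particular kernel and is an illustration, not the paper's prescribed choice.

```python
import numpy as np

C2D = 7.0 / (4.0 * np.pi)  # 2D normalization of the Wendland C2 kernel

def kernel_f(r, h):
    # F(r, h) = (1/r) dW/dr, computed analytically so it stays finite as r -> 0
    q = r / h
    return np.where(q < 2.0, -5.0 * C2D / h ** 4 * (1.0 - q / 2.0) ** 3, 0.0)

def kernel_grad(d, h):
    # Equation (1): grad_i W_ij = F(r_ij, h) (x_i - x_j), with d = x_i - x_j
    r = np.linalg.norm(d)
    return kernel_f(r, h) * d
```

Note that kernel_grad returns a finite (zero) vector even for exactly overlapping particles, which is the point of computing F without an explicit division by r.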
SPH kernels are also usually chosen to be positive, so that the interpolated density:

ρ(x) = Σ_i m_i W(x − x_i, h)

is always strictly positive. This condition, however, prevents the second moments of W from being zero, and effectively limits the accuracy of the SPH interpolation to first order at best.
An important limitation of SPH is that when the interpolation is computed at the location of a particle, the interpolated value is generally different from the interpolating field value. We have:

Σ_j u_j V_j W_ij ≠ u_i

for an arbitrary field u.
We say that a numerical interpolation scheme has order-k consistency when the interpolated values and the interpolating values match on the interpolation nodes for all polynomials of degree at most k. A significant limit of SPH is that in its standard formulation it does not guarantee even order-0 consistency, meaning that even constant fields are not reconstructed exactly. This is usually remedied by introducing a corrected kernel (the Shepard correction):

W̃(x − x_i, h) = W(x − x_i, h) / Σ_j V_j W(x − x_j, h)

that ensures the exact interpolation of constant functions. It should be noted, however, that while the Shepard correction gives a lower minimum and average relative error, it can give a higher maximum error than standard SPH for nonconstant fields. We shall see in the next section that a correction based on moving least-squares (MLS) can be used to guarantee a higher order of consistency for SPH.
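The Shepard correction amounts to dividing by the discrete partition of unity, which can be sketched as follows (any positive kernel works, since the normalization constant cancels; the Wendland C2 shape is again an illustrative choice):

```python
import numpy as np

def shepard_interpolate(x, pos, u, vol, h):
    # Shepard-corrected SPH interpolation: dividing by sum_j V_j W_j restores
    # exact reproduction of constant fields (order-0 consistency).
    r = np.linalg.norm(pos - x, axis=1)
    q = r / h
    w = np.where(q < 2.0, (1.0 - q / 2.0) ** 4 * (2.0 * q + 1.0), 0.0)
    return np.sum(u * vol * w) / np.sum(vol * w)
```

Unlike the plain SPH sum, this reproduces a constant field exactly, on any (even irregular) particle distribution.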

Smoothed particle hydrodynamics for first derivatives
For gradients, SPH takes advantage of the properties of convolutions and the divergence theorem to transfer the ∇ operator onto the kernel. We have:

∇u(x) ≈ ∫_Ω ∇u(y) W(y − x, h) dy = ∫_∂Ω u(y) W(y − x, h) n dS − ∫_Ω u(y) ∇_y W(y − x, h) dy,

where ∂Ω is the boundary of Ω and n is its outer normal.
If the compact support of W does not intersect ∂Ω, the first integral is zero, and this leads us to the classic first form of the SPH interpolation of the gradient:

∇u|_{x_i} ≈ Σ_j V_j u_j ∇_i W_ij,    (3)

which is first-order accurate in h as long as the point x is far from the boundary of Ω.
Near the boundary, the surface integral ∫_∂Ω u(y) W(y − x, h) n dS is nonzero. A number of strategies have been devised to compensate for this term for particles near the physical boundaries of the domain [Liu and Liu 2003]. However, near the free surface of the fluid, the term is usually simply discarded, which introduces an additional error in the evaluation of the gradient. We will see later on that this issue does not affect our proposed MLS approach.
The gradient SPH formula is rarely used in the form of Equation (3). It is indeed possible to derive other formulations, which are usually preferred. From ∇u = ∇u ± u∇1 we derive:

∇u|_{x_i} ≈ Σ_j V_j (u_j ± u_i) ∇_i W_ij,    (4)

whereas from ∇u = (∇(ρu) − u∇ρ)/ρ, we get:

∇u|_{x_i} ≈ (1/ρ_i) Σ_j m_j (u_j − u_i) ∇_i W_ij.    (5)

Finally, we can consider ∇u = ρ(∇(u/ρ) + (u/ρ²)∇ρ), to obtain:

∇u|_{x_i} ≈ ρ_i Σ_j m_j (u_j/ρ_j² + u_i/ρ_i²) ∇_i W_ij.    (6)

The preference for one expression over another is usually driven by considerations of stability or conservation. For example, Equation (4) (with the plus sign) and Equation (6) ensure that the influence of particle j on particle i is opposite to the influence of particle i on particle j. On the other hand, the forms of Equation (4) (with the minus sign) and Equation (5) guarantee that the gradient of a constant function evaluates to zero; the latter expression is computationally more efficient, but the former is more stable in the case of large differences in the density values between particles, such as in the case of multiple fluids. Further details about the 'golden rules' of SPH can be found in Monaghan [1992], for example.
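As an illustration, the difference form of Equation (5) can be sketched with the Wendland C2 gradient factor from before; on a uniform particle lattice it recovers the gradient of a linear field up to the usual SPH discretization error. The grid and smoothing length below are illustrative assumptions.

```python
import numpy as np

def sph_gradient_difference(j, pos, u, m, rho, h):
    # Equation (5): grad u|_j ~ (1/rho_j) sum_i m_i (u_i - u_j) grad_j W_ji.
    # This form evaluates to exactly zero for a constant field.
    d = pos[j] - pos                      # x_j - x_i for all particles i
    r = np.linalg.norm(d, axis=1)
    q = r / h
    # analytic F = (1/r) dW/dr for the 2D Wendland C2 kernel, finite at r = 0
    F = np.where(q < 2.0, -35.0 / (4.0 * np.pi * h ** 4) * (1.0 - q / 2.0) ** 3, 0.0)
    coeff = m * (u - u[j]) * F            # the i = j term vanishes automatically
    return (coeff[:, None] * d).sum(axis=0) / rho[j]
```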

Moving least-squares in smoothed particles hydrodynamics
The low consistency order of SPH and the errors introduced by the surface term in gradient computations can lead to highly irregular field values during the evolution of a system.This can create spurious shocks, isolated particles, potential wells, streaming and other similar artifacts.This is usually compensated for by post-processing the field values, either before or during the integration step.This can be achieved with techniques such as periodic reinitializations of the pressure or density fields (see, for example, Colagrossi and Landrini [2003]) and the XSPH method for the velocity field [Monaghan 1992], where a weighted neighborhood velocity is used to move the particle, instead of the simple particle velocity.
Although the Shepard kernel correction can be used during pressure-field reinitialization, this only guarantees order-zero consistency, and the resulting field smoothing is often excessive. An alternative approach is based on MLS, and it can be shown that it allows SPH to achieve first-order consistency [Belytschko et al. 1996, 2000]. This is equivalent [Belytschko et al. 1998] to correcting the kernel with an affine function B(x, p) = b_0(x) + Σ_α b_α(x)(p − x)_α, where α goes from 1 to the space dimension d (i.e. the summation is extended to all of the components of the vector p − x).
To derive b_α(•) (α = 0, 1, ..., d), we impose:

Σ_i V_i B(x, x_i) W(x − x_i, h) u(x_i) = u(x)

for the constant function u = u_0(p) = 1 and for the affine functions u = u_α(p) = p_α − x_α (one for each space dimension). This leads to a linear system of d + 1 equations in d + 1 unknowns, which takes the matrix form:

A(x) b(x) = (1, 0, ..., 0)^T,

where b(x) = (b_0(x), ..., b_d(x))^T, and:

A(x) = Σ_i V_i W(x − x_i, h) p_i p_i^T,  with p_i = (1, (x_i − x)_1, ..., (x_i − x)_d)^T.    (7)

Formally, the vector function b(x) is the first column of the inverse of the symmetric matrix A. If A is singular, to obtain b(x) we can compute the pseudo-inverse of A, using, for example, singular value decomposition. More details about the inversion of MLS matrices can be found in Section 5.

First derivatives and the Müller approach
For a scalar field u we can write the Taylor expansion in a neighborhood of x_j as:

u(x) = u_j + ∇u|_{x_j} · (x − x_j) + O(|x − x_j|²),    (8)

with the shorthand notation u_j = u(x_j), truncating at first order. Writing this for x = x_i and comparing with the actual field value u_i = u(x_i), we can evaluate the truncation error as:

E_j(x_i) = u_ij − ∇u|_{x_j} · x_ij,

where x_ij = x_i − x_j and u_ij = u_i − u_j. Hence, the (weighted) mean square error over a neighborhood can be evaluated as:

⟨E_j⟩² = Σ_i w_ij (u_ij − ∇u|_{x_j} · x_ij)²,

where the w_ij are given weights, the values of which will be discussed later on. Assuming the components of ∇u|_{x_j} are unknown, we can find values for them that minimize ⟨E_j⟩².
By setting the derivatives of ⟨E_j⟩² with respect to each component of ∇u|_{x_j} equal to zero, we obtain the linear system:

M_j ∇u|_{x_j} = Σ_i w_ij u_ij x_ij,  where  M_j = Σ_i w_ij x_ij ⊗ x_ij.    (9)

This approach can be used with a variety of possible weights, but by setting w_ij = V_i W_ij for an SPH kernel W, we can create a clear relationship between the MLS approach and the SPH method. This is, for example, the choice that was made by Müller et al. [2004], and in this case the M_j matrix is equal to the submatrix of the reinitialization matrix A of Equation (7) complementary to its a_11 element. With this choice, the MLS gradient formula becomes:

∇u|_{x_j} ≈ Σ_i V_i u_ij M_j⁺ x_ij W_ij,    (10)

where M_j⁺ is the (pseudo-)inverse of M_j (see also Section 5), which shows a distinct formal similarity with Equation (4), with ∇W_ij replaced by:

M_j⁺ x_ij W_ij.    (11)

With such a choice, this term can be used as a correction term for the kernel gradient in any SPH formula. This has been shown to work correctly [Narayanaswamy 2008], although this use does not have the mathematical foundation of Equation (10).
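The first-order MLS gradient of Equation (9) is a small weighted least-squares solve per particle; a minimal NumPy sketch (any positive weights, e.g. w_ij = V_i W_ij, can be passed in):

```python
import numpy as np

def mls_gradient(j, pos, u, w):
    # Equation (9): M_j grad u|_j = sum_i w_ij u_ij x_ij,
    # with M_j = sum_i w_ij x_ij (outer) x_ij and x_ij = x_i - x_j.
    xij = pos - pos[j]
    uij = u - u[j]
    M = (w[:, None] * xij).T @ xij        # d x d moment matrix
    rhs = (w * uij) @ xij
    return np.linalg.solve(M, rhs)
```

The solve is exact for affine fields regardless of the particle layout (as long as M_j is nonsingular), and rescaling all weights by a common factor leaves the result unchanged.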
We note that the general gradient formula of Equation (9) is invariant to weight scaling, in the sense that if weights w̃_ij = μ w_ij are used (with μ a given constant), it produces the same estimate for ∇u(x_j). In particular, standard SPH kernels and Shepard-corrected kernels produce the same results in Equation (10).
Additionally, Equation (9) returns exact results for constant functions and first-order polynomials, which is proven as follows. Let u(x) = k · x + c for some constant vector k and constant real number c; then u_ij = k · x_ij, so the right-hand side of Equation (9) is Σ_i w_ij (k · x_ij) x_ij = M_j k (by the symmetry of M_j), and the solution of the system gives ∇u|_{x_j} = k, which is exactly the gradient of u. This shows that the MLS approach shown so far has first-order consistency. Moreover, the first-order consistency is preserved even close to ∂Ω, as the MLS gradient formula is not affected by missing surface integral terms.
A final remark is warranted concerning the neighborhoods in MLS and SPH.As discussed in Section 1.1, the accuracy of SPH is affected by both the smoothing length h of the kernel W and the average inter-particle distance H; indeed, improving accuracy in SPH requires that both h and H tend to zero, with H going to zero faster than h.This equivalently means that the density of particles in a neighborhood must increase as the neighborhood grows smaller.
In contrast, the MLS accuracy is only affected by the distance between the point where gradients are to be evaluated and its farthest neighbor.When using SPH kernels as MLS weights, this effectively means that the accuracy is only affected by h, and there is no requirement for the density of the particles in a neighborhood to go up when h → 0. Indeed, the compact support for kernels used as MLS weights can in general be taken much smaller than the support chosen for SPH, since the gradient will still be computed accurately as long as the point has as many nonaligned neighbors as there are components in the gradient.
For comparison, consider that in a typical application to two-dimensional fluid dynamics, the ratio h/H is such that the kernel support includes about 30 or 40 particles, whereas for an MLS gradient, three or four particles (and thus effectively a kernel with h/3 smoothing length) would be sufficient.In practice, to reduce the chance of a singular M j , the kernel support for MLS can be taken such that each particle has at least 10 neighbors [Müller et al. 2004].

Higher-order derivatives
One of the long-standing issues in SPH is the computation of higher-order derivatives, which is required when modeling the flow of a viscous fluid such as lava. Consider for simplicity the case of an isotropic, incompressible fluid with constant viscosity. The Navier–Stokes equation for the velocity is then given by:

Dv/Dt = −(1/ρ)∇P + ν∇²v + f,    (12)

where ρ is the density, v the velocity, P the pressure, ν the kinematic viscosity coefficient, and f collects the external forces per unit mass (typically, gravity).
To compute the Laplacian of a field with SPH, it is possible to apply the same mechanism used for the gradient, obtaining:

∇²u(x) ≈ ∫_∂Ω (∇u(y) W(y − x, h) − u(y) ∇_y W(y − x, h)) · n dS + ∫_Ω u(y) ∇²_y W(y − x, h) dy.

Once again, under the additional assumption that the support of W does not intersect ∂Ω, we would obtain the SPH interpolation:

∇²u|_{x_i} ≈ Σ_j V_j u_j ∇²_i W_ij.

However, the error induced by discarding the surface integrals for points near ∂Ω is now much higher than in the gradient case, and finding correcting terms to take this into consideration is even harder.
A different approach to discretizing the Laplacian was first used by Cleary and Monaghan [1999] to model thermal conduction, and it was adapted by Morris et al. [1997] for the dynamic case. It is derived from a Taylor expansion of the SPH gradient, and takes the form:

∇²u|_{x_i} ≈ Σ_j 2 V_j u_ij (x_ij · ∇_i W_ij)/(r_ij² + εh²),    (13)

where ε ≃ 0.01 and h is the smoothing length. The εh² term is included to prevent singularities in the case of overlapping particles. In the case of symmetric (non-corrected) kernels, Equation (1) can be used to simplify Equation (13) to:

∇²u|_{x_i} ≈ Σ_j 2 V_j u_ij F_ij r_ij²/(r_ij² + εh²),    (14)

where F_ij = F(r_ij, h) with F defined as in Equation (2). This expression has the benefit of being computable even in the case of overlapping particles, provided that F can be computed analytically.
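A sketch of this Morris-style Laplacian, using the analytic F factor of the 2D Wendland C2 kernel as before (the kernel and resolution are illustrative assumptions); on a uniform lattice it recovers the Laplacian of a quadratic field to within the usual discretization error:

```python
import numpy as np

def sph_laplacian(j, pos, u, vol, h, eps=0.01):
    # Morris/Cleary-Monaghan form: lap u|_j ~ sum_i 2 V_i u_ji (x_ji . grad_j W_ji)
    #                                           / (r_ji^2 + eps h^2)
    d = pos[j] - pos
    r2 = np.sum(d * d, axis=1)
    q = np.sqrt(r2) / h
    # analytic F = (1/r) dW/dr for the 2D Wendland C2 kernel
    F = np.where(q < 2.0, -35.0 / (4.0 * np.pi * h ** 4) * (1.0 - q / 2.0) ** 3, 0.0)
    # x . grad W = F r^2, so the eps h^2 guard only matters for overlapping particles
    return np.sum(2.0 * vol * (u[j] - u) * F * r2 / (r2 + eps * h ** 2))
```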

Moving least-squares for second-order derivatives
It is possible to use the first-order MLS gradient described in Section 2.1 as a kernel correction, replacing the kernel gradient in Equation (13) with the corrected kernel gradient of Equation (11). Since we have:

x_ij · (M_j⁺ x_ij W_ij) = r_ij² (ê_ij · M_j⁺ ê_ij) W_ij,

where ê_ij is the unit vector (direction) of x_ij, this allows us to simplify Equation (13) to:

∇²u|_{x_j} ≈ Σ_i 2 V_i u_ij C_ij W_ij r_ij²/(r_ij² + εh²),    (15)

where C_ij = ê_ij · M_j⁺ ê_ij. However, this approach does not take full advantage of the power of MLS, and suffers from all of the drawbacks of the SPH method. Instead, we can extend the first-order MLS method from Section 2.1 to allow direct computation of higher derivatives. For simplicity, we illustrate the approach in the two-dimensional case. The vector components will be (x, y) and the derivatives will be written as u_,x for ∂u/∂x, and so forth for the other derivatives. The Taylor expansion for u around x_j can then be written, up to the second order, as:

u(x) = u_j + u_,x x + u_,y y + ½ u_,xx x² + u_,xy xy + ½ u_,yy y² + O(|x − x_j|³),

where (x, y) here denote the components of x − x_j and the derivatives are evaluated at x_j. For simplicity, we can write:

x̃_ij = (x_ij, y_ij, ½x_ij², x_ij y_ij, ½y_ij²)^T,  D̃u = (u_,x, u_,y, u_,xx, u_,xy, u_,yy)^T,

so that the truncation error when evaluating the expression for x = x_i is now:

E_j(x_i) = u_ij − D̃u|_{x_j} · x̃_ij.

Once again, we want to minimize the mean square error ⟨E_j⟩² = Σ_i w_ij E_j(x_i)² with respect to the partial derivatives. Equating the derivatives of ⟨E_j⟩² to zero with respect to the components of D̃u gives us the linear system:

M̃_j D̃u|_{x_j} = Σ_i w_ij u_ij x̃_ij,  where  M̃_j = Σ_i w_ij x̃_ij ⊗ x̃_ij.    (16)

The new formulae are formally identical to the first-order formulae, with an "extended" relative position vector that includes higher powers and an "extended" differential operator that includes higher-order derivatives.
A similar approach to the one described here can be found in the finite pointset method (FPM) developed by Tiwari and Kuhnert [2001]. In the FPM, second-order MLS is used to locally and iteratively solve the Poisson equation for the pressure in an incompressible fluid. In this case, it is also necessary to include boundary conditions in Equation (16), both on the free surface and on the physical boundaries of the fluid. In particular, this requires the detection of the surface particles of the fluid, which forfeits one of the main benefits of particle methods such as SPH, namely the automatic and implicit tracking of surfaces.
Instead, we propose to use second-order MLS in an approach that fits better with the standard SPH modeling of fluid dynamics. In this respect, our approach follows more closely the method described by Dilts [1999, 2000], with interpolants that are centered on the particle for which the derivatives are being computed. As noted in Belytschko et al. [1998], this improves the accuracy and stability of the computations, and brings our approach within the general framework described in Schrader et al. [2010].
The use of MLS over the standard SPH discretization gives us higher accuracy for first-order derivatives, as well as the possibility to directly compute second-order derivatives. For example, the Laplacian in two dimensions can be evaluated from Equation (16) with:

∇²u|_{x_j} = [D̃u|_{x_j}]_3 + [D̃u|_{x_j}]_5,    (17)

i.e. the sum of the third and fifth components of the solution vector, the components corresponding to the pure derivatives of order two. A similar formula applies in more than two dimensions.
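The second-order system of Equations (16) and (17) can be sketched directly: it is the first-order solve with the extended basis vector (x, y, x²/2, xy, y²/2). For a quadratic field the recovery is exact up to floating point, illustrating the second-order consistency claimed below.

```python
import numpy as np

def mls2_derivatives(j, pos, u, w):
    # Equation (16): solve the 5x5 system for (u_x, u_y, u_xx, u_xy, u_yy) at
    # particle j, using the extended relative-position vector.
    xij = pos - pos[j]
    uij = u - u[j]
    x, y = xij[:, 0], xij[:, 1]
    p = np.stack([x, y, 0.5 * x * x, x * y, 0.5 * y * y], axis=1)
    M = (w[:, None] * p).T @ p
    rhs = (w * uij) @ p
    return np.linalg.solve(M, rhs)

def mls2_laplacian(j, pos, u, w):
    # Equation (17): sum of the pure second-derivative components
    d = mls2_derivatives(j, pos, u, w)
    return d[2] + d[4]
```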
For second-order MLS, we can make considerations similar to those for first-order MLS. For example, with the appropriate choice of the weighting functions, we can use MLS as a gradient correction for SPH, effectively replacing ∇W_ij in any SPH equation with the first d components of:

M̃_j⁺ x̃_ij W_ij.    (18)

Second-order MLS shows second-order consistency (the proof follows that for the first-order consistency of first-order MLS), and it is not affected by the surface integral terms that affect SPH derivatives of any order, so the accuracy is preserved even near the boundary of Ω, as long as there are enough neighboring particles. The minimum number of particles is now higher, with five neighbors as the bare minimum for the computation of all of the derivatives in two dimensions. Additionally, the particles must not be laid out on a line (or in a plane in three dimensions), as in the first-order case, nor in a cross shape, as otherwise mixed derivatives might not be computable. In general, the neighborhoods for second-order MLS must be larger than those for first-order MLS, although the number of neighbors in the compact support of an SPH kernel is still sufficient.

An example
To illustrate the benefits of our approach, we compare the results for a classic two-dimensional dam-break problem, where a mass of fluid with an initial rectangular configuration H 0 = 2 m high and 1 m wide is left free to flow within a 4 m × 4 m box.
The fluid is assumed to be isotropic and quasi-compressible. Its motion is driven by the Navier–Stokes equations paired with an equation of state:

P = (ρ_0 c_s²/γ) [(ρ/ρ_0)^γ − 1],

where g = (0, −g) is gravity (g = 9.81 m/s²), ρ_0 = 1,000 kg/m³ is the density at rest, γ = 7, and the speed of sound c_s is chosen to be an order of magnitude higher than the maximum speed expected during the flow (of the order of √(2gH_0) in our problem). Boundary conditions are implemented using an approach similar to the one described by Liu et al. [2002]. Near a physical boundary, a particle interacts not only with its own neighbors, but also with mirror images of itself and of all of its neighbors. These ghost particles have material properties that match those of their originals, except for the velocity.
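The Tait-style equation of state above is a one-liner; the speed of sound below (63 m/s, roughly ten times √(2gH_0)) is an illustrative value consistent with the criterion in the text, not a parameter stated by the paper.

```python
def tait_pressure(rho, rho0=1000.0, cs=63.0, gamma=7.0):
    # weakly compressible equation of state: P = rho0 cs^2 / gamma ((rho/rho0)^gamma - 1)
    # cs is an assumed speed of sound ~10x the expected maximum flow speed
    return rho0 * cs ** 2 / gamma * ((rho / rho0) ** gamma - 1.0)
```

With γ = 7, small density fluctuations produce large pressure responses, which is what keeps the flow quasi-incompressible; note that dP/dρ at the rest density equals c_s².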
If no-slip conditions are wanted at the border, the mirror velocities are opposite to their originals, whereas free-slip conditions are implemented by giving the ghost particles a mirror velocity in which only the component normal to the boundary is reversed. Additional ghost particles with opposite velocities are also added at the corners where two linear components of the containing box meet. As we want to simulate a viscous fluid, we use no-slip conditions in this example.
As ghost particles are not sufficient to guarantee that the boundary will not be penetrated [Liu et al. 2002], particles that exert a short-range repulsive Lennard-Jones force are also added. In contrast to Liu et al. [2002], however, we do not place these particles a priori; they are generated at runtime as a projection of the particle itself onto the boundary.
This strategy is particularly efficient in our case, because the Lennard-Jones particle position can be found quickly as the midpoint of the original particle position and its ghost, which reduces the number of particles that need to be tracked during computation.Additionally, the force exerted by these virtual Lennard-Jones particles is constant for a particle traveling parallel to a border, which solves one of the famous issues with Lennard-Jones boundary particles [Monaghan and Kajtar 2009].
Our choice of smoothing kernel, both in SPH and as the weighting function in MLS, is based on the ψ_{3,1} function by Wendland [1995]. We have W(r, h) = w(r/h)/h², where:

w(q) = (7/(4π)) (1 − q/2)⁴ (2q + 1)  for 0 ≤ q ≤ 2, and zero otherwise.

We can also define analytically:

F(r, h) = (1/r) ∂W/∂r = −(35/(4πh⁴)) (1 − q/2)³  for q = r/h < 2.

The fluid has a kinematic viscosity ν = 10⁻⁶ m²/s (water), so we apply the viscosity formula of Equation (14). The discretized equations of motion can then be written as:

dρ_i/dt = Σ_j m_j v_ij · ∇_i W_ij,    (19)

dv_i/dt = −Σ_j m_j (P_i/ρ_i² + P_j/ρ_j² + Π_ij) ∇_i W_ij + ν ∇²v|_{x_i} + g,    (20)

where Π_ij is an artificial viscosity that takes the form:

Π_ij = −α h c̄_ij (v_ij · x_ij)/(ρ̄_ij (r_ij² + εh²))  when v_ij · x_ij < 0, and zero otherwise,

where c̄_ij is the average speed of sound computed at the particles i and j, and ρ̄_ij their average density. The artificial viscosity coefficient is α = 0.2 in our case, and the correcting factor εh² (with ε = 0.01) prevents singularities in the case of overlapping particles. More details on artificial viscosity in SPH can be found in Monaghan and Gingold [1983].
In this test, we compare with second-order MLS used as a gradient correction, with ∇W_ij replaced by the corrected term from Equation (18), except for the viscous term, which relies on Equation (17) to compute the Laplacian of the velocity.
A second comparison is carried out with second-order MLS derived from the differential Equations (19) and (20); in this case, the discrete formulation uses Equations (16) and (17) to compute the gradients, divergences and Laplacians. The artificial viscosity is obtained by adding α c_s h to ν when ∇ · v_i < 0. In this case, Cholesky decomposition is used to solve the linear systems, as detailed in Section 5.
In all three cases, integration is carried out using an explicit predictor/corrector scheme, with a dynamic time-step:

Δt = k √(h/f_M),

where f_M denotes the maximum particle force per unit mass, and we take k = 0.3. With h = 0.02, this gives us Δt ~ 10⁻⁴ s. From the particle positions X^n and velocities V^n at time t_n, we compute the particle accelerations F^n = F(X^n, V^n) and the maximum time-step Δt*; the predicted new positions and velocities are then given by:

X* = X^n + (Δt/2) V^n,  V* = V^n + (Δt/2) F^n.

The correction step evaluates F* = F(X*, V*) and a new maximum time-step Δt**. We then compute the maximum time-step as Δt = min(Δt*, Δt**), take the accelerations F^{n+1} = F*, and integrate with:

V^{n+1} = V^n + Δt F^{n+1},  X^{n+1} = X^n + Δt (V^n + V^{n+1})/2.

The evolution of the dam break with SPH (Figure 1) shows typical issues in SPH-driven simulations, with a noisy density field that leads to streaks of high-density particles that infiltrate the main body of the flow. In addition, the absence of correction terms for the tensile instability [Monaghan 2000] leads to particle overlapping in the lower area of the flow, with a consequent reduction in precision and an apparent rarefaction of the particle positions.
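One possible realization of the predictor/corrector step sketched above (a half-step predictor followed by a trapezoidal position update; the exact update used in the production code may differ, and the force callback here stands in for the full SPH force computation):

```python
import numpy as np

def predictor_corrector_step(x, v, accel, dt):
    # predictor: half-step estimate of positions and velocities
    f_n = accel(x, v)
    x_p = x + 0.5 * dt * v
    v_p = v + 0.5 * dt * f_n
    # corrector: re-evaluate the forces at the predicted state and integrate
    f_c = accel(x_p, v_p)
    v_new = v + dt * f_c
    x_new = x + 0.5 * dt * (v + v_new)
    return x_new, v_new
```

For a constant acceleration (free fall) this scheme is exact, and for general forces it is second-order accurate in the time-step.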
In contrast, the MLS-driven simulations (Figures 2 and 3) show much cleaner behavior, with the density field evolving smoothly without any need for post-processing phases or reinitialization.Some noise is still present, particularly in the region closer to the physical boundary; this effect does not have a strong influence on the main body flow, although it allows a few particles to detach when the fluid layer is very thin.
We can also compare our results with the experimental timings obtained by Martin and Moyce [1952], which match our case with a = 1 m and n² = 2. For the wavefront Z, the time is scaled by √(2g/a) ≈ 4.43 s⁻¹ and the wavefront position Z is scaled by a; the results are shown in Table 1. The column height H is expressed as a fraction of the original column height H_0, and the time is scaled by √(g/a) ≈ 3.13 s⁻¹; these results are shown in Table 2.
We observe that MLS gives the same timing (within two digits of accuracy) whether used as a kernel correction or directly, and the results tend to be closer to the experimental values than SPH. The major exception is for the lower values of H, for which both particle methods, and MLS in particular, tend to lag behind the experimental data. This appears to be related to the small peak adherent to the left boundary, clearly visible in Figures 2 and 3: an artifact of the boundary conditions that appears to exert excessive drag in this case. If this issue is taken into account, the corrected timings for MLS fall back within the experimental range.

Implementation notes
The MLS matrices are symmetric and positive semidefinite. Rather than computing the inverse, it is therefore generally more efficient to compute the Cholesky decomposition in the root-free form, i.e. to find a lower triangular matrix L with unit diagonal and a diagonal matrix D such that M = LDL^T, and then to solve the linear systems associated with any derivatives we need to compute, rather than using the matrix-vector products written synthetically throughout this report.
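The root-free decomposition is a short loop; a minimal NumPy sketch for a strictly positive definite matrix (no pivoting, which suffices for well-conditioned MLS matrices):

```python
import numpy as np

def ldlt(M):
    # root-free Cholesky: M = L D L^T with L unit lower triangular, D diagonal.
    # Avoids the square roots of the standard Cholesky factorization.
    n = M.shape[0]
    L = np.eye(n)
    D = np.zeros(n)
    for j in range(n):
        D[j] = M[j, j] - np.dot(L[j, :j] ** 2, D[:j])
        for i in range(j + 1, n):
            L[i, j] = (M[i, j] - np.dot(L[i, :j] * L[j, :j], D[:j])) / D[j]
    return L, D
```

Once L and D are available, each derivative evaluation reduces to two triangular solves and a diagonal scaling, which is cheaper than forming M⁻¹ explicitly.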
Such an approach, however, is only possible when the matrix is strictly positive definite, as otherwise the standard constructive Cholesky decompositions fail when elements of D are null or very close to zero. In such a case, a better approach is to compute a pseudo-inverse of M and then use the formulae detailed in this report in their actual forms. Any symmetric positive semidefinite matrix can be written [Press et al. 1992, Golub and Van Loan 1996] in the form M = G^T E G, where G is an orthogonal matrix (G^T G = GG^T = I) and E is a diagonal matrix whose diagonal elements are the singular values of M. To find G and E, we use the iterative Jacobi method [Golub 2000], which writes G as a product of Givens rotations.
The pseudo-inverse E⁺ of E is then computed as the diagonal matrix with entries:

(E⁺)_ii = 1/E_ii if E_ii ≠ 0, and 0 otherwise,

and the pseudo-inverse of M is obtained as M⁺ = G^T E⁺ G.
In practice, both during the Cholesky decomposition and for the pseudo-inverse creation, the zero test is replaced by |A_ij| < ε, where A_ij is the matrix element to be tested and ε is an appropriately small number.
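The thresholded pseudo-inverse can be sketched with an eigendecomposition (for a symmetric matrix the eigendecomposition coincides with the M = G^T E G factorization above; NumPy's eigh stands in for the iterative Jacobi method used on the GPU):

```python
import numpy as np

def psd_pinv(M, eps):
    # pseudo-inverse of a symmetric positive semidefinite matrix, reciprocating
    # only the spectral values larger than the threshold eps
    w, V = np.linalg.eigh(M)          # M = V diag(w) V^T
    winv = np.zeros_like(w)
    mask = np.abs(w) > eps
    winv[mask] = 1.0 / w[mask]
    return (V * winv) @ V.T
```

A rank-deficient matrix is handled gracefully, while a threshold larger than the whole spectrum simply zeroes the result, which illustrates the trade-off in choosing ε discussed next.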
Picking too small an ε can lead to disproportionately large reciprocals in M⁺, with a consequent excess force being exerted on the corresponding particles, which can result in large instabilities during a simulation; on the other hand, too large an ε will increase the numerical error in the pseudo-inverse computation, with a consequent perturbation of the particle motion.
Our implementation, which is based on the work by Hérault et al. [2010a], exploits the parallel computing power offered by NVIDIA graphics processing units (GPUs) to achieve two orders of magnitude of speed-up over the equivalent code for standard central processing units (CPUs). However, it is also limited by the single precision native to the GPU hardware. In this case, we found that the best value for ε lies in the range d_M ε_M/2 ≤ ε ≤ 2 d_M ε_M, where d_M is the dimension of the MLS matrix (e.g. d_M = 5 for second-order MLS in two space dimensions) and ε_M is the machine epsilon for single precision.

Performance
With respect to standard SPH, the MLS approach has an additional computational cost associated with the generation and use of the MLS matrix for each particle. The impact of the extra computations on our GPU implementation running on a GTX 280 is shown in Figure 4. While the average execution times of the integration (Euler) and neighbor-list construction (Neibs) kernels are essentially the same with both SPH and MLS, the force computation (Forces kernel) is consistently about four times more expensive.
The additional cost per iteration is offset by a slightly larger time-step in MLS, which leads to a lower total number of iterations to cover the simulated time. Although this result might appear a little surprising, it is easily explained by the highly irregular fields that cause spuriously large forces in SPH. As a consequence, the total run-time for MLS (158 s) is only about 60% longer than for pure SPH (98 s).

Conclusions
MLS are typically used in SPH in the form of either a post-processing smoothing filter for the density, or a kernel correction for gradient evaluation.In both cases, their use is based on a first-order Taylor series expansion.We propose a method to extend the kernel correction approach by using second-order MLS, which provides improved approximations for gradients as well as a robust formula for second-order derivatives.
The new approach can be used both as a kernel correction in SPH, preserving the classic SPH formalism and replacing the ∇W terms with a matrix-vector product, and directly, using discrete equations that are specific to MLS.
The application of the described method to the solution of the Navier–Stokes equations for a viscous fluid shows visible benefits over standard SPH, reducing the need for post-processing filters such as XSPH, Shepard or MLS itself. The computational cost of our method is, however, about 60% greater than that of standard SPH.
A more thorough analysis of the influence of the weighting functions chosen during the MLS process remains to be undertaken, together with an analysis of the opportunity to choose different forms for the derivative computation by taking advantage of the standard gradient identities from which the distinct SPH gradient formulae are derived.Such an analysis should identify the most appropriate form to guarantee that the discrete system will preserve quantities such as total momentum and angular momentum when they are conserved at the analytical level.

Figure 2 .
Figure 2. Dam-break two-dimensional evolution, at t = 0.01 s; 0.4 s; 0.8 s; 1.6 s, using second-order MLS as a kernel correction in SPH. Color coding by density.

Figure 3 .
Figure 3. Dam-break two-dimensional evolution at t = 0.01 s; 0.4 s; 0.8 s; 1.6 s, using second-order MLS from the analytical formulation.Color coding by density.

Figure 4 .
Figure 4. Code performance for SPH versus MLS implementation of the dam break.The average Euler integration, neighbor list construction, forces computation times, and number of iterations are plotted against the total simulated time.

Table 2 .
Time T at which column height H reaches a given fraction of the original height in experiments (T range (exp)), as compared to the timing achieved with SPH (T (SPH)), second-order MLS as a kernel correction (T (MLS-corr)), and MLS from the analytical formulation (T (MLS)). Scales as detailed in the text.

Table 1 .
Time to reach wavefront distance Z in experiments as compared to timing achieved with SPH, second-order MLS as a kernel correction, and MLS from the analytical formulation.Scales as detailed in the text.