
A multi-well data inversion purpose-built for calibration of an effective basin model at overpressure prediction

A.G. Madatov, A.-V.I. Sereda

Higher Mathematics Chair of the Polytechnical Faculty, MSTU

Abstract. The calibration of a basin-scale hydro-dynamic model based on a multi-well data set has been introduced as an ill-posed inversion problem. Global and local optimisation strategies have been reviewed in this connection. In terms of the conditionally unique definition according to Tikhonov, this problem has a range of locally unique ε-equivalent solutions, any of which potentially resolves the calibration problem. It is shown that for the multi-well case the relevant inverse problem solution can be arranged with additional criteria, which provide globalisation and lead to an area-unique solution of the calibration problem. The approach, strategy and computation algorithm have been developed. The approach implies inverting porosity and pore pressure data sets attached to the calibration well positions within the range of an effective basin model established and upscaled in advance. The numerical implementation has been checked on theoretical and real data examples. It reveals acceptable convergence speed at a practically justified data-quality stop criterion. The application of the developed algorithm and numerical scheme to a 12-well real data example (the North Sea) is discussed in detail.

1. Introduction

The generic strategy of geo-phenomenon prediction implies calibration of the relevant Earth model against practically available data. Generally, there are two ways of establishing Earth models to be used for prediction: Inverse (data derived) and Forward (formally derived) model types. The first option does not require a calibration phase, since data fitting is part of establishing the inverse model. The second option requires formulating and resolving an inverse problem in order to be calibrated (see Madatov, Sereda, 2005 for more details).

The basic tendency in the design of modern Earth models aimed at over-hydrostatic formation pressure (overpressure) prediction is toward closer integration of forward and inverse models, so that they are applicable both for data simulation and for explaining mechanisms (Alberty, McLean, 2003; Swabrick et al., 1999).

There are two reasons for this.

The first is that the number of empirical Earth models created for data-driven overpressure estimation-prediction in the last century seems to have reached a saturation level, yet still does not satisfy industry requirements for accuracy and scientific transparency (Huffman, 2002).

The second is that forward models of geo-fluid dynamics have been developing extensively during the last two decades in connection with applications to soil mechanics, ground water exploration, basin modelling, reservoir engineering and so on.

One can state that the "black box" or "old man advice" models, which do not require an explicit calibration phase for excess pore pressure prediction, are now a part of history. At the same time, the more sophisticated forward models (HD basin and production models) are poorly adapted for the calibration phase. Thus, data inversion is the missing component in the development trend of the relevant forward models.

Indeed, the classical data inversion approach (Tikhonov, Arsenin, 1979; Menke, 1984; Tarantola, 1987) is perfectly suited to filling this gap in the forward model class and to creating a new generation of hybrid Earth models (Invertable Basin Models) well adapted for calibration tasks and for producing well-balanced quick predictions. Besides, the methods of inverse problem solution combine statistical and empirical approaches in natural proportions (Yu et al., 1995).

Attempts to use the data inversion approach for calibration of basin models, kinetic models of HC generation and fault analysis are not new (Lerch, 1991; Zhao, Lerch, 1993; Fox, 1995; Watterson, Walsh, 1994).

The authors first proposed using a data inversion approach closely tuned to the specific forward problem of pre-drill and while-drilling pore pressure prediction in 1995 (Madatov et al., 1995). Since then the development of this approach has focused on two main directions:

1. Proper upscaling of the raw Earth model in terms of an optimal dimension for the forward modelling operator; minimisation of the number of independent, sensitive parameters to be calibrated (Madatov et al., 1997; Madatov, Sereda, 2003).

2. Perfecting the algorithm and computer routine aimed at an inverse problem solution unique and certain enough for continuously generating possible overpressure prediction values with a most likely case estimation (Madatov, Sereda, 2000a).

In (Madatov, Sereda, 2005) we introduced the concept of the Effective Basin Model aimed at accommodating practically invertable Earth models. In particular, a combined 3-D hydro-dynamic basin scale model has restricted the number of tuneable model parameters, which could be inverted against normally available well data sets. In this paper we use the introduced EBM type of 3-D Earth models to extend a single-well data inversion approach (Madatov et al., 1999; Madatov, Sereda, 2000a) toward a more practically demanded multi-well data set inversion strategy.

The paper contains both a theoretical background and a real data test part, with references to the basic terminology, computations and specifications detailed in the appendixes.

2. General definitions

Basically, all the background items mentioned in this subsection have a rather common interpretation in the geological community. In connection with the effective basin model concept, they have been discussed in the context of the overpressure prediction problem (Madatov, Sereda, 2005). Still, in order to introduce and formulate the inverse problem aimed at Earth model calibration in a more formal and specific way, let us now define the necessary terminology more rigorously. Each definition below is highlighted from the left-side margin with a bold reference title at its beginning. Some of them are accompanied by explanatory text and figures (where necessary).

An Earth Model available in the finite and specific 3-D spatial area is a general representation of a priori and a posteriori local geology setting suitable for numerical modelling/simulation and data processing.

Normally it can be made available in a digital form, as a stack of grids defining the discrete distribution of geological descriptors within the area RE.

A Geo-phenomenon is a field accessible to direct geophysical measurement, or interpretable geophysical data, associated with the local geology setting (Earth model); it can be obtained from the relevant data transformation and can also be simulated and/or computed theoretically based on physical and/or numerical simulators.

A Target Phenomenon (p(x,y,z)) is a kind of the geo-phenomenon chosen as a prediction goal and representable in the form of spatial distribution of synthetic data.

In our case the target phenomenon is the overpressure distribution for some target location. In agreement with the classification introduced in (Madatov, Sereda, 2005) and in a context suitable for a prediction Earth model, we will distinguish between forward and inverse models of the target phenomenon based on the following definitions.

A Forward Model of Phenomenon is a formal description of the geo-phenomenon within the range of a physical (causal) system allowing numerical computation of the synthetic (theoretical) data sets {p} before any raw calibration data set1 becomes available, based on system descriptors (a model parameter set) and some mathematically determined rules (a forward modelling operator).

An Inverse (data derived) Model of Phenomenon is a formal description of the geo-phenomenon within the range of a speculative (fitting) system allowing numerical computation of the theoretical data sets {p} based on posterior system descriptors, which can only be made available during the system identification phase (Eykhoff, 1981).

In our case both the forward and inverse models of the overpressure phenomenon have certain areas of applicability for resolving the relevant prediction problem (for more details see the reviews: Swabrick et al., 1999; Huffman, 2002; Alberty, McLean, 2003). The physical systems and the relevant descriptors in forward models of the target phenomenon can be reduced to the fundamental equations of geo-fluid dynamics and methods of their numerical solution (Bear, Bachmat, 1991). The speculative mimic systems and the relevant descriptors can be defined as a class of predefined or abstract self-organising data-fitting math models capable of simulating the target phenomenon. In particular, when the data fitting function(s) are predefined the

1 Here and below the italic font marks the terminology introduced in the given subsection.

Fig. 1. The operator schemes of the forward (a) and inverse (b) model functionality

relevant inverse model is associated with the class of empirical models such as Terzaghi's effective stress model or the Athy compaction model (Magara, 1978). In the more general case only the basis functions are predefined, and the relevant inverse models are associated with linear or non-linear (mimic) systems simulating the target phenomena by using calibration data filtering, artificial neural networks (ANN systems), multi-component annealing systems, etc. (Ingberg, 1989; Goldberg, 1989; Goffe et al., 1994; Huang et al., 1996). Clearly, the major difference between an inverse model and a forward model consists in how a priori information is accommodated and how the data available for identification and calibration, respectively, are approached. This difference can be understood from the schematic representation given in Fig. 1.
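As a concrete illustration of such predefined-fit empirical models, the two classics mentioned above can be written in a few lines. This is only a sketch: the coefficient values (surface porosity, compaction constant, stress units) are illustrative assumptions, not values taken from this paper.

```python
import math

def athy_porosity(depth_m, phi0=0.5, c_per_km=0.45):
    """Athy compaction trend: porosity decays exponentially with burial
    depth under normal (hydrostatic) compaction."""
    return phi0 * math.exp(-c_per_km * depth_m / 1000.0)

def terzaghi_effective_stress(overburden_mpa, pore_pressure_mpa):
    """Terzaghi's principle: effective (grain-borne) stress equals total
    overburden stress minus pore pressure."""
    return overburden_mpa - pore_pressure_mpa
```

Fitting the coefficients (here phi0 and c_per_km) against well data is exactly the identification step that such data-fitting inverse models require.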

Here, the forward (X) and inverse (Y) model parameter vector spaces are connectable with the Data Comparison vector space (P) by means of the mapping operators2 M[x] and S[Y(f)], respectively (see the definition below).

In contrast to the data-driven inverse model space Y, the forward model parameter space X can accommodate a priori information in the form of an upscaled compact sub-space XU. The relevant forward modelling operator M[x] defined over it is capable of reproducing the synthetic data prototype p ∈ P without real data f ∈ F. Therefore, the synthetic vector p = M[x] cannot be used for prediction purposes like the vector p = S[Y(f)], unless the relevant model parameter sub-space X is properly calibrated.

A Forward Model Parameter vector and relevant Set/Space are defined in connection with forward models of the phenomenon as a model parameter vector x sorted according to its coordinates in a multidimensional Euclidean Model Parameter Space X, which stores all possible model parameter variations in the general case. See (Madatov, Sereda, 2005) and Appendix 1 to get details about specifications.

An Inverse Model Parameter vector and relevant Set/Space are defined in connection with the class of data fitting systems used for simulation of the target phenomenon. The set of the relevant descriptors defined in accordance with the system identification can formally be treated as an inverse model parameter vector y determined on the multidimensional Inverse Model Parameter Space Y which unites in the general case all possible descriptors' variations.

The forward model parameter variations for each component of Model Parameter Space X can normally be predefined in agreement with the relevant forward model assumptions and restrictions in the form of the relevant a priori compact subset XA ⊂ X (see Fig. 1a).

Conversely, the range of variation for each component in the Inverse Model Parameter Space Y cannot be reliably defined in advance. It could be made available a priori based on rather uncertain empirical experience, but in the general case it becomes available a posteriori, based on the system identification (Eykhoff, 1981).

A Forward Modelling Operator M[x] is defined in connection with the forward model of the phenomenon as a numerical solver of the relevant forward problem. In agreement with the general definition (see Definition 10 in Appendix 3) it provides unique, continuous and stable mapping of the model parameter vector x (see Definitions in Appendix 3) onto the Euclidean Space P of all physically possible solutions - theoretical (synthetic) data values.

In our case the forward modelling operator provides the continuous, stable and unique solution of the forward problem on the pressure scale in the form of a Theoretical (Synthetic) Data Set.

A Data Derived Model Simulator can be defined in connection with the inverse models of the phenomenon as a specific inverse modelling operator S[y] (computer engine) routinely allowing numerical simulation of the target phenomenon based on pre-processed Real Data Sets.

In our case the data driven simulator provides an effective simulation of the target phenomenon in the form of the synthetic (theoretical) data set by using the relevant bijective operator S[y].

A Synthetic (target phenomenon) Data Set can be computed for any finite number of points {x,y,z} belonging to the given Earth model and allows representation in the vector form p ∈ P, where P is the Euclidean Space of all physically possible theoretical data values. Since the physical scale(s) of the target phenomenon and its theoretical data prototype coincide, the relevant vector space P can be used for a quantitative misfit analysis and will be called the Data Comparison Space (Domain).
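Since synthetic and observed vectors share the same scale in P, the misfit can be computed directly. A minimal sketch of such a quantitative misfit measure (the function name and the choice of an RMS norm are illustrative assumptions, not a specification from this paper):

```python
import math

def rms_misfit(synthetic, observed):
    """Root-mean-square misfit between a synthetic data vector p = M[x]
    and its observed counterpart, both expressed in the Data
    Comparison Space P."""
    if len(synthetic) != len(observed):
        raise ValueError("vectors must live in the same space P")
    n = len(synthetic)
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(synthetic, observed)) / n)
```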

The target of prediction to a great extent determines the background forward model and, consequently, the choice of the data types required for calibration. In the case of the effective basin model purpose-built for overpressure prediction, it looks rather natural to consider any overpressure data accessible from wells as the key type of real data. The question however arises: is this data type the only one possible and suitable for calibration?

2 Here and below the underlined font marks a reference to an Appendix. Except for Appendix 3 "Important math definitions", such references include the number of the Appendix explicitly. The relevant hyperlinks directly take the reader of the electronic (MS Office document) form of this paper to the Appendix. In this particular case see Definitions 5-9, Appendix 3 for more details about the mapping operator.

The answer depends on the output of the forward modelling used and on the sensitivity of the corresponding control model parameters to the prediction target. Strictly speaking, any synthetic data could be used for comparison with its measurable prototype. However, not all of them are closely related to the target overpressure phenomena. A well-grounded discussion of this problem is far beyond the scope of this paper. The general conclusion can be formulated as follows.

The more data types are involved in the calibration process, the more general and stable the Earth model pretends to be. On the other hand, the required extension of the deterministic background implies more physical processes to be mutually linked and formally described. Evidently, such a model complication inevitably leads to high non-linearity of the corresponding forward modelling operator and an extension of the model parameter space with no practical use at data inversion.

In the context of overpressure prediction we suggest using two comparison scales, porosity and pressure, for calibration of a purpose-built Earth model (see also Madatov, Sereda (2000a) and Appendix 2 for more details).

A Real Data Set is the set of physical measurements available at a finite number of subsurface points {x,y,z} belonging to the given Earth model and representable in the vector form f ∈ F, where F is the Euclidean Space of all possible (measurable) data values.

Depending on sensitivity to the target phenomenon and the data error level, the real data sets could be used for the prediction problem as well as for identification of the inverse models or/and for calibration of the forward models. In the case when the real data scale coincides with the scale of the target phenomenon, the relevant data vector space does not require mapping or transforming into the Data Comparison Domain. In this particular case the subspace in the Real Data Space coincides with the Data Comparison Space (i.e. P ⊂ F).

For example, real data about formation pressure can be represented by its direct measurements: RFT data, kick indications (see also Appendix 2 for more details). Such data can be used for all the mentioned purposes: identification, calibration and prediction.

Generally, the real data can contain some valuable information about the target phenomenon, which can be extracted after some data pre-processing (data transformation). Two kinds of data can be recognised here: the target sensitive data type and the target indicators' data type. The difference between them is that a target sensitive data set can normally be routinely transformed (mapped) into the Data Comparison Domain, as a well-log transformation into the porosity scale does, whereas the target indicators require some subjective expert interpretation, more common for wellsite overpressure indicators (see Appendix 2). In the context of the data inversion we will not distinguish here between these two types, but assume that mapping of the real data vector into the Data Comparison Domain can be done uniquely with certain accuracy by using the relevant data transformation.

A Data Transformation Operator can be defined in connection with target sensitive data sets and/or target indicators as an operator T[f] (computer engine) routinely allowing transformation of any real data set given in the form f ∈ F into the synthetic vector space P and the relevant scale, for purposes of using the result at calibration or/and identification.

Generally, the data transformation operator can be understood as a pre-processing of the raw real data set before using it for identification of the inverse model or for calibration of the forward model. Depending on the data type, the data transformation operator can be based on a rather simple one-by-one scale conversion routine: for example, converting a sonic or density log into the porosity scale (see Appendix 2) and then using the log-driven porosity data as an input for the Calibration or Identification routine. On the other hand, the seismic data transformation required for mapping seismically derived parameters into the pressure scale implies much more complex and less unique operations (Madatov, 2005).
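For the simple one-by-one scale conversion case, a standard example is the Wyllie time-average transform of a sonic log into porosity. The sketch below is illustrative only; the default matrix and fluid slowness values (quartz matrix, water-filled pores, in µs/ft) are common textbook figures and are not taken from this paper.

```python
def wyllie_porosity(dt_log, dt_matrix=55.5, dt_fluid=189.0):
    """Wyllie time-average equation: porosity from a sonic log reading
    (slowness in microseconds per foot), clipped to the physical range."""
    phi = (dt_log - dt_matrix) / (dt_fluid - dt_matrix)
    return min(max(phi, 0.0), 1.0)
```

Applying such a conversion sample by sample maps a raw log f ∈ F into the porosity component of the Data Comparison Space P.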

Calibration of the forward model is a routine used for specification of the relevant model parameter set against the transformed real data set.

Identification of the inverse model is an integral part of the phenomenon simulation needed for specification of the inverse model parameter set against the transformed real data set.

In our context the difference between the terms Calibration and Identification is a matter of the particular definitions, as is the difference between a forward overpressure modelling operator and a data driven overpressure simulator.

It is important to point out, however, that both routines imply implementation of some data imaging operator T[f] capable of transferring the data vector into the model parameter space. A stricter definition of this operator as a back inverting (or just data inverting) operator will be given in the next section. Here we just stress that the data inverting operation is an integral part of inverse model functionality, whereas the forward models do not require it unless they are going to be calibrated and then used for prediction purposes.

Both calibration or/and identification routines require a certain control points' set {x,y,z}C ⊂ RC where the real data and the target phenomenon measurements are available.

A calibration or identification data set {p*} is a finite element sub-space of the vector space P suitable for calibration or identification purposes.

A calibration/identification area RC ⊂ RE is a local part of the Earth model area RE where the calibration/identification data set can be made available.

In connection with the purpose of data inversion as a general approach to forward model calibration, it is important to also formulate the Prediction Problem associated with the target geo-phenomenon: overpressure.

A Prediction Problem is determined as a quantitative assessment of expected values of the target phenomena and their range of possible variation (in terms of possible scenarios, risk estimations, etc.) within the predefined target location in the given Earth model before it can be measured directly or indirectly in situ.

A target location or prediction area RP ⊂ RE is a local part of the Earth model area RE where prediction of the target phenomenon is required.

As follows from the definitions, the calibration and target areas should belong to the same Earth model. There are three possibilities for their mutual location:

1. RP ⊂ RC is the best case, when the prediction problem can be reduced to model parameter interpolation.

2. RP ⊄ RC and RP ∩ RC ≠ 0 is the intermediate case, when the prediction problem can be partially reduced to model parameter interpolation and partially requires some extrapolation.

3. RP ∩ RC = 0 is the worst case, when the prediction problem can only be reduced to model parameter extrapolation.
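If the two areas are represented, say, as sets of grid-cell identifiers of the Earth model, the three cases are distinguished by elementary set algebra. A sketch under that assumption (the set-of-cells representation is introduced here for illustration only):

```python
def classify_prediction_case(rp_cells, rc_cells):
    """Classify the mutual location of the prediction area RP and the
    calibration area RC, each given as a collection of grid-cell ids.
    Returns 1, 2 or 3 following the enumeration in the text."""
    rp, rc = set(rp_cells), set(rc_cells)
    if rp <= rc:
        return 1  # RP inside RC: parameter interpolation only
    if rp & rc:
        return 2  # partial overlap: interpolation plus extrapolation
    return 3      # disjoint areas: extrapolation only
```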

A prediction data set PP = {p} is a part of the vector space P suitable for resolving the prediction problem.

There is one class of prediction-aimed models based on direct well-data (in particular, pressure data) interpolation (Dobrynin, Serebryakov, 1978), where the calibration phase is skipped. For this class of Earth models the calibration and prediction data sets formally coincide.

Seismic data is the main source of the prediction data set for a wide class of prediction-aimed Earth models based on the inverse model approach. The availability of a prediction data set for the forward models used in the retrospective of the prediction problem is not an obligatory requirement. A carefully upscaled and calibrated effective forward model can provide successful prediction based on model parameter interpolation/extrapolation into the target area. In other words, prediction of the target phenomenon (overpressure) can be done based on forward modelling inside a properly calibrated area without any additional measurements within the target location. Any while-drilling data in this context should be treated as a calibration data set, since it is used for correction of the prediction, i.e. before the in situ analysis of the target phenomenon.

Thus, the crucial difference between the forward and inverse model implementation for prediction is the dependence of inverse models on a prediction data set as an obligatory input.

A purpose built prediction model of phenomenon is a calibrated forward or a tuned inverse model suitable for resolving a Prediction Problem by implementing on it the relevant Forward modelling operator or Data driven model simulator capable of:

• Reproducing all values of the target phenomenon (p), measured or transformed during data processing, with data accuracy given in advance.

• Resolving the prediction problem.

• Correcting predicted target phenomenon values wherever necessary inside the target location, based on newly available calibration/identification data.

Clearly, in our case the prediction aimed model can be based on both forward and inverse classes of models, mixed in different proportions depending on the amount and type of real data available and the specification of the prediction problem.

Based on the introduced definitions, it is important now to give a systematic summary of the Earth model types practically implemented for the overpressure prediction problem (see review in Madatov, Sereda, 2005). The scheme in Fig. 2 illustrates the ranges of applicability for the three major classes of prediction purposed models and the relevant optimisation techniques suitable for their calibration - identification, as a draft function of the dimensions of both the Model Parameter Space and the Real Data Space.

In a very general manner and in connection with the definitions introduced above, we should now define an inverse operator on the vector space P as a specific data transformation operator capable of mapping (performing image projecting of) the given vector p ∈ P onto a certain model parameter space (X or Y). Still, this definition covers a very wide class of data transformation operators. In particular, a multiple regression between pressure data and pressure indicators founded within the range of predefined empirical relationships can be formally treated as an inverting operation. Moreover, all data transformation operators implemented in the class of Inverse Models and recognisable as data fitting can be formally treated as inverting operators.

Below we will investigate only the class of inverse operators defined in connection with a deterministic forward model type and the relevant operator M[x]. According to the theory of data inversion (Tikhonov, Arsenin, 1979; Menke, 1984; Tarantola, 1987), an inverse operator requires specification of a forward model parameter space and of a data space to be inverted, as well as resolving the non-uniqueness and instability problems. The next section covers these requirements on the level of an operator theory of optimal algorithms (Traub, Woznjakovski, 1980). The details of the model space X and data space P specification are available in Appendixes 1-2.

Fig. 2. A speculative scheme showing the place of the Earth models purposed for overpressure estimation - prediction: inverse data driven models associated with the standard estimation approach (yellow); dynamical forward models associated with the Effective Basin Model concept (green); and hybrid seismic data derived models associated with the standard prediction approach, plotted against the averaged dimensions of the Data and Model spaces. The suitable methods of optimization normally applicable at the relevant data inversion are indicated. The plot area where the dimension of the model parameter space exceeds the dimension of the available data space corresponds to an under-determined data fitting problem with an infinite number of solutions (Lawson, Hanson, 1974).


3. Formal definition of an inverse problem

The Effective Basin Model approach in connection with overpressure prediction purposes brings the data simulation problems into the class of dynamical forward models, where the target phenomenon (overpressure) is described based on fundamental differential equations of geo-fluid dynamics written on the basin time scale (Allen, Allen, 1990; Lerch, 1990; Madatov, Sereda, 2000a). The terms of these differential equations can be interpreted as non-linear functions of some fundamental geo-fluid system parameters (model parameters). In the context of the definition of the relevant Model Parameter Space X introduced above, the full set of model parameters required according to the specification forms the model parameter vector x. Then any single run of the relevant forward modelling operator M[x] maps any current vector x ∈ X onto the Data Comparison Domain in the form of the synthetic data set p ∈ P (see Fig. 1a).


The inverse problem for this class of models reduces to constructing an operator formally inverse to the forward one, defined on the Data Space P and aimed at reverse-mapping any certain data set p ∈ P back to the original Model Parameter Space X (Menke, 1984). According to the definition (see Definition 10 in Appendix 3), the relevant inverse problem is ill-posed in the Hadamard sense (Hadamard, 1932). Moreover, a full-scale 3-D finite difference basin modelling operator requires hundreds of thousands of components of the vector x to be specified. Even in the case when an over-determined calibration data set could be found for this problem, the relevant inverse problem solution appears among the Global Optimisation methods (see Fig. 2). Practically, however, the amount of calibration data in a multi-well calibration area seldom exceeds 10³-10⁴ samples (see Appendix 2 and (Madatov, Sereda, 2000b) for more details). Thus, the problems of realistic attributing of the model parameter space and of its compact specification get high priority.
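The textbook cure for such ill-posedness, once the forward operator is linearized around a current model, is Tikhonov-style regularization: a damping term pulls the solution toward an a priori model and restores stability. The sketch below illustrates only this generic regularized least-squares step, not the specific algorithm of this paper; all names are illustrative.

```python
import numpy as np

def tikhonov_step(G, d, x0, alpha):
    """One regularized least-squares step: minimize
    ||G x - d||^2 + alpha * ||x - x0||^2,
    where G is the linearized forward operator, d the target data
    vector, x0 the a priori model and alpha > 0 the damping weight.
    Solves the normal equations of the penalized problem."""
    n = G.shape[1]
    lhs = G.T @ G + alpha * np.eye(n)
    rhs = G.T @ d + alpha * x0
    return np.linalg.solve(lhs, rhs)
```

As alpha tends to zero the step approaches the unstable unregularized solution; a large alpha keeps the model close to the a priori vector x0.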

The basin scale model of geo-fluid system development introduced in (Madatov et al., 1997; Madatov, Sereda, 2000a), in its 1.5-D hydro-dynamic approximation, allowed reduction of the Model Parameter space dimension down to 5NF, where NF is the number of formation units in a local stratigraphy column. Still, the price for such a significant shrink was a restriction of the relevant forward problem to an essentially one-dimensional one. The 3-D pressure communication effects in such a simplified basin model were accounted for effectively by introducing an effective Sink term for each formation unit.

Later development of this effectively invertable model (Madatov, Sereda, 2001) resulted in the creation of a combined 3-D hydro-dynamic basin scale model enabling lateral communication. The relevant concept of the Effective Basin Model and the implementation of Earth model Upscaling principles (Madatov, Sereda, 2003) allow generation of 3-D HD models purposely designed for data inversion, i.e. having about the same dimension of the corresponding model space X and the potential for accommodating a priori available information (geology, stratigraphy, petrophysics, etc.). This forms the prerequisites for extension of the local optimisation algorithms implemented by us earlier for the single well data inversion problem toward the multi-well data set case.

We address our readers to the appendixes for more details about the specification of the Model Parameter Space X in the combined 3-D hydrodynamic approximation (Appendix 1) and the multi-well data sets suitable to be converted into the Data Comparison Space P (Appendix 2). Here we start from a level where the necessary data input and desired model output already have a vector form. Namely:

A desired model parameter set is representable in the form:

X = {[(x1¹, x2¹, x3¹), …, (x1ᴶ, x2ᴶ, x3ᴶ)]1, …, [(x1¹, x2¹, x3¹), …, (x1ᴶ, x2ᴶ, x3ᴶ)]K; [(x4¹, x4², …, x4ᴵ)1, …, (x4¹, x4², …, x4ᴵ)M]}.

A multi-well data set is representable in the form:

P = {(p1¹, p2¹, …, pD¹; p1², p2², …, pD²)1, (p1¹, p2¹, …, pD¹; p1², p2², …, pD²)2, …, (p1¹, p2¹, …, pD¹; p1², p2², …, pD²)K}T,

where the components' upper and lower indices are in agreement with the relevant space specifications (see appendixes).
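As a toy illustration of this packing (a hypothetical Python sketch; the actual grids, data types and indexing are specified in Appendix 2), K well records, each carrying porosity and pore-pressure profiles on a common D-point depth grid, can be flattened into a single comparison vector of dimension 2·D·K:

```python
def pack_data_vector(wells):
    """Flatten K well records into one comparison vector p of length 2*D*K:
    for each well, D porosity samples followed by D pore-pressure samples."""
    p = []
    for well in wells:
        assert len(well["porosity"]) == len(well["pressure"])  # common grid D
        p.extend(well["porosity"])
        p.extend(well["pressure"])
    return p

# Two hypothetical wells on a D = 3 depth grid (illustrative numbers only)
wells = [
    {"porosity": [0.30, 0.25, 0.20], "pressure": [10.0, 12.0, 15.0]},
    {"porosity": [0.35, 0.28, 0.22], "pressure": [11.0, 13.0, 16.0]},
]
p = pack_data_vector(wells)  # len(p) = 2 * 3 * 2 = 12
```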

The forward modelling operator M[x], as it follows from its continuity features (Madatov, Sereda, 2005), provides stable and unique mapping of any model parameter vector x ∈ X onto the Data Comparison Space P in the form of the theoretical data vector pA = M[x] ∈ P.

Let the relevant continuous mapping performed from an abstract Universal Model Parameter space UX provide an ideally "exact" forward problem solution, where all the model peculiarities are supposed to be accounted for. It is intuitively clear that such a speculative case can only be considered hypothetically. Practically, however, the model digitization, refinement and upscaling lead to limitation of the dimension and variability ranges for the relevant workable Model Parameter space X ⊂ UX, which is now treatable as a sub-set of the Universal one. Formally, any of the model simplifications inevitably leads to introduction of some inherent modelling uncertainty measurable in the form of errors (fuzzy forward problem solution) within the vector space P. These errors can be represented there as a bunch of all possible "exact" solutions achievable on the Data Space P after consequent scanning through all the parameter directions and parameter variation ranges skipped as non-sensitive during optimal Model Space simplification (Upscaling): UX → X (Madatov, Sereda, 2003). Due to the continuity of the operator M[x] and the low sensitivity of the result p to the skipped model vector components, this bunch of fuzzy solutions is representable in the form of a closed simply-connected subset around the exact solution. We also will use the term "ε-locality" {pA}ε of the vector solution pA in this context (Definitions 2-4 in Appendix 3).

Thus, the dynamic of the consequent upscaling of the Universal Model Parameter space can be simply illustrated on the space diagram in the form of gradual shrinking of the current Model Space boundary relative to the Universal Model Parameter space UX on one side, and relative smearing of the single synthetic data vector image around the "exact" solution pA on the other (see Fig. 3). As was shown in (Madatov, 1990; Madatov, Sereda, 2000a), the optimal average size of the apparently fuzzy forward problem solution must be equal to an average ε-locality of the calibration data vector defined in agreement with an averaged error of real data transformation (see below). According to the theory of optimal computation algorithms (Traub, Wozniakowski, 1980), the appearance of an ε-locality subset around the "exact" solution pA is justified wherever and as long as the alternative model parameter sub-spaces can be optimised.

Below in the paper we will assume that our specific Inverse Problem is formulated with respect to an optimally simple (upscaled) Model Parameter Space. So, the averaged diameter of the ε-locality for any of the synthetic data prototypes achievable on the Data Comparison Space is comparable with the data accuracy estimation.

The dimension of the Upscaled space specified for some EBM is given by N = 3J × K + I × M (Appendix 1), where J is the number of formation units; K and M are the numbers of 1.5-D and 2.5-D HD models included into the framework, respectively; I is the number of faults required for all 2.5-D HD models. The dimension of a data comparison space is given by L = 2D × K (Appendix 2), where D is the well data grid size and K is the number of offset wells (number of 1.5-D HD models) introduced within the calibration area. It is normally the case that the amount and density of well data available for calibration provide an excess of the data space P dimension over the model space X dimension by one-two orders of magnitude. Thus, the practically achievable condition L >> N allows formulation of the aimed specific inverse problem within the range of the over-determined LS problem (Lawson, Hanson, 1974; Dennis, Schnabel, 1983) (Fig. 2).

Fig. 3. The scheme of consequent shrinking of the optimising Model Space Xi boundary relative to the Universal Model Parameter space U and the relevant creeping of the single synthetic data vector image around the "exact" solution pA ∈ P (X1 ⊂ X0 ⊂ U ⇔ ε1 > ε0 > 0)

In agreement with the requirements for the raw data to be transformed into the common Data Comparison Space P, every single "real" data set p* ∈ P is in turn formally3 a result of some unique and stable vector mapping from the raw data space F by using the relevant Data Transformation operator T[f]. Note that the definitive estimation of the real data quality ε is composed of the raw data uncertainty and the processing errors introduced by data transformation. From this point of view it is important to be sure about the uniqueness and accuracy of the data vector mapping into the Data Comparison Domain.

Then the data - model fitting can be schematically represented as shown in Fig. 4. Here, the fuzzy image p* of the raw data vector f* on the data comparison space P has a finite diameter in agreement with the average data quality estimation ε.

Let us now define an inverse operator in the class of non-linear data fitting operators, which are formally inverse to the forward modelling operator M[x]. Taking into account that the forward modelling operators on the space of the model parameters X are not linear and cannot be linearly approximated (Madatov, Sereda, 2005), we cannot define the desired inverse operator M⁻¹[p*] in the class of bijective transformations (see Definition 9 in Appendix 3). This is the reason why there generally exist several possibilities in the model parameter space to find an acceptable model vector fitting the real data image with finite accuracy.

Generally, an inverse operator for the given data fitting problem can be defined within the class of conditional optimisation procedures (Menke, 1984) on the data comparison space P. In the context of the definitions introduced above and the general definition of the inverse problem given in Definitions 10-11, Appendix 3, the desired operator M⁻¹[p*] inverse to the forward modelling operator M[x] can be defined for any calibration types of the data vectors p* and the relevant vector spaces P as the following:

M⁻¹[p*] = x: arg min {RN(pA, p*)}, ∀x ∈ X; ∀p*, pA ∈ P, (1)

where RN(pA, p*) is the mismatch function defined on the common vector space P according to the metric and norm introduced in it (see Definition 12 in Appendix 3 for more details); pA = M[x] is the theoretical data vector; p* = T[f] is the calibration data vector.

Regardless of the choice of the mismatch function and the specification of the EBM parameter space X, the inverse problem formulated for the calibration data vector with respect to the forward modelling operator M[x] is generally ill-posed in the Hadamard sense (see Definition 11 in Appendix 3 for more details). I.e. the target model parameter vector cannot be found uniquely for the whole space X. Furthermore, each particular solution will generally be non-stable (see Definition 8 in Appendix 3).

The bigger the uncertainty of the data transformation (real data quality), the more model vectors can be found to satisfy criterion (1). In other words, the inverse problem solution becomes less and less unique.

What can be done in the practically common case when the data quality cannot be significantly improved and the data amount is also restricted?

The general approach consists in constraining the search area on the Model Space X by imposing on it an a priori defined subset XA ⊂ X (Ivanov, 1970; Tikhonov, Arsenin, 1979). These constraints can be strengthened by introduction of additional conditions forcing the solution vector to lie inside the desired subset: x ∈ XA. The minimising procedure (1) in this case should include the relevant regularisation procedure in the form of hard and/or soft constraints imposed on the obtained solutions (Definition 13 in Appendix 3).

Fortunately, the forward model, in general, and the EBM for the target phenomenon, in particular, allow prior determination of some domain(s) of the physically possible solutions XA within the model parameter space X. It can be done in terms of allocating some realistic finite ranges of continuous variation for each lithology-controlled model parameter component.
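A minimal sketch of such an a priori subset XA represented by per-component physical ranges (the parameter names, units and bounds below are purely illustrative, not the paper's calibrated values from Appendix 1):

```python
# Hypothetical a priori subset XA given as per-component physical ranges.
# Names, units and bounds are illustrative only.
XA_RANGES = {
    "compaction_rate": (0.01, 0.50),     # assumed units: 1/km
    "surface_porosity": (0.20, 0.70),    # fraction
    "log10_permeability": (-8.0, -2.0),  # assumed log10(mD)
}

def in_XA(x):
    """True if the candidate model vector x lies inside the a priori box XA."""
    return all(lo <= x[name] <= hi for name, (lo, hi) in XA_RANGES.items())

x_ok = {"compaction_rate": 0.10, "surface_porosity": 0.45, "log10_permeability": -5.0}
x_out = {"compaction_rate": 0.90, "surface_porosity": 0.45, "log10_permeability": -5.0}
```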

Now it is possible to formulate the aimed Specific Inverse Problem (SIP) as conditionally correct on the compact sub-space XA ⊂ X. It can be done based on Tikhonov's theory of regularisation (see Definition 13 in Appendix 3 for more details).

To do this we have to satisfy the three following conditions:

1. The solution of SIP given in form (1) exists and belongs to some compact sub-space X0 ⊂ XA of possible solutions.

2. The relevant forward operator M[x] is bijective for ∀x ∈ X0 ⊂ XA and M[x] = pA ∈ P0 ⊂ P, ∀x ∈ X0 ⊂ X.

3. The inverse operator is continuous on P0 ⊂ P.

3 The exception is the case when the data vector represents measurements performed directly within the Data Comparison Scale. For example, RFT or MWD pressure data in our case.


Fig. 4. The operator schemes of the data fitting problem for the EBM

The existence of compact sub-spaces X0 of EBM parameters, where the relevant inverse problem can have possible solutions, follows from the existence of physically possible solutions XA and the possibility to subdivide it into more local sub-spaces, as was shown in (Madatov, Sereda, 2000b).

Still, condition 2 cannot be ensured inside any possible compact sub-space X0 unless the general global minimisation algorithm (1) is locally adjusted to account for finite data accuracy. It can be done by substituting the global minimum criterion (1) with a softer conditionally local minimum criterion (discrimination criterion) as the following:

M⁻¹[p*] = x: RN(pA, p*) ≤ ε, ∀x ∈ X; ∀p*, pA ∈ P, (2)

where p* ∈ P is the data vector defined with the finite accuracy ε with respect to its most likely estimation E[P]. The relevant sub-set {εp*} of the ε-equal data prototypes can be defined from the condition:

RN(p*, E[P]) ≤ ε. (3)

Now, the first forward modelling solution hitting the ε-equal locality of the data prototype delivers a conditionally unique solution of the relevant inverse problem.
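Criteria (2)-(3) amount to accepting the first model whose synthetic response falls inside the ε-locality of the data. A minimal sketch with an unweighted relative L2 mismatch (the paper's actual norm and weighting are given in Appendix 3; the numbers below are illustrative):

```python
import math

def misfit(p_model, p_data):
    """Relative L2 mismatch RN(pA, p*) between synthetic and observed vectors."""
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(p_model, p_data)))
    den = math.sqrt(sum(b * b for b in p_data))
    return num / den

def accept(p_model, p_data, eps):
    """Discrimination criterion (2): stop at the first model whose synthetic
    response falls inside the eps-locality of the calibration data vector."""
    return misfit(p_model, p_data) <= eps

r_close = accept([1.0, 2.0, 2.0], [1.1, 2.0, 2.0], eps=0.05)  # small mismatch
r_far = accept([2.0, 2.0, 2.0], [1.0, 2.0, 2.0], eps=0.05)    # mismatch = 1/3
```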

Nevertheless, even this conditionally unique solution of our SIP remains local. A massive sensitivity analysis was performed for various physically possible combinations of the model parameters (realisations of the vector x ∈ XA). It reveals the existence of numerous forward modelling solutions (vectors pA) that are all close to each other in terms of the acceptable mismatch (3) at rather different distributions of the sensitivity vector components. In other words, the L2 distance inside the normalised vector space P between two solutions of the forward problem based on arbitrarily chosen xi, xj ∈ XA can belong to the ε-locality of some likely data vector estimation E[P], whereas the pair xi, xj does not belong to any single δ-locality (compact δX) of some xδ ∈ XA. This situation can be recognised as a Global non-uniqueness problem in the context of the discussed specific inverse problem. In accordance with the general definitions (see Appendix 3), let us define this specific non-uniqueness problem more formally as the following:

{x}N ⊄ δX ⊂ XA : ||M[xj] − pε|| ≤ ε, ∀xj ∈ {x}N. (4)

Let us distinguish the Global non-uniqueness of SIP from the Local non-uniqueness problem defined on the set of a priori possible solutions XA as the following:

{xδ} ⊂ δX ⊂ XA : ||M[x] − pε|| ≤ ε, ∀x ∈ {xδ}. (4*)

Here δX is a compact simply-connected subset; ε and δ are small values.

It is clear that the local non-uniqueness problem for the model parameter space can be resolved in the same way as it is for the data space, i.e. by introduction of a corresponding δ-locality (compact δX) of some most likely inverse problem solution xδ ∈ XA. Indeed, any representative x ∈ δX will have a non-detectable difference from xδ and consequently will belong to the δ-equivalent locality within the model space X.

The continuity of the locally unique solution (4*) and the simple-connectivity of the subset {xδ} ⊂ δX follow from the locally monotonous behaviour of the forward modelling operator M[x] (see Madatov, Sereda (2005) for more details). Formally it can be expressed in the form:

0 < δ1 < δ2 < δ3 …: 0 < ε1 < ε2 < ε3 … ⇒ {xδ1} ⊂ {xδ2} ⊂ {xδ3} ⊂ δX. (5)

Conversely, the sub-set {x}N of globally non-unique solutions (2) is not simply-connected, but can be represented as the following:

{x}N = [{δx}1 ∪ {δx}2 ∪ {δx}3 … ∪ {δx}N], (6)

where {δx}j ≠ ∅, ∀j = 1,2,…,N; and {δx}i ∩ {δx}j = ∅, ∀i ≠ j: i, j ∈ (1,2,…,N).

Thus, the condition of existence of the global non-uniqueness problem for the discussed SIP is reduced to the following:

∀j = 1,2,…,N ∃x ∈ {x}N : x ∉ {δx}j. (7)

Note that the global non-uniqueness (4) exists unless {δx}j ⊇ XA, ∀j = 1,2,…,N, which is normally not the case. It is also clear that the global non-uniqueness of SIP is a prerequisite for the existence of many local solutions of the relevant discrimination problem (2).

In the context of the conditionally correct solution of the SIP, condition 2 according to Tikhonov could evidently be satisfied within any δ-locality δX of the local solution as soon as the data stop criterion in terms of the ε value is introduced.

Condition 3 according to Tikhonov's continuity condition can be satisfied in agreement with the upscaling procedure introduced for the model parameter space X in (Madatov, Sereda, 2003). Indeed, the continuity of the inverse operator within the local solution interval δX can be guaranteed if the relevant first derivative ∂M⁻¹[p]/∂pn exists everywhere on p ∈ P0 or, conversely, the Fréchet derivatives ∂M[x]/∂xk do not approach zero for any component of x ∈ δX (Tarantola, 1987). The part of the upscaling strategy connected with recognition and "freezing" of the components of the model parameter vector non-sensitive for the forward solution evidently provides the required continuity condition on the inverse operator at any ε-locality {εp0} of an arbitrary vector solution pA ∈ P0 ⊂ P. In terms of regularisation theory (Tikhonov, 1943; Tikhonov, Arsenin, 1979) such kind of upscaling plays the role of a stabilising factor for the relevant inverse problem solution.

Thus, all three conditions required for the conditionally correct SIP can only be satisfied locally. At that, they require upscaling of the model parameter space and assigning of physical constraints within it in terms of the a priori defined sub-set XA.

The problem of global non-uniqueness should be addressed by a priori and a posteriori analysis of expected and achieved solutions, respectively (Jackson, 1979; Madatov, 1990). The speculative example in Fig. 5 illustrates a possible strategy for getting an effectively unique global solution of the inverse problem for some 2-D model parameter space. Here, a priori established constraints imposed on the area of globally unique equivalent solutions (green frame) are combined with a posteriori established restrictions (blue frame). The intersection of the relevant physically (a priori) XA and empirically (a posteriori) XE grounded windows of possible solutions within the X space allows allocation of a single sub-set {δx*} satisfying condition (2). Evidently, any representative of {δx*} can be treated as a conditionally unique ε-equivalent global solution of the SIP.
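For box-shaped windows the intersection XA ∩ XE reduces to intersecting per-parameter intervals; a hypothetical sketch in the spirit of Fig. 5 (parameter names and numbers illustrative):

```python
def intersect_windows(win_a, win_e):
    """Intersect the a priori (XA) and a posteriori (XE) per-parameter windows.
    Returns the combined window, or None if the intersection is empty in at
    least one parameter direction (no effectively unique solution area)."""
    out = {}
    for name, (a_lo, a_hi) in win_a.items():
        e_lo, e_hi = win_e[name]
        lo, hi = max(a_lo, e_lo), min(a_hi, e_hi)
        if lo > hi:
            return None
        out[name] = (lo, hi)
    return out

# Illustrative 2-D example: XA and XE overlap in both parameter directions
window = intersect_windows({"x1": (0.0, 1.0), "x2": (2.0, 5.0)},
                           {"x1": (0.5, 2.0), "x2": (3.0, 4.0)})
```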

The specifics of the combined 3-D HD forward model allow getting K independent solutions for the relevant 1-D inverse sub-problems formulated for every single-well calibration data set in agreement with the general formulation (2). Then the final adjustment of the globally non-unique solution (4) in terms of choosing the effectively unique vector solution x can be addressed via a 3-D formulation of SIP for the multi-well data set case. Below in this section we consequently consider the 1-D and 3-D formulations of the specific inverse problem for the single and multi-well data set cases, respectively.


Fig. 5. Contour map of misfit function (2) for a speculative 2-D model parameter space (left) and position of ε-equivalent solutions against the a priori given subset XA: (ΔX1×ΔX2)A and the a posteriori detected subset XE: (ΔX1×ΔX2)E (right). All local minimums in the right map satisfy condition (2) with the data stop criterion highlighted on the misfit scale. The non-empty intersection XE ∩ XA allows definition of an effectively unique ε-equivalent solution {δx*} ⊂ XE ∩ XA on the 2-D model parameter space X.

4. The inverse problem solution for single well data set versus effective basin model

The inverse problem for the single well data set is a typical calibration task for a 1-D formation well model. In the particular specification oriented on overpressure prediction4 it was introduced by us earlier (Madatov et al., 1995) and developed further (Madatov et al., 1999; Madatov, Sereda, 2000a). Here we reproduce its main features to be consistent with the further extension of SIP to the multi-well data set against the combined 3-D basin scale model.

In agreement with the definition of the calibration problem introduced above, the definitive data set (data vector) p* should belong to the same scale and grid as their synthetic prototypes pA: pore pressure and porosity. In this context we will not distinguish the calibration data type (scale) and accuracy, but just assume that they can be made available together with the finite error level ε in the form of the data vector locality {εp*} defined by condition (3) in agreement with the data comparison space P specification.

Let the data stop criterion relevant to (2) be assigned. Let the 1.5-D model parameter space be specified and upscaled in agreement with the required data quality constraints. Thus, the vector model parameter space XA with a closed

4 A definitive porosity & overpressure single well data set versus 1.5-D model approximating a geo-fluid system development in basin time scale.

sub-space XAA on it accommodating all a priori possible inverse problem solutions is established. Note that the specification of the model parameter space for the 1.5-D case differs from the one for the 3-D combined EBM case, whereas the data comparison space keeps the same specification (see Appendixes 1 and 2). The subscript index "A" indicates this.

In order to avoid scaling problems within the multidimensional range of variations, it is convenient to normalise the model parameter space XA as the following:

x̃k = (xk − xkmin)/(xkmax − xkmin), (8)

where xk, xkmin, xkmax are the current, minimum and maximum values of the k-th component of the model parameter vector x ∈ XA. Since the normalisation routine (8) is universal regardless of the given specification of the model parameter space XA, we will not introduce new additional identifications, but assume that the model parameter space in the SIP context is always normalised.
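Normalisation (8) and its inverse can be sketched as follows (component order and the min/max ranges are assumed given; numbers illustrative):

```python
def normalise(x, x_min, x_max):
    """Component-wise normalisation (8): map each model parameter onto [0, 1]."""
    return [(xi - lo) / (hi - lo) for xi, lo, hi in zip(x, x_min, x_max)]

def denormalise(x_norm, x_min, x_max):
    """Inverse of (8): map normalised components back to physical units."""
    return [lo + t * (hi - lo) for t, lo, hi in zip(x_norm, x_min, x_max)]

# Round trip on a hypothetical two-parameter model vector
x_phys = [5.0, 0.45]
x_norm = normalise(x_phys, [0.0, 0.2], [10.0, 0.7])
x_back = denormalise(x_norm, [0.0, 0.2], [10.0, 0.7])
```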

Thus, the conditionally correct inverse problem for a single well data set can be defined on the model sub-space XAA, and the relevant ε-solution can be formulated in agreement with the general definition (2) as the following:

A⁻¹[p*] = x: RN(pA, p*) ≤ ε, ∀x ∈ XA; ∀p*, pA ∈ P, (2*)

where A and A⁻¹ indicate the forward and inverse operators for the relevant 1.5-D modelling problem, respectively; the scalar mismatch function RN is given according to the definition (Definition 12 in Appendix 3):

RN[M(x), p*] = ||W(M[x] − p*)|| / ||p*||.

In agreement with the definitions (Definition 13 in Appendix 3), the ε-solution of this 1.5-D inverse problem will be stable and locally unique. The search strategy for the solution vector x* was developed for this problem (Madatov, Sereda, 2000a) in agreement with the local optimisation approach (Lawson, Hanson, 1974).

It is important to stress that any optimisation scheme based on the weighted LS method implies multiple computations of the relevant forward problem. The use of the EBM approach and prior upscaling of the model parameter space allows reducing the computational complexity of the relevant forward modelling operator M[x] by orders of magnitude in comparison with standard basin models given on a regular 3-D grid (see (Madatov, Sereda, 2005) for more details). As stated above, it allows keeping the local optimisation approach for the 3-D formulation of the forward model problem defined on the relevant model parameter space X.

Still, the search strategy aimed at the fastest possible convergence in satisfying condition (2*) is the third important factor of computer optimisation.

Regardless of the 1.5-D or 3-D modification of the forward modelling operator and the relevant model space specification, the resolution of the data fitting problems given in (2-2*) can generally be expressed in the form of a recurrent iterative search strategy as the following:

xk+1 = xk + qk sk, k = 0,1,2,…,K. (9)

Here, the vector sk defines the direction of convergence at the current k-th iteration step, and the scalar qk gives the length of the relevant step in the multidimensional model parameter space. This sequence, supposed to start at some start point inside the model parameter sub-space XA ⊂ X, leads to the locally unique ε-solution x* in K steps.

There are two groups of possible search strategies defined according to (9), depending on whether the first derivative of the mismatch function is used for computation of each next step. The heuristic methods, such as the conjugate directions method (Menke, 1984), are normally oriented on the specifics of the forward problem and do not require derivative computation. The advantage of such methods is the relative simplicity of computing. At that, they all depend heavily on assumptions about the specifics of the involved forward problem. The second group integrates the family of Gradient and Newton-like methods (Dennis, Schnabel, 1983). They are more general and allow accounting for the local multi-dimensional landscape of the minimised function. The common disadvantage of the Gradient and Newton-like methods is trapping into a local minimum locality and the big number of steps necessary in the gradually converging sequence. Since each next step implies computation of the partial derivatives, the total computing time required for a locally unique solution could be significant.


Apart from continuity, the other properties of the finite difference operator of the forward problem solution cannot be specified regularly for the whole model parameter space. Its sensitivity and monotony vary sharply depending on the start point in the model parameter sub-space XA, the local geology and lithology settings, and the predefined model constants.

Thus, a common strategy in the form of (9) unique for all practical cases cannot be found. Still, the use of a self-adapting procedure can be suggested as a general approach. Adaptation of the search strategy to the local features of the multidimensional landscape of the model parameter space and the relevant correcting allow maximal accommodation of a priori information and a significant reduction of the computational complexity of inverting.

The application of Gradient and Newton-like methods to the single well data inversion problem in the context of calibration of 1.5-D forward models was considered in (Madatov, Sereda, 2000a). Below these computation strategies are discussed in connection with the more general 3-D modification of the forward problem developed in the EBM concept (Madatov, Sereda, 2005). The numerical features of the relevant computation algorithms are available in Appendix 4 and Appendix 5.

4.1. Gradient strategy

The Gradient methods (see, for example, Himmelblau, 1972) include a wide range of iterative procedures where the direction of the next step in search strategy (9) is reversed to the current gradient of the mismatch function RN[M(x),p] (see Definition 12 in Appendix 3), i.e. they approach the optimum along the steepest descent. The common disadvantage of this class of methods is a long iteration process slowly converging into the local minimum trap nearest to the start point. Still, they are extremely important from the theoretical point of view, since they are relatively simple and allow quick analysis of the local multidimensional landscape, sensitivity ranges, effects of model space upscaling, etc.

In terms of the common definition of the search strategy, the Gradient methods imply determination of the full gradient as the direction vector sk = −∇{RN[M(x),p]} and computing the length of the simple step qk at each next k-th point of the approaching process. The scalar is defined as the solution of a 1-D minimisation problem. Thus, the steepest descent strategy in agreement with the general definition (9) can be expressed as the following:

xk+1 = xk − qk ∇RN[M(xk),p], k = 0,1,2,…,K,

RN[M(xk+1),p] = min q>0 {RN[M(xk − q∇RN[M(xk),p]),p]}. (10)

A finite difference approximation is used for the numerical calculation of the gradient components, i.e.:

∂RN[M(x),p]/∂xi ≈ {RN[M(x + δxi ei),p] − RN[M(x),p]}/δxi, i = 1,2,…,N, (11)

where δxi is the small increment of the i-th component of the model parameter vector x; ei is the unit vector having only the i-th component non-zero and equal to unity; N is the dimension of the vector model parameter space X according to its specification (see Appendix 1).
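A compact sketch of the steepest descent strategy (10) with the finite-difference gradient (11); the backtracking rule for the step length qk below is a simplification of the exact 1-D minimisation described in Appendix 4, and the toy quadratic misfit is illustrative only:

```python
def fd_gradient(rn, x, dx=1e-6):
    """Finite-difference gradient (11) of the scalar mismatch rn at point x."""
    base = rn(x)
    return [(rn(x[:i] + [x[i] + dx] + x[i + 1:]) - base) / dx
            for i in range(len(x))]

def steepest_descent(rn, x0, eps=1e-6, max_iter=100):
    """Iteration (10): step against the gradient; q_k is found here by a
    crude backtracking search instead of an exact 1-D minimisation."""
    x = list(x0)
    for _ in range(max_iter):
        if rn(x) <= eps:                      # data stop criterion reached
            return x
        g = fd_gradient(rn, x)
        q = 1.0
        while q > 1e-12:
            trial = [xi - q * gi for xi, gi in zip(x, g)]
            if rn(trial) < rn(x):
                x = trial
                break
            q *= 0.5                          # backtrack on the step length
        else:
            return x                          # no descent step found
    return x

# Toy quadratic misfit with minimum at (1, -2)
sol = steepest_descent(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2, [0.0, 0.0])
```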

From the computing point of view it is important to stress that each new iteration requires the solution of N forward problems for the combined 3-D HD model for detecting the components of the direction vector sk. In addition, some more forward problem solutions could be required for defining the length of the simple step qk along the steepest descent of the gradient (12) (see Appendix 4 for more details).

Several important comments should be made in conclusion of this general algorithm.

The first remark is about the converging speed of the given gradient process. Since this scheme belongs to the descent group of methods, the sequence of mismatch values sorted according to iteration number is a monotonously decreasing series. At that, the length of the simple step at each next iteration generally becomes smaller than the previous one. As a result, the converging speed of the process decreases, and achievement of the ε-solution can require a rather big number of iterations.

The second remark is about the locality of the achieved ε-solution. According to definitions (10-11), the given Gradient strategy provides non-conditional getting of the ε-equivalent solution of the SIP within the a priori given sub-space of anticipated solutions XεA in terms of definitions (2-3). Being local by definition, it is strongly dependent on the start point x ∈ XA. It means, in particular, that the iteration process will be trapped converging toward the nearest local minimum of the mismatch function and stopped in its locality if the relevant depth of this local minimum is ε-sufficient (the data stop criterion is reached).

In general, the Gradient strategy for getting the solution of the considered SIP appears to be a powerful tool for investigation of the local features of the target mismatch function in close proximity to the anticipated result. It cannot be recommended as the main and only possible strategy in the context of the considered calibration and prediction problems, but rather as an advanced tool to be combined with the more global but less specific search strategy, which is discussed below.

4.2. Newton-Gauss strategy

The classical Newton-like optimisation methods (for example, the Newton-Gauss method) use the matrix of second derivatives H (the Hessian matrix) of the mismatch function for resolving non-linear LS problems. Naturally, the use of a finite difference approximation for computing the matrix H at each step of the iterative process is the main time-consuming operation for algorithms of this class (Dennis, Schnabel, 1983). In connection with the formulated calibration and prediction problems, we have developed a modification of the Newton-Gauss method adjusted for the specific EBM forward problem.

Let us review the general definition of e-equivalent solution for the relevant inverse problem given in (2-3) in the context of solving a system of non-linear equations. To do this let us reformulate the minimisation problem into the following non-linear problem.

Define the vector x ∈ XεA ⊂ X delivering the solution to the system of non-linear equations

M⁻¹[p] = x ∈ XεA : r[(x,p)] = 0, (13)

where the vector mismatch function r, in contrast with the earlier introduced scalar mismatch function RN, is given by r[(x,p)] = W(M[x] − p)/||p||, and the solution requires a finite accuracy defined in agreement with the data stop criterion given in advance.

System (13) is normally over-determined since the dimension L of the data grid is greater than the dimension N of the upscaled model parameter space. The LS solution of this problem can be constructed as the following.

Let us define an approximate value of the mismatch function at the k + 1 step of the general search strategy (9) via the previous step value and the Jacobian operator (Golub, Van Loan, 1983):

r[(xk+1,p)] ≈ r[(xk,p)] + J(xk)(xk+1 − xk), (14)

where J(xk) represents the Jacobi matrix of the vector function r(xk,p). The relevant components Jln can be defined in the finite difference approximation of the vector function r(xk,p) as the following:

J~ln = {rl[(x + δxn en,p)] − rl[(x,p)]}/δxn, l = 1,2,…,L; n = 1,2,…,N, (15)

where rl is the l-th component of the vector function (r1, r2, r3, …, rL)T. The finite difference approximation of the Jacobi matrix (15) will be denoted in this section as Jk~ with the low index indicating the current position within the model parameter space according to the iteration step.

By analogy with the gradient strategy for the scalar mismatch function RN[M(x),p], computing the components of the Jacobi matrix requires N solutions of the relevant forward problem. If the current k-th increment of the model vector in agreement with the general construction (9) of the search strategy is sk, its definition at each new step is reduced to the solution of the over-determined linear problem:

r[(xk,p)] + Jk~ sk = 0 (16)

with respect to the unknown components (s1, s2, s3, …, sN)T.

Note that the initial mismatch level r[(x0,p)] is given in accordance with the start point x0 ∈ XεA within the model parameter space.

Thus, the target Newton-Gauss problem can finally be written in the compact form as the following:

xk+1 = xk + qk sk, k = 0,1,2,…,

Jk~ · sk = − r[(xk,p)]. (17)
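A sketch of the Newton-Gauss iteration (15)-(17) for a small dense problem; the fixed step qk = 1, the normal-equations solve and the toy linear residual below are simplifications relative to the refinements described in Appendix 5:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small N x N system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    s = [0.0] * n
    for r in range(n - 1, -1, -1):
        s[r] = (M[r][n] - sum(M[r][c] * s[c] for c in range(r + 1, n))) / M[r][r]
    return s

def gauss_newton(residual, x0, eps=1e-9, dx=1e-6, max_iter=25):
    """Iteration (17): build the finite-difference Jacobian (15) and solve
    the over-determined system J s = -r in the LS sense (normal equations)."""
    x = list(x0)
    for _ in range(max_iter):
        r = residual(x)
        if sum(ri * ri for ri in r) ** 0.5 <= eps:   # data stop criterion
            break
        L, N = len(r), len(x)
        J = [[0.0] * N for _ in range(L)]            # L x N Jacobian (15)
        for n in range(N):
            xp = list(x)
            xp[n] += dx
            rp = residual(xp)
            for l in range(L):
                J[l][n] = (rp[l] - r[l]) / dx
        JtJ = [[sum(J[l][i] * J[l][j] for l in range(L)) for j in range(N)]
               for i in range(N)]
        Jtr = [sum(J[l][i] * r[l] for l in range(L)) for i in range(N)]
        s = solve_linear(JtJ, [-v for v in Jtr])     # LS step s_k
        x = [xi + si for xi, si in zip(x, s)]        # q_k = 1 for simplicity
    return x

# Over-determined linear toy residual (L = 3 > N = 2), exact root at (1, 2)
fit = gauss_newton(lambda x: [x[0] - 1.0, x[1] - 2.0, x[0] + x[1] - 3.0], [0.0, 0.0])
```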

Clearly, the difference between formulas (10) and (17) is in the way of detecting the appropriate step in the multidimensional model parameter space. Generally, the direction of each new step in the Newton-Gauss strategy does not coincide with the direction reversed to the gradient of RN[(xk,p)] (Dennis, Schnabel, 1983).

The definitions of the scalar qk and the vector sk in (17) represent crucial sub-problems of the designed solution algorithm. They are considered in Appendix 5 in more detail.

By analogy with the gradient strategy, it is important to make a short conclusion about the converging speed of this process and its locality.

The converging speed of the Newton-Gauss strategy is generally higher since it uses a local approximation of the mismatch function by a quadratic form. Practically, in connection with the posed inverse problem, the converging speed of the procedure appears to be (on average) one-two orders of magnitude higher than the one detected for the gradient strategy. The price for the high speed is, as usual, the quality of the solution in terms of the finally achievable mismatch value. Still, the concept of ε-equivalence of the locally unique SIP solutions allows neglecting this for most of the practically important applications.

The locality problems for the e-equivalent solution achieved by a final step are the same as for the gradient strategy. However, the higher average converging speed allows fast checking of several local solutions obtained from different start points x ∈ XA within the model parameter space.

The relevant approach is implemented in multi-well data inversion strategy, which is considered in the next section.

5. The extension of the inverse problem solution for multi-well data set case

Here, the computation scheme developed for resolving the SIP formulated for a single well data set is extended to be applicable to the more general multi-well data set situation, which is normally the case at area calibration. As it follows from the specifics of the inverse problem defined for the EBM class and the analysis of its local non-uniqueness, the possible strategy of regularisation and globalisation of the relevant optimisation problem (2) implies involving some posterior6 limitations constraining the conditionally correct solution. The key idea of such constraints can be formulated as the following.

5 This scalar is formally introduced to refine the achieved LS solution (Moisejev et al., 1978) and also to account for non-successful termination of the searching strategy (9) and getting a penalty step (see s/section "Jumping into the next start point" in Appendix 7).

Repeated resolution of the SIP for single well data sets associated with calibration wells within a geologically common area, being posterior-summarised, can deliver additional information about the lateral distribution of the model parameters and their range of variation within the area. It could be interpreted and used in the form of posterior constraints imposed on the subspace XeA (Goffe et al., 1994) (see the speculative example in Fig. 5).

5.1. Principles

Within the range of the local optimisation approach to the SIP solutions, without involving additional statistically driven criteria, it is still possible to build aimed limitations on the subspace XeA and to find an effectively unique solution based on the posterior analysis available in the case of a multi-well calibration data set.

According to the specification of the data comparison space (see Appendix 2) any multi-well data set can be made available as an integration of the single well definitive data sets (pore pressure and porosity curves) attached to the geologically common calibration area RC in agreement with the offset well positions.

Clearly, the results of SIP solutions achieved for each single well and sorted according to the relevant well locations formally deliver independent input about the area distribution of model parameters. This input can be used for generalisation of the mismatch function RN[M(x),p] as an additional regularisation factor.

Let us consider the speculative example shown in Fig. 6. The single well data inverse problem solutions schematically displayed in Fig. 6a are based on effective satisfying of the data stop criterion on an a priori defined subset of possible solutions XA. A priori available information is normally rather general and does not allow sufficient constraints on the model parameter space to be imposed. Thus, every single solution {x*} = M^-1[p*] achieved for the SIP on every single calibration well will generally suffer from non-uniqueness. In contrast, the SIP solutions {M^-1[p*]} achieved for a multi-well data set (Fig. 6b) allow control over the configuration of model parameters and their variability within the sub-space XA. This in turn enables generalisation G{ } of the goal function RN[M(x),p] and provides posterior regularisation of the solution performed simultaneously with the inverting process.

Thus, multi-well data set inversion potentially could get the conditionally unique e-equivalent solution within the calibration area RC.

It is important to stress that the non-uniqueness problem associated with the local solution of the ill-posed SIP is caused more by a not-global-enough goal function than by finite data quality. The more general the optimisation criteria that can be elaborated based on model assumptions and satisfied based on available data, the more unique the inverse problem solution that can be reached. According to the general principles of system analysis (Houberman, 1987), the constraints posteriorly imposed on the model parameter space should be elaborated simultaneously with data analysis in order to avoid subjective factors. Since the multi-well data set provides repetition of the SIP solution for the same Earth model from many different start positions7, these requirements of system analysis can potentially be satisfied.

5.2. The approach to globalisation of goal function

The approach to globalisation of the goal function RN[M(x),p] in connection with multi-well data set inversion is based on the following statements:

• The single well data inverse problem is based on effective satisfying of the data stop criterion on an a priori defined subset of the possible solutions XeA, and the relevant components of the vector-solution are interpretable in terms of the distribution of model parameters in space. Namely, they can be interpreted as lateral variations of the relevant formation properties within the calibration area RC.

• On condition of an appropriate formation setting, the lateral variations of such fundamental formation properties as the compaction and geo-fluid conduction constants are not random but are expected to be smooth, in agreement with the continuous and trend-like spatial changes in the sedimentation environment expected according to the burial history of the relevant formation.

6 I.e. following from calibration experience.

7 Different SIP solutions achievable in this case for the same calibration area could be interpretable as points of view on multidimensional model space landscape.

Fig. 6. The operator schemes of the inverse problem solution for the same Earth model based on single (a) and multi (b) well data sets

• The more uniform the spatial distribution of the SIP solution components achieved within the calibration area, the more accurate the result of their interpolation at a target well position will be.

• The information about sensitivity and variability of the formation model parameters can be stored and analysed based on the current SIP solutions consequently achieved on all available calibration wells in the area. This allows elaboration of a more general combined stop criterion, which incorporates the data quality 1-D stop criteria at each individual calibration well with the current solution quality: the range of model parameter variation, the flatness of the model parameter area distribution and the repeatability of the area solution from different start points of the multi-well data inversion. Below, these posterior criteria are considered in more detail and illustrated with MW data inversion examples.

5.3. Posterior estimations of EBM parameter variability. RV-criteria

The range and variability criteria aim at minimisation of the final variation range and lateral gradient (variability) for every component of the current MW data inversion solution, in agreement with the second and third statements of the goal function globalisation approach. Note that each of them is developed in the form of soft and/or hard constraining (Definition 13 in Appendix 3) to be included into the optimised goal function.

Let the area distribution of the j-th component of the current solution x ∈ XeA of the multi-well data inversion problem be given in agreement with the calibration well positions within the restricted calibration area RC. Let also the list of calibration wells contain M elements, each of which (well) can be used as the reference k-th element (base well) at a current trial test.

Let's define the k-th variation range and variability for j-th component of the current solution x as the following.

Range of the j-th component variation:

[rjm]k = (xjm − xjRef) / xjRef, (18)

where xjm, xjRef are the j-th component values at the m-th and the reference (k-th) calibration wells, respectively; m = 1,2,...,M. The Ek=1,K{ }k denotes the operator of most likely value estimation from K reference points (K < M).

Gradient of the j-th component variation:

Ek=1,K{[gjm]k}, (19)

where [gjm]k = [rjm]k / D(x,y); D(x,y) is the distance between the current (m-th) and reference (k-th) calibration wells according to their positions within the area RC. The Ek=1,K{ }k denotes the operator of most likely value estimation from K reference points (K < M).

Then the RV-criteria can be associated with the following scalar estimations:

Average / Maximum Range of the j-th component variation scalar criterion:

Ek=1,K{[vj]k}, (20)

where [vj]k = [Σm=1,K−1 (rjm)k /(K−1)] / maxm=1,K−1 (rjm)k, m = 1,2,...,K, k ≠ m. The Ek=1,K{ }k denotes the operator of most likely value estimation from K reference points.

Average / Maximum Gradient of the j-th component variation scalar criterion:

Ek=1,K{[Gjm]k}, (21)

where [Gjm]k = [vj]k / D(x,y); D(x,y) is the distance between the current (m-th) and reference (k-th) calibration wells according to their positions within the area RC.
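A compact numerical reading of the RV estimations (18)-(21) may look as follows. This is a sketch under assumptions: the most-likely-value operator E is taken here as the median, and the average/maximum ratios follow one plausible reading of the scalar criteria; neither choice is prescribed by the paper.

```python
import numpy as np

def rv_criteria(xj, coords):
    """Range/variability (RV) estimates for the j-th solution component,
    after Eqs. (18)-(21).  xj: component values at the M calibration
    wells; coords: their (x, y) positions.  The operator E_{k}{ }_k is
    taken as the median over reference wells (an assumption)."""
    xj = np.asarray(xj, dtype=float)
    coords = np.asarray(coords, dtype=float)
    M = xj.size
    v_list, g_list = [], []
    for k in range(M):                          # each well in turn as reference
        m = np.arange(M) != k
        r = np.abs(xj[m] - xj[k]) / np.abs(xj[k])     # Eq. (18): relative range
        d = np.hypot(*(coords[m] - coords[k]).T)      # distances D(x, y)
        g = r / d                                     # Eq. (19): lateral gradient
        v_list.append(r.mean() / r.max())             # Eq. (20): average/max range
        g_list.append(g.mean() / g.max())             # Eq. (21): average/max gradient
    return np.median(v_list), np.median(g_list)
```

Both returned scalars fall between 0 and 1 by construction (an average never exceeds the maximum), which matches the monotonic behaviour of the RV criteria reported below.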

At multi-well data inversion tests (see the next section) the RV criteria reveal monotonous and continuous behaviour on the conditions that the a priori given range of the model parameter variability XA is finite and the inter-well distance anywhere inside the area RC is > 0. The example in Fig. 7 illustrates this.

Fig. 7. Range - Variability criteria example. (a-d) - four consequent maps of the model parameter "specific surface" distribution based on SIP solutions for 12 wells (marked by circles); (e-f) - values of the Averaged range (e) and MAX variability (f) criteria sorted in agreement with the list (a-d)

5.4. Posterior estimations of EBM parameter similarity. Sim-criterion

The similarity criterion aims at minimisation of the risk of getting locally non-unique final solutions of the multi-well data inversion problem. As was stated above, the vectors of e-equal inverse problem solutions that belong to the same simply connected subset {δx} ⊂ X are rather close to each other in the distance scale introduced within the multidimensional model parameter space X (Definitions 1 and 12, Appendix 3). Evidently, the area distributions of multi-well data inversion solutions achieved from K offset wells for the same calibration area and expressed in map form should reveal similarity. This quality of the multi-well solution does not yet guarantee achievement of local uniqueness but at least can be treated as a guideline on the way to it. The relevant area similarity criterion is introduced below for two arbitrary discrete maps representing the lateral variation of model parameter components within the calibration area RC.

Let the area distribution of some x,y-discrete target value g within the restricted area Ω be given together with its (x,y)m coordinates by the series S: {(x,y,g)1, (x,y,g)2, ..., (x,y,g)M}, where m = 1,M is the list of measurement points (wells).

Let M sets of such M-element series be determined, with the (x,y)m coordinates at each equal index being the same.


The continuous and monotonous scalar similarity criterion Q(j,i) (Sim-criterion) between two compared series Sj and Si is aimed to be constructed in such a way that Q(j,i) → 0 when the Sj and Si series are absolutely identical.

Define the M×M matrix Rj attached to the Sj series with the n,m elements given as the following:

rn,mj = (gmj − gnj)·Dn,m, n,m = 1,2,...,M, (22)

where Dn,m is the Euclidean distance between the n-th and m-th elements of the Sj series. The diagonal elements of the matrix Rj are evidently equal to zero.

The average similarity element for two compared series Sj and Si can be defined for every row of the matrices Rj and Ri as the following:

ρnji = (1/pnji)·Σm=1,M (rn,mj · rn,mi), if pnji > 0;
ρnji = 1, if pnji = 0, n = 1,2,...,M, (23)

where pnji = Σm=1,M (max{|rn,mj|, |rn,mi|})2.

The similarity criterion for the two compared series Sj and Si can finally be expressed by the formula:

Q(j,i) = (1 − ρji)/2, (24)

where ρji = (1/M)·Σn=1,M ρnji, and Q(j,i) varies monotonously between 0 (the series are absolutely identical) and 1 (the series are reversed).

At multi-well data inversion tests (see the next section) this criterion (Sim-criterion) reveals monotonous and continuous behaviour on the conditions that the a priori given range of the model parameter variability XA is finite and the inter-well distance anywhere inside the area RC is > 0. The example in Fig. 8 illustrates this.

Fig. 8. Similarity criteria example. (a-d) - four consequent maps of the model parameter "specific surface" distribution based on SIP solutions for 12 wells (marked by circles); e - area averaged map; f - values of the Sim-criterion sorted in agreement with the list (a-d).


5.5. Definition of the specific inverse problem for multi-well data set. The step-by-step algorithm of its solution

The aimed definition of the considered inverse problem for the multi-well data case follows from the definitions introduced above (see s/section 4) for a single well case, where the mismatch function is extended based on posterior analysis of the solution quality. These extensions are made in agreement with the RV-criteria and Sim-criterion introduced above.

Let {p*} be the full set of definitive calibration data vectors attached to each calibration well from the list k = 1,2,...,K, defined on the Data Comparison Space P in agreement with its specification (see Appendix 2) together with a common-for-the-area estimation of e-locality.

Let also the Effective Basin Model (EBM) X for inversion of the multi-well data set {p*} be predefined and upscaled in agreement with Model Parameter Space specification and strategy of its upscaling (see Appendix 1).

The conditionally correct SIP solution for the multi-well data set {p*} is sought on the a priori given model parameter sub-space XeA ⊂ X in terms of a conditionally correct e-equivalent solution and in agreement with (2). Namely:

The first possible vector x ∈ XeA ⊂ X which stably delivers satisfaction of the following generalised mismatch criteria:

{ RN[M(x),p*] ≤ e, (25)

{ U(x) ≤ α

will be treated as a stable e-solution of the SIP for multi-well data sets.

Here, RN is the scalar mismatch function (see Definition 12, Appendix 3) defined in agreement with the multi-well estimation of e-locality (area data quality criteria); U is the scalar penalty function, which constrains the solution in agreement with a priori given and a posteriori elaborated limitations on the Model Parameter Space.

Note that, in contrast with the formulation of the SIP for a single well case, this definition is arranged with restrictions imposed on the model parameter space in the form of hard and soft restrictions (see Definition 13, Appendix 3). In particular, the constraints assigned inside the a priori given search sub-space XeA are supposed to be achievable based on a posteriori estimations of the EBM variability (RV-criteria) and consistency (Sim-criterion) within the calibration area.
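The generalised criterion (25) amounts to a two-part acceptance test on a candidate model vector. The tiny helper below illustrates the idea, with the penalty U split into hard bounds and a soft posterior term; the split and all names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def accept(x, misfit, eps, lo, hi, u_post, alpha):
    """Acceptance test after Eq. (25): the data mismatch R_N must reach
    the area data-quality level eps while the penalty U(x) stays below
    alpha.  Hard bounds lo/hi reject a candidate outright; u_post(x) is
    a soft posterior penalty (e.g. an RV/Sim-based term)."""
    x = np.asarray(x, dtype=float)
    if np.any(x < lo) or np.any(x > hi):        # hard restrictions on X_eA
        return False
    return misfit <= eps and u_post(x) <= alpha # R_N <= e  and  U <= alpha
```

A candidate failing either part is rejected, so the first vector passing both is the stable e-solution in the sense of (25).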

The generic strategy of getting the SIP solution (25) can be expressed in the form of two embedded loops along the K available calibration wells. The internal and external loops are aimed at consequent satisfying of the data quality and model quality criteria, respectively. Namely:

The internal loop includes four consequent operations to be repeatable:

1. getting the SIP solution for some k-th single reference well picked as a base well;

2. spreading the achieved solution on the rest of the wells - offset wells;

3. getting the SIP solution for every m-th calibration well from list 1,2,...,K-1;

4. consequent shrink of initial variation range in agreement with the last step and checking RV-criteria.

The external loop implies running over all base wells available from the calibration well list k = 1,2,...,K with successive achievement of the SIP solution for the same calibration area together with redetermination of the relevant model quality criteria. Thus:

• the data-model misfit stop criterion (RN[M(x),p] ≤ e) is implemented for every single well SIP at the internal m-loop;

• the model variability RV-criteria (URV(x) ≤ α) are implemented for every multi-well IP achieved for the k-th base well at the external k-loop;

• the model consistency Sim-criterion (USim(x) ≤ β) is implemented by the end of the external k-loop along all available base wells.

This computation scheme can be expressed in the form of the following step-by-step algorithm.

STEP 1. Set up an a priori acceptable variability range for every lithology-controlled model parameter XeA ⊂ X and predefine the start position for the model parameter vector x0 ∈ XA.

STEP 2. Start the external k-loop over the list of base wells k = 1,K.

2.1. Get the locally unique e-equivalent solution of the single well inverse problem for the first base well according to the combined Gradient and Newton-Gauss strategy (see s/section 4).

2.2. Spread the solution vector xk on the rest of the calibration wells.

2.3. Start the internal m-loop over the list of calibration wells m = 1,2, ..., K, k ≠ m.

• Get the locally unique e-equivalent solution of the single well inverse problem xkm for every m-th calibration well involved in the internal m-loop.

2.4. Based on the set {xkm}, evaluate the range and variability of the k-th area solution.

2.5. Check whether or not the RV-criteria are satisfied:

if NOT: Shrink the MP range in agreement with the current MAX variability evaluated over {xkm}, spread the updated solution vector x over all calibration wells, include the k-th base well into the list of the internal m-loop and return to 2.3.

if YES: Include the achieved solution into the list for establishment of the averaged area model distribution.

2.6. Check whether or not the list of base wells is finished.

if NOT: Take the next calibration well as a base one (k = k + 1) and return to 2.2. if YES: Form the current averaged image of the area solution Ek=1,K[{xkm}] and check whether or not the Sim-criterion is satisfied:

if NOT: Set up a new global start point and return to 2.2. if YES: Go to the finishing STEP 3.

STEP 3. Get the area unique e-equivalent solution of the multi-well SIP.
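The two-loop scheme above can be condensed into the following control-flow skeleton. Every domain-specific operation (the single-well SIP solver, the RV and Sim checks, range shrinking, averaging, restarting) is injected as a callable; all names are illustrative, and only the loop structure follows the text.

```python
def multi_well_inversion(wells, solve_single, rv_ok, sim_ok,
                         shrink, average, new_start, max_restarts=3):
    """Skeleton of the step-by-step algorithm of s/section 5.5.  The
    injected callables carry all the physics and the actual criteria;
    this function only fixes the nested-loop control flow."""
    for _ in range(max_restarts + 1):           # restart on a failed Sim-criterion
        area_solutions = []
        for base in wells:                      # external k-loop over base wells
            x_base = solve_single(base)                 # step 2.1
            xs = [x_base] * len(wells)                  # step 2.2: spread solution
            while True:                                 # internal m-loop
                xs = [solve_single(w, x0)               # step 2.3
                      for w, x0 in zip(wells, xs)]
                if rv_ok(xs):                           # steps 2.4-2.5: RV check
                    area_solutions.append(xs)
                    break
                xs = shrink(xs)                 # RV failed: shrink the MP range
        if sim_ok(area_solutions):              # step 2.6: Sim-criterion check
            return average(area_solutions)      # STEP 3: area-unique solution
        new_start()                             # set a new global start point
    raise RuntimeError("Sim-criterion not satisfied within restart budget")
```

The `max_restarts` guard is an addition for safety: the paper's loop restarts from a new global start point until the Sim-criterion holds, and an unbounded restart would otherwise be possible.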

The short computation summary of this scheme can be done as the following.

The converging speed of the multi-well data inversion process is mainly controlled by the number of base wells involved in the external loop and the averaged speed of the inverse problem solution for a single well.

The quality of the solution in terms of its applicability for prediction purposes is in agreement with the finally reached non-random scatter in values (range) and flatness of the 2-D picture (variability) of the area solution, which are controlled by the RV-criteria.

The quality of the solution in terms of EBM consistency within the area is in agreement with the finally reached similarity of area solutions, which is controlled by the Sim-criterion. Moreover, if the set of area solutions {{x1m}, {x2m}, ..., {xKm}} belongs to a simply-connected model sub-set {δx} inside the a priori defined set XeA, then the given solution is unique for the area.

6. The real data test of multi-well data inversion

The North Sea basin in general, and the Viking graben in particular, represents an ideal area for investigation and simulation of the different overpressure mechanisms occurring in nature (see, for example, Chiarelli, Duffaud, 1980; Jhonson, Stewart, 1985; Buhrig, 1989; Knott, 1993; Thorne, Watts, 1989; Yardley, 1998). A solid empirical background for basin analysis is available for this region (Glinie, 1998). It allows generation of the relevant geo-fluid system model (Nordgard Bolas et al., 2004) and its upscaling based on EBM principles (Madatov, Sereda, 2003). The large number of exploration wells released for pressure investigation, on the other hand, creates favourable conditions for application of the developed SIP solution strategy at calibration of such models. Note that the original names of the wells used for calibration were changed.

6.1. The framework input information and calibration data

The Gullfaks area of the North Viking graben with 12 wells available (see Fig. 9) was taken as a calibration area for testing the algorithms described above and the corresponding computer programs.

The local geology setting is well documented (Glinie, 1998). It can be illustrated by the area-common litho-stratigraphy column and the main aquifer formation structure map represented in Fig. 10.

Fig. 9. The test area with the calibration wells (little red circles) marked. The black filled regions represent the main HC fields; the brown curve is the western Norwegian coast line; blue - local GIS block cells. The well numbers are fictitious.

Fig. 10. The area-common stratigraphy column with the main aquifer formations marked by arrows (left) and the structure map associated with the Fm_5 top with the fault tectonic interpretation. The test area contour (dotted blue polygon) with calibration well markers (white circles) is imposed

Fig. 11. Getting definitive data curves for calibration well 802. Legend: LOG-to-porosity transform; 95 % confidence interval; definitive porosity curve; RFT measurements; equivalent mud weight; definitive overpressure curve

The main geological features having impact on the present day pore pressure distribution within this region according to (Buhrig, 1989) could be listed as follows.

- Two different types of subsidence during rifting and post-rifting phases, which lead to quite isolated pressure regimes above and below the base Cretaceous unconformity within the post rift and rift megasequences, respectively.

- Fault-block tectonics on the rift megasequence with downward rotation and listric geometry, which can create lateral sealing between different pressure compartments.

- Several periods of non-deposition or uplifting and erosion during Jurassic-Early Cretaceous time which could cause simultaneous local release of the overpressured pore fluid.

- Higher temperature gradient in the pre-axial part of the graben due to different sedimentation and cooling rates. Consequently different start times for the maturation and HC-generation in source rocks and different phase content of the pore fluid at the present day.

As a result the mechanisms responsible for the pressure development vary from one pressure compartment to another and from the upper level to the deeper level in the section. More detailed review of the overpressuring mechanisms acting in the region is available in the published literature (Chiarelli, Duffaud, 1980; Buhrig, 1989; Helth et al, 1994).

The multi-well data set was constructed from definitive single well data sets in agreement with the specification rules introduced for the Data Comparison Space (see Appendix 2). The particular content of the raw data sets for every calibration well involved in the test is summarised in Table 1. An example of multi-source data interpretation resulting in establishment of the definitive data is represented in Fig. 11.

The quantitative analysis of the area-representative data quality was based on the general conclusions about the measurements' accuracy and least square error estimations in agreement with the vector norm introduced above for the data space. The data quality stop criterion summarised for all available calibration wells was estimated in dimensionless units as 0.03.

Table 1. The data input available for definitive calibration data sets

Well Name | Overpressure scale measurements | Indirect data | Porosity scale measurements | Related well logs

023 | RFT (F2, F5, F8, F11) | EMW, Kicks, Gas log | (F2, F5, F8, F11) | BCS
035 | MWD | CDE, Gas log | - | BCS, FCD
101 | RFT (F2, F5); MWD | EMW | (F2, F5) | BCS
105 | RFT (F2, F5, F8, F11) | EMW, Gas log | (F2, F5, F8, F11) | BCS, CN
807 | RFT (F2, F5); MWD | CDE, Gas log | (F2, F5) | BCS
804 | MWD | CDE, Gas log | - | BCS
202 | RFT (F5) | EMW | (F5) | BCS, FCD
801 | RFT (F5, F8) | EMW, CDE | (F5, F8) | BCS, FCD
802 | RFT (F5, F8) | EMW, CDE | (F5, F8) | BCS, FCD
803 | RFT (F5, F8) | EMW, CDE | (F5, F8) | BCS
401 | RFT (F2, F5); MWD | Gas log | - | BCS
301 | RFT (F5); MWD | EMW, Kicks | (F5) | BCS

Abbreviations used in the table: RFT - Repeated Formation Test; MWD - Measurements While Drilling; EMW - Equivalent Mud Weight; CDE - Compensated Drilling Exponent; BCS - Borehole Compensated Sonic log; CN - Compensated Neutron log; FCD - Formation Compensated Density log. See Appendix 2 for more details.

Fig. 12. The upscaling of the Earth model in terms of sensitive formation units: a - the area formation column before (left) and after (right) upscaling (porosity axis in %); b - the burial history backstripping sketch; c - the relevant present day overpressure model (solid curve) before (upper) and after (lower) upscaling (overpressure axis in g/cc). Well 802 is taken as an example. The pressure definitive data set is represented by markers.

6.2. The Earth model upscaling and getting effective basin model ready for calibration

Formally, the aims of the 3-D Earth model upscaling within the calibration area consist in organising the geology input in agreement with the rules defined for the model parameter space X specification and in optimising the dimension of this space. Practically, the upscaling routine includes repeated sensitivity analysis of the combined 3-D hydro-dynamic model and successive removal from it of the least sensitive elements (formation units and fault segments) at every current stage of simplification. The final level of acceptable modelling error should be constrained by the area-representative data quality estimation. One of the most important additional issues of upscaling is the capability to allocate the sub-set of the possible inverse problem solutions XeA within the upscaled model parameter space X.

The strategy and technique of upscaling were discussed in details in (Madatov, Sereda, 2003). Here we just summarise them for our calibration area as the following:

1. The number of the formation units sensitive for the target model was reduced from the initial 13 down to 7. The reduction process repeated on all calibration wells reveals consistency in terms of the specification and sequence of removable formation units. The final formation content turned out to be common for the area and looks as shown in Fig. 12.

2. The CPU time required for a single iteration of the SIP solution at single well data inversion before and after upscaling was estimated as 7 and 0.7 sec, respectively. Since the averaged converging process before reaching the data stop criterion took about the same number of iterations (4-8), the achieved acceleration averaged over all of the involved wells was estimated to be around 10 times. Note that the CPU requirements for multi-well data inversion are estimated to be around (TS)(3NUref/2NUups), where TS is the CPU time required for a single iteration at a single well and NUref, NUups are the numbers of formation units involved in the data inversion at the reference and upscaled model, respectively. Thus, the total acceleration effect from this upscaling for the planned multi-well data inversion test can be estimated as about 10^2.7-10^2.8 (500-630 times). Note that this time economy becomes possible without losing the required SIP solution accuracy.

3. The aquifer formation F5 was recognised as the major pressure communication unit at the target Jurassic interval of the section, responsible for lateral pressure compartmentalisation. The fault system associated with this formation was upscaled in terms of sufficiently sensitive segments according to the generic upscaling strategy. Thus, the 3-D combined HD Earth model was finally composed as shown in Fig. 13.

4. The upscaling routine, in agreement with its strategy, is finished at the stage where the single well data inversion problem is resolved for each of the offset wells. Despite the fact that the EBM structure at this stage is assumed to be optimally simple in terms of the number of formation units and fault segments, the area distribution of the model parameters is generally not optimal. This is because of the local non-uniqueness of the SIP solutions generally achieved for any j-th calibration well during upscaling. The model parameter maps and sensitivity vector area diagram in Fig. 14a and 16b illustrate this common situation achievable by the end of the upscaling stage.

Thus, the multi-well data inversion problem can practically be reduced to some rearrangement of the single well SIP solutions achieved by the end of the upscaling phase, based on involving more global criteria and application of the area regularisation routine described above.

Fig. 13. The 3-D image of combine HD Earth model of the calibration area after upscaling in terms of formation units and fault segments. (The Z axis scale is 10 times bigger than X,Y ones)

Fig. 14. The polar sensitivity plots (Appendix 6) attached to the calibration area mesh (blue lines) in agreement with offset well positions. The sensitivity vector was computed for the SIP solution achieved at the single well (a) and multi-well (b) data set inversion modes.

Fig. 15. The summary of the multi-well data inversion iteration process:

a - converging of the fitting process to the data stop criterion at one of the calibration wells (No. 807) during the external loops of iterations attached to the different base wells (colour code of bar plots); the vertical scale is the misfit measure. b - misfit between synthetic (solid curve) and definitive real data (markers)

representing overpressure for test well 807 in specific gravity scale. c - converging to the minimal area variability of model parameters (RV criterion) during the external loops of iterations attached to the different base wells (colour code of bar plots); the horizontal scale is the variability measure in normalised model parameter space. The lightest bar against the relevant formation unit corresponds to the start variability. The shrinks of variability by the end of each internal loop are marked with a darker colour of the relevant horizontal bar.

6.3. The SIP solution test at multi-well data inversion

The step-by-step algorithm described in s/section 5.5 was implemented as a computer program and applied to the 12-wells data set in order to invert it into the upscaled model parameter space. A multiple inversion of the same 12 wells' data set was repeated from different start points associated with the 12 base wells according to the step-by-step algorithm. The variability ranges of each adjustable parameter were continuously shrunk, and the RV criteria (see definition in s/section 5.3) were controlled by the end of every single internal loop attached to the given base well (see Fig. 15c). Finally, the quality of the current solution available by the end of the given external loop was checked against previous solutions based on the Similarity criterion.

The dynamics of iterative process converging to the satisfactory SIP solution is displayed in Fig. 15.

Apart from the formal criteria, the results of testing were analysed based on a review of the model parameter maps, misfit maps and sensitivity polar plots, as shown in Fig. 14 and 16. One of the primary aims of this computer test consisted in checking and verification of the expected improvement of the IP solution in comparison with the rough IP solution composed from the set of single well data inversions achieved for each individual calibration well and then interpolated and mapped as shown in Fig. 16b. From this point of view the results of the given test can be summarised as the following:

1. Simultaneous satisfaction of the data quality criterion (misfit criterion) and the model quality criteria (RV- and Sim-criteria) is not achievable based on a solution composed from well-by-well single well data set inversions. Namely: an IP solution acceptable from the data misfit point of view reveals a mosaic model parameter map for every sensitive component of the vector solution x ∈ XA. In other words, it is not acceptable from both the RV-criteria and Sim-criterion points of view (Fig. 16b). The reverse statement is also true. For example, spreading of a single well SIP solution from the well where it was achieved (marked with the stars in Fig. 16a) along the rest of the wells results in a fully uniform area distribution of model parameters. This physically absurd result naturally leads to a non-acceptable misfit for some of the offset wells (see Fig. 16, upper row). In contrast, the multi-well data set inversion converges toward simultaneous satisfying of the data quality (misfit) and model quality (range, variability and similarity) criteria, as demonstrated in Fig. 16c.

2. The 12-well data set appears to be sufficient for getting an area-consistent and unique solution that satisfies reasonable data quality and model quality criteria. The set of final solutions represented in map form as in Fig. 16 reveals good similarity when reviewing the same model parameter distribution achieved from different base wells8. Since the image of every map was refined during the external loop independently for each base well, the final area solution of the multi-well inverse problem was not forced in this test to be close to the area-averaged image by soft or/and hard constraints. Despite the slightly bigger CPU time requirements of such an unforced iterative process, the final results give confidence that it converges to the same area-unique ε-equivalent solution regardless of the base well position. This in turn allows us to state that the developed algorithm is globally convergent.

Fig. 16. The example of SIP solution in the form of maps: compaction constant in [1/m] (left) and specific surface constant in logarithm scale in [1/m] (middle) against the achieved within-area data-model misfits in percent (right map) for single well data inversion (upper row "a", with the calibration well marked with the star); well-by-well single well data inversion performed consequently for all the calibration wells (middle row "b"); and multi-well data inversion averaged over all base wells (lower row "c").

8 See step-by-step algorithm in s/section 5.5.

3. The quality of the achieved multi-well SIP solution was additionally estimated based on sensitivity analysis. The sensitivity vectors computed for every calibration well reveal much more consistency in comparison with the ones based on single well IP solutions (compare Fig. 14a and 14b).

4. The multi-well SIP solution computer routine allows both an interactive mode, with model parameter map browsing, start model adjustments, real-time tuning of adjustable routine parameters, etc., and a fully automatic mode. A user-controlled routine can normally save up to 80 % of the required CPU time, but it can be recommended only for an advanced user. The application of the SIP solution program for Earth model calibration in new regions will obviously require some time for adaptation of the main adjustable parameters and tuning of the data quality and solution quality (RV and Sim) criteria, as well as refinement of the EBM settings. The active user interface is the only option for such an implementation of the SIP solution program.

7. Conclusion

The class of effective basin models (EBM) within the deterministic phenomenological approach assumes the definition of a continuous forward modelling operator capable of reproducing the major overpressuring mechanisms recognisable in nature. The inverse problem with regard to normally available porosity and formation pressure well data can be posed in both single well and multi-well modes. In the context of Earth model calibration purpose-built for overpressure prediction, such approaches appear to be practically resolvable within the range of non-linear optimisation algorithms with prior and posterior model-driven constraints. In particular, the 1.5-D forward model appears to be suitable for single well data inversion despite the fact that the relevant SIP solution is locally non-unique.

The generic strategy of single well data inversion implies producing a set of ε-equivalent local solutions belonging to the simply connected set X^ε_A of possible solutions defined a priori within the upscaled model parameter space. The developed algorithm of the SIP solution incorporates the advantages of the Gradient and Gauss-Newton methods. It provides the relevant computer engine with high stability, convergence speed and a flexible strategy of process termination. Still, the non-uniqueness problem remains unresolvable within the range of the single well data inversion problem.

Thus, application of the single well data inversion approach to the calibration problem seems to be risky, since it inevitably suffers from global non-uniqueness. Namely, here the set of local SIP solutions belonging to rather different model sub-types is formally integrated within the common framework. Despite its computational efficiency and apparent simplicity, this strategy is not optimal and can potentially lead to the generation of formally equal but physically and geologically rather different prediction scenarios.

Incorporation of the EBM in combined 3-D modification (Madatov, Sereda, 2005) with the multi-well data set inversion strategy appears to be much more optimal in terms of prediction risk minimisation. Still, without resolving the single well data inversion problem it would be hard to construct a solution strategy for this more general and practically more useful inversion problem.

The multi-well inversion process implies repeated solutions of the SIP for single well data sets against the same sub-set X^ε_A of a priori possible solutions within the upscaled model parameter space. The convergence speed for every particular single well data set generally increases as the start point position is refined. A typical definitive single well data set requires a few seconds to be inverted against the upscaled EBM. Thus, the given strategy provides a practically acceptable SIP solution.

The key point of the multi-well data inversion strategy is that the constraints imposed on the a priori defined sub-set X^ε_A of possible solutions are elaborated simultaneously with the multiple solution of the data fitting problem for every single well and independently from them. This allows regular refinements of the area solution based on satisfaction of rather general model quality criteria: minimal range variability (RV-criterion) and maximal similarity of adjacent SIP solutions (Sim-criterion). This process converges to a globally unique SIP solution for the calibration area.
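The Sim-criterion compares adjacent SIP solutions for similarity. The paper gives no explicit formula in this section, so the sketch below uses a cosine similarity of two model parameter vectors as a plausible stand-in measure (an assumption): 1.0 means identical directions in model parameter space, 0.0 means no similarity.

```python
import math

def sim_criterion(x_a, x_b):
    """Cosine similarity of two model-parameter vectors (illustrative
    stand-in for the paper's Sim-criterion; 1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(x_a, x_b))
    norm = (math.sqrt(sum(a * a for a in x_a))
            * math.sqrt(sum(b * b for b in x_b)))
    return dot / norm
```

A maximal-similarity requirement would then accept an area solution only when `sim_criterion` between solutions obtained from different base wells exceeds a chosen threshold.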

The computer tests reveal the practical applicability of the single well and multi-well data inversions. The SIP solution program provides a workable data-fitting computing engine for Earth model upscaling and calibration processes.

APPENDIX 1

Specification of a model parameter space

The implementation of the Effective Basin Model concept for overpressure prediction problem requires two general groups of geo-fluid system descriptors to be distinguished: local control functions of a geo-fluid system response and global control functions of a framework basin process (Madatov, Sereda, 2005).

The sketch in Fig. A1 clarifies this.

The basin subsidence, eustatic sea level and paleo-bathymetric curves are recoverable within the calibration area due to their global regional nature (Allen, Allen, 1990; Miall, 2000). The sedimentation and heating rates in steady state approximation along a formation unit can also be reconstructed based on bio-stratigraphy and paleo-temperature markers normally available as a general geology setting. This information, together with the geometry image of the sub-surface available from 3-D seismic data interpretation, creates the framework for the effective basin model in agreement with the EBM concept. The relevant level descriptors are assumed to be fixed in agreement with this framework.

The remaining local level descriptors control lithology-dependent phenomena: porosity losses (rock matrix compaction), pore fluid conduction and pore fluid generation. Part of them are taken in the EBM concept to be physical constants in the relevant equations of state (pore water isothermal compressibility and isobaric extensibility, sand grain and clay particle density, etc.). The rest of them form a set of model parameters, treatable as uncertain local descriptors sensitive to porosity and overpressure phenomena and adjustable against the relevant present day data.

Namely, every j-th formation unit in the EBM has to be arranged with its framework descriptors and model parameter set (C1-3,j). In addition, the set of i-th fault transmissibility parameters (F_i) has to be specified for every fault segment intersecting an aquifer formation. All together they form attributes of the relevant EBM elements. Thus, specification of the model parameter space X includes subdivision of the calibration area section into formation elements, detecting fault segments affecting aquifer formation integrity, and setting most likely default values and variation limits for the descriptors (C1-3,j) and (F_i). Note that the simplification of the EBM accessible after upscaling does not change the structure, but only the dimension, of the relevant model parameter space X.

Fig. A1.1. Backstripping technique (Sleep, 1971) for burial and compaction history reproduction (horizontal axis: absolute time scale in M years).

Fig. A1.2. Specification of the combined 3-D EBM model in the context of its calibration against the relevant multi-well data set: a - an area formation column with the stratigraphy (left) and lithology (right) settings, aquifer formations coloured orange; b - three structure maps representing the present day sub-surface geometry of aquifer formations with the depth scale in feet, the 1-D formation models attached to the relevant calibration well positions marked with red circles; c - representation of the combined 3-D HD model with 1-D models attached along section A-A'.

Let the number of formation units for some combined 3-D EBM be equal to J, and let M of them (M < J) be recognised as aquifer formations within the calibration area, enabling formation pressure communication at some time during burial history. Let the number of faults detected at each of the M aquifer formations be equal to I (I could vary from aquifer to aquifer). Let, finally, K offset wells be assumed to be used for the given combined 3-D EBM calibration. The relevant 1-D models must be included in the common framework in agreement with the 3-D EBM architecture, i.e. in the form of an incorporated set of 1-D and 2-D HD models intersecting in common hydrodynamic aquifer formation units (see Fig. A1.2). Then the specification of the given model parameter space can be represented as the following:

Set of 1.5-D (well) models: {C1, C2, C3}j,k,

∀ j = 1,2,...,J - number of formation units;

∀ k = 1,2,...,K - number of 1.5-D HD models.

Set of 2.5-D (aquifer) models: {Fi}m,

∀ i = 1,2,...,I - number of faults;

∀ m = 1,2,...,M - number of 2.5-D HD models.

The total number of components for specification of the given model parameter vector is given by N = 3J×K + I×M. Thus, the N-dimensional Euclidean vector space X^N stores in the general case all physically possible model parameter combinations, representable in the form of the model parameter vector as the following:

x = {[(x1^1, x2^1, x3^1), ..., (x1^J, x2^J, x3^J)]_1, ..., [(x1^1, x2^1, x3^1), ..., (x1^J, x2^J, x3^J)]_K; [(x4^1, ..., x4^I)_1, ..., (x4^1, ..., x4^I)_M]},

where J, K, I, M are in agreement with the parameterisation sorting rule.
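The dimension count and the flattening of the parameter sets into a single vector can be sketched as below. The function names are illustrative; the sorting rule (wells outermost, then formation units, then the three constants, followed by fault transmissibilities per aquifer) follows the specification above.

```python
def parameter_space_dim(J, K, I, M):
    """Dimension of the model parameter space: N = 3*J*K + I*M
    (J formation units, K wells, I faults, M aquifer formations)."""
    return 3 * J * K + I * M

def assemble_vector(C, F):
    """Flatten the parameter sets into the single vector x.
    C: K wells x J formation units x 3 constants (C1, C2, C3);
    F: M aquifers x I fault transmissibility constants."""
    x = [c for well in C for unit in well for c in unit]  # 3*J*K entries
    x += [f for aquifer in F for f in aquifer]            # I*M entries
    return x

# Tiny example: K=1 well, J=2 formation units, M=1 aquifer, I=2 faults.
C = [[[0.5, 2.0, 1.0e4], [0.6, 1.5, 5.0e3]]]
F = [[0.1, 0.9]]
x = assemble_vector(C, F)
```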

The initial default values and relevant variation ranges are defined for every C_j,k or F_i,m component in agreement with the lithology setting for the given j-th formation unit or i-th fault segment, respectively. Additional constraints could be imposed based on prior knowledge about the litho-facial conditions in which the given formation was deposited.

For example, no lithology type can have initial porosity bigger than 1.0 according to the definition of porosity as a representative measure of void space in porous rock (Oil and Gas Geology handbook, 1984). Summarising the experience collected for each particular lithology allows differential ranking of this range for different lithologies. Closer classification according to facies and local sedimentary basin conditions allows a more detailed classification with even stronger shrinking of the variation range for this parameter (Miall, 1996).

Table 2. Prior range of variation for the EBM parameter "initial porosity"

Lithology type | Expected range of variation | Additional conditions
Shale | 0.75-0.50 | Turbid 0.75-0.65; Shallow sea 0.70-0.55; Deep sea 0.60-0.50
Sand | 0.60-0.35 | River 0.60-0.50; Turbid 0.55-0.40; Shallow sea 0.50-0.45; Deep sea 0.45-0.35
Carbonate | 0.85-0.25 | Shallow sea: Dolomite 0.55-0.35, Chalk 0.45-0.25; Deep sea: Dolomite 0.85-0.60, Chalk 0.70-0.55

APPENDIX 2

An empirical background required for setting the forward model, and the multi-well data set required for its calibration. Specification of the data space

The discussed forward model according to the EBM concept implies establishment of two levels of controls over target phenomena: global-basin scale and local-formation scale (see (Madatov, Sereda, 2005) and Appendix 1 for more details).

In agreement with this there are two levels of empirical background to be considered in connection with forward model setting and calibration.

The upper level includes so-called framework indicators of the mutually dependent basic processes: stress, temperature and sedimentation development (Miall, 2000). They are all considered within the common basin framework as a priori given and fixed information (see Fig. A1.1). From this point of view they are an inherent part of the framework model that is not tuneable during calibration. As the steady state process approximation is accepted in the EBM concept on the formation level of integrity, the available indicators are used for unique recovery of the relevant gradients back through history.

The lower level of empirical data provides input for the data inversion procedure aimed at the EBM calibration. As the particular forward modelling operator was designed to produce synthetic data of the types overpressure, porosity and permeability, the relevant well data are considered as components of the multi-well data set.

Note that some synthetic data, like the overburden gradient, can be computed but have no measurable data prototype, and vice versa: well data such as the current temperature are normally available but cannot be used for inversion within the range of the EBM approach apart from recovering the present day thermal gradient.

Below the process indicators and multi-well data are reviewed.

Non-vertical stress indicators

The basin level information about the general tectonic environment is used for correction (if necessary) of the single stress component assumption taken for the effective stress model. Essentially it can be made available from the geo-dynamical type of the sedimentary basin (Stuve, 2002), which assigns a global stress framework for the region.

The local level specification of a 3-D stress environment could be made available from structure interpretation of seismic data in 3-D space and basin time scale, including fault analysis, salt tectonics investigation and so on. A present day image of subsurface geometry definitely delivers some information about non-vertical stress acting in the past. However, it is hard to use such a present day snapshot for unique paleo reconstruction of both global and local tectonic activity and the related lateral stress field back in basin time. Normally, the main tectonic events are defined in the basin stratigraphy column as activation time gates. They can be used as gross ideas about tectonic compression history and the maximum reached horizontal stress (Yassir, Bell, 1994). A more advanced seismic stratigraphy approach (Rowan, 1993) allows speculative restoration of paleo-geometry and paleo-deformation 3-D images formation-by-formation in agreement with tectonic activation time-space markers9.

The measurable indirect indicators of the non-vertical present day stress environment are associated with borehole breakout markers and leak-off tests.

Borehole breakouts are horizontal stress-induced well-bore elongations parallel to the minimum horizontal stress that occur mostly in mechanically weak rocks such as shale, if significant horizontal stress anisotropy exists close to the well (Yassir, Bell, 1994).

Leak-off tests deliver information about the minimal horizontal stress based on formation pressure measurement at the micro-crack closure level (Miller, 1995).

Both measurements are usually available at only a few points in an exploration well.

It is clear that a computer simulation of the 3-D forward problem for stress field evolution in layered porous media on global or local levels involves too many unknown independent model parameters to be calibrated against accessible data. Still, the global and local level structure geology settings allow one at least to distinguish between less and more appropriate Earth models for an effectively vertical stress environment.

Burial history indicators

The burial history model requires for its verification-calibration information and data about tectonic subsidence and the relevant sediment loading-unloading curves vs. basin time. Tectonic subsidence vs. time evaluations that are reasonably accurate for our purposes can be obtained after the removal of the sediment load and corrections for paleo water depth and eustatic sea level fluctuations (Sleep, 1971; Thorne, Watts, 1989) (see also Fig. A1.1).

The first sub-problem is the heaviest one in terms of the amount of required data. Still, the backstripping technique generally allows recovery of 1-D loading-unloading limbs on a formation-by-formation level (Allen, Allen, 1990).

The most general and full source of information for this is the regional stratigraphy setting commonly available for each explored sedimentary basin and province. A local formation model in the area implies subdividing the sub-surface into several seismically driven horizons tied to the relevant formation tops. The average litho-facial content of each formation unit is a matter of combined well-log and seismic data interpretation (Moushin et al., 1990; Madatov, Sereda, 2003).

The age of the given formation units is recoverable based on bio-stratigraphy markers and radio-carbon analysis with accuracy up to N×10^3-4 years in agreement with the absolute geo-time scale subdivision (Barker, 1996). The true vertical thickness corrected for possible bed dipping can be recovered with accuracy up to N×10^0-2 m from migrated 3-D seismic data on condition that the dipping is small (Seismic Stratigraphy, 1977). Note that other situations are unlikely for tectonically relaxed basins, where the one-axial stress model is valid. Finally, the rate of sediment load (sedimentation rate) in the 1-D approximation of burial history can be recovered based on consequent decompaction of the formation units in agreement with the backstripping technique (Allen, Allen, 1990; Schneider et al., 1994; Miall, 2000). The average accuracy of sedimentation rate recovery based on such an approach is estimated to be about N×10^0-1 m/Myears.
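The decompaction step at the heart of the backstripping technique can be sketched under a standard assumption that is not spelled out in the text: an Athy-type exponential porosity-depth law φ(z) = φ0·exp(-c·z). The solid-grain thickness of a layer is conserved, so its original (surface) thickness follows from a fixed-point iteration on the solid-mass balance. Function names and the numerical values are illustrative.

```python
import math

def solid_thickness(z1, z2, phi0, c):
    """Net solid-grain thickness of a layer with porosity
    phi(z) = phi0*exp(-c*z) between present depths z1 < z2 (metres)."""
    return (z2 - z1) - (phi0 / c) * (math.exp(-c * z1) - math.exp(-c * z2))

def decompacted_thickness(z1, z2, phi0, c, n_iter=50):
    """Thickness the same layer had when its top was at the surface,
    found by fixed-point iteration on t = hs + (phi0/c)*(1 - exp(-c*t))."""
    hs = solid_thickness(z1, z2, phi0, c)
    t = z2 - z1                       # initial guess: present thickness
    for _ in range(n_iter):
        t = hs + (phi0 / c) * (1.0 - math.exp(-c * t))
    return t

# Illustrative shale layer now buried at 2000-2500 m (phi0 and c assumed).
t0 = decompacted_thickness(2000.0, 2500.0, phi0=0.6, c=0.0005)
```

Dividing the decompacted thickness by the deposition time of the unit then yields the sedimentation rate discussed above.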

9 The relevant "GEOSEC-2D" software package incorporates decompaction, isostatic compensation and paleo-bathymetry determination. In connection with salt tectonics, the restoration method does not impose any constraints on the kinematics or area balance of salt induced deformations, but just provides step-by-step restoration of subsalt and suprasalt sections.


The porosity and density data available from well logs and core analysis are discussed below in connection with calibration of compaction trends. Here we just note that the requirements on the amount and quality of these data for calibration of compaction trends exceed the relevant requirements on data supply for the inverse 1-D loading-unloading model.

The erosion episodes remain the least certain attribute of the 1-D sediment load model in the context of its paleo reconstruction. The approaches to this problem are based on 1-D (along the depth axis) analysis of the maximal reached depth or temperature markers (Barker, 1996) or on 2-3-D seismically driven unconformity interpretation (Madatov, Sereda, 2000b).

Porosity and organic materials change irreversibly with depth. Thus, if the maximum reached depth or temperature of a partially eroded formation was not exceeded during the next stages of its burial history, then the amount of uplift can be uniquely recovered. The relevant 1-D depth trend interpretation can be based on porosity data, data sensitive to porosity, or temperature markers. The relevant data accuracy requirements coincide with those sufficient for the detection and least squares extrapolation of linear (temperature markers) or exponential (porosity data) trends. More accurate, and more expensive, are uplift recovery methods based on temperature markers (Doligez et al., 1985; Barker, 1996).

The paleo water depth and eustatic sea level variations vs. basin time can be reconstructed based on paleo-bathymetric data analysis and generally available information about first-second order cycles of sea level variation vs. geological time (Allen, Allen, 1990).

Information on paleo-bathymetry comes from a number of sources, listed below in order of accuracy:

benthic micro-fossils analysis;
interpretation of sedimentary facies;
analysis of distinctive geo-chemical signatures.

The modern techniques allow paleontologists to make a meaningful interpretation of environmental factors with accuracy sufficient for the purpose of bijective setting of the inverse 1-D burial history model.

Thermal indicators

The data available for estimation of the present day thermal environment include direct temperature measurements in wells, massive near-surface or surface radioactivity and heat flow measurements. Any porosity-related data, including interval seismic velocity, also provide some information for establishing the empirical relationships which define the thermal conductivity of rock, the mean thickness of the radioactive crust and the reduced heat flow at sub-crustal level (Lachenbruch, 1970). Similarly to the present day tectonic image of a basin, the presently existing temperature regime, even if it could be accurately recovered, cannot serve for unique verification, still less calibration, of a thermal evolution basin model. Some paleo-temperature markers are required.

Of the various temperature indicators measured in wells, the most popular is vitrinite reflectance R_o(z) (Thermal modelling in sedimentary basins, 1985). Using this data fit it is possible to obtain the so-called Time Temperature Index (TTI) developed by Waples (1981), who modified a vitrinite maturation index based on Lopatin's method (Oil and Gas Geology handbook, 1984). This popular method is based on the assumption that the rate of vitrinite maturation doubles for every 10°C increment during burial and heating. Under the assumption of a constant geothermal gradient model for each specific formation unit, this 1-D heating-cooling model becomes invertible with respect to TTI data. I.e. it allows unique accommodation of the relevant thermal indicators within the range of a certain geothermal gradient vs. formation unit model, where TTI data are available. According to Waples (1981), in such an inverse formulation of the paleo temperature problem the recovery of the entire temperature field can be replaced with the problem of defining the constant paleo-heat flux at a given burial time interval. Since the burial history can be recovered uniquely for every single formation model by using the backstripping technique, the relevant geothermal gradient history can also be determined uniquely against R_o(z) observations (Barker, 1996).
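The doubling-per-10°C assumption gives the TTI its standard form: each burial interval contributes its duration weighted by 2^n, where n indexes the 10°C temperature window (n = 0 for 100-110°C, following Waples, 1981). A minimal sketch, with the burial history supplied as (duration, mean temperature) pairs (an assumed input format):

```python
import math

def tti(history):
    """Time Temperature Index (Waples, 1981, after Lopatin).
    history: iterable of (duration_Myr, mean_temperature_C) intervals.
    Each interval contributes duration * 2**n, with n = 0 for the
    100-110 degC window, n = 1 for 110-120 degC, n = -1 for 90-100 degC."""
    total = 0.0
    for dt_myr, temp_c in history:
        n = math.floor((temp_c - 100.0) / 10.0)  # 10 degC window index
        total += dt_myr * 2.0 ** n
    return total
```

For example, 10 Myr spent at 115°C and 10 Myr at 95°C contribute 10·2 + 10·0.5 = 25 TTI units.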

As to the more sophisticated thermal history models describing a 3-D paleo flux field based on the relevant heat conservation equation, they seem to be excessively complex for our purposes. Indeed, the sensitivity of the basin scale overpressure build-up model to paleo temperature uncertainty is of secondary importance relative to uncertainty in compaction trends and the relevant hydraulic conductivity of rocks (Madatov, Sereda, 2000a). Besides, the TTI data transformation routine required for calibration of the basin scale temperature evolution forward model turns into a non-linear inverse problem (Lerch, 1991).

Porosity and permeability well data

Porosity and permeability data constitute an important empirical ground for establishing compaction and geo-fluid conduction relationships in the homogeneous approximation of porous media, and also for calibration of the relevant trends involved in the basin models (Magara, 1978; Lerch, 1990). It is important to stress that both types of data have several modifications, each with its own meaning and averaged level of values, despite the same physical units ("specific volume" unit for porosity and "Darcy" unit for permeability). For example, the terms "open porosity", "connected porosity", "fractured porosity", etc. could be attributed to slightly different specifications of the same integral volumetric property of rock. In the context of the target forward model calibration we consider both porosity and permeability as effective characteristics of a geo-fluid system observable in the present day and reproducible by using a wide class of basin models. In the context of data inverting, neither porosity nor permeability are model parameters subject to estimation by inverting, but input for the relevant procedure.

There are three sources for estimation of porosity and permeability data in practice: in situ measurements, well log data transformation and seismically driven estimations. Naturally, in situ observations are the most accurate data, which could be recommended for calibration of the relevant basin models. The trouble is that systematic in situ measurements are normally concentrated in the narrow locality of potential reservoirs. More representative for a whole section, and hence more suitable for our purposes, are well logs, which normally cover wider depth ranges (especially sonic logs). The seismically driven porosity estimations are based on rather bulky models that are heavy in data inverting10 (Nur, Dvorkin, 1998). The relevant seismic data transformation is based on heavy full waveform inversion processing, which is potentially workable at narrow target intervals, where high resolution three-component records of reflection waves can be obtained on a regular 2-D grid. These data transformations cannot be recommended for model calibration purposes due to their low accuracy and fixation on local geological objects (HC-reservoirs).

Below we focus on data transformations based on well logs. Since porosity data are the main source for the relevant conversion into the permeability scale, we start this brief review from porosity.

The well log transformations to porosity scale have been successfully used in compaction studies for many decades (Reike, Chilingarian, 1974; Magara, 1978; Oil and Gas Geology handbook, 1984). The Formation Density Compensated log, Borehole Compensated Sonic log and Compensated Neutron log are commonly recognised as acceptable for the purposes of compaction trend calibration.

The sonic (BCS) log measures the interval transit time (ITT) for a compressional sound wave emitted from a sonic shot source to travel across a certain distance to a sonic receiver. The ITT is the inverse of the sonic velocity, which is in turn a function of current effective stress and rock matrix connectivity (Zimmer et al., 2002). In the gross approximation acceptable for calibration of porosity vs. depth trend curves, ITT measurements can be considered as porosity-sensitive data. When the lithology type of the logged formation is known, the sonic driven ITT porosity can be calculated from the Wyllie time average equation (Reike, Chilingarian, 1974):

φ(θ_j, θ_h) = (θ - θ_j)/(θ_h - θ_j), (A2.1)

where φ - the porosity; θ_h, θ_j - the transit time (acoustic slowness) in fluid11 and in the j-th rock matrix, respectively; θ - the measured ITT value.

The validity of Wyllie's formula varies widely with the lithology type, confined pressure regime and current level of porosity. In particular, for sonic driven porosity at shallow intervals (non-consolidated rock with porosity greater than 25-30 %), the θ values tend to overestimate the "true" formation porosity, and the relevant empirical corrections are suggested (Raiga-Clemenceau et al., 1988).
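The time average relation (A2.1) is a one-line computation; the sketch below uses the fresh-water fluid slowness from footnote 11 as a default. The function name is illustrative.

```python
def wyllie_porosity(itt, itt_matrix, itt_fluid=189.0):
    """Sonic porosity from the Wyllie time average equation (A2.1).
    itt: measured interval transit time; itt_matrix: matrix slowness;
    itt_fluid: fluid slowness (189 us/ft for fresh water). All us/ft."""
    return (itt - itt_matrix) / (itt_fluid - itt_matrix)

# E.g. a sandstone (matrix slowness ~55 us/ft) reading 95 us/ft
# yields a porosity of about 0.30.
phi = wyllie_porosity(95.0, 55.0)
```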

The density (DC) log measures secondary emitted gamma ray radiation (GRR) sensitive to the density of electrons in the porous media. This kind of density essentially coincides with a measure of bulk density for most rocks, although some rock basis minerals such as coal, evaporite, sylvite, etc. cause significant perturbations, and the relevant bulk density estimations require corrections (Oil and Gas Geology handbook, 1984). Still, a Wyllie type of formula (see the time average equation A2.1) appears to be robust for the GRR density based estimation of porosity:

10 At low (seismic) frequencies, the effects of the presence of pore fluid on the velocities of compressional (P) and transverse shear (S) waves are in practice described by Gassmann's theory (Gassmann, 1951). This theory, however, is valid only in a very rough (homogeneous half-space) approximation of the media. A more general model, which accounts for a non-homogeneous and isotropic solid rock frame (conglomerates) and pore fluid at the given differential pressure, is based on Biot theory (Biot, 1956) and involves more independent coefficients to be specified. At high frequency, models for P-velocity and S-velocity become frequency dependent (King, 1966) and determine a velocity dispersion depending on attenuation mechanisms (Madatov et al., 1996). In addition, Biot's theory predicts the existence of two P-waves travelling at different velocities (fast and slow waves) and one S-wave. Recently, one more generalisation of Gassmann's velocity model came from the Stanford University research group: A. Nur and J. Dvorkin showed that empirical relations generally fail to predict velocity-porosity relations outside the range of values for which they are estimated. In order to improve this situation they proposed to distinguish between two levels of porosity for in situ sedimentary rock: below and above the critical porosity. Thus, any further extension of seismic velocity - bulk porosity relationships requires more additional model parameters to be inverted for. As a result, seismically driven porosity estimations are highly uncertain and only locally valid.

11 θ_h = 189 μs/ft for fresh water; θ_j = 51-55 μs/ft for sandstones, 43.5-46 μs/ft for limestones (Oil and Gas Geology handbook, 1984).

φ(ρ_j, ρ_h) = (ρ_GRR - ρ_j)/(ρ_h - ρ_j), (A2.2)

where φ - the porosity; ρ_h, ρ_j - the fluid and j-th rock matrix densities, respectively; ρ_GRR - the measured GRR density, related to the bulk density (ρ_B = φρ_h + (1 - φ)ρ_j) as the following: ρ_GRR = ρ_B + V_c(ρ_c - ρ_j). Here, V_c is the volumetric fraction of rock occupied with clay, evaporite or other distorting mineral.

The value of the GRR driven porosity requires more corrections at the shallow non-consolidated part of clay-rich sections, where the difference (ρ_c - ρ_j) cannot be ignored. Dissolved or second (third) phase oil (gas) may also lower the GRR density values and, hence, lower the relevant porosity estimation. Again, the fluid phase density differences are more contrasting at shallow intervals.
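Density porosity per (A2.2), including the clay/evaporite correction term, can be sketched as below (the reconstruction of the garbled equation is assumed; function and argument names are illustrative, densities in g/cm3).

```python
def density_porosity(rho_grr, rho_matrix, rho_fluid,
                     v_clay=0.0, rho_clay=None):
    """Porosity from the GRR density log, eq. (A2.2).
    Optionally removes the distortion of a mineral fraction v_clay of
    density rho_clay via rho_B = rho_GRR - v_clay*(rho_clay - rho_matrix)."""
    rho_b = rho_grr
    if v_clay and rho_clay is not None:
        rho_b = rho_grr - v_clay * (rho_clay - rho_matrix)
    return (rho_b - rho_matrix) / (rho_fluid - rho_matrix)

# E.g. quartz matrix 2.65, water 1.0, measured GRR density 2.31 g/cm3.
phi_d = density_porosity(2.31, 2.65, 1.0)
```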

The neutron (NC) log is based on the effect of capturing neutrons emitted from a radioactive source that collide with formation material nuclei. The loss of energy in returned neutrons is highly sensitive to the amount of hydrogen nuclei in the rock. This is the reason why the neutron log essentially measures the total hydrogen content, closely related to the liquid-filled content of the porous media. Consequently, this log responds to all the hydrogen present in the given formation, including crystal water, bound water, etc. This effect is especially strong for a clay-rich rock matrix, where the neutron porosity φ_CN is always overestimated in comparison with the effectively "true" value φ. Strictly speaking, the neutron log was purposely developed for porosity estimation in limestones. Other lithologies require corrections (actually a reduction of the neutron porosity φ_CN) in agreement with the following relationship:

φ_CN = φ + V_c·φ_c, (A2.3)

where V_c is the volumetric fraction of rock occupied with clay and φ_c is the apparent neutron porosity of the clay fraction (Oil and Gas Geology handbook, 1984).

There is a range of methods for porosity detection from resistivity logs, spontaneous polarisation logs, etc. They normally require more complex empirical relationships to account for water mineralisation, pore space geometry, etc. (Oil and Gas Geology handbook, 1984). Recently, well-log set → porosity transformations based on Artificial Neural Network (ANN) simulators have become popular (Bhatt, Helle, 2002). However, the local validity of the relevant mimic model identification restricts application of this kind of data transformation for calibration purposes. Our own experience in well logs → porosity evaluation is based on using a large number of calibration wells in different areas from all over the world (Madatov, Sereda, 2000b; 2003). It allows us to come to the following conclusions:

1. The most informative and world-wide consistent source of well data suitable for calibration purposes is a sonic log, which is normally available for the whole depth interval of an exploration well.

2. The clay content corrections for porosity estimation based on all log sources are obligatory, as well as correction of sonic and density logs for shallow non-consolidated intervals and neutron log reduction for non-carbonate lithology.

3. Using the complex of BCS, DC and NC logs jointly with porosity measurements allows better adjustment of the lithology-dependent constants in the relevant log-porosity equations (A2.1-A2.3).

4. The amount of well log data normally available for evaluation of a compaction trend attached to the given formation interval is sufficient to get a reliable least square estimation of the porosity data fit quality. This estimation can serve as an element of the data quality stop criteria at the effective basin model calibration (see Fig. A2.1).
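Point 4 can be illustrated with a short sketch. An Athy-type exponential trend φ(z) = φ0·exp(-z/b) is assumed here purely for illustration (the paper's own trend parameterisation may differ), and the RMS residual plays the role of the least-squares fit-quality measure.

```python
import numpy as np

# Illustrative sketch: fit a compaction trend to well-log derived porosity
# and report the least-squares residual as a data-quality measure.

def fit_compaction_trend(z, phi):
    """LS fit of ln(phi) = ln(phi0) - z/b; returns (phi0, b, rms_misfit)."""
    A = np.column_stack([np.ones_like(z), -z])            # design matrix
    coef, *_ = np.linalg.lstsq(A, np.log(phi), rcond=None)
    phi0, b = np.exp(coef[0]), 1.0 / coef[1]
    resid = phi - phi0 * np.exp(-z / b)
    return phi0, b, np.sqrt(np.mean(resid ** 2))

z = np.linspace(200.0, 3000.0, 50)        # depth samples, m (synthetic)
phi = 0.45 * np.exp(-z / 2500.0)          # noise-free synthetic porosity
phi0, b, rms = fit_compaction_trend(z, phi)
# phi0 ~ 0.45, b ~ 2500 m, rms ~ 0 on this noise-free example
```

On real log data the recovered rms would be non-zero and could be compared against the expected data error to form the stop criterion mentioned above.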

In contrast with porosity, permeability data are hard to obtain from direct measurements or from simple well log processing.

Fig. A2.1. Log data based porosity trend estimations for the single 1.5-D HD model (the North Sea): a - well log data corrected in agreement with (2.11-13); b - combined compaction model in trend approximation.

Direct permeability measurements are usually expensive and can be afforded only at a few points associated with the reservoir depth intervals. They are normally based on a drill stem test of the formation pressure build-up vs. pore fluid leakage (liquid or gaseous phase flow) during production or before abandoning (Nordahl et al., 2003). In all cases, the quantitative measure of permeability is a matter of the relevant "flow curve" interpretation. Thus, the systematic use of permeability measurements for BM calibration purposes cannot be considered a common routine.

In practice, the only source of permeability estimation is provided by well-log data. Here, the well log driven porosity discussed above is considered to be a key factor influencing permeability. Lithology-dependent empirical relationships between porosity and permeability are normally used for conversion (Reike, Chilingarian, 1974; Magara, 1978; Baldwin, Butler, 1985). However, the errors of the well-log to porosity transformation increase severely due to the highly non-linear (most often power law) form of these empirical links. Thus, porosity data quality acceptable for calibration purposes converts to practically unacceptable permeability data quality.
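The error-amplification argument can be made concrete with a power-law link k = a·φ^m; the coefficient a and exponent m below are illustrative assumptions, not values from the paper.

```python
# Sketch of error amplification through a power-law porosity-permeability
# link k = a * phi**m: a small relative porosity error grows roughly
# m-fold in permeability (illustrative a and m).

def relative_perm_error(phi, dphi_rel, a=1.0e-3, m=8.0):
    """Relative permeability error produced by a relative porosity error."""
    k_true = a * phi ** m
    k_pert = a * (phi * (1.0 + dphi_rel)) ** m
    return k_pert / k_true - 1.0

# A 5 % porosity error with exponent m = 8:
err = relative_perm_error(phi=0.2, dphi_rel=0.05)
print(round(err, 3))  # 0.477, i.e. almost 48 % permeability error
```

This is why porosity accuracy that is perfectly acceptable for calibration becomes unacceptable once propagated into permeability.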

In some modern data transformation techniques aimed at permeability evaluation, the mimic inverse models used for simulation of these phenomena are based on a set of porosity sensitive well-logs (Bhatt, Helle, 2002; Huang et al., 1996). In such approaches permeability is considered a complex function of several interrelated factors of lithology, pore space geometry, fluid content, etc. Naturally, the relevant inverse models become non-unique and non-stable by definition. We consider the use of such an inverse model for data simulation purposes to be a locally valid and highly uncertain procedure, which requires additional regularisation to be stable and to give consistent results for the whole section.

A closer investigation of permeability at different scales indicates a big variability of this petrophysical property in heterogeneous media depending on the sample volume size (Nordahl et al., 2003). At a small sample volume (core plug scale) the measured permeability varies greatly, since its value strongly depends on the sampling position. The variations diminish and the value becomes stable at some Representative Element Volume scale (REV scale), where the sample becomes effectively homogeneous. Normally the REV driven permeability value reveals a significant shift in comparison with the well-log or well-measurement driven values. A general conclusion is that core plug and wire log scales could be too small to produce a consistent measure of this rock property. The relevant sources of information appear to be inappropriate for the EBM calibration purposes.

Moreover, we consider calibration of a certain basin model against burial history indicators and porosity - formation pressure data as an additional way to effectively estimate sediment permeability on the formation level of integrity. Indeed, permeability is rather an effective attribute of the Earth model, which highly depends on the spatial and time scale of investigation and the introduced model assumptions. In most reservoir engineering applications permeability is considered an effective geo-fluid system property that controls multi-phase fluid flow at the given subsurface geometry and pressure regime. Naturally, permeability estimation based on the calibrated EBM will give a quite different quantity for the same on-average lithology unit. Thus, in the discussed context, permeability data can rather be an output result of the EBM calibration than input data for it.

Formation pressure data

As stated above, the formation pressure data represent a key real data input at the calibration of the given purpose-built EBM.

In the context of overpressure prediction purposes it is important to stress the following.

1. The direct in situ formation pressure measurements, for example the Repeated Formation Test (RFT), can only be made available against well permeable depth intervals of the section, which are normally associated with a few reservoir targets in the well. Still, even here, the final value in the pressure scale is a matter of interpretation and of the Representative Element Volume scale (Nordahl et al., 2003).

2. The estimation of the relevant value against poorly permeable intervals is always a matter of combined interpretation of multi-source indicators of formation pressure. It is commonly agreed that no single indicator can be used in isolation from the others to get a consistent quantitative result for a well section. Resistivity, for example, is strongly affected by environmental factors such as temperature and pore brine mineralisation. The example in Fig. A2.1 illustrates a combined interpretation of drilling indicators with well log derived estimations at getting a definitive wellsite picture of the formation pressure.

3. The pressure prediction based on surface seismic data is not only a matter of interpretation but to a big extent a result of a rather complicated transformation of low-resolution data (Campbell et al., 2000). Basically, this transformation implies surface seismic data inversion with respect to the effective stress sensitive elastic Earth model, which is itself a non-unique and unstable problem (Dutta, 2002; Sayers et al., 2002). It is clear that involving output from a Seismic Inverse Problem solution as an input for the EBM calibration, which is in turn a nonlinear Inverse Problem, has no practical sense. Substitution of the seismic data inversion with a bijective transformation of seismically derived parameters like velocity, acoustic impedance or quality factor into the pressure scale (Huffman, 2002) is an even more meaningless approach, since it implicitly replaces a more sophisticated Earth model by a much less advanced one. In addition, the accuracy of similar seismic data transformations cannot be accepted as sufficient for calibration purposes (Madatov, 2005). An exception can be made for 4-D seismic data (time-lapse seismic monitoring) (Sayers et al., 2002; Alberty, McLean, 2003). In combination with other pressure indications, such seismically derived data can be useful for the calibration purposes. The trouble is that similar data are normally available only in the area proximity of local intervals of HC reservoirs during their production process.

There are many well data and drilling information types relevant to estimation of this target phenomenon after drilling. The most important and well proven sources are represented in Table 3 against the lithology types where they can be made available (Doyle, 1998).

The different quality of pressure data available in well and poorly permeable formations (see Fig. A2.2) and the natural difference in the relevant overpressure regime for SS and SH lithologies (Stump et al., 1998) often even create different terminology for pore pressure expected in sandstone and shale, respectively (Dobrynin, Serebryakov, 1978). In the discussed context we do not treat formation pressure in sands and clays as different kinds of target phenomena, but rather as contrasting indications of the predominantly 1-D and 3-D origins of pressure communication on the basin time scale for aquifer and aquitard formation units. In the end, this is the main prerequisite for decomposition of the full scale 3-D pressure model into a combination of 1-D and 2-D HD models (Madatov, Sereda, 2005), which in turn opens the possibility for practical use of data inversion principles at the EBM calibration.

The conclusion about the applicability of different sources on the present day formation pressure regime for the aimed calibration purposes can be formulated in connection with the list represented in Table 3 as follows:

Table 3. Formation pressure data, indicators and estimations

No  Description  Availability against lithotype

Direct measurements:
1  RFT  SS (Sandstone)
2  Pressure kicks  SS

Wellsite overpressure indicators:
3  Equivalent mud weight  SH (Shale)
4  Shale cutting-caving  SH
5  Flow-line temperature  SH, SS
6  Losses in circulation  SS, SH
7  Connection gas  SS, SH
8  Other gas peaks  SS, SH
9  Chromatograph gas ratios  SH
10  Rotary drilling torque  SH
11  Shale factor  SH
12  Corrected drilling exponent  SH
13  Borehole caliper  SH

Overpressure estimations:
1  Based on offset pressure data and data interpolation  SS, SH
2  Based on well-log data and Effective stress model  SH
3  Based on 4-D seismic data and Hybrid Earth models  SS

Fig. A2.2. Getting a definitive overpressure curve for a single exploration well (the Western Canadian offshore example based on measurements and combined interpretation of drilling indicators and well-log based estimations). Legend: RFT data; kicks; mud weight; corrected D-exponent; well-log estimation; combined interpretation; SH interval; SS interval.

Direct measurements:

• Do not require a background model

• Available only in aquifer formations

• Can be used as they are, as the most accurate calibration data set.

Wellsite overpressure indicators:

• Can be treated as expert based independent estimations

• Can be made available for a significant part of the well section

• Imply joint interpretation by an expert. The relevant results in the pressure scale can be used as a calibration data set with finite depth dependent accuracy.

Overpressure estimations:

• Are not recommended for use as an independent source

• Well-log, MWD and SWD based 1-D estimations could be jointly used in a combined interpretation: to reduce uncertainty, to ensure interpolation between firm pressure markers, as a pressure input in the WD model updating mode, etc.

• 4-D seismically derived estimations in shallow reservoirs can be used in a joint 3-D interpretation.

Specification of Data Comparison Space

Let us now introduce a specification for the Data Comparison Space in agreement with the analysis of data types suitable for calibration (see s/section 2.3).

Two types of the relevant well data sets can be sorted according to the scale of comparison with their synthetic prototypes: formation pressure data and formation porosity data, arranged in the form of definitive (unique) values with attached averaged error estimations against the given sub-surface coordinate.

Then the multidimensional Euclidean Data Comparison vector space P can be established for the given calibration area according to this specification in order to accommodate all possible calibration data sets in scales and grids adjusted for comparison with their synthetic prototypes. Thus, every real p* ∈ P or synthetic p̂ ∈ P data vector is representable in the form:

p = {(p_1^1, p_2^1, ..., p_D^1; p_1^2, p_2^2, ..., p_D^2)_1, (p_1^1, p_2^1, ..., p_D^1; p_1^2, p_2^2, ..., p_D^2)_2, ..., (p_1^1, p_2^1, ..., p_D^1; p_1^2, p_2^2, ..., p_D^2)_K}^T,

where the upper indexes indicate the scale (1 - for pressure; 2 - for porosity); D is the well data grid size and K is the number of offset wells (number of 1.5-D HD models). The P space dimension L = 2D×K is normally one to two orders of magnitude bigger than the dimension of the relevant X space: N = 3J×K + I×L.
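The layout just described can be sketched as follows; the shapes are placeholders chosen to mimic the 12-well example (K = 12), and the zero-filled arrays stand in for real pressure and porosity data.

```python
import numpy as np

# Sketch of the data-vector layout: for each of K wells, D pressure values
# (scale 1) followed by D porosity values (scale 2) are stacked into one
# vector of dimension L = 2*D*K.

def assemble_data_vector(pressure, porosity):
    """pressure, porosity: arrays of shape (K, D) -> vector of length 2*D*K,
    one (pressure; porosity) block per calibration well, as in the text."""
    K, D = pressure.shape
    blocks = [np.concatenate([pressure[k], porosity[k]]) for k in range(K)]
    return np.concatenate(blocks)

K, D = 12, 30                  # e.g. 12 calibration wells, 30 grid nodes
pressure = np.zeros((K, D))    # placeholder pressure data (scale 1)
porosity = np.zeros((K, D))    # placeholder porosity data (scale 2)
p = assemble_data_vector(pressure, porosity)
print(p.size)                  # 720, i.e. L = 2*D*K
```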

APPENDIX 3 Important math definitions

The definitions below are in agreement with common math terminology (Hadamard, 1932; Tikhonov, Arsenin, 1979; Traub, Wozniakowski, 1980; Dennis, Schnabel, 1983; Menke, 1984; Cheney, Kincaid, 1985; Tarantola, 1987).

Definition 1. The N-dimensional vector space X is defined as metrical when a p-metric for its elements x ∈ X is introduced as follows:

||x|| = (|x_1|^p + |x_2|^p + |x_3|^p + ... + |x_N|^p)^(1/p), (A3.1)

where p is the index of the metric. In particular, the Euclidean vector space has the metric index p = 2 (L2-metric) and the L2 vector norm:

||x|| = (x_1^2 + x_2^2 + x_3^2 + ... + x_N^2)^0.5. (A3.2)

We will only consider the L2 metric which, for the scalar product, gives:

(x_1, x_2) = x_2^T C^(-1) x_1, (A3.3)

where the covariance matrix C has a diagonal structure, meaning that there are no linear relationships between the arbitrary vectors in any space. In particular, the C_ii element represents the square of the scaling factor for the model space and the square of the weight coefficient for the data space12.
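A minimal sketch of the weighted scalar product (A3.3) with a diagonal covariance matrix, keeping only the diagonal as a 1-D array; the example values are illustrative.

```python
import numpy as np

# Weighted L2 scalar product (A3.3) with diagonal covariance C:
# (x1, x2) = x2^T C^{-1} x1. A diagonal C means the components are
# treated as linearly independent, as stated in the text.

def weighted_dot(x1, x2, c_diag):
    """Scalar product in the C-weighted L2 metric (C diagonal)."""
    return float(x2 @ (x1 / c_diag))

def weighted_norm(x, c_diag):
    """Vector norm induced by the weighted scalar product."""
    return weighted_dot(x, x, c_diag) ** 0.5

x = np.array([3.0, 4.0])
c = np.array([1.0, 1.0])      # unit covariance -> plain Euclidean norm
print(weighted_norm(x, c))    # 5.0
```

Scaling an entry of the covariance diagonal down-weights or up-weights the corresponding component, which is exactly how the confidence levels of footnote 12 enter the metric.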

Definition 2. The set ΔX ⊂ X is defined as a Compacted Set in X if any sequence of its elements {x_i} contains a sub-sequence converging to a single element x_0 ∈ X.

The Diameter (size) of the compacted set ΔX is defined as:

diam(ΔX) = sup ||x_i - x_j||, ∀x_i, x_j ∈ ΔX. (A3.4)

12 The weights and scale coefficients are normally introduced to account for the different confidence levels of the different data sources and to arrange the sensitivity levels for the different variation ranges of the model parameters.

Definition 3. The Compacted Set δX is defined as Compacted in itself, or just a Compact in X, if it is closed, i.e. if any sequence of its elements {x_i} converges to a single element x_0 ∈ δX. The sufficient condition for any sub-set of X to be recognised as a compact is its closure.

The Diameter (size) of the compact δX is defined as:

diam(δX) = max ||x_i - x_j||, ∀x_i, x_j ∈ δX. (A3.5)

Definition 4. The ε-locality {εp_0} of an arbitrary vector p_0 ∈ P is defined as a compact εP ⊂ P centered at p_0 such that:

εP|p_0 = {εp_0}: max ||p - p_0|| ≤ ε, ∀p ∈ εP. (A3.6)

Definition 5. Let there be two Euclidean spaces X, P, and on some sub-space X_0 ⊂ X let the rule of imaging G: X→P be defined to map X_0 into some sub-space P_0 ⊂ P. G[x] will be specified as a Mapping Operator (or just Operator) such that:

∀ x ∈ X_0 → G[x] = p ∈ P_0. (A3.7)

Below we will assume that ∀p ∈ P_0 ∃ x ∈ X_0.

Thus, let P_0 ⊂ P be the Image of X_0 ⊂ X on the vector space P and X_0 the Prototype of P_0 on the vector space X.

Definition 6. The operator G[x] performs a Single-valued (Unique) mapping of the vector x ∈ X_0 if the unique image G[x] = p ∈ P_0 exists.

Definition 7. The operator G[x] performs a Continuous mapping of the compact set δX ⊂ X_0 on P_0 ⊂ P if reducing the size of δX (see (A3.5)) implies the relevant reduction of its image εP ⊂ P_0. More strictly:

δ_1 < δ_2 = diam(δ_1X) < diam(δ_2X) ⇒ G[δ_1X] ⊂ G[δ_2X] = ε_1P ⊂ ε_2P. (A3.8)

Definition 8. The operator G[x] performs a Stable mapping of the vector x ∈ X_0 on P_0 ⊂ P if small perturbations of x do not create big perturbations in its image p. More strictly:

G[x + δx] = p + εp : ||δx|| → 0 ⇒ ||εp|| → 0. (A3.9)

An operator G[x] continuous on X_0 ⊂ X is stable on the same sub-space.

Definition 9. The operator G[x] performs a Bijective mapping of the vector x ∈ X on P if, based on the unique image G[x] = p ∈ P, it is possible to recover the prototype vector x uniquely as follows: G^(-1)[p] = x ∈ X.

Definition 10. The operator G[x] in this context is defined as a Forward operator of bijective transformation (or just Forward operator), and the operator G^(-1)[p] as an Inverse operator of bijective transformation (or just Inverse operator). Both operators are Monotonous on the relevant sub-space of their monotony.

The wide class of linear (matrix) operators has an inverse matrix and is monotonous everywhere on the vector spaces of their definition. The pair of bijective transformations can be written as follows:

Gx = p ∈ P - Forward transformation; G^(-1)p = x ∈ X - Inverse transformation.

A typical example of bijective transformations and the relevant numerical operators is the forward and inverse Fourier transformation. Generally, however, a unique and stable inverse operator G^(-1)[p] cannot be globally (everywhere) defined on the Euclidean vector space P based on a globally stable and unique forward operator G[x] defined on the Euclidean vector space X. The relevant inverse problem is ill-posed in Hadamard's sense.

Definition 11. According to Hadamard, the math problem Gx = p for p ∈ P and x ∈ X is well-posed (correct) if the three following conditions are accomplished:

1. The solution can be found for the whole vector space P.

2. The solution is unique.

3. The solution is stable.

Definition 12. The mismatch function R between any vector p ∈ P and its prototype p̂ computed on the basis of the forward modelling (p̂ = Gx) is representable in the form of the weighted vector norm:

R(Gx, p) = ||p̂ - p||_W = ||W(p̂ - p)||, (A3.10)

where W is a diagonal (L×L) matrix of weights; L is the space P dimension.

The normalised mismatch function R_N, comparable with the dimensionless data error measure, is given by:

R_N(Gx, p) = ||p̂ - p||_W / ||p||_W. (A3.11)
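Definition 12 can be sketched directly; the data values and unit weights below are illustrative.

```python
import numpy as np

# Weighted mismatch (A3.10) and its normalised form (A3.11), with the
# diagonal weight matrix W kept as a 1-D array of its diagonal.

def mismatch(p_synth, p_real, w):
    """R(Gx, p) = ||W (p^ - p)|| in the L2 sense."""
    return float(np.linalg.norm(w * (p_synth - p_real)))

def mismatch_normalised(p_synth, p_real, w):
    """R_N = ||p^ - p||_W / ||p||_W, comparable with relative data error."""
    return mismatch(p_synth, p_real, w) / float(np.linalg.norm(w * p_real))

p_real = np.array([10.0, 20.0, 30.0])   # illustrative "real" data
p_synth = np.array([11.0, 19.0, 30.0])  # illustrative synthetic prototype
w = np.ones(3)                          # equal confidence in all samples
r_n = mismatch_normalised(p_synth, p_real, w)
# r_n = sqrt(2)/sqrt(1400) ~ 0.038, i.e. about 4 % relative misfit
```

A stop criterion of the kind used in the calibration then reduces to comparing r_n against the estimated relative data error.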

Definition 13. According to Tikhonov, the inverse operator G^(-1) can be treated as a Regular Operator on the sub-space P_0 ⊂ P if it is possible to build there such a stable operator G_ε^(-1) that the biggest diameter of the compact {δx_0}, obtained on the vector space X_0 ⊂ X as a result of its application to the compact {εp_0} defined everywhere on P_0, continuously decreases with decreasing ε. The condition of regularity for the solution of the relevant inverse problem defined on the sub-space P_0 ⊂ P for the target sub-space X_0 ⊂ X can then be written as follows:

lim (ε→0) [R_N(p_ε, Gx_δ)] = 0, ∀p_ε ∈ {εp_0} ⊂ P_0; ∀x_δ ∈ {δx_0} ⊂ X_0. (A3.12)

When these conditions can be accomplished, the ill-posed inverse problem becomes Conditionally Correct.

The inverse problem can be treated as conditionally correct according to Tikhonov if the three following conditions can be satisfied:

1. The solution of the IP exists and belongs to an a priori known sub-space X_0 ⊂ X of possible solutions.

2. The relevant forward operator G[x] is bijective for ∀x ∈ X_0 ⊂ X and G[x] = p̂ ∈ P_0 ⊂ P, ∀x ∈ X_0 ⊂ X.

3. The inverse operator is continuous on P_0 ⊂ P.

Thus, the local sub-space X_0 of the vector space X can also be considered as a sub-space of the unique solution, or just a sub-space of uniqueness, for the conditionally correct inverse problem definition.

A typical example of regularisation of an ill-posed inverse problem can be found in data fitting by using least square minimisation of the synthetic data - real data mismatch formulated as follows (Menke, 1984; Tarantola, 1987):

G^(-1)[p*] = x*: argmin [J(G[x], p*; C_p)], ∀x ∈ X_0; ∀p* ∈ P_0, (A3.13)

where J( ) is the misfit function given by:

J(G[x], p; C_p) = (G[x] - p)^T C_p^(-1) (G[x] - p), (A3.13*)

where C_p is the symmetrical data covariance matrix of real data.

The stabilisation of the irregular solution of (A3.13) is found by attaching additional hard or soft constraints to the vector solution x*.

An example of a linear Hard Constraint is:

||Ux*|| ≤ a, (A3.14)

where U is a matrix and a is a constant. When using hard constraints, the solution is the model that minimises the misfit function and satisfies the constraints. Hard constraints may appear in various disguises.

Soft constraints appear as an extra penalty term in the misfit function. For a least-squares problem, the complete misfit function is then:

J(G[x], p; C_p) = (G[x] - p)^T C_p^(-1) (G[x] - p) + (x - x_0)^T C_x^(-1) (x - x_0), (A3.15)

where x_0 is the prior model and C_x is the a priori covariance matrix. Roughly speaking, such a penalty term will prevent the solution from being "too far" from the prior model. In particular, when x_0 = 0 and C_x = λI, this stabilisation method is referred to as the Marquardt-Levenberg method (Marquardt, 1963) or ridge regression.
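For a linear forward operator G, the soft-constrained problem (A3.15) has a closed-form normal-equation solution, which can be sketched as follows; the matrices below are small illustrative placeholders, not data from the paper.

```python
import numpy as np

# Soft-constraint (penalised) least squares for a linear operator G:
# minimise (Gx-p)^T Cp^{-1} (Gx-p) + (x-x0)^T Cx^{-1} (x-x0).

def penalised_ls(G, p, Cp_inv, Cx_inv, x0):
    """Closed-form minimiser of the complete misfit function (A3.15)."""
    lhs = G.T @ Cp_inv @ G + Cx_inv
    rhs = G.T @ Cp_inv @ p + Cx_inv @ x0
    return np.linalg.solve(lhs, rhs)

G = np.array([[1.0, 0.0], [0.0, 1.0e-6]])  # nearly rank-deficient operator
p = np.array([1.0, 1.0])
Cp_inv = np.eye(2)
x0 = np.zeros(2)
# Ridge-regression case: x0 = 0, Cx^{-1} = lam * I (Marquardt-Levenberg)
x = penalised_ls(G, p, Cp_inv, 1.0e-4 * np.eye(2), x0)
# Without the penalty the second component would blow up to ~1e6;
# the penalty keeps the poorly constrained direction near the prior.
```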

APPENDIX 4

Refinements in determination of the length of the simple step qk at Gradient optimisation strategy

Implementation of the full-scale minimisation algorithm (10) with the proper value of the simple step length q_k defined at each current k is often unjustified from the time consumption point of view. An effective approximate solution of this problem can be based on satisfying the following condition:

R[x_k - q_k∇R(x_k, p*), p*] ≤ βR(x_k, p*), β ∈ (0,1), (A4.1)

by using the following numerical algorithm:

• Set up the constants: α > 0, 0 < γ < 1 and η > 0.

• Assign q_k = α.

• Check the condition R[x_k - q_k∇R(x_k, p*), p*] ≤ R(x_k, p*) - μq_k||∇R(x_k, p*)||_W, where 0 < μ < 1 is an arbitrary small value.

If condition (A4.1) is satisfied, the process is stopped at the given q_k; else q_k = γq_k and checking of (A4.1) is repeated until q_k < η. In case q_k < η is reached, assign q_k = η.
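The step-length rule above is essentially a backtracking line search. A runnable sketch follows, with illustrative constants (α, γ, η, μ) and a simple quadratic misfit standing in for R.

```python
import numpy as np

# Backtracking line search in the spirit of (A4.1): start from q = alpha
# and shrink by gamma until the sufficient-decrease condition
# R(x - q*grad) <= R(x) - mu*q*||grad||^2 holds, or q falls below eta.

def backtracking_step(R, grad_R, x, alpha=1.0, gamma=0.5, eta=1e-8, mu=1e-4):
    g = grad_R(x)
    q = alpha
    while q >= eta:
        if R(x - q * g) <= R(x) - mu * q * float(g @ g):
            return q                 # sufficient-decrease condition met
        q *= gamma                   # shrink the step and re-check
    return eta                       # fallback step length

R = lambda x: float(x @ x)           # simple convex misfit for illustration
grad_R = lambda x: 2.0 * x
x = np.array([3.0, -4.0])
q = backtracking_step(R, grad_R, x)
x_new = x - q * grad_R(x)
assert R(x_new) < R(x)               # the accepted step decreases the misfit
```

With the quadratic above, q = 1 overshoots and the search accepts q = 0.5, which here lands exactly on the minimiser; in the calibration setting R would be the mismatch norm of the previous appendices.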

APPENDIX 5

Refinements in determination of the step vector sk at Newton-Gauss optimisation strategy and regularisation of SVD solution

The determination (17) of the step vector s_k requires resolving the linear problem Ĵ·s_k = -r[(x_k, p*)], where the over-determined Jacoby matrix Ĵ is defined in its FD approximate form (15). This problem will be called the target linear problem in the given subsection. The LS solution of the target linear problem is found in the generalised-normal form, i.e. for the minimal norm of the solution (Lawson, Hanson, 1974). This solution also suits the concept of the ε-solution defined for the considered IHD problem for the calibration purposes.

The minimal norm solution of this linear problem can be found on the basis of the Singular Value Decomposition (SVD) approach. Note that the SVD approach has been extensively used for resolving linear and slightly non-linear problems in geophysical applications (Tarantola, 1987; Ursin, Zheng, 1985). The singular value decomposition has mostly been used in spaces with a purely Euclidean metric (where the covariance matrix is the unit matrix). However, as stated in the special literature (see, for example, Cheney, Kincaid, 1985), it is fully possible to pose the SVD directly for the general weighted L2 metric (Definition 1 in Appendix 3) without having to perform re-parameterisation. In such a general case the relevant SVD problem is named the "generalised SVD problem" and requires a careful analysis of the singular values close to zero. Below we will use the term "SVD" having in mind its generalised form.

According to the theory (Golub, Van Loan, 1983; Lawson, Hanson, 1974), let us decompose the matrix Ĵ as follows:

Ĵ = U·Λ·V^T, (A5.1)

where U is an orthogonal L×L matrix; V is an orthogonal N×N matrix and Λ is an L×N matrix having the following structure:

    | λ_1  0   ...  0  |
    |  0  λ_2  ...  0  |
Λ = | ................ |, (A5.2)
    |  0   0   ... λ_N |
    |  0   0   ...  0  |

where λ_j, j = 1, 2, ..., N are the singular values of the matrix Ĵ.

The target over-determined linear problem can then be rewritten as follows:

(U·Λ·V^T)·s_k = -r[(x_k, p*)]. (A5.3)

Finally, taking into account the rectangular structure of the matrixes U, Λ, V, the generalised solution can be represented as:

s_k = -V·Λ^+·U^T·r[(x_k, p*)]. (A5.4)

The solution given in (A5.4) reveals instability features when the singular values of the inverse matrix Λ^+ approach infinity (i.e. the relevant λ_j, j = 1, 2, ..., N values approach zero). There are two possibilities in this situation:

(a) the set {λ} of singular values can be certainly separated into two groups: "big" and "small" (close to zero) λ_j values;

(b) the set {λ} of singular values cannot be certainly separated into two groups: "big" and "small" (close to zero) λ_j values.

The second situation corresponds to an ill-conditioned linear problem (A5.4), which is the general case in the generalised context of the SVD application reduced to ill-posed inverse problem solutions.

Practically, the analysis of the singular values can be built in agreement with well-known recommendations (Lawson, Hanson, 1974). Namely, they are reduced to the following.

• Introduce the small value μ as a discrimination threshold between the stable and non-stable part of the SVD solution (A5.4).

• Sort the set {λ} in decreasing order and pick out the first m singular values which satisfy the condition λ_j > μ.

• Introduce a square m×m sub-matrix Λ_m^+ of the matrix Λ^+ and represent the matrixes Λ^+ and U, V in the block form as follows:

Λ^+ = | Λ_m^(-1)  0 |
      |    0      0 |;   U = (U_m : U_0);   V = (V_m : V_0), (A5.5)

where the matrixes U_m, V_m are constructed from the columns of the matrixes U, V corresponding to the "valuable" singular values (i.e. λ_j > μ, ∀j = 1, 2, ..., m) and the matrixes U_0, V_0 are constructed from the remaining columns of U, V corresponding to the singular values below the discrimination threshold.

The generalised solution (A5.4) can be rewritten as:

s_k ≈ -V_m Λ_m^(-1) U_m^T r[(x_k, p*)]. (A5.6)

Evidently, solution (A5.6) will be more stable, but will have a bigger L2 misfit norm in comparison with (A5.4).

This stabilisation approach appears to be sufficient for situation (a) of the SVD analysis. A more powerful stabilisation must be applied in the more general situation (b). The relevant stabilisation scheme was built in agreement with Lavrentjev's approach to ill-conditioned operator equations (Lavrentjev, 1962). It implies substitution of the target linear problem by its coarsened prototype:

(Ĵ + δλI)·s ≈ -r[(x, p*)], (A5.7)

where I is the unit diagonal N×N matrix and δλ is a small value (regularisation parameter) assigned in agreement with the analysis of the singular value set {λ}.

Thus, the main elements of the target linear problem resolution are the following.

1. Getting the SVD representation of the Jacoby matrix Ĵ according to (A5.1-A5.4).

2. Analysis of the singular value set {λ}.

3. Choosing the regularisation strategy (A5.6) or (A5.7), depending on the situation around the ill-conditioned matrix Λ^+, and resolving the target linear problem.
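The two regularisation routes can be sketched side by side; the 3×2 Jacobian, the threshold μ and the damping parameter δλ below are illustrative stand-ins.

```python
import numpy as np

# Two stabilisation routes for the target linear problem J s = -r:
# (a) truncated SVD in the spirit of (A5.6), discarding singular values
#     below a threshold mu;
# (b) a damped ("coarsened") system in the spirit of (A5.7), solved here
#     through regularised normal equations.

def tsvd_solve(J, rhs, mu=1e-6):
    """Minimal-norm LS solution with singular values below mu discarded."""
    U, lam, Vt = np.linalg.svd(J, full_matrices=False)
    keep = lam > mu
    inv = np.where(keep, 1.0 / np.where(keep, lam, 1.0), 0.0)
    return Vt.T @ (inv * (U.T @ rhs))

def damped_solve(J, rhs, delta=1e-3):
    """Solve (J^T J + delta*I) s = J^T rhs."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + delta * np.eye(n), J.T @ rhs)

J = np.array([[1.0, 0.0], [0.0, 1e-9], [0.0, 0.0]])  # ill-conditioned
rhs = np.array([2.0, 1.0, 0.0])
s_tsvd = tsvd_solve(J, rhs)     # unstable direction simply dropped
s_damp = damped_solve(J, rhs)   # unstable direction heavily damped
```

Both solutions keep the well-resolved first component near 2 while suppressing the component associated with the near-zero singular value.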

In agreement with the theory of SVD based inverse operators (Golub, Van Loan, 1983), the quality of the relevant inverse problem solution in terms of stability and accuracy can be improved both prior to and after the target linear problem solution.

The prior solution measures include:

• Normalising of the model parameter space (see section 4, formula (8)).

• Fixing some of the model parameters based on sensitivity analysis of the model space.

• Weighting the equations of the target system in agreement with data accuracy (Definition 12 in Appendix 3).

The first two operations are part of the upscaling procedure discussed in (Madatov, Sereda, 2003). The last operation is included in the computation of the mismatch function. The practical aspects of weighting the mismatch norm are discussed in connection with differentiation of the data quality at single well data analysis (see Appendix 2).

The posterior refining of the target problem solution is based on updating the sensitivity analysis while computing the Jacoby matrix (15) and on the SVD analysis of the zero-multiplicity N-m space of the vector solution (A5.4).

Let us now describe the approach to stabilisation of solutions (A5.6-7) based on instant sensitivity analysis.

The n-th component of the sensitivity vector can be introduced as follows (Madatov, Sereda, 2003):

S_n = ||M(x + δ_n x) - M(x)|| / ||M(x)||, ∀n = 1, 2, ..., N, (A5.8)

where δ_n x coincides with the simple model parameter vector perturbation δx_n e_n introduced above.

The relative sensitivity of the model components will not change if this value is divided by a non-zero constant. The produced sensitivity vector can be respectively named the Data Normalised Sensitivity vector.

Let this constant be equal to the weighted data vector norm ||p||_W in agreement with Definition 12 in Appendix 3. Then the relevant Data Normalised Sensitivity vector component can be computed on the basis of the current finite difference approximation of the Jacoby matrix (15) as follows:

S_n^L = ||Ĵ(x)δ_n x|| / ||p||_W, ∀n = 1, 2, ..., N. (A5.9)

Clearly, sensitivity analysis based on (A5.9) does not require any noticeable computer resources in addition to the ones required for the generalised scheme of the SVD analysis. At the same time, such current sensitivity analysis allows instant orientation within the model parameter space. In particular, it can be important for instant stabilisation of the target problem solution in terms of detecting and fixing currently poorly sensitive directions within the search model parameter area X̂.
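The (A5.9)-style computation can be sketched as follows; the Jacobian, perturbation sizes and data norm are illustrative placeholders.

```python
import numpy as np

# Data-normalised sensitivity in the spirit of (A5.9): each component is
# the norm of the Jacobian response to a simple perturbation of one model
# parameter, divided by the (weighted) data norm.

def data_normalised_sensitivity(J, dx, p_norm):
    """S_n = ||J e_n * dx_n|| / ||p||, n = 1..N."""
    N = J.shape[1]
    s = np.array([np.linalg.norm(J[:, n] * dx[n]) for n in range(N)])
    return s / p_norm

J = np.array([[10.0, 0.1],   # FD Jacobian approximation, cf. (15)
              [ 5.0, 0.0]])
dx = np.array([0.01, 0.01])  # simple parameter perturbations
S = data_normalised_sensitivity(J, dx, p_norm=1.0)
# S shows the model is ~100x more sensitive to parameter 1; the poorly
# sensitive direction (parameter 2) can then be detected and fixed.
```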

The SVD analysis of the zero-multiplicity N-m space of the solution vector (A5.4) can be used for final adjustments of the achieved 1.5-D SIP solutions against the global consistency criterion established for the multi-well case. The idea of such adjustments comes from the following definition of the zero-multiplicity N-m space (Cheney, Kincaid, 1985).

If the vector s gives the minimal norm LS solution of system (A5.4) representable in form (A5.5-6) and N - m = r > 0, then the columns of the N×r matrix V_0 form a zero-multiplicity basis within the model parameter space, such that

ĴV_0 = 0. (A5.10)

Thus, the generalised minimal norm LS solution s_G of system (A5.4) can be represented formally as follows:

s_G = s + Σ (j=1..r) a_j V_0j, (A5.11)

where a_j, j = 1, 2, ..., r are the coefficients of the linear combination; V_0j, j = 1, 2, ..., r are the columns of the matrix V_0.

Indeed, solution (A5.11) does not change the LS norm of the over-determined problem (A5.4) due to (A5.10), but it allows adjustment of the model parameter vector x delivering the solution of the relevant SIP problem. The adjustment can be made in accordance with the external criterion globally introduced for the set of 1.5-D SIPs resolved simultaneously (see section 5). The local adjustment of the model parameter vector x_k at the k-th step of the search iteration process (17) is also possible on the basis of a re-determination of the current SIP solution s_k within the zero-multiplicity basis of the model parameter space X. The corresponding choice of the correction constants a_j in (A5.11) allows, for example, adjustment of the vector s_k toward the descent direction (Dennis, Schnabel, 1983).
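The null-space adjustment (A5.10)-(A5.11) can be demonstrated on a tiny under-determined system; the 1×2 operator and the coefficient a below are illustrative.

```python
import numpy as np

# Null-space ("zero-multiplicity") adjustment: adding any combination of
# basis vectors V0 with J V0 = 0 changes the model-step vector without
# changing the residual of J s = -r, as stated by (A5.10)-(A5.11).

J = np.array([[1.0, 1.0]])                    # 1x2 system: 1-D null space
r = np.array([2.0])
s, *_ = np.linalg.lstsq(J, -r, rcond=None)    # minimal-norm LS solution
V0 = np.array([[1.0], [-1.0]]) / np.sqrt(2.0) # null-space basis, J V0 = 0
a = 0.7                                       # adjustment coefficient
s_adj = s + a * V0[:, 0]
# Residuals coincide, so an external (global) criterion is free to prefer
# s_adj over the minimal-norm s without degrading the data fit.
assert np.allclose(J @ s, J @ s_adj)
```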

The procedure of determination of the step length q_k along the direction of the vector s_k is generally quite analogous to the one introduced in the Gradient strategy for the gradient vector ∇R(x_k, p*) (see Appendix 4). The specifics of the Newton-Gauss method demand starting the optimisation process from the initial value of the step length q_k = 1 (Moisejev et al., 1978).

APPENDIX 6

Representation of a sensitivity vector in the polar plot form

It is practically convenient to get an instant sensitivity signature of an effective basin model currently updateable during the data inversion. Since the dimension of the sensitivity vector is the same as that of the model parameter space (see formula (A5.8) in Appendix 5), where it is normally about 10-100, the problem of express visualisation arises. One of the possibilities for expressing the vector components defined for a multidimensional space consists in utilising a polar plot option for visualisation.

Fig. A6.1 illustrates this method for the sensitivity vector computed for a single 1.5-D formation model. Note that the same approach is applicable to the 3-D combined model.

Converting the rectangular sensitivity diagram (Fig. A6.1a) into the polar coordinate system requires the following succession of steps.

1. The polar plot circle area is subdivided into J sectors, where J is the total number of formation units (Fig. A6.1d).

2. Every sector is split into 5 sub-sector domains13 in a predefined sequence.

3. The components of the sensitivity vector are then plotted formation by formation, filling the relevant sub-sectors according to their magnitude (see Fig. A6.1a-c).

4. The final single colour of the polar sensitivity chart is obtained as a weighted mixture of the relevant RGB components according to the colour legend when true colours are used for plotting. The mixing weights are assigned in accordance with the sensitivity magnitude.

Note that neither major nor minor circle grid lines are displayed on the final plot (Fig. A6.1c).
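Steps 1-3 above amount to a simple index-to-angle mapping. The following Python sketch is illustrative only: J, M and the sensitivity magnitudes are invented, and the RGB mixing of step 4 is reduced to carrying the magnitude along as a colour weight.

```python
import math

# Hypothetical sensitivity vector: J = 4 formation units, M = 5 model
# parameters per unit (MP1..MP5), magnitudes normalised to [0, 1].
J, M = 4, 5
sens = [[0.1, 0.8, 0.3, 0.0, 0.5],
        [0.9, 0.2, 0.4, 0.6, 0.1],
        [0.0, 0.3, 0.7, 0.2, 0.8],
        [0.5, 0.5, 0.1, 0.9, 0.4]]

# Step 1: subdivide the circle into J sectors, one per formation unit.
sector = 2.0 * math.pi / J
# Step 2: split each sector into M sub-sector domains in a fixed order.
sub = sector / M

wedges = []
for j in range(J):                  # step 3: formation-by-formation fill
    for m in range(M):
        a0 = j * sector + m * sub   # wedge start angle
        a1 = a0 + sub               # wedge end angle
        # Step 4 (reduced): the magnitude becomes the colour weight.
        wedges.append((j + 1, m + 1, a0, a1, sens[j][m]))

# Each wedge: (formation, parameter, start, end, colour weight).
print(len(wedges), round(wedges[-1][3], 4))  # prints: 20 6.2832
```

The J*M wedges tile the full circle exactly, so the chart gives a one-glance signature of which formations and which parameters dominate the model sensitivity.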

[Figure legend, partly unrecoverable: sensitivity curves for the model parameters MP1-MP5 (MP3 - specific surface; MP4 - generation constant) plotted per formation unit (Fm1-Fm7) against a Low-High magnitude scale; panels a-d show the rectangular diagram and its polar conversion.]

Fig. A6.1. Getting polar sensitivity diagram for the 1.5-D model parameter vector

13 It depends on the number of independent model parameters specified per single formation unit. For example, it can be equal to 3 for a 3-D EBM. Still, some sector area should be reserved for the specification of a 2.5-D aquifer formation model in this case.

References

Alberty M.W., McLean R.M. Emerging trends in pressure prediction. Paper presented on Offshore Technology Conference held in Houston, Texas, USA, May 5-8, 2003.

Allen P.A., Allen J.R. Basin analysis: Principles and applications. Blackwell Scientific Publication, Oxford, London, p.393, 1990.

Baldwin B., Butler C.O. Compaction curves. The American Assoc. of Petroleum Geologists Bull., v.69, N 4, p.622-626, 1985.

Barker C. Thermal modelling of petroleum generation: Application and theory. Elsevier, Amsterdam - New York - Oxford - Shannon - Tokyo, 395 p., 1996.

Bear J., Bachmat Y. Introduction to modelling of transport phenomena in porous media. Kluwer Academic Publishers, London, p.553, 1991.

Bhatt A., Helle H. Committee neural networks for porosity and permeability prediction from well logs. Geophysical Prospecting, v.50, p.645-660, 2002.

Biot M.A. Theory of propagation of elastic waves in a fluid-saturated porous solid. I. Low-frequency range. Appl. Phys., v.13, p.35-40, 1956.

Buhrig C. Geopressured Jurassic reservoirs in the Viking Graben: Modelling and geological significance. Marine and Petroleum Geology, v.6, p.35-48, 1989.

Campbell A., Elam S., Lahann R., Patmore S. Pressure prediction from seismic derived velocities. Lessons learned at 204/19-1, West of Shetlands, UK. Paper presented on "Overpressure 2000" workshop, 4-6 April, London, 2000.

Cheney W., Kincaid D. Numerical mathematics and computing. Brooks/Cole Publishing Company, Monterey, California, USA, p.562, 1985.

Chiarelli A., Duffaud F. Pressure origin and distribution in Jurassic of Viking basin (United Kingdom - Norway). The American Assoc. of Petroleum Geologists Bull., v.64, N 8, p.1245-1266, 1980.

Dennis J.E., Scnabel R.B. Numerical methods for unconstrained optimization and nonlinear equations. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 440 p., 1983.

Dobrynin V.M., Serebryakov V.A. Methods of prediction of abnormally high pore pressure. Moscow, Nedra, 232 p., 1978 (in Russian).

Doligez B., Bessis F., Burrus J., Ungerer P., Chenet P.Y. Integrated numerical simulation of the sedimentation, heat transfer, hydrocarbon formation and fluid migration in a sedimentary basin: The Temispack model. In "Thermal modelling in sedimentary basins", collection of papers from 1-st Exploration Research Conference. IFP, France, p.149-172, 1985.

Doyle E.F. Data and model integration results in accurate quantification of geopressure at the wellsite. Paper presented on conference "Pressure Regimes in Sedimentary Basins and Their Prediction", Sept. 2-4, DelLago, Texas, USA, 1998.

Dutta N.C. Geopressure prediction using seismic data: Current status and the road ahead. Geophysics, v.67, N 6, p.2012-2041, 2002.

Eykhoff P. Trends and progress in system identification. Pergamon Press, Oxford - Paris, 397 p., 1981.

Fault mechanics and transport properties of rock. Edited by B. Evans, Academic Press, London, 523 p., 1992.

Fox D.R. Estimation and inference in inverse methods for kinetic models of hydrocarbon generation. Mathematical Geology, v.27, N 8, p.923-938, 1995.

Gassmann F. Über die Elastizität poröser Medien. Vierteljahrsschr. Naturforsch. Ges. Zürich, B.96, S.1-23, 1951.

Goffe W.L., Ferrier G.D., Rodgers J. Global optimization of statistical functions with simulated annealing. J. Econometrics, v.60, p.65-100, 1994.

Goldberg D.E. Genetic algorithms in search, optimization and machine learning. Addison-Wesley Publ. Co, p.281-292, 1989.

Golub G., Van Loan C. Matrix computations. The John Hopkins University Press, 311 p., 1983.

Hadamard J. Le problème de Cauchy et les équations aux dérivées partielles linéaires hyperboliques. Hermann edition, 1932.

Helth A.E., Walsh J.J., Watterson J. Estimation of effects of sub-seismic sealing faults on effective permeabilities in sandstone reservoirs. Norwegian Institute of Technology, p.327-347, 1994.

Himmelblau D.M. Applied nonlinear programming. McGraw-Hill Book Company, p.534, 1972.

Houberman Sh.A. The non-formal data analysis in geology and geophysics. M., Nedra, p.251, 1987 (in Russian).

Huang Z., Williamson M.A., Katsube J. Permeability prediction with artificial neural networks modelling in the Venture gas field, offshore eastern Canada. Geophysics, v.61, N 2, p.422-435, 1996.

Huffman A.R. The future of pore pressure prediction using geophysical methods. The Leading Edge, February, p.199-205, 2002.

Ingberg L. Very fast simulated re-annealing. Math. Comput. Modelling, v.12, p.967-973, 1989.

Ivanov V.K. On linear ill-posed problems. RAS USSR, v.196, N 145, p.270-272, 1970 (in Russian).

Jackson D.D. The use of a priori data to resolve non-uniqueness in linear inversion. Geophys. J.R. Astr. Soc., N 57, p.137-157, 1979.

Johnson H.D., Stewart D.J. Role of clastic sedimentology in the exploration and production of oil and gas in the North Sea. Geological Society Special Publication, N 18, p.219-231, 1985.

King M.S. Wave velocity in rock as a function of changes in overburden pressure and pore fluid saturants. Geophysics, v.31, p.50-73, 1966.

Knott S.D. Fault seal analysis in the North Sea. The American Assoc. of Petroleum Geologists Bull., v.77, N 5, p.778-792, 1993.

Lachenbruch A. Crustal temperature and heat productivity: Implications of the linear flow relation. Journal of Geophysical Research, v.75, p.3291-3300, 1970.

Lavrentjev M.M. About some ill-posed problems of math physics. Moscow Academy of Science, Siberia Branch issue, 92 p., 1962 (in Russian).

Lawson Ch.L., Hanson R.J. Solving least squares problems. Prentice-Hall Inc., Englewood Cliffs, New Jersey, p.263, 1974.

Lerch I. Inversion of dynamical indicators in quantitative basin analysis models. 1. Theoretical considerations. Mathematical Geology, v.23, N 6, p.817-832, 1991.

Lerch I. Theoretical aspects of problems in basin modelling. In "Basin Modelling Advances and Applications", Norwegian Petroleum Society, Special publication, Elsevier, Amsterdam, p.35-65, 1990.

Madatov A.G., Sereda A.-V.I. The effective basin model concept and fast 3-D overpressure modelling in basin time scale. Proceedings of the Murmansk State University, v.8, N 1, p.5-43, 2005.

Madatov A.G. Optimisation of the multi-dimensional basis of the Model Space at the seismic processing applications. Soviet Geophysics Journal, v.13, N 1, p.45-56, 1990.

Madatov A.G. The overpressure driven seismic velocity response. Review of standard models and methods for extraction in context of basin modelling approach to overpressure prediction. Proceedings of the Murmansk State University, v.8, N 1, p.84-119, 2005.

Madatov A.G., Helle H.B., Sereda V-A.I. Attenuation from inversion of surface seismic data - application to the pore pressure prediction. Paper presented in SEG'96 summer workshop "Big Sky", Montana, 1996a.

Madatov A.G., Helle H.B., Sereda V-A.I. Modelling and inversion technique applied to pore pressure evaluation. Paper (E048) presented at 57-th EAGE Conference Glasgow, 29 May - 2 June, 1995.

Madatov A.G., Sereda A.-V.I. The decomposition of 3-D overpressure evolution model in basin scale and its application to the fault seal analysis. Proceedings of the Murmansk State Techn. Univ., v.4, N 1, p.79-96, 2001.

Madatov A.G., Sereda A.-V.I. The forward and inverse problems of the fluid dynamics in basin modelling applied to the pore pressure prediction within the sedimentary basins. P.2. The practical aspects. Proceedings of the Murmansk State Techn. Univ., v.3, N 2, p.351-366, 2000b.

Madatov A.G., Sereda A.-V.I. Upscaling of a basin model required for pore pressure prediction before and during drilling. Proceedings of the Murmansk State Techn. Univ., v.6, N 1, p.119-144, 2003.

Madatov A.G., Sereda V.-A.I. The forward and inverse problems of the fluid dynamics in basin modelling applied to the pore pressure prediction within the sedimentary basins. P.1. Theory aspects. Proceedings of the Murmansk State Techn. Univ., v.3, N 1, p.89-114, 2000a.

Madatov A.G., Sereda V.-A.I., Doyle E.F. Integration and inversion of well data into the basin evolution model: Way to the new generation of pressure prediction technologies. Paper presented on American Assoc. of Drilling Engineers (AADE) Forum "Pressure regimes in sedimentary basins and their prediction", 2-4 September 1998, Houston, Texas, USA, 1999.

Madatov A.G., Sereda V.-A.I., Doyle E.F. Pore pressure prediction by using inversion before and during drilling. Paper presented on Workshop "New methods and technologies in petroleum geology, drilling and reservoir engineering", 19-20 June 1997, Krakow, Poland, 1997.

Magara K. Compaction and fluid migration. Elsevier Scientific Publishing Company, p.319, 1978.

Marquardt D.W. An algorithm for least square estimation of non-linear parameters. Journal of Society of Industrial and Applied Mathematics, N 11, p.431-441, 1963.

Menke W. Geophysical data analysis. Discrete Inverse Theory. Academic press, New York, 312 p., 1984.

Miall A.D. Principles of sedimentary basin analysis. Springer-Verlag, Berlin, p.616, 2000.

Miall A.D. The geology of fluvial deposits sedimentary facies, basin analysis, and petroleum geology. SpringerVerlag, Berlin Heidelberg, p.582, 1996.

Miller T.W. New insights on natural hydraulic fractures induced by abnormally high pore pressure. The American Assoc. of Petroleum Geologists Bull., v.79, N 7, p.1005-1018, 1995.

Moisejev N.N., Ivanilov Yu.P., Stoljarova E.M. Optimisation methods. M., Nauka, 352 p., 1978 (in Russian).

Moushin I.A., Brodov L.U., Kozlov E.A., Hatjanov F.I. Structure-formation interpretation of seismic data. M., Nedra, 299 p., 1990 (in Russian).

Nordahl K., Ringrose P., Wem R. Petrophysical variation as a function of scale and implication for core-log integration. Paper (F-25) presented on EAEG conference, Stavanger, June 4-7, 2003.

Nordgard Bolas H.M., Hermanrud Ch., Gunn Teige M.G. Origin of overpressures in shales: Constraints from basin modelling. The American Association of Petroleum Geologists Bulletin, v.88, N 2, p.193-211, 2004.

Nur A., Dvorkin J. Critical porosity: The key factor to relating velocity to porosity in rocks. Paper presented on conference Pressure Regimes in sedimentary basins and their prediction, DelLago, Texas, USA, Sept. 2-4, 1998.

Oil and Gas Geology handbook. M., Nedra, p.480, 1984 (In Russian).

Raiga-Clemenceau J., Martin J.P., Nicoletis S. The concept of acoustic formation factor for more accurate porosity determination from sonic transit time data. The Log Analyst, v.29, p.54-60, 1988.

Rieke H.H., Chilingarian G.V. Compaction of argillaceous sediments. London, New York, 424 p., 1974.

Rowan M.G. A systematic technique for the sequential restoration of salt structures. Tectonophysics, v.228, p.369-381, 1993.

Sayers C.M., Woodward M.J., Bartman R.C. Seismic pore pressure prediction using reflection tomography and 4-C seismic data. The Leading Edge, February, p.188-192, 2002.

Schneider F., Potdevin J.L., Wolf S., Faille L. Modèle de compaction élastoplastique et viscoplastique pour simulateur de bassins sédimentaires. IFP Revue, v.49, N 2, p.141-148, 1994.

Seismic Stratigraphy. Edited by Ch.E. Payton. Memoir of the American Assoc. of Petroleum Geologists, Tulsa, Oklahoma, p.839, 1977.

Sleep N.H. Thermal effects of the formation of Atlantic continental margin by continental break up. Geoph. J. of the Royal Astronomical Society, v.24, p.325-350, 1971.

Stump B.B., Flemings P.B., Finkbeiner T., Zoback M.D. Pressure difference between overpressured sands and bounding shales of the Eugene island 330 Field (Offshore Louisiana, USA) with implication for fluid flow induced by sediment loading. Paper presented on Workshop "Overpressure in Petroleum Exploration", Pau, April, 1998.

Stuve K. Geodynamics of the lithosphere (An introduction). Springer-Verlag, Berlin Heidelberg, p.449, 2002.

Swarbrick R.E., Huffman A.R., Bowers G.L. Pressure regimes in sedimentary basins and their prediction. History of AADE forum. The Leading Edge, April, 1999.

Tarantola A. Inverse problem theory: Methods for data fitting and model parameter estimation. Elsevier, Netherlands, p.386, 1987.

Thermal modelling in sedimentary basins. The collection of papers from 1-st Exploration Research Conference. Edited by J. Burrus, IFP, France, p.600, 1985.

Thorne J.A., Watts A.B. Quantitative analysis of North Sea subsidence. The American Assoc. of Petroleum Geologists Bull., v.73, N 1, p.88-116, 1989.

Tikhonov A.N. About stability of inverse problem. RAS USSR, v.39, N 5, p.195-198, 1943 (in Russian).

Tikhonov A.N., Arsenin V.Ja. The methods of solution of ill-posed problems. Moscow, Nauka, 285 p., 1979 (in Russian).

Traub J.F., Woznjakovski H. Theory of optimal algorithms. Academic press, New York, 183 p., 1980.

Ursin B., Zheng Y. Identification of seismic reflections using singular value decomposition. Geophysical Prospecting, N 33, p.773-799, 1985.

Waples D.W. Organic geo-chemistry for exploration geologists. Minneapolis, Burgess, 151 p., 1981.

Watterson J., Walsh J. Fault seal prediction: Impossible, or just very difficult? Geological Society Special Publication, N 18, p.219-231, 1994.

Yardley G. Can lateral transfer explain the high pressure in the Central North Sea? Paper presented on Workshop "Overpressure in Petroleum Exploration", Pau, April, 1998.

Yassir N.A., Bell J.S. Relationships between pore pressure, stress and present day geodynamics in the Scotian shelf, Offshore Eastern Canada. The American Assoc. of Petroleum Geologists Bull., v.78, N 12, p.1863-1880, 1994.

Yu Z., Lerch I., Bour Q. Inversion of dynamical indicators in quantitative basin analysis models. II Synthetic multiwell information and two-dimensional case history. Mathematical Geology, v.27, N 1, p.41-68, 1995.

Zhao K., Lerch I. Inversion of dynamical indicators in quantitative basin analysis models. II Synthetic test and case history. Mathematical Geology, v.25, N 2, p.107-120, 1993.

Zimmer M., Prasad M., Mavko G. Pressure and porosity influences on Vp-Vs ratio in unconsolidated sands. The Leading Edge, N 2, p.178-183, 2002.

Glennie K.W. (ed.) Petroleum Geology of the North Sea, basic concepts and recent advances. Oxford, Blackwell Scientific Publication, 547 p., 1998.
