ESTIMATION OF HAZARD AND SURVIVAL FUNCTION FOR COMPETING RISKS USING KERNEL AND MIXTURE MODEL IN BIMODAL SETUP
¹A. M. Rangoli, ²A. S. Talawar
¹Research Scholar, Department of Statistics, Karnatak University, Dharwad, India. [email protected]
²Professor, Department of Statistics, Karnatak University, Dharwad, India.
Abstract
The aim of the present paper is to find a suitable model for bimodal data. We model a mixture of two Weibull distributions in the presence of competing risks and also use the Epanechnikov kernel to estimate the hazard and survival functions. We consider prostate cancer data for application of the mixture model and the kernel. We use maximum likelihood estimation (MLE) to estimate the parameters of the mixture model; as the likelihood equations have no closed-form solution, we employ the expectation-maximization (EM) algorithm. Both the mixture model and the kernel give a good fit to the bimodal data. The prostate cancer data involve three causes of failure, and we estimate the hazard function for each of these causes using the mixture model and the kernel. Asymptotic confidence intervals for the parameters of the mixture model are obtained for all three causes. We also compare the survival curves of the mixture model with the kernel and Kaplan-Meier survival curves for all three causes.
Keywords: Weibull mixture model, EM algorithm, kernel, hazard, bimodal.
I. Introduction
General statistical analyses differ from survival analysis because of the presence of censoring; broadly, censoring means incomplete data. In survival analysis and medical studies it is quite common that more than one cause of failure may act on a subject at the same time, and an investigator is required to estimate a specific risk in the presence of other risk factors. In the statistical literature this is known as the analysis of the competing risks model. In competing risks analysis it is assumed that the data consist of a time to failure and an indicator denoting the cause of failure. The main objective of survival analysis is to estimate the survival and hazard functions, which can be done using parametric, non-parametric and Bayesian methods. For a parametric approach the Weibull distribution is a common choice because it accommodates increasing, decreasing and constant hazard rates, but it is suitable only when the data are unimodal. When the data are bimodal, standard parametric lifetime distributions are no longer appropriate and a mixture of distributions can be used instead. For a nonparametric approach, the kernel method can be used to estimate the hazard and survival functions.
Many authors have worked on mixtures of distributions and kernel-based methods to estimate hazard and survival functions. Modelling of mixtures of gamma, Weibull and log-normal distributions for analysing heterogeneous survival data was considered by [1], who applied the models to mice data and a lung cancer dataset. Mixtures of two and three Weibull distributions were modelled by [2], who estimated the parameters using MLE, tested the models for goodness of fit, and used five different examples to illustrate the hazard and survival functions. A parametric mixture model of three different distributions was used to analyse heterogeneous survival data by [3]; they simulated data, estimated the parameters using the expectation-maximization (EM) algorithm, and compared the individual exponential, gamma and Weibull distributions with the mixture of these three distributions. Similarly, many authors have worked on mixture models ([4], [5], [6]). Estimation of the hazard function and its associated factors in gastric cancer patients using wavelet and kernel smoothing methods was carried out by [7]. Repeated time-to-event models to characterize the repeated occurrence of clinical events, with visualization of a kernel-based hazard and comparison to Weibull and Gompertz models, were considered by [8] (see also [9], [10]).
In the present paper, we consider estimation of the density, hazard and survival functions using the Epanechnikov kernel and a mixture of two Weibull distributions. For estimation we use the prostate cancer data given in [11], which are bimodal. Generally, standard parametric lifetime distributions are not appropriate for bimodal data, but mixtures of those distributions are. We consider two cases: Case-I estimates the hazard and survival functions using a mixture of two Weibull distributions, and Case-II uses the kernel method of estimation.
II. Methods
2.1 Case-I: Mixture of two Weibull distributions
We now consider a parametric approach using a mixture of two Weibull distributions. The study fits the bimodal data to the mixture of Weibull distributions in the presence of competing risks and computes the hazard and survival functions. The functional form of the mixture distribution is given below.
Let $T_1, T_2, \ldots, T_n$ be the failure times of $n$ patients, where $T_i \in (0, t]$, $i = 1, 2, \ldots, n$. If we consider $k$ competing events, then $T_{ij}$ is the failure time of the $i$th patient due to the $j$th cause. Each patient fails due to only one cause, so $T_i = \min(T_{i1}, T_{i2}, \ldots, T_{ik})$. Let $C$ be the censoring time, so that the observed time is $\min(T_{ij}, C)$. Let $F(t)$ be the cumulative distribution function (CDF), $f(t)$ the probability density function, and $h(t)$ and $S(t)$ the hazard and survival functions at time $t$.
$$F(t) = 1 - \left(\pi_1 e^{-\alpha t^{\gamma}} + \pi_2 e^{-\beta t^{\lambda}}\right) \qquad (1)$$
$$f(t) = \pi_1\,\alpha\gamma\,t^{\gamma-1} e^{-\alpha t^{\gamma}} + \pi_2\,\beta\lambda\,t^{\lambda-1} e^{-\beta t^{\lambda}} \qquad (2)$$
$$S(t) = 1 - F(t)$$
$$S(t) = \pi_1 e^{-\alpha t^{\gamma}} + \pi_2 e^{-\beta t^{\lambda}} \qquad (3)$$
$$h(t) = \frac{f(t)}{S(t)} = \frac{\pi_1\,\alpha\gamma\,t^{\gamma-1} e^{-\alpha t^{\gamma}} + \pi_2\,\beta\lambda\,t^{\lambda-1} e^{-\beta t^{\lambda}}}{\pi_1 e^{-\alpha t^{\gamma}} + \pi_2 e^{-\beta t^{\lambda}}} \qquad (4)$$
Here $\pi_1$ and $\pi_2$ are the mixing weights, with $\pi_1 + \pi_2 = 1$; $\alpha$ and $\gamma$ are the scale and shape parameters of the component with weight $\pi_1$, and $\beta$ and $\lambda$ are the scale and shape parameters of the component with weight $\pi_2$.
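For concreteness, the following is a minimal sketch in Python of the mixture density, survival and hazard functions (2)-(4). The function names are illustrative, and the parameter values in the example line are the cancer-cause estimates later reported in Table 1.

```python
import numpy as np

def mixture_pdf(t, pi1, alpha, gamma, beta, lam):
    """Density f(t) of the two-component Weibull mixture, eq. (2)."""
    pi2 = 1.0 - pi1
    comp1 = pi1 * alpha * gamma * t ** (gamma - 1) * np.exp(-alpha * t ** gamma)
    comp2 = pi2 * beta * lam * t ** (lam - 1) * np.exp(-beta * t ** lam)
    return comp1 + comp2

def mixture_surv(t, pi1, alpha, gamma, beta, lam):
    """Survival S(t) of the mixture, eq. (3)."""
    pi2 = 1.0 - pi1
    return pi1 * np.exp(-alpha * t ** gamma) + pi2 * np.exp(-beta * t ** lam)

def mixture_hazard(t, pi1, alpha, gamma, beta, lam):
    """Hazard h(t) = f(t)/S(t), eq. (4)."""
    return (mixture_pdf(t, pi1, alpha, gamma, beta, lam)
            / mixture_surv(t, pi1, alpha, gamma, beta, lam))

# Example: hazard for the cancer cause, using the estimates later reported in Table 1
t = np.linspace(1, 80, 200)
hazard_cancer = mixture_hazard(t, 0.8155268, 0.001494359, 1.241094, 0.000370462, 2.180196)
```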
The likelihood function $L$ is given as
$$L = \prod_{i=1}^{n} f(t_i) = \prod_{i=1}^{n}\left(\pi_1\,\alpha\gamma\,t_i^{\gamma-1} e^{-\alpha t_i^{\gamma}} + \pi_2\,\beta\lambda\,t_i^{\lambda-1} e^{-\beta t_i^{\lambda}}\right)$$
Now the likelihood function in terms of competing risks can be given using the indicator
$$\delta_{ij} = \begin{cases} 1, & \text{if the } i\text{th subject fails due to the } j\text{th cause}, \ j = 1, 2, \ldots, k \\ 0, & \text{if the } i\text{th subject does not fail due to the } j\text{th cause (censored)} \end{cases}$$
so that
$$L = \prod_{i=1}^{n}\prod_{j=1}^{k}\left(h_j(t_i)\right)^{\delta_{ij}} S(t_i) \qquad (5)$$
Now the log-likelihood of equation (5) can be written as
$$l = \log L = \sum_{i=1}^{n}\sum_{j=1}^{k}\left[\delta_{ij}\log\left(h_j(t_i)\right) + \log\left(S_j(t_i)\right)\right]$$
$$l = \sum_{i=1}^{n}\sum_{j=1}^{k}\left[\delta_{ij}\log\left(\frac{\pi_1\alpha_j\gamma_j t_i^{\gamma_j-1} e^{-\alpha_j t_i^{\gamma_j}} + \pi_2\beta_j\lambda_j t_i^{\lambda_j-1} e^{-\beta_j t_i^{\lambda_j}}}{\pi_1 e^{-\alpha_j t_i^{\gamma_j}} + \pi_2 e^{-\beta_j t_i^{\lambda_j}}}\right) + \log\left(\pi_1 e^{-\alpha_j t_i^{\gamma_j}} + \pi_2 e^{-\beta_j t_i^{\lambda_j}}\right)\right] \qquad (6)$$
For cause $j$ we can write the log-likelihood as
$$l_j = \sum_{i=1}^{n_j}\log\left(\pi_1\alpha_j\gamma_j t_i^{\gamma_j-1} e^{-\alpha_j t_i^{\gamma_j}} + \pi_2\beta_j\lambda_j t_i^{\lambda_j-1} e^{-\beta_j t_i^{\lambda_j}}\right) - \sum_{i=1}^{n_j}\log\left(\pi_1 e^{-\alpha_j t_i^{\gamma_j}} + \pi_2 e^{-\beta_j t_i^{\lambda_j}}\right) + \sum_{i=1}^{n}\log\left(\pi_1 e^{-\alpha_j t_i^{\gamma_j}} + \pi_2 e^{-\beta_j t_i^{\lambda_j}}\right) \qquad (7)$$
where $n_j$ is the number of subjects failing due to cause $j$, the first two sums run over those subjects, and the last sum runs over all $n$ subjects.
The parameters are estimated by MLE, obtained by taking the first-order partial derivatives of the log-likelihood with respect to each parameter and equating them to zero. Since these equations have no closed-form solution, we estimate the parameters numerically, either by the Newton-Raphson method or by the expectation-maximization (EM) algorithm. The first-order partial derivatives are given in the Appendix.
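Since the score equations have no closed form, the cause-specific log-likelihood can also be maximized directly by a numerical optimizer. The following is a minimal sketch, assuming the reconstruction of equation (7) above; the function name, the fixed mixing weights and the use of scipy's Nelder-Mead optimizer are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def cause_loglik(params, t_all, fail_j, pi1=0.8):
    """Cause-specific log-likelihood l_j, following the form of eq. (7).

    params : (alpha, gamma, beta, lam) for cause j
    t_all  : all observed times (failures from any cause and censored times)
    fail_j : boolean mask, True where the subject failed from cause j
    pi1    : mixing weight pi_1 (held fixed in this sketch)
    """
    alpha, gamma, beta, lam = params
    pi2 = 1.0 - pi1
    t_all = np.asarray(t_all, float)
    tj = t_all[np.asarray(fail_j, bool)]          # failure times due to cause j
    f_j = (pi1 * alpha * gamma * tj ** (gamma - 1) * np.exp(-alpha * tj ** gamma)
           + pi2 * beta * lam * tj ** (lam - 1) * np.exp(-beta * tj ** lam))
    S_j = pi1 * np.exp(-alpha * tj ** gamma) + pi2 * np.exp(-beta * tj ** lam)
    S_all = pi1 * np.exp(-alpha * t_all ** gamma) + pi2 * np.exp(-beta * t_all ** lam)
    return np.sum(np.log(f_j)) - np.sum(np.log(S_j)) + np.sum(np.log(S_all))

# Maximize l_j by minimizing its negative, e.g. for the cancer cause:
# res = minimize(lambda p: -cause_loglik(p, times, is_cancer),
#                x0=[1e-3, 1.2, 1e-3, 2.0], method="Nelder-Mead")
```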
2.1.1 Expectation-Maximization (EM) Algorithm
The EM algorithm is a powerful iterative method used for finding maximum likelihood estimates or maximum a posteriori estimates in statistical models where the data are incomplete or contain latent variables. The EM algorithm consists of an expectation step (E-step) and a maximization step (M-step). The advantage of the EM algorithm is that it solves a difficult incomplete-data problem by constructing two easy steps. The E-step only needs to compute the conditional expectation of the log-likelihood with respect to the incomplete data, given the observed data. The M-step needs to find the maximizer of this expected likelihood. An additional advantage of this method compared to other optimization techniques is that it is very simple and it converges reliably [12]. Let Y be the observed data and X be the missing data. We write l, l_c and l_m for the log-likelihoods based on the observed, complete and missing data distributions respectively. The EM algorithm consists of iterating two steps. The first is the expectation, or "E", step, in which an objective function is constructed from the complete data likelihood. The second is the maximization, or "M", step, in which the previously computed objective function is maximized. These two steps are alternated until some convergence criterion is met [13]. Whatever value of θ the algorithm converges to is used as our parameter estimate.
The E-step of the EM algorithm consists of computing the conditional expectation of the complete data likelihood, given the observed data. That is, the objective function at iteration k is given by
$$Q(\theta \mid \theta_{k-1}) = E_{\theta_{k-1}}\left[\,l_c(\theta; Y, X)\mid Y = y\,\right] \qquad (8)$$
where $\theta_{k-1}$ is the parameter estimate obtained from the previous iteration.
The M-step of the EM algorithm consists of maximizing the objective function constructed in the previous E-step. That is, we define $\theta_k = \arg\max_{\theta} Q(\theta\mid\theta_{k-1})$. Typically, this optimization must be performed numerically via, e.g., gradient ascent or the Newton-Raphson algorithm. In fact, it is possible to divide the set of parameters into groups (possibly with each group containing a single parameter) and optimize over each group individually with the others held fixed. This is called the Expectation-Conditional Maximization, or ECM, algorithm. Notationally, we can combine the E and M steps of the EM algorithm into a single "update function". We write $M(\theta_{k-1}) = \arg\max_{\theta} Q(\theta\mid\theta_{k-1})$. The EM algorithm can thus be viewed as the iterative application of this update function, $M$.
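For intuition, a minimal sketch of the E and M steps for the two-component Weibull mixture is given below, ignoring censoring and the competing-risks structure for brevity. The responsibilities in the E-step and the weighted fits in the M-step follow the generic update $M(\theta_{k-1})$ described above; the function names, starting values and use of scipy's Nelder-Mead optimizer are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_pdf(t, a, g):
    """Component density with survival exp(-a * t**g), as in eq. (2)."""
    return a * g * t ** (g - 1) * np.exp(-a * t ** g)

def em_weibull_mixture(t, n_iter=200):
    """EM iterations for the two-component Weibull mixture (uncensored sketch)."""
    t = np.asarray(t, float)
    pi1, a, g, b, lam = 0.5, 0.01, 1.0, 0.001, 2.0   # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability that each observation belongs to component 1
        d1 = pi1 * weibull_pdf(t, a, g)
        d2 = (1.0 - pi1) * weibull_pdf(t, b, lam)
        r1 = d1 / (d1 + d2)
        # M-step: update the weight and each component by weighted maximum likelihood
        pi1 = r1.mean()
        a, g = minimize(lambda p: -np.sum(r1 * np.log(weibull_pdf(t, *p))),
                        [a, g], method="Nelder-Mead").x
        b, lam = minimize(lambda p: -np.sum((1.0 - r1) * np.log(weibull_pdf(t, *p))),
                          [b, lam], method="Nelder-Mead").x
    return pi1, a, g, b, lam
```

In practice a log-parameterization of the positive parameters and a convergence check on the log-likelihood would make this sketch more robust.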
2.1.2 Asymptotic Confidence Bounds
The MLEs do not have a closed form, so their exact distribution is not available for constructing confidence intervals; in such a case we use the asymptotic distribution of the MLE of the parameters [14]. It is known that the asymptotic distribution of the MLE $\hat{\theta}$ is given by
$$(\hat{\theta} - \theta) \rightarrow N_4\left(0, I^{-1}(\theta)\right)$$
where $I^{-1}(\theta)$ is the inverse of the Fisher information matrix of the unknown parameters
$\theta = (\alpha_1, \alpha_2, \alpha_3, \beta_1, \beta_2, \beta_3, \gamma_1, \gamma_2, \gamma_3, \lambda_1, \lambda_2, \lambda_3)$. For each cause $j$, the elements of the $4 \times 4$ matrix $I^{-1}(\cdot)$ are approximated by $I_{ij}(\hat{\theta})$,
where
$$I_{ij}(\hat{\theta}) = -\left.\frac{\partial^2 l(\theta)}{\partial\theta_i\,\partial\theta_j}\right|_{\theta=\hat{\theta}}$$
and $\hat{\theta} = (\hat{\alpha}_1, \hat{\alpha}_2, \hat{\alpha}_3, \hat{\beta}_1, \hat{\beta}_2, \hat{\beta}_3, \hat{\gamma}_1, \hat{\gamma}_2, \hat{\gamma}_3, \hat{\lambda}_1, \hat{\lambda}_2, \hat{\lambda}_3)$ denotes the estimated parameters. The information matrix for cause $j$ can be written as
$$\left(\frac{\partial^2 l(\theta)}{\partial\theta_i\,\partial\theta_j}\right) = \begin{pmatrix}
\dfrac{\partial^2 l}{\partial\alpha_j^2} & \dfrac{\partial^2 l}{\partial\alpha_j\,\partial\beta_j} & \dfrac{\partial^2 l}{\partial\alpha_j\,\partial\gamma_j} & \dfrac{\partial^2 l}{\partial\alpha_j\,\partial\lambda_j} \\
\dfrac{\partial^2 l}{\partial\alpha_j\,\partial\beta_j} & \dfrac{\partial^2 l}{\partial\beta_j^2} & \dfrac{\partial^2 l}{\partial\beta_j\,\partial\gamma_j} & \dfrac{\partial^2 l}{\partial\beta_j\,\partial\lambda_j} \\
\dfrac{\partial^2 l}{\partial\alpha_j\,\partial\gamma_j} & \dfrac{\partial^2 l}{\partial\beta_j\,\partial\gamma_j} & \dfrac{\partial^2 l}{\partial\gamma_j^2} & \dfrac{\partial^2 l}{\partial\gamma_j\,\partial\lambda_j} \\
\dfrac{\partial^2 l}{\partial\alpha_j\,\partial\lambda_j} & \dfrac{\partial^2 l}{\partial\beta_j\,\partial\lambda_j} & \dfrac{\partial^2 l}{\partial\gamma_j\,\partial\lambda_j} & \dfrac{\partial^2 l}{\partial\lambda_j^2}
\end{pmatrix} \qquad (9)$$
The elements of the Fisher information matrix are given in the Appendix.
Therefore, the approximate $100(1-\gamma)\%$ two-sided confidence interval for $\theta$ is given by
$$\hat{\theta} \pm z_{\gamma/2}\sqrt{I^{-1}(\hat{\theta})} \qquad (10)$$
Here $z_{\gamma/2}$ is the upper $\gamma/2$-th percentile of the standard normal distribution.
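A minimal sketch of how (9) and (10) can be evaluated numerically from a cause-specific log-likelihood (such as the cause_loglik sketch above) and its MLE is given below; the central-difference step size and the helper names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def numerical_hessian(fun, theta, eps=1e-5):
    """Central-difference Hessian of a scalar function at theta."""
    theta = np.asarray(theta, float)
    p = len(theta)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            ei = np.eye(p)[i] * eps
            ej = np.eye(p)[j] * eps
            H[i, j] = (fun(theta + ei + ej) - fun(theta + ei - ej)
                       - fun(theta - ei + ej) + fun(theta - ei - ej)) / (4.0 * eps ** 2)
    return H

def asymptotic_ci(loglik, theta_hat, level=0.95):
    """Wald-type intervals from the observed information, following (9)-(10)."""
    theta_hat = np.asarray(theta_hat, float)
    info = -numerical_hessian(loglik, theta_hat)     # observed information I(theta_hat)
    se = np.sqrt(np.diag(np.linalg.inv(info)))       # standard errors from I^{-1}
    z = norm.ppf(1.0 - (1.0 - level) / 2.0)
    return theta_hat - z * se, theta_hat + z * se, se
```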
2.2 Case-II: Kernel density Estimation
A kernel is a weight function of an observation $x$ and a scaling parameter $h$, which is called the bandwidth. The scaled distances obtained at a point $x$ are used to compute the kernel density at that point, and the kernel density function is regarded as a probability density [8]. The density estimator is given by
$$\hat{f}(u) = \frac{1}{n}\sum_{i=1}^{n} K(u)$$
where $K(\cdot)$ is the kernel function.
The kernel function has the following properties:
$$K(u) \geq 0 \ \text{for all } u, \qquad \int K(u)\,du = 1 \ \text{(normalization)}, \qquad K(-u) = K(u) \ \text{(symmetry)},$$
$$\int u\,K(u)\,du = 0, \qquad \text{and} \qquad \int u^{2}K(u)\,du \neq 0.$$
Using kernels we can estimate the survival and hazard functions. In general, for identifying the pattern of the failure rate, the hazard curve is more informative than the survival curve. Hazard rate functions can be used in several statistical analyses in medicine, engineering and economics; for instance, the hazard function is commonly used when presenting results of clinical trials involving survival data. Several methods for hazard function estimation have been considered in the literature
([7], [9],[10]). Hazard function estimation by nonparametric methods has an advantage in flexibility because no formal assumptions are made about the mechanism that generates the sample order or the randomness [15]. There are many kernels in the literature say, Uniform, Triangle, Epanechnikov, Quartic, Triweight, Gaussian, Cosine etc. Now for our study we consider the Epanechnikov kernel [16].
$$K(u) = \frac{3}{4}\left(1 - u^{2}\right)I(|u| < 1)$$
By considering $u = \dfrac{t - X_i}{b}$, the kernel becomes
$$K\!\left(\frac{t - X_i}{b}\right) = \frac{3}{4}\left(1 - \left(\frac{t - X_i}{b}\right)^{2}\right)I\!\left(\left|\frac{t - X_i}{b}\right| < 1\right)$$
where $b$ is the bandwidth, $n$ is the number of observations, $X_i$ is the $i$th observation and $t$ is the point at which the kernel is evaluated.
From this kernel many bumps are formed, and summing the bumps gives us the density function. The kernel density estimator is given by
$$\hat{f}_b(t) = \frac{1}{nb}\sum_{i=1}^{n} K\!\left(\frac{t - X_i}{b}\right) \qquad (11)$$
The estimator of the CDF, $\hat{F}_b$, is constructed by integrating $\hat{f}_b$. That is,
$$\hat{F}_b(t) = \int_{-\infty}^{t}\hat{f}_b(x)\,dx = \frac{1}{n}\sum_{i=1}^{n}\mathcal{K}\!\left(\frac{t - X_i}{b}\right)$$
where $\mathcal{K}(t) = \int_{-\infty}^{t} K(x)\,dx$.
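A minimal sketch of the Epanechnikov kernel density estimator defined above (the function names are illustrative; the bandwidth b is chosen as in Section 2.2.2):

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel K(u) = 3/4 (1 - u^2) on |u| < 1."""
    u = np.asarray(u, float)
    return 0.75 * (1.0 - u ** 2) * (np.abs(u) < 1)

def kernel_density(t_grid, obs, b):
    """Kernel density estimate f_b(t) = (1/(n b)) * sum_i K((t - X_i)/b)."""
    t_grid = np.asarray(t_grid, float)
    obs = np.asarray(obs, float)
    u = (t_grid[:, None] - obs[None, :]) / b      # grid points x observations
    return epanechnikov(u).sum(axis=1) / (len(obs) * b)
```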
The hazard function is estimated using the kernel as [17]
$$\hat{h}(t) = \frac{1}{b}\sum_{i=1}^{n} K\!\left(\frac{t - t_i}{b}\right)\Delta\hat{A}(t_i) \qquad (12)$$
where $n$ is the number of failure times, $b$ is the bandwidth, and $\Delta\hat{A}(t_i)$ is the increment at $t_i$ of the Nelson-Aalen estimator $\hat{A}(t)$ of the cumulative hazard function.
2.2.1 Nelson-Aalen estimator of cumulative hazard function
Let the hazard function be,
$$h(t) = \lim_{l\to 0}\frac{1}{l}\,P(t < T \leq t + l \mid T > t) = \frac{f(t)}{S(t)}$$
where $f(t)$ is the density function and $S(t)$ is the survival function. The survival function in terms of the hazard function can be expressed as
$$S(t) = e^{-\int_{0}^{t} h(u)\,du}$$
Now the cause-specific hazard function is given by
$$h_j(t) = \lim_{l\to 0}\frac{1}{l}\,P(t < T \leq t + l,\ J = j \mid T > t)$$
and the overall hazard is the sum of the cause-specific hazards,
$$h(t) = \sum_{j=1}^{k} h_j(t)$$
The cumulative cause-specific hazard function is given by
$$A_j(t) = \int_{0}^{t} h_j(u)\,du$$
and the cause-specific Nelson-Aalen estimator of the cumulative hazard [18] is given by
$$\hat{A}_j(t) = \sum_{t_k \leq t}\frac{\text{number of individuals observed to fail due to cause } j \text{ at } t_k}{\text{number of individuals at risk just prior to } t_k}$$
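The following sketch combines the cause-specific Nelson-Aalen increments with the kernel smoothing of equation (12); the data layout (a cause label of 0 for censored subjects) and the function names are illustrative assumptions.

```python
import numpy as np

def nelson_aalen_increments(times, cause, j):
    """Cause-specific Nelson-Aalen increments d_jk / n_k at each event time for cause j.

    times : observed time for every subject (failure or censoring)
    cause : cause label per subject (0 meaning censored in this sketch)
    """
    times = np.asarray(times, float)
    cause = np.asarray(cause)
    event_times = np.unique(times[cause == j])
    d = np.array([np.sum((times == tk) & (cause == j)) for tk in event_times])
    at_risk = np.array([np.sum(times >= tk) for tk in event_times])
    return event_times, d / at_risk

def kernel_hazard(t_grid, event_times, dA, b):
    """Kernel-smoothed hazard of eq. (12): (1/b) * sum_i K((t - t_i)/b) * dA(t_i)."""
    t_grid = np.asarray(t_grid, float)
    u = (t_grid[:, None] - event_times[None, :]) / b
    K = 0.75 * (1.0 - u ** 2) * (np.abs(u) < 1)   # Epanechnikov kernel
    return (K * dA[None, :]).sum(axis=1) / b
```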
2.2.2 Selection of the Bandwidth
An important step in kernel density estimation is the selection of the bandwidth. We calculate the bandwidth using Silverman's rule of thumb [19], that is
$$b = 1.06\,\hat{\sigma}\,n^{-1/5} \qquad (13)$$
where $\hat{\sigma}$ is the sample standard deviation.
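A one-function sketch of rule (13), assuming the $n^{-1/5}$ form of Silverman's rule of thumb given above:

```python
import numpy as np

def silverman_bandwidth(obs):
    """Silverman's rule of thumb, eq. (13): b = 1.06 * sigma_hat * n**(-1/5)."""
    obs = np.asarray(obs, float)
    return 1.06 * obs.std(ddof=1) * len(obs) ** (-0.2)
```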
2.2.3 Kaplan-Meier (K-M) Estimator
The Kaplan-Meier estimator, also known as the product-limit estimator, is a non-parametric statistic used to estimate the survival function from lifetime data [20]. An important benefit of the Kaplan-Meier curve is that the method can take into account some types of censored data, particularly right-censoring, which occurs when a patient withdraws from a study, is lost to follow-up, or is alive without event incidence at the last follow-up. The Kaplan-Meier estimate is a simple way of computing survival over time. The Kaplan-Meier estimator of the survival function is defined as
$$\hat{S}(t) = \prod_{t_i \leq t}\left(1 - \frac{d_i}{n_i}\right)$$
where $t_i$ is the failure time, $d_i$ is the number of events that occur at time $t_i$ and $n_i$ is the number of individuals at risk of experiencing the event immediately prior to $t_i$.
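A minimal sketch of the product-limit estimator above; the 0/1 encoding of the event indicator is an illustrative assumption.

```python
import numpy as np

def kaplan_meier(times, observed):
    """Product-limit estimator S(t) = prod_{t_i <= t} (1 - d_i / n_i).

    times    : observed times
    observed : 1 if the event occurred, 0 if right-censored
    """
    times = np.asarray(times, float)
    observed = np.asarray(observed)
    event_times = np.unique(times[observed == 1])
    surv, s = [], 1.0
    for tk in event_times:
        d = np.sum((times == tk) & (observed == 1))   # events at t_k
        n = np.sum(times >= tk)                       # at risk just before t_k
        s *= 1.0 - d / n
        surv.append(s)
    return event_times, np.array(surv)
```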
III. Results
The prostate cancer data consist of 489 patients: 25.5% of them failed due to cancer, 19% failed due to CVD and 25.74% failed due to other causes, and the remaining patients were censored. The median failure time is 23 months for cancer patients, 20.5 months for CVD patients and 24 months for other causes.
From Figure 1 we can see that the data are bimodal; to fit these data we used the mixture of two Weibull distributions (black line) and the kernel density estimate (red line). Figure 2 shows the hazard curves for the three causes: cancer, CVD and other causes. For cancer and CVD the hazard initially increases until about 30 months and then decreases; for other causes the hazard increases, decreases and then increases again, so the hazard function is non-monotonic. Figure 3 shows the survival curves for the three causes using the kernel, the mixture model and the Kaplan-Meier estimator. The kernel and mixture-model survival curves are close to each other, and for all three causes the Kaplan-Meier curve gives a lower probability of survival than the kernel and the mixture model. Table 1 shows the parameter values estimated by MLE using the EM algorithm. Table 2 gives the estimated parameter values with their corresponding standard errors and confidence limits.
Figure 1: Histogram of the data with fitted mixture of Weibull distributions and Epanechnikov kernel.
Figure 2: Hazard curves using mixture model and kernel for three causes
Figure 3: Survival curves using mixture model, kernel and Kaplan-Meier for three causes
Table 1: Estimated parameter values for three causes
Parameter    Cancer         CVD            Other Causes
π1           0.8155268      0.810495       0.9224448
π2           0.1844732      0.1895035      0.07755524
α            0.001494359    0.001097538    0.000103001
β            0.000370462    0.000076065    0.000848062
γ            1.241094000    1.058928000    1.954219000
λ            2.180196000    2.600613000    2.469943000
Table 2: Estimated parameter values, standard error (SE), lower confidence limit (LCL) and upper confidence limit (UCL)
Causes         Parameter            SE             LCL            UCL
Cancer         α = 0.001494359      0.000059337    0.001378061    0.001610657
               β = 0.000370462      0.000006754    0.000357225    0.000383700
               γ = 1.241094000      0.004894528    1.231500902    1.250687098
               λ = 2.180196000      0.000420388    2.179372055    2.181019945
CVD            α = 0.001097538      0.000139156    0.000824797    0.001370279
               β = 0.000076065      0.000001373    0.000073374    0.000078756
               γ = 1.058928000      0.000000469    1.058927081    1.058928919
               λ = 2.600613000      0.000187876    2.600244771    2.600981229
Other causes   α = 0.000103001      0.000003077    0.000096969    0.000109032
               β = 0.000848062      0.000022916    0.000803148    0.000892976
               γ = 1.954219000      0.000666012    1.952913641    1.955524359
               λ = 2.469943000      0.011053927    2.448277701    2.491608299
IV. Discussions
From the study we conclude that, in real-life situations with competing risks data, if the data are bimodal we can use a mixture of distributions or kernel methods to estimate the hazard and survival functions. In this paper we have considered both approaches to estimate the hazard and survival functions in the presence of competing risks. To estimate the parameters of the mixture model we used MLE; since the likelihood equations have no closed-form solution, we applied the EM algorithm to estimate the parameters for all three causes. We also calculated the standard errors and asymptotic confidence intervals for all the parameters, and all the estimated parameters are statistically significant at the 5% level of significance. The estimated hazard function initially increases, then decreases and increases again. For the survival curves we compared the kernel, mixture-model and Kaplan-Meier methods. Thus, when the density is bimodal and a competing risks approach is required, either the mixture model (as a parametric approach) or the kernel method (as a nonparametric approach) is appropriate for estimating the hazard and survival functions.
Acknowledgment: The first author is thankful to the Department of Science and Technology, Innovation in Science Pursuit for Inspired Research (DST-INSPIRE), for financial support (Fellowship/2021/210203).
References
[1] Erisoglu, U., Erisoglu, M., and Erol, H. (2011). A mixture model of two different distributions approach to the analysis of heterogeneous survival data. International Journal of Computational and Mathematical Sciences, 5(2), 75-79.
[2] Razali, A. M., and Al-Wakeel, A. A. (2013). Mixture Weibull distributions for fitting failure times data. Applied Mathematics and Computation, 219(24), 11358-11364.
[3] Mohammed, Y. A., Yatim, B., and Ismail, S. (2013). A simulation study of a parametric mixture model of three different distributions to analyze heterogeneous survival data. Modern Applied Science, 7(7), 1-9.
[4] Elmahdy, E. E. (2015). A new approach for Weibull modeling for reliability life data analysis. Applied Mathematics and computation, 250, 708-720.
[5] Larson, M. G., and Dinse, G. E. (1985). A mixture model for the regression analysis of competing risks data. Journal of the Royal Statistical Society: Series C (Applied Statistics), 34(3), 201-211.
[6] Enogwe, S. U., Okereke, E. W., and Ibeh, G. C. (2023). A bimodal extension of Suja distribution with applications. Statistics and Applications, 21(2), 155-173.
[7] Ahmadi, A., Roudbari, M., Gohari, M. R., and Hosseini, B. (2012). Estimation of hazard function and its associated factors in gastric cancer patients using wavelet and kernel smoothing methods. Asian Pacific journal of cancer prevention, 13(11), 5643-5646.
[8] Goulooze, S. C., Valitalo, P. A., Knibbe, C. A., and Krekels, E. H. (2018). Kernel-based visual hazard comparison (kbVHC): a simulation-free diagnostic for parametric repeated time-to-event models. The AAPS journal, 20, 1-11.
[9] Hess, K. R., Serachitopol, D. M., and Brown, B. W. (1999). Hazard function estimators: a simulation study. Statistics in medicine, 18(22), 3075-3088.
[10] Klein, J. P., and Bajorunaite, R. (2003). Inference for competing risks. Handbook of statistics, 23, 291-311.
[11] Andrews, D. F., and Herzberg, A. M. (2012). Data: a collection of problems from many fields for the student and research worker. Springer Science and Business Media.
[12] Park, C. (2005). Parameter estimation of incomplete data in competing risks using the EM algorithm. IEEE Transactions on Reliability, 54(2), 282-290.
[13] Ruth, W. (2024). A review of Monte Carlo-based versions of the EM algorithm. arXiv preprint arXiv:2401.00945.
[14] Lawless, J. F. (2003). Statistical Models and Methods for Lifetime Data. John Wiley and Sons, New York.
[15] Klein, J. P., and Moeschberger, M. L. (2003). Survival analysis: techniques for censored and truncated data (Vol. 1230). New York: Springer.
[16] Guedes, D. G. P., Cunha, E. E., and Lima, G. F. C. (2017). Genetic evaluation of age at first calving from Brown Swiss cows through survival analysis. Archivos de zootecnia, 66(254), 247-255.
[17] Heisey, D. M., and Patterson, B. R. (2006). A review of methods to estimate cause-specific mortality in presence of competing risks. The Journal of Wildlife Management, 70(6), 1544-1555.
[18] Beyersmann, J., Allignol, A., and Schumacher, M. (2011). Competing risks and multistate models with R. Springer Science and Business Media.
[19] Silverman, B. W. (2018). Density estimation for statistics and data analysis. Routledge.
[20] Kaplan, E. L. and Meier, P. (1958). Non-parametric estimation from incomplete observation. Journal of American Statistical Association, 53, 457-481.
Appendix:
The log-likelihood function (7) for the mixture of two Weibull distributions in the presence of competing risks is, for the cause under consideration (suppressing the subscript $j$ on the parameters),
$$l_j = \sum \log\left(\pi_1\alpha\gamma t^{\gamma-1}e^{-\alpha t^{\gamma}} + \pi_2\beta\lambda t^{\lambda-1}e^{-\beta t^{\lambda}}\right) - \sum \log\left(\pi_1 e^{-\alpha t^{\gamma}} + \pi_2 e^{-\beta t^{\lambda}}\right) + \sum \log\left(\pi_1 e^{-\alpha t^{\gamma}} + \pi_2 e^{-\beta t^{\lambda}}\right)$$
where the first two sums run over the failure times due to the cause under consideration and the last sum runs over all observed times. Here we first consider cause 1; that is, $G1$ stands for a failure time due to cause 1 (so $j = 1$) and $x$ stands for an observed time of any subject. Causes 2 and 3 are treated in the same way with $G2$ and $G3$ respectively.
Writing $p1 = \pi_1$ and $p2 = \pi_2$, define the shorthand

ea = e^(−α·G1^γ); eb = e^(−β·G1^λ); eaa = e^(−α·x^γ); ebb = e^(−β·x^λ);
logx2 = log(G1²); x2g = G1^(2γ); x2l = G1^(2λ); xg1 = G1^(γ−1); xl1 = G1^(λ−1)

deno1 = p1·α·γ·xg1·ea + p2·β·λ·xl1·eb
deno2 = p1·ea + p2·eb
deno3 = p1·eaa + p2·ebb
nume1 = p1·γ·xg1·ea·(1 − α·G1^γ)
nume2 = p1·G1^γ·ea
nume3 = p1·x^γ·eaa
numeb1 = p2·λ·xl1·eb·(1 − β·G1^λ)
numeb2 = p2·G1^λ·eb
numeb3 = p2·x^λ·ebb
numeg1 = p1·α·xg1·ea·(γ·log(G1) + 1 − α·γ·G1^γ·log(G1))
numeg2 = p1·α·G1^γ·log(G1)·ea
numeg3 = p1·α·x^γ·log(x)·eaa
numel1 = p2·β·xl1·eb·(λ·log(G1) + 1 − β·λ·G1^λ·log(G1))
numel2 = p2·β·G1^λ·log(G1)·eb
numel3 = p2·β·x^λ·log(x)·ebb
numea1 = p1·α·γ·x2g·ea
numea2 = p1·γ·xg1·ea
numea3 = p1·G1^γ·ea
numea4 = p1·x^γ·eaa
numeb11 = p2·β·λ·x2l·eb
numeb22 = p2·λ·xl1·eb
numeb33 = p2·G1^λ·eb
numeb44 = p2·x^λ·ebb
In the expressions below, sums of the form Σ_{i=1}^{n_j} run over the n_j failure times G1 due to the cause under consideration, and sums of the form Σ_{i=1}^{n} run over all n observed times x. The first-order partial derivatives of l_j are

∂l_j/∂α_j = Σ_{i=1}^{n_j} (nume1/deno1) + Σ_{i=1}^{n_j} (nume2/deno2) − Σ_{i=1}^{n} (nume3/deno3)

∂l_j/∂β_j = Σ_{i=1}^{n_j} (numeb1/deno1) + Σ_{i=1}^{n_j} (numeb2/deno2) − Σ_{i=1}^{n} (numeb3/deno3)

∂l_j/∂γ_j = Σ_{i=1}^{n_j} (numeg1/deno1) + Σ_{i=1}^{n_j} (numeg2/deno2) − Σ_{i=1}^{n} (numeg3/deno3)

∂l_j/∂λ_j = Σ_{i=1}^{n_j} (numel1/deno1) + Σ_{i=1}^{n_j} (numel2/deno2) − Σ_{i=1}^{n} (numel3/deno3)

The second-order partial derivatives follow from the quotient rule, using ∂deno1/∂α = nume1, ∂deno1/∂β = numeb1, ∂deno1/∂γ = numeg1, ∂deno1/∂λ = numel1, ∂deno2/∂α = −nume2, ∂deno2/∂β = −numeb2, ∂deno2/∂γ = −numeg2, ∂deno2/∂λ = −numel2, and analogously for deno3 with eaa, ebb and x in place of ea, eb and G1:

∂²l_j/∂α_j² = Σ_{i=1}^{n_j} [deno1·p1·γ·G1^(2γ−1)·ea·(α·G1^γ − 2) − nume1²]/deno1² + Σ_{i=1}^{n_j} [nume2² − deno2·p1·G1^(2γ)·ea]/deno2² + Σ_{i=1}^{n} [deno3·p1·x^(2γ)·eaa − nume3²]/deno3²

∂²l_j/∂β_j² = Σ_{i=1}^{n_j} [deno1·p2·λ·G1^(2λ−1)·eb·(β·G1^λ − 2) − numeb1²]/deno1² + Σ_{i=1}^{n_j} [numeb2² − deno2·p2·G1^(2λ)·eb]/deno2² + Σ_{i=1}^{n} [deno3·p2·x^(2λ)·ebb − numeb3²]/deno3²

∂²l_j/∂γ_j² = Σ_{i=1}^{n_j} [deno1·Dγ1 − numeg1²]/deno1² + Σ_{i=1}^{n_j} [deno2·Dγ2 + numeg2²]/deno2² − Σ_{i=1}^{n} [deno3·Dγ3 + numeg3²]/deno3²

with
Dγ1 = p1·α·xg1·ea·log(G1)·(2 + γ·log(G1) − 2·α·G1^γ − 3·α·γ·G1^γ·log(G1) + α²·γ·G1^(2γ)·log(G1))
Dγ2 = p1·α·(log(G1))²·G1^γ·ea·(1 − α·G1^γ)
Dγ3 = p1·α·(log(x))²·x^γ·eaa·(1 − α·x^γ)

∂²l_j/∂λ_j² = Σ_{i=1}^{n_j} [deno1·Dλ1 − numel1²]/deno1² + Σ_{i=1}^{n_j} [deno2·Dλ2 + numel2²]/deno2² − Σ_{i=1}^{n} [deno3·Dλ3 + numel3²]/deno3²

with
Dλ1 = p2·β·xl1·eb·log(G1)·(2 + λ·log(G1) − 2·β·G1^λ − 3·β·λ·G1^λ·log(G1) + β²·λ·G1^(2λ)·log(G1))
Dλ2 = p2·β·(log(G1))²·G1^λ·eb·(1 − β·G1^λ)
Dλ3 = p2·β·(log(x))²·x^λ·ebb·(1 − β·x^λ)

For the mixed derivatives involving the parameters of the same mixture component,

∂²l_j/∂α_j∂γ_j = ∂²l_j/∂γ_j∂α_j = Σ_{i=1}^{n_j} [deno1·Dαγ − nume1·numeg1]/deno1² + Σ_{i=1}^{n_j} [deno2·p1·G1^γ·log(G1)·ea·(1 − α·G1^γ) + nume2·numeg2]/deno2² − Σ_{i=1}^{n} [deno3·p1·x^γ·log(x)·eaa·(1 − α·x^γ) + nume3·numeg3]/deno3²

with
Dαγ = p1·xg1·ea·(1 + γ·log(G1) − α·γ·G1^γ·log(G1)) − p1·α·G1^(2γ−1)·ea·(1 + 2·γ·log(G1) − α·γ·G1^γ·log(G1))

∂²l_j/∂β_j∂λ_j = ∂²l_j/∂λ_j∂β_j = Σ_{i=1}^{n_j} [deno1·Dβλ − numeb1·numel1]/deno1² + Σ_{i=1}^{n_j} [deno2·p2·G1^λ·log(G1)·eb·(1 − β·G1^λ) + numeb2·numel2]/deno2² − Σ_{i=1}^{n} [deno3·p2·x^λ·log(x)·ebb·(1 − β·x^λ) + numeb3·numel3]/deno3²

with
Dβλ = p2·xl1·eb·(1 + λ·log(G1) − β·λ·G1^λ·log(G1)) − p2·β·G1^(2λ−1)·eb·(1 + 2·λ·log(G1) − β·λ·G1^λ·log(G1))

The mixed derivatives involving parameters of different components reduce to products of the first-derivative terms:

∂²l_j/∂α_j∂β_j = ∂²l_j/∂β_j∂α_j = −Σ_{i=1}^{n_j} (nume1·numeb1)/deno1² + Σ_{i=1}^{n_j} (nume2·numeb2)/deno2² − Σ_{i=1}^{n} (nume3·numeb3)/deno3²

∂²l_j/∂α_j∂λ_j = ∂²l_j/∂λ_j∂α_j = −Σ_{i=1}^{n_j} (nume1·numel1)/deno1² + Σ_{i=1}^{n_j} (nume2·numel2)/deno2² − Σ_{i=1}^{n} (nume3·numel3)/deno3²

∂²l_j/∂β_j∂γ_j = ∂²l_j/∂γ_j∂β_j = −Σ_{i=1}^{n_j} (numeb1·numeg1)/deno1² + Σ_{i=1}^{n_j} (numeb2·numeg2)/deno2² − Σ_{i=1}^{n} (numeb3·numeg3)/deno3²

∂²l_j/∂γ_j∂λ_j = ∂²l_j/∂λ_j∂γ_j = −Σ_{i=1}^{n_j} (numeg1·numel1)/deno1² + Σ_{i=1}^{n_j} (numeg2·numel2)/deno2² − Σ_{i=1}^{n} (numeg3·numel3)/deno3²