Getting Started with likelihood.model

Introduction

The likelihood.model package provides a flexible framework for specifying and using likelihood models for statistical inference in R. It supports:

- A generic likelihood model interface (loglik, score, hess_loglik, fim)
- Maximum likelihood estimation with confidence intervals
- Bootstrap sampling distributions
- Pure likelihood-based (Fisherian) inference
- Likelihood ratio tests

This vignette provides a quick introduction to the main features using the built-in exponential_lifetime model.

For contribution-based models with heterogeneous observation types (exact, censored, etc.), see the companion package likelihood.contr.

Installation

# Install from GitHub
if (!require(devtools)) install.packages("devtools")
devtools::install_github("queelius/likelihood.model")

Loading Packages

library(likelihood.model)

Example 1: Closed-Form MLE with Exponential Lifetime Model

The exponential_lifetime model demonstrates a key design feature: specialized models can override fit() to bypass optim entirely when the MLE has a closed-form solution.

For Exponential(lambda) data, the MLE is simply lambda_hat = d / T, where d is the number of uncensored observations and T is the total observation time.
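As a sanity check, the closed form can be verified directly in base R (this sketch uses no package functions): the value d / T agrees with a numerical maximization of the exponential log-likelihood l(lambda) = d*log(lambda) - lambda*T.

```r
# Direct check of the closed-form MLE in base R (no package functions).
set.seed(1)
x <- rexp(50, rate = 3.0)
d <- length(x)        # number of (uncensored) observations
T_total <- sum(x)     # total observation time
lambda_hat <- d / T_total

# Numerically maximize l(lambda) = d*log(lambda) - lambda*T
loglik <- function(lambda) d * log(lambda) - lambda * T_total
opt <- optimize(loglik, interval = c(0.01, 20), maximum = TRUE, tol = 1e-10)

abs(opt$maximum - lambda_hat)   # near zero
```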

# Generate exponential survival data
set.seed(99)
df_exp <- data.frame(t = rexp(200, rate = 3.0))

# Create model and fit -- no initial guess needed!
model_exp <- exponential_lifetime("t")

# View model assumptions
assumptions(model_exp)
#> [1] "independent"              "identically distributed" 
#> [3] "exponential distribution"

# Fit the model
mle_exp <- fit(model_exp)(df_exp)

# View results
summary(mle_exp)
#> Maximum Likelihood Estimate (Fisherian)
#> ----------------------------------------
#> 
#> Coefficients:
#>        Estimate Std. Error   2.5% 97.5%
#> lambda   3.1372     0.2218 2.7024 3.572
#> 
#> Log-likelihood: 28.66 
#> AIC: -55.33 
#> Number of observations: 200

Extracting Results

# Parameter estimates
cat("Estimated parameters:\n")
#> Estimated parameters:
print(coef(mle_exp))
#> lambda 
#>  3.137

# Confidence intervals (Wald-based)
cat("\n95% Confidence Intervals:\n")
#> 
#> 95% Confidence Intervals:
print(confint(mle_exp))
#>         2.5% 97.5%
#> lambda 2.702 3.572

# Standard errors
cat("\nStandard Errors:\n")
#> 
#> Standard Errors:
print(se(mle_exp))
#> lambda 
#> 0.2218

# Log-likelihood value
cat("\nLog-likelihood:", as.numeric(logLik(mle_exp)), "\n")
#> 
#> Log-likelihood: 28.66

# AIC for model selection
cat("AIC:", AIC(mle_exp), "\n")
#> AIC: -55.33

# The score at the MLE is exactly zero (by construction)
cat("Score at MLE:", score_val(mle_exp), "\n")
#> Score at MLE: 0

Example 2: Right-Censored Exponential Data

The model handles right-censoring naturally through its sufficient statistics: the number of exact observations d and the total time on test T.

# Generate data with right-censoring at time 0.5
set.seed(99)
true_lambda <- 3.0
x <- rexp(200, rate = true_lambda)
censored <- x > 0.5
df_cens <- data.frame(
  t = pmin(x, 0.5),
  status = ifelse(censored, "right", "exact")
)

cat("Censoring rate:", mean(censored) * 100, "%\n")
#> Censoring rate: 20.5 %

model_cens <- exponential_lifetime("t", censor_col = "status")
mle_cens <- fit(model_cens)(df_cens)

cat("MLE (censored):", coef(mle_cens), "(true:", true_lambda, ")\n")
#> MLE (censored): 3.336 (true: 3 )
cat("95% CI:", confint(mle_cens)[1, ], "\n")
#> 95% CI: 2.817 3.854
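The censored-data estimate can be reproduced by hand: d counts the exact observations, T sums all observation times (exact and censored), and the MLE is again d / T. A base-R sketch mirroring the simulation above:

```r
# Recompute the censored-data MLE from the sufficient statistics alone
# (base R only; same simulation settings as above).
set.seed(99)
x <- rexp(200, rate = 3.0)
censored <- x > 0.5
t_obs <- pmin(x, 0.5)             # observed time, capped at the censoring point
status <- ifelse(censored, "right", "exact")

d <- sum(status == "exact")       # number of exact observations
T_total <- sum(t_obs)             # total time on test
lambda_hat <- d / T_total         # should match coef(mle_cens)
lambda_hat
```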

Example 3: Verifying the Score at MLE

At the MLE, the score function (gradient of log-likelihood) should be approximately zero. This provides a useful diagnostic.

model <- exponential_lifetime("t")
df <- data.frame(t = rexp(100, rate = 2.0))

# Fit MLE
mle <- fit(model)(df)

# Evaluate score at MLE
score_func <- score(model)
score_at_mle <- score_func(df, coef(mle))

cat("Score at MLE (should be near zero):\n")
#> Score at MLE (should be near zero):
print(score_at_mle)
#> lambda.lambda 
#>             0

# The score is also stored in the MLE object
cat("\nScore stored in MLE object:\n")
#> 
#> Score stored in MLE object:
print(score_val(mle))
#> lambda 
#>      0

The score is exactly zero at the MLE for exponential_lifetime because the closed-form solution satisfies the score equation by construction.
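The reason is visible in the algebra: for this model the score is U(lambda) = d/lambda - T, and substituting lambda_hat = d/T gives d/(d/T) - T = 0 identically. A one-line base-R check with illustrative numbers:

```r
# Exponential score U(lambda) = d/lambda - T vanishes at lambda_hat = d/T.
score_exp <- function(lambda, d, T_total) d / lambda - T_total
d <- 100; T_total <- 51.3
score_exp(d / T_total, d, T_total)   # zero up to floating-point rounding
```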

Example 4: Fisherian Likelihood Inference

The package supports pure likelihood-based inference in the Fisherian tradition. As an alternative to confidence intervals, we can work with the likelihood directly, via support functions and likelihood intervals that make no probability statements about parameters.

# Fit a model
model <- exponential_lifetime("t")
df <- data.frame(t = rexp(200, rate = 2.0))
mle <- fit(model)(df)

# Support function: log relative likelihood
# S(theta) = logL(theta) - logL(theta_hat)
# Support at MLE is always 0
s_at_mle <- support(mle, coef(mle), df, model)
cat("Support at MLE:", s_at_mle, "\n")
#> Support at MLE: 0

# Support at alternative parameter values (negative = less supported)
s_alt <- support(mle, c(lambda = 1.0), df, model)
cat("Support at lambda=1.0:", s_alt, "\n")
#> Support at lambda=1.0: -36.6

# Relative likelihood = exp(support)
rl <- relative_likelihood(mle, c(lambda = 1.0), df, model)
cat("Relative likelihood at lambda=1.0:", rl, "\n")
#> Relative likelihood at lambda=1.0: 1.269e-16
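Support and relative likelihood are two views of the same quantity, R(theta) = exp(S(theta)). A closed-form check with illustrative exponential sufficient statistics (base R only, not the package functions):

```r
# R(theta) = exp(S(theta)): check with the exponential log-likelihood
# l(lambda) = d*log(lambda) - lambda*T and illustrative statistics.
d <- 200; T_total <- 100
ll <- function(lambda) d * log(lambda) - lambda * T_total
lambda_hat <- d / T_total           # = 2
S <- ll(1) - ll(lambda_hat)         # support at lambda = 1 (negative)
c(support = S, rel_lik = exp(S))    # rel_lik is tiny: lambda = 1 is poorly supported
```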

Likelihood Intervals

# Compute the 1/8 likelihood interval: the set of theta where R(theta) >= 1/8.
# In large samples it is roughly comparable to a 95% confidence interval.
li <- likelihood_interval(mle, df, model, k = 8)
print(li)
#> 1/8 Likelihood Interval (R >= 0.125)
#> -----------------------------------
#>        lower upper
#> lambda  1.69 2.256
#> attr(,"k")
#> [1] 8
#> attr(,"relative_likelihood_cutoff")
#> [1] 0.125

# Compare with Wald confidence interval
cat("\nWald 95% CI:\n")
#> 
#> Wald 95% CI:
print(confint(mle))
#>         2.5% 97.5%
#> lambda 1.688 2.231
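For a one-parameter model, a 1/k likelihood interval can be found generically by root-finding on the log relative likelihood. A base-R sketch with illustrative sufficient statistics (not the package's implementation):

```r
# 1/k likelihood interval via root-finding: solve r(lambda) = 0 where
# r(lambda) = l(lambda) - l(lambda_hat) + log(k), using the exponential
# log-likelihood l(lambda) = d*log(lambda) - lambda*T.
d <- 200; T_total <- 100          # illustrative sufficient statistics
lambda_hat <- d / T_total         # = 2
k <- 8
r <- function(lambda) {
  (d * log(lambda) - lambda * T_total) -
    (d * log(lambda_hat) - lambda_hat * T_total) + log(k)
}
# r > 0 at the MLE and -> -Inf at the extremes, so bracket on each side
lower <- uniroot(r, c(1e-6, lambda_hat), tol = 1e-9)$root
upper <- uniroot(r, c(lambda_hat, 10 * lambda_hat), tol = 1e-9)$root
c(lower = lower, upper = upper)
```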

Example 5: Evidence Function

The evidence function computes the log-likelihood ratio between two parameter values, providing a pure likelihood-based measure of support.

model <- exponential_lifetime("t")
set.seed(42)
df <- data.frame(t = rexp(100, rate = 2.0))

# Evidence for lambda=2 vs lambda=1
ev <- evidence(model, df, c(lambda = 2), c(lambda = 1))
cat("Evidence for lambda=2 vs lambda=1:", ev, "\n")
#> Evidence for lambda=2 vs lambda=1: 13.1

# Positive evidence supports the first hypothesis
if (ev > log(8)) {
  cat("Strong evidence favoring lambda=2\n")
}
#> Strong evidence favoring lambda=2
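Because evidence is just a difference of log-likelihoods, it has a closed form for this model: ev = d*log(lambda_1/lambda_2) - (lambda_1 - lambda_2)*T. A base-R check with illustrative statistics (not the package's evidence() function):

```r
# Evidence as a log-likelihood difference, in closed form for the
# exponential model: ev = d*log(l1/l2) - (l1 - l2)*T.
d <- 100; T_total <- 50
ll <- function(lambda) d * log(lambda) - lambda * T_total
ev <- ll(2) - ll(1)
ev          # = d*log(2) - T_total exactly
ev > log(8) # TRUE: strong evidence for lambda = 2 over lambda = 1
```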

Example 6: Bootstrap Sampling Distribution

For more robust inference, especially with small samples, we can use bootstrap methods to estimate the sampling distribution of the MLE.

model <- exponential_lifetime("t")
set.seed(42)
df <- data.frame(t = rexp(100, rate = 2.0))

# Create bootstrap sampler
boot_sampler <- sampler(model, df = df, par = c(lambda = 2))

# Generate bootstrap samples (100 replicates for speed)
boot_result <- boot_sampler(n = 100)

# Compare asymptotic vs bootstrap standard errors
mle <- fit(model)(df)
asymp_se <- se(mle)
boot_se <- se(boot_result)

cat("Standard Error Comparison:\n")
#> Standard Error Comparison:
cat("           Asymptotic  Bootstrap\n")
#>            Asymptotic  Bootstrap
cat(sprintf("Lambda:    %10.4f  %9.4f\n", asymp_se[1], boot_se[1]))
#> Lambda:        0.1779     0.2485

# Bootstrap bias estimate
cat("\nBootstrap Bias Estimate:\n")
#> 
#> Bootstrap Bias Estimate:
print(bias(boot_result))
#>  lambda 
#> 0.05554
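The idea behind the sampler is the familiar nonparametric bootstrap: resample the observations with replacement, refit, and summarize the replicate estimates. A minimal base-R sketch using the closed-form estimator (a simplified stand-in for the package's sampler, not its implementation):

```r
# Minimal nonparametric bootstrap for the exponential MLE, using only
# base R and the closed-form estimator d / T.
set.seed(42)
x <- rexp(100, rate = 2.0)
mle <- length(x) / sum(x)

B <- 200
boot_mles <- replicate(B, {
  xb <- sample(x, replace = TRUE)   # resample observations with replacement
  length(xb) / sum(xb)              # refit via the closed form
})
c(se = sd(boot_mles), bias = mean(boot_mles) - mle)
```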

Summary of Key Functions

Model Creation

Function                 Description
----------------------   -------------------------------------------------------------
exponential_lifetime()   Exponential model with closed-form MLE and optional censoring

For contribution-based models and standard distribution wrappers, see the likelihood.contr package.

Likelihood Calculus

Function        Description
-------------   -----------------------------
loglik()        Get log-likelihood function
score()         Get score (gradient) function
hess_loglik()   Get Hessian of log-likelihood
fim()           Fisher information matrix

Estimation and Inference

Function    Description
---------   ----------------------------------------------
fit()       Create MLE solver (returns fisher_mle)
sampler()   Create bootstrap sampler (returns fisher_boot)
coef()      Extract parameter estimates
vcov()      Variance-covariance matrix
confint()   Confidence intervals
se()        Standard errors

Fisherian Inference

Function                Description
---------------------   -----------------------------------------------------------------
support()               Log relative likelihood: S(theta) = logL(theta) - logL(theta_hat)
relative_likelihood()   Likelihood ratio: R(theta) = L(theta) / L(theta_hat)
likelihood_interval()   Interval where R(theta) >= 1/k
profile_loglik()        Profile log-likelihood
evidence()              Evidence for theta_1 vs theta_2

Model Comparison

Function     Description
----------   ------------------------------
lrt()        Likelihood ratio test
AIC()        Akaike Information Criterion
BIC()        Bayesian Information Criterion
deviance()   Deviance statistic

Next Steps

For more details, see the package's other vignettes.

Session Info

sessionInfo()
#> R version 4.3.3 (2024-02-29)
#> Platform: x86_64-pc-linux-gnu (64-bit)
#> Running under: Ubuntu 24.04.3 LTS
#> 
#> Matrix products: default
#> BLAS:   /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.0 
#> LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.0
#> 
#> locale:
#>  [1] LC_CTYPE=C.UTF-8       LC_NUMERIC=C           LC_TIME=C.UTF-8       
#>  [4] LC_COLLATE=C.UTF-8     LC_MONETARY=C.UTF-8    LC_MESSAGES=C.UTF-8   
#>  [7] LC_PAPER=C.UTF-8       LC_NAME=C              LC_ADDRESS=C          
#> [10] LC_TELEPHONE=C         LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C   
#> 
#> time zone: America/Chicago
#> tzcode source: system (glibc)
#> 
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#> [1] algebraic.dist_1.0.0   likelihood.model_1.0.0 algebraic.mle_2.0.2   
#> 
#> loaded via a namespace (and not attached):
#>  [1] digest_0.6.39       R6_2.6.1            codetools_0.2-19   
#>  [4] numDeriv_2016.8-1.1 fastmap_1.2.0       xfun_0.54          
#>  [7] cachem_1.1.0        knitr_1.50          htmltools_0.5.8.1  
#> [10] generics_0.1.4      rmarkdown_2.30      lifecycle_1.0.5    
#> [13] mvtnorm_1.3-6       cli_3.6.5           sass_0.4.10        
#> [16] jquerylib_0.1.4     compiler_4.3.3      boot_1.3-32        
#> [19] tools_4.3.3         evaluate_1.0.5      bslib_0.9.0        
#> [22] yaml_2.3.11         rlang_1.1.7         jsonlite_2.0.0     
#> [25] MASS_7.3-60.0.1