Graduated Optimisation to solve the SLISE problem
Source: R/optimisation.R
graduated_optimisation.Rd
Description
Graduated optimisation to solve the SLISE problem.
Usage
graduated_optimisation(
alpha,
X,
Y,
epsilon,
beta = 0,
lambda1 = 0,
lambda2 = 0,
weight = NULL,
beta_max = 20/epsilon^2,
max_approx = 1.15,
max_iterations = 300,
beta_min_increase = beta_max * 5e-04,
debug = FALSE,
...
)
Arguments
- alpha
Initial linear model (if NULL, the ordinary least squares solution is used)
- X
Data matrix
- Y
Response vector
- epsilon
Error tolerance
- beta
Starting sigmoid steepness (default: 0 == convex problem)
- lambda1
L1 coefficient (default: 0)
- lambda2
L2 coefficient (default: 0)
- weight
Weight vector (default: NULL == no weights)
- beta_max
Stopping sigmoid steepness (default: 20 / epsilon^2)
- max_approx
Maximum approximation ratio allowed when selecting the next beta (default: 1.15)
- max_iterations
Maximum number of OWL-QN iterations (default: 300)
- beta_min_increase
Minimum amount to increase beta (default: beta_max * 0.0005)
- debug
Should a debug statement be printed each iteration (default: FALSE)
- ...
Additional parameters to OWL-QN
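Examples

A minimal usage sketch, assuming the slise package is installed and exports `graduated_optimisation()` as documented above. The synthetic data, seed, and parameter values below are illustrative assumptions, not part of the source:

```r
library(slise)  # assumed: provides graduated_optimisation()

# Illustrative synthetic data: 100 items, 3 features
set.seed(42)
X <- matrix(rnorm(100 * 3), nrow = 100, ncol = 3)
Y <- X %*% c(1, -2, 0.5) + rnorm(100, sd = 0.1)

# Start from the convex problem (beta = 0 by default) and let the
# optimiser anneal beta up towards beta_max = 20 / epsilon^2
result <- graduated_optimisation(
  alpha = NULL,   # NULL: initialise with the OLS solution
  X = X,
  Y = Y,
  epsilon = 0.1,  # error tolerance
  lambda1 = 0.01  # small L1 regularisation for sparsity (illustrative)
)
```

With `beta = 0` the initial objective is convex, so the first OWL-QN solve is a reliable starting point; each subsequent step increases the sigmoid steepness by at most the `max_approx` ratio (and at least `beta_min_increase`) until `beta_max` is reached.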