
slisemap.tuning

Find optimal hyper-parameters for Slisemap and Slipmap.

hyperparameter_tune(method, X, y, X_test, y_test, lasso=(0.001, 10.0), ridge=(0.0001, 1.0), radius=(1.5, 4.0), *args, model=True, n_calls=15, verbose=False, random_state=42, predict_kws={}, optim_kws={}, gp_kws={}, **kwargs)

Tune the lasso, ridge, and radius hyperparameters using Bayesian optimisation.

The search space is configured through the lasso/ridge/radius arguments as follows:
  • float: skip tuning that hyperparameter (the given value is used as-is).
  • tuple: tune the hyperparameter within the range (lower bound, upper bound).

This function selects a candidate set of hyperparameters using skopt.gp_minimize. For a given set of hyperparameters, a Slisemap/Slipmap model is trained on X and y. Then the solution is evaluated using X_test and y_test. This procedure is repeated for n_calls iterations before the best result is returned.
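A minimal usage sketch (requires scikit-optimize; the data and the lasso/ridge/radius values below are purely illustrative): fixing radius to a float skips tuning it, and model=False returns the tuned values instead of a trained model.

import numpy as np
from slisemap import Slisemap
from slisemap.tuning import hyperparameter_tune

rng = np.random.default_rng(0)                            # hypothetical toy data
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
X_test, y_test = rng.normal(size=(50, 5)), rng.normal(size=50)

# Tune lasso and ridge while keeping radius fixed (a float skips tuning).
sm = hyperparameter_tune(
    Slisemap, X, y, X_test, y_test,
    lasso=(0.001, 10.0), ridge=(0.0001, 1.0), radius=3.5,
    n_calls=15, random_state=42,
)

# With model=False a dictionary of tuned values ("lasso", "ridge", "radius") is returned instead.
params = hyperparameter_tune(Slisemap, X, y, X_test, y_test, model=False)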

Parameters:

  method (Union[Type[Slisemap], Type[Slipmap]]): Method to tune, either Slisemap or Slipmap. Required.
  X (ToTensor): Data matrix. Required.
  y (ToTensor): Target matrix. Required.
  X_test (ToTensor): New data for evaluation. Required.
  y_test (ToTensor): New targets for evaluation. Required.
  lasso (Union[float, Tuple[float, float]]): Limits for the lasso parameter. Defaults to (0.001, 10.0).
  ridge (Union[float, Tuple[float, float]]): Limits for the ridge parameter. Defaults to (0.0001, 1.0).
  radius (Union[float, Tuple[float, float]]): Limits for the radius parameter. Defaults to (1.5, 4.0).
  *args (Any): Arguments forwarded to method. Defaults to ().

Other Parameters:

  model (bool): Return a trained model instead of a dictionary with tuned parameters. Defaults to True.
  n_calls (int): Number of parameter evaluations. Defaults to 15.
  verbose (bool): Print status messages. Defaults to False.
  random_state (int): Random seed. Defaults to 42.
  predict_kws (Dict[str, object]): Keyword arguments forwarded to sm.predict.
  optim_kws (Dict[str, object]): Keyword arguments forwarded to sm.optimise.
  gp_kws (Dict[str, object]): Keyword arguments forwarded to skopt.gp_minimize.
  **kwargs (Any): Keyword arguments forwarded to method.

Raises:

  ImportError: If scikit-optimize is not installed.

Returns:

  Union[Slisemap, Slipmap, Dict[str, float]]: Dictionary with hyperparameter values or a Slisemap/Slipmap model trained on those (see the model argument).

Source code in slisemap/tuning.py
def hyperparameter_tune(
    method: Union[Type[Slisemap], Type[Slipmap]],
    X: ToTensor,
    y: ToTensor,
    X_test: ToTensor,
    y_test: ToTensor,
    lasso: Union[float, Tuple[float, float]] = (0.001, 10.0),
    ridge: Union[float, Tuple[float, float]] = (0.0001, 1.0),
    radius: Union[float, Tuple[float, float]] = (1.5, 4.0),
    *args: Any,
    model: bool = True,
    n_calls: int = 15,
    verbose: bool = False,
    random_state: int = 42,
    predict_kws: Dict[str, object] = {},
    optim_kws: Dict[str, object] = {},
    gp_kws: Dict[str, object] = {},
    **kwargs: Any,
) -> Union[Slisemap, Slipmap, Dict[str, float]]:
    """Tune the `lasso`, `ridge`, and `radius` hyperparameters using Bayesian optimisation.

    The search space is configured through the `lasso`/`ridge`/`radius` arguments as follows:
        - float: Skip the tuning of that hyperparameter.
        - tuple: tune the parameters limited to the space of `(lowerbound, upperbound)`.

    This function selects a candidate set of hyperparameters using `skopt.gp_minimize`.
    For a given set of hyperparameters, a Slisemap/Slipmap model is trained on `X` and `y`.
    Then the solution is evaluated using `X_test` and `y_test`.
    This procedure is repeated for `n_calls` iterations before the best result is returned.

    Args:
        method: Method to tune, either `Slisemap` or `Slipmap`.
        X: Data matrix.
        y: Target matrix.
        X_test: New data for evaluation.
        y_test: New data for evaluation.
        lasso: Limits for the `lasso` parameter. Defaults to (0.001, 10.0).
        ridge: Limits for the `ridge` parameter. Defaults to (0.0001, 1.0).
        radius: Limits for the `radius` parameter. Defaults to (1.5, 4.0).
        *args: Arguments forwarded to `method`.

    Keyword Args:
        model: Return a trained model instead of a dictionary with tuned parameters. Defaults to True.
        n_calls: Number of parameter evaluations. Defaults to 15.
        verbose: Print status messages. Defaults to False.
        random_state: Random seed. Defaults to 42.
        predict_kws: Keyword arguments forwarded to `sm.predict`.
        optim_kws: Keyword arguments forwarded to `sm.optimise`.
        gp_kws: Keyword arguments forwarded to `skopt.gp_minimize`.
        **kwargs: Keyword arguments forwarded to `method`.

    Raises:
        ImportError: If `scikit-optimize` is not installed.

    Returns:
        Dictionary with hyperparameter values or a Slisemap/Slipmap model trained on those (see the `model` argument).
    """
    space = []
    params = {}

    def make_space(grid, name, prior):  # noqa: ANN001, ANN202
        if isinstance(grid, (float, int)):
            params[name] = grid
        else:
            _assert(
                len(grid) == 2,
                f"Wrong size `len({name}) = {len(grid)} != 2`",
                hyperparameter_tune,
            )
            space.append(skopt.space.Real(*grid, prior=prior, name=name))
            params[name] = (grid[0] * grid[1]) ** 0.5

    make_space(lasso, "lasso", "log-uniform")
    make_space(ridge, "ridge", "log-uniform")
    make_space(radius, "radius", "uniform")
    if len(space) == 0:
        _warn("No hyperparameters to tune", hyperparameter_tune)
        if model:
            sm = method(X, y, radius=radius, lasso=lasso, ridge=ridge, *args, **kwargs)  # noqa: B026
            sm.optimise(**optim_kws)
            return sm
        else:
            return params

    if model:
        best_loss = np.inf
        best_sm = None

    @skopt.utils.use_named_args(space)
    @lru_cache
    def objective(
        lasso: float = params["lasso"],
        ridge: float = params["ridge"],
        radius: float = params["radius"],
    ) -> float:
        sm = method(X, y, radius=radius, lasso=lasso, ridge=ridge, *args, **kwargs)  # noqa: B026
        sm.optimise(**optim_kws)
        Xt = sm._as_new_X(X_test)
        Yt = sm._as_new_Y(y_test, Xt.shape[0])
        P = sm.predict(Xt, **predict_kws, numpy=False)
        loss = sm.local_loss(Yt, P).mean().cpu().item()
        if verbose:
            print(
                f"Loss with { {'lasso': lasso, 'ridge': ridge, 'radius': radius} }: {loss}"
            )
        if model:
            nonlocal best_loss, best_sm
            if loss < best_loss:
                best_sm = sm
                best_loss = loss
        del sm
        return loss

    res = skopt.gp_minimize(
        objective,
        space,
        n_initial_points=min(10, max(3, (n_calls - 1) // 3 + 1)),
        n_calls=n_calls,
        random_state=random_state,
        **gp_kws,
    )
    for s, v in zip(space, res.x):
        params[s.name] = v
    if verbose:
        print("Final parameter values:", params)

    if model:
        return best_sm
    else:
        return params

optimise_with_test(sm, X_test, y_test, lasso_grid=3.0, ridge_grid=3.0, radius_grid=1.1, search_size=6, test=accuracy, patience=2, max_escapes=100, verbose=0, escape_kws={}, *, max_iterations=None, **kwargs)

Optimise a Slisemap or Slipmap object using test data to tune the regularisation.

How this works
  • The procedure is very similar to Slisemap.optimise, which alternates between LBFGS optimisation and an "escape" heuristic until convergence.
  • The hyperparameter tuning adds an additional step after each call to LBFGS where a small local search is performed to tune the hyperparameters.
  • The convergence criterion is also changed to use the test data (see the test parameter).
  • This should be faster than the usual "outer-loop" hyperparameter optimisation, but the local search dynamics might be less exhaustive (see the usage sketch below).
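A minimal usage sketch (hypothetical toy data; the Slisemap constructor arguments and grid values are only illustrative):

import numpy as np
from slisemap import Slisemap
from slisemap.tuning import optimise_with_test

rng = np.random.default_rng(0)                            # hypothetical toy data
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
X_test, y_test = rng.normal(size=(50, 5)), rng.normal(size=50)

sm = Slisemap(X, y, lasso=0.01, radius=3.5)
# The returned object is not the same object as the input, so rebind it.
sm = optimise_with_test(sm, X_test, y_test, lasso_grid=3.0, ridge_grid=3.0, search_size=6)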

Parameters:

  sm (Union[Slisemap, Slipmap]): Slisemap or Slipmap object. Required.
  X_test (Union[ndarray, Tensor]): Data matrix for the test set. Required.
  y_test (Union[ndarray, Tensor]): Target matrix/vector for the test set. Required.
  lasso_grid (float): The extent of the local search for the lasso parameter (lasso/lasso_grid, lasso*lasso_grid). Set to zero to disable the hyperparameter search. Defaults to 3.0.
  ridge_grid (float): The extent of the local search for the ridge parameter (ridge/ridge_grid, ridge*ridge_grid). Set to zero to disable the hyperparameter search. Defaults to 3.0.
  radius_grid (float): The extent of the local search for the radius parameter (radius/radius_grid, radius*radius_grid). Set to zero to disable the hyperparameter search. Defaults to 1.1.
  search_size (int): The number of evaluations in the local random search. Defaults to 6.
  test (Callable[[Slisemap, Tensor, Tensor], float]): Test used to measure the performance of different hyperparameter values. Defaults to accuracy.
  patience (int): Number of optimisation rounds without improvement before stopping. Defaults to 2.
  max_escapes (int): Maximum number of optimisation rounds. Defaults to 100.
  verbose (Literal[0, 1, 2, 3]): Print status messages. Defaults to 0.
  escape_kws (Dict[str, Any]): Keyword arguments forwarded to sm.escape. Defaults to {}.

Other Parameters:

  **kwargs (Any): Optional keyword arguments to sm.lbfgs.

Returns:

  Union[Slisemap, Slipmap]: Optimised Slisemap or Slipmap object. This is not the same object as the input!

Deprecated

1.6: max_iterations renamed to max_escapes

Source code in slisemap/tuning.py
def optimise_with_test(  # noqa: D417
    sm: Union[Slisemap, Slipmap],
    X_test: Union[np.ndarray, torch.Tensor],
    y_test: Union[np.ndarray, torch.Tensor],
    lasso_grid: float = 3.0,
    ridge_grid: float = 3.0,
    radius_grid: float = 1.1,
    search_size: int = 6,
    test: Callable[[Slisemap, torch.Tensor, torch.Tensor], float] = accuracy,
    patience: int = 2,
    max_escapes: int = 100,
    verbose: Literal[0, 1, 2, 3] = 0,
    escape_kws: Dict[str, Any] = {},
    *,
    max_iterations: Optional[int] = None,
    **kwargs: Any,
) -> Union[Slisemap, Slipmap]:
    """Optimise a Slisemap or Slipmap object using test data to tune the regularisation.

    How this works:
        - The procedure is very similar to [Slisemap.optimise][slisemap.slisemap.Slisemap.optimise], which alternates between [LBFGS][slisemap.slisemap.Slisemap.lbfgs] optimisation and an ["escape" heuristic][slisemap.slisemap.Slisemap.escape] until convergence.
        - The hyperparameter tuning adds an additional step after each call to [LBFGS][slisemap.slisemap.Slisemap.lbfgs] where a small local search is performed to tune the hyperparameters.
        - The convergence criterion is also changed to use the test data (see the `test` parameter).
        - This should be faster than the usual "outer-loop" hyperparameter optimisation, but the local search dynamics might be less exhaustive.

    Args:
        sm: Slisemap or Slipmap object.
        X_test: Data matrix for the test set.
        y_test: Target matrix/vector for the test set.
        lasso_grid: The extent of the local search for the lasso parameter `(lasso/lasso_grid, lasso*lasso_grid)`. Set to zero to disable the hyperparameter search. Defaults to 3.0.
        ridge_grid: The extent of the local search for the ridge parameter `(ridge/ridge_grid, ridge*ridge_grid)`. Set to zero to disable the hyperparameter search. Defaults to 3.0.
        radius_grid: The extent of the local search for the radius parameter `(radius/radius_grid, radius*radius_grid)`. Set to zero to disable the hyperparameter search. Defaults to 1.1.
        search_size: The number of evaluations in the local random search. Defaults to 6.
        test: Test to measure the performance of different hyperparameter values. Defaults to [accuracy][slisemap.metrics.accuracy].
        patience: Number of optimisation rounds without improvement before stopping. Defaults to 2.
        max_escapes: Maximum number of optimisation rounds. Defaults to 100.
        verbose: Print status messages. Defaults to 0.
        escape_kws: Keyword arguments forwarded to [sm.escape][slisemap.slisemap.Slisemap.escape]. Defaults to {}.

    Keyword Args:
        **kwargs: Optional keyword arguments to [sm.lbfgs][slisemap.slisemap.Slisemap.lbfgs].

    Returns:
        Optimised Slisemap or Slipmap object. This is not the same object as the input!

    Deprecated:
        1.6: `max_iterations` renamed to `max_escapes`
    """
    if max_iterations is not None:
        _deprecated(
            optimise_with_cv.max_iterations,
            optimise_with_cv.max_escapes,
        )
        max_escapes = max_iterations
    kwargs, hs_kws = _hyper_init(
        optimise_with_test,
        sm=sm,
        kwargs=kwargs,
        lasso_grid=lasso_grid,
        ridge_grid=ridge_grid,
        radius_grid=radius_grid,
        search_size=search_size,
    )
    if hs_kws is None:
        kwargs["increase_tolerance"] = False
        sm.optimise(verbose=verbose, escape_kws=escape_kws, **kwargs)
        return sm

    X_test = sm._as_new_X(X_test)
    y_test = sm._as_new_Y(y_test, X_test.shape[0])
    if verbose:
        _hyper_verbose(optimise_with_test, sm, 0, test(sm, X_test, y_test))

    # Initial optimisation with: _hyper_select -> escape -> lbfgs
    sm.lbfgs(only_B=True, verbose=verbose > 2, **kwargs)
    sm, ev = _hyper_tune(sm, X_test, y_test, test, **hs_kws, **kwargs)
    cc = CheckConvergence(patience, max_escapes)
    while not cc.has_converged(ev, sm.copy, verbose=verbose > 1):
        sm.escape(**escape_kws)
        sm.lbfgs(verbose=verbose > 2, **kwargs)
        sm, ev = _hyper_tune(sm, X_test, y_test, test, **hs_kws, **kwargs)
        if verbose:
            _hyper_verbose(optimise_with_test, sm, cc.iter, ev)

    # Secondary optimisation with: lbfgs -> _hyper_select
    sm, ev = cc.optimal, cc.best
    kwargs["increase_tolerance"] = False
    cc.patience = min(patience, 1)
    cc.counter = 0.0
    while not cc.has_converged(ev, sm.copy, verbose=verbose > 1):
        sm.lbfgs(verbose=verbose > 2, **kwargs)
        sm, ev = _hyper_tune(sm, X_test, y_test, test, **hs_kws, **kwargs)
        if verbose:
            _hyper_verbose(optimise_with_test, sm, cc.iter, ev)

    return cc.optimal

optimise_with_cv(sm, k=5, lasso_grid=3.0, ridge_grid=3.0, radius_grid=1.1, search_size=6, lerp=0.3, test=accuracy, patience=2, max_escapes=100, verbose=0, escape_kws={}, *, max_iterations=None, **kwargs)

Optimise a Slisemap or Slipmap object using cross validation to tune the regularisation.

How this works
  • The data is split into k folds for cross validation.
  • Then a procedure like optimise_with_test is used.
  • After every hyperparameter tuning the regularisation coefficients are smoothed across the folds (see the lerp parameter).
  • Finally, when the cross validation has converged the solution is transferred to the complete data for one final optimisation.
  • Note that this is significantly slower than just training a single Slisemap solution.
  • However, this should be faster than the usual "outer-loop" hyperparameter optimisation (but the local search dynamics might be less exhaustive). A usage sketch follows below.
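A minimal usage sketch (hypothetical toy data; no separate test split is needed since the folds are created internally, and the constructor arguments are only illustrative):

import numpy as np
from slisemap import Slisemap
from slisemap.tuning import optimise_with_cv

rng = np.random.default_rng(0)                            # hypothetical toy data
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)

sm = Slisemap(X, y, lasso=0.01, radius=3.5)
# 5-fold cross validation with a small local search around the current regularisation.
sm = optimise_with_cv(sm, k=5, lasso_grid=3.0, ridge_grid=3.0, search_size=6)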

Parameters:

  sm (Union[Slisemap, Slipmap]): Slisemap or Slipmap object. Required.
  k (int): Number of folds for the cross validation. Defaults to 5.
  lasso_grid (float): The extent of the local search for the lasso parameter (lasso/lasso_grid, lasso*lasso_grid). Set to zero to disable the hyperparameter search. Defaults to 3.0.
  ridge_grid (float): The extent of the local search for the ridge parameter (ridge/ridge_grid, ridge*ridge_grid). Set to zero to disable the hyperparameter search. Defaults to 3.0.
  radius_grid (float): The extent of the local search for the radius parameter (radius/radius_grid, radius*radius_grid). Set to zero to disable the hyperparameter search. Defaults to 1.1.
  search_size (int): The number of evaluations in the local random search. Defaults to 6.
  lerp (float): Smooth the regularisation coefficients across folds (linearly interpolating towards the mean coefficients). Defaults to 0.3.
  test (Callable[[Slisemap, Tensor, Tensor], float]): Test used to measure the performance of different hyperparameter values. Defaults to accuracy.
  patience (int): Number of optimisation rounds without improvement before stopping. Defaults to 2.
  max_escapes (int): Maximum number of optimisation rounds. Defaults to 100.
  verbose (Literal[0, 1, 2, 3]): Print status messages. Defaults to 0.
  escape_kws (Dict[str, Any]): Keyword arguments forwarded to sm.escape. Defaults to {}.

Other Parameters:

  **kwargs (Any): Optional keyword arguments to sm.lbfgs.

Returns:

  Union[Slisemap, Slipmap]: Optimised Slisemap or Slipmap object.

Deprecated

1.6: max_iterations renamed to max_escapes

Source code in slisemap/tuning.py
def optimise_with_cv(  # noqa: D417
    sm: Union[Slisemap, Slipmap],
    k: int = 5,
    lasso_grid: float = 3.0,
    ridge_grid: float = 3.0,
    radius_grid: float = 1.1,
    search_size: int = 6,
    lerp: float = 0.3,
    test: Callable[[Slisemap, torch.Tensor, torch.Tensor], float] = accuracy,
    patience: int = 2,
    max_escapes: int = 100,
    verbose: Literal[0, 1, 2, 3] = 0,
    escape_kws: Dict[str, Any] = {},
    *,
    max_iterations: Optional[int] = None,
    **kwargs: Any,
) -> Union[Slisemap, Slipmap]:
    """Optimise a Slisemap or Slipmap object using cross validation to tune the regularisation.

    How this works:
        - The data is split into k folds for cross validation.
        - Then a procedure like [optimise_with_test][slisemap.tuning.optimise_with_test] is used.
        - After every hyperparameter tuning the regularisation coefficients are smoothed across the folds (see the `lerp` parameter).
        - Finally, when the cross validation has converged the solution is transferred to the complete data for one final optimisation.
        - Note that this is significantly slower than just training a single Slisemap solution.
        - However, this should be faster than the usual "outer-loop" hyperparameter optimisation (but the local search dynamics might be less exhaustive).

    Args:
        sm: Slisemap or Slipmap object.
        k: Number of folds for the cross validation. Defaults to 5.
        lasso_grid: The extent of the local search for the lasso parameter `(lasso/lasso_grid, lasso*lasso_grid)`. Set to zero to disable the hyperparameter search. Defaults to 3.0.
        ridge_grid: The extent of the local search for the ridge parameter `(ridge/ridge_grid, ridge*ridge_grid)`. Set to zero to disable the hyperparameter search. Defaults to 3.0.
        radius_grid: The extent of the local search for the radius parameter `(radius/radius_grid, radius*radius_grid)`. Set to zero to disable the hyperparameter search. Defaults to 1.1.
        search_size: The number of evaluations in the local random search. Defaults to 6.
        lerp: Smooth regularisation coefficients across folds (linearly interpolating towards the mean coefficients). Defaults to 0.3.
        test: Test to measure the performance of different hyperparameter values. Defaults to [accuracy][slisemap.metrics.accuracy].
        patience: Number of optimisation rounds without improvement before stopping. Defaults to 2.
        max_escapes: Maximum number of optimisation rounds. Defaults to 100.
        verbose: Print status messages. Defaults to 0.
        escape_kws: Keyword arguments forwarded to [sm.escape][slisemap.slisemap.Slisemap.escape]. Defaults to {}.

    Keyword Args:
        **kwargs: Optional keyword arguments to [sm.lbfgs][slisemap.slisemap.Slisemap.lbfgs].

    Returns:
        Optimised Slisemap or Slipmap object.

    Deprecated:
        1.6: `max_iterations` renamed to `max_escapes`
    """
    if max_iterations is not None:
        _deprecated(
            optimise_with_cv.max_iterations,
            optimise_with_cv.max_escapes,
        )
        max_escapes = max_iterations
    kwargs, hs_kws = _hyper_init(
        optimise_with_cv,
        sm=sm,
        kwargs=kwargs,
        lasso_grid=lasso_grid,
        ridge_grid=ridge_grid,
        radius_grid=radius_grid,
        search_size=search_size,
    )
    if hs_kws is None:
        kwargs["increase_tolerance"] = False
        sm.optimise(verbose=verbose, escape_kws=escape_kws, **kwargs)
        return sm

    # Create k folds
    fold_size = (sm.n - 1) // k + 1
    folds = torch.tile(torch.arange(k, **sm.tensorargs), (fold_size,))[: sm.n]
    sms = []
    tests = []
    for i in range(k):
        X_test = sm._X[folds == i, ...]
        y_test = sm._Y[folds == i, ...]
        tests.append((X_test, y_test))
        sm2 = sm.copy()
        sm2._X = sm._X[folds != i, ...].clone()
        sm2._Y = sm._Y[folds != i, ...].clone()
        if isinstance(sm, Slisemap):
            sm2._B = sm._B[folds != i, ...].clone()
        sm2._Z = sm._Z[folds != i, ...].clone()
        sm2.lbfgs(only_B=True, verbose=verbose > 2, **kwargs)
        sms.append(sm2)

    # Helper functions
    def hyper() -> List[float]:
        nonlocal sms
        losses = []
        for i, (X_test, y_test) in enumerate(tests):
            sms[i], loss = _hyper_tune(sms[i], X_test, y_test, test, **hs_kws, **kwargs)
            losses.append(loss)
        return [np.mean(losses), *losses]

    def optim() -> List[float]:
        lasso = np.mean([sm2.lasso for sm2 in sms])
        ridge = np.mean([sm2.ridge for sm2 in sms])
        radius = np.mean([sm2.radius for sm2 in sms])
        for sm2 in sms:
            sm2.lasso = sm2.lasso * (1 - lerp) + lasso * lerp
            sm2.ridge = sm2.ridge * (1 - lerp) + ridge * lerp
            sm2.radius = sm2.radius * (1 - lerp) + radius * lerp
            sm2.escape(**escape_kws)
            sm2.lbfgs(verbose=verbose > 2, **kwargs)
        return hyper()

    # Optimise the cross validation folds with hyperparameter tuning
    cc = CheckConvergence(patience, max_escapes)
    loss = hyper()
    if verbose:
        _hyper_verbose(optimise_with_cv, sms, 0, loss[1:])
    while not cc.has_converged(loss, lambda: [sm2.copy() for sm2 in sms], verbose > 1):
        loss = optim()
        if verbose:
            _hyper_verbose(optimise_with_cv, sms, cc.iter, loss[1:])

    # Apply the tuned parameters on the complete model
    loss = [test(sm2, sm._X, sm._Y) for sm2 in cc.optimal]
    opt = np.argmin(loss)
    sm._Z[folds != opt, ...] = cc.optimal[opt]._Z
    if isinstance(sm, Slisemap):
        sm._B[folds != opt, ...] = cc.optimal[opt]._B
    sm.lasso = np.mean([sm2.lasso for sm2 in cc.optimal])
    sm.ridge = np.mean([sm2.ridge for sm2 in cc.optimal])
    sm.radius = np.mean([sm2.radius for sm2 in cc.optimal])

    # Optimise the complete model
    kwargs["increase_tolerance"] = False
    sm.lbfgs(verbose=verbose > 2, **kwargs)
    if verbose:
        _hyper_verbose(optimise_with_cv, sm, "Final", None)

    return sm

optimise(sm, X_test=None, y_test=None, **kwargs)

Optimise a Slisemap or Slipmap object with hyperparameter tuning.

This can be done either using a test set or using cross validation. The choice of method is based on whether X_test and y_test are given.
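A minimal sketch of the dispatch (reusing the sm, X_test, and y_test objects from the sketches above; note that this wrapper is deprecated in favour of calling the underlying functions directly):

from slisemap.tuning import optimise

sm_cv = optimise(sm)                      # no test data: falls back to optimise_with_cv(sm)
sm_ts = optimise(sm, X_test, y_test)      # test data given: uses optimise_with_test(sm, X_test, y_test)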

Parameters:

  sm (Union[Slisemap, Slipmap]): Slisemap or Slipmap object. Required.
  X_test (Union[None, ndarray, Tensor]): Data matrix for the test set. Defaults to None.
  y_test (Union[None, ndarray, Tensor]): Target matrix/vector for the test set. Defaults to None.

Other Parameters:

  **kwargs (Any): Optional keyword arguments to optimise_with_test or optimise_with_cv.

Returns:

  Union[Slisemap, Slipmap]: Optimised Slisemap or Slipmap object. This is not the same object as the input!

Deprecated

1.6: Use the underlying function directly instead

Source code in slisemap/tuning.py
def optimise(
    sm: Union[Slisemap, Slipmap],
    X_test: Union[None, np.ndarray, torch.Tensor] = None,
    y_test: Union[None, np.ndarray, torch.Tensor] = None,
    **kwargs: Any,
) -> Union[Slisemap, Slipmap]:
    """Optimise a Slisemap or Slipmap object with hyperparameter tuning.

    This can either be done using a [test set][slisemap.tuning.optimise_with_test] or [cross validation][slisemap.tuning.optimise_with_cv].
    The choice of method is based on whether `X_test` and `y_test` are given.

    Args:
        sm: Slisemap or Slipmap object.
        X_test: Data matrix for the test set. Defaults to None.
        y_test: Target matrix/vector for the test set. Defaults to None.

    Keyword Args:
        **kwargs: Optional keyword arguments to [slisemap.tuning.optimise_with_test][] or [slisemap.tuning.optimise_with_cv][].

    Returns:
        Optimised Slisemap or Slipmap object. This is not the same object as the input!

    Deprecated:
        1.6: Use the underlying function directly instead
    """
    if X_test is None or y_test is None:
        _deprecated(optimise, optimise_with_cv)
        return optimise_with_cv(sm, **kwargs)
    else:
        _deprecated(optimise, optimise_with_test)
        return optimise_with_test(sm, X_test, y_test, **kwargs)