Kernel ridge regression is usually solved through its closed-form expression, which involves inverting a matrix that is generally quite large. This matrix inversion can be avoided by instead solving the problem iteratively, using gradient-based optimization methods. Apart from the reduced computational cost, this approach also makes it possible both to use penalties other than the ridge penalty and to change the kernel during training.
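As a rough illustration of the difference, the sketch below fits kernel ridge regression both with the closed-form solution and with plain gradient descent on the same objective. The Gaussian kernel, the toy data, the (K + nλI) scaling convention, and the step-size choice are all assumptions made for this example, not taken from the articles.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

n = len(y)
lam = 1e-2
K = rbf_kernel(X, X)

# Closed form: solve (K + n*lam*I) alpha = y (one common scaling convention).
alpha_closed = np.linalg.solve(K + n * lam * np.eye(n), y)

# Iterative alternative: gradient descent on
#   J(alpha) = 1/(2n) * ||K alpha - y||^2 + lam/2 * alpha^T K alpha,
# whose gradient is K (K alpha - y) / n + lam * K alpha.
# The update itself needs only matrix-vector products; the spectral norm
# below is computed just to pick a safe step size for the sketch.
normK = np.linalg.norm(K, 2)
step = 1.0 / (normK ** 2 / n + lam * normK)
alpha = np.zeros(n)
for _ in range(5000):
    alpha -= step * (K @ (K @ alpha - y) / n + lam * (K @ alpha))

# The fitted values from the two approaches should roughly agree.
print(np.max(np.abs(K @ alpha - K @ alpha_closed)))
```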
Penalties other than ridge are investigated in https://arxiv.org/abs/2306.16838, where we focus mainly on robust kernel regression, obtained by replacing the ridge norm with the infinity norm. Changing the kernel during training is investigated in https://arxiv.org/abs/2311.01762.
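Purely to illustrate how the iterative formulation accommodates non-smooth penalties, the sketch below swaps the ridge penalty for an infinity-norm penalty on the coefficient vector and minimizes the objective with subgradient descent. This is not the algorithm from the article, and reading the infinity norm as the max-norm of the coefficients is an assumption made here for concreteness.

```python
import numpy as np

def subgrad_inf_norm(a):
    # One subgradient of ||a||_inf: the sign at a coordinate attaining the maximum.
    g = np.zeros_like(a)
    i = np.argmax(np.abs(a))
    g[i] = np.sign(a[i])
    return g

def fit_kernel_regression_inf(K, y, lam=1e-2, n_iter=20000):
    # Subgradient descent on J(a) = 1/(2n) ||K a - y||^2 + lam * ||a||_inf.
    n = len(y)
    L = np.linalg.norm(K, 2) ** 2 / n      # Lipschitz constant of the smooth part's gradient
    a = np.zeros(n)
    for t in range(1, n_iter + 1):
        g = K @ (K @ a - y) / n + lam * subgrad_inf_norm(a)
        a -= g / (L * np.sqrt(t))          # diminishing steps, since the penalty is non-smooth
    return a
```

It can be run on the K and y from the previous sketch, e.g. alpha_inf = fit_kernel_regression_inf(K, y).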
Both articles are in their final stages but would benefit from additional experiments, including bootstrap confidence intervals; these experiments would be greatly facilitated by more computational resources.
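To indicate where the computational demand comes from, a bootstrap confidence interval for test error of the kind mentioned above involves refitting the model on many resampled training sets. The sketch below is only illustrative; fit is a hypothetical callable that trains a model and returns its prediction function, and all names and defaults are assumptions.

```python
import numpy as np

def bootstrap_ci_test_error(fit, X, y, X_test, y_test, n_boot=1000, level=0.95, seed=0):
    # Percentile bootstrap CI for test MSE: refit the model on each resampled training set.
    rng = np.random.default_rng(seed)
    n = len(y)
    errs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # sample training indices with replacement
        predict = fit(X[idx], y[idx])         # `fit` (hypothetical) returns a prediction function
        errs[b] = np.mean((predict(X_test) - y_test) ** 2)
    return np.quantile(errs, [(1 - level) / 2, (1 + level) / 2])
```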