Kernel ridge regression is usually solved via its closed-form expression, which requires inverting a typically large matrix. This matrix inversion can be avoided by instead solving the problem iteratively with gradient-based optimization methods. Besides the reduced computational cost, this approach also makes it possible to change the kernel during training, which can lead to both benign overfitting and double descent.
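As a minimal sketch of the idea (not the article's actual method), the snippet below compares the closed-form kernel ridge regression solution with plain gradient descent on the equivalent quadratic objective; the RBF kernel, regularization strength, and step-size choice are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of points.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_closed_form(K, y, lam):
    # Closed form: alpha = (K + lam * I)^{-1} y, i.e. one matrix inversion.
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

def krr_gradient_descent(K, y, lam, n_iter=10_000):
    # Minimizes f(a) = 0.5 a^T (K + lam I) a - y^T a by gradient descent;
    # its unique minimizer is the closed-form KRR solution, so no
    # matrix inversion is needed.
    n = K.shape[0]
    A = K + lam * np.eye(n)
    lr = 1.0 / np.linalg.eigvalsh(A)[-1]  # safe step size: 1 / lambda_max
    a = np.zeros(n)
    for _ in range(n_iter):
        a -= lr * (A @ a - y)  # gradient of f at a
    return a

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
K = rbf_kernel(X, X)
a_cf = krr_closed_form(K, y, lam=0.1)
a_gd = krr_gradient_descent(K, y, lam=0.1)
print(np.allclose(a_cf, a_gd, atol=1e-5))
```

In the kernel-changing setting described above, one would recompute `K` between gradient steps instead of keeping it fixed throughout.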
This is investigated in https://arxiv.org/abs/2311.01762. The article is in its final stages but would benefit from additional experiments, including bootstrap confidence intervals, which additional computational resources would greatly facilitate.