specific gridsearch
I am copying a pattern from Simon Willison by also having Claude Code write me some notebooks on occasion about algorithm-performance things that I'm curious about. A few years ago, it probably would not have been worth it for me to make this investment, but these days ...
The topic of today's exercise is GridSearchCV from scikit-learn and the ways in which it could be much faster if you design for a specific use-case. Under the hood it uses joblib/pickle to serialise scikit-learn pipelines, and this comes at a cost.
I had Claude write me a notebook to compare a few different approaches for linear models, specifically.
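To set the baseline that the rest of the post compares against, here's roughly what the generic GridSearchCV approach looks like. This is a minimal sketch with synthetic data via make_regression; the grid and cross-validation settings in the actual notebook may differ.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=10_000, n_features=50, noise=0.5, random_state=0)

# The generic approach: every candidate alpha becomes a fresh estimator that
# gets cloned and fit from scratch; with parallel jobs it is also serialised
# via joblib before being dispatched to a worker.
param_grid = {"alpha": np.logspace(-3, 3, 25)}
search = GridSearchCV(Ridge(), param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
```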
Ridge Regression
Ridge is a linear model that comes with a regularisation parameter that you might want to loop over. Here's the thing though: you can pick an optimiser that leverages the fact that Ridge has a closed-form solution.
$$ \mathbf{w} = (\mathbf{X}^T \mathbf{X} + \alpha \mathbf{I})^{-1} \mathbf{X}^T \mathbf{y} $$
You can even write your NumPy code in a clever way so that you don't have to loop over all the $\alpha$-values. You can just figure out all the optimal weights in one swoop. If you compare that to looping over lots of calls to Ridge().fit or, even worse, doing that inside of a GridSearchCV, you're gonna get some overhead.
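As a sketch of that trick (not the notebook's exact code): with a single SVD of $\mathbf{X}$ you can get the ridge weights for every $\alpha$ at once, because the solution only changes through a diagonal rescaling of the singular values.

```python
import numpy as np

def ridge_weights_all_alphas(X, y, alphas):
    """Closed-form ridge weights for every alpha, from a single SVD of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y                                   # shape (k,)
    # w(alpha) = V @ diag(s / (s**2 + alpha)) @ U.T @ y
    d = s[:, None] / (s[:, None] ** 2 + np.asarray(alphas)[None, :])
    return Vt.T @ (d * Uty[:, None])                # shape (n_features, n_alphas)

# Usage: one weight matrix, one column per alpha, no refitting loop.
# weights = ridge_weights_all_alphas(X, y, np.logspace(-3, 3, 25))
```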
Logistic Regression
Logistic regression does not have a closed-form solution, but you can apply another trick. After you've trained your first model, you can use its weights as the starting point for the next model with a slightly different regularisation parameter. This is also a massive speedup! It's easy to do when you write the code "by hand", but it's not something that grid-search can do for you, because it assumes that all parameter settings are fit independently of each other.
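Here's a sketch of what that can look like with scikit-learn itself, using the warm_start flag so that each fit starts from the previous solution. The data and the grid of C values are made up for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)

# Sweep the regularisation strength, reusing the previous coefficients
# as the starting point for the next fit via warm_start=True.
clf = LogisticRegression(warm_start=True, max_iter=1_000)

coefs = []
for C in np.logspace(-3, 3, 13):
    clf.set_params(C=C)
    clf.fit(X, y)              # starts from the previous coef_, not from zero
    coefs.append(clf.coef_.copy())
```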
Pragmatic?
Should you rewrite all your code now? No! Scikit-learn tries to solve the most general version of the problem, so there are bound to be many more instances where you can get better performance for a specific use-case.
Is it an interesting lesson, though? One that's easier to observe thanks to new tools? Sure!
Is this a free lunch? No, not at all. Claude got a lot of the boilerplate right, but it also got some important nuance wrong. It originally made comparisons that didn't just measure the joblib overhead but also performed cross-validation, which isn't a fair comparison. You still need to check the work of the LLM, even if it speeds up the boilerplate and lowers the barrier of entry for these sorts of things.
The notebook for this work can be found here.