Open Issues Need Help
AI Summary: Implement a size-1 LRU cache wrapper to avoid redundant computation of the loss, gradient, and Hessian in `better_optimize`. When an autodiff backend such as PyTensor or JAX compiles a fused objective that returns all three outputs together, caching the result at the last-evaluated point lets separate loss, gradient, and Hessian calls reuse shared sub-computations instead of recomputing them, especially for triple-fused objective functions.
A friendlier front-end to scipy.optimize
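The caching idea above can be sketched as a small wrapper; this is a hypothetical illustration (the class name `CachedObjective` and the fused-function interface are assumptions, not `better_optimize`'s actual API):

```python
class CachedObjective:
    """Size-1 cache over a fused objective that returns (loss, grad) together.

    Hypothetical sketch: repeated calls at the same point reuse the last
    fused evaluation instead of recomputing shared sub-expressions.
    """

    def __init__(self, fused_fn):
        self.fused_fn = fused_fn   # callable: x -> (loss, grad)
        self.last_x = None         # last point evaluated
        self.last_result = None    # cached (loss, grad) at last_x
        self.n_evals = 0           # count of actual fused evaluations

    def _evaluate(self, x):
        x = tuple(x)               # hashable, comparable key
        if x != self.last_x:       # cache miss: recompute and store
            self.last_result = self.fused_fn(x)
            self.last_x = x
            self.n_evals += 1
        return self.last_result

    def loss(self, x):
        return self._evaluate(x)[0]

    def grad(self, x):
        return self._evaluate(x)[1]


def rosen_fused(x):
    # 2-D Rosenbrock loss and gradient computed in one pass
    loss = 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
    grad = [
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ]
    return loss, grad


obj = CachedObjective(rosen_fused)
x0 = [0.0, 0.0]
l = obj.loss(x0)   # fused function evaluated once
g = obj.grad(x0)   # same point: cache hit, no second evaluation
print(obj.n_evals)
```

Because scipy optimizers typically request the function value and its gradient at the same point back-to-back, a cache of size 1 is enough to eliminate the duplicated work.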
AI Summary: Debug and fix a bug in `better_optimize` where the basinhopping algorithm discards the results of its first iteration. This wastes work, especially for computationally expensive objective functions. The fix involves investigating why the first iteration's results are dropped and ensuring they are correctly incorporated.