benchmark functions #4

Open
caglayantuna opened this issue Jun 20, 2022 · 7 comments

@caglayantuna
Member

We are working on some benchmark functions that use TLViz functions for visualization and report processing time and errors for quantitative analysis. With these functions, users can input their own function to compare it with the methods in TensorLy. We thought that adding these benchmarks to TLViz would be better, since they use some external libraries such as xarray.

You can find these functions here. If you find this interesting, please let me know and I can work on a PR.
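To make the idea concrete, here is a minimal sketch of what such a benchmark helper could look like: it times each method, scores its output, and lets users pass their own callables alongside the built-in ones. All names (`benchmark`, `error_fn`, etc.) are hypothetical illustrations, not the actual code from the linked functions.

```python
import time

def benchmark(methods, data, error_fn):
    """Run each method on ``data``, recording processing time and error.

    ``methods`` maps a display name to a callable that takes the data and
    returns an estimate; ``error_fn`` scores an estimate against the data.
    """
    results = {}
    for name, method in methods.items():
        start = time.perf_counter()
        estimate = method(data)
        elapsed = time.perf_counter() - start
        results[name] = {"time": elapsed, "error": error_fn(data, estimate)}
    return results

# A user can benchmark their own function next to reference methods:
data = [1.0, 2.0, 3.0]
methods = {
    "identity": lambda x: x,                 # perfect "reconstruction"
    "zeros": lambda x: [0.0] * len(x),       # deliberately bad baseline
}
error = lambda truth, est: sum((t - e) ** 2 for t, e in zip(truth, est))
results = benchmark(methods, data, error)
```

The same pattern extends to tensor decompositions: each callable would run a fitting algorithm and the error function would measure reconstruction error.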

@yngvem
Collaborator

yngvem commented Jun 27, 2022

These are great! I think that some benchmarking functionality would be an excellent addition to the TensorLy ecosystem. I see a couple of ways to integrate this code:

  • As a function
  • As an example in the documentation

There are benefits and downsides to both. If we keep it as a function, it may be easier to create general benchmark suites. However, the functions are also quite specialised, and I think these benchmark functions may be something people want to adapt to their exact use case. In that case, maybe a documentation example would fit better? That way, people can also download it as a Python file or a Jupyter notebook. It would also be nice to have an example demonstrating how to convert datasets from TensorLy into xarray DataArrays!
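As a taste of what that xarray example might cover, here is a minimal sketch of wrapping a NumPy tensor (as produced by TensorLy's default backend) in a labelled `xarray.DataArray`; the dimension and coordinate names here are invented purely for illustration:

```python
import numpy as np
import xarray as xr

# A third-order tensor, e.g. the dense array backing a TensorLy tensor:
tensor = np.arange(24.0).reshape(2, 3, 4)

# Wrapping it in a DataArray attaches axis names and coordinate labels,
# which labelled-plotting tools such as TLViz can then make use of:
data = xr.DataArray(
    tensor,
    dims=["subject", "time", "variable"],
    coords={
        "subject": ["A", "B"],
        "time": [0, 1, 2],
        "variable": ["v1", "v2", "v3", "v4"],
    },
)
```

With labels attached, slices can be selected by name (`data.sel(subject="A")`) rather than by positional index, which makes downstream plots self-documenting.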

The next question is where it belongs. TLViz definitely makes sense, especially in the example gallery. If we want to keep it as a function, though, we would need to decide which package it goes in. Currently, we don't have TensorLy as a dependency for TLViz. However, we could add it as an optional dependency for an additional benchmarking module? Another alternative, of course, is to add it as examples to TLViz and as functions to TensorLy-Lab?

@caglayantuna
Member Author

Thanks for your answer and suggestions. When we started thinking about benchmarks with @cohenjer, our goal was to give users room for customisation: people can use their own datasets or functions to make comparisons. An example in TLViz also makes sense; we wouldn't need to add a dependency, right? Besides, it could be more didactic, for instance regarding xarray, as you said. What is your opinion, @cohenjer?

I am not so eager to add something that works with all backends to TensorLy-Lab, since Lab is not very visible.

@yngvem
Collaborator

yngvem commented Jul 3, 2022

Yes, TensorLy is already a dependency for the examples, so that's no problem 😀 I think an example showing how to benchmark CP and Tucker would be very nice. It might also be more useful than a function, since it is difficult to foresee exactly what needs other researchers might have.

@cohenjer

cohenjer commented Jul 4, 2022

Thanks for the feedback @yngvem! I am wondering if it would be possible to have both a formatted example and the function readily available.
There are some use cases where the function seems easier to use:

  • Trying out algorithm speed on various architectures
  • Fast comparison of personal code with existing methods

Having an example makes it transparent what the benchmark is doing and how to modify it, but the non-modifiable version would be great for reproducible comparisons. Having a way to import the benchmark function easily with the package loader seems more convenient to me than an example that one may have to download manually from the website (since finding files in pip-installed packages is not super obvious). When using the benchmark functionality, I would rather not import the whole code from the example every time. The function interface can be made quite flexible as well.

Could we have the benchmark defined in the example as a function and import it? Or could we duplicate the code in a benchmark script and an example?

@yngvem
Collaborator

yngvem commented Jul 9, 2022

I like your idea of having a unified "test bench"-function for reproducible benchmarks in papers! However, I'm wondering how we would best implement such a thing. It would for example be useful to be able to provide more customisation to the fitting algorithms (e.g. line search parameters).

Here are a couple of thoughts:

  • Do we want to have the possibility of appending multiple other methods simultaneously?
  • Should there be an append_kwargs-argument which is unpacked into the append_method function call?
  • Is it possible to make the code more general to seamlessly allow other types of decompositions?
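The `append_kwargs` idea could look roughly like the sketch below: a parallel list of keyword-argument dicts that get unpacked into each appended method call. The function and parameter names are hypothetical, not an agreed-upon interface.

```python
import time

def run_benchmark(builtin_methods, data, append_methods=None, append_kwargs=None):
    """Time built-in methods plus optional user-appended ones.

    ``append_methods`` is a list of extra callables; ``append_kwargs`` is a
    parallel list of keyword-argument dicts unpacked into each appended call,
    so users can tune e.g. line-search parameters without editing the code.
    """
    append_methods = append_methods or []
    append_kwargs = append_kwargs or [{}] * len(append_methods)
    timings = {}
    for method, kwargs in zip(
        list(builtin_methods) + append_methods,
        [{}] * len(builtin_methods) + append_kwargs,
    ):
        start = time.perf_counter()
        method(data, **kwargs)  # kwargs are unpacked into the method call
        timings[method.__name__] = time.perf_counter() - start
    return timings

def baseline(data):
    return sum(data)

def scaled_sum(data, scale=1.0):
    return scale * sum(data)

# Appending a user method together with its customisation kwargs:
timings = run_benchmark(
    [baseline], [1, 2, 3],
    append_methods=[scaled_sum],
    append_kwargs=[{"scale": 2.0}],
)
```

Passing a list of methods (rather than a single one) also answers the first bullet: appending several extra methods simultaneously falls out of the same interface.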

I also think it might be a good idea to hold off on adding such a feature to an installable package until we have a release of TensorLy that has callbacks? Maybe we add it as an example now, and once TensorLy has a callback API we could plan a benchmarking suite like you proposed, @cohenjer?

@caglayantuna
Member Author

Yes, appending more methods and adding an append_kwargs option both make sense. I will edit the functions accordingly.

I would be happy to make the code more general, but component_comparison_plot is not appropriate for, e.g., Tucker decomposition. This function is very nice and useful for our case. Could we make it more general first?

@cohenjer

So after chatting with @caglayantuna, we agreed that having the current benchmark code as an example makes more sense:

  • we lack some tools to make it more general and customisable (callbacks, more visualisations for e.g. Tucker, ...)
  • I want to look into more powerful ways to build benchmarks, e.g. using the benchopt library. In that case, we would separate the benchmark module from TLViz.

We also think TLViz is a better place for this notebook, since it relies heavily on TLViz. We are implementing a few changes as @caglayantuna mentioned, and we will start a PR after that if that works for you @MarieRoald @yngvem!
