Testing Suite for PGF/TikZ? #666

Open
ilayn opened this issue Apr 24, 2019 · 10 comments

ilayn commented Apr 24, 2019

Hi, this is percusse-on-TeX-SX speaking.

First, welcome, and thanks for switching over to GitHub. I have some PRs from old times (some are a couple of years old). I'd like to see if they are still useful, and I'd be really happy to get rid of them, but I can't see any CI jobs running anywhere.

Are there any plans for CI? I can help set it up if needed, or maybe contribute some tests. To give an example, we could at least start with a job that generates the manual.

best,

@josephwright josephwright self-assigned this Apr 24, 2019
josephwright commented

I'm going to look at extending l3build so it can cover pgf, at least for testing. It's largely a question of dealing with the larger set of install locations; we probably need an install map or similar.
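
For concreteness, a hypothetical build.lua along these lines might look as follows; the file patterns and TDS paths are illustrative guesses, not pgf's actual layout:

```lua
-- Hypothetical build.lua sketch; patterns and TDS paths are made up.
module = "pgf"

sourcefiles  = {"*.code.tex", "*.sty"}
installfiles = {"*.code.tex", "*.sty"}

-- l3build's tdslocations maps file patterns to explicit TDS
-- destinations, which is what a package spanning tex/generic and
-- tex/latex needs instead of the single default install location.
tdslocations = {
  "tex/generic/pgf/*.code.tex",
  "tex/latex/pgf/*.sty",
}

checkengines = {"pdftex", "xetex", "luatex"}
```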

ilayn commented Apr 24, 2019

Over at SciPy we are also dealing with a manual of about 2500 pages. I've seen that you have been playing around with GitHub Pages. I'm quite positive about having an online manual alongside the PDF version. Sphinx can be used to define the rulesets for generating the manual.

Actually, an online source would make things much, much easier for everyone in 2019. Out of roughly 50k non-unique visits per week, a negligible number of users download the PDF.

@hmenke hmenke pinned this issue Apr 24, 2019
hmenke commented Apr 24, 2019

Thank you for offering your support, Ilhan. There is already some continuous integration running on Travis CI; see .travis.yml and the scripts in the ci folder.

Currently the “tests” are limited to building the manual with various engines, apart from tex4ht (because of #651) and VTeX (because I don't even know how to get it; are there even users?).

On top of the tests there is continuous deployment for every commit to master. If Travis CI is running on the upstream repo (i.e. here), it deploys the dvisvgm docs to https://pgf-tikz.github.io/. For every repo (i.e. also forks), Travis CI checks for the existence of a gh-pages branch and automatically deploys a tlcontrib repository to it, which can be added to an existing TeX Live installation via tlmgr, so people can use the latest development version.

In the future, with Joseph's help, I hope to get regression testing working. To this end I want to write (and have already partly written) a parser for the manual that extracts all codeexample environments that do not have the code only option (see also #640). These can then be fed to l3build to compare the log and the output of \showlists.
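
A rough sketch of how that extraction step could look (this is not the actual parser; the pattern is simplified and assumes every environment carries an option block whose options contain no closing bracket):

```lua
-- Sketch: pull compilable codeexample bodies out of a manual source file.
local function extract_codeexamples(path)
  local f = assert(io.open(path, "r"))
  local text = f:read("*a")
  f:close()
  local examples = {}
  -- Lua's lazy `.-` matches the shortest span between the delimiters.
  for options, body in text:gmatch(
      "\\begin{codeexample}%[(.-)%]%s*(.-)\\end{codeexample}") do
    -- Skip examples marked `code only`; they are not meant to compile.
    if not options:find("code only", 1, true) then
      examples[#examples + 1] = body
    end
  end
  return examples
end
```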

ilayn commented Apr 24, 2019

I must be blind. I don't know how I missed the CI part. Sorry about that.

The manual parsing can be left to doc generation tools such as Sphinx. This is basically a solved problem now (making it work is a different story, of course).

You define what your rubric structure looks like, and the tool parses the blocks for you. I know the TeX world doesn't want to deal with other pieces of software, but Python can parse these blocks quite quickly; it can even do it once and then re-check only the changed parts, etc. I think this part of the development is too modern for anything related to TeX. Something we would like to have is parsing pieces in parallel, which increases the compile speed on Travis. But that's a decision for the devs to make, so just a passing suggestion.

But yes, the vtex/this-tex/that-tex obscurity problem is something I always tend to forget about in the TeX world. So anyway, if I can be of use, please ping me and let me know.

Here is a description of an admittedly simpler setup: http://www.sphinx-doc.org/en/master/latex.html

josephwright commented

I think the major challenge here, as for LaTeX itself, is getting tests written. I have the ones I've written for l3draw, which broadly cover the base layer of pgf (at least as far as operations that don't need PDF resources go). The API is almost identical: we could probably reuse most of those with some simple search-and-replace. Any use?

josephwright commented

@hmenke Perhaps I should try making the change in l3draw to see if it actually works in enough cases? I've got the freedom to do that ...

ilayn commented Jan 11, 2020

@josephwright What is the assertion syntax for the tests? Writing small TeX documents, running them one by one, and erroring out if desired \neq actual?

hmenke commented Jan 12, 2020

@ilayn It's not as easy as just checking whether the code compiles; you have to check whether the resulting image still looks the same. For that you could diff the output of \showlists, but that bars you from making any low-level changes that don't actually affect the visual representation of the picture (like merely reordering instructions in the PDF). That is not an easy problem, and even trickier is identifying meaningful test cases. Of course, we could just extract all the codeexample environments from the manual, but that would compile forever while wasting time and energy, and if nobody runs the suite, its whole point is moot. Also, TeX doesn't have access to metrics like code coverage or stack traces.
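
To make the log-diffing idea concrete, a toy runner might look like the sketch below (hypothetical; l3build's .tlg machinery normalizes logs far more carefully than this):

```lua
-- Toy sketch: compile a test file, then compare its log against a stored
-- reference, dropping lines that vary between runs (banners, paths).
local function log_matches(name)
  os.execute("pdftex -interaction=batchmode " .. name .. ".tex")
  local function slurp(file)
    local f = assert(io.open(file, "r"))
    local s = f:read("*a")
    f:close()
    return s
  end
  -- Crude normalization: keep only lines without slashes (paths) and
  -- without the engine banner; a real runner needs a better filter.
  local function normalize(s)
    local kept = {}
    for line in s:gmatch("[^\n]+") do
      if not line:match("^This is") and not line:find("/", 1, true) then
        kept[#kept + 1] = line
      end
    end
    return table.concat(kept, "\n")
  end
  return normalize(slurp(name .. ".log")) == normalize(slurp(name .. ".ref"))
end
```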

ilayn commented Jan 12, 2020

There are many options on that front, including comparing instructions at the system layer. At some point you have to trust the upstream mechanism, which doesn't change very often. For image comparison you can do what matplotlib or any other plotting library does; see https://matplotlib.org/3.1.1/devel/testing.html#writing-an-image-comparison-test
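
In that spirit, a minimal sketch of such a check (tool names and tolerances are illustrative; it shells out to poppler's pdftoppm and ImageMagick's compare):

```lua
-- Sketch of an image comparison test: rasterize the compiled PDF, then
-- count the pixels that differ from a stored baseline image.
local function images_match(pdf, baseline_png, max_diff_pixels)
  os.execute("pdftoppm -png -r 150 " .. pdf .. " current")
  -- `compare -metric AE` writes the differing-pixel count to stderr,
  -- hence the 2>&1 redirect; -fuzz absorbs tiny antialiasing changes.
  local p = io.popen("compare -metric AE -fuzz 2% current-1.png "
                     .. baseline_png .. " null: 2>&1")
  local diff = tonumber(p:read("*a")) or math.huge
  p:close()
  return diff <= max_diff_pixels
end
```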

But comparing at the system layer seems pretty robust to me. Also, pgfkeys and pgfmath can be tested pretty thoroughly just by comparing results and testing macros. The most boring part of testing is writing the tests, but once a suite is in place it catches most issues before they happen. We can ask the community to help us write the tests.

I think code coverage is not really that useful, since it is a dummy metric and the tools are not that reliable. We have already given up paying attention to it over at SciPy.

Note that the testing tools don't need to be written in TeX; only the compilation of the test snippets requires TeX.

DemiMarie commented

@hmenke If requiring LuaTeX is an option, one approach would be to do as much as possible in Lua and have the TeX macros be thin wrappers around Lua function calls. The Lua code could then be tested normally.
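
As a sketch of that split (the module name and function are made up): the logic lives in an ordinary Lua module that a plain texlua run can test without typesetting anything, while the TeX side only forwards arguments.

```lua
-- Hypothetical module pgfveclen.lua: pure logic, no TeX dependency.
local M = {}

function M.veclen(x, y)
  return math.sqrt(x * x + y * y)
end

return M

-- The TeX wrapper stays thin, e.g.
--   \def\pgfveclen#1#2{\directlua{tex.print(require("pgfveclen").veclen(#1,#2))}}
-- while a plain Lua script tests the logic directly:
--   local v = require("pgfveclen")
--   assert(math.abs(v.veclen(3, 4) - 5) < 1e-12)
```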
