
Implement support for general (auxiliary) tensor indices #177

Open · wants to merge 35 commits into master

Conversation

@Krzmbrzl (Collaborator) commented Feb 5, 2024

This PR

  • adds a new index group (auxiliary) to the tensor interface. These indices are (for now) assumed never to carry spin (they are ignored e.g. during spin-tracing) and are assumed to be fully non-symmetric.
  • refactors the TensorNetwork implementation to make it (hopefully) a bit more readable and to fix some issues in the old implementation (under certain circumstances the resulting output wasn't canonical). Furthermore, the new implementation makes canonicalization a single-pass step: there is no more need to sandwich a slow canonicalization between two fast ones (though the sum canonicalization was left alone and thus still uses a multi-pass procedure). Finally, the new implementation supports canonicalization of tensors with auxiliary indices.
  • makes some more or less minor changes/refactorings; see the individual commits for summaries.

Additional notes:

  • I tried adapting the Wick implementation's dependence on the graph representation of a tensor network but failed to get things working together. Thus, I have re-introduced the old graph representation as a new WickGraph class that will be used for Wick. At some point we should try wiring the Wick implementation up to the new graph representation, but for now I'm afraid I don't have the time to keep trying.

Fixes #161
Fixes #107

@Krzmbrzl (Collaborator, Author) commented Feb 8, 2024

@evaleev I believe this is ready for review now. Note that this PR has a cross-relation to #180 in that if either of these PRs is merged, the other needs to be updated.

/CC @ak-ustutt

@Krzmbrzl (Collaborator, Author)

Rebased against current master, thereby resolving the cross-dependence on the hash PR.

@Krzmbrzl (Collaborator, Author)

Merged with master (no rebase, due to too many conflicts with #184; I didn't feel like doing a rebase through those).

@evaleev (Member) commented Mar 13, 2024

@Krzmbrzl thanks, this is quite a chunk of work!!! I just had a look at this ... do you recall what the issue was with the Wick engine not meshing with the new network graph? At least, what the symptoms were?

@Krzmbrzl (Collaborator, Author)

> do you recall what the issue was with the Wick engine not meshing with the new network graph? At least, what the symptoms were?

@evaleev At first I ran into various asserts within the Wick implementation because I had not resolved an implicit assumption about the graph representation: for example, that tensor core vertex IDs are usable as indices into the network's tensor vector.
Once I fixed the code triggering the assertion, I still had the issue of producing wrong results. IIRC, for the test cases I mostly saw failures where my patched code produced one or two additional terms.
Finally, when running the srcc example, performance was terribly degraded compared to the unpatched version. Furthermore, during the derivation of the doubles or triples residual (I think it was the doubles), RAM consumption simply kept increasing until my system ran out of memory (32 GB). I believe I somehow created an infinite loop in the recursive Wick call tree.

Note that I did find some bugs in my new canonicalization routine after having re-introduced the old graph for the Wick program, so perhaps (part of) the issue was a buggy graph representation after all.


My biggest problem in all of this was that the Wick implementation and the graph representation are tightly coupled, but only implicitly. That means the Wick implementation accesses what should probably be an implementation detail of the tensor network implementation (the exact graph representation).
If this dependence were instead routed through a level of indirection (e.g. a member function of a graph object, such as vertex_id_to_tensor_index), refactoring would probably be a lot easier, as the parts of the code that need to be adapted would then be strictly limited to the implementations of these indirection functions.

@Krzmbrzl (Collaborator, Author)

@evaleev the new SRCC tests for CSVs are failing. One of them fails due to a failed assertion checking that the detected external indices are unique; this uniqueness is based on Index::LabelCompare, which was carried over from the old implementation. However, in SRCC example 5 I now get two external indices {a_2^{{i_1}}} and {a_2^{{i_2}}} that differ only in their proto index label.
I would tend to replace LabelCompare with FullLabelCompare, but I'm not sure what the expected behavior is in this case, as I have never worked with CSVs. Any insights from your end?

Instead of hardcoding two expected LaTeX representations of a given expression, we now do an exhaustive search over all permutations of summands to see if one of them yields the expected representation. This is acceptable because the order of summands has no practical relevance other than matching a hard-coded order in a test case.

The benefit of doing it this way is reduced maintenance cost, in exchange for a runtime overhead when running the tests. However, the overhead is expected to be small, and for test cases it shouldn't matter anyway.
Krzmbrzl force-pushed the general-tensor-indices branch 3 times, most recently from 5d1d12b to 1b14687 on June 25, 2024.
Successfully merging this pull request may close these issues:

  • Simplify can end up renaming external indices
  • General tensor indices