
Segmentation fault in distributed-memory version with MPI ranks >= 1K #40

Open
nfabubaker opened this issue Jul 23, 2021 · 1 comment
@nfabubaker
There's a segmentation fault when running the MPI version with 1K processors or more.

Config command:
./configure --with-mpi

Run command:
mpiexec -np 1024 splatt cpd enron.tns -t 1 -r 16

The input tensor can be found here

Some trace info:

in splatt_tt_get_slices (tt=0x5810640, m=0, nunique=0x7ffe7d607c20)
in p_greedy_mat_distribution (rinfo=0x7ffe7d607dd0, tt=0x5810640, perm=0x5a96b80)
splatt/src/mpi/mpi_mat_distribute.c:464 in splatt_mpi_distribute_mats (rinfo=0x7ffe7d607dd0, tt=0x5810640, distribution=SPLATT_DECOMP_MEDIUM)
splatt/src/mpi/mpi_mat_distribute.c:616 in splatt_mpi_cpd_cmd (argc=8, argv=0x7ffe7d6085f0)
splatt/src/cmds/mpi_cmd_cpd.c:219 in main (argc=9, argv=0x7ffe7d6085e8)
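
For what it's worth, one way a slice-extraction step can fault only at high rank counts is when some ranks end up with zero local nonzeros (enron.tns is small relative to 1024 processes). The sketch below is purely hypothetical and not SPLATT code: `get_slices` and the `idx_t` typedef are stand-ins for the real `splatt_tt_get_slices`, and the empty-rank guard reflects an assumption about the failure mode, not a confirmed diagnosis.

```c
/* Hypothetical, self-contained sketch (not SPLATT's implementation):
 * collect the unique mode-m indices from a rank-local nonzero list.
 * At 1K+ ranks a small tensor can leave a rank with nnz == 0; a routine
 * that unconditionally reads ind[0] or writes into a malloc(0) buffer
 * may then segfault.  The guard below returns an empty result instead. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

typedef uint64_t idx_t;

static idx_t * get_slices(idx_t const * const ind, idx_t const nnz,
                          idx_t * const nunique)
{
  if(nnz == 0) {              /* empty local slice: nothing to scan */
    *nunique = 0;
    return NULL;
  }

  idx_t * uniq = malloc(nnz * sizeof(*uniq));
  idx_t count = 0;
  for(idx_t n = 0; n < nnz; ++n) {
    /* O(nnz * count) dedup -- fine for an illustration */
    int seen = 0;
    for(idx_t u = 0; u < count; ++u) {
      if(uniq[u] == ind[n]) { seen = 1; break; }
    }
    if(!seen) {
      uniq[count++] = ind[n];
    }
  }
  *nunique = count;
  return uniq;
}

int main(void)
{
  idx_t nunique = 0;

  /* A rank that received no nonzeros. */
  idx_t * s0 = get_slices(NULL, 0, &nunique);
  printf("empty rank: %llu unique slices\n", (unsigned long long) nunique);
  free(s0);

  /* A rank with a few nonzeros in mode 0. */
  idx_t const ind[] = {4, 4, 7, 2, 7};
  idx_t * s1 = get_slices(ind, 5, &nunique);
  printf("non-empty rank: %llu unique slices\n", (unsigned long long) nunique);
  free(s1);
  return 0;
}
```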
ShadenSmith self-assigned this Jul 26, 2021
@ShadenSmith (Owner)

Thanks for the helpful report! I will look into this.
