
estimates the DVF from a pair of input images #3

Open · Q-Z-Y opened this issue Jul 12, 2018 · 5 comments

@Q-Z-Y commented Jul 12, 2018

Thanks for sharing the source code, which is super helpful for understanding the method. Based on my understanding, the network takes patches as input. May I ask how to estimate the DVF during testing given a pair of input images, ideally in a single shot?

@hsokooti (Owner)

Thank you for your interest, Qiang-Zhang-Y. At test time, you need to increase the patch size, and the size of the output will increase automatically. In order to avoid any errors in the script, you can set the shapes of the placeholders to None:

x = tf.placeholder(tf.float32, shape=[None, None, None, None, 2], name="x")
xLow = tf.placeholder(tf.float32, shape=[None, None, None, None, 2], name="xLow")
y = tf.placeholder(tf.float32, shape=[None, None, None, None, 3], name="labels")

These lines are already in the script; you can simply uncomment them.

Please note that in this version of the code, you need to upsample the output by a factor of two. In the next version, this problem is solved.
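To illustrate the mechanism (a toy stand-in, not the actual RegNet graph): with a None-shaped placeholder, even a single valid convolution shows that the output size follows whatever input size you feed at test time.

import numpy as np
import tensorflow as tf

# Toy stand-in for the network: one valid 3D convolution behind a
# placeholder with unknown spatial sizes.
x = tf.placeholder(tf.float32, shape=[None, None, None, None, 2], name="x")
out_op = tf.layers.conv3d(x, filters=3, kernel_size=3, padding="valid")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for size in (32, 48):  # feed patches of different sizes into the same graph
        patch = np.zeros((1, size, size, size, 2), np.float32)
        print(size, "->", sess.run(out_op, feed_dict={x: patch}).shape)
        # 32 -> (1, 30, 30, 30, 3), 48 -> (1, 46, 46, 46, 3)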

@nlite8 commented Oct 8, 2018

Thank you for sharing, but I can't figure out how to process the test images to get the registration results. Should it be done voxel by voxel?

@hsokooti (Owner) commented Oct 8, 2018

For example, you can set the size of x to 163 and the size of xLow to 150; the output then enlarges automatically.

@cs123951 commented Dec 14, 2018

Hi @hsokooti, I have the same question. In the paper, you said:

The proposed CNN architecture RegNet takes patches from a pair of 3D images (the fixed image I_F and the moving image I_M) as input.

Then how do I get the registration result for a test image?
I mean, when I have a test image (say the size is 256x256x128 with spacing [1, 1, 1]), should I input the whole image into the network at test time? If I do so, is the output exactly the DVF?

But when I run your code, I found that an input of 155 gives an output of 27, which confused me.

Could you share more details about the network?

Thanks a lot.

@hsokooti (Owner)

Hi @cs123951,

The network design of the current version (0.2alpha) is a bit different from the paper (0.1). In this version, the network design decimation3 takes inputs of 155 and gives an output of 27 during training. At the test phase, you can increase the size of the inputs to get a larger output.

You can tune the input size in the script Function.RegNetModel.decimation3, line 89. For instance, you can increase the input to 255:

r_input = 255

The size of dvf_regnet will be [None, 127, 127, 127, 3].

In your case, the ideal value is r_input = 387, which leads to a dvf_regnet of [None, 259, 259, 259, 3]. However, the limiting factor when increasing the input size is your GPU memory. By trying different values, you can find the maximum size that fits in GPU memory.
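Putting the sizes quoted in this thread together (155 -> 27, 255 -> 127, 387 -> 259), the output side length appears to be r_input - 128, i.e. the network consumes a 64-voxel margin on each side. A tiny sanity check under that assumption (the relation is inferred from this thread, not read from the repo code):

def regnet_output_size(r_input):
    # Assumption inferred from the sizes quoted above, not from the repo code.
    return r_input - 128

for r_input in (155, 255, 387):
    print(r_input, "->", regnet_output_size(r_input))  # 27, 127, 259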

Two minor tricks should be noted:

  • You need to pad the input images in order to get an output with the same size as the original image.
  • Because of the memory limitation, you might need to loop over patches of the image (a rough sketch of both tricks follows below).

The script RegNet3D_FullReg_MultiStage.py does this job. If you use only one stage, then set the stage list to [1]:

stage_list = [1]
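Purely for illustration, a rough NumPy-level sketch of the pad-and-loop idea. It assumes the output side is r_input - 128 as inferred above, and run_patch is a hypothetical wrapper around the session call that maps one cubic two-channel patch of side r_input to its DVF of side r_input - 128; the real implementation is in RegNet3D_FullReg_MultiStage.py.

import numpy as np

MARGIN = 64  # assumed: 128 extra input voxels per axis, 64 on each side

def full_dvf(fixed, moving, run_patch, r_input=155):
    out = r_input - 128                       # assumed output side length
    shape = np.array(fixed.shape)
    n_tiles = -(-shape // out)                # ceil division per axis
    volume = np.stack([fixed, moving], axis=-1)
    pad = [(MARGIN, MARGIN + int(n * out - s)) for n, s in zip(n_tiles, shape)]
    volume = np.pad(volume, pad + [(0, 0)], mode="constant")
    dvf = np.zeros(tuple(int(n) * out for n in n_tiles) + (3,), np.float32)
    for i in range(n_tiles[0]):
        for j in range(n_tiles[1]):
            for k in range(n_tiles[2]):
                lo = np.array([i, j, k]) * out
                patch = volume[lo[0]:lo[0] + r_input,
                               lo[1]:lo[1] + r_input,
                               lo[2]:lo[2] + r_input, :]
                dvf[lo[0]:lo[0] + out,
                    lo[1]:lo[1] + out,
                    lo[2]:lo[2] + out, :] = run_patch(patch)
    return dvf[:shape[0], :shape[1], :shape[2], :]  # crop to the original size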

I hope that this might help.
