This repository has been archived by the owner on Jul 29, 2023. It is now read-only.

Inference image shape and pixel values #177

Open
JohannaRahm opened this issue Nov 1, 2022 · 4 comments
Labels
bug (Something isn't working), inference (using and sharing the models)

Comments

@JohannaRahm
Contributor

JohannaRahm commented Nov 1, 2022

When applying inference to images of shape 2562x2562 px², they are cropped to 2048x2048 px². Only the inferred image is saved, which makes it impossible to further compare ground truth and input images to the inferred images, as the exact cropping area is unknown.

Furthermore, the inferred image does not have the same dynamic range as the ground truth image. In the inference figure, both target and prediction have pixel values ranging up to 33K. However, the ground truth image only has pixel values up to 280. The inferred image is stored with values up to 33K.

Both scenarios make it hard to further compare the ground truth and inferred images outside of microDL. Could we think of a strategy to solve this?

Pixel values of target image: [screenshot]

Inference figure showing different pixel values for target image: [screenshot]

Commit 151cc25 on the master branch

@Christianfoley
Contributor

Christianfoley commented Nov 2, 2022

Hi @JohannaRahm, have you noticed the cropping issue happening in both 2D and 2.5D models, or have you only tried 2D model inference?

I cannot find anywhere in the inference pipeline where we hard-code a 2048 pixel limit. In your configuration files, have you changed the "dataset->height" and "dataset->width" parameters to 2562?

@JohannaRahm
Contributor Author

JohannaRahm commented Nov 8, 2022

I created an example with our models to make it easier to find the error. The inference data contains 3 FOVs with sizes 2048x2048, 2000x2000, and 1000x1000. They are cropped to 2048x2048, 1024x1024, and 512x512, respectively.

Here are the paths:

  • inference yml file /hpc/projects/comp_micro/projects/virtualstaining/2022_microDL_nuc_mem/configfiles/tests/input_size/config_inference_2022_03_15_nuc_mem_heavy_augmentation_test_input_size.yml
  • inference data /hpc/projects/comp_micro/projects/HEK/2022_04_19_nuc_mem_LF_63x_04NA/all_pos_single_page/test_different_input_size/
  • inference results /hpc/projects/comp_micro/projects/virtualstaining/2022_microDL_nuc_mem/models/2022_03_15_nuc_mem/loss_functions/heavy_augmentation_z25-60_mae/pred_input_size/
  • model dir /hpc/projects/comp_micro/projects/virtualstaining/2022_microDL_nuc_mem/models/2022_03_15_nuc_mem/loss_functions/heavy_augmentation_z25-60_mae/
  • model yml file /hpc/projects/comp_micro/projects/virtualstaining/2022_microDL_nuc_mem/configfiles/input_nuc_mem/loss_functions/config_train_2022_03_15_nuc_mem_heavy_augmentation_z25-60_mae.yml
  • preprocess directory /hpc/projects/comp_micro/projects/virtualstaining/2022_microDL_nuc_mem/preprocess/2022_03_15_nuc_mem_z25-60/
  • preprocess yml file /hpc/projects/comp_micro/projects/virtualstaining/2022_microDL_nuc_mem/preprocess/config_preprocess_2022_03_15_nuc_mem_z25-60.yml

The width and height of the inference data are not specified, and in the scenario posted above the sizes of the images in the inference data differ slightly, which makes specifying them impossible. Looking at the sizes of the inferred images, they seem to be cropped to something divisible by the tile size (256x256). Is specifying the inference size a must, and if yes, why and where? The only width and height defined in the yml files are the tile sizes.

I have only tried 2D model inference.

In this test the inference code from PR #155 is used, but the test above showed that this unexpected behavior also occurs at commit 151cc25 on the master branch.
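Editor's note: the observed output sizes (2562→2048 above, and 2048→2048, 2000→1024, 1000→512 here) are all consistent with a center crop to the largest power of two that fits in each input dimension, not merely the largest multiple of the tile size (the largest multiple of 256 in 2000 would be 1792, yet the output is 1024). A minimal sketch of that hypothesis; `center_crop_pow2` is a hypothetical helper, not the actual microDL code:

```python
import numpy as np

def center_crop_pow2(img):
    """Center-crop to the largest power of two <= each spatial dimension.

    Hypothetical reconstruction of the cropping behavior observed in
    microDL inference; the real pipeline may differ.
    """
    h = 1 << (img.shape[0].bit_length() - 1)  # largest power of 2 <= height
    w = 1 << (img.shape[1].bit_length() - 1)  # largest power of 2 <= width
    top = (img.shape[0] - h) // 2
    left = (img.shape[1] - w) // 2
    return img[top:top + h, left:left + w]

for size in (2562, 2048, 2000, 1000):
    cropped = center_crop_pow2(np.zeros((size, size)))
    print(size, "->", cropped.shape)
# 2562 -> (2048, 2048)
# 2048 -> (2048, 2048)
# 2000 -> (1024, 1024)
# 1000 -> (512, 512)
```

If this is what the pipeline does, the crop offset (`top`, `left`) would be enough to align the inferred images with the ground truth afterwards.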

@JohannaRahm
Contributor Author

Update: the pixel values are correctly presented in microDL. Fiji shows two versions of the pixel values for these images; see the screenshot with example value 83 (32851), where 32851 is the value stored in the pixel, which is correctly captured by microDL.
[screenshot]
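Editor's note: a possible explanation for the two values, assuming the images are written as signed 16-bit TIFFs: Fiji stores signed 16-bit data shifted into the unsigned range and subtracts a calibration offset of 32768 (2^15) for display, so the displayed 83 and the stored 32851 differ by exactly that offset. A minimal sketch:

```python
import numpy as np

# Assumption: the image is a signed 16-bit TIFF. Fiji shifts such data
# into the unsigned range on disk and subtracts 32768 (2**15) when
# displaying, which is why it reports both values as "83 (32851)".
raw = np.uint16(32851)          # value stored in the pixel (read by microDL)
displayed = int(raw) - 2**15    # value Fiji shows after calibration
print(displayed)  # 83
```

Tools that read the raw pixel data (such as microDL or numpy) will see 32851, while Fiji's status bar shows the calibrated 83.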

@Soorya19Pradeep
Contributor

I have the same issue as @JohannaRahm with the inference images produced by microDL. The 2012x2012 input images (images resized during x-y registration) used for microDL inference produce 1024x1024 output images. The central 1024x1024 pixels of the image are chosen to run the inference.

@mattersoflight mattersoflight added the inference using and sharing the models label Dec 8, 2022
@ziw-liu ziw-liu added the bug Something isn't working label Mar 20, 2023