Installing Correlation package #2

Open
AliKafaei opened this issue Mar 22, 2021 · 9 comments


@AliKafaei

Hi,
I wanted to install the correlation package using CUDA 9 and PyTorch 0.4, but I ran into this error:
"119 | #error -- unsupported GNU version! gcc versions later than 6 are not supported!"
So I tried installing an older version of gcc, but I couldn't (the OS didn't allow installing an old version).
My Ubuntu version is 20 (Kubuntu).
Regards,
Ali

@ltkong218
Owner

In my experience, the provided correlation package only supports PyTorch 0.4.1. I installed it under Ubuntu 16.04 with gcc 5.4.0. You can try searching for another implementation of the correlation layer to replace it.

@AliKafaei
Author

Thanks for your response. Since Ubuntu 18, gcc 5.4 is no longer supported (I have managed to run gcc 5 before, but it was very challenging). If the correlation package supported newer versions of gcc, that would improve the reproducibility of the work.

@ltkong218
Owner

Please refer to the Pytorch Correlation module. It supports newer versions of PyTorch, such as 1.2 and later.

@pacifinapacific

Looking at the link below, I was able to successfully run pip install spatial-correlation-sampler.
How do I get your program to work with it?
https://github.com/ClementPinard/Pytorch-Correlation-extension

@ltkong218
Owner

ltkong218 commented Apr 22, 2021

import torch
from spatial_correlation_sampler import SpatialCorrelationSampler

# example feature maps of shape (batch, channels, height, width)
input1 = torch.randn(2, 32, 48, 64).cuda()
input2 = torch.randn(2, 32, 48, 64).cuda()

# define a correlation module (kernel_size=1, patch_size=9, stride=1, padding=0, dilation=1)
correlation_sampler = SpatialCorrelationSampler(1, 9, 1, 0, 1)

output = correlation_sampler(input1, input2)

# reshape output to be a 3D cost volume
b, c, h, w = input1.shape
output = output.view(b, -1, h, w) / c

@ChuanchuanZheng

Hi. I replaced self.corr in the original code with the SpatialCorrelationSampler initialization above, but I get this error: RuntimeError: input1 must be contiguous. Could you tell me how to replace the original corr code to get it to work?

@ltkong218
Owner

ltkong218 commented Jun 4, 2021

import torch
from spatial_correlation_sampler import SpatialCorrelationSampler

input1 = torch.randn(2, 32, 48, 64).cuda()
input2 = torch.randn(2, 32, 48, 64).cuda()

# define a correlation module
correlation_sampler = SpatialCorrelationSampler(1, 9, 1, 0, 1)

output = correlation_sampler(input1, input2)

# reshape output to be a 3D cost volume
b, c, h, w = input1.shape
output = output.view(b, -1, h, w) / c

print(output.shape)

I ran the above code and it works fine. So please check whether your input tensors are contiguous, or call .contiguous() on them to make them contiguous.
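
For reference, below is a minimal sketch of such a drop-in wrapper, assuming the spatial-correlation-sampler package linked above is installed; the class name CorrTorch and the default patch size of 9 are illustrative assumptions, not part of this repository.

import torch
import torch.nn as nn
from spatial_correlation_sampler import SpatialCorrelationSampler

class CorrTorch(nn.Module):
    # hypothetical replacement for the original correlation layer
    def __init__(self, patch_size=9):
        super().__init__()
        # kernel_size=1, patch_size, stride=1, padding=0, dilation=1
        self.sampler = SpatialCorrelationSampler(1, patch_size, 1, 0, 1)

    def forward(self, input1, input2):
        # the sampler requires contiguous inputs
        input1 = input1.contiguous()
        input2 = input2.contiguous()
        out = self.sampler(input1, input2)  # (b, patch_size, patch_size, h, w)
        b, c, h, w = input1.shape
        # flatten the displacement dimensions into channels and normalize by channel count
        return out.view(b, -1, h, w) / c

if __name__ == '__main__':
    x1 = torch.randn(2, 32, 48, 64).cuda()
    x2 = torch.randn(2, 32, 48, 64).cuda()
    print(CorrTorch()(x1, x2).shape)  # expected: torch.Size([2, 81, 48, 64])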

@fransiskusyoga

fransiskusyoga commented Oct 19, 2022

I don't know whether this is correct or not, but here is what I did.

1. In models/correlation_package/setup.py, remove ", extra_compile_args={'cxx': cxx_args, 'nvcc': nvcc_args}".

2. In /models/correlation_package/correlation_cuda.cc, change "at::globalContext().getCurrentCUDAStream()" to "at::cuda::getCurrentCUDAStream()" and add "#include <ATen/cuda/CUDAContext.h>".

3. Edit /models/correlation_package/correlation.py as follows:

import torch
from torch.nn.modules.module import Module
from torch.autograd import Function
import correlation_cuda

class CorrelationFunction(Function):

    @staticmethod
    def forward(ctx, input1, input2, pad_size=3, kernel_size=3, max_displacement=20, stride1=1, stride2=2, corr_multiply=1,):
        ctx.save_for_backward(input1, input2)
        ctx.corr_params = (pad_size, kernel_size, max_displacement, stride1, stride2, corr_multiply)
        #out_channel = ((max_displacement/stride2)*2 + 1) * ((max_displacement/stride2)*2 + 1)

        with torch.cuda.device_of(input1):
            rbot1 = input1.new()
            rbot2 = input2.new()
            output = input1.new()

            correlation_cuda.forward(input1, input2, rbot1, rbot2, output, 
                pad_size, kernel_size, max_displacement, stride1, stride2, corr_multiply)

        return output
    
    @staticmethod
    def backward(ctx, grad_output):
        input1, input2 = ctx.saved_tensors
        pad_size, kernel_size, max_displacement, stride1, stride2, corr_multiply = ctx.corr_params

        with torch.cuda.device_of(input1):
            rbot1 = input1.new()
            rbot2 = input2.new()

            grad_input1 = input1.new()
            grad_input2 = input2.new()

            correlation_cuda.backward(input1, input2, rbot1, rbot2, grad_output, grad_input1, grad_input2,
                pad_size, kernel_size, max_displacement, stride1, stride2, corr_multiply)

        # forward() received eight inputs after ctx, so backward() must return eight values;
        # the non-tensor correlation parameters get None as their gradient
        return grad_input1, grad_input2, None, None, None, None, None, None


class Correlation(Module):
    def __init__(self, pad_size=0, kernel_size=0, max_displacement=0, stride1=1, stride2=2, corr_multiply=1):
        super(Correlation, self).__init__()
        self.pad_size = pad_size
        self.kernel_size = kernel_size
        self.max_displacement = max_displacement
        self.stride1 = stride1
        self.stride2 = stride2
        self.corr_multiply = corr_multiply

    def forward(self, input1, input2):

        result = CorrelationFunction.apply(input1, input2, self.pad_size, self.kernel_size, self.max_displacement, self.stride1, self.stride2, self.corr_multiply)

        return result

After those edits I can run the benchmark.

@ltkong218
Owner

I suggest that you set fixed input tensors and compare the outputs of the different implementations to check it.
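
As a rough sketch of that comparison, assuming both the original correlation_cuda extension and spatial-correlation-sampler are built and importable: the Correlation parameters below (pad_size=4, kernel_size=1, max_displacement=4, stride1=1, stride2=1) are only an assumed mapping to a 9x9 search window, and the import path just follows the files mentioned above, so both should be double-checked against this repository.

import torch
from spatial_correlation_sampler import SpatialCorrelationSampler
from models.correlation_package.correlation import Correlation  # original CUDA extension (path assumed)

torch.manual_seed(0)
input1 = torch.randn(2, 32, 48, 64).cuda()
input2 = torch.randn(2, 32, 48, 64).cuda()

# original layer with parameters assumed to give a 9x9 displacement window
corr_old = Correlation(pad_size=4, kernel_size=1, max_displacement=4,
                       stride1=1, stride2=1, corr_multiply=1).cuda()

# spatial-correlation-sampler counterpart: kernel_size=1, patch_size=9, stride=1, padding=0, dilation=1
corr_new = SpatialCorrelationSampler(1, 9, 1, 0, 1)

b, c, h, w = input1.shape
out_old = corr_old(input1, input2)
out_new = corr_new(input1.contiguous(), input2.contiguous()).view(b, -1, h, w) / c

# shapes should match, and a small maximum difference suggests the replacement is equivalent
print(out_old.shape, out_new.shape)
print('max abs diff:', (out_old - out_new).abs().max().item())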
