This repository has been archived by the owner on Dec 11, 2020. It is now read-only.

CUDA version mismatch in docker environment #147

Open
xsir317 opened this issue Feb 28, 2019 · 2 comments

Comments


xsir317 commented Feb 28, 2019

After I built the docker image (ran `docker build .`), I found that the installed CUDA is 9.0, but PyTorch reports 10.0.

df_console.py fails with "CUDA driver version is insufficient for CUDA runtime version".

In df_console.py, `print('CUDA version', torch.version.cuda)` prints 10.0.

I checked `conda list`, and it shows `pytorch-nightly 1.0.0.dev20190226 py3.7_cuda10.0.130_cudnn7.4.2_0 pytorch`,

while the CUDA version in the Dockerfile is `ENV CUDA_VERSION 9.0.176`.

So this could be a mismatch. Is this a bug?
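The rule behind that error can be sketched in plain Python (no torch needed): the CUDA version supported by the host driver must be at least the runtime version the framework binary was built against. The function name and version strings below are illustrative, not from the repo:

```python
def driver_supports_runtime(driver_cuda: str, runtime_cuda: str) -> bool:
    """Return True if a driver exposing CUDA `driver_cuda` can run a
    binary built against CUDA `runtime_cuda`, comparing dotted versions
    numerically component by component."""
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(driver_cuda) >= to_tuple(runtime_cuda)

# The situation in this issue: the base image ships CUDA 9.0.176, but the
# pytorch-nightly wheel was built against CUDA 10.0.130 -> insufficient.
print(driver_supports_runtime("9.0.176", "10.0.130"))   # False
print(driver_supports_runtime("10.0.130", "10.0.130"))  # True
```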

@bytesoftly

It is a mismatch, but I found that installing NVIDIA driver 410 (i.e. the one for CUDA 10) on my host machine and running with nvidia-docker works.

@JeremyCollinsMPI

I'm having the same problem; it would be good if the Dockerfile could be fixed. I'm trying to fix it at the moment.
@bytesoftly Could you possibly provide a docker file for your solution?
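One possible shape of such a fix, sketched as a Dockerfile fragment: since the installed pytorch-nightly is a CUDA 10.0 build, the base image's CUDA toolkit should match it. The exact base-image tag below is an assumption (nvidia/cuda publishes several 10.0 variants), not the repo's actual Dockerfile:

```dockerfile
# Assumed fix: switch the base image from a CUDA 9.0 tag to a CUDA 10.0
# one so the toolkit matches the pytorch-nightly cuda10.0.130 build.
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04
```

The host still needs a driver new enough for CUDA 10 (driver 410+, as noted above), since nvidia-docker mounts the host driver into the container.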


3 participants