Image textures backed by cupy array #2391
base: main
Conversation
Numpy arrays have a […]
Yup, I think this is what this example is showcasing, which we need to adapt here. My question was mainly: where/how is this code generated, and can/should we use the same machinery for generating similar code for cupy? EDIT: sorry, just realized it says so in the docstring :P
I have a local copy of that example where I started to try to convert it to vispy gloo. I ran out of time to dive really far into it, but I remember getting stuck at a point where the user-level has a gloo object, but the user needed to tell CuPy the low-level OpenGL identifier (or maybe vice versa). We should have talked about this at the meeting. Shoot.
Next time we can talk about it :P After playing around with this, my feeling is that we really need a relatively high-level "split" for this to work well. Kinda like in the example, we probably should straight up use a different […]
Looks like most references (8x) to […]. These all reference […]. There's also a reference here, where a pointer to the data is obtained instead: Line 559 in 16d8c5b
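For context, the host-pointer path mentioned just above boils down to a few lines. This is a minimal illustration of how numpy hands out a raw address, not the actual vispy code; the variable names here are mine:

```python
import ctypes
import numpy as np

arr = np.arange(12, dtype=np.float32).reshape(3, 4)

# Host pointer the way numpy exposes it: an integer address into CPU memory.
host_ptr = arr.ctypes.data

# The same address is also available via the array interface protocol.
assert host_ptr == arr.__array_interface__["data"][0]

# A GL upload call could then be handed this address, e.g. via ctypes:
buf = (ctypes.c_float * arr.size).from_address(host_ptr)
print(buf[5])  # 5.0 -- reads straight from the array's own memory
```

A cupy array has no `.ctypes` attribute, and the address in its `__cuda_array_interface__` is a *device* address that host-side ctypes cannot dereference, which is why these low-level numpy code paths break when a cupy array reaches them.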
To clarify what I think has been touched on: I don't believe it is supported to form a GL buffer from a CUDA buffer without downloading and uploading the data [EDIT: this could be done as an on-device copy though]. What *is* supported is to form a CUDA buffer from a GL buffer; then the two magically refer to the same data.

Unfortunately, the GL handle is behind the GLIR, which means either doing the CUDA work on the GLIR server, or breaking the normal GLIR boundaries for the special case of local, live execution. If the CUDA operations are performed client-side, then either the GLIR would be probed to access the handle, or the buffer would be made outside of the GLIR and the handle uploaded to load it. That final option seems likely to meet the various use cases most simply, although it's a little messy. It also means the data would already have been uploaded, so there would be no need to access it with ctypes at all.

For CuPy specifically, there are existing examples. Here is where an OpenGL handle is converted to a CUDA resource: https://gist.github.com/keckj/e37d312128eac8c5fca790ce1e7fc437#file-cupy_gl_interop-py-L89 And then imported as a cupy cuda memory pointer: https://gist.github.com/keckj/e37d312128eac8c5fca790ce1e7fc437#file-cupy_gl_interop-py-L117

It seems that when interoperating between OpenGL and CUDA, basically all the CUDA tensors are allocated in OpenGL first, and then exported to CUDA. EDIT: […]
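The "imported as a cupy cuda memory pointer" step in the second gist link boils down to roughly the pattern below. This is a sketch under assumptions: it needs a GPU and a device pointer that was already obtained by registering and mapping the GL buffer through the graphics-interop API (the part the gist does around L89), so it cannot run standalone; `cupy_view_of_gl_buffer` is a hypothetical helper name, not part of any API.

```python
import cupy


def cupy_view_of_gl_buffer(device_ptr, nbytes, shape, dtype, owner=None):
    """Wrap a raw CUDA device pointer (obtained by registering and
    mapping a GL buffer, as in the linked gist) in a cupy.ndarray.

    No data is copied: cupy merely aliases the GL-owned device memory,
    so writes through the returned array are visible to GL (once the
    resource is unmapped again).
    """
    # Tell cupy about memory it does not own and must not free.
    mem = cupy.cuda.UnownedMemory(device_ptr, nbytes, owner)
    memptr = cupy.cuda.MemoryPointer(mem, 0)
    return cupy.ndarray(shape, dtype=dtype, memptr=memptr)
```

The key design point is the one made above: the allocation lives on the GL side, and cupy only gets a borrowed view of it, which is why the GLIR (or something outside it) has to surface the GL handle first.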
This PR is a very rough example of the kind of changes needed to allow cupy arrays to be preserved all the way to the GL level (see #1985, #1986, cupy/cupy#5711, napari/napari#2243).
The changes at the `Texture` and `Image` level are overall quite simple, but things get problematic at lower levels, where numpy internals are used in ways that I don't quite understand yet.

If one runs the new example (which is a copy of the existing image example, but converting the data to a cupy array), the following exception is thrown:

The code where this error happens is apparently autogenerated, but exactly how I'm not sure. Hopefully the `git blame`'d @almarklein can point us in the right direction here :)

cc @haesleinhuepf @jakirkham