Alpha_mode in the SurfaceConfiguration does not work. #5661

Open
tovernaar123 opened this issue May 4, 2024 · 4 comments
Labels
external: driver-bug A driver is causing the bug, though we may still want to work around it

Comments

@tovernaar123

I want to start by saying that I am very new to wgpu and to anything GPU related for that matter, so I'm sorry if my issue is something very obvious.

Description
I wanted to make a transparent window with wgpu + winit, but when setting the alpha_mode for my surface to anything other than Opaque it errored with: "Requested alpha mode x is not in the list of supported alpha modes: [Opaque]". And when using Opaque the window is opaque, which up to this point all makes sense (although I don't know why my GPU does not support other alpha modes). But when I switch to my low-power integrated GPU I still cannot use any other alpha modes, yet when choosing Opaque the window is drawn transparent (and not opaque).
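
For reference, a minimal sketch (using the surface and adapter that are set up in the repro below) of printing what the surface actually reports as supported:

// Query the surface capabilities and print the advertised alpha modes;
// the error above indicates this list only contains [Opaque] on my NVIDIA GPU.
let surface_caps = surface.get_capabilities(&adapter);
println!("supported alpha modes: {:?}", surface_caps.alpha_modes);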

Repro steps
(mostly a modified version of https://sotrh.github.io/learn-wgpu/)

use std::iter;

use wgpu::InstanceFlags;
use winit::{
    event::*,
    event_loop::EventLoop,
    window::{Window, WindowBuilder},
};

struct State<'a> {
    surface: wgpu::Surface<'a>,
    device: wgpu::Device,
    queue: wgpu::Queue,
    config: wgpu::SurfaceConfiguration,
    size: winit::dpi::PhysicalSize<u32>,
    window: &'a Window,
}
impl<'a> State<'a> {
    async fn new(window: &'a Window) -> State<'a> {
        let size = window.inner_size();
        println!("{:#?}", InstanceFlags::empty());
        let instance = wgpu::Instance::new(wgpu::InstanceDescriptor {
            backends: wgpu::Backends::default(),
            flags: InstanceFlags::empty(),
            ..Default::default()
        });
    
        let surface = instance.create_surface(window).unwrap();

        let adapter = instance
            .request_adapter(&wgpu::RequestAdapterOptions {
                // Changing this to LowPower makes it work the way I want it to
                // (although not what I expected to happen), even with Opaque
                power_preference: wgpu::PowerPreference::HighPerformance,
                compatible_surface: Some(&surface),
                force_fallback_adapter: false,
            })
            .await
            .unwrap();
        let (device, queue) = adapter
            .request_device(
                &wgpu::DeviceDescriptor {
                    label: None,
                    required_features: wgpu::Features::empty(),
                    required_limits: wgpu::Limits::default()
                },
                None,
            )
            .await
            .unwrap();
        // Check whether changing power_preference actually did anything
        println!("{}", adapter.get_info().name);
        let surface_caps = surface.get_capabilities(&adapter);
        let surface_format: wgpu::TextureFormat = surface_caps
            .formats
            .iter()
            .copied()
            .find(|f| f.is_srgb())
            .unwrap_or(surface_caps.formats[0]);
        println!("{:?}", surface_format);
        
        let config = wgpu::SurfaceConfiguration {
            usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
            format: surface_format,
            width: size.width,
            height: size.height,
            present_mode: surface_caps.present_modes[0],
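            // Changing this to anything other than Opaque fails validation on both GPUs (see description)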
            alpha_mode: wgpu::CompositeAlphaMode::Opaque,
            desired_maximum_frame_latency: 2,
            view_formats: vec![],
        };

        Self {
            surface,
            device,
            queue,
            config,
            size,
            window,
        }
    }

    fn window(&self) -> &Window {
        &self.window
    }

    pub fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {
        if new_size.width > 0 && new_size.height > 0 {
            self.size = new_size;
            self.config.width = new_size.width;
            self.config.height = new_size.height;
            self.surface.configure(&self.device, &self.config);
        }
    }

    fn render(&mut self) -> Result<(), wgpu::SurfaceError> {
        let output = self.surface.get_current_texture()?;
        let view = output
            .texture
            .create_view(&wgpu::TextureViewDescriptor::default());

        let mut encoder = self
            .device
            .create_command_encoder(&wgpu::CommandEncoderDescriptor {
                label: Some("Render Encoder"),
            });

        {
            let _render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
                label: Some("Render Pass"),
                color_attachments: &[Some(wgpu::RenderPassColorAttachment {
                    view: &view,
                    resolve_target: None,
                    ops: wgpu::Operations {
                        load: wgpu::LoadOp::Clear(wgpu::Color {
                            r: 0.1,
                            g: 0.0,
                            b: 0.0,
                            a: 0.1,
                        }),
                        store: wgpu::StoreOp::Store,
                    },
                })],
                depth_stencil_attachment: None,
                occlusion_query_set: None,
                timestamp_writes: None,
            });
        }

        self.queue.submit(iter::once(encoder.finish()));
        output.present();

        Ok(())
    }
}


pub async fn run() {

    env_logger::init();

    let event_loop = EventLoop::new().unwrap();
    let window = WindowBuilder::new().with_transparent(true).build(&event_loop).unwrap();
    let mut state = State::new(&window).await;
    let mut surface_configured = false;


    event_loop
        .run(move |event, control_flow| {
            match event {
                Event::WindowEvent {
                    ref event,
                    window_id,
                } if window_id == state.window().id() => {
                        // UPDATED!
                        match event {
                            WindowEvent::CloseRequested  => control_flow.exit(),
                            WindowEvent::Resized(physical_size) => {
                                log::info!("physical_size: {physical_size:?}");
                                surface_configured = true;
                                state.resize(*physical_size);
                            }
                            WindowEvent::RedrawRequested => {
                                // This tells winit that we want another frame after this one
                                state.window().request_redraw();

                                if !surface_configured {
                                    return;
                                }

                                if let Err(e) = state.render() {
                                    log::error!("render error: {e:?}");
                                }
                            }
                            _ => {}
                        }
                    
                }
                _ => {}
            }
        })
        .unwrap();
}

fn main() {
    pollster::block_on(run());
}

and Cargo.toml with:

[dependencies]
winit = { version = "0.29.15", features = ["rwh_05"] }
env_logger = "0.10.2"
log = "0.4"
wgpu = "0.20.0"
pollster = "0.3"

Expected vs observed behavior
What I would expect with the Opaque alpha mode is this:
[image: opaque window]
and this is what I get when using my NVIDIA GPU, but when I use the low-power one I get:
[image: transparent window]
What I would expect is to be able to specify the alpha mode in both cases, with the alpha mode determining whether the window is drawn transparent. I also don't understand why my NVIDIA GPU does not support any other modes.

Platform
Windows 11
The high-power GPU is an NVIDIA GPU
The low-power GPU is an Intel integrated GPU (I am on a laptop)
wgpu version 0.20.0

@Wumpf
Member

Wumpf commented May 4, 2024

Thanks for reporting! Looks like something is messed up there.
I actually thought we didn't support transparent rendering on Windows at all yet, but the described behavior (getting a transparent window despite selecting Opaque, depending on the GPU) is even worse 😮

@Wumpf added the "type: bug" label on May 4, 2024
@cwfitzgerald
Member

I don't think this is actually our bug; if anything, this is an issue with Intel. We just pass through the Vulkan compositing modes, so if Opaque (which is defined in Vulkan as explicitly ignoring the alpha) is reading the alpha, that's the driver's problem. If that's the behavior they want, they should be advertising the Inherit blend mode, which has undefined alpha blending behavior.
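
For illustration, a minimal sketch (not from the issue; it assumes the surface_caps variable from the repro above and the wgpu 0.20 API) of picking an alpha mode from whatever the driver advertises instead of hardcoding one:

// Prefer a transparent-capable mode if the driver advertises one,
// otherwise fall back to the first advertised mode (usually Opaque).
// CompositeAlphaMode::Auto is another option: wgpu resolves it to a supported mode.
use wgpu::CompositeAlphaMode as Alpha;
let alpha_mode = [Alpha::PostMultiplied, Alpha::PreMultiplied, Alpha::Inherit]
    .into_iter()
    .find(|m| surface_caps.alpha_modes.contains(m))
    .unwrap_or(surface_caps.alpha_modes[0]);

The preference order here is only a guess at what a transparency-seeking app would want; Inherit leaves the blending behavior up to the system.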

@cwfitzgerald added the "external: driver-bug" label and removed the "type: bug" label on May 5, 2024
@tovernaar123
Author

@cwfitzgerald that makes sense. I guess sometimes bugs are useful (since I wanted to make something transparent) :). Should I close this issue since it's not a wgpu issue? (And my NVIDIA GPU not being able to render transparent windows is already known, so there is probably already an issue for it.)

@FrancescoLuzzi

FrancescoLuzzi commented May 12, 2024

And my NVIDIA GPU not being able to render transparent windows is already known, so there is probably already an issue for it

I've just encountered this behavior, and this issue is the only one that talks about it (maybe I've missed something).
I'm using an RTX 3070 Ti and surface_caps.alpha_modes always returns only Opaque, no other options.
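
For what it's worth, a minimal sketch (assuming a surface created from the same instance and window as in the repro; wgpu 0.20, native targets only) to compare what each adapter's driver advertises:

// Enumerate all adapters and print the alpha modes each driver reports for this surface.
for adapter in instance.enumerate_adapters(wgpu::Backends::all()) {
    let caps = surface.get_capabilities(&adapter);
    println!("{}: {:?}", adapter.get_info().name, caps.alpha_modes);
}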
