Add Scene Description API for automation and a11y #1363
Right now rendered scenes are pretty opaque: they are hard for machines to parse to extract information about what is being shown and where it sits in 3D space.

I would like to propose a solution where the user creates an object graph and attaches it to an entry point on the session. Each object in the graph is assigned a colour, and these colours are rendered into a stencil buffer so that the device knows what is in the scene and where.
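To make the shape of this concrete, here is a minimal sketch. It assumes a hypothetical `session.sceneDescription` entry point, a hypothetical UA-assigned `stencilValue` on each node, and a hypothetical `drawMesh` helper; none of these names exist in WebXR today, they only illustrate the idea.

```js
// Describe the scene as an object graph and attach it to the session.
// `session` is assumed to come from navigator.xr.requestSession().
session.sceneDescription = {
  root: {
    label: 'kitchen',
    children: [
      { label: 'table', mesh: tableMesh }, // meshes assumed to exist already
      { label: 'chair', mesh: chairMesh },
    ],
  },
};

// While drawing each object, also write its assigned value into the stencil
// buffer so the device can map every rendered pixel back to a semantic node.
function drawNode(gl, node) {
  gl.enable(gl.STENCIL_TEST);
  gl.stencilFunc(gl.ALWAYS, node.stencilValue, 0xff); // hypothetical UA-assigned value
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.REPLACE);         // write where the fragment passes
  if (node.mesh) drawMesh(gl, node.mesh);             // hypothetical draw helper
  for (const child of node.children ?? []) {
    drawNode(gl, child);
  }
}
```

The device could then read the stencil buffer back and, via the graph, answer both "what is on screen" and "where it is" without understanding the app's rendering.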
/facetoface

Comment: Mentioned in an editors meeting: there's a possibility that this information could also be used as a generic input assist, where we could start surfacing, on select events, which semantic object a target ray intersected. This could make some types of input easier for developers.
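As a sketch of what that input assist could look like, assuming a hypothetical `semanticTarget` field on the existing XRSession `select` event (the real XRInputSourceEvent carries no such field today):

```js
// Hypothetical: the UA resolves the input source's target ray against the
// stencil buffer and reports the semantic node it hit on the select event.
session.addEventListener('select', (event) => {
  const node = event.semanticTarget; // not a real field; assumed for illustration
  if (node) {
    console.log(`select hit the "${node.label}" node`);
  }
});
```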