
Use 1u << when dealing with Uint32 and Uint16 #9412

Closed
Susko3 wants to merge 2 commits from the use-1u-in-public-headers branch

Conversation

Susko3
Contributor

@Susko3 Susko3 commented Mar 31, 2024

Description

Changes `#define`s for constants commonly used with unsigned integers to use `1u << x` instead of `1 << x` or `1ul << x`.

Helps with generating bindings, as `1u` seems to be platform-agnostic, while `1ul` has platform-defined width (at least when using ClangSharp on Windows).

The main inspiration is the SDL_haptic.h definitions (of which SDL_HAPTIC_SQUARE was missed!).

@slouken slouken requested a review from icculus April 1, 2024 14:40
@slouken
Collaborator

slouken commented Apr 1, 2024

@smcv, are there any other implications of this PR?

@Susko3
Contributor Author

Susko3 commented Apr 6, 2024

The L suffix means long, which is 32 bits on Windows and 32 or 64 bits on Linux (depending on OS bitness).
I've worked around this issue by specifying the windows-types option in ClangSharp (I had previously used unix-types).

Source: https://learn.microsoft.com/en-us/dotnet/api/system.runtime.interopservices.clong?view=net-8.0

@smcv
Contributor

smcv commented Apr 8, 2024

1u seems to be platform-agnostic

1u is explicitly unsigned and implicitly int-sized, which in theory is just as platform-dependent as long. Standard C allows int to be any size above a minimum of 16 bits, but in practice all mainstream platforms have an int of 32 bits or larger.

1ul is explicitly unsigned and explicitly long-sized.

If we want a constant that has a fixed size in bits, as far as I'm aware the only way to achieve that is ((uint32_t) (1u << 3)).

A taxonomy of "normal" platform data models, for reference:

  • "ILP32": 32-bit Windows, 32-bit Linux, historically 32-bit macOS. 32-bit int, long and void *; 64-bit long long.
  • "LP64": 64-bit Linux, 64-bit macOS. 32-bit int; 64-bit long, long long and void *.
  • "LLP64": 64-bit Windows. 32-bit int and long; 64-bit long long and void *.

In practice it's rare for int to be larger than 32 bits, because then there would not be enough smaller standard types ({char, short}) to provide all three of the fixed-size types {int8_t, int16_t, int32_t}.

@smcv
Contributor

smcv commented Apr 8, 2024

Changes #defines which define constants commonly used with unsigned integers to use 1u << x instead of 1 << x or 1ul << x.

This is really two different changes, and I would personally say that they should be two separate commits.

  1. Changing 1 << x to 1u << x changes the type of the constant from int to unsigned int (same width, different interpretation), which gives us 1 more usable bit in the bitfield: if we can assume that int is at least 32 bits, then we can use 1u << 31, whereas 1 << 31 would be undefined behaviour (the shift puts a 1 into the sign bit of a signed int), which is not what we want. I would say this is definitely a good thing for bitfields.

  2. Changing 1ul << x to 1u << x narrows the constant from long to int: no practical effect on ILP32 or LLP64, i.e. Windows or 32-bit Linux, but it matters on LP64, i.e. 64-bit Linux and 64-bit macOS, and it would also matter on platforms with 16-bit int if SDL still supports any. I would say this is not so obviously a good thing. It makes the constant the same width as the variable it will go in or narrower, which is sort of good; but if there is any platform that SDL supports that uses int smaller than 32 bits, then 1u << 16 or greater would become invalid.

Changing %lx in the tests to %x needed to be done to avoid compiler warnings/errors on 64-bit Linux after changing the type of the corresponding constant. I would personally say this should be part of (2.).

Does SDL document the architectural assumptions that it makes? It might be good to have a document that describes the assumptions that SDL makes beyond the guarantees of Standard C, for example "we (do/don't) assume that int is at least 16 bits in size". I can say with at least 99% confidence that SDL does make some assumptions beyond what Standard C guarantees, because any particular version of Standard C is less helpful than you would think! GLib has https://gitlab.gnome.org/GNOME/glib/-/blob/main/docs/toolchain-requirements.md?ref_type=heads which is the sort of thing that I'm thinking of: it documents GLib as requiring about half of the differences between C89 and C99.

Another good way to document architectural assumptions is with static assertions, like _Static_assert(CHAR_BIT == 8) and _Static_assert(sizeof(int) >= 4) (or the equivalent with SDL's similar macros). GLib has some of this in https://gitlab.gnome.org/GNOME/glib/-/blob/main/glib/glib-init.c?ref_type=heads, which I know because I added a lot of it myself.

@Susko3
Contributor Author

Susko3 commented May 20, 2024

Most of this was added in #9726.

@Susko3 Susko3 closed this May 20, 2024
@Susko3 Susko3 deleted the use-1u-in-public-headers branch May 20, 2024 18:58