Is there any way to compute the width of an integer type at compile-time?
[+19] [8] R.. GitHub STOP HELPING ICE
[2010-10-18 07:30:34]
[ c integer width padding ]
[ https://stackoverflow.com/questions/3957252/is-there-any-way-to-compute-the-width-of-an-integer-type-at-compile-time ]

The size of an integer type (or any type) in units of char/bytes is easily computed as sizeof(type). A common idiom is to multiply by CHAR_BIT to find the number of bits occupied by the type, but on implementations with padding bits, this will not be equal to the width in value bits. Worse yet, code like:

x >> (CHAR_BIT * sizeof(type) - 1)

may actually have undefined behavior if CHAR_BIT*sizeof(type) is greater than the actual width of type.

For simplicity, let's assume our types are unsigned. Then the width of type is ceil(log2((type)-1)). Is there any way to compute this value as a constant expression?
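For example (a hedged sketch; the padding layout here is hypothetical, not taken from any real implementation), suppose sizeof(unsigned) is 4 and CHAR_BIT is 8, but unsigned int has only 24 value bits:

#include <limits.h>

/* Hypothetical implementation: sizeof(unsigned) == 4, CHAR_BIT == 8,
   but unsigned int has only 24 value bits (8 padding bits). */
unsigned top_bit(unsigned x)
{
    /* The shift count is 8*4-1 == 31, but the width is 24, so this
       shift is undefined behavior (C99 6.5.7p3: the count must be
       less than the width of the promoted left operand). */
    return x >> (CHAR_BIT * sizeof x - 1);
}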

(4) Are there implementations with padding bits? - sbi
(6) @sbi: If padding bits are allowed in any type, then it doesn't matter if there are such implementations actually existing or not. If you want to write portable and conforming code, then you have no choice. You can redefine "portable" for yourself as "to any system without padding bits", of course. Just document it well to remember it when it suddenly doesn't work anymore. - Secure
(9) @sbi: every C99 implementation has a type with padding bits, namely _Bool. Its maximum value is 1 and its size is at least one, too. So it has CHAR_BIT - 1 padding bits. - Jens Gustedt
@Jens: So this comes down to What is the width of an integer type? - sbi
(1) @sbi: the width of an integer type is a well defined term in the standard. It is clearly distinguished from its size, which is its storage requirement. - Jens Gustedt
(1) If you look at the 12/29/10 edit of my question that you commented on, you'll see a function-like macro I found searching through comp.lang.c that should create a compile-time constant for the width (defined as value bits plus sign bit, if applicable). As noted, you must know the MAX of the type to use the macro. I hope this is helpful. Also, I have replied to the comment you made on a different question I had about padding bits. I was wondering if you disagree with my thinking, and if so, could you reply there. Thanks - Anonymous Question Guy
@AQG: Wow, that's quite a macro. I'm going to have to spend some time reading it, but if it works, please post it here as an answer and I'll accept it. (At least I think I can still change my choice of accepted answer...) - R.. GitHub STOP HELPING ICE
Hi R, I have submitted the IMAX_BITS() macro as the answer to your question. Note that in my original comment I incorrectly stated that the macro creates a compile-time constant for the width; it does not. It actually creates a compile-time constant for the number of value bits. This is discussed in my answer. My apologies. - Anonymous Question Guy
[+20] [2011-01-03 23:33:32] Anonymous Question Guy [ACCEPTED]

There is a function-like macro that can determine the value bits of an integer type, but only if you already know that type's maximum value. Whether or not you'll get a compile-time constant depends on your compiler, but I would guess that in most cases the answer is yes.

Credit to Hallvard B. Furuseth for his IMAX_BITS() function-like macro, posted in reply to a question on comp.lang.c [1]:

/* Number of bits in inttype_MAX, or in any (1<<b)-1 where 0 <= b < 3E+10 */
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
                  + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

IMAX_BITS(INT_MAX) computes the number of bits in an int, and IMAX_BITS((unsigned_type)-1) computes the number of bits in an unsigned_type. Until someone implements 4-gigabyte integers, anyway :-)
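For example (a sketch; _Static_assert is a C11 assumption, and the name UINT_VALUE_BITS is mine), the result is usable anywhere a constant expression is required:

#include <limits.h>

#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
                  + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

/* Number of value bits in unsigned int, as a true constant expression. */
enum { UINT_VALUE_BITS = IMAX_BITS(UINT_MAX) };

static unsigned bit_buckets[UINT_VALUE_BITS];          /* array size: OK */
_Static_assert(UINT_VALUE_BITS >= 16, "unsigned int has at least 16 value bits");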


And credit to Eric Sosman for this [alternate version](http://groups.google.com/group/comp.lang.c/msg/e998153ef07ff04b?dmode=source) that should work with less than 2040 bits: **(EDIT 1/3/2011 11:30PM EST: It turns out this version was also written by Hallvard B. Furuseth)**
/* Number of bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 2040 */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))

**Remember that although the width of an unsigned integer type is equal to the number of value bits, the width of a signed integer type is one greater (§6.2.6.2/6).** This is of special importance as in my original comment to your question I had incorrectly stated that the IMAX_BITS() macro calculates the width when it actually calculates the number of value bits. Sorry about that!

So for example IMAX_BITS(INT64_MAX) will create a compile-time constant of 63. However, in this example, we are dealing with a signed type so you must add 1 to account for the sign bit if you want the actual width of an int64_t, which is of course 64.
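Concretely (a sketch; the macro name INT64_T_WIDTH is mine, INT64_MAX comes from <stdint.h>, and IMAX_BITS is the macro above):

#include <stdint.h>

/* 63 value bits plus 1 sign bit: the width of int64_t. */
#define INT64_T_WIDTH (IMAX_BITS(INT64_MAX) + 1)       /* == 64 */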

In a separate comp.lang.c discussion a user named blargg gives a breakdown of how the macro works:
Re: using pre-processor to count bits in integer types... [2]

Note that the macro only works with 2^n-1 values (i.e., all 1s in binary), as would be expected with any MAX value. Also note that while it is easy to get a compile-time constant for the maximum value of an unsigned integer type (IMAX_BITS((unsigned type)-1)), at the time of this writing I don't know of any way to do the same for a signed integer type without invoking implementation-defined behavior. If I ever find out, I'll answer my own related SO question, here:
C question: off_t (and other signed integer types) minimum and maximum values - Stack Overflow [3]
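To make the 2^n-1 restriction concrete, here are a few spot-checks (a sketch; _Static_assert is a C11 assumption, and either IMAX_BITS definition above is assumed to be in scope):

_Static_assert(IMAX_BITS(1) == 1,           "2^1 - 1 occupies 1 bit");
_Static_assert(IMAX_BITS(255) == 8,         "2^8 - 1 occupies 8 bits");
_Static_assert(IMAX_BITS(0xFFFFFFFF) == 32, "2^32 - 1 occupies 32 bits");
/* IMAX_BITS(0x20000000) would be meaningless: the argument must be 2^n - 1. */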

[1] https://groups.google.com/g/comp.lang.c/c/NfedEFBFJ0k
[2] https://web.archive.org/web/20150403064546/http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2009-01/msg02242.html
[3] https://stackoverflow.com/questions/4514572/c-question-off-t-and-other-signed-integer-types-minimum-and-maximum-values

What's the implementation-defined-behavior-dependent way to do it for signed types? I assume it depends on casting an unsigned value to signed, but it would be nice to see what you've got and what assumptions it depends on. - R.. GitHub STOP HELPING ICE
(2) By the way, the result of this macro is defined by the C language to be a constant expression, as long as the argument m is a constant expression. - R.. GitHub STOP HELPING ICE
@R re signed types: the way I came up with was to start with the value 1 in a signed integer type and repeatedly multiply by 2 until the value becomes negative; that would determine the number of value bits. What is implementation-defined is whether or not the value will actually become negative. Since I asked my question I've googled and found some interesting macros that check bits in a signed type sequentially using bitwise operations, but that wouldn't be portable (not that my own example is, anyway). I'll search my history on another computer for a link or two. - Anonymous Question Guy
Hi again. I misspoke in my previous comment, as apparently integer overflow behavior is undefined, not implementation-defined. As best I can tell, that means the implementation doesn't have to document its behavior (or even have any consistent behavior at all) in the case of integer overflow. Re the other macros, which I couldn't find, I had discarded them due to behavior that is easiest to describe as non-conforming; technically, according to §6.5.7/4,5, the bitwise shift operators have either defined, undefined, or implementation-defined behavior depending on the value of the signed type. - Anonymous Question Guy
Curiously, the optimized assembly produced by for (int i = 0; m >> i; i++) {} and IMAX_BITS is identical with gcc7. - David C. Rankin
I was thinking why we can’t deduce the width of a signed integer type from its corresponding unsigned type, but C99 §6.3.6.1 indicates there need not even be such a type, and §6.2.6.2/2 seems to indicate that 31-bit unsigned int + 32-bit signed int (both having a max value of 0x7FFFFFFF) is perfectly fine… cries but I guess we have an answer for the unsigned values at least, and the unsigned one is an upper bound for the signed one, also due to §6.2.6.2/5 sentence 2… - mirabilos
I found that IMAX_BITS(0x20000000U) is 398, so we clearly need to first check that the argument is positive and consists of consecutive ones with no trailing zeros. I hope that the latest update to my integer type sanity checks makes them a bit more robust and still (hopefully!) not UB… sigh… - mirabilos
The link for the explanation is down, at least at the moment; if it is permanently down, it would be nice to replace it with web.archive.org/web/20150403064546/http://coding.derkeiler.com/… - Pascal Cuoq
@mirabilos As noted "the macro only works with 2^n-1 values" - Anonymous Question Guy
@AnonymousQuestionGuy true, but it’s still good to have a way to check them. (I extended it to 279 bits and moved all the “make ints safer in C. I hate C.” stuff into a separate .h file in the meanwhile for easier peer review, reuse, and I plan on extending this as I go.) - mirabilos
[+6] [2010-10-18 08:22:10] Christoph

Compare the macros from <limits.h> against known max values for specific integer widths:

#include <limits.h>

#if UINT_MAX == 0xFFFF
#define INT_WIDTH 16
#elif UINT_MAX == 0xFFFFFF
#define INT_WIDTH 24
#elif ...
#else
#error "unsupported integer width"
#endif

[+3] [2010-10-18 08:31:21] Jens Gustedt

First approach: if you know which standard type you have (i.e., your type is not a typedef), go with the {U}INT_MAX macros and check against the possible sizes.

If you don't have that, for unsigned types this is conceptually relatively easy. For your favorite type T, just take (T)-1 and build a monster test macro that checks it against all possible values with ?:. Since these are all compile-time constant expressions, any decent compiler will optimize that out and leave you with just the value you are interested in.

This wouldn't work in #if etc., because of the type cast, but that can't be avoided in a simple way.
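A minimal sketch of that ?:-chain idea (the macro name and the candidate widths are my assumptions; extend the list as needed):

/* Width in value bits of an unsigned type T, tested via (T)-1. This is */
/* a constant expression, but not usable in #if because of the cast.    */
#define UNSIGNED_WIDTH(T)                          \
    ((T)-1 == 0xFFu                   ?  8 :       \
     (T)-1 == 0xFFFFu                 ? 16 :       \
     (T)-1 == 0xFFFFFFFFu             ? 32 :       \
     (T)-1 == 0xFFFFFFFFFFFFFFFFull   ? 64 : -1)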

For signed types this is more complicated. For types at least as wide as int, you can hope to promote to the corresponding unsigned type and then take that type's width. But as for knowing whether your signed type has exactly one value bit less than that, no, I don't think there is a generic expression for it.

Edit: Just to illustrate this a bit, here are some extracts showing what you can do to keep this approach (for unsigned types) from generating overly large expressions. In P99 [1] I have something like:

#ifndef P99_HIGH2
# if P99_UINTMAX_WIDTH == 64
#  define P99_HIGH2(X)                                         \
((((X) & P00_B0) ? P00_S0 : 0u)                              \
 | (((X) & P00_B1) ? P00_S1 : 0u)                            \
 | (((X) & P00_B2) ? P00_S2 : 0u)                            \
 | (((X) & P00_B3) ? P00_S3 : 0u)                            \
 | (((X) & P00_B4) ? P00_S4 : 0u)                            \
 | (((X) & P00_B5) ? P00_S5 : 0u))
# endif
#endif
#ifndef P99_HIGH2
# if P99_UINTMAX_WIDTH <= 128
#  define P99_HIGH2(X)                                         \
((((X) & P00_B0) ? P00_S0 : 0u)                              \
 | (((X) & P00_B1) ? P00_S1 : 0u)                            \
 | (((X) & P00_B2) ? P00_S2 : 0u)                            \
 | (((X) & P00_B3) ? P00_S3 : 0u)                            \
 | (((X) & P00_B4) ? P00_S4 : 0u)                            \
 | (((X) & P00_B5) ? P00_S5 : 0u)                            \
 | (((X) & P00_B6) ? P00_S6 : 0u))
# endif
#endif

where the magic constants are defined with a sequence of #if at the beginning. It is important there not to expose overly large constants to compilers that can't handle them.

/* The preprocessor always computes with the precision of uintmax_t */
/* so for the preprocessor this is equivalent to UINTMAX_MAX       */
#define P00_UNSIGNED_MAX ~0u

#define P00_S0 0x01
#define P00_S1 0x02
#define P00_S2 0x04
#define P00_S3 0x08
#define P00_S4 0x10
#define P00_S5 0x20
#define P00_S6 0x40

/* This has to be such ugly #if/#else to ensure that the            */
/* preprocessor never sees a constant that is too large.            */
#ifndef P99_UINTMAX_MAX
# if P00_UNSIGNED_MAX == 0xFFFFFFFFFFFFFFFF
#  define P99_UINTMAX_WIDTH 64
#  define P99_UINTMAX_MAX 0xFFFFFFFFFFFFFFFFU
#  define P00_B0 0xAAAAAAAAAAAAAAAAU
#  define P00_B1 0xCCCCCCCCCCCCCCCCU
#  define P00_B2 0xF0F0F0F0F0F0F0F0U
#  define P00_B3 0xFF00FF00FF00FF00U
#  define P00_B4 0xFFFF0000FFFF0000U
#  define P00_B5 0xFFFFFFFF00000000U
#  define P00_B6 0x0U
# endif /* P00_UNSIGNED_MAX */
#endif /* P99_UINTMAX_MAX */
#ifndef P99_UINTMAX_MAX
# if P00_UNSIGNED_MAX == 0x1FFFFFFFFFFFFFFFF
#  define P99_UINTMAX_WIDTH 65
#  define P99_UINTMAX_MAX 0x1FFFFFFFFFFFFFFFFU
#  define P00_B0 0xAAAAAAAAAAAAAAAAU
#  define P00_B1 0xCCCCCCCCCCCCCCCCU
#  define P00_B2 0xF0F0F0F0F0F0F0F0U
#  define P00_B3 0xFF00FF00FF00FF00U
#  define P00_B4 0xFFFF0000FFFF0000U
#  define P00_B5 0xFFFFFFFF00000000U
#  define P00_B6 0x10000000000000000U
# endif /* P00_UNSIGNED_MAX */
#endif /* P99_UINTMAX_MAX */
… (the #if cascade continues analogously for other widths)
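My reading of how this composes (an assumption on my part, not a quote from P99's documentation): for an argument with all value bits set, P99_HIGH2 yields the position of the highest set bit, so the width of an unsigned type T falls out as:

/* Width of an unsigned type T: highest set bit position of (T)-1, plus one. */
#define P99_STYLE_UWIDTH(T) (P99_HIGH2((T)-1) + 1)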
[1] http://p99.gforge.inria.fr/p99-html/group__integers_gaf26d8bca47d8b51ae9a520e9c0966608.html#gaf26d8bca47d8b51ae9a520e9c0966608

As far as I know, a signed type always has just one value bit less than the corresponding unsigned type. This is a consequence of the representations being compatible for positive values of the signed type, and of the restriction that only two's complement, ones' complement, and sign/magnitude are allowed as signed representations. - R.. GitHub STOP HELPING ICE
(1) @R., in all practical cases that I know of you are right, but if you consider just that condition (positive signed -> unsigned), it is perfectly possible for the signed type to have even fewer value bits. The C99 standard explicitly allows that; it just requires that the signed type have no more value bits than the unsigned type. So my guess would be that there was at least one implementor on the committee who vetoed a cleaner solution, implying in turn that such a weird thing existed somewhere. - Jens Gustedt
(2) Yes, the important part is "no more value bits". The signed type is even allowed to have the same number of value bits as the unsigned type, plus the sign bit. So unsigned could have 32 bits and signed 33 bits. - Secure
@Secure: yes, I think this was what I tried to say. (signed and unsigned with 31 value bits is more likely than 32, though ;-) The point I was trying to make is that you can't know the maximum values of the signed integer types without inspecting the MAX macros. There is no such expression that is guaranteed to work in all cases. - Jens Gustedt
[+1] [2010-10-18 12:47:41] Secure

You can calculate it at runtime with a simple loop, well-defined and without the danger of UB:

#include <limits.h>

unsigned int u;
int c;

/* Count how many times a set bit can be shifted up before it falls off the top. */
for (c = 0, u = 1; u; c++, u <<= 1);

int total_bits   = CHAR_BIT * sizeof(unsigned int);
int value_bits   = c;
int padding_bits = total_bits - value_bits;

The simplest way would be to check in your unit tests (you have them, right?) that value_bits is identical to your current INT_WIDTH definition.

If you really need to calculate it at compile time, I'd go with one of the given #if-#elif cascades, either testing UINT_MAX or your target system.

What do you need it for? Maybe YAGNI?


(1) The example usage I had in mind was implementing a bit array that uses a type that's expected to be efficient, like unsigned int or size_t. - R.. GitHub STOP HELPING ICE
[+1] [2010-10-18 12:55:25] Will

A general observation: if you rely on the width of a data type in your calculations, you should use the explicit-width data types defined in <stdint.h> [1], e.g. uint32_t.

Trying to count the bytes in the standard types raises the question of what your 'portable' code would do in the event of an overflow.

[1] https://pubs.opengroup.org/onlinepubs/009695399/basedefs/stdint.h.html

(5) I disagree. A very reasonable thing to do when implementing a bit array, for instance, would be to assume that unsigned or size_t is an efficient size to work with, and then you'd want to know the number of bits each "word" stores. - R.. GitHub STOP HELPING ICE
(1) Also, as someone pointed out, uint32_t may be of lower rank than int, which makes it a candidate for automatic promotion to a signed type, causing all kinds of havoc including Undefined Behaviour. (It may even be narrower than int…) I’m currently in the process of changing all my code away from all <stdint.h> types (except {,u}intmax_t) for safety. - mirabilos
[0] [2010-10-18 07:40:48] Aaron Digulla

Yes, since for all practical purposes, the number of possible widths is limited:

#if ~0 == 0xFFFF
# define INT_WIDTH 16
#elif ~0 == 0xFFFFFFFF
# define INT_WIDTH 32
#else
# define INT_WIDTH 64
#endif

(3) Are you guaranteed the integer type used by the pre-processor is the same as the integer type of compiled code? I don't know if the standard says this is the case or not. - The Archetypal Paul
@Paul: Good point. My gut feeling is that you should be safe unless you start using flags to switch the int width (like "use 32bit ints when 16bit is the default"). - Aaron Digulla
(2) @Paul: see C99 6.10.1 §4: "For the purposes of this token conversion and evaluation, all signed integer types and all unsigned integer types act as if they have the same representation as, respectively, the types intmax_t and uintmax_t defined in the header <stdint.h>"; use the _MAX macros from <limits.h> instead - Christoph
Please define "for all practical purposes". en.wikipedia.org/wiki/Byte "Various implementations of C and C++ define a byte as 8, 9, 16, 32, or 36 bits." - Secure
(2) @Aaron, for the reasons that Christoph cites, your preprocessor approach will only work to determine the width of uintmax_t, which in any case has a width of at least 64, so your approach doesn't lead far, unfortunately. And you can't use it for signed types in any case, since you can't know exactly how a signed and unsigned type are related. - Jens Gustedt
@Secure: True but today, almost any desktop CPU uses 32 or 64 bits. And if you have something else, you will know (and won't need to guess the bit size). - Aaron Digulla
@Jens: Interesting. When I used C the last time, there were no 64bit types. - Aaron Digulla
@Aaron, depends on the standard your compiler implements. What I said was for the current one, C99. If you happen to have a compiler that has now fallen 11 years behind, well yes, there might be no 64bit types. - Jens Gustedt
(1) @Aaron, re your reply to @Secure: if you think you always know on which platform your code will be executed, you are fine. If you want to write portable code that might end up e.g. on embedded devices, you'd better not take that bet. - Jens Gustedt
[0] [2023-12-14 21:56:56] Doug Royer

I prefer standards; I could only find one standards reference for C++ (#3 below).

For C and C++, (non-standard?) code:

(1) On Linux, the xxxx_WIDTH macros are defined in limits.h.

(2) For Windows, the __XXXX_WIDTH__ macros (two underscores before and after) seem to be what works. These are the ones defined; obviously the unsigned pairs have the same width as their signed ones:

__SCHAR_WIDTH__ __SHRT_WIDTH__ __INT_WIDTH__ __LONG_WIDTH__ __LONG_LONG_WIDTH__

(3) For C++, I need the same. I just found this:

https://en.cppreference.com/w/c/types/limits

The xxxx_WIDTH values.
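Putting this together with the accepted answer (a sketch; it assumes nothing beyond <limits.h> and the IMAX_BITS macro quoted above):

#include <limits.h>

#ifndef INT_WIDTH   /* pre-C23 libraries may not define it */
/* Fallback: number of value bits of INT_MAX, plus the sign bit. */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))
#define INT_WIDTH (IMAX_BITS(INT_MAX) + 1)
#endif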


(1) The integer width macros were only made standard in the upcoming C23 Standard. Before that they were an implementation-specific feature. I believe that GCC has had them since GCC 7, but you may be required to define a feature test macro to enable this, e.g., #define _GNU_SOURCE. Note that stdint.h contains further integer width macros for more exotic integer types. - ad absurdum
[-1] [2010-10-18 07:57:56] mouviciel

Usually, the size of int is known for a given compiler/platform. If you have macros that identify the compiler/platform, then you can use them to conditionally define INT_WIDTH.

You can take a look at <sys/types.h> and its dependents for examples.
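A sketch of that idea (the platform macros and widths listed here are illustrative assumptions; verify them against your targets):

/* Conditionally define INT_WIDTH from compiler/platform identification macros. */
#if defined(__x86_64__) || defined(__i386__) || defined(_M_X64) || defined(_M_IX86)
#define INT_WIDTH 32
#elif defined(__AVR__)   /* 8-bit AVR: int is 16 bits */
#define INT_WIDTH 16
#else
#error "unknown platform: define INT_WIDTH for your target"
#endif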

