I'm pretty new to the more "technical" side of computing, so please bear with me if this is a stupid question. I'm probably missing an obvious point, but why are flags+bitmasks more memory efficient than, say, an equally sized bunch of booleans? Wouldn't you have to initialize up to 32 integers just to fill up the flag?
Are they just computationally faster, or do they also take up less memory? (If it's the latter, I'm lost.)
I was checking these out, but I didn't see my question:
- Why use flags+bitmasks rather than a series of booleans?
- http://www.vipan.com/htdocs/bitwisehelp.html
EDIT: @EJP This is what I mean by "initializing", taken from that vipan.com page. There are 32 int constants declared, taking up (4 bytes * 32), versus the equivalent 32 booleans at (1 byte * 32):
// Constants to hold bit masks for desired flags
static final int flagAllOff = 0;   // 000...00000000 (empty mask)
static final int flagbit1 = 1;     // 2^0  000...00000001
static final int flagbit2 = 2;     // 2^1  000...00000010
static final int flagbit3 = 4;     // 2^2  000...00000100
static final int flagbit4 = 8;     // 2^3  000...00001000
static final int flagbit5 = 16;    // 2^4  000...00010000
static final int flagbit6 = 32;    // 2^5  000...00100000
static final int flagbit7 = 64;    // 2^6  000...01000000
static final int flagbit8 = 128;   // 2^7  000...10000000
// ...
static final int flagbit31 = (int) Math.pow(2, 30); // 2^30
// ...
// Variable to hold the status of all flags
int flags = 0;
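For reference, this is the "bunch of booleans" alternative I'm comparing against (a rough sketch of my own, with made-up field names):

// One boolean field per flag instead of one shared int.
// A boolean field typically occupies at least 1 byte in the JVM,
// so 32 of these come to roughly 32 bytes, versus the 4 bytes
// of the single int flags above.
boolean flag1 = false;
boolean flag2 = false;
boolean flag3 = false;
// ... and so on up to flag32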
EDIT:
So in this case, flags is my flag variable. But if I want to represent some combination of values in flags, I'm going to write something of the form flags = flagbit1 | flagbit2 | flagbit3 | ... | flagbit31. In order to set flags to whatever that turns out to be, I had to create 32 int constants named flagbit#, and that is what I'm asking about.
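To make it concrete, here's the kind of usage I have in mind, reusing the constant names from the snippet above (my own sketch, so correct me if this isn't how it's actually done):

// Turn individual flags on by OR-ing their masks into the variable:
flags = flagbit1 | flagbit3;               // sets bits 0 and 2

// Test whether a particular flag is on by AND-ing with its mask:
boolean thirdFlagOn = (flags & flagbit3) != 0;

// Turn a flag back off by AND-ing with the complement of its mask:
flags &= ~flagbit1;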