When you ask about big O notation, there are two issues to think about:

1. What is N? (For an n x n matrix, is N equal to n, or to n**2?)
2. What cost are you measuring? A merge sort of N items requires O(N log N) comparisons, but if you are sorting strings, the cost of each comparison depends on the lengths of the strings.
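To make the second issue concrete, here is a small sketch (the class name and data are my own, purely for illustration): sorting strings hides a per-comparison cost that depends on the string lengths.

```java
import java.util.Arrays;

public class SortCost {
    public static void main(String[] args) {
        String[] words = { "banana", "apple", "cherry", "date" };
        // Arrays.sort performs O(N log N) comparisons, but each
        // String.compareTo may walk up to L characters, so the total
        // character-level work is roughly O(N log N * L).
        Arrays.sort(words);
        System.out.println(Arrays.toString(words));
    }
}
```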
For your two constructors, the first allocates O(1) memory, and the second allocates O(N) memory where N is maxSize.
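Your constructors aren't shown here, but a minimal sketch of the usual pattern (an array-backed container, with hypothetical names) looks like this:

```java
public class IntStack {
    private int[] items;  // backing storage
    private int size;     // number of elements currently held

    // First constructor: initializes a couple of fields only,
    // so it allocates O(1) memory.
    public IntStack() {
        this.items = new int[0];
        this.size = 0;
    }

    // Second constructor: allocates a backing array of maxSize elements,
    // so it allocates O(N) memory, where N is maxSize.
    public IntStack(int maxSize) {
        this.items = new int[maxSize];
        this.size = 0;
    }
}
```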
But allocating O(N) memory, or requiring O(N) comparisons, is not at all the same as taking O(N) time. Time, although clearly important, is a much more slippery concept: it requires thinking about the whole system (cache effects, garbage collection, CPU scheduling, etc.) and not just the algorithm.
In general, it seems approximately true that allocating N bytes of memory takes O(N) time, both for classic C-style memory allocation with explicit free and for Java-style garbage collection systems (allocating memory in Java is extremely cheap, but all the memory you allocate must eventually be garbage collected, and you have to allow for the amortised cost of that). This is still an interesting research question; Time complexity of memory allocation has some good pointers. (HT to Patrick Kostjens for the link.)
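As a rough illustration (a sketch, not a real benchmark; results will be noisy and JVM-dependent), you can see the roughly linear cost by timing array allocations, since the JVM has to zero-initialize every element:

```java
public class AllocTiming {
    public static void main(String[] args) {
        // Double n each iteration; allocation time should grow roughly linearly,
        // because the JVM must zero-initialize all n elements.
        for (int n = 1_000_000; n <= 16_000_000; n *= 2) {
            long start = System.nanoTime();
            int[] a = new int[n];
            a[n - 1] = 1;  // touch the array so the allocation isn't optimized away
            long elapsed = System.nanoTime() - start;
            System.out.printf("n = %,d -> %,d ns%n", n, elapsed);
        }
    }
}
```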
I'm glad you're thinking about this, and not blindly trusting either your book or your instructor.