C++ (like C# and Objective-C and other direct descendants of C) has a quirky way of naming and specifying the primitive integral types.
As specified by C++, short and int are simple-type-specifiers, which can be mixed and matched with the keywords long, signed, and unsigned in a page full of combinations.
The general pattern for the single type short int is [signed] short [int], which is to say the signed and int keywords are optional.
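For instance, all of these spellings name the same type. A quick check (assuming a C++17 compiler for std::is_same_v):

```cpp
// All the permitted spellings of "short int" name one and the same type.
#include <type_traits>

static_assert(std::is_same_v<short, short int>);
static_assert(std::is_same_v<short, signed short>);
static_assert(std::is_same_v<short, signed short int>);

// unsigned, on the other hand, selects a genuinely different type.
static_assert(!std::is_same_v<short, unsigned short>);

int main() {}
```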
Note that even if int and short are the same size on a particular platform, they are still distinct types. int has at least the range of short, so numerically it's a drop-in replacement, but you can't use an int * or int & where a short * or short & is required. Beyond that, C++ provides all kinds of machinery for working with types, so for a large program written around short, converting to int may take some work.
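A small sketch of what that looks like in practice (the function names here are made up for illustration, but the errors are what any conforming compiler will give you):

```cpp
// Values convert between short and int, but pointers and references do not.
void takes_short_ptr(short*) {}
void takes_short_ref(short&) {}

int main() {
    short s = 42;
    int   i = 42;

    takes_short_ptr(&s);     // fine
    takes_short_ref(s);      // fine

    // takes_short_ptr(&i);  // error: cannot convert int* to short*
    // takes_short_ref(i);   // error: cannot bind short& to an int lvalue

    short from_int = i;      // the *value* converts fine (possibly with a narrowing warning)
    (void)from_int;
}
```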
Note also that there is usually no advantage to declaring something short unless you really need to save a few bytes. It's poor style, invites overflow errors, and can even reduce performance, since today's CPUs aren't optimized for 16-bit operations. And as Dietrich notes, under the crazy way C arithmetic semantics are specified, a short is promoted to int before any operation is performed, and if the result is assigned back to a short it's converted back again. This dance usually has no visible effect, but it can still produce compiler warnings and worse.
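Here's a rough illustration of that dance, assuming the common case of a 16-bit short and a 32-bit int:

```cpp
// short operands are promoted to int, the arithmetic happens in int,
// and storing the result back into a short converts again -- often with
// a -Wconversion warning, and with wrapping when the value doesn't fit.
#include <iostream>

int main() {
    short a = 30000;
    short b = 30000;

    // a + b is computed in int, so the intermediate 60000 is representable...
    std::cout << a + b << '\n';          // prints 60000; the expression has type int
    std::cout << sizeof(a + b) << '\n';  // sizeof(int), not sizeof(short)

    // ...but assigning it back to short converts to a type that can't hold it.
    short c = a + b;                     // typically wraps to -5536 with a 16-bit short
    std::cout << c << '\n';
}
```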
In any case, the best practice is to typedef your own types for whatever jobs you need done. Use int by default, and reach for int16_t, uint32_t, etc. from <stdint.h> (<cstdint> since C++11) instead of relying on the platform-dependent short and long.
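Something along these lines, where sample_t and pixel_t are just made-up names for illustration:

```cpp
// Pick fixed-width types from <cstdint> and give them job-specific names,
// rather than reaching for short or long directly.
#include <cstdint>
#include <vector>

using sample_t = std::int16_t;   // e.g. 16-bit audio samples
using pixel_t  = std::uint32_t;  // e.g. packed RGBA pixels

int main() {
    std::vector<sample_t> samples(1024, 0);
    pixel_t background = 0xFF00FF00u;

    // Loop counters and general-purpose arithmetic still default to int.
    int count = static_cast<int>(samples.size());
    (void)background;
    (void)count;
}
```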