In C#, the built-in integer types are represented by a sequence of bits of a predefined length. For the basic int type that length is 32 bits. Because 32 bits can only represent 2^32 = 4,294,967,296 distinct values, an int can only hold numbers within a limited range.
Since int can hold both positive and negative numbers, the sign of the number must be encoded somehow. This is done with the first (most significant) bit: if that bit is 1, the number is negative.
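You can see the sign bit directly by printing the bit pattern. Here is a minimal sketch (assuming a small console program with top-level statements); Convert.ToString with base 2 prints the binary representation, without leading zeros for non-negative values:

    using System;

    // Convert.ToString(value, 2) prints the binary (two's complement) bit pattern
    Console.WriteLine(Convert.ToString(1, 2));             // 1
    Console.WriteLine(Convert.ToString(int.MaxValue, 2));  // 1111111111111111111111111111111  (31 ones)
    Console.WriteLine(Convert.ToString(-1, 2));            // 11111111111111111111111111111111 (32 ones)
    Console.WriteLine(Convert.ToString(int.MinValue, 2));  // 10000000000000000000000000000000

Notice that every negative value starts with a 1 in the first bit.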
Here are the int values laid out on a number-line in hexadecimal and decimal:
 Hexadecimal        Decimal
 -----------    -----------
 0x80000000     -2147483648
 0x80000001     -2147483647
 0x80000002     -2147483646
    ...              ...
 0xFFFFFFFE              -2
 0xFFFFFFFF              -1
 0x00000000               0
 0x00000001               1
 0x00000002               2
     ...             ...
 0x7FFFFFFE      2147483646
 0x7FFFFFFF      2147483647
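If you want to verify a few rows of the chart in code, the hexadecimal literals can be cast back to int. This is only a small sketch; the unchecked casts are needed because literals such as 0x80000000 do not fit in an int and are therefore of type uint:

    using System;

    // hexadecimal values from the chart, converted back to int
    int minValue = unchecked((int)0x80000000);   // -2147483648 (int.MinValue)
    int minusOne = unchecked((int)0xFFFFFFFF);   // -1
    int zero     = 0x00000000;                   //  0
    int maxValue = 0x7FFFFFFF;                   //  2147483647 (int.MaxValue)

    Console.WriteLine($"{minValue} {minusOne} {zero} {maxValue}");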
As you can see from the chart, the bit pattern of the smallest possible value is exactly what you would get by adding one to the largest possible value, ignoring the interpretation of the sign bit. When an addition wraps around like this, it is called "integer overflow". Whether an overflow is allowed to wrap silently or is treated as an error is configurable with the checked and unchecked statements in C#.
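Here is a short example of both behaviors (a sketch assuming default project settings, where arithmetic on variables is unchecked unless you ask otherwise):

    using System;

    int max = int.MaxValue;

    // unchecked: the addition silently wraps around to int.MinValue
    int wrapped = unchecked(max + 1);
    Console.WriteLine(wrapped);                  // -2147483648

    // checked: the same addition throws at run time instead of wrapping
    try
    {
        int overflowed = checked(max + 1);
        Console.WriteLine(overflowed);           // never reached
    }
    catch (OverflowException)
    {
        Console.WriteLine("overflow detected");
    }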
This representation is called 2's complement. One handy consequence is that negating a value is the same as inverting all of its bits and then adding one.
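That property is easy to check in code (a minimal sketch):

    using System;

    int x = 5;
    int negated = ~x + 1;              // invert every bit, then add one
    Console.WriteLine(negated);        // -5
    Console.WriteLine(negated == -x);  // True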
You can check this link if you want to go deeper.