C# and many other languages use IEEE 754 as the specification for their floating-point data types. Floating-point numbers are expressed as a significand and an exponent, similar to how a decimal number in scientific notation is expressed as
1.234567890 x 10^12
^^^^^^^^^^^      ^^
significand      exponent
I won't go into the details (the Wikipedia article covers them better than I can), but IEEE 754 specifies that:
a 32-bit floating-point number, such as the C# float data type, has 24 bits of precision for the significand and 8 bits for the exponent;
a 64-bit floating-point number, such as the C# double data type, has 53 bits of precision for the significand and 11 bits for the exponent.
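To see those fields for yourself, you can reinterpret the raw bits - a minimal sketch (BitConverter.SingleToInt32Bits needs .NET Core 2.0 or later; DoubleToInt64Bits is available on all versions):

int floatBits = BitConverter.SingleToInt32Bits(1.5f);
Console.WriteLine((floatBits >> 31) & 1);      // sign bit: 0
Console.WriteLine((floatBits >> 23) & 0xFF);   // 8 exponent bits: 127 (biased)
Console.WriteLine(floatBits & 0x7FFFFF);       // 23 stored significand bits
                                               // (the 24th bit of precision is an implicit leading 1)

long doubleBits = BitConverter.DoubleToInt64Bits(1.5);
Console.WriteLine((doubleBits >> 63) & 1);     // sign bit: 0
Console.WriteLine((doubleBits >> 52) & 0x7FF); // 11 exponent bits: 1023 (biased)
Console.WriteLine(doubleBits & 0xFFFFFFFFFFFFFL); // 52 stored significand bits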
Because a float has only 24 bits of precision, it can express only about 7-8 significant decimal digits (24 x log10(2) ≈ 7.2). By contrast, a double has 53 bits of precision, which gives about 15-16 significant decimal digits (53 x log10(2) ≈ 16).
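You can watch those limits directly - a quick sketch (the literal is arbitrary, with more digits than either type can hold):

double d = 0.123456789012345678;
float  f = 0.123456789012345678f;

Console.WriteLine(d.ToString("G17"));  // prints 0.12345678901234568 - about 16 digits survive
Console.WriteLine(f.ToString("G9"));   // prints 0.123456791 - only about 7 digits survive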
As has been said in the comments, if you don't want to lose precision, don't convert from a double (64 bits in total) to a float (32 bits in total). Depending on your application, you could perhaps use the decimal data type, which has 28-29 significant decimal digits - but that comes with penalties: (a) calculations involving it are slower than for double or float, and (b) it's typically far less well supported by external libraries.
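For example, decimal stores the 17-significant-digit literal below exactly, where a double has to round it to the nearest representable value (a minimal sketch):

decimal m = 91.151497095188446m;
Console.WriteLine(m);  // prints 91.151497095188446, exactly as written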
Note that you're talking about 91.151497095188446, which the default double formatting will actually display as 91.1514970951884 - see, for example, this:
double value = 91.151497095188446;
Console.WriteLine(value);
// prints 91.1514970951884
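(That output is from the .NET Framework formatter, which defaults to 15 significant digits; .NET Core 3.0 and later print the shortest string that round-trips instead.) To see every digit of the double that was actually stored, on any runtime, ask for the round-trippable "G17" format:

double value = 91.151497095188446;
Console.WriteLine(value.ToString("G17"));
// prints all 17 significant digits of the nearest representable double,
// which may differ from the literal in the last digit or two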