C# decimals don't just store the numeric value; they also store a scale (the number of digits kept after the decimal point), which is why trailing zeros are preserved. You can see that with code like:
decimal d1 = 0M;
decimal d2 = 0.00M;
 
Console.WriteLine(d1);  // 0
Console.WriteLine(d2);  // 0.00
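If you want to see that stored scale directly rather than inferring it from the output, one option (a quick sketch that relies on the bit layout documented for decimal.GetBits, where the scale sits in bits 16-23 of the fourth element) is:
int[] bits = decimal.GetBits(d2);                         // raw 128-bit representation of d2
Console.WriteLine((bits[3] >> 16) & 0xFF);                // 2 (the scale of 0.00M)
Console.WriteLine((decimal.GetBits(d1)[3] >> 16) & 0xFF); // 0 (the scale of 0M)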
That scale changes when you multiply or divide:
decimal d1 = 10M;
decimal d2 = 10.00M;
decimal d3 = 5.0M;
 
Console.WriteLine(d1 * d3); // 50.0
Console.WriteLine(d2 * d3); // 50.000
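Division changes it too: the result carries as many digits after the decimal point as it needs, up to the type's 28-29 significant digit limit, and decimal.Round can cut the scale back down. A small sketch of what I'd expect:
decimal third = 1M / 3M;
Console.WriteLine(third);                   // 0.3333333333333333333333333333
Console.WriteLine(decimal.Round(third, 4)); // 0.3333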
If you don't want the default output, use an explicit format string, such as d3.ToString("0.#####"), which includes up to five digits after the decimal point and drops trailing zeros.
The following complete program shows all the effects above, plus that explicit formatting (the final lines show how to get a fixed number of places after the decimal point, e.g. for currency):
using System;
                    
public class Program {
  public static void Main() {
    decimal d1 = 0M;
    decimal d2 = 0.00M;
    Console.WriteLine(d1); // 0
    Console.WriteLine(d2); // 0.00
    d1 = 7M;
    d2 = 10.00M;
    decimal d3 = 5.0M;
    Console.WriteLine(d1 * d3); // 35.0
    Console.WriteLine(d2 * d3); // 50.000
    d1 = 1234567.89000M;
    Console.WriteLine(d1);                     // 1234567.89000
    Console.WriteLine(d1.ToString("0.#####")); // 1234567.89
    Console.WriteLine(d2 * d3);                  // 50.000 (see above)
    Console.WriteLine((d2 * d3).ToString("F2")); // 50.00
  }
}
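Note that format strings only affect how the value is displayed; the stored scale is untouched. If you want to change the stored scale itself (say, keep monetary amounts at two decimal places), decimal.Round will reduce it, although it never pads with extra zeros; a rough sketch:
decimal total = 10.00M * 5.0M;              // 50.000 (scale 3)
decimal money = decimal.Round(total, 2);    // 50.00  (scale 2)
Console.WriteLine(money);                   // 50.00
Console.WriteLine(decimal.Round(50.0M, 2)); // 50.0 (Round never adds places)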