Compare the performance of the float, double, and decimal data types in C#


A computer’s math coprocessor typically handles some data types faster than others. To see which works best on my system, and to learn how big the difference is, I wrote this program to compare the performance of calculations that use the float, double, and decimal data types.

Enter a number of trials and click the Go button to execute the following code.

// Compare the performance of the float, double, and decimal types.
// (Stopwatch requires using System.Diagnostics.)
private void btnGo_Click(object sender, EventArgs e)
{
    txtTimeFloat.Clear();
    txtTimeDouble.Clear();
    txtTimeDecimal.Clear();
    Cursor = Cursors.WaitCursor;
    Refresh();

    int num_trials = int.Parse(txtNumTrials.Text);
    Stopwatch watch = new Stopwatch();
    float float1, float2, float3;
    double double1, double2, double3;
    decimal decimal1, decimal2, decimal3;

    // Time the float calculations.
    watch.Start();
    for (int i = 0; i < num_trials; i++)
    {
        float1 = 1.23f;
        float2 = 4.56f;
        float3 = float1 / float2;
    }
    watch.Stop();
    txtTimeFloat.Text =
        watch.Elapsed.TotalSeconds.ToString() + " sec";
    txtTimeFloat.Refresh();

    // Time the double calculations.
    watch.Reset();
    watch.Start();
    for (int i = 0; i < num_trials; i++)
    {
        double1 = 1.23d;
        double2 = 4.56d;
        double3 = double1 / double2;
    }
    watch.Stop();
    txtTimeDouble.Text =
        watch.Elapsed.TotalSeconds.ToString() + " sec";
    txtTimeDouble.Refresh();

    // Decimal is much slower, so run one tenth as many trials
    // and scale the measured time by 10 when displaying it.
    num_trials /= 10;
    watch.Reset();
    watch.Start();
    for (int i = 0; i < num_trials; i++)
    {
        decimal1 = 1.23m;
        decimal2 = 4.56m;
        decimal3 = decimal1 / decimal2;
    }
    watch.Stop();
    txtTimeDecimal.Text = "~" +
        (watch.Elapsed.TotalSeconds * 10).ToString() + " sec";

    Cursor = Cursors.Default;
}

The code starts by clearing its result text boxes and parsing the desired number of trials. It then runs a loop that performs a simple division on float variables, displays the elapsed time, and repeats those steps for the double data type.

Next the code repeats the loop for the decimal data type. When I first ran the program, I discovered that decimal was much slower than the other two types, so to finish in a reasonable amount of time the program divides the number of trials by 10 and then multiplies the elapsed time by 10.

In my tests, performing 100 million calculations took about 0.45 seconds for the float data type, about 0.60 seconds for the double data type, and an estimated 22.04 seconds for the decimal data type.

The moral is: if you want the best performance, use float. If you need greater accuracy, use double; the performance difference between the two isn’t that big anyway.

If you need even more accuracy and are willing to wait much longer, use decimal. Those are the results on my computer, at least. If you run the program on yours and find that double or decimal gives a faster result than float, please post a comment below.
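
To make the accuracy side of that trade-off concrete, here’s a small illustration of my own (it is not part of the example program) showing roughly how many significant digits each type keeps. The exact output varies slightly by runtime version, but the digit counts hold.

using System;

class PrecisionDemo
{
    static void Main()
    {
        // Dividing 1 by 3 shows how many digits each type can represent.
        Console.WriteLine(1f / 3f);  // float:   ~7 significant digits
        Console.WriteLine(1d / 3d);  // double:  ~15-16 significant digits
        Console.WriteLine(1m / 3m);  // decimal: ~28-29 significant digits
    }
}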

Note that all of these calculations are fast. Even decimal took only about 1/5 of a microsecond per calculation, so speed will be an issue only for programs that perform a huge number of calculations.
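
If you want to try the comparison without building the form, the following console version is a minimal sketch of my own that uses the same Stopwatch pattern as the program above. (The class and variable names here are mine, not part of the original project; printing the last result from each loop keeps the compiler from optimizing the loops away entirely.)

using System;
using System.Diagnostics;

class SpeedTest
{
    static void Main()
    {
        const int numTrials = 100_000_000;
        Stopwatch watch = new Stopwatch();

        // Time the float calculations.
        float float1, float2, float3 = 0;
        watch.Start();
        for (int i = 0; i < numTrials; i++)
        {
            float1 = 1.23f;
            float2 = 4.56f;
            float3 = float1 / float2;
        }
        watch.Stop();
        Console.WriteLine($"float:   {watch.Elapsed.TotalSeconds} sec ({float3})");

        // Time the double calculations.
        double double1, double2, double3 = 0;
        watch.Restart();
        for (int i = 0; i < numTrials; i++)
        {
            double1 = 1.23d;
            double2 = 4.56d;
            double3 = double1 / double2;
        }
        watch.Stop();
        Console.WriteLine($"double:  {watch.Elapsed.TotalSeconds} sec ({double3})");

        // Decimal is much slower, so run one tenth as many trials
        // and scale the measured time by 10.
        decimal decimal1, decimal2, decimal3 = 0;
        watch.Restart();
        for (int i = 0; i < numTrials / 10; i++)
        {
            decimal1 = 1.23m;
            decimal2 = 4.56m;
            decimal3 = decimal1 / decimal2;
        }
        watch.Stop();
        Console.WriteLine($"decimal: ~{watch.Elapsed.TotalSeconds * 10} sec ({decimal3})");
    }
}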



6 Responses to Compare the performance of the float, double, and decimal data types in C#

  1. CarlD says:

It’s likely in this test that the float & double performance is dominated by memory access (specifically writes), which is why float is almost exactly 2x the performance of double. In more complex sequences where the compiler can avoid writing results to memory, the difference between float and double should be smaller.

    Decimal, on the other hand, is a complex type supported by a ton of C/C++ code in the CLR (it actually comes from oleaut32.dll, last I checked). This type gets practically no help from the hardware, which is why its performance is so much worse.

    You didn’t indicate whether you were running on 64-bit hardware, and if on 64-bit hardware if you were running as a 64-bit process. You may find significant differences between 32/32, 32/64 and 64/64.

  2. Rod Stephens says:

    Good points.

    I ran this on 64-bit hardware using a 32-bit OS and then again using a 64-bit OS. There wasn’t much difference in performance between the two.

    They’re all fast enough for many applications.

  3. E Anderson says:

    I modified the program to compare calculations that use constant values (as you did) with calculations that increment the values by 1 each time through the loop.

    Interestingly, and consistently I should add, the calculations with constant values result in times of 0.596f/0.723d/15.18m, while incrementing the values results in 0.773f/0.756d/33.46m.

    I believe the differences are due to caching at some level. Or it could just be my code. 😉

  4. Alfred Neuman says:

    Decimal is so much slower because a Decimal type is a scaled integer and the FPU (Floating Point Unit, or Math Co-processor) doesn’t get involved in decimal operations. The FPU is highly specialized and performs floating point operations very, very fast. Again, Decimal is a scaled integer; no floating point operations involved.

  5. Rod Stephens says:

    Thanks Alfred. Makes sense.

  6. Pingback: Compare the performance of the int, long, and byte data types in C# - C# Helper
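
As a footnote to Alfred’s comment above, you can see decimal’s scaled-integer layout directly. The static decimal.GetBits method returns the type’s raw representation: a 96-bit integer plus a power-of-ten scale. This quick sketch is my own illustration, not from the comment thread.

using System;

class DecimalLayout
{
    static void Main()
    {
        // A decimal holds a 96-bit integer (lo, mid, hi words) plus a
        // flags word whose bits 16-23 store a power-of-ten scale. There
        // are no floating-point fields, so the FPU never gets involved.
        int[] bits = decimal.GetBits(1.23m);   // { lo, mid, hi, flags }
        int scale = (bits[3] >> 16) & 0xFF;
        Console.WriteLine($"integer = {bits[0]}, scale = 10^-{scale}");
        // Prints: integer = 123, scale = 10^-2 (since 1.23 = 123 / 10^2)
    }
}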
