Title: Compare the performance of the float, double, and decimal data types in C#
A computer's math coprocessor typically gives the best performance with a particular data type. To see which works best on my system and to learn how big the difference is, I wrote this program to compare the performance of calculations that use the float, double, and decimal data types.
Enter a number of trials and click the Go button to execute the following code.
// Compare performances.
// (Stopwatch requires a using System.Diagnostics; directive.)
private void btnGo_Click(object sender, EventArgs e)
{
    txtTimeFloat.Clear();
    txtTimeDouble.Clear();
    txtTimeDecimal.Clear();
    Cursor = Cursors.WaitCursor;
    Refresh();

    int num_trials = int.Parse(txtNumTrials.Text);
    Stopwatch watch = new Stopwatch();
    float float1, float2, float3;
    double double1, double2, double3;
    decimal decimal1, decimal2, decimal3;

    // Time float calculations.
    watch.Start();
    for (int i = 0; i < num_trials; i++)
    {
        float1 = 1.23f;
        float2 = 4.56f;
        float3 = float1 / float2;
    }
    watch.Stop();
    txtTimeFloat.Text =
        watch.Elapsed.TotalSeconds.ToString() + " sec";
    txtTimeFloat.Refresh();

    // Time double calculations.
    watch.Reset();
    watch.Start();
    for (int i = 0; i < num_trials; i++)
    {
        double1 = 1.23d;
        double2 = 4.56d;
        double3 = double1 / double2;
    }
    watch.Stop();
    txtTimeDouble.Text =
        watch.Elapsed.TotalSeconds.ToString() + " sec";
    txtTimeDouble.Refresh();

    // Time decimal calculations.
    // Scale by a factor of 10 for decimal.
    num_trials /= 10;
    watch.Reset();
    watch.Start();
    for (int i = 0; i < num_trials; i++)
    {
        decimal1 = 1.23m;
        decimal2 = 4.56m;
        decimal3 = decimal1 / decimal2;
    }
    watch.Stop();
    txtTimeDecimal.Text = "~" +
        (watch.Elapsed.TotalSeconds * 10).ToString() + " sec";

    Cursor = Cursors.Default;
}
The code starts by clearing its result text boxes and getting the number of trials desired. It then runs a loop that performs a simple mathematical operation on float variables and displays the elapsed time. It then repeats those steps for the double data type.
Next the code repeats the loop for the decimal data type. After running the program, I discovered that the decimal data type was much slower than the other types. To make the decimal test run in a reasonable amount of time, the program divides the number of trials by 10 and then multiplies the elapsed time by 10.
If you look closely at the picture, you'll see that to perform 100 million calculations the program used about 0.45 seconds for the float data type, about 0.60 seconds for the double data type, and an estimated 22.04 seconds for the decimal data type.
The moral is: if you want performance, use float; if you need greater accuracy, use double. The performance difference between those two isn't that big anyway. If you need a lot of accuracy and are willing to wait much longer, use decimal. At least that's how it works on my computer.
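To see the accuracy differences directly, here's a small console sketch (not part of the example program) that compares the three types. The digit counts in the comments reflect the documented precision of each type: roughly 7 significant digits for float, 15-16 for double, and 28-29 for decimal. Because decimal stores values in base 10, it can also represent values such as 0.1 exactly, which binary floating-point types cannot.

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // float: ~7 significant digits; double: ~15-16; decimal: 28-29.
        float f = 1f / 3f;
        double d = 1d / 3d;
        decimal m = 1m / 3m;
        Console.WriteLine(f);
        Console.WriteLine(d);
        Console.WriteLine(m);

        // decimal represents 0.1 exactly; double does not,
        // so the first comparison is False and the second is True.
        Console.WriteLine(0.1 + 0.2 == 0.3);     // False
        Console.WriteLine(0.1m + 0.2m == 0.3m);  // True
    }
}
```

That exactness is why decimal is the usual choice for currency values, where rounding surprises are unacceptable, despite its slower arithmetic.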
Note that all of these calculations are fast. Even decimal took only about 1/5 of a microsecond per calculation. That means speed will only be an issue for programs that perform a huge number of calculations.
Download the example to experiment with it and to see additional details.
