I've been doing C# development since the early days of .NET but for some reason I still can't figure the following out...
We have float.ToString("Fn"), where "n" specifies the number of digits you want after the decimal point. So "F2" will convert 123.456 to "123.46". The documentation calls the digit after "F" the "precision specifier" and even says: "The precision specifier indicates the desired number of decimal places".
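To make the behavior concrete, here is a minimal illustration of what "Fn" does (using the invariant culture so the decimal separator is always "."):

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        float value = 123.456f;
        // "F2" rounds to exactly two digits after the decimal point.
        Console.WriteLine(value.ToString("F2", CultureInfo.InvariantCulture)); // 123.46
    }
}
```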
Maybe I'm missing something, but "precision" and "decimal places" aren't exactly the same thing. To me, "precision" means the number of significant digits.
So according to me, the following values all have the same precision:
0.0012345, 0.012345, 0.12345, 1.2345, 12.345, 123.45, 1234.5, 12345, 123450, 1234500
...yet if you were to specify ToString("F5"), the output won't look anything like the above.
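For example (a quick sketch showing the mismatch), "F5" always prints five digits after the decimal point, so the number of significant digits varies wildly across the range:

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        double[] values = { 0.0012345, 0.12345, 12.345, 12345, 1234500 };
        foreach (double v in values)
        {
            // Always five digits after the point, regardless of magnitude.
            Console.WriteLine(v.ToString("F5", CultureInfo.InvariantCulture));
        }
        // Output ranges from "0.00123" (3 significant digits)
        // to "1234500.00000" (12 digits printed).
    }
}
```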
So let's say I want to display a floating-point value that can span a large range with a fixed number of significant digits. How do I do that? Up to now I have used an algorithm that uses Log10 to find the position of the decimal point and then computes the right value of n for ToString("Fn").
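Roughly, my current approach looks like this (a sketch; the helper name ToSignificant is just for illustration):

```csharp
using System;
using System.Globalization;

class Program
{
    // Find the position of the most significant digit with Log10,
    // then derive how many decimal places "Fn" needs so that
    // `sigDigits` significant digits are shown.
    static string ToSignificant(double value, int sigDigits)
    {
        if (value == 0)
            return value.ToString("F" + (sigDigits - 1), CultureInfo.InvariantCulture);

        int magnitude = (int)Math.Floor(Math.Log10(Math.Abs(value)));
        int decimals = Math.Max(0, sigDigits - magnitude - 1);
        return value.ToString("F" + decimals, CultureInfo.InvariantCulture);
    }

    static void Main()
    {
        Console.WriteLine(ToSignificant(0.0012345, 5)); // 0.0012345
        Console.WriteLine(ToSignificant(12.345, 5));    // 12.345
        Console.WriteLine(ToSignificant(1234500, 5));   // 1234500
    }
}
```

It works, but it feels like a lot of manual effort for something the formatting system might already support.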
Is there a better way?