I have some MATLAB code that writes test vectors to a CSV file. A C++ program reads that file and uses the MATLAB-generated vectors to decide pass/fail for an algorithm implementation. Part of what I generate is the output of a spherical-to-Cartesian coordinate conversion, so I am computing r*sin(theta)*cos(phi). When theta is 0, the expression evaluates to 0, as you would expect. However, when 90 degrees < phi < 270 degrees (i.e. in the second or third quadrant, where cosine is negative), MATLAB returns a negative zero. If I have MATLAB display the value, I see a positive 0 with no indication that it is negative, but when I write the number to my CSV file, I end up getting -0.000. I am using fprintf(fileId, "%10.10e", res) to write the result because I need to control the precision of the output, and writetable() would not always give accurate enough precision for the test vectors.
This causes problems because if the C++ program reads a -0 for some functions, it treats it as a different kind of error case, so I don't want MATLAB writing -0.000 values.
I can confirm it is being stored in memory as a negative zero by calling num2hex(), which returns 8000000000000000 instead of 0000000000000000. Is there anything I can do to prevent this from occurring? My answers need to stay negative when they are genuinely negative, so I can't wrap the result in abs(), but a negative zero feels like it should be incorrect in this scenario, since multiplication by 0 is defined as 0 and sin(0) = 0.
value = 10.0*sin(0.0)*cos(3*pi/2);
num2hex(value)                       % '8000000000000000'
fprintf(fileId, "%10.4e", value)     % writes -0.0000e+00

value = 10.0*sin(0.0)*cos(0);
num2hex(value)                       % '0000000000000000'
fprintf(fileId, "%10.4e", value)     % writes 0.0000e+00