I have a Simulink design which is supposed to be implemented on an FPGA. Currently, the design runs with identical fixed-point arithmetic for each block, i.e. I have two variables in my workspace that control the word size and the fraction length, and all blocks have their data type manually set to these variables. So far I have been using a large word length of 64 bit to avoid overflows during simulations. However, I now need to optimize the design in order to fit it on the target architecture.
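For context, this is roughly how the shared type is set up (the variable names `WL`/`FL` are illustrative; every block's data type field references them via `fixdt`):

```matlab
% Shared fixed-point settings referenced by every block in the model.
WL = 64;                % word length in bits
FL = 32;                % fraction length in bits
T  = fixdt(1, WL, FL);  % signed fixed-point type, entered as the block data type
```
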
I have been working with the Fixed-Point Tool for a couple of days now, but the results are not satisfying; I have run into several issues with this optimization add-on.
In my workspace, I calculate the coefficients of many z-transfer-function blocks. They are converted to a fixed-point data type inside my m-file. The Fixed-Point Tool ignores these data types during optimization (see Fig.1). As a result, they deviate strongly from the dynamic range proposed by the tool. How can I automatically select the best data type for these parameters as well? I cannot use "Inherit", since the HDL designer, which I use to convert my Simulink model to VHDL code, does not accept this setting.
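What I currently do in the m-file is essentially the following sketch (the coefficient values are examples): I let `fi` pick the best-precision fraction length for a fixed word length, which is the only "automatic" scaling I have found for workspace parameters, but it is decoupled from whatever the tool proposes for the block:

```matlab
% Convert computed coefficients to fixed point with a fixed word length;
% omitting the fraction length makes fi choose best-precision scaling.
WL   = 40;                        % workspace word-length variable
b    = [0.0912 0.1824 0.0912];    % example numerator coefficients
bFix = fi(b, 1, WL);              % signed, 40-bit, auto fraction length
fl   = bFix.FractionLength;       % scaling chosen by fi, per coefficient array
```
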
How do I find the required precision (resolution) of my signals? I start my analysis by collecting the signal ranges with double-precision floating-point arithmetic, i.e. I override my settings and perform a simulation. I can then sort the results in the explorer tab (see Fig.2). However, if I sort by "word length", then all signals show identical precision: the tool reports the precision given by the predefined fixed-point data type in my workspace, which is of course the same everywhere, since all blocks use the same predefined arithmetic. I want a histogram or list that shows me which signal requires the highest resolution. Is this possible?
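As a workaround, I can estimate the required resolution from the logged double-precision data myself. This is a sketch under the assumption that each signal was logged to the workspace (the signal names and data here are made up); it ranks signals by the integer bits needed for their range and the fraction bits needed to resolve their smallest nonzero value:

```matlab
% Rank logged signals by required integer and fraction bits.
signals = {'err', randn(1000,1)*0.01; ...   % illustrative logged data
           'acc', randn(1000,1)*50};
for k = 1:size(signals, 1)
    x = signals{k, 2}(:);
    x = x(x ~= 0);
    intBits  = max(ceil(log2(max(abs(x)))) + 1, 1);  % range bits incl. sign
    fracBits = max(-floor(log2(min(abs(x)))), 0);    % bits to resolve min |x|
    fprintf('%-6s int bits: %3d   frac bits: %3d\n', ...
            signals{k, 1}, intBits, fracBits);
end
```
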
I fail to successfully optimize both the word length and the fraction length of my design. When using the iterative optimization, I can only optimize one at a time, e.g. I can tell the tool to place the binary point automatically, but the word length is then left untouched. From trial and error, I know that my design runs properly with 40-bit resolution. Therefore, I start by setting the word length to 40 bit in my workspace and then let the tool select the best fraction length for each signal. This yields a more or less good result (see Fig.3). However, there is still plenty of room for further optimization, i.e. many signals do not use the whole dynamic range that their setting allows. If I now run a second iteration where I let the tool choose the word size, the result is nonsense: many overflows occur during simulation and the model becomes unstable. Defining a tolerance at important signals, e.g. a Rel Tol of 0.01, does not change the result; the tool seems to ignore this setting. How can I adjust the settings such that every signal gets its optimum word and fraction length?
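What I would expect the second iteration to do, and what I may end up scripting myself, is roughly the following per-signal calculation (a sketch, assuming the quantization step must stay below the relative tolerance times the observed range maximum; `xmax` is an example value):

```matlab
% Derive a per-signal word length from observed range and relative tolerance.
relTol   = 0.01;                              % Rel Tol set on the signal
xmax     = 37.5;                              % observed range max (example)
intBits  = ceil(log2(xmax)) + 1;              % integer bits incl. sign
fracBits = ceil(log2(1/(relTol * xmax)));     % step 2^-fracBits <= relTol*xmax
WL = intBits + fracBits;                      % 9 bits for this example
T  = fixdt(1, WL, fracBits);                  % resulting per-signal type
```
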
The Fixed-Point Designer has nice visualization tools, e.g. the histograms. How can I export them? Is it possible to edit these graphs (e.g. change axis labels) like normal MATLAB figures?
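If there is no direct export, one fallback I have considered is recreating the bit-weight histogram from logged signal data as an ordinary figure, which can then be styled and saved like any other MATLAB plot (a sketch with made-up data):

```matlab
% Recreate a bit-weight histogram as a normal, editable MATLAB figure.
x = randn(1e4, 1) * 12;             % illustrative logged signal
w = floor(log2(abs(x(x ~= 0))));    % bit weight of each nonzero sample
histogram(w);
xlabel('Bit weight (log2 of |value|)');
ylabel('Occurrences');
saveas(gcf, 'bit_weights.png');     % export like any other figure
```
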