Hello,

In the documentation there is a hint to use mwSize rather than int for cross-platform flexibility. What is the actual advantage? The background of my question is the following: I want to write (and have partly already written) C code that I want to use in MATLAB. I tried to find a good workflow to extend the existing C code and use it in MATLAB. The big problem I came across is debugging. I found this very helpful blog entry: https://blogs.mathworks.com/developer/2018/06/19/mex-debugging-vscode/ Nevertheless, it is pretty cumbersome to debug like this all the time, especially because MATLAB crashes fairly often when bugs occur during development of new C code. Therefore, I thought it would be better to write the code in a C IDE and just put the mexFunction wrapper around my code at the end. My question is: do I lose a lot of performance if I do it this way?

Furthermore, a question arose regarding the 'MATLAB Support for Interleaved Complex API in MEX Functions'. Which API is preferred for high speed applications?

James Tursa
on 21 Jan 2021

Edited: James Tursa
on 21 Jan 2021

"Furthermore, a question arose regarding the 'MATLAB Support for Interleaved Complex API in MEX Functions'. Which API is preferred for high speed applications?"

This is somewhat of a moot point given that the choice is determined by the MATLAB version you are using. If you are using R2017b or earlier, then you will be using two separate Real/Imaginary data areas. If you are using R2018a or later, then you will be using a single interleaved Complex data area. There are no MATLAB versions that simultaneously support both methods. Compiling a mex routine in R2018a or later with the -R2017b memory model option simply forces the mex routine to do a copy-in/copy-out on all complex variables in the background. It does not change the underlying storage scheme of the variable data, which is always interleaved complex in R2018a and later.

As to which is faster, that depends on what you are doing. Note that the BLAS and LAPACK complex linear algebra library routines that MATLAB uses only support the interleaved complex data model (in every version of MATLAB, not just R2018a and later), so that drives the comments below. E.g.,

Matrix Multiply real * complex:

The R2017b separate storage scheme will be faster because the BLAS matrix multiply routines can be called directly without any intermediate data copying needed. I.e., the individual real*real and real*imaginary pieces can be done by making two calls to the real BLAS matrix multiply routine, with the results stuffed directly into the MATLAB output variable. For the R2018a interleaved storage scheme to use the complex BLAS matrix multiply routine in this case, it must first deep copy the real variable into a complex variable with imaginary part 0, and then make the call. So R2018a wastes extra memory and time on the intermediate deep copy, plus a lot of unnecessary multiplies by zero.

Linear Algebra calls to complex LAPACK routines:

The R2018a interleaved storage scheme will be faster because the input can be passed directly to the LAPACK routine and the output stuffed directly into a MATLAB variable. No intermediate deep data copying needed. For the R2017b separate storage scheme to use the complex LAPACK routine, it must first deep copy the separate real/imaginary data areas into a single contiguous interleaved area and then pass that to the LAPACK routine. Then it must take the interleaved result and deep copy it into two separate real/imaginary data areas for output back to MATLAB. So extra wasted memory and time to do the intermediate deep data copies.

Walter Roberson
on 23 Dec 2020

In C, int is permitted to be 16 bits or larger. It is common for compilers to treat int as 32 bits. It is uncommon for compilers to treat int as 64 bits: 64 bits is typically long int or long long.

Meanwhile, mwSize is defined as 64 bits provided that large array dims is enabled, which it should be for any 64-bit target.

Using int puts the correctness of your code at the mercy of the compiler's default integer size, instead of using the fixed size that MATLAB requires.

Walter Roberson
on 20 Jan 2021

At some point in the future, MathWorks might decide to switch to (for example) 80 bits for the MATLAB class double. Interfaces coded with mxDouble would make the transition automatically, but code that uses C or C++ double would have to be upgraded.

Also, some vendors such as IBM are actively pursuing non-IEEE-754 floating point, as some of the design choices of 754 limit performance improvements (in particular, signaling NaNs and denormalized numbers).
