
Formatting Data in dlarray for Deep Learning Models

17 views (last 30 days)
Isabelle Museck on 10 Jul 2024 at 13:28
Edited: Matt J on 11 Jul 2024 at 0:36
Hello there. I have data stored in a cell array that I am trying to convert to a dlarray format. The data I am trying to convert is stored in "XTrain" and is a 1x9 cell, with each cell containing a 3x541 double. The 9 cells correspond to the 9 trials (batches), and within each trial the 3 rows are the three input channels of data and the 541 columns are the time steps for that trial.
I want to convert this to a dlarray with 9 batches, 3 channels, and 541 time steps. When I run the following code:
XTrain = dlarray(cat(3,XTrain{:}),'CTB')
I do get XTrain = 3 (C) x 9 (B) x 541 (T) dlarray, but the data looks like this:
XTrain:
(:,:,1) =
-9.8044 -9.8147 -9.8693 -9.8247 -9.8124 -9.8727 -9.8543 -9.8656 -9.8525
-0.2282 -0.2896 -0.2260 -0.3172 -0.2189 -0.3087 -0.1495 -0.2691 -0.3280
0.7071 0.7812 0.4556 0.6039 0.1103 0.1425 0.0086 -0.0670 0.3715
(:,:,2) =
-9.8104 -9.8155 -9.8573 -9.8475 -9.8178 -9.8778 -9.8626 -9.8563 -9.8679
-0.2714 -0.3503 -0.2347 -0.3274 -0.2795 -0.3116 -0.1470 -0.2499 -0.3460
0.7076 0.7897 0.4830 0.5738 0.1197 0.1358 0.0611 -0.1045 0.4273
... (each slice is 3x9, and the display continues in this pattern up to (:,:,209))
I am trying to get the data to have 9 iterations (corresponding to the 9 batches/trials), with the 3 rows corresponding to the 3 input channels and the 541 columns corresponding to the time steps, so that the display only goes up to (:,:,9). How would I go about changing it into this format? Any help is greatly appreciated!
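For reference, here is a minimal, self-contained version of the conversion (assuming XTrain starts as the 1x9 cell of 3x541 doubles described above):

% XTrain is a 1x9 cell; each cell is a 3x541 double (channels x time steps)
X = dlarray(cat(3, XTrain{:}), 'CTB');  % stack the 9 trials along dim 3 and label the dimensions
dims(X)   % returns 'CBT': dlarray stores the labeled dimensions as channel x batch x time
size(X)   % 3 x 9 x 541, so paging over the 3rd dimension steps through the 541 time steps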

Answers (1)

Matt J on 10 Jul 2024 at 18:32
Edited: Matt J on 11 Jul 2024 at 0:36
You cannot do that: dlarray always stores labeled dimensions in its own canonical order, which is why the batch dimension ends up second and the display pages over the time dimension. But you shouldn't need to. Because the dimensions are all identified by labels (in this case CBT), trainnet() will be able to interpret what each dimension means on its own.
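If you do want to look at a single trial as a 3x541 matrix, you can index the batch dimension and strip the dlarray wrapper; a minimal sketch, assuming XTrain is the 3 (C) x 9 (B) x 541 (T) dlarray from the question:

trial1 = squeeze(extractdata(XTrain(:, 1, :)));  % 3x541 double: channels x time steps for trial 1
size(trial1)   % returns 3 541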
