That's pretty much how every serious dense matrix (or more generally, multi-dimensional array) representation has been done since, well, forever. It's only C programmers writing "My First Matrix Program" who insist on allocating each row separately, since that language lacks native dynamically sized multidimensional arrays.
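For concreteness, here is a minimal C sketch of the contrast: one contiguous block addressed with index arithmetic versus the per-row allocation pattern. The helper names are made up for illustration and error handling is omitted.

    #include <stdlib.h>

    /* One contiguous allocation; element (i, j) lives at a[i*n + j]
       (row-major, matching C's native 2-D array layout). */
    double *matrix_alloc(size_t m, size_t n)
    {
        return malloc(m * n * sizeof(double));
    }

    /* The per-row pattern: m + 1 allocations, an extra pointer
       indirection on every access, and worse cache locality. */
    double **matrix_alloc_rows(size_t m, size_t n)
    {
        double **rows = malloc(m * sizeof *rows);
        for (size_t i = 0; i < m; i++)
            rows[i] = malloc(n * sizeof **rows);
        return rows;
    }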
A 2-dimensional array A will, in the object program, be stored sequentially in the order A(1,1), A(2,1), ..., A(m,1), A(1,2), A(2,2), ..., A(m,2), ..., A(m,n). Thus it is stored “columnwise”, with the first of its subscripts varying most rapidly, and the last varying least rapidly. The same is true of 3-dimensional arrays. 1-dimensional arrays are of course simply stored sequentially. All arrays are stored backwards in storage; i.e. the above sequence is in the order of decreasing absolute location.
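In 0-based terms, that column-major rule puts element (i, j) of an m-by-n array at flat offset i + j*m; the "stored backwards" detail only reverses the direction addresses grow, not the ordering. A small C sketch of the rule (the helper name colmajor_offset is invented for illustration):

    #include <stdio.h>

    /* Column-major ("Fortran order") offset for 0-based (i, j) in an
       m-by-n array: the first subscript varies most rapidly. */
    size_t colmajor_offset(size_t i, size_t j, size_t m)
    {
        return i + j * m;
    }

    int main(void)
    {
        size_t m = 3, n = 2;
        /* Prints the manual's ordering A(1,1), A(2,1), ..., A(m,n),
           i.e. flat offsets 0..5 in sequence; indices are printed
           1-based to match the quote above. */
        for (size_t j = 0; j < n; j++)
            for (size_t i = 0; i < m; i++)
                printf("A(%zu,%zu) -> offset %zu\n",
                       i + 1, j + 1, colmajor_offset(i, j, m));
        return 0;
    }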
Yes! I was surprised how simple it was. Obvious once you know how. I never really considered multidimensional storage until looking into Machine Learning recently. My only brush with it is relational database tuning, but that is very different, as the number of "dimensions" is small and they are usually not homogeneous.