You're welcome, Arben.
This framework is fundamental to many fields, so I'm willing to assist with your call for comment. We can all accept the most reasonable result achievable within the time budget, and the present condition of this framework is indeed quite good. I myself can only devote so much time to the call for comment.
Linear algebra introduces an arbitrary system of m linear equations in n unknowns x1, ..., xn as the sum over j of [aij xj] = bi for i = 1, ..., m, which is written compactly as A X = B, where the matrix is defined as A = [a11, ..., a1n; a21, ..., a2n; ...; am1, ..., amn], and the vectors are defined as X = [x1, ..., xn] and B = [b1, ..., bm]. So a vector can be seen simply as a row of a matrix, a column of a matrix, or more generally as a list of two or more scalars.
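As a minimal sketch of what A X = B means in practice, here is a hypothetical 2x2 system (the coefficients are made-up numbers, not from the framework) solved by Cramer's rule; larger systems would use elimination or a library routine:

```python
# Hypothetical system of 2 equations in 2 unknowns:
#   1*x1 + 2*x2 = 5
#   3*x1 + 4*x2 = 11
# i.e. A X = B with A, B as below.

A = [[1.0, 2.0],
     [3.0, 4.0]]
B = [5.0, 11.0]

# Cramer's rule for the 2x2 case: each unknown is a ratio of determinants.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x1 = (B[0] * A[1][1] - A[0][1] * B[1]) / det
x2 = (A[0][0] * B[1] - B[0] * A[1][0]) / det
# Check by substitution: 1*x1 + 2*x2 = 5 and 3*x1 + 4*x2 = 11.
```

The point is only to connect the index notation to the matrix form; any standard solver does the same job.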
Applying vectors to Euclidean space, whether n = 2, 3 or higher dimensions, introduces the concept of orthogonality, wherein it is understood that a change in any one unknown, say xi, does not affect the other unknowns; i.e., in Euclidean 3D a change delta x does not change y or z. This is simply understood for 2D and 3D Euclidean space, as we humans can visualize the orthogonal directions and associated unit vectors, but we need help to understand any applications of this beyond n = 3.
That help is the concept of an orthonormal basis. For instance, say we want to decompose a list of words describing colour. In literature all the colours are given a word or word combination (orange, white, bluish green, ...), wherein the word combinations are unique. Fields of study such as artificial intelligence (AI) might choose to model these by assuming that words and word combinations, being unique as spelled, are also unique in meaning, that is, can be modelled mathematically as orthogonal, i.e. that each colour word or word combination is orthogonal to the others. However, physicists' study of light reveals that colour is truly orthogonal in its vector representation in the primary colours <Red, Green, Blue>, i.e. RGB as an n = 3 orthonormal basis of colour. Fortunately the Gram-Schmidt process helps us determine an orthonormal basis when the xi are themselves not mutually orthogonal. I think this is an important thought as AI applications deepen beyond our comprehension of uniqueness.
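A minimal sketch of the Gram-Schmidt process in plain Python (the two input vectors are arbitrary made-up examples): each vector has its projections onto the basis found so far subtracted off, then is normalized.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scale(c, v):
    return [c * a for a in v]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def gram_schmidt(vectors):
    """Return an orthonormal basis spanning the input vectors."""
    basis = []
    for v in vectors:
        # Remove the component of v along each basis vector found so far.
        w = v
        for e in basis:
            w = sub(w, scale(dot(w, e), e))
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:  # skip (nearly) linearly dependent vectors
            basis.append(scale(1.0 / norm, w))
    return basis

# Two non-orthogonal vectors in the plane:
basis = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
# basis[0] and basis[1] are now unit length and mutually orthogonal.
```

The same routine works unchanged in any dimension, which is exactly the help we need beyond n = 3.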
In physics and engineering, where the techniques of linear algebra and calculus are well developed for conserved fields (read: conservation of mass, momentum, and energy) and the physics can be described in Euclidean space as orthogonal, it is preferred to keep the force vector functions and displacement vector functions separate, so the "physics" is more readable in the equations. An example of this is the application example I liked, for which I supplied a reworked notebook in a previous post; I should have separated the forces from the distances in all the equations. That way we would have seen an equation of the form F . d = <0, mg> . <0, h>, where <0, mg> is the force vector and <0, h> is the displacement vector, instead of collapsing straight to F . d = mgh.
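To illustrate keeping the force and displacement as separate vectors, here is a small sketch with hypothetical numbers (m, g, h are placeholders, not values from the notebook):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical mass, gravitational acceleration, and lift height:
m, g, h = 2.0, 9.81, 3.0

force = [0.0, m * g]         # force vector <0, mg>
displacement = [0.0, h]      # displacement vector <0, h>

work = dot(force, displacement)   # F . d, which equals m*g*h
```

Writing the two vectors out separately keeps the "physics" visible; the scalar mgh only appears at the final dot product.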
Keep going, Arben and team. You're doing well.
John