In [320]: import numpy as np
In [321]: vec1=np.array([0,0.5,1,0.5]); vec2=np.array([2,0.5,1,0.5])
     ...: vec=np.transpose(np.stack((vec1,vec2)))
In [322]: vec1.shape
Out[322]: (4,)
In [323]: vec.shape
Out[323]: (4, 2)
A nice thing about the stack function is that we can specify an axis, skipping the transpose:
In [324]: np.stack((vec1,vec2), axis=1).shape
Out[324]: (4, 2)
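A quick check (mine, not part of the original session) that the two constructions agree:

# axis=1 stacking gives the same array as transpose(stack(...))
np.allclose(vec, np.stack((vec1, vec2), axis=1))   # -> True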
Why the mix of np. and n. in your question? As posted it raises NameError: name 'n' is not defined. That kind of thing almost sends me away.
In [326]: mat = np.moveaxis(np.array([[[0,1,2,3],[0,1,2,3],[0,1,2,3],[0,1,2,3]],
     ...:                             [[-1,2.,0,1.],[0,0,-1,2.],[0,1,-1,2.],[1,0.1,1,1]]]), 0, 2)
In [327]: mat.shape
Out[327]: (4, 4, 2)
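To see what moveaxis did (a check I'm adding): the stacking axis is now last, so the second 4x4 block sits at mat[:,:,1]:

# moveaxis(..., 0, 2) shifted the size-2 stacking axis to the end
np.allclose(mat[:,:,1], [[-1,2.,0,1.],[0,0,-1,2.],[0,1,-1,2.],[1,0.1,1,1]])   # -> True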
In [328]: outvec=np.zeros((4,2))
     ...: for i in range(2):
     ...:     outvec[:,i]=np.dot(mat[:,:,i],vec[:,i])
     ...:     
In [329]: outvec
Out[329]: 
array([[ 4.  , -0.5 ],
       [ 4.  ,  0.  ],
       [ 4.  ,  0.5 ],
       [ 4.  ,  3.55]])
In [330]: # (4,4,2) (4,2)   'kji,ji->ki'
From your loop, the location of the i axis (size 2) is clear: last in all 3 arrays. That leaves one axis for vec; let's call that j. It pairs with the second axis of mat (the one next to i). k carries over from mat to outvec.
In [331]: np.einsum('kji,ji->ki', mat, vec)
Out[331]: 
array([[ 4.  , -0.5 ],
       [ 4.  ,  0.  ],
       [ 4.  ,  0.5 ],
       [ 4.  ,  3.55]])
Often the einsum string writes itself. For example, if mat was described as (m,n,k) and vec as (n,k), with the result being (m,k), the string is just 'mnk,nk->mk'.
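A sketch with made-up data (the names m, n, k, A, b are mine):

# The dimension names map straight into the subscript string.
m, n, k = 4, 4, 2
A = np.random.rand(m, n, k)           # described as (m,n,k)
b = np.random.rand(n, k)              # described as (n,k)
np.einsum('mnk,nk->mk', A, b).shape   # -> (4, 2)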
In this case only the j dimension is summed: it appears on the left, but not on the right. The last dimension, i in my notation, is not summed because it appears on both sides, just as it does in your iteration. I think of that as 'going-along-for-the-ride'. It isn't actively part of the dot product.
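To make that concrete (my check, not in the original transcript): fix i, drop it from the string, and you get back the plain matrix-vector dot from your loop:

# For each fixed i, the batched einsum reduces to an ordinary dot.
for i in range(2):
    assert np.allclose(np.einsum('kj,j->k', mat[:,:,i], vec[:,i]),
                       outvec[:,i])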
You are, in effect, stacking on the last dimension, the size 2 one. Usually we stack on the first, but you transpose both arrays to put that axis last.
Your 'failed' attempt runs, and its output can be reproduced as:
In [332]: np.einsum('ijk,il->ik', mat, vec)
Out[332]: 
array([[12. ,  4. ],
       [ 6. ,  1. ],
       [12. ,  4. ],
       [ 6. ,  3.1]])
In [333]: mat.sum(axis=1)*vec.sum(axis=1)[:,None]
Out[333]: 
array([[12. ,  4. ],
       [ 6. ,  1. ],
       [12. ,  4. ],
       [ 6. ,  3.1]])
The j and l dimensions don't appear on the right, so they are summed. They can be summed before multiplying because each appears in only one term. I added the None to enable broadcasting (multiplying an (i,k) array by an (i,) one).
np.einsum('ik,i->ik', mat.sum(axis=1), vec.sum(axis=1))
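If you want to confirm the three expressions agree (my check):

a1 = np.einsum('ijk,il->ik', mat, vec)
a2 = mat.sum(axis=1) * vec.sum(axis=1)[:,None]
a3 = np.einsum('ik,i->ik', mat.sum(axis=1), vec.sum(axis=1))
np.allclose(a1, a2) and np.allclose(a2, a3)   # -> True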
If you'd stacked on the first axis instead, vec would be (2,4); adding a trailing dimension makes it (2,4,1), which matmuls with the (2,4,4) mat: m1 @ v1[...,None].
In [337]: m1 = mat.transpose(2,0,1); v1 = vec.transpose(1,0)
In [338]: m1@v1[...,None]
Out[338]: 
array([[[ 4.  ],
        [ 4.  ],
        [ 4.  ],
        [ 4.  ]],
       [[-0.5 ],
        [ 0.  ],
        [ 0.5 ],
        [ 3.55]]])
In [339]: _.shape
Out[339]: (2, 4, 1)
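To tie that back to outvec (my addition): squeeze the trailing size-1 axis and move the batch axis back to the end:

res = (m1 @ v1[..., None])[..., 0]   # shape (2, 4)
np.allclose(res.T, outvec)           # -> True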