If a is jagged but i is multidimensional, we can still use i to index a:
In [78]: a = np.array([[10,0,30,10],[40,50,60,10],[70,80,90,10,30]])#NOTE:Jagged array                 
In [79]: i = np.array([[0,1],[0,2],[1,2]])                                                             
In [80]: a.shape    #  an array of list objects                                                                                       
Out[80]: (3,)
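Note that newer NumPy releases (1.24+) refuse to build a ragged array implicitly; you have to pass dtype=object yourself or you get a ValueError. A minimal sketch:

```python
import numpy as np

# NumPy 1.24+ raises ValueError for ragged input without an explicit dtype
a = np.array([[10, 0, 30, 10], [40, 50, 60, 10], [70, 80, 90, 10, 30]],
             dtype=object)
print(a.shape)  # (3,) - a 1-d array holding three list objects
```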
In [81]: a[i]                                                                                          
Out[81]: 
array([[list([10, 0, 30, 10]), list([40, 50, 60, 10])],
       [list([10, 0, 30, 10]), list([70, 80, 90, 10, 30])],
       [list([40, 50, 60, 10]), list([70, 80, 90, 10, 30])]], dtype=object)
Since the elements are list objects, sum applies list "+", which "concatenates" them:
In [82]: a[i].sum(axis=1)                                                                              
Out[82]: 
array([list([10, 0, 30, 10, 40, 50, 60, 10]),
       list([10, 0, 30, 10, 70, 80, 90, 10, 30]),
       list([40, 50, 60, 10, 70, 80, 90, 10, 30])], dtype=object)
Your list comprehension:
In [83]: e = [np.hstack(a[i[j]]) for j in range(len(i))]                                               
In [84]: e                                                                                             
Out[84]: 
[array([10,  0, 30, 10, 40, 50, 60, 10]),
 array([10,  0, 30, 10, 70, 80, 90, 10, 30]),
 array([40, 50, 60, 10, 70, 80, 90, 10, 30])]
Some timings:
In [85]: timeit a[i].sum(axis=1)                                                                       
8.64 µs ± 17.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [86]: timeit e = [np.hstack(a[i[j]]) for j in range(len(i))]                                        
63.3 µs ± 168 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Your hstack may be slower because it converts the lists to arrays.  Let's bypass that:
In [89]: [sum(a[i[j]],[]) for j in range(len(i))]                                                      
Out[89]: 
[[10, 0, 30, 10, 40, 50, 60, 10],
 [10, 0, 30, 10, 70, 80, 90, 10, 30],
 [40, 50, 60, 10, 70, 80, 90, 10, 30]]
In [90]: timeit [sum(a[i[j]],[]) for j in range(len(i))]                                               
8.41 µs ± 109 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Sometimes pure list solutions are faster; converting lists to arrays takes time.
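Along the same lines, itertools.chain is the idiomatic (and usually faster) way to concatenate many lists, since sum with a list start value copies the accumulator on every "+". A sketch using plain lists instead of object arrays (timings will vary by machine):

```python
from itertools import chain

a = [[10, 0, 30, 10], [40, 50, 60, 10], [70, 80, 90, 10, 30]]  # jagged
i = [[0, 1], [0, 2], [1, 2]]

# chain.from_iterable concatenates lazily, avoiding the repeated
# copying that sum(..., []) does with list "+"
e = [list(chain.from_iterable(a[j] for j in row)) for row in i]
print(e)
```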
===
If both arrays are non-ragged and multidimensional, we can use a pure "vectorized" solution:
In [104]: aa = np.array([[10,0,30,10],[40,50,60,10],[70,80,90,10]])                                    
In [105]: i                                                                                            
Out[105]: 
array([[0, 1],
       [0, 2],
       [1, 2]])
In [106]: aa[i]                                                                                        
Out[106]: 
array([[[10,  0, 30, 10],
        [40, 50, 60, 10]],
       [[10,  0, 30, 10],
        [70, 80, 90, 10]],
       [[40, 50, 60, 10],
        [70, 80, 90, 10]]])
In [107]: aa[i].reshape(3,-1)                                                                          
Out[107]: 
array([[10,  0, 30, 10, 40, 50, 60, 10],
       [10,  0, 30, 10, 70, 80, 90, 10],
       [40, 50, 60, 10, 70, 80, 90, 10]])
In [108]: timeit aa[i].reshape(3,-1)                                                                   
5.07 µs ± 57.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
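The reshape works because fancy indexing with the (3,2) index array into the (3,4) data array produces a (3,2,4) result, and the -1 lets NumPy infer the trailing dimension (2*4 = 8). A sketch of the shapes involved:

```python
import numpy as np

aa = np.array([[10, 0, 30, 10], [40, 50, 60, 10], [70, 80, 90, 10]])
i = np.array([[0, 1], [0, 2], [1, 2]])

picked = aa[i]                # fancy indexing: shape (3, 2, 4)
flat = picked.reshape(3, -1)  # -1 infers 2*4 = 8 columns
print(picked.shape, flat.shape)
```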
But once one or more of the arrays/lists is ragged you lose this option, and need to seriously consider list alternatives.