I am using the following code for feature importance calculation.
from matplotlib import pyplot as plt
from sklearn import svm

def feature_importances(coef, names):
    # Sort features by coefficient so the bar chart is ordered
    imp, names = zip(*sorted(zip(coef, names)))
    plt.barh(range(len(names)), imp, align='center')
    plt.yticks(range(len(names)), names)
    plt.show()

features_names = ['input1', 'input2']
clf = svm.SVC(kernel='linear')  # don't shadow the svm module
clf.fit(X, Y)  # X, Y: training data defined elsewhere
feature_importances(clf.coef_[0], features_names)  # coef_ is 2-D; take row 0
How would I be able to calculate feature importance for a non-linear kernel? The approach above doesn't give the expected result, since `coef_` is only defined for a linear kernel.
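One model-agnostic option is permutation importance, which works for any fitted estimator, including SVMs with non-linear kernels: each feature is shuffled in turn and the resulting drop in score is taken as its importance. Below is a minimal sketch using `sklearn.inspection.permutation_importance`; the synthetic `X`, `y` and the target that depends mostly on `input1` are assumptions for illustration, not part of the original question.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.inspection import permutation_importance

# Synthetic data (assumption): the label depends mainly on the first feature
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + 0.1 * X[:, 1] > 0.5).astype(int)

# Fit an SVM with a non-linear kernel; coef_ is unavailable here
clf = SVC(kernel='rbf').fit(X, y)

# Shuffle each feature n_repeats times and record the mean score drop
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, mean in zip(['input1', 'input2'], result.importances_mean):
    print(f"{name}: {mean:.3f}")
```

`result.importances_mean` can be passed to the same `barh` plotting helper in place of `coef_`, since it is a flat array of one value per feature.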