I am confused about the loss functions used in XGBoost. Here is what confuses me:
- We have `objective`, which is the loss function to be minimized, and `eval_metric`, the metric used to evaluate the learning result. These two are totally unrelated (apart from restrictions such as: for classification, only `logloss` and `mlogloss` can be used as `eval_metric`). Is this correct? If it is, then for a classification problem, how can you use `rmse` as a performance metric?
- Take two options for `objective` as an example: `reg:logistic` and `binary:logistic`. For 0/1 classification, the binary logistic loss (cross entropy) is usually the loss function to use, right? So which of the two options corresponds to this loss function, and what does the other one do? Say, if `binary:logistic` represents the cross-entropy loss, then what does `reg:logistic` do?
- What's the difference between `multi:softmax` and `multi:softprob`? Do they use the same loss function and just differ in the output format? If so, shouldn't the same hold for `reg:logistic` and `binary:logistic` as well?
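To make the `multi:softmax` / `multi:softprob` part of my question concrete, here is a plain-Python sketch of the two output formats as I understand them (the raw scores are made up, and this is only my mental model, not XGBoost's actual code):

```python
import math

def softmax(scores):
    # Turn raw per-class margin scores into a probability vector.
    # This vector is what I believe `multi:softprob` returns per row.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [1.0, 2.5, 0.3]          # hypothetical raw scores for 3 classes
probs = softmax(scores)           # `multi:softprob`-style output: full distribution
label = probs.index(max(probs))   # `multi:softmax`-style output: just the argmax class
```

If this picture is right, the two objectives would share the same underlying loss and differ only in what they emit at prediction time.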
Supplement for the second question:
Say the loss function for a 0/1 classification problem should be
L = -sum(y_i*log(P_i) + (1-y_i)*log(1-P_i)). Should I choose `binary:logistic` or `reg:logistic` to make the xgboost classifier use this loss L? If it is `binary:logistic`, then what loss function does `reg:logistic` use?
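For clarity, here is a small numeric sketch of the binary cross-entropy loss I have in mind (the labels and predicted probabilities are invented for illustration):

```python
import math

def binary_logloss(y, p):
    # Mean negative log-likelihood for 0/1 labels, i.e. the cross-entropy
    # loss L that I expect `binary:logistic` to minimize.
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for yi, pi in zip(y, p)) / len(y)

loss = binary_logloss([1, 0, 1], [0.9, 0.2, 0.8])  # made-up labels/probabilities
```

My question is whether this is the loss behind `binary:logistic`, behind `reg:logistic`, or behind both.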