I have a bash script that processes each file in a directory:
for (( index=0; index<COUNT; index++ ))
do
    srcFile=${INCOMING_FILES[$index]}
    "${SCRIPT_PATH}"/control.pl "${srcFile}" >> "${SCRIPT_PATH}/${LOG_FILE}" &
    wait $!    # wait for control.pl, so files are effectively handled one at a time
    removeIncomingFile "${srcFile}"
done
It works fine for a few files, but when the number of files is large it becomes too slow. I would like to run this processing in parallel over groups of files.
Example files:
server_1_1    |    server_2_1    |    server_3_1
server_1_2    |    server_2_2    |    server_3_2
server_1_3    |    server_2_3    |    server_3_3    
The script should process the files belonging to each server in parallel:
First instance  - server_1*
Second instance - server_2*
Third instance  - server_3*    
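For reference, the behaviour I am after could be sketched in plain bash with one background subshell per server. This is only a minimal sketch; `process_file` and the `group_*.log` names are hypothetical placeholders standing in for the real `control.pl` call and log file:

```shell
#!/usr/bin/env bash
# Sketch: one background subshell per server, so the three groups run in
# parallel while the files inside a group are still handled sequentially.

process_file() {
    echo "processed $1"    # real script would run: "${SCRIPT_PATH}"/control.pl "$1"
}

for server in 1 2 3; do
    (
        # files of one server, processed one after another within the group
        for part in 1 2 3; do
            process_file "server_${server}_${part}"
        done
    ) > "group_${server}.log" &    # one log per group avoids interleaved lines
done
wait    # block until all three groups have finished
```

In the real script the inner loop would glob `server_"${server}"_*` instead of using fixed names, and each group would call `removeIncomingFile` after its own file completes.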
Is this possible with GNU Parallel, and how can it be achieved? Many thanks for any solution!