I've implemented insertion sort in C (Visual Studio) and Java (Eclipse) to measure the time each takes to complete and to compare the two languages.
I tried to find the worst-case running time of the algorithm, i.e. converting a decreasing array into an increasing one (a sketch of the input setup follows the timings below).
I ran my code on inputs of 10,000, 50,000 and 100,000 entries, and these were the observations:
In C:
10000: 0.172 seconds
50000: 3.874 seconds
100000: 15.384 seconds
whereas in Java:
10000: 0.048 seconds
50000: 0.385 seconds
100000: 1.924 seconds
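To be clear about the input: every run sorts a strictly decreasing array, which forces each insertion to shift the entire sorted prefix, i.e. the O(n^2) worst case. Here's a sketch of how such an input can be built (the method name is illustrative, not my exact code):

static int[] makeWorstCase(int n) {
    // Strictly decreasing values: n, n-1, ..., 2, 1.
    // Each element then has to travel all the way to the front.
    int[] a = new int[n];
    for (int i = 0; i < n; i++) {
        a[i] = n - i;
    }
    return a;
}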
My code is a standard insertion sort; there's nothing new in it. The times measured cover only the insertion sort itself, and I/O operations are excluded from it. The structure is:
Input
Timer starts here
Insertion Sort
Timer ends
Summary (time taken, etc.)
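In Java, for example, the timing wraps only the call to Sort, roughly like this (it uses System.nanoTime; the class name InsertionSortTest and the helper makeWorstCase are illustrative, not my exact code). The C version does the same thing with clock(), as shown below:

public static void main(String[] args) {
    int[] a = makeWorstCase(100000);            // input
    long st = System.nanoTime();                // timer starts here
    new InsertionSortTest().Sort(a, a.length);  // insertion sort only
    long end = System.nanoTime();               // timer ends
    System.out.printf("Sorting completed. Time taken: %f seconds%n",
            (end - st) / 1e9);                  // summary
}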
I believed C was faster than Java, but I can't explain these results.
EDIT: Here's the C code:
#include <stdio.h>
#include <time.h>

/* MAX is the number of elements, #defined elsewhere in the program */
void InsertionSort(int a[]) {
    int i;
    clock_t st, end;
    st = clock();
    for (i = 1; i < MAX; i++)   {
        int temp = a[i];
        int pos = i - 1;
        while (a[pos] > temp) {    /* note: no pos >= 0 bound check here, unlike the Java version below */
            a[pos + 1] = a[pos];
            pos--;
        }
        a[pos + 1] = temp;
    }
    end = clock();
    printf("\nSorting Completed. Time taken:%f", (double)(end - st) / CLOCKS_PER_SEC);
}
and the Java code:
public void Sort(int a[], int size) {
        int i;
        for (i = 1; i < size; i++)  {
            int temp = a[i];
            int pos = i - 1;
            while(pos >= 0 && a[pos] > temp)    {
                a[pos + 1] = a[pos];
                pos--;
            }
            a[pos + 1] = temp;
        }
}