I have lately been curious about the relative speed of Java vs. C. Since Java is ultimately "interpreted", I'd be surprised if it turned out to be faster than C.
I wrote the following short program that checks a million times whether a number is prime. Here it is in Java:
import java.lang.Math;
class time_test {
    public static void main(String[] args){
        
        boolean prime = true;
        long start, end;
        
        try{
            // get the int
            int num = Integer.parseInt(args[0]);
            
            // start the clock
            start = System.nanoTime();
            
            for (int h=0; h<1000000; h++)
                for (int i=2; i<Math.floor(Math.sqrt(num))+1; i++)
                    if (num % i == 0) prime = false;
                
            end = System.nanoTime();
            System.out.println((end-start)/1000000000.0);
            System.out.println(prime);
            
        }
        catch(Exception e) {
            System.out.println(e.toString());
        }
    }
}
And here it is in C:
#include <time.h>
#include <stdio.h>
#include <math.h>
#include <stdbool.h>
#include <stdlib.h>
clock_t start, end;
int main(int argc, char * argv[]){
    
    bool prime = true;
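    // read the number to test from the command line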
    int num = atoi(argv[1]);
    
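    // start the clock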
    start = clock();
    for (int h=0; h<1000000; h++)
        for (int i=2; i<floor(sqrt(num))+1; i++)
            if (num%i == 0) prime = false;
        
    end=clock();
    printf("%f\n", (double) (end-start)/CLOCKS_PER_SEC);
    
    if (prime) printf("true\n");
    else printf("false\n");
}
I compiled the Java version with:
javac time_test.java
And the C version with:
gcc time_test.c -lm
However, when I ran them both with 27221, the Java version finished in 0.365623241 seconds and the C version in 0.647930 seconds. How can this be? Even with -O3 optimization the C version only manages 0.366007 seconds, which is about the same as the Java! Have I been lied to all my life? :)
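For reference, the -O3 run was built with the same command as before, just adding the optimization flag:
gcc -O3 time_test.c -lm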