While poking around the JDK 1.7 source I noticed these methods in Boolean.java:
public static Boolean valueOf(String s) {
    return toBoolean(s) ? TRUE : FALSE;
}
private static boolean toBoolean(String name) {
    return ((name != null) && name.equalsIgnoreCase("true"));
}
So valueOf() internally calls toBoolean(), which is fine. What I did find interesting is how toBoolean() is implemented, namely:
- equalsIgnoreCase() is reversed from what I would normally do (put the string literal first), and
- there is an explicit null check first. This seems redundant if point 1 were adopted, since equalsIgnoreCase() already does its own null check on the argument.
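To confirm the second point, a quick standalone check (class name is my own) shows that equalsIgnoreCase() tolerates a null argument on its own:

```java
public class NullCheckDemo {
    public static void main(String[] args) {
        String s = null;
        // String.equalsIgnoreCase() performs its own null check internally,
        // so calling it on a literal with a null argument simply returns false.
        System.out.println("true".equalsIgnoreCase(s));      // prints "false"
        System.out.println("true".equalsIgnoreCase("TRUE")); // prints "true"
    }
}
```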
So I thought I would put together a quick test and check how my implementation would work compared with the JDK one. Here it is:
import org.junit.Test;

public class BooleanTest {
    private final String[] booleans = {"false", "true", "null"};
    @Test
    public void testJdkToBoolean() {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000000; i++) {
            for (String aBoolean : booleans) {
                Boolean someBoolean = Boolean.valueOf(aBoolean);
            }
        }
        long end = System.currentTimeMillis();
        System.out.println("JDK Boolean Runtime is: " + (end-start));
    }
    @Test
    public void testModifiedToBoolean() {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000000; i++) {
            for (String aBoolean : booleans) {
                Boolean someBoolean = ModifiedBoolean.valueOf(aBoolean);
            }
        }
        long end = System.currentTimeMillis();
        System.out.println("ModifiedBoolean Runtime is: " + (end-start));
    }
}
class ModifiedBoolean {
    public static Boolean valueOf(String s) {
        return toBoolean(s) ? Boolean.TRUE : Boolean.FALSE;
    }
    private static boolean toBoolean(String name) {
        return "true".equalsIgnoreCase(name);
    }
}
Here is the result:
Running com.app.BooleanTest
JDK Boolean Runtime is: 37
ModifiedBoolean Runtime is: 34
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.128 sec
So: not much of a gain, especially spread across a million iterations. Not all that surprising, really.
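For what it's worth, millisecond timing over a single pass is easily skewed by JIT warm-up; a proper harness like JMH is the usual answer. As a rough sketch (class and method names are my own, and it uses Java 8 method references for brevity), you can at least warm both implementations up before measuring and use System.nanoTime():

```java
import java.util.function.Predicate;

public class BooleanBench {
    private static final String[] INPUTS = {"false", "true", "null"};

    // Same logic as ModifiedBoolean.toBoolean(): literal first, no explicit null check.
    static boolean modified(String s) {
        return "true".equalsIgnoreCase(s);
    }

    static long time(Predicate<String> impl) {
        long start = System.nanoTime();
        boolean sink = false; // accumulate results so the JIT can't drop the loop entirely
        for (int i = 0; i < 1_000_000; i++) {
            for (String s : INPUTS) {
                sink ^= impl.test(s);
            }
        }
        long elapsed = System.nanoTime() - start;
        if (sink) System.out.print(""); // consume the sink
        return elapsed;
    }

    public static void main(String[] args) {
        // Warm-up passes so both implementations are JIT-compiled before timing.
        time(Boolean::parseBoolean);
        time(BooleanBench::modified);
        System.out.println("JDK:      " + time(Boolean::parseBoolean) + " ns");
        System.out.println("Modified: " + time(BooleanBench::modified) + " ns");
    }
}
```

Even this is only a rough measurement; it doesn't control for GC pauses or on-stack replacement the way JMH does.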
What I would like to understand is how these differ at the bytecode level. I am interested in delving into this area but don't have any experience. Is this more work than it's worth? Would it provide a useful learning experience? Is this something people do on a regular basis?
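As a starting point, the JDK ships a disassembler, javap; compile a class and run javap -c on it to see the generated bytecode. A minimal pair of methods to compare (class and method names are my own; the exact instruction sequence can vary by compiler version):

```java
// Compile with javac, then disassemble with: javap -c -p BytecodeDemo
public class BytecodeDemo {

    // JDK style: explicit null check, then equalsIgnoreCase on the argument.
    // In the disassembly this shows up as an extra ifnull branch
    // before the invokevirtual call to equalsIgnoreCase.
    static boolean jdkStyle(String name) {
        return (name != null) && name.equalsIgnoreCase("true");
    }

    // Literal-first style: no branch of its own; the null handling happens
    // inside equalsIgnoreCase(), so the body is essentially a single
    // ldc "true", the invokevirtual call, and a return.
    static boolean literalFirst(String name) {
        return "true".equalsIgnoreCase(name);
    }
}
```

Reading javap output on small methods like these is a cheap way to get started with bytecode, and it is something people do fairly routinely when a micro-benchmark result needs explaining.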