It seems that Files.newBufferedReader() is stricter about UTF-8 decoding than the naive alternative.
If I create a file containing the single byte 128 (0x80), which is not a valid UTF-8 sequence on its own, it is happily read if I construct a BufferedReader on an InputStreamReader on the result of Files.newInputStream(), but Files.newBufferedReader() throws an exception.
This code
import java.io.*;
import java.nio.file.*;

public class TestUtf8 {
    public static void main(String[] args) throws IOException {
        // A file containing only the byte 0x80 (decimal 128), which is not valid UTF-8.
        Path path = Files.createTempFile("utf8-test", ".bin");
        Files.write(path, new byte[] { (byte) 0x80 });

        // Lenient: the malformed byte is read as the replacement character U+FFFD.
        try (InputStream in = Files.newInputStream(path);
             Reader isReader = new InputStreamReader(in, "UTF-8");
             Reader reader = new BufferedReader(isReader)) {
            System.out.println((char) reader.read());
        }

        // Strict: the same read throws MalformedInputException.
        try (Reader reader = Files.newBufferedReader(path)) {
            System.out.println((char) reader.read());
        }
    }
}
produces this output:
�
Exception in thread "main" java.nio.charset.MalformedInputException: Input length = 1
    at java.nio.charset.CoderResult.throwException(CoderResult.java:281)
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:339)
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
    at java.io.InputStreamReader.read(InputStreamReader.java:184)
    at java.io.BufferedReader.fill(BufferedReader.java:161)
    at java.io.BufferedReader.read(BufferedReader.java:182)
    at TestUtf8.main(TestUtf8.java:28)
Is this difference documented anywhere? And is it possible to get the lenient behavior with Files.newBufferedReader()?
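For what it's worth, I can reproduce the lenient behavior by hand with a CharsetDecoder configured to replace malformed input instead of reporting it. This is only a sketch of what I assume the String-taking InputStreamReader constructor sets up internally; the class and method names here are mine:

import java.io.*;
import java.nio.charset.*;
import java.nio.file.*;

public class LenientUtf8 {
    // Hypothetical helper: a buffered UTF-8 reader that substitutes U+FFFD
    // for malformed input instead of throwing MalformedInputException.
    static BufferedReader lenientReader(Path path) throws IOException {
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPLACE)
                .onUnmappableCharacter(CodingErrorAction.REPLACE);
        return new BufferedReader(
                new InputStreamReader(Files.newInputStream(path), decoder));
    }
}

But that is a lot of ceremony compared to the one-liner, so I am hoping there is a built-in switch I have overlooked.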