You could try several different solutions, but the most important rule of efficiency work is: measure. You can find sample implementations here: How to read a file into a vector elegantly and efficiently?
In most cases, the bigger the buffer, the faster the reads/writes will go. Iterator solutions that pass single bytes, for example like so:
std::copy(std::istream_iterator<char>(is),
          std::istream_iterator<char>(),
          std::ostream_iterator<char>(os));
look nice but are pretty much the worst case in terms of efficiency - at least for the setups I've tested. (Note also that std::istream_iterator<char> skips whitespace by default, so this copy isn't even byte-exact unless you set std::noskipws on the input stream.)
Reading the whole file at once, from the specified offset into one large buffer, gives the best timings - try this unless you have memory limits. To do that, compute the file size, read the data into a buffer (initialized, for example, as std::vector<char> buff(fileSize, 0)), and write it to the output stream in a single call.
Since you only want to copy N bytes, compare that value against the file size minus the starting offset, so you can still do the whole thing in one big read/write.
For example:
// helper function: number of bytes from the current read position to end of file
std::streampos getSizeToEnd(std::ifstream& is)
{
    const auto currentPosition = is.tellg();
    is.seekg(0, is.end);
    const auto length = is.tellg() - currentPosition;
    is.seekg(currentPosition);   // restore the original position
    return length;
}
int main()
{
    std::ifstream is;
    std::ofstream os;
    ...
    const auto offset = 100;
    is.seekg(offset);
    // one buffer big enough to hold everything from offset to EOF
    std::vector<char> buff(getSizeToEnd(is), 0);
    // read the whole chunk into the buffer, then write it out in one call
    is.read(buff.data(), buff.size());
    os.write(buff.data(), buff.size());
    ...
}