Unfortunately, there are no "good" rules for choosing the tolerance.
You have at your disposal the "machine epsilon"
double epsilon = std::numeric_limits<double>::epsilon();
(from the <limits> header), which is the gap between 1.0 and the next representable double -- informally, the smallest value which, added to one, gives a result different from one.
I usually write my tolerances as a function of epsilon. There are no good rules, but for instance comparing like this
#include <algorithm>  // std::max
#include <cmath>      // std::fabs
#include <limits>     // std::numeric_limits

bool fuzzy_equals(double a, double b)
{
    static const double eps = std::numeric_limits<double>::epsilon();
    // Take absolute values inside std::max so negative inputs work too.
    return std::fabs(b - a) < 1024 * eps * std::max(std::fabs(a), std::fabs(b));
}
works well in many cases. You can adjust the 1024; I like powers of two, but you might not. The actual value you choose is problem-dependent. Epsilon for doubles is around 10^-16, so 1024 epsilons is still quite small, and you will need a bigger factor in many cases. Virtually any operation, including the subtraction inside fuzzy_equals, will "eat" one epsilon of precision -- errors can cancel out, but on average n operations mean sqrt(n) * epsilon precision, so 1024 corresponds to the expected precision after about a million operations (sqrt(10^6) ≈ 1000 ≈ 1024).
In other cases, where the precision is not as good, for instance when testing the minimum of a function against a known value (minima are usually only determined up to sqrt(eps) accuracy), I use
bool fuzzy_equals2(double a, double b)
{
    static const double eps = std::numeric_limits<double>::epsilon();
    // Same fabs trick as above, but with a sqrt(eps)-scale tolerance.
    return std::fabs(b - a) < 1024 * std::sqrt(eps) * std::max(std::fabs(a), std::fabs(b));
}
I often use other functions of epsilon, like std::pow(eps, something), or even -1 / std::log(eps). This depends on what prior information I can derive from the problem, and what error I expect.
When it comes to code structure, I use a functional approach and pass a comparer to my algorithms, a bit like STL predicates. This lets you avoid hardcoding the comparison logic into your algorithms.
In short, there is no one-size-fits-all rule; you have to choose depending on the problem at hand.