No, it is not. However, this should not be a problem: bnlearn offers several other priors and, unless you have some very specific reason to require Laplace smoothing, which is just one particular prior, these should do.
Once you have a structure, you learn the parameters with the bn.fit() function. Setting method = "bayes" uses Bayesian estimation, and the optional argument iss determines the prior. The documentation defines iss as "the imaginary sample size used by the bayes method to estimate the conditional probability tables (CPTs) associated with discrete nodes".
As an example, consider a binary root node X in some network. bn.fit() returns (Nx + iss / cptsize) / (N + iss) as the probability of X = x, where N is your number of samples, Nx the number of samples with X = x, and cptsize the cardinality of X; in this case cptsize = 2 because X is binary. Laplace correction would require that iss / cptsize always equal 1. However, bnlearn uses the same iss value for all CPTs, so iss / cptsize is 1 for every node only if all variables have the same cardinality. Thus, for binary variables you could indeed obtain Laplace correction by setting iss = 2, but in the general case it is not possible.
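To make the arithmetic concrete, here is a small Python sketch (bnlearn itself is an R package; the function names below are mine, not part of any library) comparing the Bayesian estimate above with classical Laplace smoothing. It shows that the two coincide for a binary node when iss = 2, but diverge for a three-level node with the same iss:

```python
from fractions import Fraction

def bayes_estimate(n_x, n, iss, cptsize):
    # Bayesian estimate for a root node, as described above:
    # (Nx + iss / cptsize) / (N + iss)
    return (Fraction(n_x) + Fraction(iss, cptsize)) / (n + iss)

def laplace_estimate(n_x, n, cptsize):
    # Classical Laplace (add-one) smoothing: (Nx + 1) / (N + cptsize)
    return Fraction(n_x + 1, n + cptsize)

# Binary node (cptsize = 2): iss = 2 makes iss / cptsize = 1,
# so the Bayesian estimate equals the Laplace-corrected one.
assert bayes_estimate(30, 100, iss=2, cptsize=2) == laplace_estimate(30, 100, 2)

# Ternary node (cptsize = 3): the same iss = 2 gives iss / cptsize = 2/3,
# so the two estimates differ.
assert bayes_estimate(30, 100, iss=2, cptsize=3) != laplace_estimate(30, 100, 3)
```

Since iss is shared across all CPTs, no single iss value can make iss / cptsize equal to 1 for nodes of different cardinalities at the same time.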
For some additional details, see the question "bnlearn::bn.fit difference and calculation of methods 'mle' and 'bayes'".