I feel this would optimize the calls a bit as there is no bash process setup and teardown.
subprocess never runs the shell unless you ask for it explicitly, e.g.:
#!/usr/bin/env python
import subprocess
subprocess.check_call(['ls', '-l'])
This call runs the ls program without invoking /bin/sh.
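Conversely, only when you pass shell=True does subprocess spawn /bin/sh to interpret the command string. A minimal sketch of the contrast (POSIX assumed):

```python
import subprocess

# With shell=True the string is handed to /bin/sh, so shell features
# such as pipes work -- at the cost of an extra shell process.
out = subprocess.check_output('echo hello | tr a-z A-Z', shell=True)
print(out.decode().strip())  # HELLO
```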
Or will it give no performance advantage?
If your subprocess calls actually use the shell, e.g., to specify a pipeline concisely, or if you use bash process substitution that would be verbose and error-prone to express with the subprocess module directly, then invoking bash is unlikely to be your performance bottleneck -- measure it first.
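To show what "verbose" means here, a sketch of the shell pipeline `echo one two three | wc -w` expressed with the subprocess module directly, without invoking any shell (POSIX assumed):

```python
import subprocess

# Connect two processes with a pipe, as the shell would for `a | b`.
p1 = subprocess.Popen(['echo', 'one', 'two', 'three'],
                      stdout=subprocess.PIPE)
p2 = subprocess.Popen(['wc', '-w'], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
p1.stdout.close()  # allow p1 to receive SIGPIPE if p2 exits early
output = p2.communicate()[0]
print(output.decode().strip())  # word count: 3
```

Each extra pipeline stage adds another Popen, another stdout handoff, and another close -- which is the error-prone boilerplate the shell one-liner hides.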
There are also Python packages that allow you to specify such commands concisely, e.g., plumbum can be used to emulate a shell pipeline.
If you want to use bash as a server process, then pexpect is useful for dialog-based interactions with an external process -- though it is unlikely to improve time performance. fabric allows you to run both local and remote (ssh) commands.
There are other subprocess wrappers such as sarge, which can parse a pipeline specified in a string without invoking the shell, e.g., it enables cross-platform support for bash-like syntax (&&, ||, & in command lines), and sh -- a complete subprocess replacement on Unix that provides a TTY by default (it seems full-featured, but its shell-like piping is less straightforward). You can even run commands with Python-ish BASHwards-looking syntax using the xonsh shell.
Again, in most cases this is unlikely to affect performance in a meaningful way.
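If you do want to check, a rough micro-benchmark sketch using only the stdlib, timing the same trivial command with and without the shell (POSIX `true` assumed; absolute numbers depend heavily on the machine):

```python
import subprocess
import timeit

# Time 50 runs of the same no-op command, with and without /bin/sh.
no_shell = timeit.timeit(
    lambda: subprocess.check_call(['true']), number=50)
with_shell = timeit.timeit(
    lambda: subprocess.check_call('true', shell=True), number=50)
print(f'no shell: {no_shell:.3f}s  with shell: {with_shell:.3f}s')
```

If the difference is microseconds per call and your commands run for milliseconds or longer, the shell startup cost is noise.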
The problem of starting and communicating with external processes in a portable manner is complex -- the interaction between processes, pipes, ttys, signals, threading, async I/O, and buffering in various places has rough edges. Introducing a new package may complicate things if you don't know how that specific package solves the numerous issues related to running shell commands.