How can I call an external command in Python?
Here's a summary of the ways to call external programs and the advantages and disadvantages of each:
os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example,
os.system("some_command < input_file | another_command > output_file")
However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs.
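If you do need to build a shell string by hand, one way to handle the escaping is the standard library's shlex.quote (called pipes.quote on Python 2). A minimal sketch:

```python
import shlex

# A filename containing a space, which would break naive string interpolation
filename = "my file.txt"

# shlex.quote wraps the argument so the shell treats it as a single token
cmd = "ls -l %s" % shlex.quote(filename)
print(cmd)  # ls -l 'my file.txt'
```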
stream = os.popen("some_command with args") will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the I/O slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass the arguments as a list, then you don't need to worry about escaping anything.
The Popen class of the subprocess module. This is intended as a replacement for os.popen but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say

print Popen("echo Hello World", stdout=PIPE, shell=True).stdout.read()

instead of

print os.popen("echo Hello World").read()

but it is nice to have all of the options there in one unified class instead of 4 different popen functions.
The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:
return_code = call("echo Hello World", shell=True)
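Passing the arguments as a list instead of using shell=True sidesteps the shell entirely, so there are no quoting worries. A sketch:

```python
from subprocess import call

# List form: no shell is involved, so no escaping to worry about
return_code = call(["echo", "Hello World"])
print(return_code)  # 0 on success
```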
The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.
The subprocess module should probably be what you use.
Look at the subprocess module  in the stdlib:
from subprocess import call
call(["ls", "-l"])
The advantage of subprocess vs. os.system is that it is more flexible (you can get stdout, stderr, the "real" status code, better error handling, etc.). I think os.system is deprecated, too, or soon will be.
For quick/dirty/one time scripts,
os.system is enough, though.
I typically use:
import subprocess
p = subprocess.Popen('ls', shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
    print line,
retval = p.wait()
You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it'll behave like os.system().
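For instance, leaving the pipes out means the child inherits the parent's streams and only the return code comes back. A sketch using the POSIX true utility:

```python
import subprocess

# No stdout=/stderr= arguments: the child's output goes straight to our
# own streams, just as with os.system(); we only collect the exit status.
p = subprocess.Popen(["true"])
retval = p.wait()
print(retval)  # 0
```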
Some hints on detaching the child process from the calling one (starting the child process in background).
Suppose you want to start a long task from a CGI-script, that is the child process should live longer than the CGI-script execution process.
The classical example from the subprocess module docs is:
import subprocess
import sys
# some code here
pid = subprocess.Popen([sys.executable, "longtask.py"])  # call subprocess
# some more code here
The idea here is that you do not want to wait in the line 'call subprocess' until the longtask.py is finished. But it is not clear what happens after the line 'some more code here' from the example.
My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.

On Windows (XP), the parent process will not finish until longtask.py has finished its work. That is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass DETACHED_PROCESS flag to the underlying CreateProcess function in win API. If you happen to have installed pywin32 you can import the flag from the win32process module, otherwise you should define it yourself:
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
On FreeBSD we have another problem: when the parent process finishes, it finishes the child processes as well. That is not what you want in a CGI script either. Some experiments showed that the problem seemed to be in sharing sys.stdout, and the working solution was the following:
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE,
                       stdin=subprocess.PIPE)
I have not checked the code on other platforms and do not know the reasons for the behaviour on FreeBSD. If anyone knows, please share your ideas. Googling for starting background processes in Python does not shed any light yet.
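On POSIX platforms, one possible approach (a sketch, assuming Python 3.2+) is start_new_session=True, which makes Popen call setsid() in the child so it gets its own session; longtask.py from the example is replaced here by a trivial one-liner:

```python
import subprocess
import sys

# start_new_session=True detaches the child into its own session (POSIX).
# Redirecting the standard streams avoids the shared-sys.stdout problem
# described above. We wait() here only for demonstration; a real CGI
# script would simply not wait.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('working')"],
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
)
rc = proc.wait()
print(rc)
```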
I'd recommend using the subprocess module instead of os.system because it handles argument quoting for you (when you pass the arguments as a list) and is therefore much safer: http://docs.python.org/lib/module-subprocess.html
Check out the pexpect Python library, too. It allows interactive control of external programs/commands, even ssh, ftp, telnet, etc. You can just type something like:
import pexpect

child = pexpect.spawn('ftp 192.168.0.24')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')
I always use fabric for things like this:

from fabric.operations import local
result = local('ls', capture=True)
print "Content:\n%s" % (result, )
But sh (a Python subprocess interface) seems to be a good tool. Look at an example:
from sh import vgdisplay

print vgdisplay()
print vgdisplay('-v')
print vgdisplay(v=True)
import os
cmd = 'ls -al'
os.system(cmd)
If you want to return the results of the command, you need os.popen: http://oreilly.com/catalog/lpython/chapter/ch09.html
If what you need is the output from the command you are calling, you can use subprocess.check_output (since Python 2.7):
>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
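check_output also raises subprocess.CalledProcessError when the command exits non-zero, so failures are hard to miss. A sketch (note that on Python 3 the result is bytes):

```python
import subprocess

# Capture the command's stdout; on Python 3 this is a bytes object
out = subprocess.check_output(["echo", "hi"])

# A failing command (the POSIX false utility) raises CalledProcessError
try:
    subprocess.check_output(["false"])
except subprocess.CalledProcessError as e:
    print("failed with code", e.returncode)
```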
This is how I run my commands. This code has pretty much everything you need:
from subprocess import Popen, PIPE

cmd = "ls -l ~/"
p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
out, err = p.communicate()
print "Return code: ", p.returncode
print out.rstrip(), err.rstrip()
Without the output of the result:

import os
os.system("your command here")

With the output of the result:

import commands
commands.getoutput("your command here")
or
commands.getstatusoutput("your command here")
os.system is OK, but kind of dated. It's also not very secure. Instead, try subprocess. subprocess does not call sh directly and is therefore more secure than os.system.
Get more information at http://docs.python.org/lib/module-subprocess.html
Example usage from the readme:
>>> r = envoy.run('git config', data='data to pipe in', timeout=2)
>>> r.status_code
129
>>> r.std_out
'usage: git config [options]'
>>> r.std_err
''
Pipe stuff around too:
>>> r = envoy.run('uptime | pbcopy')
>>> r.command
'pbcopy'
>>> r.status_code
0
>>> r.history
[<Response 'uptime'>]
import os
os.system("your command")
Note that this is dangerous, since the command isn't sanitized. I leave it up to you to google for the relevant docs on the os and sys modules. There are a bunch of functions (exec*, spawn*) that will do similar things.

os.system has been superseded by the subprocess module. Use subprocess instead.
In case you need to go only with the standard library, use the subprocess module:

from subprocess import call
call(['ls', '-l'])
It is the recommended standard way. However, more complicated tasks (pipes, output, input, etc.) can be tiring to construct and write.
If you do not mind external dependencies, install and use sh :
from sh import ifconfig

print ifconfig('wlan0')
It is the best and most developer-friendly subprocess wrapper I have seen. It is under active development, it has good documentation, and you will usually be able to solve any of your tasks in just a couple of lines and in a very readable form. The only thing you need to do to have it available is to type pip install sh in your terminal :-)
subprocess.check_call is convenient if you don't want to test return values. It throws an exception on any error.
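A minimal sketch, using the POSIX true and false utilities to show both outcomes:

```python
import subprocess

# Succeeds silently; check_call returns 0 on success
subprocess.check_call(["true"])

# A failing command raises CalledProcessError instead of returning a code
try:
    subprocess.check_call(["false"])
except subprocess.CalledProcessError as e:
    code = e.returncode
    print("command failed with code", code)
```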
os.system does not allow you to store results, so if you want to store the results in a list or something, use subprocess instead.
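A sketch of capturing output into a list with subprocess (sys.executable is used here only to have a portable command that prints two lines):

```python
import subprocess
import sys

# Run a command and split its captured stdout into a list of lines
out = subprocess.check_output([sys.executable, "-c", "print('a'); print('b')"])
lines = out.decode().splitlines()
print(lines)  # ['a', 'b']
```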
There are a lot of different ways to run external commands in Python, and all of them have their own plus sides and drawbacks.

My colleagues and I have been writing Python sysadmin tools, so we need to run a lot of external commands, and sometimes you want them to block or run asynchronously, time out, update every second...

There are also different ways of handling the return code and errors, and you might want to parse the output and provide new input (in an expect kind of style), or you will need to redirect stdin, stdout, and stderr to run in a different tty (e.g., when using screen).

So you will probably have to write a lot of wrappers around the external command. Here is a Python module which we have written that can handle almost anything you would want, and if not, it's very flexible, so you can easily extend it.
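As a rough illustration of what such a wrapper might look like (a hypothetical sketch, not the module the answer refers to), subprocess.run in Python 3.5+ already covers blocking calls with a timeout:

```python
import subprocess

def run_cmd(args, timeout=None):
    """Hypothetical wrapper: run a command and return (returncode, stdout, stderr).

    Returns None for the code if the command timed out.
    """
    try:
        result = subprocess.run(args, capture_output=True, timeout=timeout)
        return result.returncode, result.stdout, result.stderr
    except subprocess.TimeoutExpired:
        return None, b"", b""

rc, out, err = run_cmd(["echo", "hi"], timeout=5)
print(rc, out)
```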
There is another difference here which is not mentioned above.

subprocess.Popen executes the command as a subprocess. In my case, I needed to execute a file which has to communicate with another program.

I tried subprocess, and the execution was successful. However, it could not communicate with the other program; everything is normal when I run both from the terminal.

One more note (kwrite behaves differently from other apps; if you try the below with Firefox, the results will not be the same):

If you try os.system("kwrite"), program flow freezes until the user closes kwrite. To overcome that I tried os.system("konsole -e kwrite") instead. This time the program continued to flow, but kwrite became a subprocess of konsole.

Does anyone know how to run kwrite so that it is not a subprocess (i.e., in the system monitor it must appear at the leftmost edge of the tree)?
You can use Popen, and then you can check the process's status:

from subprocess import Popen

proc = Popen(['ls', '-l'])
if proc.poll() is None:  # still running?
    proc.kill()
Check out subprocess.Popen: http://docs.python.org/library/subprocess.html#popen-objects
Just to add to the discussion, if you include using a Python console, you can call external commands from ipython. While in the ipython prompt, you can call shell commands by prefixing '!'. You can also combine Python code with shell, and assign the output of shell scripts to Python variables.
In : mylist = !ls
In : mylist
Out: ['file1', 'file2', 'file3']
...or for a very simple command:
import os
os.system('cat testfile')
The simplest way to run any command and get the result back:

from commands import getstatusoutput

def run(cmd):
    try:
        return getstatusoutput(cmd)
    except Exception:
        return None
I quite like shell_command  for its simplicity. It's built on top of the subprocess module. http://shell-command.readthedocs.org/en/latest/index.html
The subprocess module described above by Eli is very powerful, but the syntax to make a bog-standard system call and inspect its output is unnecessarily prolix.
The easiest way to make a system call is with the commands module  (Linux only).
>>> import commands
>>> commands.getstatusoutput("grep matter alice-in-wonderland.txt")
(0, "'Then it doesn't matter which way you go,' said the Cat.")
The first item in the tuple is the return code of the process. The second item is its standard output (and standard error, merged).
The Python devs have 'deprecated' the commands module, but that doesn't mean you shouldn't use it. Only that they're not developing it anymore, which is okay, because it's already perfect (at its small but important function). http://docs.python.org/2/library/subprocess.html
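On Python 3, where the commands module is gone, subprocess.getstatusoutput offers the same tuple interface. A sketch:

```python
import subprocess

# Drop-in analogue of commands.getstatusoutput on Python 3: returns
# (exit status, output) with the trailing newline stripped
status, output = subprocess.getstatusoutput("echo hello")
print(status, output)  # 0 hello
```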