Stack OverflowCalling an external command in Python
[+2956] [44] freshWoWer
[2008-09-18 01:35:30]
[ python shell command subprocess external ]
[ ]

How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script?

hey here's a good tutorial on integrating python with shell: - Triton Man
(6) @TritonMan: it is not a good tutorial. Use for line in proc.stdout: (or for line in iter(proc.stdout.readline, '') in Python 2) instead of (moronic) for line in proc.stdout.readlines():. See Python: read streaming input from subprocess.communicate() - jfs
[+2838] [2008-09-18 01:39:35] David Cournapeau [ACCEPTED]

Look at the subprocess module [1] in the standard library:

from subprocess import call
call(["ls", "-l"])

The advantage of subprocess vs system is that it is more flexible (you can get the stdout, stderr, the "real" status code, better error handling, etc...).

The official docs [2] recommend the subprocess module over the alternative os.system():

The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function [ os.system() [3]].

The " Replacing Older Functions with the subprocess Module [4]" section in the subprocess documentation may have some helpful recipes.
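For instance, a minimal sketch of capturing a command's output and "real" status code with Popen (assuming a Unix-like system where ls exists):

```python
import subprocess

# Run ls -l, capturing stdout and stderr instead of letting them
# go to the terminal
p = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()   # waits for the process to finish
print(p.returncode)          # the "real" status code (0 on success)
```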



(155) Can't see why you'd use os.system even for quick/dirty/one-time. subprocess seems so much better. - nosklo
(33) I agree completely that subprocess is better. I just had to write a quick script like this to run on an old server with Python 2.3.4 which does not have the subprocess module. - Liam
(50) here are the subprocess docs - daonb
For some reason, this only works if I do import subprocess. If I use from subprocess import call, I get a generic error message. - Brian Z
(2) @BrianZ That's because call uses Popen. I find it better to just always use Popen, then I can get the returncode, stderr & stdout if needed. - senorsmile
(6) call(..) gave me an error on Python 2.7.6: Traceback (most recent call last): File "E:\Ajit\MyPython\", line 27, in <module> call("dir") File "C:\Python27\lib\", line 524, in call return Popen(*popenargs, **kwargs).wait() File "C:\Python27\lib\", line 711, in __init__ errread, errwrite) File "C:\Python27\lib\", line 948, in _execute_child startupinfo) WindowsError: [Error 2] The system cannot find the file specified - goldenmean
(61) @goldenmean: my guess, there is no ls.exe on Windows. Try call("dir", shell=True) - jfs
(1) Looking at the documentation and at your answer, I cannot understand why ['ls','-l'] is input as a list. It led me to believe that everything I would normally separate by a space I would put as a separate list element, but this broke. What's the purpose of the list? - Kyle Heuton
(3) @Snoozer: It handles escaping of spaces so 'command one two' is handled different from 'command "one two"'. See my answer: - Emil Stenström
Took me maybe ten years to finally learn how to use subprocess, but yes, this is indeed the best way to call a command. - ThorSummoner
@David Cournapeau: How can I invoke developer terminal of visual studio and execute commands in it? - Nevin Raj Victor
Is there a way to use variable substitution? IE I tried to do echo $PATH by using call(["echo", "$PATH"]), but it just echoed the literal string $PATH instead of doing any substitution. I know I could get the PATH environment variable, but I'm wondering if there is an easy way to have the command behave exactly as if I had executed it in bash. - Kevin Wheeler
@KevinWheeler You'll have to use shell=True for that to work. - SethMMorton
(3) You can substitute call(["ls", "-l"]) with call("ls -l".split(" ")) - Andrej Gajduk
(1) @KevinWheeler You should NOT use shell=True, for this purpose Python comes with os.path.expandvars. In your case you can write: os.path.expandvars("$PATH"). @SethMMorton please reconsider your comment -> Why not to use shell=True - user1885518
I prefer using check_call which automatically raises if the external command fails. - Bluehorn
Subprocess would be the better option. You can capture the output and stderr too from the process - Arockia
(3) As of Python 3.5, it is suggested that you use instead of - Hannes Karppila
@HannesKarppila yes that's correct I agree with you. - Ashraf.Shk786
The example calls ls -l but does not give access to its output (stdout is not accessible). I find that confusing -- you could use a command without stdout instead, such as touch. - florisla
Seems to be pretty clunky on Python3 + Windows. If I enter a filename with special characters like &, it will throw a FileNotFoundError. Even though the file is in the working directory where I executed python, and obviously does exist. - Braden Best
you can also do it using the os module. - Vineet Jain
[+2172] [2008-09-18 13:11:46] Eli Courtwright

Here's a summary of the ways to call external programs and the advantages and disadvantages of each:

  1. os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example:

    os.system("some_command < input_file | another_command > output_file")  

    However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs. See the documentation [1].

  2. stream = os.popen("some_command with args") will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the i/o slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass them as a list then you don't need to worry about escaping anything. See the documentation [2].

  3. The Popen class of the subprocess module. This is intended as a replacement for os.popen but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say:

    print subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE)

    instead of:

    print os.popen("echo Hello World").read()

    but it is nice to have all of the options there in one unified class instead of 4 different popen functions. See the documentation [3].

  4. The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:

return_code ="echo Hello World", shell=True)

    See the documentation [4].

  5. If you're on Python 3.5 or later, you can use the new [5] function, which is a lot like the above but even more flexible and returns a CompletedProcess [6] object when the command finishes executing.

  6. The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.

The subprocess module should probably be what you use.

Finally, please be aware that for all methods where you pass the final command to be executed by the shell as a string, you are responsible for escaping it. There are serious security implications if any part of the string that you pass cannot be fully trusted (for example, if a user is entering some or any part of the string). If you are unsure, only use these methods with constants. To give you a hint of the implications, consider this code:

print subprocess.Popen("echo %s " % user_input, stdout=PIPE)

and imagine that the user enters "my mama didnt love me && rm -rf /".
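To make the danger concrete, here is a sketch of the safer pattern: pass the arguments as a list so no shell is involved, and the malicious string stays a single literal argument (a system with an echo binary is assumed):

```python
import subprocess

user_input = "my mama didnt love me && rm -rf /"
# No shell here: echo receives the whole string as one argument,
# so "&& rm -rf /" is printed, never executed
p = subprocess.Popen(["echo", user_input], stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out)
```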


(8) You didn't mention the commands module - Casebash
(112) @Casebash: I didn't bother mentioning it because the documentation states that The subprocess module provides more powerful facilities for spawning new processes and retrieving their results. Using the subprocess module is preferable to using the commands module. I similarly didn't mention the popen2 module because it's also obsolete, and both it and the commands module are actually gone in Python 3.0 and later. In lieu of editing my answer, I'll let these comments be the way in which these modules are mentioned. - Eli Courtwright
(15) Great article on the use of subprocess here : - PhoebeB
(11) commands module is deprecated now. - jldupont
(13) For many cases you don't need to instantiate a Popen object directly, you can use subprocess.check_call and subprocess.check_output - simao
(3) For those that are confused why he didn't convert the call example to a list: he added shell=True because the command is given as a string. This is false by default so a list is needed. - dlite922
(3) It's worth noting that the PIPE constant referenced is actually subprocess.PIPE (unless you've imported everything from subprocess). - plowman
(1) its sad that the commands module is deprecated, I loved it because of its ultra simple API. the subprocess module doc page is loooong and we need to deal with waiting. - v.oddou
(1) @v.oddou: rejoice, there is subprocess.getstatusoutput() that is almost the same as commands.getstatusoutput(). The subprocess version is more portable (it works on Windows). - jfs
Nice answer/explanation. How is this answer justifying Python's motto as described in this article ?… "Stylistically, Perl and Python have different philosophies. Perl’s best known mottos is " There’s More Than One Way to Do It". Python is designed to have one obvious way to do it" Seem like it should be the other way! In Perl I know only two ways to execute a command - using back-tick or open. - Jean
(3) If using Python 3.5+, use - phoenix
(1) @EliCourtwright would you add phoenix's comment to your answer? Being the current recommended option it shouldn't be in a hidden comment. - Federico
(1) @Federico: Good idea; I've just done so. - Eli Courtwright
What one typically needs to know is what is done with the child process's STDOUT and STDERR, because if they are ignored, under some (quite common) conditions, eventually the child process will issue a system call to write to STDOUT (STDERR too?) that would exceed the output buffer provided for the process by the OS, and the OS will cause it to block until some process reads from that buffer. So, with the currently recommended ways,, what exactly does "This does not capture stdout or stderr by default." imply? What about subprocess.check_output(..) and STDERR? - Evgeni Sergeev
You wrote about os.system that "this also lets you run commands which are simply shell commands and not actually external programs.". You did not, however, say which of the other options let you do that. Could you add this to the answer? - Stefan Monov
escaping is pretty simple if you use triple quotes """ - gerardw
[+204] [2008-09-18 18:20:46] EmmEff

I typically use:

import subprocess

p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
    print line,
retval = p.wait()

You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it'll behave like os.system().
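A sketch of reading the output line by line as it arrives, instead of all at once (Python 3 byte-string sentinel shown):

```python
import subprocess

p = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# iter(..., b"") stops at EOF; each line is printed as soon as it is read
for line in iter(p.stdout.readline, b""):
    print(line.rstrip().decode())
retval = p.wait()
```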

(21) .readlines() reads all lines at once i.e., it blocks until the subprocess exits (closes its end of the pipe). To read in real time (if there is no buffering issues) you could: for line in iter(p.stdout.readline, ''): print line, - jfs
Could you elaborate on what you mean by "if there is no buffering issues"? If the process blocks definitely, the subprocess call also blocks. The same could happen with my original example as well. What else could happen with respect to buffering? - EmmEff
(9) the child process may use block-buffering in non-interactive mode instead of line-buffering so p.stdout.readline() (note: no s at the end) won't see any data until the child fills its buffer. If the child doesn't produce much data then the output won't be in real time. See the second reason in Q: Why not just use a pipe (popen())?. Some workarounds are provided in this answer (pexpect, pty, stdbuf) - jfs
the buffering issue only matters if you want output in real time and doesn't apply to your code that doesn't print anything until all data is received - jfs
@J.F.Sebastian tried your method, however every time a command is entered the result of the previous command is printed out. - Paul
(2) @Paul: If your code produces unexpected results then you could create a complete minimal code example that reproduces the problem and post it as a new question. Mention what do you expect to happen and what happens instead. - jfs
(2) All right took your advice… thanks! - Paul
[+119] [2010-02-12 10:15:34] newtover

Some hints on detaching the child process from the calling one (starting the child process in background).

Suppose you want to start a long task from a CGI script; that is, the child process should live longer than the CGI script's own process.

The classical example from the subprocess module docs is:

import subprocess
import sys

# some code here

pid = subprocess.Popen([sys.executable, ""]) # call subprocess; "" stands for the child script

# some more code here

The idea here is that you do not want to wait in the line 'call subprocess' until the child script is finished. But it is not clear what happens after the line 'some more code here' from the example.

My target platform was freebsd, but the development was on windows, so I faced the problem on windows first.

On windows (win xp), the parent process will not finish until the child script has finished its work. It is not what you want in a CGI-script. The problem is not specific to Python; in the PHP community the problems are the same.

The solution is to pass DETACHED_PROCESS Process Creation Flag [1] to the underlying CreateProcess function in win API. If you happen to have installed pywin32 you can import the flag from the win32process module, otherwise you should define it yourself:


DETACHED_PROCESS = 0x00000008

pid = subprocess.Popen([sys.executable, ""],
                       creationflags=DETACHED_PROCESS).pid

(UPD 2015.10.27: @eryksun in a comment below notes that the semantically correct flag is CREATE_NEW_CONSOLE (0x00000010).)

On freebsd we have another problem: when the parent process is finished, it finishes the child processes as well. And that is not what you want in CGI-script either. Some experiments showed that the problem seemed to be in sharing sys.stdout. And the working solution was the following:

pid = subprocess.Popen([sys.executable, ""], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

I have not checked the code on other platforms and do not know the reasons for the behaviour on freebsd. If anyone knows, please share your ideas. Googling about starting background processes in Python does not shed any light yet.


thanks for the answer! i noticed a possible "quirk" with developing py2exe apps in pydev+eclipse. i was able to tell that the main script was not detached because eclipse's output window was not terminating; even if the script executes to completion it is still waiting for returns. but, when i tried compiling to a py2exe executable, the expected behavior occurs (runs the processes as detached, then quits). i am not sure, but the executable name is not in the process list anymore. this works for all approaches (os.system("start *"), os.spawnl with os.P_DETACH, subprocs, etc.) - maranas
Windows gotcha: even though I spawned process with DETACHED_PROCESS, when I killed my Python daemon all ports opened by it wouldn't free until all spawned processes terminate. WScript.Shell solved all my problems. Example here: - Alexey Lebedev
(1) you might also need CREATE_NEW_PROCESS_GROUP flag. See Popen waiting for child process even when the immediate child has terminated - jfs
I'm seeing import subprocess as sp;sp.Popen('calc') not waiting for the subprocess to complete. It seems the creationflags aren't necessary. What am I missing? - ubershmekel
@ubershmekel, I am not sure what you mean and don't have a windows installation. If I recall correctly, without the flags you can not close the cmd instance from which you started the calc. - newtover
I'm on Windows 8.1 and calc seems to survive the closing of python. - ubershmekel
Is there any significance to using '0x00000008'? Is that a specific value that has to be used or one of multiple options? - SuperBiasedMan
(1) The following is incorrect: "[o]n windows (win xp), the parent process will not finish until the has finished its work". The parent will exit normally, but the console window (conhost.exe instance) only closes when the last attached process exits, and the child may have inherited the parent's console. Setting DETACHED_PROCESS in creationflags avoids this by preventing the child from inheriting or creating a console. If you instead want a new console, use CREATE_NEW_CONSOLE (0x00000010). - eryksun
@eryksun, thank you. I wish I knew that 4 years ago. I added your remark to the answer. - newtover
(1) I didn't mean that executing as a detached process is incorrect. That said, you may need to set the standard handles to files, pipes, or os.devnull because some console programs exit with an error otherwise. Create a new console when you want the child process to interact with the user concurrently with the parent process. It would be confusing to try to do both in a single window. - eryksun
[+77] [2008-09-18 01:42:30] sirwart

I'd recommend using the subprocess module instead of os.system because it does shell escaping for you and is therefore much safer:

subprocess.call(['ping', 'localhost'])

And subprocess will allow you to easily attach to the input/output streams of the process, etc. - Joe Skora
(7) subprocess doesn't do shell escaping for you because it avoids using the shell entirely. It actually means that startup is a little faster and there's less overhead. - habnabit
[+63] [2008-09-18 01:37:49] Alexandra Franks
import os
cmd = 'ls -al'
os.system(cmd)  # run the command

If you want to return the results of the command, you can use os.popen [1]. However, this is deprecated since version 2.6 in favor of the subprocess module [2], which other answers have covered well.
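For comparison, a sketch of the subprocess way of reading a command's output:

```python
import subprocess

# check_output runs the command and returns its stdout as bytes,
# raising CalledProcessError on a non-zero exit status
output = subprocess.check_output(["ls", "-al"])
print(output.decode())
```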


(2) popen is deprecated in favor of subprocess. - Fox Wilson
Add cmd = subprocess.list2cmdline( [ 'my','list','of','tokens' ] ) to handle escapes. - BuvinJ
Concise respect mapping local os when running in multiple environment systems - Karlos
[+55] [2008-09-18 01:37:24] nimish
import os
os.system("your command")

Note that this is dangerous, since the command isn't cleaned. I leave it up to you to google for the relevant docs on the 'os' and 'sys' modules. There are a bunch of functions (exec* , spawn*) that will do similar things.

[+41] [2010-10-07 07:09:04] athanassis

Check the "pexpect" Python library, too.

It allows for interactive controlling of external programs/commands, even ssh, ftp, telnet, etc. You can just type something like:

import pexpect

child = pexpect.spawn('ftp')

child.expect('(?i)name .*: ')



[+40] [2012-03-13 00:12:54] Jorge E. Cardona

I always use fabric for things like this:

from fabric.operations import local
result = local('ls', capture=True)
print "Content:\n%s" % (result, )

But this seems to be a good tool: sh (Python subprocess interface) [1].

Look at an example:

from sh import vgdisplay
print vgdisplay()
print vgdisplay('-v')
print vgdisplay(v=True)

(2) sh is superior to the subprocess module. It allows better shell integration - Yauhen Yakimovich
[+36] [2011-04-28 20:29:29] Facundo Casco

If what you need is the output from the command you are calling,
then you can use subprocess.check_output [1] (Python 2.7+).

>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18  2007 /dev/null\n'

Also note the shell [2] parameter.

If shell is True, the specified command will be executed through the shell. This can be useful if you are using Python primarily for the enhanced control flow it offers over most system shells and still want convenient access to other shell features such as shell pipes, filename wildcards, environment variable expansion, and expansion of ~ to a user’s home directory. However, note that Python itself offers implementations of many shell-like features (in particular, glob, fnmatch, os.walk(), os.path.expandvars(), os.path.expanduser(), and shutil).
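For instance, a sketch of handing a whole pipeline to the shell with shell=True (a Unix shell is assumed; never build such strings from untrusted input):

```python
import subprocess

# The string is interpreted by /bin/sh, so the pipe works as it
# would at the prompt
out = subprocess.check_output("ls -l /dev/null | wc -l", shell=True)
print(out.strip().decode())
```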


[+34] [2012-10-28 05:14:01] Usman Khan

This is how I run my commands. This code has pretty much everything you need.

from subprocess import Popen, PIPE
cmd = "ls -l ~/"
p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
out, err = p.communicate()
print "Return code: ", p.returncode
print out.rstrip(), err.rstrip()

Passing commands as strings is normally a bad idea - Eric
(1) I think it's acceptable for hard-coded commands, if it increases readability. - Adam Matan
Thanks. Coming from Perl and Ruby, Python is a PITA when it comes to running commands. Read a lot of solutions. Like yours with popen. - sam
[+31] [2016-10-29 14:02:50] Tom Fuller

There are lots of different libraries which allow you to call external commands with Python. For each library I've given a description and shown an example of calling an external command. The command I used as the example is ls -l (list all files). If you want to find out more about any of the libraries, I've listed and linked the documentation for each of them.


These are all the libraries:

Hopefully this will help you make a decision on which library to use :)


Subprocess allows you to call external commands and connect them to their input/output/error pipes (stdin, stdout, and stderr). Subprocess is the default choice for running commands, but sometimes other modules are better.

subprocess.run(["ls", "-l"])  # Run command
subprocess.run(["ls", "-l"], stdout=subprocess.PIPE)  # This will run the command and return any output
subprocess.run(shlex.split("ls -l"))  # You can also use the shlex library to split the command


os is used for "operating system dependent functionality". It can also be used to call external commands with os.system and os.popen (Note: There is also a subprocess.Popen). os will always run the shell and is a simple alternative for people who don't need to, or don't know how to, use

os.system("ls -l") # run command
os.popen("ls -l").read() # This will run the command and return any output


sh is a subprocess interface which lets you call programs as if they were functions. This is useful if you want to run a command multiple times."-l")  # Run command normally
ls_cmd = sh.Command("ls")  # Save command as a variable
ls_cmd()  # Run command as if it were a function


plumbum is a library for "script-like" Python programs. You can call programs like functions as in sh. Plumbum is useful if you want to run a pipeline without the shell.

ls_cmd = plumbum.local["ls"]["-l"]  # get command
ls_cmd()  # run command


pexpect lets you spawn child applications, control them and find patterns in their output. This is a better alternative to subprocess for commands that expect a tty on Unix."ls -l")  # Run command as normal
child = pexpect.spawn('scp foo')  # Spawns child application
child.expect('Password:')  # When this is the output


fabric is a Python 2.5 and 2.7 library. It allows you to execute local and remote shell commands. Fabric is a simple alternative for running commands in a secure shell (SSH)

fabric.operations.local('ls -l') # Run command as normal
fabric.operations.local('ls -l', capture = True) # Run command and receive output


envoy is known as "subprocess for humans". It is used as a convenience wrapper around the subprocess module.

r ="ls -l")  # Run command
r.std_out # get output


commands contains wrapper functions for os.popen, but it has been removed from Python 3 since subprocess is a better alternative.

The edit was based on J.F. Sebastian's comment.

Did I miss any? - Tom Fuller
(3) It could be useful to specify explicitly when and why you would prefer one library over another e.g., pexpect is useful for commands that expect a tty on Unix, plumbum could be use to run a pipeline without invoking the shell, fabric is a simple way to run commands via ssh, subprocess (unlike os) never runs the shell unless you ask—it is the default choice for running external commands, sometimes you might need alternatives. - jfs
I've edited my answer based on your feedback :) - Tom Fuller
(1) os "external commands" functions are implemented in terms of subprocess internally. It might be useful for people from other languages (system(), popen() is a common API) who do not need the full power of subprocess module and who do not have the time to learn how to use and other subprocess' functionality. - jfs
[+30] [2013-04-11 17:17:53] Honza Javorek

With Standard Library

Use subprocess module [1]:

from subprocess import call
call(['ls', '-l'])

It is the recommended standard way. However, more complicated tasks (pipes, output, input, etc.) can be tedious to construct and write.

Note: shlex.split [2] can help you to parse the command for call and other subprocess functions in case you don't want (or you can't!) provide them in form of lists:

import shlex
from subprocess import call
call(shlex.split('ls -l'))

With External Dependencies

If you do not mind external dependencies, use plumbum [3]:

from plumbum.cmd import ifconfig

It is the best subprocess wrapper. It's cross-platform, i.e. it works on both Windows and Unix-like systems. Install by pip install plumbum.

Another popular library is sh [4]:

from sh import ifconfig

However, sh dropped Windows support, so it's not as awesome as it used to be. Install by pip install sh.


[+28] [2012-11-15 17:13:22] Joe

Update: is the recommended approach as of Python 3.5 [1] if your code does not need to maintain compatibility with earlier Python versions. It's more consistent and offers similar ease-of-use as Envoy. (Piping isn't as straightforward though. See this question for how [2].)

Here's some examples from the docs [3].

Run a process:

>>>["ls", "-l"])  # doesn't capture output
CompletedProcess(args=['ls', '-l'], returncode=0)

Raise on failed run:

>>>"exit 1", shell=True, check=True)
Traceback (most recent call last):
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1

Capture output:

>>>["ls", "-l", "/dev/null"], stdout=subprocess.PIPE)
CompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0,
stdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\n')

Original answer:

I recommend trying Envoy [4]. It's a wrapper for subprocess, which in turn aims to replace [5] the older modules and functions. Envoy is subprocess for humans.

Example usage from the readme [6]:

>>> r ='git config', data='data to pipe in', timeout=2)

>>> r.status_code
>>> r.std_out
'usage: git config [options]'
>>> r.std_err

Pipe stuff around too:

>>> r ='uptime | pbcopy')

>>> r.command
>>> r.status_code

>>> r.history
[<Response 'uptime'>]

(4) note: that ignores non-zero exit status by default is a regression compared to subprocess.check_call() or subprocess.check_output(). python -mthis: "Errors should never pass silently. Unless explicitly silenced." - jfs
Thanks man, envoy is working better than subprocess, you saved my day. The command was command = "ansible-playbook playbook.yaml --extra-vars=\"esxi_host={0} extravar1={1} extravar2={2} extravar3={3}\"".format(extravar1, extravar2, extravar3) - itirazimvar
[+24] [2013-04-18 01:09:33] Zuckonit

Without the output of the result:

import os
os.system("your command here")

With output of the result:

import commands
commands.getoutput("your command here")
commands.getstatusoutput("your command here")
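On Python 3, where the commands module has been removed, subprocess offers near drop-in replacements (a sketch):

```python
import subprocess

# Python 3 counterparts of commands.getoutput / commands.getstatusoutput
output = subprocess.getoutput("echo hello")                # output only
status, output = subprocess.getstatusoutput("echo hello")  # (status, output)
print(status, output)
```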

(2) I like the part with output of result. I needed this for using in sublime console. - Ramsharan
[+18] [2014-10-10 17:41:13] stuckintheshuck

There is also Plumbum [1]

>>> from plumbum import local
>>> ls = local["ls"]
>>> ls
LocalCommand(<LocalPath /bin/ls>)
>>> ls()
>>> notepad = local["c:\\windows\\notepad.exe"]
>>> notepad()                                   # Notepad window pops up
u''                                             # Notepad window is closed by user, command returns

or to add a bit of magic: from plumbum.cmd import ls, grep; output = (ls | grep['pattern'])() - jfs
[+16] [2008-09-18 01:43:30] Ben Hoffstein

...or for a very simple command:

import os
os.system('cat testfile')

[+16] [2008-09-18 01:53:27] Martin W

os.system is OK, but kind of dated. It's also not very secure. Instead, try subprocess. subprocess does not call sh directly and is therefore more secure than os.system.

Get more information here [1].


[+13] [2008-09-18 01:43:56] William Keller

os.system has been superseded by the subprocess module. Use subprocess instead.

(14) Perhaps an example of using subprocess? - Michael Mior
(5) Given that the accepted answer suggested subprocess earlier and with more detail, I see no value to this answer sticking around. - Mark Amery
What's wrong with os.system? It's the most intuitive, just runs what you put in the string, and doesn't have all the caveats people are listing under the accepted answer. - sudo
[+13] [2015-06-29 11:34:22] Priyankara


import os

cmd = 'ls -al'
os.system(cmd)  # run the command

os - This module provides a portable way of using operating system-dependent functionality.

For more os functions, here [1] is the documentation.


Is there a way to push the result of cmd to a file? I am curling a website and I want it to go to a file. - PolarisUser
This is by far the simplest and most powerful solution. @PolarisUser, you can use the generic Linux redirection: <command> > outputfile.txt - user2820579
it's also deprecated. use subprocess - Corey Goldberg
[+12] [2010-01-08 21:11:30] Atinc Delican

There is another difference here which is not mentioned above.

subprocess.Popen executes the command as a subprocess. In my case, I need to execute a file which needs to communicate with another program.

I tried subprocess, and execution was successful. However, it could not communicate with the other program. Everything is normal when I run both from the terminal.

One more: (NOTE: kwrite behaves different from other apps. If you try below with firefox results will not be the same)

If you try os.system("kwrite"), program flow freezes until the user closes kwrite. To overcome that I tried os.system("konsole -e kwrite") instead. This time the program continued to flow, but kwrite became a subprocess of the konsole.

Does anyone know how to run kwrite so it is not a subprocess (i.e., in the system monitor it must appear at the leftmost edge of the tree)?

[+12] [2011-01-18 19:21:44] cdunn2001

subprocess.check_call is convenient if you don't want to test return values. It throws an exception on any error.
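For example (a sketch), the exception can be caught and inspected:

```python
import subprocess

try:
    # ls exits non-zero for a nonexistent path, so check_call raises
    subprocess.check_call(["ls", "/no/such/path"])
except subprocess.CalledProcessError as err:
    print("command failed with status", err.returncode)
```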

[+11] [2012-06-11 22:28:35] Saurabh Bangad

os.system does not allow you to store results, so if you want to store results in some list or something, the subprocess module works.

[+11] [2014-04-30 14:37:04] Emil Stenström

I tend to use subprocess [1] together with shlex [2] (to handle escaping of quoted strings):

>>> import subprocess, shlex
>>> command = 'ls -l "/your/path/with spaces/"'
>>> call_params = shlex.split(command)
>>> print call_params
['ls', '-l', '/your/path/with spaces/']

[+9] [2012-07-16 15:16:24] admire

You can use Popen, and then you can check the procedure's status:

from subprocess import Popen

proc = Popen(['ls', '-l'])
if proc.poll() is None:
    proc.kill()

Check out subprocess.Popen [1].


[+9] [2014-05-01 20:49:01] houqp

Shameless plug, I wrote a library for this :P

It's basically a wrapper for popen and shlex for now. It also supports piping commands so you can chain commands easier in Python. So you can do things like:

ex('echo hello') | "awk '{print $2}'"

[+7] [2015-10-14 07:12:51] urosjarc

Here are my two cents: In my view, this is the best practice when dealing with external commands...

These are the return values from the execute method...

ok, stdout, stderr = execute(["ls", "-la"], "/home/user/desktop")

This is the execute method...

import subprocess

def execute(cmdArray, workingDir):

    stdout = ''
    stderr = ''

    try:
        try:
            process = subprocess.Popen(cmdArray, cwd=workingDir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=1)
        except OSError:
            return [False, '', 'ERROR : command(' + ' '.join(cmdArray) + ') could not get executed!']

        for line in iter(process.stdout.readline, b''):
            try:
                echoLine = line.decode("utf-8")
            except UnicodeDecodeError:
                echoLine = str(line)
            stdout += echoLine

        for line in iter(process.stderr.readline, b''):
            try:
                echoLine = line.decode("utf-8")
            except UnicodeDecodeError:
                echoLine = str(line)
            stderr += echoLine

    except (KeyboardInterrupt, SystemExit) as err:
        return [False, '', str(err)]

    returnCode = process.wait()
    if returnCode != 0 or stderr != '':
        return [False, stdout, stderr]
    else:
        return [True, stdout, stderr]

Deadlock potential: use the .communicate method instead - ppperry
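As the comment notes, draining stdout to EOF before touching stderr can deadlock if the child fills the stderr pipe first; communicate() reads both pipes concurrently. A minimal deadlock-free sketch of the same idea (the helper name execute_safe is hypothetical, not from the original answer):

```python
import subprocess

def execute_safe(cmd_array, working_dir='.'):
    """Run a command; return [success, stdout, stderr] without pipe deadlocks."""
    try:
        process = subprocess.Popen(cmd_array, cwd=working_dir,
                                   stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    except OSError:
        return [False, '', 'ERROR : command(' + ' '.join(cmd_array) + ') could not get executed!']
    # communicate() drains stdout and stderr concurrently, so neither pipe can fill up
    out, err = process.communicate()
    return [process.returncode == 0, out.decode('utf-8'), err.decode('utf-8')]
```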
[+7] [2016-06-17 09:14:24] Swadhikar C

In Windows you can just import the subprocess module and run external commands by creating a subprocess.Popen() and calling its communicate() and wait() methods, as below:

# Python script to run a command line
import subprocess

def execute(cmd):
    """
    Purpose  : To execute a command and return its output
    Argument : cmd - command to execute
    Return   : result - the command's stdout
    """
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (result, error) = process.communicate()

    rc = process.wait()

    if rc != 0:
        print "Error: failed to execute command:", cmd
        print error
    return result
# def

command = "tasklist | findstr python"   # findstr is the Windows built-in equivalent of grep
print "This process detail: \n", execute(command)


This process detail:
python.exe                     604 RDP-Tcp#0                  4      5,660 K

[+7] [2016-07-20 09:50:01] IRSHAD

To fetch the network id from the openstack neutron:

import os
netid = "nova net-list | awk '/ External / { print $2 }'"
temp = os.popen(netid).read()   # temp also contains a trailing newline (\n)
networkId = temp.rstrip()
print(networkId)

Output of nova net-list

| ID                                   | Label      | CIDR |
| 431c9014-5b5d-4b51-a357-66020ffbb123 | test1      | None |
| 27a74fcd-37c0-4789-9414-9531b7e3f126 | External   | None |
| 5a2712e9-70dc-4b0e-9281-17e02f4684c9 | management | None |
| 7aa697f5-0e60-4c15-b4cc-9cb659698512 | Internal   | None |

Output of print(networkId)

27a74fcd-37c0-4789-9414-9531b7e3f126

[+7] [2016-11-27 00:15:34] yuval

Under Linux, in case you would like to call an external command that will execute independently (i.e., keep running after the Python script terminates), you can use a simple queue such as task spooler [1] or the at [2] command.

An example with task spooler:

import os
os.system('ts <your-command>')

Notes about task spooler (ts):

  1. You could set the number of concurrent processes to be run ("slots") with:

    ts -S <number-of-slots>

  2. Installing ts doesn't require admin privileges. You can download and compile it from source with a simple make, add it to your path, and you're done.
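If you prefer to stay in the standard library, a hedged alternative sketch (POSIX only): subprocess.Popen with start_new_session=True puts the child in its own session, so it keeps running after the script exits:

```python
import subprocess

# start_new_session=True makes the child call setsid() (POSIX), detaching it
# from this script's process group; DEVNULL keeps no pipes open to the parent.
proc = subprocess.Popen(['sleep', '5'],
                        start_new_session=True,
                        stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL)
print('spawned pid', proc.pid)
```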


[+4] [2012-07-25 06:51:50] JustCode

The simplest way to run any command and get the result back:

from commands import getstatusoutput

def run_command():
    try:
        return getstatusoutput("ls -ltr")
    except Exception, e:
        return None
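On Python 3, where the commands module is gone, subprocess.getstatusoutput offers the same (status, output) interface:

```python
import subprocess

# Python 3 replacement for commands.getstatusoutput
status, output = subprocess.getstatusoutput("echo hello")
print(status, output)   # 0 hello
```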

I like this one: is this going to be deprecated in python 3.0? - 719016
[+4] [2012-08-13 18:36:32] mdwhatcott

I quite like shell_command [1] for its simplicity. It's built on top of the subprocess module.


[+4] [2013-04-17 14:10:06] Jens Timmerman

There are a lot of different ways to run external commands in Python, and all of them have their own plus sides and drawbacks.

My colleagues and I have been writing Python system administration tools, so we need to run a lot of external commands, and sometimes you want them to block or run asynchronously, time out, update every second, etc.

There are also different ways of handling the return code and errors, and you might want to parse the output, and provide new input (in an expect [1] kind of style). Or you will need to redirect stdin, stdout and stderr to run in a different tty (e.g., when using screen).

So you will probably have to write a lot of wrappers around external commands. Here is a Python module we have written that can handle almost anything you would want, and if not, it is flexible enough that you can easily extend it:


[+4] [2013-06-19 23:18:34] imagineerThat

Just to add to the discussion: if you use a Python console, you can call external commands from IPython [1]. While at the IPython prompt, you can call shell commands by prefixing them with '!'. You can also combine Python code with the shell, and assign the output of shell scripts to Python variables.

For instance:

In [9]: mylist = !ls

In [10]: mylist

[+4] [2016-03-17 10:48:32] Chiel ten Brinke

For Python 3.5+ it is recommended that you use the run function from the subprocess module [1]. This returns a CompletedProcess object, from which you can easily obtain the output as well as return code.

from subprocess import PIPE, run

command = ['echo', 'hello']
result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True)
print(result.returncode, result.stdout, result.stderr)
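run() can also validate the exit status for you: with check=True, a nonzero status raises CalledProcessError instead of being silently returned:

```python
from subprocess import run, PIPE, CalledProcessError

try:
    run(['ls', '/no/such/path'], stdout=PIPE, stderr=PIPE,
        universal_newlines=True, check=True)
except CalledProcessError as e:
    print('failed with exit status', e.returncode)
```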

@downvoter afaik there's nothing wrong with this answer. Please leave a constructive comment if you think otherwise. - Chiel ten Brinke
An answer with the run function was added in 2015; you repeated it. I think that was the reason for the downvote - Budulianin
[+4] [2016-10-11 02:26:49] Rajiv Sharma

Here is how to call an external command and return or print the command's output:

Python's subprocess [1] check_output is good for this:

Run command with arguments and return its output as a byte string.

import subprocess
proc = subprocess.check_output('ipconfig /all')
print proc

[+3] [2014-08-24 21:46:12] amehta

A simple way is to use the os module:

import os
os.system('ls')

Alternatively you can also use the subprocess module

import subprocess
subprocess.call('ls')

If you want the result to be stored in a variable try:

import subprocess
r = subprocess.check_output('ls')
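Note that on Python 3, check_output returns bytes; decode it if you want a str:

```python
import subprocess

r = subprocess.check_output(['echo', 'hello'])   # b'hello\n' on Python 3
text = r.decode('utf-8').strip()                 # 'hello'
print(text)
```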

[+3] [2015-07-24 19:12:21] Asif Hasnain

Using the Popen function of the subprocess Python module is the simplest way of running Linux commands. There, the Popen.communicate() function will give you your command's output. For example:

import subprocess

process = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)   # pass the command and its arguments
stdout, stderr = process.communicate()   # get the command's output and error

[+3] [2016-09-12 09:44:09] liuyip

There are many ways to call a command.

  • For example:

Suppose and.exe needs two parameters. In cmd we can call it with and.exe 2 3 and it shows 5 on the screen.

If we use a Python script to call and.exe, we can do it like:

  1. os.system(cmd,...)

    • os.system(("and.exe" + " " + "2" + " " + "3"))
  2. os.popen(cmd,...)

    • os.popen(("and.exe" + " " + "2" + " " + "3"))
  3. subprocess.Popen(cmd,...)
    • subprocess.Popen(("and.exe" + " " + "2" + " " + "3"))

Concatenating by hand is too hard, so we can join the command with spaces instead:

import os
cmd = " ".join([exename] + parameters)
os.system(cmd)

[+2] [2014-03-14 02:59:05] Jake W

After some research, I have the following code which works very well for me. It basically prints both stdout and stderr in real time. Hope it helps someone else who needs it.

import subprocess
import sys
import threading

stdout_result = 1
stderr_result = 1

def stdout_thread(pipe):
    global stdout_result
    while True:
        out = pipe.stdout.read(1)
        stdout_result = pipe.poll()
        if out == '' and stdout_result is not None:
            break

        if out != '':
            sys.stdout.write(out)
            sys.stdout.flush()

def stderr_thread(pipe):
    global stderr_result
    while True:
        err = pipe.stderr.read(1)
        stderr_result = pipe.poll()
        if err == '' and stderr_result is not None:
            break

        if err != '':
            sys.stderr.write(err)
            sys.stderr.flush()

def exec_command(command, cwd=None):
    if cwd is not None:
        print '[' + ' '.join(command) + '] in ' + cwd
    else:
        print '[' + ' '.join(command) + ']'

    p = subprocess.Popen(
        command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd
    )

    out_thread = threading.Thread(name='stdout_thread', target=stdout_thread, args=(p,))
    err_thread = threading.Thread(name='stderr_thread', target=stderr_thread, args=(p,))

    err_thread.start()
    out_thread.start()

    out_thread.join()
    err_thread.join()

    return stdout_result + stderr_result

(1) your code may lose data when the subprocess exits while there is some data is buffered. Read until EOF instead, see teed_call() - jfs
[+2] [2014-04-12 11:58:23] andruso

Use [1]:

from subprocess import call

# using list
call(["echo", "Hello", "world"])

# single string argument varies across platforms so better split it
call("echo Hello world".split(" "))

[+2] [2016-04-28 11:18:22] Viswesn

I would recommend the following 'run' method; it returns STDOUT, STDERR and the exit status as a dictionary. The caller can read the dictionary returned by 'run' to know the actual state of the process.

import subprocess

def run(cmd):
    print "+ DEBUG exec({0})".format(cmd)
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True, shell=True)
    (out, err) = p.communicate()
    ret        = p.wait()
    out        = filter(None, out.split('\n'))
    err        = filter(None, err.split('\n'))
    ret        = True if ret == 0 else False
    return dict({'output': out, 'error': err, 'status': ret})
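On Python 3.5+ the same dictionary can be built with subprocess.run; note that filter() is lazy in Python 3, so list comprehensions are used instead (run3 is a hypothetical name for this counterpart):

```python
import subprocess

def run3(cmd):
    # Python 3 counterpart of the run() method above
    p = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE, universal_newlines=True)
    return {'output': [line for line in p.stdout.split('\n') if line],
            'error':  [line for line in p.stderr.split('\n') if line],
            'status': p.returncode == 0}
```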

[+2] [2016-06-24 11:29:00] David Okwii


import subprocess

p = subprocess.Popen("df -h", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print p.split("\n")

It gives nice output which is easier to work with:

['Filesystem      Size  Used Avail Use% Mounted on',
 '/dev/sda6        32G   21G   11G  67% /',
 'none            4.0K     0  4.0K   0% /sys/fs/cgroup',
 'udev            1.9G  4.0K  1.9G   1% /dev',
 'tmpfs           387M  1.4M  386M   1% /run',
 'none            5.0M     0  5.0M   0% /run/lock',
 'none            1.9G   58M  1.9G   3% /run/shm',
 'none            100M   32K  100M   1% /run/user',
 '/dev/sda5       340G  222G  100G  69% /home',
 '']
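Splitting on "\n" leaves an empty string after the final newline; str.splitlines() avoids that (a sketch using printf, assuming a Unix shell):

```python
import subprocess

# printf's argument contains real newline characters, which it echoes as-is
out = subprocess.check_output(['printf', 'a\nb\n'], universal_newlines=True)
print(out.split('\n'))    # ['a', 'b', ''] - trailing empty element
print(out.splitlines())   # ['a', 'b']
```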

[+1] [2013-04-18 17:39:50] Colonel Panic

The subprocess module [1] described above by Eli is very powerful, but the syntax to make a bog-standard system call and inspect its output is unnecessarily prolix.

The easiest way to make a system call is with the commands module [2] (Linux only).

> import commands
> commands.getstatusoutput("grep matter alice-in-wonderland.txt")
(0, "'Then it doesn't matter which way you go,' said the Cat.")

The first item in the tuple is the return code of the process. The second item is its standard output (and standard error, merged).

The Python devs have 'deprecated' the commands module, but that doesn't mean you shouldn't use it. Only that they're not developing it anymore, which is okay, because it's already perfect (at its small but important function).


(6) Deprecated doesn't only mean "isn't developed anymore" but also "you are discouraged from using this". Deprecated features may break anytime, may be removed anytime, or may dangerous. You should never use this in important code. Deprecation is merely a better way than removing a feature immediately, because it gives programmers the time to adapt and replace their deprecated functions. - Misch
(2) Just to prove my point: "Deprecated since version 2.6: The commands module has been removed in Python 3. Use the subprocess module instead." - Misch
It's not dangerous! The Python devs are careful only to break features between major releases (ie. between 2.x and 3.x). I've been using the commands module since 2004's Python 2.4. It works the same today in Python 2.7. - Colonel Panic
(6) With dangerous, I didn't mean that it may be removed anytime (that's a different problem), neither did I say that it is dangerous to use this specific module. However it may become dangerous if a security vulnerability is discovered but the module isn't further developed or maintained. (I don't want to say that this module is or isn't vulnerable to security issues, just talking about deprecated stuff in general) - Misch