Stack Overflow: Calling an external command in Python
[+3085] [45] freshWoWer
[2008-09-18 01:35:30]
[ python shell command subprocess external ]
[ https://stackoverflow.com/questions/89228/calling-an-external-command-in-python ]

How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script?

[+2957] [2008-09-18 01:39:35] David Cournapeau [ACCEPTED]

Look at the subprocess module [1] in the standard library:

from subprocess import call
call(["ls", "-l"])

The advantage of subprocess over os.system is that it is more flexible: you can get stdout and stderr, the "real" status code, better error handling, etc.

The official docs [2] recommend the subprocess module over the alternative os.system():

The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function [ os.system() [3]].

The " Replacing Older Functions with the subprocess Module [4]" section in the subprocess documentation may have some helpful recipes.

Official documentation on the subprocess module:

[1] https://docs.python.org/2/library/subprocess.html
[2] https://docs.python.org/library/os.html#os.system
[3] https://docs.python.org/library/os.html#os.system
[4] https://docs.python.org/2/library/subprocess.html#replacing-older-functions-with-the-subprocess-module
[5] https://docs.python.org/2/library/subprocess.html#module-subprocess
[6] https://docs.python.org/3/library/subprocess.html#module-subprocess

Is there a way to use variable substitution? I.e., I tried to do echo $PATH by using call(["echo", "$PATH"]), but it just echoed the literal string $PATH instead of doing any substitution. I know I could get the PATH environment variable, but I'm wondering if there is an easy way to have the command behave exactly as if I had executed it in bash. - Kevin Wheeler
@KevinWheeler You'll have to use shell=True for that to work. - SethMMorton
(4) @KevinWheeler You should NOT use shell=True, for this purpose Python comes with os.path.expandvars. In your case you can write: os.path.expandvars("$PATH"). @SethMMorton please reconsider your comment -> Why not to use shell=True - Murmel
(7) As of Python 3.5, it is suggested that you use subprocess.run instead of subprocess.call. docs.python.org/3/library/subprocess.html - Hannes Karppila
The example calls ls -l but does not give access to its output (stdout is not accessible). I find that confusing -- you could use a command without stdout instead, such as touch. - florisla
Seems to be pretty clunky on Python3 + Windows. If I enter a filename with special characters like &, it will throw a FileNotFoundError. Even though the file is in the working directory where I executed python, and obviously does exist. - Braden Best
does call block? i.e. if I want to run multiple commands in a for loop how do I do it without it blocking my python script? I don't care about the output of the command I just want to run lots of them. - Charlie Parker
I understand using call for more advanced features, but I don't see anything wrong with using system if it does what you need. - sudo
[+2222] [2008-09-18 13:11:46] Eli Courtwright

Here's a summary of the ways to call external programs and the advantages and disadvantages of each:

  1. os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example:

    os.system("some_command < input_file | another_command > output_file")  
    

    However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs. See the documentation [1].

  2. stream = os.popen("some_command with args") will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are three other variants of popen (popen2, popen3, and popen4) that all handle the I/O slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass it as a list, then you don't need to worry about escaping anything. See the documentation [2].

  3. The Popen class of the subprocess module. This is intended as a replacement for os.popen but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say:

    print subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE).stdout.read()
    

    instead of:

    print os.popen("echo Hello World").read()
    

    but it is nice to have all of the options there in one unified class instead of 4 different popen functions. See the documentation [3].

  4. The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:

    return_code = subprocess.call("echo Hello World", shell=True)  
    

    See the documentation [4].

  5. If you're on Python 3.5 or later, you can use the new subprocess.run [5] function, which is a lot like the above but even more flexible and returns a CompletedProcess [6] object when the command finishes executing.

  6. The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.

The subprocess module should probably be what you use.

Finally, please be aware that for all methods where you pass the final command to be executed by the shell as a string, you are responsible for escaping it. There are serious security implications if any part of the string that you pass cannot be fully trusted (for example, if a user is entering some or any part of the string). If you are unsure, only use these methods with constants. To give you a hint of the implications, consider this code:

print subprocess.Popen("echo %s" % user_input, shell=True, stdout=subprocess.PIPE).stdout.read()

and imagine that the user enters "my mama didnt love me && rm -rf /".
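A safer sketch of the same call: pass the arguments as a list so that no shell ever interprets user_input (Python 2 syntax, matching the example above):

# The list form bypasses the shell entirely; user_input is passed verbatim as one argument.
print subprocess.Popen(["echo", user_input], stdout=subprocess.PIPE).stdout.read()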

[1] https://docs.python.org/2/library/os.html#os.system
[2] https://docs.python.org/2/library/os.html#os.popen
[3] https://docs.python.org/2/library/subprocess.html#popen-constructor
[4] https://docs.python.org/2/library/subprocess.html#subprocess.call
[5] https://docs.python.org/3.5/library/subprocess.html#subprocess.run
[6] https://docs.python.org/3.5/library/subprocess.html#subprocess.CompletedProcess

(2) Nice answer/explanation. How is this answer justifying Python's motto as described in this article ? fastcompany.com/3026446/… "Stylistically, Perl and Python have different philosophies. Perl’s best known mottos is " There’s More Than One Way to Do It". Python is designed to have one obvious way to do it" Seem like it should be the other way! In Perl I know only two ways to execute a command - using back-tick or open. - Jean
(4) If using Python 3.5+, use subprocess.run(). docs.python.org/3.5/library/subprocess.html#subprocess.run - phoenix
(1) What one typically needs to know is what is done with the child process's STDOUT and STDERR, because if they are ignored, under some (quite common) conditions, eventually the child process will issue a system call to write to STDOUT (STDERR too?) that would exceed the output buffer provided for the process by the OS, and the OS will cause it to block until some process reads from that buffer. So, with the currently recommended ways, subprocess.run(..), what exactly does "This does not capture stdout or stderr by default." imply? What about subprocess.check_output(..) and STDERR? - Evgeni Sergeev
which of the commands you recommended block my script? i.e. if I want to run multiple commands in a for loop how do I do it without it blocking my python script? I don't care about the output of the command I just want to run lots of them. - Charlie Parker
[+212] [2008-09-18 18:20:46] EmmEff

I typically use:

import subprocess

p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
    print line,
retval = p.wait()

You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it'll behave like os.system().


(23) .readlines() reads all lines at once i.e., it blocks until the subprocess exits (closes its end of the pipe). To read in real time (if there is no buffering issues) you could: for line in iter(p.stdout.readline, ''): print line, - jfs
Could you elaborate on what you mean by "if there is no buffering issues"? If the process blocks definitely, the subprocess call also blocks. The same could happen with my original example as well. What else could happen with respect to buffering? - EmmEff
(10) the child process may use block-buffering in non-interactive mode instead of line-buffering so p.stdout.readline() (note: no s at the end) won't see any data until the child fills its buffer. If the child doesn't produce much data then the output won't be in real time. See the second reason in Q: Why not just use a pipe (popen())?. Some workarounds are provided in this answer (pexpect, pty, stdbuf) - jfs
(1) the buffering issue only matters if you want output in real time and doesn't apply to your code that doesn't print anything until all data is received - jfs
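Folding jfs's suggestion back into the answer's example, a sketch that prints lines as they arrive (Python 2, and still subject to the buffering caveats above):

import subprocess

p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# Iterate until readline() returns '' (EOF) instead of collecting everything at once.
for line in iter(p.stdout.readline, ''):
    print line,
retval = p.wait()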
[+126] [2010-02-12 10:15:34] newtover

Some hints on detaching the child process from the calling one (starting the child process in background).

Suppose you want to start a long task from a CGI script; that is, the child process should outlive the CGI script's own execution.

The classical example from the subprocess module docs is:

import subprocess
import sys

# some code here

pid = subprocess.Popen([sys.executable, "longtask.py"]) # call subprocess

# some more code here

The idea here is that you do not want to wait in the line 'call subprocess' until the longtask.py is finished. But it is not clear what happens after the line 'some more code here' from the example.

My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.

On Windows (Windows XP), the parent process will not finish until longtask.py has finished its work. That is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.

The solution is to pass the DETACHED_PROCESS Process Creation Flag [1] to the underlying CreateProcess function in the Windows API. If you happen to have installed pywin32, you can import the flag from the win32process module; otherwise you should define it yourself:

DETACHED_PROCESS = 0x00000008

pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid

UPDATE (2015-10-27): @eryksun notes in a comment below that the semantically correct flag is CREATE_NEW_CONSOLE (0x00000010).

On FreeBSD we have another problem: when the parent process finishes, it terminates the child processes as well. And that is not what you want in a CGI script either. Some experiments showed that the problem seemed to be in sharing sys.stdout. And the working solution was the following:

pid = subprocess.Popen([sys.executable, "longtask.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

I have not checked the code on other platforms and do not know the reasons for the behaviour on FreeBSD. If anyone knows, please share your ideas. Googling about starting background processes in Python does not shed any light yet.

[1] https://msdn.microsoft.com/en-us/library/windows/desktop/ms684863(v=vs.85).aspx

i noticed a possible "quirk" with developing py2exe apps in pydev+eclipse. i was able to tell that the main script was not detached because eclipse's output window was not terminating; even if the script executes to completion it is still waiting for returns. but, when i tried compiling to a py2exe executable, the expected behavior occurs (runs the processes as detached, then quits). i am not sure, but the executable name is not in the process list anymore. this works for all approaches (os.system("start *"), os.spawnl with os.P_DETACH, subprocs, etc.) - maranas
Windows gotcha: even though I spawned process with DETACHED_PROCESS, when I killed my Python daemon all ports opened by it wouldn't free until all spawned processes terminate. WScript.Shell solved all my problems. Example here: pastebin.com/xGmuvwSx - Alexey Lebedev
(1) you might also need CREATE_NEW_PROCESS_GROUP flag. See Popen waiting for child process even when the immediate child has terminated - jfs
I'm seeing import subprocess as sp;sp.Popen('calc') not waiting for the subprocess to complete. It seems the creationflags aren't necessary. What am I missing? - ubershmekel
@ubershmekel, I am not sure what you mean and don't have a windows installation. If I recall correctly, without the flags you can not close the cmd instance from which you started the calc. - newtover
I'm on Windows 8.1 and calc seems to survive the closing of python. - ubershmekel
Is there any significance to using '0x00000008'? Is that a specific value that has to be used or one of multiple options? - SuperBiasedMan
(1) The following is incorrect: "[o]n windows (win xp), the parent process will not finish until the longtask.py has finished its work". The parent will exit normally, but the console window (conhost.exe instance) only closes when the last attached process exits, and the child may have inherited the parent's console. Setting DETACHED_PROCESS in creationflags avoids this by preventing the child from inheriting or creating a console. If you instead want a new console, use CREATE_NEW_CONSOLE (0x00000010). - eryksun
@eryksun, thank you. I wish I knew that 4 years ago. I added your remark to the answer. - newtover
(1) I didn't mean that executing as a detached process is incorrect. That said, you may need to set the standard handles to files, pipes, or os.devnull because some console programs exit with an error otherwise. Create a new console when you want the child process to interact with the user concurrently with the parent process. It would be confusing to try to do both in a single window. - eryksun
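Putting the comment thread together, a sketch for Windows that combines the creation flags mentioned above and redirects the standard handles as eryksun suggests (flag values are from the Win32 API; treat this as untested):

import subprocess
import sys

DETACHED_PROCESS = 0x00000008          # the child gets no console at all
CREATE_NEW_PROCESS_GROUP = 0x00000200  # the child ignores the parent's Ctrl+C

p = subprocess.Popen([sys.executable, "longtask.py"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP)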
[+80] [2008-09-18 01:42:30] sirwart

I'd recommend using the subprocess module instead of os.system because it avoids invoking the shell by default (arguments are passed directly to the program) and is therefore much safer: http://docs.python.org/library/subprocess.html

subprocess.call(['ping', 'localhost'])

[+67] [2008-09-18 01:37:49] Alexandra Franks
import os
cmd = 'ls -al'
os.system(cmd)

If you want to return the results of the command, you can use os.popen [1]. However, it has been deprecated since version 2.6 in favor of the subprocess module [2], which other answers have covered well.

[1] https://docs.python.org/2/library/os.html#os.popen
[2] https://docs.python.org/2/library/subprocess.html#module-subprocess

(3) popen is deprecated in favor of subprocess. - Fox Wilson
You can also save your result with the os.system call, since it works like the UNIX shell itself, like for example os.system('ls -l > test2.txt') - Stefan Gruenwald
[+58] [2008-09-18 01:37:24] nimish
import os
os.system("your command")

Note that this is dangerous, since the command string isn't sanitized. I leave it up to you to look up the relevant documentation on the 'os' and 'sys' modules. There are a bunch of functions (exec* and spawn*) that will do similar things.


[+42] [2010-10-07 07:09:04] athanassis

Check the "pexpect" Python library, too.

It allows interactive control of external programs/commands, even ssh, ftp, telnet, etc. You can just type something like:

import pexpect

child = pexpect.spawn('ftp 192.168.0.24')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')
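From there you can keep driving the session; a sketch of continuing the same FTP dialogue (the password string and prompt pattern are hypothetical):

child.sendline('guest@example.com')  # hypothetical password for anonymous FTP
child.expect('ftp> ')
child.sendline('ls')
child.expect('ftp> ')
print child.before                   # everything printed before the prompt, i.e. the listing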

[+41] [2012-03-13 00:12:54] Jorge E. Cardona

I always use fabric for things like this:

from fabric.operations import local
result = local('ls', capture=True)
print "Content:\n%s" % (result, )

But this seems to be a good tool: sh (Python subprocess interface) [1].

Look at an example:

from sh import vgdisplay
print vgdisplay()
print vgdisplay('-v')
print vgdisplay(v=True)
[1] https://github.com/amoffat/sh

[+37] [2011-04-28 20:29:29] Facundo Casco

If what you need is the output from the command you are calling,
then you can use subprocess.check_output [1] (Python 2.7+).

>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18  2007 /dev/null\n'

Also note the shell [2] parameter.

If shell is True, the specified command will be executed through the shell. This can be useful if you are using Python primarily for the enhanced control flow it offers over most system shells and still want convenient access to other shell features such as shell pipes, filename wildcards, environment variable expansion, and expansion of ~ to a user’s home directory. However, note that Python itself offers implementations of many shell-like features (in particular, glob, fnmatch, os.walk(), os.path.expandvars(), os.path.expanduser(), and shutil).
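For example, a sketch of the shell=True form running a pipeline; only do this with trusted, constant command strings:

subprocess.check_output("ls -l /dev/null | wc -l", shell=True)  # the whole string goes to /bin/sh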

[1] https://docs.python.org/3.6/library/subprocess.html#subprocess.check_output
[2] https://docs.python.org/3.6/library/subprocess.html#frequently-used-arguments

[+37] [2016-10-29 14:02:50] Tom Fuller

There are lots of different libraries which allow you to call external commands from Python. For each library I've given a description and shown an example of calling an external command. The command I used as the example is ls -l (list all files). If you want to find out more about any of these libraries, I've linked the documentation for each of them.


Hopefully this will help you make a decision on which library to use :)

subprocess

Subprocess allows you to call external commands and connect them to their input/output/error pipes (stdin, stdout, and stderr). Subprocess is the default choice for running commands, but sometimes other modules are better.

import subprocess, shlex

subprocess.run(["ls", "-l"])  # run the command
subprocess.run(["ls", "-l"], stdout=subprocess.PIPE)  # run the command and capture its output
subprocess.run(shlex.split("ls -l"))  # shlex can split a command string into a list for you

os

os is used for "operating system dependent functionality". It can also be used to call external commands with os.system and os.popen (note: there is also a subprocess.Popen). os.system will always run the shell and is a simple alternative for people who don't need to, or don't know how to, use subprocess.run.

os.system("ls -l") # run command
os.popen("ls -l").read() # This will run the command and return any output

sh

sh is a subprocess interface which lets you call programs as if they were functions. This is useful if you want to run a command multiple times.

sh.ls("-l") # Run command normally
ls_cmd = sh.Command("ls") # Save command as a variable
ls_cmd() # Run command as if it were a function

plumbum

plumbum is a library for "script-like" Python programs. You can call programs like functions as in sh. Plumbum is useful if you want to run a pipeline without the shell.

ls_cmd = plumbum.local["ls"]["-l"]  # look up the program, then bind its argument
ls_cmd()  # run command

pexpect

pexpect lets you spawn child applications, control them and find patterns in their output. This is a better alternative to subprocess for commands that expect a tty on Unix.

pexpect.run("ls -l") # Run command as normal
child = pexpect.spawn('scp foo user@example.com:.') # Spawns child application
child.expect('Password:') # When this is the output
child.sendline('mypassword')

fabric

fabric is a Python 2.5 and 2.7 library. It allows you to execute local and remote shell commands. Fabric is a simple alternative for running commands in a secure shell (SSH).

fabric.operations.local('ls -l') # Run command as normal
fabric.operations.local('ls -l', capture = True) # Run command and receive output

envoy

envoy is known as "subprocess for humans". It is used as a convenience wrapper around the subprocess module.

r = envoy.run("ls -l") # Run command
r.std_out # get output

commands

commands contains wrapper functions for os.popen, but it has been removed from Python 3 since subprocess is a better alternative.

The edit was based on J.F. Sebastian's comment.


[+35] [2012-10-28 05:14:01] Usman Khan

This is how I run my commands. This code has pretty much everything you need:

from subprocess import Popen, PIPE
cmd = "ls -l ~/"
p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
out, err = p.communicate()
print "Return code: ", p.returncode
print out.rstrip(), err.rstrip()

(1) Passing commands as strings is normally a bad idea - Eric
(1) I think it's acceptable for hard-coded commands, if it increases readability. - Adam Matan
[+31] [2013-04-11 17:17:53] Honza Javorek

With Standard Library

Use subprocess module [1]:

from subprocess import call
call(['ls', '-l'])

It is the recommended standard way. However, more complicated tasks (pipes, output, input, etc.) can be tedious to construct and write.

Note: shlex.split [2] can help you to parse the command for call and other subprocess functions in case you don't want (or you can't!) provide them in form of lists:

import shlex
from subprocess import call
call(shlex.split('ls -l'))

With External Dependencies

If you do not mind external dependencies, use plumbum [3]:

from plumbum.cmd import ifconfig
print(ifconfig['wlan0']())

It is the best subprocess wrapper. It's cross-platform, i.e. it works on both Windows and Unix-like systems. Install by pip install plumbum.

Another popular library is sh [4]:

from sh import ifconfig
print(ifconfig('wlan0'))

However, sh dropped Windows support, so it's not as awesome as it used to be. Install by pip install sh.

[1] http://docs.python.org/2/library/subprocess.html
[2] https://docs.python.org/2/library/shlex.html#shlex.split
[3] https://pypi.python.org/pypi/plumbum
[4] https://pypi.python.org/pypi/sh

[+29] [2012-11-15 17:13:22] Joe

Update:

subprocess.run is the recommended approach as of Python 3.5 [1] if your code does not need to maintain compatibility with earlier Python versions. It's more consistent and offers similar ease of use to Envoy. (Piping isn't as straightforward though; see this question for how [2].)

Here are some examples from the docs [3].

Run a process:

>>> subprocess.run(["ls", "-l"])  # doesn't capture output
CompletedProcess(args=['ls', '-l'], returncode=0)

Raise on failed run:

>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
  ...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1

Capture output:

>>> subprocess.run(["ls", "-l", "/dev/null"], stdout=subprocess.PIPE)
CompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0,
stdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\n')

Original answer:

I recommend trying Envoy [4]. It's a wrapper for subprocess, which in turn aims to replace [5] the older modules and functions. Envoy is subprocess for humans.

Example usage from the readme [6]:

>>> r = envoy.run('git config', data='data to pipe in', timeout=2)

>>> r.status_code
129
>>> r.std_out
'usage: git config [options]'
>>> r.std_err
''

Pipe stuff around too:

>>> r = envoy.run('uptime | pbcopy')

>>> r.command
'pbcopy'
>>> r.status_code
0

>>> r.history
[<Response 'uptime'>]
[1] https://docs.python.org/3.6/whatsnew/3.5.html#whatsnew-subprocess
[2] https://stackoverflow.com/questions/7389662/link-several-popen-commands-with-pipes
[3] https://docs.python.org/3.6/library/subprocess.html#subprocess.run
[4] https://github.com/kennethreitz/envoy
[5] http://docs.python.org/2/library/subprocess.html
[6] https://github.com/kennethreitz/envoy#readme

[+25] [2013-04-18 01:09:33] Zuckonit

Without capturing the output:

import os
os.system("your command here")

With the output captured (Python 2 only; the commands module was removed in Python 3):

import commands
commands.getoutput("your command here")
# or
commands.getstatusoutput("your command here")

[+19] [2014-10-10 17:41:13] stuckintheshuck

There is also Plumbum [1]

>>> from plumbum import local
>>> ls = local["ls"]
>>> ls
LocalCommand(<LocalPath /bin/ls>)
>>> ls()
u'build.py\ndist\ndocs\nLICENSE\nplumbum\nREADME.rst\nsetup.py\ntests\ntodo.txt\n'
>>> notepad = local["c:\\windows\\notepad.exe"]
>>> notepad()                                   # Notepad window pops up
u''                                             # Notepad window is closed by user, command returns
[1] http://plumbum.readthedocs.org/en/latest/

[+17] [2008-09-18 01:43:30] Ben Hoffstein

https://docs.python.org/2/library/subprocess.html

...or for a very simple command:

import os
os.system('cat testfile')

[+17] [2008-09-18 01:53:27] Martin W

os.system is OK, but kind of dated. It's also not very secure. Instead, try subprocess. subprocess does not invoke the shell unless you explicitly ask it to, and is therefore more secure than os.system.

Get more information here [1].

[1] https://docs.python.org/library/subprocess.html

[+14] [2015-06-29 11:34:22] Priyankara

Use:

import os

cmd = 'ls -al'

os.system(cmd)

os - This module provides a portable way of using operating system-dependent functionality.

For more os functions, here [1] is the documentation.

[1] https://docs.python.org/2/library/os.html

(1) it's also deprecated. use subprocess - Corey Goldberg
[+13] [2010-01-08 21:11:30] Atinc Delican

There is another difference here which is not mentioned above.

subprocess.Popen executes the command as a subprocess. In my case, I needed to execute a file which had to communicate with another program.

I tried subprocess, and execution was successful. However, the file could not communicate with the other program, even though everything was normal when I ran both from the terminal.

One more note (kwrite behaves differently from other applications; if you try the following with Firefox, the results will not be the same):

If you try os.system("kwrite"), program flow freezes until the user closes kwrite. To overcome that I tried os.system("konsole -e kwrite") instead. This time the program continued to flow, but kwrite became a subprocess of konsole.

Does anyone know how to run kwrite so that it is not a subprocess (i.e., in the system monitor it should appear at the leftmost edge of the tree)?


[+13] [2011-01-18 19:21:44] cdunn2001

subprocess.check_call is convenient if you don't want to test return values. It throws an exception on any error.
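A minimal sketch (the directory name is made up to force a failure):

from subprocess import check_call, CalledProcessError

try:
    check_call(['ls', '/no/such/dir'])  # raises CalledProcessError on a non-zero exit
except CalledProcessError as err:
    print(err.returncode)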


[+12] [2012-06-11 22:28:35] Saurabh Bangad

os.system does not allow you to store results; it only gives you the command's exit status. If you want to capture the output and store it in a list or somewhere else, subprocess.check_output works.
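A sketch of capturing the output into a list, assuming Python 2.7+ on a Unix-like system:

import subprocess

lines = subprocess.check_output(['ls']).splitlines()  # one list entry per line of output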


[+12] [2014-04-30 14:37:04] Emil Stenström

I tend to use subprocess [1] together with shlex [2] (to handle escaping of quoted strings):

>>> import subprocess, shlex
>>> command = 'ls -l "/your/path/with spaces/"'
>>> call_params = shlex.split(command)
>>> print call_params
["ls", "-l", "/your/path/with spaces/"]
>>> subprocess.call(call_params)
[1] https://docs.python.org/2/library/subprocess.html
[2] https://docs.python.org/2/library/shlex.html

[+10] [2012-07-16 15:16:24] admire

You can use Popen, and then you can check the process's status:

from subprocess import Popen

proc = Popen(['ls', '-l'])
if proc.poll() is None:  # poll() returns None while the process is still running
    proc.kill()

Check out subprocess.Popen [1].

[1] http://docs.python.org/library/subprocess.html#popen-objects

[+10] [2014-05-01 20:49:01] houqp

Shameless plug, I wrote a library for this :P https://github.com/houqp/shell.py

It's basically a wrapper for popen and shlex for now. It also supports piping commands, so you can chain commands more easily in Python. So you can do things like:

ex('echo hello shell.py') | "awk '{print $2}'"

[+8] [2015-10-14 07:12:51] urosjarc

Here are my two cents: In my view, this is the best practice when dealing with external commands...

These are the return values from the execute method...

success, stdout, stderr = execute(["ls", "-la"], "/home/user/desktop")

This is the execute method...

import subprocess

def execute(cmdArray, workingDir):

    stdout = ''
    stderr = ''

    try:
        try:
            process = subprocess.Popen(cmdArray,cwd=workingDir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=1)
        except OSError:
            return [False, '', 'ERROR : command(' + ' '.join(cmdArray) + ') could not get executed!']

        for line in iter(process.stdout.readline, b''):

            try:
                echoLine = line.decode("utf-8")
            except:
                echoLine = str(line)

            stdout += echoLine

        for line in iter(process.stderr.readline, b''):

            try:
                echoLine = line.decode("utf-8")
            except:
                echoLine = str(line)

            stderr += echoLine

    except (KeyboardInterrupt,SystemExit) as err:
        return [False,'',str(err)]

    process.stdout.close()

    returnCode = process.wait()
    if returnCode != 0 or stderr != '':
        return [False, stdout, stderr]
    else:
        return [True, stdout, stderr]

(1) Deadlock potential: use the .communicate method instead - ppperry
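A deadlock-free sketch of the same idea using .communicate(), as the comment suggests; it drains both pipes concurrently and keeps the answer's return convention:

import subprocess

def execute(cmdArray, workingDir):
    process = subprocess.Popen(cmdArray, cwd=workingDir,
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate()  # reads both pipes without deadlocking
    ok = process.returncode == 0 and not stderr
    return [ok, stdout.decode('utf-8'), stderr.decode('utf-8')]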
[+8] [2016-06-17 09:14:24] Swadhikar C

In Windows you can just import the subprocess module and run external commands by creating a subprocess.Popen(), then calling its communicate() and wait() methods, as below:

# Python script to run a command line
import subprocess

def execute(cmd):
    """
        Purpose  : To execute a command and return exit status
        Argument : cmd - command to execute
        Return   : exit_code
    """
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (result, error) = process.communicate()

    rc = process.wait()

    if rc != 0:
        print "Error: failed to execute command:", cmd
        print error
    return result
# def

command = "tasklist | grep python"
print "This process detail: \n", execute(command)

Output:

This process detail:
python.exe                     604 RDP-Tcp#0                  4      5,660 K

[+8] [2016-07-20 09:50:01] IRSHAD

To fetch the network id from the openstack neutron:

#!/usr/bin/python
import os
netid = "nova net-list | awk '/ External / { print $2 }'"
temp = os.popen(netid).read()  # temp also contains a trailing newline (\n)
networkId = temp.rstrip()
print(networkId)

Output of nova net-list

+--------------------------------------+------------+------+
| ID                                   | Label      | CIDR |
+--------------------------------------+------------+------+
| 431c9014-5b5d-4b51-a357-66020ffbb123 | test1      | None |
| 27a74fcd-37c0-4789-9414-9531b7e3f126 | External   | None |
| 5a2712e9-70dc-4b0e-9281-17e02f4684c9 | management | None |
| 7aa697f5-0e60-4c15-b4cc-9cb659698512 | Internal   | None |
+--------------------------------------+------------+------+

Output of print(networkId)

27a74fcd-37c0-4789-9414-9531b7e3f126
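The same fetch written with subprocess instead of os.popen (a sketch; the awk pipeline still requires shell=True):

import subprocess

netid = "nova net-list | awk '/ External / { print $2 }'"
networkId = subprocess.check_output(netid, shell=True).rstrip()
print(networkId)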

[+8] [2016-11-27 00:15:34] yuval

Under Linux, in case you would like to call an external command that will execute independently (i.e., keep running after the Python script terminates), you can use a simple queue such as task spooler [1] or the at [2] command.

An example with task spooler:

import os
os.system('ts <your-command>')

Notes about task spooler (ts):

  1. You could set the number of concurrent processes to be run ("slots") with:

    ts -S <number-of-slots>

  2. Installing ts doesn't require admin privileges. You can download and compile it from source with a simple make, add it to your path, and you're done.

[1] http://vicerveza.homeunix.net/~viric/soft/ts/
[2] https://linux.die.net/man/1/at

[+5] [2013-04-17 14:10:06] Jens Timmerman

There are a lot of different ways to run external commands in Python, and all of them have their own plus sides and drawbacks.

My colleagues and I have been writing Python system administration tools, so we need to run a lot of external commands, and sometimes you want them to block or run asynchronously, time out, update every second, etc.

There are also different ways of handling the return code and errors, and you might want to parse the output, and provide new input (in an expect [1] kind of style). Or you will need to redirect stdin, stdout and stderr to run in a different tty (e.g., when using screen).

So you will probably end up writing a lot of wrappers around external commands. Here is a Python module which we have written that can handle almost anything you would want, and if not, it's very flexible, so you can easily extend it:

https://github.com/hpcugent/vsc-base/blob/master/lib/vsc/utils/run.py

[1] http://en.wikipedia.org/wiki/Expect

[+4] [2012-07-25 06:51:50] Cut-n-paster

The simplest way to run any command and get the result back (Python 2 only; the commands module was removed in Python 3):

from commands import getstatusoutput

try:
    return getstatusoutput("ls -ltr")  # inside a function; returns a (status, output) tuple
except Exception, e:
    return None

(1) Is this going to be deprecated in python 3.0? - 719016
[+4] [2012-08-13 18:36:32] mdwhatcott

I quite like shell_command [1] for its simplicity. It's built on top of the subprocess module.

Here's an example from the docs:

>>> from shell_command import shell_call
>>> shell_call("ls *.py")
setup.py  shell_command.py  test_shell_command.py
0
>>> shell_call("ls -l *.py")
-rw-r--r-- 1 ncoghlan ncoghlan  391 2011-12-11 12:07 setup.py
-rw-r--r-- 1 ncoghlan ncoghlan 7855 2011-12-11 16:16 shell_command.py
-rwxr-xr-x 1 ncoghlan ncoghlan 8463 2011-12-11 16:17 test_shell_command.py
0
[1] http://shell-command.readthedocs.org/en/latest/index.html

[+4] [2013-06-19 23:18:34] imagineerThat

Just to add to the discussion: if you include using a Python console, you can call external commands from IPython [1]. While in the IPython prompt, you can call shell commands by prefixing '!'. You can also combine Python code with the shell and assign the output of shell scripts to Python variables.

For instance:

In [9]: mylist = !ls

In [10]: mylist
Out[10]:
['file1',
 'file2',
 'file3']
[1] http://en.wikipedia.org/wiki/IPython

[+4] [2016-03-17 10:48:32] Chiel ten Brinke

For Python 3.5+ it is recommended that you use the run function from the subprocess module [1]. This returns a CompletedProcess object, from which you can easily obtain the output as well as return code.

from subprocess import PIPE, run

command = ['echo', 'hello']
result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True)
print(result.returncode, result.stdout, result.stderr)
[1] https://docs.python.org/3.5/library/subprocess.html#subprocess.run

(1) answer with run function was added in 2015 year. You repeated it. I think it was a reason of down vote - Greg Eremeev
[+4] [2016-10-11 02:26:49] Rajiv Sharma

Here is how to call an external command and return or print the command's output.

Python subprocess [1] check_output is good for this:

Run command with arguments and return its output as a byte string.

import subprocess
proc = subprocess.check_output('ipconfig /all')
print proc
[1] https://docs.python.org/2/library/subprocess.html

[+3] [2014-08-24 21:46:12] amehta

A simple way is to use the os module:

import os
os.system('ls')

Alternatively you can also use the subprocess module

import subprocess
subprocess.check_call('ls')

If you want the result to be stored in a variable try:

import subprocess
r = subprocess.check_output('ls')

[+3] [2015-07-24 19:12:21] Asif Hasnain

Using the Popen class of the subprocess Python module is the simplest way of running Linux commands. There, the Popen.communicate() method will give you your command's output. For example:

import subprocess

..
process = subprocess.Popen(..)   # Pass command and arguments to the function
stdout, stderr = process.communicate()   # Get command output and error
..
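A concrete sketch of that pattern, filling in the elided pieces with an ls example:

import subprocess

process = subprocess.Popen(['ls', '-l'],            # pass command and arguments
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
stdout, stderr = process.communicate()              # get command output and error
print(stdout)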

[+3] [2016-09-12 09:44:09] liuyip

There are many ways to call a command.

  • For example:

Suppose and.exe needs two parameters. In cmd we can call it like this: and.exe 2 3, and it shows 5 on the screen.

If we use a Python script to call and.exe, we can do it like:

  1. os.system(cmd, ...)

    • os.system("and.exe" + " " + "2" + " " + "3")
  2. os.popen(cmd, ...)

    • os.popen("and.exe" + " " + "2" + " " + "3")
  3. subprocess.Popen(cmd, ...)
    • subprocess.Popen("and.exe" + " " + "2" + " " + "3")

Concatenating by hand is clumsy, so we can build the command by joining the pieces with a space (note that str.join takes a single list, not separate arguments):

import os
exename = "and.exe"
parameters = ["2", "3"]
cmd = " ".join([exename] + parameters)  # "and.exe 2 3"
os.popen(cmd)

[+2] [2014-03-14 02:59:05] Jake W

After some research, I arrived at the following code, which works very well for me. It basically prints both stdout and stderr in real time. I hope it helps someone else who needs it.

import subprocess
import sys
import threading

stdout_result = 1
stderr_result = 1


def stdout_thread(pipe):
    global stdout_result
    while True:
        out = pipe.stdout.read(1)
        stdout_result = pipe.poll()
        if out == '' and stdout_result is not None:
            break

        if out != '':
            sys.stdout.write(out)
            sys.stdout.flush()


def stderr_thread(pipe):
    global stderr_result
    while True:
        err = pipe.stderr.read(1)
        stderr_result = pipe.poll()
        if err == '' and stderr_result is not None:
            break

        if err != '':
            sys.stdout.write(err)
            sys.stdout.flush()


def exec_command(command, cwd=None):
    if cwd is not None:
        print '[' + ' '.join(command) + '] in ' + cwd
    else:
        print '[' + ' '.join(command) + ']'

    p = subprocess.Popen(
        command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd
    )

    out_thread = threading.Thread(name='stdout_thread', target=stdout_thread, args=(p,))
    err_thread = threading.Thread(name='stderr_thread', target=stderr_thread, args=(p,))

    err_thread.start()
    out_thread.start()

    out_thread.join()
    err_thread.join()

    return stdout_result + stderr_result

(1) your code may lose data when the subprocess exits while there is some data is buffered. Read until EOF instead, see teed_call() - jfs
[+2] [2014-04-12 11:58:23] andruso

Use subprocess.call [1]:

from subprocess import call

# using list
call(["echo", "Hello", "world"])

# single string argument varies across platforms so better split it
call("echo Hello world".split(" "))
[1] https://docs.python.org/2/library/subprocess.html

[+2] [2016-04-28 11:18:22] Viswesn

I would recommend the following 'run' method; it returns STDOUT, STDERR, and the exit status in a dictionary, so the caller can inspect the returned dictionary to learn the actual state of the process:

import subprocess

def run(cmd):
    print "+ DEBUG exec({0})".format(cmd)
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         universal_newlines=True, shell=True)
    (out, err) = p.communicate()         # waits for the process to finish
    out = filter(None, out.split('\n'))  # drop empty lines
    err = filter(None, err.split('\n'))
    status = (p.returncode == 0)
    return {'output': out, 'error': err, 'status': status}
#end
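A quick usage sketch (Python 2, matching the answer's print syntax):

result = run("ls -l")
if result['status']:
    print result['output']
else:
    print result['error']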

[+2] [2016-06-24 11:29:00] David Okwii

Use:

import subprocess

p = subprocess.Popen("df -h", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print p.split("\n")

It gives nice output which is easier to work with:

['Filesystem      Size  Used Avail Use% Mounted on',
 '/dev/sda6        32G   21G   11G  67% /',
 'none            4.0K     0  4.0K   0% /sys/fs/cgroup',
 'udev            1.9G  4.0K  1.9G   1% /dev',
 'tmpfs           387M  1.4M  386M   1% /run',
 'none            5.0M     0  5.0M   0% /run/lock',
 'none            1.9G   58M  1.9G   3% /run/shm',
 'none            100M   32K  100M   1% /run/user',
 '/dev/sda5       340G  222G  100G  69% /home',
 '']

[+1] [2013-04-18 17:39:50] Colonel Panic

The subprocess module [1] described above by Eli is very powerful, but the syntax to make a bog-standard system call and inspect its output is unnecessarily prolix.

The easiest way to make a system call is with the commands module [2] (Linux only).

> import commands
> commands.getstatusoutput("grep matter alice-in-wonderland.txt")
(0, "'Then it doesn't matter which way you go,' said the Cat.")

The first item in the tuple is the return code of the process. The second item is its standard output (and standard error, merged).


The Python devs have 'deprecated' the commands module, but that doesn't mean you shouldn't use it, only that they're not developing it anymore. That is okay, because it's already perfect (at its small but important function).

[1] http://docs.python.org/2/library/subprocess.html
[2] http://docs.python.org/2/library/commands.html

(7) Deprecated doesn't only mean "isn't developed anymore" but also "you are discouraged from using this". Deprecated features may break anytime, may be removed anytime, or may dangerous. You should never use this in important code. Deprecation is merely a better way than removing a feature immediately, because it gives programmers the time to adapt and replace their deprecated functions. - Misch
(3) Just to prove my point: "Deprecated since version 2.6: The commands module has been removed in Python 3. Use the subprocess module instead." - Misch
It's not dangerous! The Python devs are careful only to break features between major releases (ie. between 2.x and 3.x). I've been using the commands module since 2004's Python 2.4. It works the same today in Python 2.7. - Colonel Panic
(6) With dangerous, I didn't mean that it may be removed anytime (that's a different problem), neither did I say that it is dangerous to use this specific module. However it may become dangerous if a security vulnerability is discovered but the module isn't further developed or maintained. (I don't want to say that this module is or isn't vulnerable to security issues, just talking about deprecated stuff in general) - Misch
[+1] [2017-10-18 16:37:52] Aaron Hall

Calling an external command in Python

Simple, use subprocess.run, which returns a CompletedProcess object:

>>> import subprocess
>>> completed_process = subprocess.run('python --version')
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
>>> completed_process
CompletedProcess(args='python --version', returncode=0)

Why?

As of Python 3.5, the documentation recommends subprocess.run [1]:

The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle. For more advanced use cases, the underlying Popen interface can be used directly.

Here's an example of the simplest possible usage - and it does exactly as asked:

>>> import subprocess
>>> completed_process = subprocess.run('python --version')
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
>>> completed_process
CompletedProcess(args='python --version', returncode=0)

run waits for the command to finish, then returns a CompletedProcess object. It may instead raise TimeoutExpired (if you give it a timeout= argument) or CalledProcessError (if it fails and you pass check=True).

As you might infer from the above example, stdout and stderr both get piped to your own stdout and stderr by default.

We can inspect the returned object and see the command that was given and the returncode:

>>> completed_process.args
'python --version'
>>> completed_process.returncode
0

Capturing output

If you want to capture the output, you can pass subprocess.PIPE to the appropriate stderr or stdout:

>>> cp = subprocess.run('python --version', 
                        stderr=subprocess.PIPE, 
                        stdout=subprocess.PIPE)
>>> cp.stderr
b'Python 3.6.1 :: Anaconda 4.4.0 (64-bit)\r\n'
>>> cp.stdout
b''

(I find it interesting and slightly counterintuitive that the version info gets put to stderr instead of stdout.)

Pass a command list

One might easily move from manually providing a command string (like the question suggests) to providing a string built programmatically. Don't build strings programmatically. This is a potential security issue. It's better to assume you don't trust the input.

>>> import textwrap
>>> args = ['python', textwrap.__file__]
>>> cp = subprocess.run(args, stdout=subprocess.PIPE)
>>> cp.stdout
b'Hello there.\r\n  This is indented.\r\n'

Note, only args should be passed positionally.

Full Signature

Here's the actual signature in the source and as shown by help(run):

def run(*popenargs, input=None, timeout=None, check=False, **kwargs):

The popenargs and kwargs are given to the Popen constructor. input can be a string of bytes (or unicode, if you specify encoding or universal_newlines=True) that will be piped to the subprocess's stdin.

The documentation describes timeout= and check=True better than I could:

The timeout argument is passed to Popen.communicate(). If the timeout expires, the child process will be killed and waited for. The TimeoutExpired exception will be re-raised after the child process has terminated.

If check is true, and the process exits with a non-zero exit code, a CalledProcessError exception will be raised. Attributes of that exception hold the arguments, the exit code, and stdout and stderr if they were captured.

and this example for check=True is better than one I could come up with:

>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
  ...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1

Expanded Signature

Here's an expanded signature, as given in the documentation:

subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None, 
shell=False, cwd=None, timeout=None, check=False, encoding=None, 
errors=None)

Note that this indicates that only the args list should be passed positionally. So pass the remaining arguments as keyword arguments.

Popen

When should you use Popen instead? I would struggle to find a use-case based on the arguments alone. Direct usage of Popen would, however, give you access to its methods, including poll, send_signal, terminate, and wait.

Here's the Popen signature as given in the source [2]. I think this is the most precise encapsulation of the information (as opposed to help(Popen)):

def __init__(self, args, bufsize=-1, executable=None,
             stdin=None, stdout=None, stderr=None,
             preexec_fn=None, close_fds=_PLATFORM_DEFAULT_CLOSE_FDS,
             shell=False, cwd=None, env=None, universal_newlines=False,
             startupinfo=None, creationflags=0,
             restore_signals=True, start_new_session=False,
             pass_fds=(), *, encoding=None, errors=None):

But more informative is the Popen documentation [3]:

subprocess.Popen(args, bufsize=-1, executable=None, stdin=None,
                 stdout=None, stderr=None, preexec_fn=None, close_fds=True,
                 shell=False, cwd=None, env=None, universal_newlines=False,
                 startupinfo=None, creationflags=0, restore_signals=True,
                 start_new_session=False, pass_fds=(), *, encoding=None, errors=None)

Execute a child program in a new process. On POSIX, the class uses os.execvp()-like behavior to execute the child program. On Windows, the class uses the Windows CreateProcess() function. The arguments to Popen are as follows.

Understanding the remaining documentation on Popen will be left as an exercise for the reader.

[1] https://docs.python.org/3/library/subprocess.html#subprocess.run
[2] https://github.com/python/cpython/blob/master/Lib/subprocess.py#L587
[3] https://docs.python.org/3/library/subprocess.html#popen-constructor

[0] [2017-10-24 23:30:20] Asav Patel

I have written a wrapper to handle errors, redirect output, and other things.

import sys
import shlex
import psutil
import subprocess

def call_cmd(cmd, stdout=sys.stdout, quite=False, shell=False, raise_exceptions=True, use_shlex=True, timeout=None):
    """Exec command by command line like 'ln -ls "/var/log"'
    """
    if not quite:
        print("Run %s" % str(cmd))
    if use_shlex and isinstance(cmd, (str, unicode)):  # Python 2; on Python 3 drop unicode
        cmd = shlex.split(cmd)
    if timeout is None:
        process = subprocess.Popen(cmd, stdout=stdout, stderr=sys.stderr, shell=shell)
        retcode = process.wait()
    else:
        process = subprocess.Popen(cmd, stdout=stdout, stderr=sys.stderr, shell=shell)
        p = psutil.Process(process.pid)
        finish, alive = psutil.wait_procs([p], timeout)
        if len(alive) > 0:
            ps = p.children()
            ps.insert(0, p)
            print('waiting for timeout again due to child process check')
            finish, alive = psutil.wait_procs(ps, 0)
        if len(alive) > 0:
            print('process {} will be killed'.format([p.pid for p in alive]))
            for p in alive:
                p.kill()
            if raise_exceptions:
                print('External program timeout at {} {}'.format(timeout, cmd))
                raise CalledProcessTimeout(1, cmd)  # custom exception class, defined elsewhere
        retcode = process.wait()
    if retcode and raise_exceptions:
        print("External program failed %s", str(cmd))
        raise subprocess.CalledProcessError(retcode, cmd)

you can call it like this:

cmd = 'ln -ls "/var/log"'
with open('out.txt', 'w') as stdout:  # Popen expects a file object, not a filename
    call_cmd(cmd, stdout)

hope this helps.


"quite" or "quiet"? - pstanton