Stack Overflow: Hidden features of Python
[+1416] [191] jelovirt
[2008-09-19 11:50:36]
[ python hidden-features ]

What are the lesser-known but useful features of the Python programming language?


[+739] [2008-09-19 13:47:15] Thomas Wouters

Chaining comparison operators:

>>> x = 5
>>> 1 < x < 10
True
>>> 10 < x < 20
False
>>> x < 10 < x*10 < 100
True
>>> 10 > x <= 9
True
>>> 5 == x > 4
True

In case you're thinking it's doing 1 < x, which comes out as True, and then comparing True < 10, which is also True, then no, that's really not what happens (see the last example.) It's really translating into 1 < x and x < 10, and x < 10 and 10 < x * 10 and x*10 < 100, but with less typing and each term is only evaluated once.

Isn't 10 > x <= 9 the same as x <= 9 (ignoring overloaded operators, that is) - tzot
(1) Of course. It was just an example of mixing different operators. - Thomas Wouters
(19) This applies to other comparison operators as well, which is why people are sometimes surprised why code like (5 in [5] is True) is False (but it's unpythonic to explicitly test against booleans like that to begin with). - Miles
Lisp does not have anything similar? - Hai
(1) Not that I know of. Perl 6 does have this feature, though :) - ephemient
(19) Good, but watch out for equal precedence, like 'in' and '=='. 'A in B == C in D' means '(A in B) and (B == C) and (C in D)', which might be unexpected. - Charles Merriam
(2) "each term evaluated only once" That's key. - wilhelmtell
(15) Azafe: Lisp's comparisons naturally work this way. It's not a special case because there's no other (reasonable) way to interpret (< 1 x 10). You can even apply them to single arguments, like (= 10):… - Ken
@Miles a less confusing example might be "a == b in c" which is equivalent to "a == b and b in c". - poolie
@Charles Merriam for me its not unexpected, just logical. Although its ugly to use A in B == C in D. - Joschua
This is also great for tests. You can do a == b == c, and it will return True only if all three items are equal. - asmeurer
is not and not in are similarly surprisingly good too. Apparently is not is 1 binary operator, not a binary and then a unary. not in is the same too. This makes code like 'foo' is not 'bar' so much more readable. - Y.H Wong
Ken: I like Python's version better than Lisp's, since it allows for mixing different kinds of comparisons, such as a <= b < c. Mathematica, which is more or less a dialect of Lisp, does allow you to use different comparisons --it uses what would in Lisp syntax be (inequality a '<= b '< c). - Omar Antolín-Camarena
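The equal-precedence chaining gotcha raised in the comments above is easy to check for yourself; a minimal sketch (variable names are illustrative, written in Python 3 syntax):

```python
x = 5

# Chaining expands to pairwise comparisons joined with `and`:
assert (1 < x < 10) == ((1 < x) and (x < 10))

# `in` and `is` share the same precedence level and chain too,
# which is why this surprises people:
lst = [5]
result = x in lst is True
# expands to: (x in lst) and (lst is True) -- the second test is False
assert result is False
```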
[+511] [2008-09-27 13:18:09] BatchyX

Get the python regex parse tree to debug your regex.

Regular expressions are a great feature of python, but debugging them can be a pain, and it's all too easy to get a regex wrong.

Fortunately, python can print the regex parse tree, by passing the undocumented, experimental, hidden flag re.DEBUG (actually, 128) to re.compile.

>>> re.compile("^\[font(?:=(?P<size>[-+][0-9]{1,2}))?\](.*?)[/font]",
...            re.DEBUG)
at at_beginning
literal 91
literal 102
literal 111
literal 110
literal 116
max_repeat 0 1
  subpattern None
    literal 61
    subpattern 1
      in
        literal 45
        literal 43
      max_repeat 1 2
        in
          range (48, 57)
literal 93
subpattern 2
  min_repeat 0 65535
    any None
in
  literal 47
  literal 102
  literal 111
  literal 110
  literal 116

Once you understand the syntax, you can spot your errors. There we can see that I forgot to escape the [] in [/font].

Of course you can combine it with whatever flags you want, like commented regexes:

>>> re.compile("""
 ^              # start of a line
 \[font         # the font tag
 (?:=(?P<size>  # optional [font=+size]
 [-+][0-9]{1,2} # size specification
 ))?            # end of optional group
 \]             # end of tag
 (.*?)          # text between the tags
 \[/font\]      # end of the tag
 """, re.DEBUG|re.VERBOSE|re.DOTALL)

(3) Except parsing HTML using regular expressions is slow and painful. Even the built-in 'html' parser module doesn't use regexes to get the work done. And if the html module doesn't please you, there are plenty of XML/HTML parser modules that do the job without having to reinvent the wheel. - BatchyX
A link to documentation on the output syntax would be great. - Personman
(1) This should be an official part of Python, not experimental... RegEx is always tricky and being able to trace what's happening is really helpful. - Cahit
[+459] [2008-09-22 19:51:20] Dave


Wrap an iterable with enumerate and it will yield the item along with its index.

For example:

>>> a = ['a', 'b', 'c', 'd', 'e']
>>> for index, item in enumerate(a): print index, item
0 a
1 b
2 c
3 d
4 e



i think it's been deprecated in python3 - Berry Tsakala
(45) And all this time I was coding this way: for i in range(len(a)): ... and then using a[i] to get the current item. - Fernando Martin
(4) @Berry Tsakala: To my knowledge, it has not been deprecated. - JAB
shorter than using zip and count for index, item in zip(itertools.count(), a): print(index,item) - Yoo
(1) Great feature, +1. @Draemon: this is actually covered in the Python tutorial that comes installed with Python (there's a section on various looping constructs), so I'm always surprised that this is so little known. - Edan Maor
(1) The nice thing about this is when you're iterating through more than one loop simultaneously - dassouki
@Berry Tsakala: definitely not deprecated. - ncoghlan
Sorry my ignorance, but isn't it enough to just do: a = ["a","b","c"] >>> for x in enumerate(a): ... print x why do you do for index, item in enumerate(a): print index, item - Trufa
I always hacked idx, elem in itertools.izip(itertools.count(), iterable):... - Jeeyoung Kim
(1) @Tufa: You might not want to use the index and item in the same statement. In this simple example your code is equivalent, but in a more sophisticated scenario it won't be able to do all that for inx, itm in enumerate(a): can do. - Tomas Aschan
(1) for i in range(len(a)) is still a lot better than for (int i=0;i<a.size();i++) .... - Gravity
(15) enumerate can start from arbitrary index, not necessary 0. Example: 'for i, item in enumerate(list, start=1): print i, item' will start enumeration from 1, not 0. - dmitry_romanov
(3) Not deprecated in Py3K… - Léo Germond
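The start= variant mentioned in the comments, and its equivalence to the zip/count idiom, as a quick sketch (Python 3 syntax):

```python
import itertools

# enumerate() pairs each item with a counter; `start` (available since
# Python 2.6) lets the count begin anywhere -- handy for 1-based line numbers.
lines = ['alpha', 'beta', 'gamma']
numbered = list(enumerate(lines, start=1))
assert numbered == [(1, 'alpha'), (2, 'beta'), (3, 'gamma')]

# Equivalent to the zip/count idiom from the comments, just shorter:
assert numbered == list(zip(itertools.count(1), lines))
```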
[+418] [2008-09-19 11:59:28] freespace

Creating generator objects

If you write

x=(n for n in foo if bar(n))

you can get out the generator and assign it to x. Now it means you can do

for n in x:

The advantage of this is that you don't need intermediate storage, which you would need if you did

x = [n for n in foo if bar(n)]

In some cases this can lead to significant speed up.

You can chain several for and if clauses inside the generator expression, basically replicating nested for loops:

>>> n = ((a,b) for a in range(0,2) for b in range(4,6))
>>> for i in n:
...   print i 

(0, 4)
(0, 5)
(1, 4)
(1, 5)

You could also use a nested list comprehension for that, yes? - shapr
(54) Of particular note is the memory overhead savings. Values are computed on-demand, so you never have the entire result of the list comprehension in memory. This is particularly desirable if you later iterate over only part of the list comprehension. - saffsd
I use ifilter for this kind of thing: - Dan
(19) This is not particularly "hidden" imo, but also worth noting is the fact that you could not rewind a generator object, whereas you can reiterate over a list any number of times. - susmits
ditto susmits. Although these are extremely cool, it's a documented feature of Python Using callbacks with your generators, also documented, adds to the coolness of generators. - Justin
(13) The "no rewind" feature of generators can cause some confusion. Specifically, if you print a generator's contents for debugging, then use it later to process the data, it doesn't work. The data is produced, consumed by print(), then is not available for the normal processing. This doesn't apply to list comprehensions, since they're completely stored in memory. - johntellsall
(4) Similar (dup?) answer: … Note, however, that the answer I linked here mentions a REALLY GOOD presentation about the power of generators. You really should check it out. - Denilson Sá Maia
Here's a good article in using generator in solving real-world problems - OnesimusUnbound
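The "no rewind" caveat from the comments is easy to demonstrate; a minimal sketch (Python 3 syntax):

```python
# A generator expression is consumed once; a list comprehension
# can be iterated any number of times.
squares_gen = (n * n for n in range(4))
squares_list = [n * n for n in range(4)]

assert list(squares_gen) == [0, 1, 4, 9]
assert list(squares_gen) == []             # already exhausted

assert list(squares_list) == [0, 1, 4, 9]
assert list(squares_list) == [0, 1, 4, 9]  # lists rewind fine
```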
[+352] [2008-09-19 14:20:38] mbac32768

iter() can take a callable argument

For instance:

def seek_next_line(f):
    for c in iter(lambda:, '\n'):
        pass

The iter(callable, until_value) function repeatedly calls callable and yields its result until until_value is returned.

As a newbie to python, can you please explain why the lambda keyword is necessary here? - SiegeX
@SiegeX without the lambda, would be evaluated (returning a string) before being passed to the iter function. Instead, the lambda creates an anonymous function and passes that to iter. - jmilloy
[+339] [2008-09-22 04:34:39] Jason Baker

Be careful with mutable default arguments

>>> def foo(x=[]):
...     x.append(1)
...     print x
>>> foo()
[1]
>>> foo()
[1, 1]
>>> foo()
[1, 1, 1]

Instead, you should use a sentinel value denoting "not given" and replace with the mutable you'd like as default:

>>> def foo(x=None):
...     if x is None:
...         x = []
...     x.append(1)
...     print x
>>> foo()
[1]
>>> foo()
[1]

(39) That's definitely one of the more nasty hidden features. I've run into it from time to time. - Torsten Marek
(77) I found this a lot easier to understand when I learned that the default arguments live in a tuple that's an attribute of the function, e.g. foo.func_defaults. Which, being a tuple, is immutable. - Robert Rossney
Could you explain how it happens in detail? - grayger
(2) @grayger: As the def statement is executed its arguments are evaluated by the interpreter. This creates (or rebinds) a name to a code object (the suite of the function). However, the default arguments are instantiated as objects at the time of definition. This is true of any type of defaulted object, but is only significant (exposing visible semantics) when the object is mutable. There's no way of re-binding that default argument name in the function's closure (although it can obviously be over-ridden for any call, or the whole function can be re-defined). - Jim Dennis
(3) @Robert of course the arguments tuple might be immutable, but the objects it point to are not necessarily immutable. - poolie
(16) One quick hack to make your initialization a little shorter: x = x or []. You can use that instead of the 2 line if statement. - dave mankoff
Default values also become nasty if you use more than one of them. For example, say you wrote a function like def f(a=[], b=[], c=[]): a.append(3). You will have inadvertently changed the values of a, b and c without having touched them. This is because similar default values seem to point to the same thing in memory. Nasty bugs arise - inspectorG4dget
this feature / wart or what you'd call it is one of the most important things to understand when you start learning python. it directly connects you to understanding what is done when in a program, and without that knowledge, any code beyond a pretty low threshold of complexity cannot be written. - flow
This seems like a bug in the compiler, right? - Leo
(1) Just a comment that pylint complains vigorously about usage like this, as it should. - Peter V
(1) @davemankoff: I think that's a bad habit to get into, because sooner or later you'll reject a valid falsey value by mistake. - Ori
(1) This was literally an interview question for my current job. :) It is probably the classic Python gotcha. - Adam Parkin
[+316] [2008-09-19 13:18:19] Rafał Dowgird

Sending values into generator functions [1]. For example having this function:

def mygen():
    """Yield 5 until something else is passed back via send()"""
    a = 5
    while True:
        f = (yield a) #yield a and possibly get f in return
        if f is not None: 
            a = f  #store the new value

You can:

>>> g = mygen()
>>>   # advance to the first yield before sending
5
>>> g.send(7)  # we send this back to the generator
7
>>>  # now it will yield 7 until we send something else
7

Agreed. Let's treat this as a nasty example of a hidden feature of Python :) - Rafał Dowgird
(89) In other languages, I believe this magical device is called a "variable". - finnw
(5) coroutines should be coroutines and generators should be themselves too, without mixing. Mega-great link and talk and examples about this here: - u0b34a0f6ae
(31) @finnw: the example implements something that's similar to a variable. However, the feature could be used in many other ways ... unlike a variable. It should also be obvious that similar semantics can be implemented using objects (a class implemting Python's call method, in particular). - Jim Dennis
(4) This is too trivial an example for people who've never seen (and probably won't understand) co-routines. The example that implements the running average without risk of sum variable overflow is a good one. - Prashant Kumar
More on the yield topic here:… - gecco
@finnw and his upvoters, I think you've misunderstood the point of this example. The important bit is not storing a value in 'a', it's that 'mygen' is acting like a function with multiple entry points, with the ability to suspend execution halfway through, return a value to the caller, and then resume execution later, from that same point, with all local variables intact. You can read more about them here - Jonathan Hartley
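The running-average coroutine mentioned in the comments, sketched for Python 3 (where next() primes the generator to its first yield before values can be sent in):

```python
def averager():
    """Coroutine: yields the running mean of the values sent in."""
    total = 0.0
    count = 0
    average = None
    while True:
        value = yield average   # suspend here; resume when send() is called
        total += value
        count += 1
        average = total / count

avg = averager()
next(avg)                 # prime the coroutine to the first yield
assert avg.send(10) == 10.0
assert avg.send(20) == 15.0
assert avg.send(30) == 20.0
```

Unlike the simpler mygen example, this keeps state (total, count) that no plain variable assignment could replicate from the caller's side.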
[+312] [2008-09-21 22:01:53] eduffy

If you don't like using whitespace to denote scopes, you can use the C-style {} by issuing:

from __future__ import braces

(122) That's evil. :) - Jason Baker
(37) >>> from __future__ import braces File "<stdin>", line 1 SyntaxError: not a chance :P - Benjamin W. Smith
Wait, isn't the future package future additions to the language? So are they planning to add braces at some point? - James McMahon
(3) Dynamic whitespace is half of python's goodness. That's... twisted. - stalepretzel
(40) that's blasphemy! - Berk D. Demir
(335) I think that we may have a syntactical mistake here, shouldn't that be "from __past__ import braces"? - Bill K
(47) from __cruft__ import braces - Phillip B Oldham
(7) I admit that's funny, but inversely what about the blind? I remember reading a while back of an individual who was blind and frustrated that he/she couldn't use Python due to the lack of brackets for statements. - David
(1) I can understand the use of braces for minification of code :) - Jiaaro
(1) Totally breaks the Python idiom - Joshua Partogi
(3) @David: How are braces better for the blind? In the best-case scenario (Well-indented code, which Python enforces), braces would only add a minuscule amount of clarity. A block of text with whitespace before would be in my opinion much easier to notice than the presence of a small typographical character. The legibility of braces depends on which version of the OTBS that person believes in. The inline braces I prefer would be horrible to read without proper vision. - 3Doubloons
(6) @Alex: How does the text reader say the indentation level? You would need a Python specific text reader to tell you "for <stuff> colon newline indent pass newline <next statement>". Now add some indents: "indent indent indent for <stuff> colon newline indent indent indent indent pass newline indent indent indent <next statement>" - jmucchiello
(6) jmucchiello: Yes you need something python-specific. The screen reader should speak the tokens that the python interpreter uses, "indent in", "indent out". - u0b34a0f6ae
(6) @David, @jmucchiello: there is a script that adds braces to every block in a comment (# }), and in fact I've read of blind people that uses it to allow them to write Python :) - Esteban Küber
(1) @David, @jmucchiello: Ah, you meant blind-blind, not just "horribly bad eyesight"-blind. - 3Doubloons
I know a few devs that are learning Python (but know a c style language) who would love this. It's just because they don't know any better ;) - Justin
Blind programmer can use this syntax:… - Joschua
A strange feature indeed. Props for sharing the first thing I didn't know as I read through this thread. - Adam Fraser
(1) I had my braces removed when I grew up! - Chris Johnson
There is pybraces, an encoding you can use for your python source code files in order to really enable braces. ;) - Sebastian Noack
That's one hell of an easter egg. I love oss communities... - AndreasT
[+305] [2008-09-19 13:33:42] Rafał Dowgird

The step argument in slice operators. For example:

a = [1,2,3,4,5]
>>> a[::2]  # iterate over the whole list in 2-increments
[1, 3, 5]

The special case x[::-1] is a useful idiom for 'x reversed'.

>>> a[::-1]
[5, 4, 3, 2, 1]

(31) much clearer, in my opinion, is the reversed() function. >>> list(reversed(range(4))) [3, 2, 1, 0] - Christian Oudard
(3) then how to write "this i a string"[::-1] in a better way? reversed doesnt seem to help - Berry Tsakala
(2) "".join(reversed("this i a string")) - erikprice
(24) The problem with reversed() is that it returns an iterator, so if you want to preserve the type of the reversed sequence (tuple, string, list, unicode, user types...), you need an additional step to convert it back. - Rafał Dowgird
(6) def reverse_string(string): return string[::-1] - pi.
(4) @pi I think if one knows enough to define reverse_string as you have then one can leave the [::-1] in your code and be comfortable with its meaning and the feeling it is appropriate. - physicsmichael
Is there a speed difference between [::-1] and reversed()? - Austin Richardson
(1) -1, because it is not hidden and you learn it early enought, but its an useful feature - Quonux
ooh, noticed that [1,2,3,4,5][::-2] also works as expected, which is quite cool - Sam Elliott
(1) You can make a cool palindrome finder with this! - Trufa
@Berry list(reversed('blah blah')) - Tobu
@Austin: yes a huge difference with strings: - Adam Parkin
@Trufa: yeah very easy to find a palindrome: if someseq == (someseq[::-1]) then it's a palindrome, and this would work with any sequence type (strings, lists, etc). - Adam Parkin
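The palindrome trick from the comments, plus a few more step-slice variants, as a quick sketch (Python 3 syntax):

```python
a = [1, 2, 3, 4, 5]
assert a[::2] == [1, 3, 5]        # every second item
assert a[1::2] == [2, 4]          # every second item, starting at index 1
assert a[::-1] == [5, 4, 3, 2, 1]
assert a[::-2] == [5, 3, 1]       # reversed, every second item

# Works on any sequence type, which is why the palindrome check is so neat:
def is_palindrome(seq):
    return seq == seq[::-1]

assert is_palindrome('level')
assert not is_palindrome('python')
assert is_palindrome([1, 2, 1])
```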
[+288] [2008-09-19 12:32:26] DzinX


Decorators [1] allow to wrap a function or method in another function that can add functionality, modify arguments or results, etc. You write decorators one line above the function definition, beginning with an "at" sign (@).

Example shows a print_args decorator that prints the decorated function's arguments before calling it:

>>> def print_args(function):
...     def wrapper(*args, **kwargs):
...         print 'Arguments:', args, kwargs
...         return function(*args, **kwargs)
...     return wrapper

>>> @print_args
... def write(text):
...     print text

>>> write('foo')
Arguments: ('foo',) {}
foo

(54) When defining decorators, I'd recommend decorating the decorator with @decorator. It creates a decorator that preserves a functions signature when doing introspection on it. More info here: - sirwart
(45) How is this a hidden feature? - Vetle
(50) Well, it's not present in most simple Python tutorials, and I stumbled upon it a long time after I started using Python. That is what I would call a hidden feature, just about the same as other top posts here. - DzinX
(16) vetler, the questions asks for "lesser-known but useful features of the Python programming language." How do you measure 'lesser-known but useful features'? I mean how are any of these responses hidden features? - Johnd
(4) @vetler Most of the thing here are hardly "hidden". - Humphrey Bogart
Hidden? this is a documented feature - Justin
(3) If the standard is whether or not a feature is documented, then this question should be closed. - Jesse Dhillon
(1) I thought we were supposed to list hidden features of python not the awesome features of python. ;-) - Kamil Szot
why would this be useful except in the very rare situations? Why not just redefine the function and add optional parameters? - Dexter
(1) @Dexter: Because that decorator may be universal -- it can be attached to any function for a short moment, e.g. when you need to debug it, and then very easily removed. Besides, there are many uses of decorators other than debugging. - DzinX
Decorating a decorator with the decorator decorator? We must go deeper. - Casey Rodarmor
As for useful (arguable), some more common ones: @ property, @ classmethod, @ staticmethod, @ coroutine, @ _o (monocle) - XTL
Decorators are extremely handy, but they can be a PITA to write. There's so many variations -- class based vs not class based, decorators which can decorate methods vs functions (or both), adding descriptors, decorators which take arguments, etc. So while the simple example above may not be a "hidden" feature of Python, I'd say consider it a starting point for learning about a rather beefy topic in the language, and should be in the list. - Adam Parkin
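The signature-preservation point from the first comment can be had from the standard library alone: functools.wraps copies the wrapped function's name and docstring onto the wrapper (the @decorator package mentioned above is a third-party alternative). A sketch in Python 3 syntax:

```python
import functools

def print_args(function):
    @functools.wraps(function)   # preserve name/docstring of the wrapped function
    def wrapper(*args, **kwargs):
        return function(*args, **kwargs)
    return wrapper

@print_args
def write(text):
    """Write some text."""
    return text

assert write('foo') == 'foo'
assert write.__name__ == 'write'   # without wraps, this would be 'wrapper'
assert write.__doc__ == 'Write some text.'
```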
[+288] [2008-09-22 11:55:40] rlerallut

The for...else syntax:

for i in foo:
    if i == 0:
        break
else:
    print("i was never 0")

The "else" block is normally executed at the end of the for loop, unless the loop is exited with break.

The above code could be emulated as follows:

found = False
for i in foo:
    if i == 0:
        found = True
        break
if not found:
    print("i was never 0")

(219) I think the for/else syntax is awkward. It "feels" as if the else clause should be executed if the body of the loop is never executed. - codeape
It becomes less awkward if we think of it as for/if/else, with the else belonging to the if. And it's so useful an idiom that I wonder why other language designers didn't think of it! - Sundar R
(14) ah. Never saw that one! But I must say it is a bit of a misnomer. Who would expect the else block to execute only if break never does? I agree with codeape: It looks like else is entered for empty foos. - Daren Thomas
I've added an equivalent code that is not using else. - jfs
I find this much less useful than if the else clause executed if the for loop didn't. I've wanted that so many times, but I've never found a case I wanted to use this. - Draemon
(5) Anyone remember the FOR var … NEXT var … END FOR var of Sinclair QL's SuperBasic? Everything between NEXT and END FOR would execute at the end of the loop, unless an EXIT FOR was issued. That syntax was cleaner :) - tzot
(52) seems like the keyword should be finally, not else - Jiaaro
(21) Except finally is already used in a way where that suite is always executed. - Roger Pate
(2) This is really convenient, and I use it, but it needs an explaining comment each time. - u0b34a0f6ae
(7) Should definitely not be 'else'. Maybe 'then' or something, and then 'else' for when the loop was never executed. - Tor Valamo
(2) I used this on a programming assignment for a class and lost points because the grader had never seen it before... totally got those back. - flatpickles
(4) Hey, people forgot to mention that this idiom also works for while loops. - Denilson Sá Maia
(4) I've always thought a for...then...else construct would be better, where then is only executed if the for is successful, else when the for cannot be entered (eg: for i in []; pass; else; print "empty list". But then I'm a novice. :) - Phillip B Oldham
Does this work ONLY if there is a break statement in the for loop or are there any other circumstances where this trick works this way? - inspectorG4dget
@inspectorG4dget: it works fine without a break... but serves no purpose if there's no break. (The code in the else might as well just be outdented one level) - jkerian
@jkerian: Many thanks. I observed that behavior, but was wondering more along the lines of "would this work the same way if return was used instead of break?" - inspectorG4dget
(2) i shun this feature. every time i want to use it i have to read up on it, and then i still find it hard to get right. - flow
Yeah, using this syntax got me screamed at by a couple of PHP and C programmers. Go figure. :-) - terminus
Never use for-else. It does not do what the next programmer to see your code thinks it does. - inanutshellus
(2) I used to be confused by this behavior as well until I thought of it in terms of try.. except.. else.. Python's for.. else.. behavior is consistent with how try blocks are executed. If the contents of try or for succeed, jump to else. - ryanshow
Control flow altering statements (like break) are generally poor choices to begin with, combine this with an unusual use of a keyword (in this case "else") and you end up with code that is even harder to read, especially for novices. I'd definitely shy away from this one. - Adam Parkin
(1) if any(i == 0 for i in foo): ... Would be my choice of phrasing for this kind of code. Maybe it's my Haskell influence. - Theo Belaire
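The idiom reads most naturally as "search, and if not found...": the else runs only when the loop completes without a break. A classic sketch (Python 3 syntax; the trial-division primality test is just an illustration):

```python
def is_prime(n):
    for candidate in range(2, n):
        if n % candidate == 0:
            break            # found a factor: skip the else
    else:                    # no break: no factor was found
        return True
    return False

assert is_prime(7)
assert not is_prime(9)
assert is_prime(2)           # empty loop body still runs the else
```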
[+258] [2008-09-21 21:54:12] Armin Ronacher

From 2.5 onwards dicts have a special method __missing__ that is invoked for missing items:

>>> class MyDict(dict):
...  def __missing__(self, key):
...   self[key] = rv = []
...   return rv
>>> m = MyDict()
>>> m["foo"].append(1)
>>> m["foo"].append(2)
>>> dict(m)
{'foo': [1, 2]}

There is also a dict subclass in collections called defaultdict that does pretty much the same but calls a function without arguments for not existing items:

>>> from collections import defaultdict
>>> m = defaultdict(list)
>>> m["foo"].append(1)
>>> m["foo"].append(2)
>>> dict(m)
{'foo': [1, 2]}

I recommend converting such dicts to regular dicts before passing them to functions that don't expect such subclasses. A lot of code uses d[a_key] and catches KeyErrors to check if an item exists which would add a new item to the dict.

This is where I put fork bombs. - Vince
(10) I prefer using setdefault. m={} ; m.setdefault('foo',1) - grayger
(22) @grayger meant this m={}; m.setdefault('foo', []).append(1). - Cristian Ciupitu
(1) There are however cases where passing the defaultdict is very handy. The function may for example iterate over the value, and it works for undefined keys without extra code, since the default is an empty list. - Marian
(3) defaultdict is better in some circumstances than setdefault, since it doesn't create the default object unless the key is missing. setdefault creates it whether it's missing or not. If your default object is expensive to create this can be a performance hit - I got a decent speedup out of one program simply by changing all setdefault calls. - Whatang
(2) defaultdict is also more powerful than the setdefault method in other cases. For example, for a counter—dd = collections.defaultdict(int) ... dd[k] += 1 vs d.setdefault(k, 0) += 1. - Mike Graham
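The counter use case from the last comment, as a runnable sketch (Python 3 syntax):

```python
from collections import defaultdict

# defaultdict(int) supplies 0 for missing keys, so += needs no membership test.
counts = defaultdict(int)
for word in 'the quick brown fox jumps over the lazy dog the end'.split():
    counts[word] += 1

assert counts['the'] == 3
assert counts['fox'] == 1

# Per the answer's advice: convert before handing the result to code
# that relies on KeyError for missing keys.
plain = dict(counts)
```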
[+247] [2008-09-19 14:00:11] Lucas S.

In-place value swapping

>>> a = 10
>>> b = 5
>>> a, b
(10, 5)

>>> a, b = b, a
>>> a, b
(5, 10)

The right-hand side of the assignment is an expression that creates a new tuple. The left-hand side of the assignment immediately unpacks that (unreferenced) tuple to the names a and b.

After the assignment, the new tuple is unreferenced and marked for garbage collection, and the values bound to a and b have been swapped.

As noted in the Python tutorial section on data structures [1],

Note that multiple assignment is really just a combination of tuple packing and sequence unpacking.


(1) Does this use more real memory than the traditional way? I would guess do since you are creating a tuple, instead of just one swap variable - Nathan
(75) It doesn't use more memory. It uses less.. I just wrote it both ways, and de-compiled the bytecode.. the compiler optimizes, as you'd hope it would. dis results showed it's setting up the vars, and then ROT_TWOing. ROT_TWO means 'swap the two top-most stack vars'... Pretty snazzy, actually. - royal
(5) You also inadvertently point out another nice feature of Python, which is that you can implicitly make a tuple of items just by separating them by commas. - asmeurer
I would prefer (a, b) = (b, a). I don't think it is necessarily clear whether , or = has higher precedence. - Dana the Sane
(3) Dana the Sane: assignment in Python is a statement, not an expression, so that expression would be invalid if = had higher priority (i.e. it was interpreted as a, (b = b), a). - hbn
royal: it did actually create tuples in older versions of Python (I think pre-2.4). - hbn
(5) This is the least hidden feature I've read here. Nice, but explicitly described in every Python tutorial. - Thiago Chaves
I love this feature, but we have to be careful about the semantics of the objects being swapped. I got bitten when doing foo[x:y], bar[x:y] = bar[x:y], foo[x:y] with foo and bar being numpy arrays, because slicing numpy arrays creates views, not copies of the data! - cberzan
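The same packing/unpacking mechanism goes beyond pairwise swaps; a small sketch (Python 3 syntax):

```python
# Any number of names works, so three values rotate in one statement:
a, b, c = 1, 2, 3
a, b, c = c, a, b
assert (a, b, c) == (3, 1, 2)

# The right-hand side can be any iterable, not just a tuple literal:
first, second = 'hi'
assert (first, second) == ('h', 'i')
```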
[+235] [2008-09-19 12:44:42] MvdD

Readable regular expressions

In Python you can split a regular expression over multiple lines, name your matches and insert comments.

Example verbose syntax (from Dive into Python [1]):

>>> pattern = """
... ^                   # beginning of string
... M{0,4}              # thousands - 0 to 4 M's
... (CM|CD|D?C{0,3})    # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's),
...                     #            or 500-800 (D, followed by 0 to 3 C's)
... (XC|XL|L?X{0,3})    # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's),
...                     #        or 50-80 (L, followed by 0 to 3 X's)
... (IX|IV|V?I{0,3})    # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's),
...                     #        or 5-8 (V, followed by 0 to 3 I's)
... $                   # end of string
... """
>>>, 'M', re.VERBOSE)

Example naming matches (from Regular Expression HOWTO [2])

>>> p = re.compile(r'(?P<word>\b\w+\b)')
>>> m = '(((( Lots of punctuation )))')

You can also verbosely write a regex without using re.VERBOSE thanks to string literal concatenation.

>>> pattern = (
...     "^"                 # beginning of string
...     "M{0,4}"            # thousands - 0 to 4 M's
...     "(CM|CD|D?C{0,3})"  # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's),
...                         #            or 500-800 (D, followed by 0 to 3 C's)
...     "(XC|XL|L?X{0,3})"  # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's),
...                         #        or 50-80 (L, followed by 0 to 3 X's)
...     "(IX|IV|V?I{0,3})"  # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's),
...                         #        or 5-8 (V, followed by 0 to 3 I's)
...     "$"                 # end of string
... )
>>> print pattern

(7) I don't know if I'd really consider that a Python feature, most RE engines have a verbose option. - Jeremy
(18) Yes, but because you can't do it in grep or in most editors, a lot of people don't know it's there. The fact that other languages have an equivalent feature doesn't make it not a useful and little known feature of python - Mark Baker
(7) In a large project with lots of optimized regular expressions (read: optimized for machine but not human beings), I bit the bullet and converted all of them to verbose syntax. Now, introducing new developers to projects is much easier. From now on we enforce verbose REs on every project. - Berk D. Demir
I'd rather just say: hundreds = "(CM|CD|D?C{0,3})" # 900 (CM), 400 (CD), etc. The language already has a way to give things names, a way to add comments, and a way to combine strings. Why use special library syntax here for things the language already does perfectly well? It seems to go directly against Perlis' Epigram 9. - Ken
(3) @Ken: a regex may not always be directly in the source, it could be read from settings or a config file. Allowing comments or just additional whitespace (for readability) can be a great help. - Roger Pate
If you're writing a Python program and your config file isn't Python, then (Yegge would say and I'd agree that) "you're talking out of both sides of your mouth" re OO: - Ken
Nice! With the string literal concatenation, the comments are parsed as actual comments. - asmeurer
I start my verbose patterns with (?x) # Use verbose mode, which feels more self-documenting than using re.VERBOSE at the compile step. These must be the very first characters in the pattern - no leading whitespace. Also, when using a verbose pattern, remember to use \s or [ ] to signify spaces (depending on if you want to capture all whitespace or just spaces). It can be easy to forget when converting from standard to verbose patterns. - jwhitlock
+1 for string literal concatenation, but -1 for Python even having the re.VERBOSE flag, which I think leads to terrible-to-read code. - orokusaki
[+222] [2008-09-21 15:00:37] e-satis

Function argument unpacking

You can unpack a list or a dictionary as function arguments using * and **.

For example:

def draw_point(x, y):
    # do some magic
    pass

point_foo = (3, 4)
point_bar = {'y': 3, 'x': 2}

draw_point(*point_foo)   # same as draw_point(3, 4)
draw_point(**point_bar)  # same as draw_point(x=2, y=3)

Very useful shortcut since lists, tuples and dicts are widely used as containers.
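As a self-contained sketch (the describe function here is hypothetical, just to show both forms):

```python
# Hypothetical function to demonstrate both unpacking forms.
def describe(x, y, color="black"):
    return "point (%d, %d) in %s" % (x, y, color)

args = (3, 4)
kwargs = {"x": 2, "y": 3, "color": "red"}

print(describe(*args))     # * unpacks a sequence into positional arguments
print(describe(**kwargs))  # ** unpacks a mapping into keyword arguments
```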

Use this all the time, love it. - Skurmedel
(27) * is also known as the splat operator - Gabriel
(3) I like this feature, but pylint doesn't sadly. - Stephen Paulger
(5) pylint's advice is not a law. The other way, apply(callable, arg_seq, arg_map), is deprecated since 2.3. - Yann Vernier
(1) pylint's advice may not be law, but it's damn good advice. Debugging code that over-indulges in stuff like this is pure hell. As the original poster notes, this is a useful shortcut. - Andrew
(2) I saw this being used in code once and wondered what it did. Unfortunately it's hard to google for "Python **" - Fraser Graham
It's called the splat operator. So you can google for "python splat", but it's unlikely anybody would know the name if he doesn't know the feature :-p - e-satis
@andrew: Pylint tends to complain for a lot of classic idioms like try/except ImportError - e-satis
(1) @e-satis: that's because a lot of those "classic idioms" are poor practice. I agree that Pylint can be overly "nitpicky", but the vast majority of the time unless you have a clear reason to not adhere to its suggestions it's best to comply (and for the cases where you have a good reason, you can always do a pylint-disable to suppress the warning in the specific case) - Adam Parkin
@Adam: don't get me wrong, I actually have pylint run everytime my document is saved automatically. but Try/ except error is really useful. Dictionary comprehensions as well, and pylint don't understand them. Pylint typically complains a lot in unittest as well were you set variables without using them, because the test actually require to. It's important not to get psycho with pylint alerts, or you will just stop coding. And using * or ** is actually clean code, not dirty one. - e-satis
Weird, I never have Pylint complain about dict comprehensions & I use those all the time. Most of the unit test warnings are due to the fact that the unittest module isn't PEP-8 compliant (ex: forced to name setup method "setUp" instead of "set_up"). But yeah, I agree Pylint can be (shall we say) overzealous at times. What I do is everytime I put in a disable-pylint directive I follow it with a comment justifying its use. I find this works well & there's been a few times where my team has challenged me & in the end I discovered a better way to do something. - Adam Parkin
(1) If pylint complains about dict comprehensions, then it is running under python 2.6 (the version built in to vim.) To fix this, run pylint using 2.7 (which for me on OSX meant I had to compile macvim myself but on other platforms I think vim binaries with 2.7 support are out there. You need 2.7 installed for this to work, macvim uses it. - Jonathan Hartley
(1) Yeah, the pylint warning about "** magic" is dumb and can be turned off globally. - Jonathan Hartley
[+205] [2009-06-21 20:32:22] André

ROT13 is a valid encoding for source code, when you use the right coding declaration at the top of the code file:

#!/usr/bin/env python
# -*- coding: rot13 -*-

cevag "Uryyb fgnpxbiresybj!".rapbqr("rot13")

(10) Great! Notice how byte strings are taken literally, but unicode strings are decoded: try cevag h"Uryyb fgnpxbiresybj!" - u0b34a0f6ae
(12) unfortunately it is removed from py3k - mykhal
(9) This is good for bypassing antivirus. - L̲̳o̲̳̳n̲̳̳g̲̳̳p̲̳o̲̳̳k̲̳̳e̲̳̳
(96) That has nothing to do with the encoding, it is just Python written in Welsh. :-P - Olivier Verdier
(33) Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn! - Manuel Ferreria
(5) see? you can write unintelligible code in any languages, even in python - Lie Ryan
Uryyb fgnpxbiresybj! -> Hello stackoverflow! - wim
@Manuel Ferreria : sry, but i couldn't figure what u said... is it ROT13 ?? - amyassin
(3) @amyassin I too was flumoxxed, until I remembered about google, and found… - Jordan Reiter
[+183] [2008-09-20 14:25:58] Torsten Marek

Creating new types in a fully dynamic manner

>>> NewType = type("NewType", (object,), {"x": "hello"})
>>> n = NewType()
>>> n.x
'hello'

which is exactly the same as

>>> class NewType(object):
...     x = "hello"
>>> n = NewType()
>>> n.x
'hello'

Probably not the most useful thing, but nice to know.

Edit: Fixed name of new type, should be NewType to be the exact same thing as with class statement.

Edit: Adjusted the title to more accurately describe the feature.
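For instance (a hypothetical sketch, not from the answer above), methods and class attributes can both go in the dictionary:

```python
# A plain function becomes a method when placed in the class dictionary.
def greet(self):
    return "hello from " + type(self).__name__

Greeter = type("Greeter", (object,), {"x": "hello", "greet": greet})
g = Greeter()
print(g.x)        # hello
print(g.greet())  # hello from Greeter
```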

(8) This has a lot of potential for usefulness, e.g., JIT ORMs - Mark Cidade
(8) I use it to generate HTML-Form classes based on a dynamic input. Very nice! - pi.
I also used it to generate dynamic django forms (until i discovered formsets) - Jiaaro
(15) Note: all classes are created at runtime. So you can use the 'class' statement within a conditional, or within a function (very useful for creating families of classes or classes that act as closures). The improvement that 'type' brings is the ability to neatly define a dynamically generated set of attributes (or bases). - spookylukey
Extremely useful, in Django, for generating dynamic models that wrap existing sets of tables with similar structures. - Josh Smeaton
(1) You can also create anonymous types with a blank string like: type('', (object,), {'x': 'blah'}) - jmhmccr
(3) Could be very useful for code injections. - Avihu Turzion
You can also instantiate this class in one line too. x = type("X", (object,), {'val':'Hello'})() - Nick Radford
[+179] [2008-09-20 20:06:16] Ycros

Context managers and the "with" Statement

Introduced in PEP 343 [1], a context manager [2] is an object that acts as a run-time context for a suite of statements.

Since the feature makes use of new keywords, it is introduced gradually: it is available in Python 2.5 via the __future__ [3] directive. Python 2.6 and above (including Python 3) have it available by default.

I have used the "with" statement [4] a lot because I think it's a very useful construct, here is a quick demo:

from __future__ import with_statement

with open('foo.txt', 'w') as f:
    f.write('hello world')

What's happening here behind the scenes, is that the "with" statement [5] calls the special __enter__ and __exit__ methods on the file object. Exception details are also passed to __exit__ if any exception was raised from the with statement body, allowing for exception handling to happen there.

What this does for you in this particular case is that it guarantees that the file is closed when execution falls out of scope of the with suite, regardless if that occurs normally or whether an exception was thrown. It is basically a way of abstracting away common exception-handling code.

Other common use cases for this include locking with threads and database transactions.
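To illustrate the protocol itself, here is a minimal, hypothetical context manager; note that __exit__ runs whether or not the body raises:

```python
class Recorder(object):
    """Minimal context manager that records enter/exit events."""
    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("enter")
        return self

    def __exit__(self, exc_type, exc_value, tb):
        self.events.append("exit")
        return False  # returning False lets any exception propagate

r = Recorder()
try:
    with r:
        raise ValueError("boom")
except ValueError:
    pass

print(r.events)  # ['enter', 'exit'] -- __exit__ ran despite the exception
```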


(3) I wouldn't approve a code review which imported anything from future. The features are more cute than useful, and usually they just end up confusing Python newcomers. - a paid nerd
(6) Yes, such "cute" features as nested scopes and generators are better left to those who know what they're doing. And anyone who wants to be compatible with future versions of Python. For nested scopes and generators, "future versions" of Python means 2.2 and 2.5, respectively. For the with statement, "future versions" of Python means 2.6. - Chris B.
(10) This may go without saying, but with python v2.6+, you no longer need to import from future. with is now a first class keyword. - fitzgeraldsteele
(25) In 2.7 you can have multiple withs :) with open('filea') as filea and open('fileb') as fileb: ... - Austin Richardson
(5) @Austin i could not get that syntax to work on 2.7. this however did work: with open('filea') as filea, open('fileb') as fileb: ... - wim
It could be useful to explain why, in which cases, this with statement is different from f = open('foo.txt', 'w'). - gb.
[+168] [2008-09-21 20:18:19] Amandasaurus

Dictionaries have a get() method

Dictionaries have a 'get()' method. If you do d['key'] and key isn't there, you get an exception. If you do d.get('key'), you get back None if 'key' isn't there. You can add a second argument to get that item back instead of None, eg: d.get('key', 0).

It's great for things like adding up numbers:

sum[value] = sum.get(value, 0) + 1
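A tiny runnable sketch of the counting idiom (variable names are mine):

```python
counts = {}
for word in ["a", "b", "a", "c", "a"]:
    # get() returns 0 for keys not yet seen, so there is no KeyError
    # and no separate "is the key there?" check.
    counts[word] = counts.get(word, 0) + 1

print(counts["a"])         # 3
print(counts.get("z"))     # None (missing key, no default given)
print(counts.get("z", 0))  # 0 (missing key, explicit default)
```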

(39) also, checkout the setdefault method. - Daren Thomas
(27) also, checkout collections.defaultdict class. - jfs
(8) If you are using Python 2.7 or later, or 3.1 or later, check out the Counter class in the collections module. - Elias Zamaria
Oh man, this whole time I've been doing get(key, None). Had no idea that None was provided by default. - Jordan Reiter
[+152] [2008-09-19 14:04:38] Nick Johnson


Descriptors

They're the magic behind a whole bunch of core Python features.

When you use dotted access to look up a member (eg, x.y), Python first looks for the member in the instance dictionary. If it's not found, it looks for it in the class dictionary. If it finds it in the class dictionary, and the object implements the descriptor protocol, instead of just returning it, Python executes it. A descriptor is any class that implements the __get__, __set__, or __delete__ methods.

Here's how you'd implement your own (read-only) version of property using descriptors:

class Property(object):
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, type):
        if obj is None:
            return self
        return self.fget(obj)

and you'd use it just like the built-in property():

class MyClass(object):
    @Property
    def foo(self):
        return "Foo!"

Descriptors are used in Python to implement properties, bound methods, static methods, class methods and slots, amongst other things. Understanding them makes it easy to see why a lot of things that previously looked like Python 'quirks' are the way they are.
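Putting the pieces together as a runnable sketch (the definitions are repeated here so the snippet stands alone):

```python
class Property(object):
    """Read-only reimplementation of property() via the descriptor protocol."""
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self  # accessed on the class itself, not an instance
        return self.fget(obj)

class MyClass(object):
    @Property
    def foo(self):
        return "Foo!"

print(MyClass().foo)  # Foo! -- __get__ ran and called the wrapped function
```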

Raymond Hettinger has an excellent tutorial [1] that does a much better job of describing them than I do.


This is a duplicate of decorators, isn't it!? (… ) - gecco
(2) no, decorators and descriptors are totally different things, though in the example code, i'm creating a descriptor decorator. :) - Nick Johnson
(1) The other way to do this is with a lambda: foo = property(lambda self: self.__foo) - Pete Peterson
(1) @PetePeterson Yes, but property itself is implemented with descriptors, which was the point of my post. - Nick Johnson
[+142] [2008-09-22 18:08:54] tghw

Conditional Assignment

x = 3 if (y == 1) else 2

It does exactly what it sounds like: "assign 3 to x if y is 1, otherwise assign 2 to x". Note that the parens are not necessary, but I like them for readability. You can also chain it if you have something more complicated:

x = 3 if (y == 1) else 2 if (y == -1) else 1

Though at a certain point, it goes a little too far.

Note that you can use if ... else in any expression. For example:

(func1 if y == 1 else func2)(arg1, arg2) 

Here func1 will be called if y is 1 and func2, otherwise. In both cases the corresponding function will be called with arguments arg1 and arg2.

Analogously, the following is also valid:

x = (class1 if y == 1 else class2)(arg1, arg2)

where class1 and class2 are two classes.

(29) The assignment is not the special part. You could just as easily do something like: return 3 if (y == 1) else 2. - Brian
(1) An alternate way to do this is: y == 1 and 3 or 2 - yuriks
That alternate way is fraught with problems. For one thing, normally this works: if y == 1: #3 else if y == 70: #2 Why? y == 1 is only evaluated, THEN y == 70 if y == 1 is false. In this statement: y == 1 and 3 or 2 - 3 and 2 are evaluated as well as y == 1. - kylebrooks
(25) That alternate way is the first time I've seen obfuscated Python. - Craig McQueen
(3) Kylebrooks: It doesn't in that case, boolean operators short circuit. It will only evaluate 2 if bool(3) == False. - RoadieRich
(15) this backwards-style coding confusing me. something like x = ((y == 1) ? 3 : 2) makes more sense to me - mpen
(13) I feel just the opposite of @Mark, C-style ternary operators have always confused me, is the right side or the middle what gets evaluated on a false condition? I much prefer Python's ternary syntax. - Jeffrey Harris
(1) @Mark "x = (y == 1) and 3 or 2" is also valid. - Kyle Ambroff
(3) I think C-style ternary operators are simpler, more english-like: 'am I drunk' ? 'yes, make out with her' : 'no, dont even think about it' - adamJLev
(1) x = 3 if (y == 1) else 2 - I find that in many cases, x = (2, 3)[y==1] is actually more readable (normally with really long statements, so you can keep the results (2, 3) together). - Ponkadoodle
(1) Somehow Guido and the Python folks managed to make one of the most contorted parts of the C language readable and easily understandable, even if you don't know what it is. - asmeurer
(1) @Infinity, you should consult with a doctor to replace the always-true constant 'am I drunk' with a non-deterministic function am_i_drunk(). - j0057
The first time I saw the ternary op in Python I found it confusing to read, largely due to my familiarity with the C-style one. Not sure which one is better ("the grass is wet if it is raining otherwise the grass is dry" vs "if it is raining then the grass is wet otherwise the grass is dry") - Adam Parkin
[+141] [2008-09-19 14:04:50] Pierre-Jean Coudert

Doctest [1]: documentation and unit-testing at the same time.

Example extracted from the Python documentation:

def factorial(n):
    """Return the factorial of n, an exact integer >= 0.

    If the result is small enough to fit in an int, return an int.
    Else return a long.

    >>> [factorial(n) for n in range(6)]
    [1, 1, 2, 6, 24, 120]
    >>> factorial(-1)
    Traceback (most recent call last):
    ValueError: n must be >= 0

    Factorials of floats are OK, but the float must be an exact integer:

    >>> factorial(30.0)
    265252859812191058636308480000000L
    """

    import math
    if not n >= 0:
        raise ValueError("n must be >= 0")
    if math.floor(n) != n:
        raise ValueError("n must be exact integer")
    if n+1 == n:  # catch a value like 1e300
        raise OverflowError("n too large")
    result = 1
    factor = 2
    while factor <= n:
        result *= factor
        factor += 1
    return result

def _test():
    import doctest
    doctest.testmod()

if __name__ == "__main__":
    _test()

(6) Doctests are certainly cool, but I really dislike all the cruft you have to type to test that something should raise an exception - TM.
(60) Doctests are overrated and pollute the documentation. How often do you test a standalone function without any setUp() ? - a paid nerd
(2) who says you can't have setup in a doctest? write a function that generates the context and returns locals() then in your doctest do locals().update(setUp()) =D - Jiaaro
(2) These are nice for making sure examples in docstrings don't go out of sync. - L̲̳o̲̳̳n̲̳̳g̲̳̳p̲̳o̲̳̳k̲̳̳e̲̳̳
(12) If a standalone function requires a setUp, chances are high that it should be decoupled from some unrelated stuff or put into a class. Class doctest namespace can then be re-used in class method doctests, so it's a bit like setUp, only DRY and readable. - Andy Mikhaylenko
doctests make for ok docs, bad tests - poolie
(4) "How often do you test a standalone function" - lots. I find doctests often emerge naturally from the design process when I am deciding on facades. - Gregg Lind
(1) Doctest is hard to use with some modules and frameworks, such as Django. Usually, which makes Doctest hard to use is some point of the API design that is heavyweight, overcoupled to other components or has a lot of dependencies. Doctest has some problems and limitations but most of the time I feel that an API that makes it hard to use Doctest is more complex than it is needed. - brandizzi
(1) I think doctests are misnamed. They are really useful if you look at them as small usage examples, coming with a guarantee that they run. - vene
I've never understood the point of doctests, if you have a snippet of code that tests a function then put it into a proper unit test. - Adam Parkin
[+138] [2008-09-22 04:23:22] Pasi Savolainen

Named formatting

%-formatting takes a dictionary (and still applies %i/%s etc. type validation):

>>> print "The %(foo)s is %(bar)i." % {'foo': 'answer', 'bar':42}
The answer is 42.

>>> foo, bar = 'question', 123

>>> print "The %(foo)s is %(bar)i." % locals()
The question is 123.

And since locals() is also a dictionary, you can simply pass it in and have %-substitutions from your local variables. I think this is frowned upon, but it simplifies things.

New Style Formatting

>>> print("The {foo} is {bar}".format(foo='answer', bar=42))

(60) Will be phased out and eventually replaced with string's format() method. - Constantin
(3) Named formatting is very useful for translators as they tend to just see the format string without the variable names for context - pixelbeat
(2) Appears to work in python 3.0.1 (needed to add parenttheses around print call). - Pasi Savolainen
(9) a hash, huh? I see where you came from. - shylent
%-formatting won't go away any time soon, but the "format" method on strings is the new (current) best-practices method. It supports everything %-formatting does and most people think the API and the formatting syntax is much nicer. (Myself included.) Python has a third method, string.Template added in 2.4; basically nobody likes that one. - Larry Hastings
(11) %s formatting will not be phased out. str.format() is certainly more pythonic, however is actually 10x's slower for simple string replacement. My belief is %s formatting is still best practice. - Kenneth Reitz
For completeness, the locals()-equivalent for new-style formatting is of course print "The {foo} is {bar}".format(**locals()). - Ben Blank
I love locals(), but it has the annoying side-effect that if you use pylint, you will often get errors for not using a variable in the scope of the function. - hughdbrown
(1) As of Python 3.2, the locals() equivalent is print("The {foo} is {bar}".format_map(locals())) - ncoghlan
That format is slower should be fixable. After all it does the same as % formatting. And in 3.1.3 timeit gives me these speed measurements: >>> timeit('''"a %(b)s" % {"b": "c"}''') 0.2503829002380371 >>> timeit('''"a {b}".format(b="c")''') 0.41667699813842773 - Arne Babenhauserheide
(1) Hey @matt, it's not clear which kind of formatting you're recommending against, and it's especially not clear why. - Jonathan Hartley
[+132] [2008-09-22 08:43:11] dgrant

To add more Python modules (especially 3rd party ones), most people seem to use the PYTHONPATH environment variable, or they add symlinks or directories to their site-packages directory. Another way is to use *.pth files. Here's the official Python docs' explanation:

"The most convenient way [to modify python's search path] is to add a path configuration file to a directory that's already on Python's path, usually to the .../site-packages/ directory. Path configuration files have an extension of .pth, and each line must contain a single path that will be appended to sys.path. (Because the new paths are appended to sys.path, modules in the added directories will not override standard modules. This means you can't use this mechanism for installing fixed versions of standard modules.)"
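For example, a hypothetical file named extras.pth dropped into site-packages might contain one directory per line:

```
/home/me/projects/mylib
/opt/shared/python-tools
```

Both directories are then appended to sys.path on interpreter startup.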

(1) I never made the connection between that .pth file in the site-packages directory from setuptools and this idea. awesome. - dave paola
[+122] [2008-09-22 10:31:50] Constantin

Exception else clause:

try:
    put_4000000000_volts_through_it(parrot)
except Voom:
    print "'E's pining!"
else:
    print "This parrot is no more!"

The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn’t raised by the code being protected by the try ... except statement.
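A small runnable sketch of the difference (the function and names here are mine): an exception raised in the else block is not caught by the except clause above it, whereas the same code placed inside try would be.

```python
def classify(d, key):
    try:
        value = d[key]          # only this line is "protected"
    except KeyError:
        return "missing"
    else:
        # Runs only if the try body raised nothing; a KeyError raised
        # here would NOT be swallowed by the except clause above.
        return "found %r" % (value,)

print(classify({"a": 1}, "a"))  # found 1
print(classify({"a": 1}, "b"))  # missing
```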


(8) +1 this is awesome. If the try block executes without entering any exception blocks, then the else block is entered. And then of course, the finally block is executed - inspectorG4dget
I finally get why the 'else' is there! Thanks. - taynaron
It would make more sense to use continue, but I guess it's already taken ;) - Paweł Prażak
Note that on older versions of Python2 you can't have both else: and finally: clauses for the same try: block - Kevin Horn
@Paweł Prażak: I don't think it would. As continue and break refer to loops and this is a single conditional statement. - Isaac Nequittepas
@IsaacRemuant you are right. Maybe something like expected or default or action or normal? :) - Paweł Prażak
(1) @Paweł Prażak, as Kevin Horn mentioned, this syntax was introduced after the initial release of Python and adding new reserved keywords to existing language is always problematic. That's why an existing keyword is usually reused (c.f. "auto" in recent C++ standard). - Constantin
[+114] [2008-09-19 13:56:27] Thomas Wouters

Re-raising exceptions:

# Python 2 syntax
try:
    some_operation()
except SomeError, e:
    if is_fatal(e):
        raise
    handle_nonfatal(e)

# Python 3 syntax
try:
    some_operation()
except SomeError as e:
    if is_fatal(e):
        raise
    handle_nonfatal(e)

The 'raise' statement with no arguments inside an error handler tells Python to re-raise the exception with the original traceback intact, allowing you to say "oh, sorry, sorry, I didn't mean to catch that, sorry, sorry."

If you wish to print, store or fiddle with the original traceback, you can get it with sys.exc_info(), and printing it like Python would is done with the 'traceback' module.
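A runnable sketch (Python 3 syntax, names are mine) showing that a bare raise re-raises the original exception:

```python
def risky():
    raise ValueError("original")

try:
    try:
        risky()
    except ValueError:
        # Decide this one is not ours to handle: a bare `raise`
        # re-raises the active exception, traceback intact.
        raise
except ValueError as e:
    print(e)  # original
```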

Sorry but this is a well known and common feature of almost all languages. - Lucas S.
(6) Note the italicized text. Some people will do raise e instead, which doesn't preserve the original traceback. - habnabit
(12) Maybe more magical, exc_info = sys.exc_info(); raise exc_info[0], exc_info[1], exc_info[2] is equivalent to this, but you can change those values around (e.g., change the exception type or message) - ianb
(3) @Lucas S. Well, I didn't know it, and I'm glad it's written here. - e-satis
i may be showing my youth here, but i have always used the python 3 syntax in python 2.7 with no issue - wim
The Python 3 syntax works in 2.6 and 2.7 as well, yes. - Thomas Wouters
[+106] [2008-09-19 11:53:19] cleg

Main messages :)

import this
# btw look at this module's source :)

De-cyphered [1]:

The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!


Loving the source for that :D - Teifion
(1) Any idea why the source was cyphered that way? Was it just for fun, or was there some other reason? - MiniQuark
(42) the way the source is written goes against the zen! - hasen
It should be easier to understand if instead of 65 it used ord("A"), ord("a") instead of 97 and ord("z")-ord("a") instead of 26. The rest is just a Caesar cipher by 13 (A.K.A. ROT13). But indeed it would have been more pythonic to use the str.translate method :-p - fortran
(2) I've updated my /usr/lib/python2.6/ replacing the old code with this print s.translate("".join(chr(64<i<91 and 65+(i-52)%26 or 96<i<123 and 97+(i-84)%26 or i) for i in range(256))) and it looks much better now!! :-D - fortran
(1) year, that's called irony. (the reason, why they made it) - Joschua
(2) @MiniQuark: quick history lesson: - user21037
I found this history of import this the other day. Rather interesting: - asmeurer
@Dan: Damn. I didn't see your comment until just now. - asmeurer
I think the source was obfuscated to disguise the commit, so the easter egg really would be a surprise, even to people skimming commits. - Jonathan Hartley
[+105] [2008-10-03 18:38:15] mjard

Interactive Interpreter Tab Completion

try:
    import readline
except ImportError:
    print "Unable to load readline module."
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")

>>> class myclass:
...    def function(self):
...       print "my function"
>>> class_instance = myclass()
>>> class_instance.<TAB>
class_instance.__class__   class_instance.__module__
class_instance.__doc__     class_instance.function
>>> class_instance.f<TAB>unction()

You will also have to set a PYTHONSTARTUP environment variable.

(2) This is a very useful feature. So much so I've a simple script to enable it (plus a couple of other introspection enhancements): - pixelbeat
(43) IPython gives you this plus tons of other neat stuff - akaihola
This would have been more useful at pdb prompt than the regular python prompt (as IPython serves that purpose anyway). However, this doesn't seem to work at the pdb prompt, probably because pdb binds its own for tab (which is less useful). I tried calling parse_and_bind() at the pdb prompt, but it still didn't work. The alternative of getting pdb prompt with IPython is more work so I tend to not use it. - haridsv
Found this recipe, but this didn't work for me (using python 2.6): - haridsv
(2) @haridsv -- easy_install ipdb -- then you can use import ipdb; ipdb.set_trace() - Doug Harris
For me the best tip was to use the try:except:else:. I've forgotten about the else in the try block - neves
(1) On osx [and i imagine other systems which use libedit] you have to do readline.parse_and_bind ("bind ^I rl_complete") - Foo Bah
[+91] [2008-09-19 12:45:44] Rafał Dowgird

Nested list comprehensions and generator expressions:

[(i,j) for i in range(3) for j in range(i) ]    
((i,j) for i in range(4) for j in range(i) )

These can replace huge chunks of nested-loop code.
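The equivalence to explicit loops can be sketched like this; the for clauses read left to right as outermost to innermost:

```python
# List comprehension with a dependent inner range...
pairs = [(i, j) for i in range(3) for j in range(i)]

# ...is equivalent to this nested loop, read top to bottom.
expected = []
for i in range(3):
    for j in range(i):
        expected.append((i, j))

print(pairs)  # [(1, 0), (2, 0), (2, 1)]
```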

"for j in range(i)" - is this a typo? Normally you'd want fixed ranges for i and j. If you're accessing a 2d array, you'd miss out on half your elements. - Peter Gibson
I'm not accessing any arrays in this example. The only purpose of this code is to show that the expressions from the inner ranges can access those from the outer ones. The by-product is a list of pairs (x,y) such that 4>x>y>0. - Rafał Dowgird
(2) sorta like double integration in calculus, or double summation. - Yoo
(22) Key point to remember here (which took me a long time to realize) is that the order of the for statements are to be written in the order you'd expect them to be written in a standard for-loop, from the outside inwards. - sykora
(2) To add on to sykora's comment: imagine you're starting with a stack of fors and ifs with yield x inside. To convert that to a generator expression, move x first, delete all the colons (and the yield), and surround the whole thing in parentheses. To make a list comprehension instead, replace the outer parens with square brackets. - Ken Arnold
Great comment, Ken, I have trouble visualizing this as well but anyone could grasp from your comment. - Profane
[+91] [2009-01-01 16:05:42] Kiv

Operator overloading for the set builtin:

>>> a = set([1,2,3,4])
>>> b = set([3,4,5,6])
>>> a | b # Union
{1, 2, 3, 4, 5, 6}
>>> a & b # Intersection
{3, 4}
>>> a < b # Subset
False
>>> a - b # Difference
{1, 2}
>>> a ^ b # Symmetric Difference
{1, 2, 5, 6}

More detail from the standard library reference: Set Types [1]


[+85] [2008-12-17 08:09:01] Abgan

Negative round

The round() function rounds a float number to given precision in decimal digits, but precision can be negative:

>>> str(round(1234.5678, -2))
'1200.0'
>>> str(round(1234.5678, 2))
'1234.57'

Note: round() always returns a float; str() is used in the above example because floating point math is inexact, and under 2.x the second example can print as 1234.5700000000001. Also see the decimal [1] module.


(3) So often I have to round a number to a multiple. Eg, round 17 to a multiple of 5 (15). But Python's round doesn't let me do that! IMO, it should be structured as round(num, precision=1) - round "num" to the nearest multiple of "precision" - Ponkadoodle
(3) @wallacoloo what's the matter with (17 / 5)*5 ? Isn't it short and expressive? - silviot
(1) @silviot try that with (19 / 5)*5. 19 rounded to the nearest 5 should be 20, right? But that seems to return 15. Also, that's relying on the integer division rules of Python 2.x. It won't work the same in 3.x. The most concise, correct solution imo is: roundNearest = lambda n, m: round(float(n)/m)*m - Ponkadoodle
(6) Or in general, roundNearest = lambda n, m: (n + (m/2)) / m * m. It's twice as fast as using round(float) on my system. - Mikel
[+81] [2009-12-05 21:50:27] jpsimons

Multiplying by a boolean

One thing I'm constantly doing in web development is optionally printing HTML parameters. We've all seen code like this in other languages:

class='<% isSelected ? "selected" : "" %>'

In Python, you can multiply by a boolean and it does exactly what you'd expect:

class='<% "selected" * isSelected %>'

This is because multiplication coerces the boolean to an integer (0 for False, 1 for True), and in python multiplying a string by an int repeats the string N times.
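In plain Python (outside a template), the same trick looks like:

```python
selected = True
# bool is a subclass of int: True * s repeats s once, False * s zero times.
print("selected" * selected)        # selected
print("selected" * (not selected))  # (empty string)
```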

(8) +1, that's a nice one. OTOH, as it's just a bit arcane, it's easy to see why you might not want to do this, for readability reasons. - SingleNegationElimination
I would write bool(isSelected) both for reliability and readability. - Marian
(24) you could also use something like: ('not-selected', 'selected')[isSelected] if you need an option for False value too.. - redShadow
(9) Proper conditional expressions were added to Python in 2.5. If you're using 2.5+ you probably shouldn't use these tricks for readability reasons. - Peter Graham
[+74] [2008-09-21 22:07:44] Armin Ronacher

Python's advanced slicing operation has a barely known syntax element, the ellipsis:

>>> class C(object):
...  def __getitem__(self, item):
...   return item
>>> C()[1:2, ..., 3]
(slice(1, 2, None), Ellipsis, 3)

Unfortunately it's barely useful as the ellipsis is only supported if tuples are involved.

(13) see… for more info - molasses
(3) Actually, the ellipsis is quite useful when dealing with multi-dimensional arrays from numpy module. - Denilson Sá Maia
(2) This is supposed to be more useful in Python 3, where the ellipsis will become a literal. (Try it, you can type ... in a Python 3 interpreter and it will return Eillipsis) - asmeurer
[+72] [2009-04-29 20:56:59] Scott Kirkwood

re can call functions!

The fact that you can call a function every time something matches a regular expression is very handy. Here I have a sample of replacing every "Hello" with "Hi," and "there" with "Fred", etc.

import re

def Main(haystack):
  # List of from replacements, can be a regex
  finds = ('Hello', 'there', 'Bob')
  replaces = ('Hi,', 'Fred,', 'how are you?')

  def ReplaceFunction(matchobj):
    for found, rep in zip(matchobj.groups(), replaces):
      if found is not None:
        return rep

    # log error

  named_groups = [ '(%s)' % find for find in finds ]
  ret = re.sub('|'.join(named_groups), ReplaceFunction, haystack)
  print ret

if __name__ == '__main__':
  haystack = 'Hello there Bob'
  Main(haystack)
  # Prints 'Hi, Fred, how are you?'
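A smaller sketch of the same idea: the repl argument to re.sub can be any callable that takes a match object and returns the replacement string.

```python
import re

def shout(match):
    # Receives the match object; whatever it returns is substituted in.
    return match.group(0).upper()

print(re.sub(r"\bpython\b", shout, "i like python"))  # i like PYTHON
```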

(2) This is insane. I had no idea this existed. awesome. thanks a lot. - Jeffrey Jose
(2) Never seen this before, but a better example might be re.sub('[aeiou]', lambda match:*3, 'abcdefghijklmnopqrstuvwxyz') - Don Spaulding
[+70] [2011-01-05 09:29:54] Adrien Plisson

tuple unpacking in python 3

in python 3, you can use a syntax identical to the *args parameter in function definitions for tuple unpacking:

>>> first,second,*rest = (1,2,3,4,5,6,7,8)
>>> first
1
>>> second
2
>>> rest
[3, 4, 5, 6, 7, 8]

a less known and more powerful feature allows you to collect an unknown number of elements in the middle of the list:

>>> first,*rest,last = (1,2,3,4,5,6,7,8)
>>> first
1
>>> rest
[2, 3, 4, 5, 6, 7]
>>> last
8
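The same star target works anywhere an assignment target list does, for example in a for loop; a small sketch (the data here is made up):

```python
# Star targets also work in for-loop assignment lists:
rows = [('ann', 90, 95), ('bob', 80)]
for name, *scores in rows:
    print(name, scores)
# ann [90, 95]
# bob [80]
```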

(6) Quite haskellish :) cool one :) - pielgrzym
(2) i like it , bummer it doesn't work in 2.7.. - wim
[+67] [2010-07-27 11:07:33] sa125

Multi line strings

One approach is to use backslashes:

>>> sql = "select * from some_table \
where id > 10"
>>> print sql
select * from some_table where id > 10

Another is to use the triple-quote:

>>> sql = """select * from some_table 
where id > 10"""
>>> print sql
select * from some_table 
where id > 10

The problem with both is that they cannot be indented to match the surrounding code: if you indent the continuation lines, the extra whitespace ends up inside the string.

A third solution, which I found about recently, is to divide your string into lines and surround with parentheses:

>>> sql = ("select * from some_table " # <-- no comma, whitespace at end
           "where id > 10 "
           "order by name") 
>>> print sql
select * from some_table where id > 10 order by name

note how there's no comma between lines (this is not a tuple), and you have to account for any trailing/leading white spaces that your string needs to have. All of these work with placeholders, by the way (such as "my name is %s" % name).

have been looking for this for a long time - jassinm
(1) That's a gooood thing when writing long stuff in code, while keeping a low line length! - Joël
[+63] [2010-07-16 19:18:42] Wayne Werner

This answer has been moved into the question itself, as requested by many people.

[+59] [2008-09-22 18:22:29] Tzury Bar Yochay
  • The underscore, it contains the most recent output value displayed by the interpreter (in an interactive session):
>>> (a for a in xrange(10000))
<generator object at 0x81a8fcc>
>>> b = 'blah'
>>> _
<generator object at 0x81a8fcc>
  • A convenient Web-browser controller:
>>> import webbrowser
>>> webbrowser.open_new_tab('')
  • A built-in http server. To serve the files in the current directory:
python -m SimpleHTTPServer 8000
  • AtExit
>>> import atexit
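A minimal sketch of atexit (the callback name is illustrative): registered functions run at normal interpreter shutdown, most recently registered first.

```python
import atexit

def farewell():
    print('goodbye')

# register() returns the function, so it also works as a decorator.
atexit.register(farewell)
atexit.unregister(farewell)  # callbacks can be removed again (Python 3)
```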

(2) Why not just SimpleHTTPServer? - Andrew Szeto
(15) worth noting that the _ is available only in interactive mode. when running scripts from a file, _ has no special meaning. - SingleNegationElimination
(1) @TokenMacGuy: Actually, you can define _ to be a variable in a file (just in case you do want to go for obfuscated Python). - asmeurer
note: you can also use __ for the second-last and ___ for the third last - wim
(2) @asmeurer I frequently use _ as a name for variables I do not care about (eg for _, desired_value, _ in my_tuple_with_some_irrelevant_values). Yes, ike a prologger :) - brandizzi
[+56] [2010-07-30 12:36:40] Tamás

pow() can also calculate (x ** y) % z efficiently.

There is a lesser-known third argument of the built-in pow() function that allows you to calculate x**y modulo z much more efficiently than simply doing (x ** y) % z:

>>> x, y, z = 1234567890, 2345678901, 17
>>> pow(x, y, z)            # almost instantaneous
6

In comparison, (x ** y) % z hadn't produced a result after a full minute on my machine for the same values.
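The trick is binary (square-and-multiply) exponentiation, reducing every intermediate value mod z so the numbers never get huge; a rough sketch of the idea (not CPython's actual implementation):

```python
def modpow(base, exp, mod):
    # Square-and-multiply: O(log exp) multiplications,
    # and intermediates never exceed mod**2.
    result = 1
    base %= mod
    while exp:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result

x, y, z = 1234567890, 2345678901, 17
print(modpow(x, y, z), pow(x, y, z))  # both print 6
```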

I've always wondered what the use case is for this. I haven't encountered one, but then again I don't do scientific computing. - bukzor
(4) @buzkor: it's pretty useful for cryptography, too - Agos
(3) Remember, this is the built-in pow() function. This is not the math.pow() function, which accepts only 2 arguments. - Denilson Sá Maia
I remember stating very adamantly that I could not code cryptography in pure Python without this feature. This was in 2003, and so the version of Python I was working with was 2.2 or 2.3. I wonder if I was making a fool of myself and pow had that third parameter then or not. - Omnifarious
pow had that third parameter at least since Python 2.1. However, according to the documentation, "[i]n Python 2.1 and before, floating 3-argument pow() returned platform-dependent results depending on floating-point rounding accidents." - Tamás
(3) The cool thing here is that you can override this behavior in your own objects using __pow__. You just have to define an optional third argument. And for more information on where this would be used, see - asmeurer
Fermats little theorem made quick! - zetavolt
[+52] [2008-11-28 23:27:59] FA.

You can easily transpose an array with zip.

a = [(1,2), (3,4), (5,6)]
zip(*a)
# [(1, 3, 5), (2, 4, 6)]
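In Python 3, zip() returns a lazy iterator, so wrap it in list() to materialize the transpose; doing it twice round-trips back to the original:

```python
a = [(1, 2), (3, 4), (5, 6)]
transposed = list(zip(*a))
print(transposed)  # [(1, 3, 5), (2, 4, 6)]

# Transposing the transpose restores the original rows:
assert list(zip(*transposed)) == a
```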

(7) Basically, zip(*a) unzips a. So if b = zip(a), then a == zip(*b). - asmeurer
(3) map(None, *a) can come in handy if your tuples are of differing lengths: map(None, *[(1,2), (3,4,5), (5,)]) => [(1, 3, 5), (2, 4, None), (None, 5, None)] - hbn
Just found this feature at and was about to share it on here. Looks like ya beat me to the chase. - Adam Fraser
The way I remember how this works is that "zip* turns a list of pairs in to a pair of lists" (and vice versa) - Adam
[+52] [2010-10-19 09:53:40] Tamás

enumerate with different starting index

enumerate has partly been covered in this answer [1], but recently I've found an even more hidden feature of enumerate that I think deserves its own post instead of just a comment.

Since Python 2.6, you can specify a starting index to enumerate in its second argument:

>>> l = ["spam", "ham", "eggs"]
>>> list(enumerate(l))
[(0, 'spam'), (1, 'ham'), (2, 'eggs')]
>>> list(enumerate(l, 1))
[(1, 'spam'), (2, 'ham'), (3, 'eggs')]

One place where I've found it utterly useful is when I am enumerating over entries of a symmetric matrix. Since the matrix is symmetric, I can save time by iterating over the upper triangle only, but in that case, I have to use enumerate with a different starting index in the inner for loop to keep track of the row and column indices properly:

for ri, row in enumerate(matrix):
    for ci, column in enumerate(row[ri:], ri):
        # ci now refers to the proper column index
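A runnable sketch of that pattern on a small, made-up symmetric matrix, collecting the upper-triangle coordinates:

```python
matrix = [
    [1, 2, 3],
    [2, 5, 6],
    [3, 6, 9],
]

upper = []
for ri, row in enumerate(matrix):
    # Start the inner enumerate at ri so ci is the true column index.
    for ci, value in enumerate(row[ri:], ri):
        upper.append((ri, ci, value))

print(upper)
# [(0, 0, 1), (0, 1, 2), (0, 2, 3), (1, 1, 5), (1, 2, 6), (2, 2, 9)]
```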

Strangely enough, this behaviour of enumerate is not documented in help(enumerate), only in the online documentation [2].


help(enumerate) has this proper function signature in python2.x, but not in py3k. I guess, a bug needs to be filled. - SilentGhost
help(enumerate) is definitely wrong in Python 2.6.5. Maybe they have fixed it already in Python 2.7. - Tamás
help(enumerate) from Python 3.1.2 says class enumerate(object) | enumerate(iterable) -> iterator for index, value of iterable, but the trick from the answer works fine. - Cristian Ciupitu
It looks like this was added in Python 2.6 as it does not work in Python 2.5. - Tamás
[+50] [2008-09-19 13:43:46] jfs

You can use property [1] to make your class interfaces more strict.

class C(object):
    def __init__(self, foo, bar):
        self.foo = foo # read-write property = bar # simple attribute

    def _set_foo(self, value):
        self._foo = value

    def _get_foo(self):
        return self._foo

    def _del_foo(self):
        del self._foo

    # any of fget, fset, fdel and doc are optional,
    # so you can make a write-only and/or delete-only property.
    foo = property(fget = _get_foo, fset = _set_foo,
                   fdel = _del_foo, doc = 'Hello, I am foo!')

class D(C):
    def _get_foo(self):
        return self._foo * 2

    def _set_foo(self, value):
        self._foo = value / 2

    foo = property(fget = _get_foo, fset = _set_foo,
                   fdel = C.foo.fdel, doc = C.foo.__doc__)

In Python 2.6 and 3.0 [2]:

class C(object):
    def __init__(self, foo, bar):
        self.foo = foo # read-write property = bar # simple attribute

    @property
    def foo(self):
        '''Hello, I am foo!'''

        return self._foo

    @foo.setter
    def foo(self, value):
        self._foo = value

    @foo.deleter
    def foo(self):
        del self._foo

class D(C):
    @C.foo.getter
    def foo(self):
        return self._foo * 2

    @foo.setter
    def foo(self, value):
        self._foo = value / 2

To learn more about how property works refer to descriptors [3].


(6) It would be nice if your pre-2.6 and your 2.6 and 3.0 examples would actually present the exact same thing: classname is different, there are comments in the pre-2.6 version, the 2.6 and 3.0 versions don't contain initialization code. - Confusion
[+48] [2008-09-22 17:32:50] lacker

Many people don't know about the "dir" function. It's a great way to figure out what an object can do from the interpreter. For example, if you want to see a list of all the string methods:

>>> dir("foo")
['__add__', '__class__', '__contains__', (snipped a bunch), 'title',
 'translate', 'upper', 'zfill']

And then if you want more information about a particular method you can call "help" on it.

>>> help("foo".upper)
    Help on built-in function upper:

    S.upper() -> string

    Return a copy of the string S converted to uppercase.

(4) dir() is essential for development. For large modules I've enhanced it to add filtering. See - pixelbeat
(2) You can also directly use help: help('foo') - yuriks
(7) If you use IPython, you can append a question mark to get help on a variable/method. - akaihola
see: An alternative to Python's dir(). Easy to type; easy to read! For humans only: - compie
I call this python's man pages and can also be implemented to work when 'man' is called rather than 'help' - inspectorG4dget
@compie -- see() is very handy. Very nice! So much easier to read than the output of dir() - Adam Parkin
[+47] [2008-10-22 07:24:30] monkut

set/frozenset [1]

An easily overlooked Python builtin is "set/frozenset".

Useful when you have a list like this, [1,2,1,1,2,3,4] and only want the uniques like this [1,2,3,4].

Using set() that's exactly what you get:

>>> x = [1,2,1,1,2,3,4] 
>>> set(x) 
set([1, 2, 3, 4]) 
>>> for i in set(x):
...     print i
... 
1
2
3
4

And of course to get the number of uniques in a list:

>>> len(set([1,2,1,1,2,3,4]))
4

You can also find if a list is a subset of another list using set().issubset():

>>> set([1,2,3,4]).issubset([0,1,2,3,4,5])
True

As of Python 2.7 and 3.0 you can use curly braces to create a set:

myset = {1,2,3,4}

as well as set comprehensions:

{x for x in stuff}
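Sets also support the usual set algebra through operators, which often reads better than the method names; a quick sketch:

```python
a = {1, 2, 3, 4}
b = {3, 4, 5}

print(a & b)        # intersection: {3, 4}
print(a | b)        # union: {1, 2, 3, 4, 5}
print(a - b)        # difference: {1, 2}
print({1, 2} <= a)  # subset test: True
```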

For more details:


(2) Also useful in cases where a dictionary were used only to test if a value is there. - Jacek Konieczny
(1) I use set about as much as tuple and list. - L̲̳o̲̳̳n̲̳̳g̲̳̳p̲̳o̲̳̳k̲̳̳e̲̳̳
for subsets, i believe it is issubset not isasubset. either way, the subset operator <= is nicer anyway. - wim
you can do dict comprehension too in python 2.7 like this { x:x*2 for x in range(3) } It's probably sort of confusing if you don't know what you are doing imho - Hassek
[+46] [2008-09-27 13:37:41] spiv

Built-in base64, zlib, and rot13 codecs

Strings have encode and decode methods. Usually these are used for converting between str and unicode, e.g. u = s.decode('utf8') (and back with u.encode('utf8')). But there are some other handy builtin codecs. Compression and decompression with zlib (and bz2) is available without an explicit import:

>>> s = 'a' * 100
>>> s.encode('zlib')

Similarly you can encode and decode base64:

>>> 'Hello world'.encode('base64')
'SGVsbG8gd29ybGQ=\n'
>>> 'SGVsbG8gd29ybGQ=\n'.decode('base64')
'Hello world'

And, of course, you can rot13:

>>> 'Secret message'.encode('rot13')
'Frperg zrffntr'
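In Python 3 str.encode is Unicode-only, but the same codecs are still reachable through the codecs module; a sketch:

```python
import codecs

# Bytes-to-bytes codecs go through codecs.encode/decode in Python 3:
data = b'a' * 100
compressed = codecs.encode(data, 'zlib_codec')
assert codecs.decode(compressed, 'zlib_codec') == data

encoded = codecs.encode(b'Hello world', 'base64_codec')
print(encoded)  # b'SGVsbG8gd29ybGQ=\n'

# rot13 is a text-to-text codec:
print(codecs.encode('Secret message', 'rot13'))  # Frperg zrffntr
```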

(16) Sadly this will stop working in Python 3 - Marius Gedminas
Oh, will it stop working? That's too bad :/. I was just thinking how great this feature was. Then I saw your comment. - FeatureCreep
(3) Awe, the base64 one was pretty useful in interactive sessions handling data from the web. - L̲̳o̲̳̳n̲̳̳g̲̳̳p̲̳o̲̳̳k̲̳̳e̲̳̳
In my opionion it's some type of en/decoding, but on the other side there should "only one way to it" and I think, that these things are better put in its own module! - Joschua
[+43] [2008-12-26 16:05:39] James Brady

An interpreter within the interpreter

The standard library's code [1] module lets you include your own read-eval-print loop inside a program, or run a whole nested interpreter. E.g. (example copied from here [2])

$ python
Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17) 
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> shared_var = "Set in main console"
>>> import code
>>> ic = code.InteractiveConsole({ 'shared_var': shared_var })
>>> try:
...     ic.interact("My custom console banner!")
... except SystemExit, e:
...     print "Got SystemExit!"
My custom console banner!
>>> shared_var
'Set in main console'
>>> shared_var = "Set in sub-console"
>>> import sys
>>> sys.exit()
Got SystemExit!
>>> shared_var
'Set in main console'

This is extremely useful for situations where you want to accept scripted input from the user, or query the state of the VM in real-time.

TurboGears [3] uses this to great effect by having a WebConsole from which you can query the state of your live web app.


[+40] [2008-11-18 19:06:36] jamesturk
>>> from functools import partial
>>> bound_func = partial(range, 0, 10)
>>> bound_func()
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> bound_func(2)
[0, 2, 4, 6, 8]

Not really a hidden feature, but partial is extremely useful for late evaluation of functions.

You can bind as many or as few parameters in the initial call to partial as you want, and supply any remaining parameters when you call the resulting object (in this example the begin/end args to range are bound, and the second call supplies the step arg).
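partial can bind keyword arguments too, and the resulting object exposes .func, .args and .keywords for introspection; a small sketch (int_from_hex is an illustrative name):

```python
from functools import partial

# Bind a keyword argument: a parser for hexadecimal strings.
int_from_hex = partial(int, base=16)
print(int_from_hex('ff'))  # 255

# The bound pieces are inspectable on the partial object:
print(int_from_hex.func, int_from_hex.keywords)
```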

See the documentation [1].


(5) I wish curryfication add a decent operator in python though. - fulmicoton
[+36] [2008-11-04 13:09:28] utku_karatas

While debugging complex data structures pprint module comes handy.

Quoting from the docs..

>>> import pprint    
>>> stuff = sys.path[:]
>>> stuff.insert(0, stuff)
>>> pprint.pprint(stuff)
[<Recursion on list with id=869440>,

(10) pprint is also good for printing dictionaries in doctests, since it always sorts the output by keys - akaihola
[+34] [2008-10-05 09:51:09] Constantin

Python has GOTO

...and it's implemented by an external pure-Python module [1] :)

from goto import goto, label
for i in range(1, 10):
    for j in range(1, 20):
        for k in range(1, 30):
            print i, j, k
            if k == 3:
                goto .end # breaking out from a deeply nested loop
label .end
print "Finished"

(65) Maybe it is best that this feature remains hidden. - James McMahon
(8) Well, the actual hidden feature here is mechanism used to implement GOTO. - Constantin
(2) Surely, for breaking out of a nested loop you can just raise an exception, no? - shylent
+1 first one I actually did not know about. - SingleNegationElimination
(7) @shylent: Exceptions should be exceptional. For that reason they are optimized for the case that they are not thrown. If you expect the condition to occur in the course of normal processing, you should use another method - SingleNegationElimination
(6) @shylent, the correct way to break out of a nested loop is to put the loop into a function, and return from the function - Christian Oudard
(1) External modules should not be included in this list. GOTO is not a feature of Python. - Nick Perkins
@TokenMacGuy: not in Python. Exception are used internally to end loops using StopIteration. Exception are not exceptional at all. - e-satis
[+32] [2008-09-26 20:51:53] Henry Precheur

dict's constructor accepts keyword arguments:

>>> dict(foo=1, bar=2)
{'foo': 1, 'bar': 2}
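Combined with a positional mapping argument, this gives a one-line "copy and update" (the keys must still be valid identifiers):

```python
base = {'a': 1, 'b': 2}
# Copy base, overriding 'b' and adding 'c', without mutating base:
updated = dict(base, b=20, c=3)
print(updated)  # {'a': 1, 'b': 20, 'c': 3}
print(base)     # {'a': 1, 'b': 2}
```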

(8) So long as the keyword arguments are valid Python identifiers (names). You can't use: dict(1="one", two=2 ...) because the "1" is not a valid identifier even though it's a perfectly valid dictionary key. - Jim Dennis
It's perfect for copy-and-update: base = {'a': 4, 'b': 5}; updated = dict(base, c=5) - Tomek Paczkowski
[+29] [2010-09-12 05:11:28] Ruslan Spivak

Sequence multiplication and reflected operands

>>> 'xyz' * 3
'xyzxyzxyz'

>>> [1, 2] * 3
[1, 2, 1, 2, 1, 2]

>>> (1, 2) * 3
(1, 2, 1, 2, 1, 2)

We get the same result with reflected (swapped) operands:

>>> 3 * 'xyz'
'xyzxyzxyz'

It works like this:

>>> s = 'xyz'
>>> num = 3

To evaluate the expression s * num, the interpreter calls s.__mul__(num):

>>> s * num
'xyzxyzxyz'
>>> s.__mul__(num)
'xyzxyzxyz'

To evaluate the expression num * s, the interpreter calls num.__mul__(s):

>>> num * s
'xyzxyzxyz'
>>> num.__mul__(s)
NotImplemented

If that call returns NotImplemented and the operands have different types, the interpreter then calls the reflected operation s.__rmul__(num):

>>> s.__rmul__(num)
'xyzxyzxyz'

See rmul [1]
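The same protocol lets your own classes support both operand orders; a minimal sketch (Seq is a toy class invented for illustration):

```python
class Seq:
    """Toy sequence supporting both seq * n and n * seq."""
    def __init__(self, items):
        self.items = list(items)

    def __mul__(self, n):
        return Seq(self.items * n)

    # Reflected operand: called for n * seq after int.__mul__
    # returns NotImplemented.
    __rmul__ = __mul__

print((Seq([1, 2]) * 3).items)  # [1, 2, 1, 2, 1, 2]
print((3 * Seq([1, 2])).items)  # [1, 2, 1, 2, 1, 2]
```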


(3) +1 I knew about sequence multiplication, but the reflected operands are new to me. - Björn Pollex
@Space, it would be unpythonic to have x * y != y * x, after all :) - badp
In python you may have x * y != y * x (it's just enough to play with the 'mul' methods). - rob
Seeing many questions about problems with x= [] * 20, i am thinking if it would be better to make shallow copies of the operands by default - warvariuc
[+28] [2008-09-19 13:16:49] Ber

Getter functions in module operator

The functions attrgetter() and itemgetter() in the operator module can be used to generate fast accessor functions, for use as sort keys and for searching through objects and dictionaries

Chapter 6.7 [1] in the Python Library Docs
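A short sketch of both getters as sort/search keys (the data here is made up):

```python
from collections import namedtuple
from operator import attrgetter, itemgetter

# itemgetter builds a fast "x[i]" accessor -- handy as a sort key:
pairs = [('b', 2), ('a', 3), ('c', 1)]
print(sorted(pairs, key=itemgetter(1)))  # [('c', 1), ('b', 2), ('a', 3)]

# attrgetter does the same for attribute access:
Person = namedtuple('Person', 'name age')
people = [Person('ann', 30), Person('bob', 25)]
print(min(people, key=attrgetter('age')).name)  # bob
```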


This answer deserves good examples, for instance in conjunction with map() - Jonathan Livni
[+28] [2008-09-20 14:31:17] Torsten Marek

Interleaving if and for in list comprehensions

>>> [(x, y) for x in range(4) if x % 2 == 1 for y in range(4)]
[(1, 0), (1, 1), (1, 2), (1, 3), (3, 0), (3, 1), (3, 2), (3, 3)]

I never realized this until I learned Haskell.

way cool.… - jimmyorr
(6) Not so cool, you are just having a list comprehension with two for loops. What is so surprising about that? - Olivier Verdier
@Olivier: there's an if between the two for loops. - Torsten Marek
(1) @Torsten: well, the list comprehension comprises already a for .. if, so what is so interesting? You can write: [x for i in range(10) if i%2 for j in range(10) if j%2], nothing especially cool or interesting. The if in the middle of your example has nothing to do with the second for. - Olivier Verdier
(3) I was wondering, is there a way to do this with an else? [ a for (a, b) in zip(lista, listb) if a == b else: '-' ] - Austin Richardson
in [ _ for _ in _ if _ ] the if is a filter for the example above it would need to be [ _ if _ else _ for _ ] - Dan D.
[+27] [2008-09-22 05:33:15] ianb

Tuple unpacking:

>>> (a, (b, c), d) = [(1, 2), (3, 4), (5, 6)]
>>> a
(1, 2)
>>> b
3
>>> c, d
(4, (5, 6))

More obscurely, you can do this in function arguments (in Python 2.x; Python 3.x will not allow this anymore):

>>> def addpoints((x1, y1), (x2, y2)):
...     return (x1+x2, y1+y2)
>>> addpoints((5, 0), (3, 5))
(8, 5)

(6) For what it's worth, tuple unpacking in function definitions is going aaway in python 3.0 - Ryan
(3) Mostly because it makes the implementation really nasty, as far as I understand. ( inspect.getargs in the standard library - the normal path (no tuple args) is about 10 lines, and there are about 30 extra lines for handling tuple args (which only gets used occasionally).) Makes me sad though. - babbageclunk
Looks like they are removing some of the batteries in 3.0 :/ . - FeatureCreep
It's good, that they remove it, because it's ugly and you can just emulate this, by typing: x1, x2 = x; y1, y2 = y (if you have x,y arguments) - Joschua
That's a shame. I was hoping support for * would be added for remaining arguments, so you could do stuff like a, b, *c = [1, 2, 3, 4, 5] (equivalent to a = 1, b = 2, c = [3, 4, 5]). - hbn
(1) @yangyang: that was added. The only thing that was removed is the tuple unpacking in function definitions. Instead, you just move such unpacking to the first line of the function implementation. - ncoghlan
[+27] [2008-09-29 10:36:12] tadeusz

Obviously, the antigravity module. xkcd #353 [1]


(4) Probably my most used module. After the soul module, of course. - user13876
(21) Which actually works. Try putting "import antigravity" in the newest Py3K. - Andrew Szeto
@Andrew Szeto... what does it do? - Jiaaro
@Jim Robert: It opens up the webbrowser to the xkcd site ;) - poke
the skynet module is quite useful too… - Tshirtman
[+26] [2008-09-19 20:30:28] davidavr

The Python Interpreter


Maybe not lesser known, but certainly one of my favorite features of Python.

(5) The #1 reason Python is better than everything else. </fanboi> - user13876
(16) Everything else you've seen. </smuglispweenie> - Matt Curtis
(8) And it also has iPython which is much better than the default interpreter - juanjux
(1) I wish I could use iPython like SLIME in all of its glory - user130594
[+25] [2008-09-22 18:03:00] amix

Python sort function sorts tuples correctly (i.e. using the familiar lexicographical order):

a = [(2, "b"), (1, "a"), (2, "a"), (3, "c")]
print sorted(a)
#[(1, 'a'), (2, 'a'), (2, 'b'), (3, 'c')]

Useful if you want to sort a list of persons by age and then by name.
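For the persons example, you can build that comparison tuple in a key function (the data is illustrative):

```python
people = [('bob', 30), ('ann', 25), ('cid', 25)]
# Sort by age first, then by name, by making the key a tuple:
by_age_then_name = sorted(people, key=lambda p: (p[1], p[0]))
print(by_age_then_name)  # [('ann', 25), ('cid', 25), ('bob', 30)]
```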

(5) This is a consequence of tuple comparison working correctly in general, i.e. (1, 2) < (1, 3). - Constantin
(9) This is useful for version tuples: (1, 9) < (1, 10). - Roger Pate
[+25] [2008-10-21 13:26:46] Jake

Referencing a list comprehension as it is being built...

You can reference a list comprehension as it is being built by the symbol '_[1]'. For example, the following function unique-ifies a list of elements without changing their order by referencing its list comprehension.

def unique(my_list):
    return [x for x in my_list if x not in locals()['_[1]']]

(3) Nifty trick. Do you know if this is accepted behavior or is it more of a dirty hack that may change in the future? The underscore makes me think the latter. - Kiv
Interesting. I think it'd be a dirty hack of the locals() dictionary, but I'd be curious to know for sure. - Amandasaurus
Brilliant, I was literally just looking for this yesterday! - Rob Golding
(18) not a good idea for algorithmic as well as practical reasons. Algorithmically, this will give you a linear search of the list so far on every iteration, changing your O(n) loop into O(n**2); much better to just make the list into a set afterwards. Practically speaking, it's undocumented, may change, and probably doesn't work in ironpython/jython/pypy . - llimllib
(32) This is an undocumented implementation detail, not a hidden feature. It would be a bad idea to rely on this. - Marius Gedminas
(1) If you want to reference the list as you're building it, use an ordinary loop. This is very implementation dependent - CPython uses a hidden name in the locals dict because it is convenient, but other implementations are under no obligation to do the same thing. - ncoghlan
[+25] [2009-12-01 02:09:50] Noctis Skytower

The unpacking syntax was upgraded in Python 3, as the example shows.

>>> a, *b = range(5)
>>> a, b
(0, [1, 2, 3, 4])
>>> *a, b = range(5)
>>> a, b
([0, 1, 2, 3], 4)
>>> a, *b, c = range(5)
>>> a, b, c
(0, [1, 2, 3], 4)

(4) never seen this before, it's pretty nice! - mdeous
which version? as this doesn't work in 2.5.2 - Dan D.
(1) works with 3.1, but not with 2.7 - Paweł Prażak
(1) Nice - been hoping for that! Shame the destructuring went. - hbn
[+25] [2011-01-23 18:24:40] Chmouel Boudjnah

The simplicity of:

>>> 'str' in 'string'
True
>>> 'no' in 'yes'
False

is something I love about Python. I have often seen less Pythonic idioms used instead, such as:

if 'yes'.find("no") == -1:

I'm conflicted about this, because it's inconsistent with the in behavior on other kinds of sequences. 1 in [3, 2, 1] is True, but [2, 1] in [3, 2, 1] is False, and it could really be a problem if it were True. But that's what would be needed to make it consistent with the string behavior explained here. So I think the .find() approach is actually more Pythonic, although of course .find() ought to have returned None instead of -1. - Kragen Javier Sitaker
Also note: 'str' not in 'abc' #true - Kosta
[+24] [2008-09-19 11:55:12] Matthias Kestenholz


of course :-) What is a metaclass in Python? [1]


[+24] [2009-06-02 09:12:21] Tom

I personally love the 3 different quotes

str = "I'm a string 'but still I can use quotes' inside myself!"
str = """ For some messy multi line strings.
Such as
<head> ... </head>"""

Also cool: not having to escape regular expressions, avoiding horrible backslash salad by using raw strings:

str2 = r"\n" 
print str2
>> \n

(8) Four different quotes, if you include ''' - user1686
I enjoy having ' and " do pretty much the same thing in code. My IDE highlights strings from the two in different colors, and it makes it easy to differentiate short strings (with ') from longer ones (with "). - asmeurer
[+23] [2008-10-03 00:01:22] Robert Rossney


I think that a lot of beginning Python developers pass over generators without really grasping what they're for or getting any sense of their power. It wasn't until I read David M. Beazley's PyCon presentation on generators (it's available here [1]) that I realized how useful (essential, really) they are. That presentation illuminated what was for me an entirely new way of programming, and I recommend it to anyone who doesn't have a deep understanding of generators.


(2) Wow! My brain is fried and that was just the first 6 parts. Starting in 7 I had to start drawing pictures just to see if I really understood what was happening with multi-process / multi-thread / multi-machine processing pipelines. Amazing stuff! - Peter Rowell
(1) +1 for the link to the presentation - Mark Heath
[+22] [2008-09-19 13:39:43] e-satis

Implicit concatenation:

>>> print "Hello " "World"
Hello World

Useful when you want to make a long text fit on several lines in a script:

hello = "Greaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa Hello " \
        "World"

or, using implicit line continuation inside parentheses:

hello = ("Greaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa Hello "
         "World")

(3) To make a long text fit on several lines, you can also use the triple quotes. - Rafał Dowgird
Your example is wrong and misleading. After running it, the "Word" part won't be on the end of the hello string. It won't concatenate. To continue on next line like that, you would need implicit line continuation and string concatenation and that only happens if you use some delimiter like () or []. - nosklo
Only one thing was wrong here: the tab before "word" (typo). What's more, you are really unfriendly, espacially for somebody who didn't even take the time to check if it works (since you would have seen it does). You may want to read that : - e-satis
(2) Anyone who has ever forgotten a comma in a list of strings knows how evil this 'feature' is. - Terhorst
(7) Well, a PEP had been set to get rid of it but Guido decided finally to keep it. I guess it's more useful than hateful. Actually the drawbacks are no so dangerous (no safety issues) and for long strings, it helps a lot. - e-satis
(2) This is probably my favorite feature of Python. You can forget correct syntax and it's still correct syntax. - user13876
even better: hello = "Greaaaaa Hello \<pretend there's a line break here>World" - JAB
I always write a + at the end of the line (though I still do use the implicit line continuations from parentheses). It just makes things clearer to read. - asmeurer
[+22] [2010-07-15 09:03:42] Giampaolo Rodolà

When using the interactive shell, "_" contains the value of the last printed item:

>>> range(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> _
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

I always forget about this one! It's a great feature. - chimeracoder
_ automatic variable is the best feature when using Python shell as a calculator. Very powerful calculator, by the way. - Denilson Sá Maia
I still try to use %% in the python shell from too much Mathematica in a previous life... If only %% were a valid variable name, I'd set %% = _... - Conspicuous Compiler
This was already given by someone (I don't know if it was earlier, but it is voted higher). - asmeurer
__ for second-last and ___ for third-last - wim
[+22] [2010-07-22 14:47:37] Remco Wendt

The textwrap.dedent [1] utility function comes in quite handy when testing that a returned multiline string equals the expected output, without breaking the indentation of your unittests:

import unittest, textwrap

class XMLTests(unittest.TestCase):
    def test_returned_xml_value(self):
        returned_xml = call_to_function_that_returns_xml()
        expected_value = textwrap.dedent("""\
        <?xml version="1.0" encoding="utf-8"?>
        """)  # remainder of the expected XML elided here

        self.assertEqual(expected_value, returned_xml)
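A self-contained sketch of what dedent does: it strips the longest common leading whitespace, so the literal can stay indented in the source:

```python
import textwrap

# The trailing backslash suppresses the leading newline; dedent then
# removes the four-space common prefix from every line.
s = textwrap.dedent("""\
    line one
    line two
""")
print(repr(s))  # 'line one\nline two\n'
```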

[+22] [2010-07-24 20:25:47] David Z

Zero-argument and variable-argument lambdas

Lambda functions are usually used for a quick transformation of one value into another, but they can also be used to wrap a value in a function:

>>> f = lambda: 'foo'
>>> f()
'foo'

They can also accept the usual *args and **kwargs syntax:

>>> g = lambda *args, **kwargs: (args[0], kwargs['thing'])
>>> g(1, 2, 3, thing='stuff')
(1, 'stuff')

The main reason I see to keep lambda around: defaultdict(lambda: 1) - eswald
[+21] [2008-10-18 17:44:47] Kay Schluehr

Using keyword arguments as assignments

Sometimes one wants to build a range of functions depending on one or more parameters. However this might easily lead to closures all referring to the same object and value:

funcs = [] 
for k in range(10):
     funcs.append( lambda: k)

>>> funcs[0]()
9
>>> funcs[7]()
9

This behaviour can be avoided by turning the lambda expression into a function depending only on its arguments. A keyword parameter stores the current value that is bound to it. The function call doesn't have to be altered:

funcs = [] 
for k in range(10):
     funcs.append( lambda k = k: k)

>>> funcs[0]()
0
>>> funcs[7]()
7

(6) A less hackish way to do that (imho) is just to use a separate function to manufacture lambdas that don't close on a loop variable. Like this: def make_lambda(k): return lambda: k. - Jason Orendorff
"less hackish"?'s personal preference, I guess, but this is core Python stuff -- not really a hack. You certainly can structure it ( using functions ) so that the reader does not need to understand how Python's default arguments work -- but if you do understand how default arguments work, you will read the "lambda: k=k:k" and understand immediately that it is "saving" the current value of "k" ( as the lambda is created ), and attaching it to the lambda itself. This works the same with normal "def" functions, too. - Nick Perkins
Jason Orendorff's answer is correct, but this is how we used to emulate closures in Python before Guido finally agreed that nested scopes were a good idea. - Kragen Javier Sitaker
[+20] [2009-12-05 22:10:16] jpsimons

Mod works correctly with negative numbers

-1 % 5 is 4, as it should be, not -1 as it is in other languages like JavaScript. This makes "wraparound windows" cleaner in Python, you just do this:

index = (index + increment) % WINDOW_SIZE
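A quick runnable sketch (WINDOW_SIZE and the increment are illustrative values):

```python
WINDOW_SIZE = 5
index = 0
index = (index - 1) % WINDOW_SIZE  # stepping backwards wraps around
print(index)  # 4

assert -1 % 5 == 4
assert -7 % 5 == 3
```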

In most languages, number = coefficient x quotient + remainder. In Python (and Ruby), quotient is different than in JavaScript (or C or Java), because integer division in Python rounds towards negative infinity, but in JavaScript it rounds towards zero (truncates). I agree that % in Python makes more sense, but I don't know if / does. See for details on each language. - Mikel
In general, if abs(increment) < WINDOW_SIZE, then you can say index = (index + WINDOW_SIZE + increment) in any language, and have it do the right thing. - George V. Reilly
[+19] [2008-09-20 02:55:10] Jeremy Cantrell

First-class functions

It's not really a hidden feature, but the fact that functions are first class objects is simply great. You can pass them around like any other variable.

>>> def jim(phrase):
...   return 'Jim says, "%s".' % phrase
>>> def say_something(person, phrase):
...   print person(phrase)

>>> say_something(jim, 'hey guys')
Jim says, "hey guys".
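Because functions are values, they can also sit inside data structures, which gives you dispatch tables for free; a sketch with made-up names:

```python
def shout(s):
    return s.upper()

def hush(s):
    return s.lower()

# A dict of functions acts as a tiny dispatch table:
dispatch = {'loud': shout, 'quiet': hush}
print(dispatch['loud']('hey'))   # HEY
print(dispatch['quiet']('HEY'))  # hey
```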

(2) This also makes callback and hook creation (and, thus, plugin creation for your Python scripts) so trivial that you might not even know you're doing it. - user13876
(4) Any langauge that doesn't have first class functions (or at least some good substitute, like C function pointers) it is a misfeature. It is completely unbearable to go without. - SingleNegationElimination
This might be a stupider question than I intend, but isn't this essentially a function pointer? Or do I have this mixed up? - inspectorG4dget
(1) @inspectorG4dget: It's certainly related to function pointers, in that it can accomplish all of the same purposes, but it's slightly more general, more powerful, and more intuitive. Particularly powerful when you combine it with the fact that functions can have attributes, or the fact that instances of certain classes can be called, but that starts to get arcane. - eswald
[+19] [2008-09-22 23:22:54] Alexander Kojevnikov

Ternary operator

>>> 'ham' if True else 'spam'
'ham'
>>> 'ham' if False else 'spam'
'spam'

This was added in 2.5, prior to that you could use:

>>> True and 'ham' or 'spam'
'ham'
>>> False and 'ham' or 'spam'
'spam'

However, if the values you want to work with would be considered false, there is a difference:

>>> [] if True else 'spam'
[]
>>> True and [] or 'spam'
'spam'

Prior to 2.5, "foo = bar and 'ham' or 'spam'" - a paid nerd
@a paid nerd - not quite: 1 == 1 and 0 or 3 => 3. The and short-circuits on the 0 (as it is equivalent to False - same deal with "" and None). - hbn
[+19] [2008-09-25 18:22:24] Torsten Marek

Assigning and deleting slices:

>>> a = range(10)
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a[:5] = [42]
>>> a
[42, 5, 6, 7, 8, 9]
>>> a[:1] = range(5)
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> del a[::2]
>>> a
[1, 3, 5, 7, 9]
>>> a[::2] = a[::-2]
>>> a
[9, 3, 5, 7, 1]

Note: when assigning to extended slices (s[start:stop:step]), the assigned iterable must have the same length as the slice.
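A quick sketch of that length rule (Python 3 syntax):

```python
a = list(range(10))

# A plain slice can absorb an iterable of any length,
# growing or shrinking the list as needed.
a[:5] = [42]
assert a == [42, 5, 6, 7, 8, 9]

# An extended slice (one with a step) must receive exactly as many
# items as it selects -- here a[::2] selects 3 positions.
try:
    a[::2] = [0]
except ValueError as e:
    print(e)  # attempt to assign sequence of size 1 to extended slice of size 3
```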

[+19] [2009-06-18 15:40:54] Markus

Not very hidden, but functions have attributes:

def doNothing():

doNothing.monkeys = 4
print doNothing.monkeys

(11) It's because functions can be though of as objects with __call__() function defined. - Tomasz Zieliński
(2) It's because functions can be thought of as descriptors with __call__() function defined. - Jeffrey Jose
Wait, does __call__() also have a __call__() function? - user142019
(2) I'll bet it's __call__() functions all the way down. - Chris Pickett
[+19] [2010-03-20 08:58:42] evilpie

Passing tuple to builtin functions

Many Python functions accept tuples, even where it doesn't look like they do. For example, to test whether your variable is a number, you could do:

if isinstance (number, float) or isinstance (number, int):  
   print "yaay"

But if you pass a tuple, this looks much cleaner:

if isinstance (number, (float, int)):  
   print "yaay"

cool, is this even documented? - Ponkadoodle
Yes, but nearly nobody knows about that. - evilpie
What other functions support this?? Good tip - adamJLev
Not sure about other functions, but this is supported in except (FooError, BarError) clauses. - Beni Cherniavsky-Paskin
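As a sketch of the except form mentioned in the comments (Python 3 syntax; parse is a made-up helper):

```python
def parse(value):
    try:
        return int(value)
    except (TypeError, ValueError):  # one handler for several exception types
        return None

print(parse("42"))    # 42
print(parse("ham"))   # None -- ValueError caught
print(parse(None))    # None -- TypeError caught
```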
[+19] [2010-05-26 20:25:02] Evgeny

Nice treatment of infinite recursion in dictionaries:

>>> a = {}
>>> b = {}
>>> a['b'] = b
>>> b['a'] = a
>>> print a
{'b': {'a': {...}}}

(1) That is just the 'nice treatment' of "print", it doesn't imply a nice treatment across the language. - haridsv
Both str() and repr() return the string you posted above. However, the ipython shell returns something a little different, a little more informative: {'b': {'a': <Recursion on dict with id=17830960>}} - Denilson Sá Maia
(1) @denilson: ipython uses the pprint module, which is available within the standard python shell. - rafak
(1) +1 for the first one that I had absolutely no idea about whatsoever. - asmeurer
[+18] [2010-07-14 08:14:12] Marcin Swiderski

reversing an iterable using negative step

>>> s = "Hello World"
>>> s[::-1]
'dlroW olleH'
>>> a = (1,2,3,4,5,6)
>>> a[::-1]
(6, 5, 4, 3, 2, 1)
>>> a = [5,4,3,2,1]
>>> a[::-1]
[1, 2, 3, 4, 5]

(2) Good to know, but minor point: that only works with sequences not iterables in general. I.e., (n for n in (1,2,3,4,5))[::-1] doesn't work. - Don O'Donnell
(3) That notation will actually create a new (reversed) instance of that sequence, which might be undesirable in some cases. For such cases, reversed() function is better, as it returns a reverse iterator instead of allocating a new sequence. - Denilson Sá Maia
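A small sketch of the difference (Python 3 syntax):

```python
a = [1, 2, 3]

b = a[::-1]      # allocates a brand-new reversed list
r = reversed(a)  # lazy iterator over the original; no copy is made

print(b)         # [3, 2, 1]
print(list(r))   # [3, 2, 1]

# reversed() needs a sequence (or an object with __reversed__);
# a generator expression is rejected:
try:
    reversed(n for n in (1, 2, 3))
except TypeError as e:
    print(e)
```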
[+18] [2010-12-21 16:24:18] Noufal Ibrahim

Not "hidden" but quite useful and not commonly used

Creating string joining functions quickly like so

 comma_join = ",".join
 semi_join  = ";".join

 print comma_join(["foo","bar","baz"])


Ability to create lists of strings more elegantly than the quote, comma mess.

l = ["item1", "item2", "item3"]

replaced by

l = "item1 item2 item3".split()

I think these both make the thing more long and obfuscated. - XTL
I don't know. I've found places where judicious use made things easier to read. - Noufal Ibrahim
[+18] [2011-03-09 19:37:41] Kimvais

Arguably, this is not a programming feature per se, but so useful that I'll post it nevertheless.

$ python -m http.server

...followed by $ wget http://<ipnumber>:8000/filename somewhere else.

If you are still running an older (2.x) version of Python:

$ python -m SimpleHTTPServer

You can also specify a port, e.g. python -m http.server 80 (so you can omit the port in the URL, provided you have root on the server side)

[+17] [2010-06-29 18:25:14] David Z

Multiple references to an iterator

You can create multiple references to the same iterator using list multiplication:

>>> i = (1,2,3,4,5,6,7,8,9,10) # or any iterable object
>>> iterators = [iter(i)] * 2
>>> iterators[0].next()
1
>>> iterators[1].next()
2
>>> iterators[0].next()
3

This can be used to group an iterable into chunks, for example, as in this example from the itertools documentation [1]

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return izip_longest(fillvalue=fillvalue, *args)

(1) You can do the opposite with itertools.tee -- take one iterator and return n that yield the same but do not share state. - Daenyth
(6) I actually don't see the difference to doing this one: "a = iter(i)" and subsequently "b = a" I also get multiple references to the same iterator -- there is no magic about that to me, no hidden feature it is just the normal reference copying stuff of the language. What is done, is creating the iterator, then (the list multiplication) copying this iterator several times. Thats all, its all in the language. - Juergen
(3) @Juergen: indeed, a = iter(i); b = a does the same thing and I could just as well have written that into the answer instead of [iter(i)] * n. Either way, there is no "magic" about it. That's no different from any of the other answers to this question - none of them are "magical", they are all in the language. What makes the features "hidden" is that many people don't realize they're possible, or don't realize interesting ways in which they can be used, until they are pointed out explicitly. - David Z
Well, for one thing, you can do it an arbitrary number of times with [iter(i)]*n. Also, it isn't necessarily well known (to many people's peril) that list*int creates referential, not actual, copies of the elements of the list. It's good to see that that is actually useful somehow. - asmeurer
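For reference, a Python 3 version of the same chunking recipe (izip_longest became itertools.zip_longest):

```python
from itertools import zip_longest

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n  # n references to ONE shared iterator
    return zip_longest(*args, fillvalue=fillvalue)

print(list(grouper(3, 'ABCDEFG', 'x')))
# [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
```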
[+17] [2010-07-18 20:59:08] Piotr Duda

As of Python 3.1 (and 2.7), dictionary and set comprehensions are supported:

{ a:a for a in range(10) }
{ a for a in range(10) }

there is no such thing as tuples comprehension, and this is not a syntax for dict comprehensions. - SilentGhost
Edited the typo with dict comprehensions. - Piotr Duda
uh oh, looks like I have to upgrade my version of python so I can play with dict and set comprehensions - Carson Myers
for dictionaries that way is better but dict( (a,a) for a in range(10) ) works too and your error is probably due to remembering this form - Dan D.
I cannot wait to use this feature. - asmeurer
[+15] [2009-11-06 13:18:00] u0b34a0f6ae

Python can understand any kind of unicode digits [1], not just the ASCII kind:

>>> s = u'١٠٥٨٥'
>>> s
u'\u0661\u0660\u0665\u0668\u0665'
>>> print s
١٠٥٨٥
>>> int(s)
10585
>>> float(s)
10585.0

[+14] [2008-09-21 22:02:39] Armin Ronacher

__slots__ is a nice way to save memory, but it's very hard to get a dict of the values of the object. Imagine the following object:

class Point(object):
    __slots__ = ('x', 'y')

Now that object obviously has two attributes. Now we can create an instance of it and build a dict of it this way:

>>> p = Point()
>>> p.x = 3
>>> p.y = 5
>>> dict((k, getattr(p, k)) for k in p.__slots__)
{'y': 5, 'x': 3}

This however won't work if Point was subclassed and new slots were added. However, Python automatically implements __reduce_ex__ to help the copy module. This can be abused to get a dict of values:

>>> p.__reduce_ex__(2)[2][1]
{'y': 5, 'x': 3}

Oh wow, I might actually have good use for this! - user13876
Beware that __reduce_ex__ can be overridden in subclasses, and since it's also used for pickling, it often is. (If you're making data containers, you should think of using it too! or it's younger siblings __getstate__ and __setstate__.) - Ken Arnold
(2) You can still do object.__reduce_ex__(p, 2)[2][1] then. - Armin Ronacher
[+14] [2009-12-30 23:35:24] Xavier Martinez-Hidalgo


The itertools module is often overlooked. The following example uses itertools.chain() to flatten a list:

>>> from itertools import *
>>> l = [[1, 2], [3, 4]]
>>> list(chain(*l))
[1, 2, 3, 4]

See the itertools documentation for more applications.
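chain.from_iterable does the same job without unpacking the outer list, which also lets it work on lazy iterables (Python 3 syntax):

```python
from itertools import chain

l = [[1, 2], [3, 4]]
print(list(chain.from_iterable(l)))     # [1, 2, 3, 4]

# Unlike chain(*l), this never materializes the outer iterable:
lazy = ([n, n] for n in range(3))
print(list(chain.from_iterable(lazy)))  # [0, 0, 1, 1, 2, 2]
```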

[+14] [2010-02-13 21:12:21] Thomas Wouters

Manipulating sys.modules

You can manipulate the modules cache directly, making modules available or unavailable as you wish:

>>> import sys
>>> import ham
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named ham

# Make the 'ham' module available -- as a non-module object even!
>>> sys.modules['ham'] = 'ham, eggs, saussages and spam.'
>>> import ham
>>> ham
'ham, eggs, saussages and spam.'

# Now remove it again.
>>> sys.modules['ham'] = None
>>> import ham
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named ham

This works even for modules that are available, and to some extent for modules that already are imported:

>>> import os
# Stop future imports of 'os'.
>>> sys.modules['os'] = None
>>> import os
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named os
# Our old imported module is still available.
>>> os
<module 'os' from '/usr/lib/python2.5/os.pyc'>

As the last line shows, changing sys.modules only affects future import statements, not past ones, so if you want to affect other modules it's important to make these changes before you give them a chance to try and import the modules -- so before you import them, typically. None is a special value in sys.modules, used for negative caching (indicating the module was not found the first time, so there's no point in looking again.) Any other value will be the result of the import operation -- even when it is not a module object. You can use this to replace modules with objects that behave exactly like you want. Deleting the entry from sys.modules entirely causes the next import to do a normal search for the module, even if it was already imported before.

And you can do sys.modules['my_module'] = MyClass(), e.g. to implement a "module" with read-only attributes, if MyClass has the right hooks. - warvariuc
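A minimal sketch of the replacement trick from the comments, using a made-up module name (Python 3 syntax):

```python
import sys

class FakeModule(object):
    """A made-up stand-in; any object can live in sys.modules."""
    answer = 42

# Hypothetical module name -- nothing by this name exists on disk.
sys.modules['fake_module'] = FakeModule()

import fake_module               # the import machinery finds our object first
print(fake_module.answer)        # 42

del sys.modules['fake_module']   # the next import would search for a real module
```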
[+14] [2010-07-14 02:06:20] John D. Cook

You can ask any object which module it came from by looking at its __module__ property. This is useful, for example, if you're experimenting at the command line and have imported a lot of things.

Along the same lines, you can ask a module where it came from by looking at its __file__ property. This is useful when debugging path issues.

[+13] [2008-09-20 20:09:34] daniel

Some of the builtin favorites, map(), reduce(), and filter(). All extremely fast and powerful.

Be careful of reduce(), If you're not careful, you can write really slow reductions. - S.Lott
And be careful of map(), it's deprecated in 2.6 and removed in 3.0. - user13876
(1) list comprehensions can achieve everything you can do with any of those functions. - recursive
(2) It can also obfuscate Python code if you abuse them - juanjux
(4) @sil: map still exists in Python 3, as does filter, and reduce exists as functools.reduce. - u0b34a0f6ae
(8) @recursive: I defy you to produce a list comprehension/generator expression that performs the action of reduce() - SingleNegationElimination
(1) The correct statement is "reduce() can achieve everything you can do with map(), filter(), or list comprehensions." - Kragen Javier Sitaker
[+13] [2009-06-09 03:14:25] Ken Arnold

One word: IPython [1]

Tab introspection, pretty-printing, %debug, history management, pylab, ... well worth the time to learn well.


That's not built in python core is it? - Joshua Partogi
You're right, it's not. And probably with good reason. But I recommend it without reservation to any Python programmer. (However, I heartily recommend turning off autocall. When it does something you don't expect, it can be very hard to realize why.) - Ken Arnold
I also love IPython. I've tried BPython, but it was too slow for me (although I agree it has some cool features). - Denilson Sá Maia
[+13] [2009-12-30 23:29:34] Xavier Martinez-Hidalgo

Guessing integer base

>>> int('10', 0)
10
>>> int('0x10', 0)
16
>>> int('010', 0)  # does not work on Python 3.x
8
>>> int('0o10', 0)  # Python >=2.6 and Python 3.x
8
>>> int('0b10', 0)  # Python >=2.6 and Python 3.x
2

[+12] [2008-09-22 23:56:40] Dan Lenski

You can build up a dictionary from a set of length-2 sequences. Extremely handy when you have a list of keys and a list of values.

>>> dict([ ('foo','bar'),('a',1),('b',2) ])
{'a': 1, 'b': 2, 'foo': 'bar'}

>>> names = ['Bob', 'Marie', 'Alice']
>>> ages = [23, 27, 36]
>>> dict(zip(names, ages))
{'Alice': 36, 'Bob': 23, 'Marie': 27}

I replaced a loop that filled a dict key-by-key from self.VDESC.split() with this one-liner: dict(zip(self.VDESC.split(), _data)). Thanks for the handy tip. - Gökhan Sever
(1) Also helps in Python2.x where there is no dict comprehension syntax. So you can write dict((x, x**2) for x in range(10)). - Marian
[+12] [2009-10-27 15:49:21] Denis Otkidach

Extending properties (defined as descriptor) in subclasses

Sometimes it's useful to extend (modify) the value "returned" by a descriptor in a subclass. It can be easily done with super():

class A(object):
    @property
    def prop(self):
        return {'a': 1}

class B(A):
    @property
    def prop(self):
        return dict(super(B, self).prop, b=2)

Store this in a file and run it with python -i (another hidden feature: the -i option executes the script and then lets you continue in interactive mode):

>>> B().prop
{'a': 1, 'b': 2}

+1 properties! Cant get enough of them. - Jeffrey Jose
[+11] [2008-10-15 18:37:26] Martin Beckett

A slight misfeature of python. The normal fast way to join a list of strings together is:

",".join(list_of_strings)
(20) there are very good reasons that this is a method of string instead of a method of list. this allows the same function to join any iterable, instead of duplicating join for every iterable type. - Christian Oudard
Yes I know why it does - but would anyone discover this if they hadn't been told? - Martin Beckett
Discover? It's pretty hard to remember too, and I've used python since before there were methods om strings. - kaleissin
(10) If this is too ugly for you to cope with, you can write the very same thing as str.join('',list_of_strings) but other pythonistas may scorn you for trying to write java. - SingleNegationElimination
@TokenMacGuy: the reason why ''.join([...]) is preferred is because many people often mixes up the order of the arguments in string.join(..., ...); by putting ''.join() things become clearer - Lie Ryan
I'm fairly certain that the only reason most pythonistas use "".join(iterable) over str.join("",iterable) is because it's 4 characters shorter. - SingleNegationElimination
@TokenMacGuy No. And what is wrong with having split and join in the str-class? It IS easy to remember and btw. this is an example of 'Although practicality beats purity.' - Joschua
[+11] [2010-01-13 22:58:01] Chinmay Kanchi

Creating enums

In Python, you can do this to quickly create an enumeration:

>>> FOO, BAR, BAZ = range(3)
>>> FOO
0

But the "enums" don't have to have integer values. You can even do this:

class Colors(object):
    RED, GREEN, BLUE, YELLOW = (255,0,0), (0,255,0), (0,0,255), (255,255,0)

#now Colors.RED is a 3-tuple holding the 24-bit RGB
#value (8 bits per channel) for saturated red

[+11] [2010-12-28 06:18:56] asmeurer

The Object Data Model

You can override any operator in the language for your own classes. See this page [1] for a complete list. Some examples:

  • You can override any operator (* + - / // % ^ == < > <= >= . etc.). All this is done by overriding __mul__, __add__, etc. in your objects. You can even override things like __rmul__ to handle separately your_object*something_else and something_else*your_object. . is attribute access (a.b), and can be overridden to handle any arbitrary b by using __getattr__. Also included here is a(…) using __call__.

  • You can create your own slice syntax (a[stuff]), which can be very complicated and quite different from the standard syntax used in lists (numpy has a good example of the power of this in their arrays), using any combination of ,, :, and ... that you like, via slice objects.

  • Handle specially what happens with many keywords in the language. Included are del, in, import, and not.

  • Handle what happens when many built in functions are called with your object. The standard __int__, __str__, etc. go here, but so do __len__, __reversed__, __abs__, and the three argument __pow__ (for modular exponentiation).


For in you have to override __contains__. - asmeurer
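A toy sketch of a few of these hooks (Python 3 syntax; Bag is a made-up class, not a standard one):

```python
class Bag(object):
    """Toy container sketching a handful of data-model hooks."""
    def __init__(self, items):
        self.items = list(items)
    def __len__(self):           # len(bag)
        return len(self.items)
    def __contains__(self, x):   # x in bag
        return x in self.items
    def __add__(self, other):    # bag + bag
        return Bag(self.items + other.items)
    def __rmul__(self, n):       # n * bag -- the reflected operand hook
        return Bag(self.items * n)

b = Bag([1, 2]) + Bag([3])
print(len(b))      # 3
print(2 in b)      # True
print(len(2 * b))  # 6
```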
[+11] [2011-03-05 08:26:14] dan_waterworth
[+10] [2008-09-23 10:34:35] csl

"Unpacking" to function parameters

def foo(a, b, c):
        print a, b, c

bar = (3, 14, 15)
foo(*bar)

When executed, this prints:

3 14 15

(1) This is the canonical alternative to the old "apply()" built-in. - Jim Dennis
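The * unpacking has a ** sibling for keyword arguments; a quick sketch (Python 3 syntax):

```python
def foo(a, b, c):
    print(a, b, c)

bar = (3, 14, 15)
baz = {'a': 3, 'b': 14, 'c': 15}

foo(*bar)    # 3 14 15 -- positional unpacking
foo(**baz)   # 3 14 15 -- keyword unpacking
foo(*bar[:1], **{'b': 14, 'c': 15})  # 3 14 15 -- mixing both
```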
[+10] [2009-01-02 18:48:52] Christian Oudard

The reversed() builtin. It makes iterating much cleaner in many cases.

quick example:

for i in reversed([1, 2, 3]):
    print i

This prints 3, then 2, then 1.

Note, however, that reversed() does not work with arbitrary iterators such as generator expressions or lines in a file; it requires a sequence (or an object implementing __reversed__).

[+10] [2009-01-02 19:10:54] sprintf

The Zen of Python

>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!

Hidden? OTOH, This is one of the selling points of Python. - Jeffrey Jose
I like the syntax coloring, esp. for Dutch. - asmeurer
Duplicate of a previous answer - e-satis
Duplicate of a previous answer - warvariuc
[+10] [2011-02-10 15:14:31] Foo Bah

Changing function label at run time:

>>> class foo:
...   def normal_call(self): print "normal_call"
...   def call(self): 
...     print "first_call"
... = self.normal_call

>>> y = foo()
first_call
normal_call
[+10] [2011-07-04 18:11:57] Roman Bodnarchuk

string-escape and unicode-escape encodings

Let's say you have a string from an outside source that contains \n, \t and so on as literal characters. How do you transform them into real newlines or tabs? Just decode the string using the string-escape encoding!

>>> s = 'Stack\\toverflow'
>>> print s
Stack\toverflow
>>> print s.decode('string-escape')
Stack   overflow

Another problem. You have a normal string with unicode escapes like \u0441 in it. How do you make it work? Just decode the string using the unicode-escape encoding!

>>> s = '\u041f\u0440\u0438\u0432\u0456\u0442, \u0441\u0432\u0456\u0442!'
>>> print s
\u041f\u0440\u0438\u0432\u0456\u0442, \u0441\u0432\u0456\u0442!
>>> print unicode(s)
\u041f\u0440\u0438\u0432\u0456\u0442, \u0441\u0432\u0456\u0442!
>>> print unicode(s, 'unicode-escape')
Привіт, світ!

[+9] [2008-09-22 06:32:00] Paddy3118

unzip un-needed in Python [1]

Someone blogged about Python not having an unzip function to go with zip(). unzip is straight-forward to calculate because:

>>> t1 = (0,1,2,3)
>>> t2 = (7,6,5,4)
>>> [t1,t2] == zip(*zip(t1,t2))
True

On reflection though, I'd rather have an explicit unzip().


(8) def unzip(x): return zip(*x) Done! - bukzor
The solution is slightly subtle (I can understand the point of view of anyone who asks for it), but I can also see why it would be redundant - inspectorG4dget
+1. I was going to add this, but it seems I was beat to it. - asmeurer
[+9] [2009-03-02 18:16:53] lprsd

Creating dictionary of two sequences that have related data

In [15]: t1 = (1, 2, 3)

In [16]: t2 = (4, 5, 6)

In [17]: dict (zip(t1,t2))
Out[17]: {1: 4, 2: 5, 3: 6}

[+9] [2010-05-24 16:38:10] L̲̳o̲̳̳n̲̳̳g̲̳̳p̲̳o̲̳̳k̲̳̳e̲̳̳

Top Secret Attributes

>>> class A(object): pass
>>> a = A()
>>> setattr(a, "can't touch this", 123)
>>> dir(a)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', "can't touch this"]
>>> a.can't touch this # duh
  File "<stdin>", line 1
    a.can't touch this
SyntaxError: EOL while scanning string literal
>>> getattr(a, "can't touch this")
>>> setattr(a, "__class__.__name__", ":O")
>>> a.__class__.__name__
>>> getattr(a, "__class__.__name__")

(6) AHHHH! Bad, bad, bad! - asmeurer
[+9] [2011-01-07 22:00:46] Apalala

namedtuple is a tuple

>>> node = namedtuple('node', "a b")
>>> node(1,2) + node(5,6)
(1, 2, 5, 6)
>>> (node(1,2), node(5,6))
(node(a=1, b=2), node(a=5, b=6))

Some more experiments to respond to comments:

>>> from collections import namedtuple
>>> import operator
>>> mytuple = namedtuple('A', "a b")
>>> yourtuple = namedtuple('Z', "x y")
>>> mytuple(1,2) + yourtuple(5,6)
(1, 2, 5, 6)
>>> q = [mytuple(1,2), yourtuple(5,6)]
>>> q
[A(a=1, b=2), Z(x=5, y=6)]
>>> reduce(operator.__add__, q)
(1, 2, 5, 6)

So, namedtuple is an interesting subtype of tuple.

At this point, you've lost all context. If you don't need the context, or the data isn't structured in a particular way, why a tuple at all? Surely you're just using it as a list? - Samir Talwar
@Samir Talwar The question/answer is about hidden features. Did you know about this one? I'm not defending one design or the other, but just pointing out what is there. When I first tried to use named tuples, I thought they woulnd't match as tuples do, but... Let me expand the example to show you. - Apalala
@Apalala: I had assumed it, but never checked. You're right: it is an interesting and hidden feature. I guess useful is a different thing. - Samir Talwar
(5) Also fun is that you can feed the result of a namedtuple call directly into a class definition, as in class rectangle(namedtuple("rectangle", "width height")): in order to add custom methods - Ben Blank
@Samir Talwar I use namedtuples as the representation for parse trees, and their behavior was useful in merging siblings so they looked more like lists. Imagine the typical grammar productions for a list... - Apalala
@Apalala: OK, you've sold me. Can't say it's how I would approach the problem, but the feature is clearly useful. - Samir Talwar
@Ben Blank. I didn't understand your comment about feeding nametuples to classes. - Apalala
@Apalala — Here's an example: - Ben Blank
@Ben Blank. Incredible! It merits its own answer. - Apalala
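Ben Blank's trick from the comments can be sketched like so (Python 3 syntax; Rectangle is an illustrative class):

```python
from collections import namedtuple

class Rectangle(namedtuple("Rectangle", "width height")):
    """An immutable tuple plus custom methods."""
    @property
    def area(self):
        return self.width * self.height

r = Rectangle(3, 4)
print(r)       # Rectangle(width=3, height=4)
print(r.area)  # 12
print(r + r)   # (3, 4, 3, 4) -- addition still behaves like a plain tuple
```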
[+9] [2011-01-24 06:13:14] FernandoEscher

Dynamically added attributes

This might be useful if you think about adding some attributes to your classes just by calling them. This can be done by overriding the __getattribute__ [1] member function, which is called whenever the dot operator is used. So, let's see a dummy class for example:

class Dummy(object):
    def __getattribute__(self, name):
        f = lambda: 'Hello with %s'%name
        return f

When you instantiate a Dummy object and do a method call you’ll get the following:

>>> d = Dummy()
>>> d.b()
'Hello with b'

Finally, you can even set the attribute to your class so it can be dynamically defined. This could be useful if you work with Python web frameworks and want to do queries by parsing the attribute's name.

I have a gist [2] at github with this simple code and its equivalent on Ruby made by a friend.

Take care!


[+9] [2011-07-09 02:09:45] johnsyweb

Flattening a list [1] with sum() [2].

The sum() [3] built-in function can be used to __add__ [4] list [5]s together, providing a handy way to flatten a list [6] of list [7]s:

Python 2.7.1 (r271:86832, May 27 2011, 21:41:45) 
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> l = [[1, 2, 3], [4, 5], [6], [7, 8, 9]]
>>> sum(l, [])
[1, 2, 3, 4, 5, 6, 7, 8, 9]

[+9] [2011-07-18 15:39:56] cerberos

The Borg Pattern

This is a killer from Alex Martelli [1]. All instances of Borg share state. This removes the need to employ the singleton pattern (instances don't matter when state is shared) and is rather elegant (but is more complicated with new-style classes).

The value of foo can be reassigned in any instance and all will be updated, you can even reassign the entire dict. Borg is the perfect name, read more here [2].

class Borg:
    __shared_state = {'foo': 'bar'}
    def __init__(self):
        self.__dict__ = self.__shared_state
    # rest of your class here

This is perfect for sharing an eventlet.GreenPool to control concurrency.
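A quick demonstration of the shared state (Python 3 syntax):

```python
class Borg(object):
    __shared_state = {'foo': 'bar'}
    def __init__(self):
        # Every instance's attribute dict IS the class-level shared dict.
        self.__dict__ = self.__shared_state

a, b = Borg(), Borg()
a.foo = 'spam'   # rebinding in one instance...
print(b.foo)     # 'spam' -- ...shows up in all of them
print(a is b)    # False: distinct instances, shared state
```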


[+8] [2009-01-12 11:38:56] Tom Viner

pdb — The Python Debugger

As a programmer, one of the first things that you need for serious program development is a debugger. Python has one built-in which is available as a module called pdb (for "Python DeBugger", naturally!).

[+8] [2010-04-06 00:58:22] haridsv

threading.enumerate() gives access to all Thread objects in the system and sys._current_frames() returns the current stack frames of all threads in the system, so combine these two and you get Java style stack dumps:

import sys, threading, traceback

def dumpstacks(signal, frame):
    id2name = dict([(th.ident, for th in threading.enumerate()])
    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# Thread: %s(%d)" % (id2name[threadId], threadId))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    print "\n".join(code)

import signal
signal.signal(signal.SIGQUIT, dumpstacks)

Do this at the beginning of a multi-threaded python program and you get access to current state of threads at any time by sending a SIGQUIT. You may also choose signal.SIGUSR1 or signal.SIGUSR2.

See [1]


[+7] [2008-11-28 20:34:37] Steen

...that dict.get() has a default value [1] of None, thereby avoiding KeyErrors:

In [1]: test = { 1 : 'a' }

In [2]: test[2]
<type 'exceptions.KeyError'>              Traceback (most recent call last)

<ipython console> in <module>()

<type 'exceptions.KeyError'>: 2

In [3]: test.get( 2 )

In [4]: test.get( 1 )
Out[4]: 'a'

In [5]: test.get( 2 ) == None
Out[5]: True

and even to specify this 'at the scene':

In [6]: test.get( 2, 'Some' ) == 'Some'
Out[6]: True

And you can use setdefault() to have a value set and returned if it doesn't exist:

>>> a = {}
>>> b = a.setdefault('foo', 'bar')
>>> a
{'foo': 'bar'}
>>> b
'bar'
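setdefault also enables the classic grouping idiom; a short sketch (Python 3 syntax; the word list is made up):

```python
words = ["ham", "spam", "eggs", "saussages"]

by_len = {}
for w in words:
    # setdefault returns the existing list for this key,
    # or sets a fresh [] and returns it -- no KeyError either way.
    by_len.setdefault(len(w), []).append(w)

print(by_len)  # {3: ['ham'], 4: ['spam', 'eggs'], 9: ['saussages']}
```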

[+7] [2009-03-10 22:47:20] Pratik Deoghare

inspect [1] module is also a cool feature.


[+7] [2009-06-09 03:27:08] Ken Arnold

Reloading modules enables a "live-coding" style. But class instances don't update. Here's why, and how to get around it. Remember, everything, yes, everything is an object.

>>> from a_package import a_module
>>> cls = a_module.SomeClass
>>> obj = cls()
>>> obj.method()
(old method output)

Now you change the method in a_module's source file and want to update your object.

>>> reload(a_module)
>>> a_module.SomeClass is cls
False # Because it just got freshly created by reload.
>>> obj.method()
(old method output)

Here's one way to update it (but consider it running with scissors):

>>> obj.__class__ is cls
True # it's the old class object
>>> obj.__class__ = a_module.SomeClass # pick up the new class
>>> obj.method()
(new method output)

This is "running with scissors" because the object's internal state may be different than what the new class expects. This works for really simple cases, but beyond that, pickle is your friend. It's still helpful to understand why this works, though.

(1) +1 for suggesting pickle (or cPickle). It was really helpful for me, some weeks ago. - Denilson Sá Maia
[+7] [2010-08-20 21:38:24] Denilson Sá Maia

Backslashes inside raw strings can still escape quotes. See this:

>>> print repr(r"aaa\"bbb")

Note that both the backslash and the double-quote are present in the final string.

As consequence, you can't end a raw string with a backslash:

>>> print repr(r"C:\")
SyntaxError: EOL while scanning string literal
>>> print repr(r"C:\"")

This happens because raw strings were implemented to help writing regular expressions, and not to write Windows paths. Read a long discussion about this at Gotcha — backslashes in Windows filenames [1].


(2) Note that the backslash is still part of the string afterwards... So one might not regard this as regular escaping. - huin
You're probably better off just using single quotes ' for the outer string. - asmeurer
Or just use (forward) slashes, as the Windows API will translate them automatically, then you can finally forget about DOS-style paths. (Though you must use backslashes for "\\server\share\path\file" style resources) - Terence Simpson
[+7] [2011-01-23 17:37:30] Brendon Crawford

Operators can be called as functions:

from operator import add
print reduce(add, [1,2,3,4,5,6])

? what did you think operators are? - Ant
sorry, i dont get your point..what do you think that we think operators are? - Ant
@Ant, if you were already aware of operators being functions, you can disregard this tip. Not all languages implement operators as functions, so a person coming from another language might not have known this. - Brendon Crawford
[+7] [2011-06-26 20:04:13] Elisha

infinite recursion in list

>>> a = [1,2]
>>> a.append(a)
>>> a
[1, 2, [...]]
>>> a[2]
[1, 2, [...]]
>>> a[2][2][2][2][2][2][2][2][2] == a
True

i don't think it's a Python feature. nor it's hidden. where this can be used? - warvariuc
[+6] [2008-09-19 13:19:13] Paweł Hajdan

Ability to substitute even things like file deletion, file opening etc. - direct manipulation of language library. This is a huge advantage when testing. You don't have to wrap everything in complicated containers. Just substitute a function/method and go. This is also called monkey-patching.

(1) Creating a test harness which provides classes that have the same interfaces as the objects which would be manipulated by the code under test (the subjects of our testing) is referred to as "Mocking" (these are called "Mock Classes" and their instances are "Mock Objects"). - Jim Dennis
[+6] [2008-09-21 22:12:38] Armin Ronacher

Builtin methods or functions don't implement the descriptor protocol which makes it impossible to do stuff like this:

>>> class C(object):
...  id = id
>>> C().id()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: id() takes exactly one argument (0 given)

However you can create a small bind descriptor that makes this possible:

>>> from types import MethodType
>>> class bind(object):
...  def __init__(self, callable):
...   self.callable = callable
...  def __get__(self, obj, type=None):
...   if obj is None:
...    return self
...   return MethodType(self.callable, obj, type)
>>> class C(object):
...  id = bind(id)
>>> C().id()

(1) It's simpler and easier to do this as a property, in this case: class C(object): id = property(id) - Pi Delport
lambda is also a good alternative: class C(object): id = lambda s, *a, **kw: id(*a, **kw); and a better version of bind: def bind(callable): return lambda s, *a, **kw: callable(*a, **kw) - Lie Ryan
[+6] [2008-10-12 23:19:49] ironfroggy

Nested Function Parameter Re-binding

def create_printers(n):
    for i in xrange(n):
        def printer(i=i): # Doesn't work without the i=i
            print i
        yield printer

it works without it, but differently. :-) - u0b34a0f6ae
No, it doesn't work without it. Omit the i=i and see the difference between map(apply, create_printers(10)) and map(apply, list(apply_printers(10))), where converting to a list consumes the generator and now all ten printer functions have i bound to the same value: 9, where calling them one at a time calls them before the next iteration of the generator changes the int i is bound to in the outer scope. - ironfroggy
(1) I think what is saying is that when you omit the i=i the i in the printer function references the i from the for loop rather than the local i that is created when a new printer function is created with the i=i keyword arg. So it still does work (it yields functions, each with access to a closure) but it doesn't work in the way you'd expect without explicitly creating a local variable. - Sean Vieira
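A Python 3 sketch of both behaviours side by side (returning instead of printing, so the results are easy to compare):

```python
def create_printers(n):
    for i in range(n):
        def printer(i=i):      # freeze the current i as a default value
            return i
        yield printer

# with i=i, each function keeps the value of i from its own iteration
assert [f() for f in list(create_printers(3))] == [0, 1, 2]

# without it, every closure sees the *final* value of the loop variable
fns = [lambda: i for i in range(3)]
assert [f() for f in fns] == [2, 2, 2]
```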
[+6] [2009-01-01 16:14:19] Benjamin Peterson

You can override the MRO (method resolution order) of a class with a metaclass

>>> class A(object):
...     def a_method(self):
...         print("A")
>>> class B(object):
...     def b_method(self):
...         print("B")
>>> class MROMagicMeta(type):
...     def mro(cls):
...         return (cls, B, object)
>>> class C(A, metaclass=MROMagicMeta):
...     def c_method(self):
...         print("C")
>>> cls = C()
>>> cls.c_method()
>>> cls.a_method()
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
AttributeError: 'C' object has no attribute 'a_method'
>>> cls.b_method()
>>> type(cls).__bases__
(<class '__main__.A'>,)
>>> type(cls).__mro__
(<class '__main__.C'>, <class '__main__.B'>, <class 'object'>)

It's probably hidden for a good reason. :)

(2) That's playing with fire, and asking for ethernal damnation. Better have good reason ;) - gorsky
Does not work with python 2.x. Use __metaclass__ = MROMagicMeta instead. - Alexander Artemenko
[+6] [2009-02-25 10:29:19] Mykola Kharechko

Objects for small integers (-5 .. 256) are never created twice — CPython caches them:

>>> a1 = -5; b1 = 256
>>> a2 = -5; b2 = 256
>>> id(a1) == id(a2), id(b1) == id(b2)
(True, True)
>>> c1 = -6; d1 = 257
>>> c2 = -6; d2 = 257
>>> id(c1) == id(c2), id(d1) == id(d2)
(False, False)

Edit: list objects are never destroyed (only the objects inside them are). CPython keeps an array of up to 80 freed list objects. When you destroy a list, Python puts it into that array, and when you create a new list, Python takes the most recently freed one back out:

>>> a = [1,2,3]; a_id = id(a)
>>> b = [1,2,3]; b_id = id(b)
>>> del a; del b
>>> c = [1,2,3]; id(c) == b_id
True
>>> d = [1,2,3]; id(d) == a_id
True

(5) This feature is implementation dependent, so you shouldn't rely on it. - Denis Otkidach
As Denis said, do not rely on this behavior. It doesn't work, for example, in PyPy, and your code will break miserably in that if you try to use it. - asmeurer
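For illustration only, a CPython-specific sketch; constructing the ints from strings sidesteps compile-time constant folding, and as the comments warn, none of this should be relied upon in real code:

```python
# CPython preallocates the ints -5..256 as singletons.
small_a = int("256")
small_b = int("256")
assert small_a is small_b        # cached: the very same object (CPython)

big_a = int("257")
big_b = int("257")
assert big_a == big_b            # equal values...
# ...but typically two distinct objects in CPython; other implementations
# (e.g. PyPy) are free to do something else entirely.
```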
[+6] [2009-06-18 15:45:23] Markus

You can decorate functions with classes - replacing the function with a class instance:

class countCalls(object):
    """ decorator replaces a function with a "countCalls" instance
    which behaves like the original function, but keeps track of calls

    >>> @countCalls
    ... def doNothing():
    ...     pass
    >>> doNothing()
    >>> doNothing()
    >>> print doNothing.timesCalled
    2
    """
    def __init__ (self, functionToTrack):
        self.functionToTrack = functionToTrack
        self.timesCalled = 0
    def __call__ (self, *args, **kwargs):
        self.timesCalled += 1
        return self.functionToTrack(*args, **kwargs)

[+6] [2009-12-03 04:23:14] grayger

Manipulating Recursion Limit

Getting or setting the maximum depth of recursion with sys.getrecursionlimit() & sys.setrecursionlimit().

Lowering the limit makes runaway recursion fail faster; raising it allows deeper recursion, but can crash the interpreter if the C stack actually overflows.

[+6] [2010-07-16 13:46:10] Daniel Hepper

Slices & Mutability

Copying lists

>>> x = [1,2,3]
>>> y = x[:]
>>> y.pop()
3
>>> y
[1, 2]
>>> x
[1, 2, 3]

Replacing lists

>>> x = [1,2,3]
>>> y = x
>>> y[:] = [4,5,6]
>>> x
[4, 5, 6]
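A short sketch contrasting in-place slice assignment with plain rebinding (alias and alias2 are illustrative names):

```python
x = [1, 2, 3]
alias = x
x[:] = [4, 5, 6]     # in-place replacement: every reference sees the change
assert alias == [4, 5, 6]

y = [1, 2, 3]
alias2 = y
y = [4, 5, 6]        # rebinding: the name now points elsewhere, alias2 is untouched
assert alias2 == [1, 2, 3]
```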

[+6] [2010-07-19 12:01:21] Martin

Python 2.x ignores commas if found after the last element of the sequence:

>>> a_tuple_for_instance = (0,1,2,3,)
>>> another_tuple = (0,1,2,3)
>>> a_tuple_for_instance == another_tuple
True

A trailing comma causes a single parenthesized element to be treated as a sequence:

>>> a_tuple_with_one_element = (8,)

(2) Python3 ignores them as well. - Alexander Artemenko
[+6] [2010-07-22 20:03:20] hughdbrown

Slices as lvalues. This Sieve of Eratosthenes produces a list that has either the prime number or 0. Elements are 0'd out with the slice assignment in the loop.

def eras(n):
    last = n + 1
    sieve = [0,0] + list(range(2, last))
    sqn = int(round(n ** 0.5))
    it = (i for i in xrange(2, sqn + 1) if sieve[i])
    for i in it:
        sieve[i*i:last:i] = [0] * (n//i - i + 1)
    return filter(None, sieve)

To work, the slice on the left must be assigned a list on the right of the same length.
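More precisely, only extended slices (those with a step, as in the sieve) require a right-hand side of exactly matching length; contiguous slices accept any length. A Python 3 sketch:

```python
nums = list(range(10))

# a contiguous slice may be replaced by a list of any length
nums[0:2] = [99]
assert nums == [99, 2, 3, 4, 5, 6, 7, 8, 9]

# an *extended* slice (with a step) must receive exactly as many elements
nums = list(range(10))
nums[::2] = [0] * 5
assert nums == [0, 1, 0, 3, 0, 5, 0, 7, 0, 9]

try:
    nums[::2] = [0]          # wrong length
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```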

[+6] [2011-12-29 16:55:14] yoav.aviram

Rounding Integers: Python has the function round, which returns numbers of type float:

 >>> print round(1123.456789, 4)
 1123.4568
 >>> print round(1123.456789, 2)
 1123.46
 >>> print round(1123.456789, 0)
 1123.0

This function has a wonderful magic property:

 >>> print round(1123.456789, -1)
 1120.0
 >>> print round(1123.456789, -2)
 1100.0

If you need an integer as a result use int to convert type:

 >>> print int(round(1123.456789, -2))
 1100
 >>> print int(round(8359980, -2))
 8360000

Thank you Gregor [1].
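Note that Python 3 changes the details: round with one argument returns an int, and ties round to the nearest even number (banker's rounding). A sketch:

```python
# Negative ndigits still rounds to tens, hundreds, ...
assert round(1123.456789, -2) == 1100.0

# One-argument round returns an int in Python 3...
assert isinstance(round(2.5), int)
assert isinstance(round(2.5, 0), float)   # ...but ndigits preserves the type

# ...and ties go to the nearest even number, not always up
assert round(2.5) == 2
assert round(3.5) == 4
```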


[+5] [2008-09-19 11:53:55] Oko

List comprehensions

list comprehensions [1]

Compare the more traditional way (without list comprehension):

foo = []
for x in xrange(10):
  if x % 2 == 0:
      foo.append(x)

with:

foo = [x for x in xrange(10) if x % 2 == 0]

(5) In what way is list comprehensions a hidden feature of Python ? - Eli Bendersky
(1) They are probably "hidden" for former C & Java programmers who haven't seen such features before, don't think to look for it and ignore it if they see it in a tutorial. OTOH a Haskell programmer will notice it immediately. - finnw
(2) The question does ask for "an example and short description of the feature, not just a link to documentation". Any chance of adding one? - David Webb
List comprehensions were implemented by Greg Ewing, who was a postdoc at a department where they taught functional programming in a first-year paper. - ConcernedOfTunbridgeWells
If this was a hidden feature of python there would have been 40% more lines of code written in python today. - Vasil
It took me ages to find list comprehensions in Python. Can't live without them now, of course... - Chinmay Kanchi
+1 I think that nested list comprehensions should also be mentioned:… - inspectorG4dget
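As the last comment suggests, comprehensions also nest; a small Python 3 sketch of flattening and transposing (matrix is a made-up example):

```python
matrix = [[1, 2, 3],
          [4, 5, 6]]

# flatten: the for-clauses read left-to-right, like nested loops
flat = [x for row in matrix for x in row]
assert flat == [1, 2, 3, 4, 5, 6]

# transpose: a comprehension inside a comprehension
transposed = [[row[i] for row in matrix] for i in range(3)]
assert transposed == [[1, 4], [2, 5], [3, 6]]
```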
[+5] [2008-09-19 15:55:40] pi.

Too lazy to initialize every field in a dictionary? No problem:

In Python > 2.3:

from collections import defaultdict

In Python <= 2.3:

def defaultdict(type_):
    class Dict(dict):
        def __getitem__(self, key):
            return self.setdefault(key, type_())
    return Dict()

In any version:

d = defaultdict(list)
for stuff in lots_of_stuff:
     d[].append(stuff)

Thanks Ken Arnold [1]. I reimplemented a more sophisticated version of defaultdict. It should behave exactly as the one in the standard library [2].

def defaultdict(default_factory, *args, **kw):                              

    class defaultdict(dict):

        def __missing__(self, key):
            if default_factory is None:
                raise KeyError(key)
            return self.setdefault(key, default_factory())

        def __getitem__(self, key):
            try:
                return dict.__getitem__(self, key)
            except KeyError:
                return self.__missing__(key)

    return defaultdict(*args, **kw)

(1) You may be interested to learn about collections.defaultdict(list). - Thomas Wouters
Thanks. Does not work on my production environment though. Python 2.3. - pi.
Careful, that defaultdict reimplementation ends up calling type_ on every lookup instead of only when the item is missing. - Ken Arnold
Prior to python 2.2, you could not subclass dict directly, so you'd need to subclass from UserDict.UserDict. Better still would be to upgrade. - SingleNegationElimination
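A typical use of the standard library's collections.defaultdict, grouping items by a key (the word list is made up):

```python
from collections import defaultdict

words = ["apple", "ant", "bee", "cat", "cow"]
by_letter = defaultdict(list)       # missing keys start as a fresh empty list
for word in words:
    by_letter[word[0]].append(word)

assert dict(by_letter) == {"a": ["apple", "ant"],
                           "b": ["bee"],
                           "c": ["cat", "cow"]}
```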
[+5] [2008-09-21 21:57:37] Armin Ronacher

If you are using descriptors on your classes, Python completely bypasses the instance __dict__ for that key on attribute lookup (data descriptors such as property take precedence over it), which makes the __dict__ a nice place to store the underlying value:

>>> class User(object):
...  def _get_username(self):
...   return self.__dict__['username']
...  def _set_username(self, value):
...   print 'username set'
...   self.__dict__['username'] = value
...  username = property(_get_username, _set_username)
...  del _get_username, _set_username
>>> u = User()
>>> u.username = "foo"
username set
>>> u.__dict__
{'username': 'foo'}

This helps to keep dir() clean.

[+5] [2008-09-22 18:48:03] tghw


__getattr__ is a really nice way to make generic classes, which is especially useful if you're writing an API. For example, in the FogBugz Python API [1], __getattr__ is used to pass method calls on to the web service seamlessly:

class FogBugz:

    def __getattr__(self, name):
        # Let's leave the private stuff to Python
        if name.startswith("__"):
            raise AttributeError("No such attribute '%s'" % name)

        if not self.__handlerCache.has_key(name):
            def handler(**kwargs):
                return self.__makerequest(name, **kwargs)
            self.__handlerCache[name] = handler
        return self.__handlerCache[name]

When someone calls'bug'), they don't actually call a search method. Instead, __getattr__ handles the call by creating a new function that wraps the __makerequest method, which crafts the appropriate HTTP request to the web API. Any errors reported by the web service are passed back to the user.


You can also create semi-custom types in this manner. - user13876
[+5] [2008-10-16 10:52:13] Gurch

import antigravity [1]


(5) this answer was already given - Davide
[+5] [2009-10-21 18:44:16] user166390

Exposing Mutable Buffers

Using the Python Buffer Protocol [1] to expose mutable byte-oriented buffers in Python (2.5/2.6).

(Sorry, no code here. Requires use of low-level C API or existing adapter module).


[+5] [2009-11-03 13:10:07] Amol

The pythonic idiom x = ... if ... else ... is far superior to x = ... and ... or ... and here is why:

Although the statement

x = 3 if (y == 1) else 2

Is equivalent to

x = y == 1 and 3 or 2

if you use the x = ... and ... or ... idiom, some day you will get bitten when the "true" value happens to be falsy:

x = 0 if True else 1    # sets x to 0

is not equivalent to

x = True and 0 or 1   # sets x to 1, because 0 is falsy and the or-branch wins

For more on the proper way to do this, see Hidden features of Python [1].
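A sketch that shows the agreement and the divergence directly, plus the classic pre-2.5 workaround of wrapping the branches in lists (which are always truthy):

```python
y = 1
# the two idioms agree while the "true" branch is truthy...
assert (3 if y == 1 else 2) == 3
assert (y == 1 and 3 or 2) == 3

# ...but diverge as soon as it is falsy
assert (0 if True else 1) == 0
assert (True and 0 or 1) == 1        # 0 is falsy, so the or-branch wins

# the old workaround: wrap both branches in one-element lists, then index
assert (True and [0] or [1])[0] == 0
```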


[+5] [2010-01-13 22:46:27] Chinmay Kanchi

Monkeypatching objects

Every object in Python has a __dict__ member, which stores the object's attributes. So, you can do something like this:

class Foo(object):
    def __init__(self, arg1, arg2, **kwargs):
        #do stuff with arg1 and arg2
        self.__dict__.update(kwargs)

f = Foo('arg1', 'arg2', bar=20, baz=10)
#now f is a Foo object with two extra attributes

This can be exploited to add both attributes and functions arbitrarily to objects. This can also be exploited to create a quick-and-dirty struct type.

class struct(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

s = struct(foo=10, bar=11, baz="i'm a string!")

(6) except for the classes with __slots__ - John La Rooy
(1) Except for some "primitive" types implemented in C (for performance reasons, I guess). For instance, after a = 2, there is no a.__dict__ - Denilson Sá Maia
[+5] [2011-02-17 01:44:42] Abbafei

I'm not sure where (or whether) this is in the Python docs, but for Python 2.x (at least 2.5 and 2.6, which I just tried), the print statement can be called with parentheses. This can be useful if you want to be able to easily port some Python 2.x code to Python 3.x.

Example: print('We want Moshiach Now') prints We want Moshiach Now in Python 2.5, 2.6, and 3.x.

Also, the not operator can be called with parentheses in Python 2 and 3: not False and not(False) both return True.

Parentheses might also work with other statements and operators.

EDIT: it is NOT a good idea to put parentheses around the operand of not (or probably of any other operator), since it can lead to surprising situations, like the following (it happens because the parentheses really just surround the 1):

>>> (not 1) == 9
False

>>> not(1) == 9
True

This also works for some values where the juxtaposition is not a valid identifier name: not'val' returns False, and print'We want Moshiach Now' prints We want Moshiach Now. (But not552 raises a NameError, since it is a valid identifier name.)

(1) Side-effect of one of the basic design rules of the Python syntax. Parentheses and whitespace can be varied in pretty much any way that doesn't make the meaning ambiguous. (Which is why you get more freedom to word-wrap things like if/while statements if you put the test body in brackets.) - ssokolow
(2) What ssokolow said is correct. In python 2.6 the language was updated to be (more) compatible with python 3. In python 3+ parenthesis are required to call print. see here for more information:‌​on - Jake
[+5] [2011-03-05 08:06:59] armandino

In addition to this mentioned earlier by haridsv [1]:

>>> foo = bar = baz = 1
>>> foo, bar, baz
(1, 1, 1)

it's also possible to do this:

>>> foo, bar, baz = 1, 2, 3
>>> foo, bar, baz
(1, 2, 3)

[+5] [2011-06-14 01:49:41] Ken Arnold

getattr takes a third parameter

getattr(obj, attribute_name, default) is like:

try:
    return obj.attribute
except AttributeError:
    return default

except that attribute_name can be any string.

This can be really useful for duck typing [1]. Maybe you have something like:

class MyThing:
    pass
class MyOtherThing:
    pass
if isinstance(obj, (MyThing, MyOtherThing)):
    process(obj)

(btw, isinstance(obj, (a,b)) means isinstance(obj, a) or isinstance(obj, b).)

When you make a new kind of thing, you'd need to add it to that tuple everywhere it occurs. (That construction also causes problems when reloading modules or importing the same file under two names. It happens more than people like to admit.) But instead you could say:

class MyThing:
    processable = True
class MyOtherThing:
    processable = True
if getattr(obj, 'processable', False):
    process(obj)

Add inheritance and it gets even better: all of your examples of processable objects can inherit from

class Processable:
    processable = True

but you don't have to convince everybody to inherit from your base class, just to set an attribute.
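A minimal sketch of the attribute-based check (Dog and Rock are made-up classes):

```python
class Dog:
    processable = True

class Rock:
    pass

# no isinstance checks, no tuple of classes to maintain anywhere
assert getattr(Dog(), "processable", False) is True
assert getattr(Rock(), "processable", False) is False   # default kicks in
```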


[+5] [2011-06-20 14:54:56] Douglas

Simple built-in benchmarking tool

The Python Standard Library comes with a very easy-to-use benchmarking module called "timeit". You can even use it from the command line to see which of several language constructs is the fastest.


% python -m timeit 'r = range(0, 1000)' 'for i in r: pass'
10000 loops, best of 3: 48.4 usec per loop

% python -m timeit 'r = xrange(0, 1000)' 'for i in r: pass'
10000 loops, best of 3: 37.4 usec per loop
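The same comparison can be driven from code via timeit.timeit; a small sketch:

```python
import timeit

# time 1000 runs of the loop body, with the setup executed once
setup = "r = range(1000)"
t = timeit.timeit("for i in r: pass", setup=setup, number=1000)
assert t >= 0.0     # a wall-clock duration in seconds
```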

[+5] [2011-08-16 10:03:29] mdeous

Here are 2 easter eggs:

One in python itself:

>>> import __hello__
Hello world...

And another one in the Werkzeug module, which is a bit complicated to reveal, here it is:

By looking at Werkzeug's source code, in werkzeug/, there is a line that should draw your attention:

'werkzeug._internal':   ['_easteregg']

If you're a bit curious, this should lead you to have a look at the werkzeug/, there, you'll find an _easteregg() function which takes a wsgi application in argument, it also contains some base64 encoded data and 2 nested functions, that seem to do something special if an argument named macgybarchakku is found in the query string.

So, to reveal this easter egg, it seems you need to wrap an application in the _easteregg() function, let's go:

from werkzeug import Request, Response, run_simple
from werkzeug import _easteregg

def application(request):
    return Response('Hello World!')

run_simple('localhost', 8080, _easteregg(application))

Now, if you run the app and visit http://localhost:8080/?macgybarchakku, you should see the easter egg.

[+5] [2012-01-18 16:12:29] Justin

Dict Comprehensions

>>> {i: i**2 for i in range(5)}
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16}

Python documentation [1]

Wikipedia Entry [2]


[+5] [2012-01-18 16:17:28] Justin

Set Comprehensions

>>> {i**2 for i in range(5)}                                                       
set([0, 1, 4, 16, 9])

Python documentation [1]

Wikipedia Entry [2]


This is already covered in - Chris Morgan
[+4] [2008-09-19 11:55:26] cleg

Special methods

Absolute power! [1]


This is my favorite thing about Python. I especially love overloading operators. IMHO object1.add(object2) should always be object1 + object2. - fncomp
I read object1.add() as a destructive operation and + as one that only returns the result without modifying object1. - XTL
[+4] [2008-09-22 17:54:25] amix

Access dictionary elements as attributes (properties). So if a1 = AttrDict() has the key 'name', then instead of a1['name'] we can easily access the name attribute of a1 using

class AttrDict(dict):

    def __getattr__(self, name):
        if name in self:
            return self[name]
        raise AttributeError('%s not found' % name)

    def __setattr__(self, name, value):
        self[name] = value

    def __delattr__(self, name):
        del self[name]

person = AttrDict({'name': 'John Doe', 'age': 66})
print person['name']          # John Doe
print            # John Doe = 'Frodo G'
print            # Frodo G

del person.age

print person                  # {'name': 'Frodo G'}

(1) no title or explanation? where is the hidden feature here? - Sanjay Manohar
[+4] [2008-09-23 09:41:41] Rafał Dowgird

Tuple unpacking in for loops, list comprehensions and generator expressions:

>>> l=[(1,2),(3,4)]
>>> [a+b for a,b in l ]
[3, 7]

Useful in this idiom for iterating over (key,data) pairs in dictionaries:

d = { 'x':'y', 'f':'e'}
for name, value in d.items():  # one can also use iteritems()
   print "name:%s, value:%s" % (name,value)


name:x, value:y
name:f, value:e

This is also useful when l is replaced with zip(something). - asmeurer
[+4] [2008-09-24 03:03:20] Dan Udey

The first-classness of everything ('everything is an object'), and the mayhem this can cause.

>>> x = 5
>>> y = 10
>>> def sq(x):
...   return x * x
>>> def plus(x):
...   return x + x
>>> (sq,plus)[y>x](y)
20

The last line creates a tuple containing the two functions, then evaluates y>x (True) and uses that as an index into the tuple (bool is a subclass of int, so True indexes as 1), and then calls that function with parameter y and shows the result.

For further abuse, if you were returning an object with an index (e.g. a list) you could add further square brackets on the end; if the contents were callable, more parentheses, and so on. For extra perversion, use the result of code like this as the expression in another example (i.e. replace y>x with this code):


This showcases two facets of Python - the 'everything is an object' philosophy taken to the extreme, and the methods by which improper or poorly-conceived use of the language's syntax can lead to completely unreadable, unmaintainable spaghetti code that fits in a single expression.

why would you ever do this? it is hardly a valid criticism of a language to show how it can be intentionally abused. accidental abuse would be valid, but this would never happen by accident. - Christian Oudard
@Gorgapor: Python's consistency and lack of exceptions and special cases is what makes it easy to learn and, to me at least, beautiful. Any powerful tool, used abusively can cause 'mayhem'. Contrary to your opinion, I think the ability to index into a sequence of functions and call it, in a single expression is a powerful and useful idiom, and I've used it more than once, with explanatory comments. - Don O'Donnell
@Don: Your use case, indexing a sequence of functions, is a good one, and very useful. Dan Udey's use case, using a boolean as an index into an inline tuple of functions, is a horrible and useless one, which is needlessly obfuscated. - Christian Oudard
@Gorganpor: Sorry, I meant to address my comment to Dan Udey, not you. I agree entirely with you. - Don O'Donnell
[+4] [2008-10-12 22:40:18] pixelbeat

Taking advantage of Python's dynamic nature to keep an app's config files in Python syntax. For example, if you had the following in a config file:

  "name1": "value1",
  "name2": "value2"

Then you could trivially read it like:

config = eval(open("filename").read())

(3) I agree. I've started using a or file which I then load as a module. Sure beats the extra steps of parsing some other file format. - monkut
(24) I can see this becoming a security issue. - Richard Waite
(1) It could be, but sometimes it's not. In those cases, it's awesome. - recursive
Python can be a much more expressive configuration language than any amount of XML or INI files. I'm trying to avoid explicit config, with just an invoke script that does “import myapp; app= myapp.Application(...);”. Options default sensibly but can be changed using constructor args. - bobince
(This assumes that run-time configuration in the app itself is stored in a database. More significant configuration is possible through allowing the user to subclass Application and set properties/methods on the subclass.) - bobince
(9) That's a bold action for even non-hostile environments. eval() is a loaded gun, that needs intensive caution while handling. On the other hand, using JSON (now in 2.6 stdlib) is much more secure and portable for carrying configuration. - Berk D. Demir
(5) I would never approve a code review which contained an eval. - a paid nerd
(1) @Richard Waite: It's usually a security issue if an adversary can modify your config file... - L̲̳o̲̳̳n̲̳̳g̲̳̳p̲̳o̲̳̳k̲̳̳e̲̳̳
I agree, this is extremely useful in many quick'n'dirty scripts. But it's better to use execfile instead of eval+open+read. - Jukka Suomela
(1) Even in a trusted environment, this is an unacceptable security issue. If you need to parse config files, use ConfigParser - 10 lines of code give you a full blown mechanism for creating universally readable configuration file. Your approach is really not portable and not extensible. - Escualo
Then why does Django store site settings in a .py file (including db password)? Are they out of their minds, are they not using eval(), or is there something I'm missing? - Agos
(1) I personally don't like using eval() for anything, especially settings. I always wrap Django settings around ConfigParser and save actual information in a permission-guarded file. Like Rasmus Lerdorf said "If eval() is the answer, you’re almost certainly asking the wrong question." - AdmiralNemo
eval() has the same security issues that import does, so denying a script that uses it for security issues doesn't make sense. It is the usual issue of never evaling untrusted user input, but if the file ends in .py and gets imported, it still gets executed. The reason to use import is because it puts your configuration into a different namespace cleanly. You could also use execfile(ConfigFile,ConfigDict) to store the configuration files into a dictionary. - Perkins
no need for eval, name your dict (config) and import it from your module… (from configfile import config) - Tshirtman
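As the comments point out, eval is risky. If the config file only ever contains literals, the standard library's ast.literal_eval gives the same convenience without executing arbitrary code; a sketch:

```python
import ast

text = '{"name1": "value1", "name2": "value2"}'
config = ast.literal_eval(text)   # parses literals only, never runs code
assert config == {"name1": "value1", "name2": "value2"}

# anything that is not a plain literal is rejected outright
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except ValueError:
    pass   # rejected: not a literal
```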
[+4] [2008-10-20 11:59:39] Tupteq

Method replacement for object instance

You can replace methods of already created object instances. It allows you to create an object instance with different (exceptional) functionality:

>>> class C(object):
...     def fun(self):
...         print "", self
>>> inst = C()
>>>  # the original method is executed <__main__.C object at 0x00AE74D0>
>>> instancemethod = type(
>>> def fun2(self):
...     print "fun2", self
>>> = instancemethod(fun2, inst, C)  # now we replace with fun2
>>>  # ... and fun2 is executed
fun2 <__main__.C object at 0x00AE74D0>

As we can see, was replaced by fun2() on the inst instance only (the class itself didn't change).

Alternatively we may use the new module, but it's deprecated since Python 2.6:

>>> def fun3(self):
...     print "fun3", self
>>> import new
>>> = new.instancemethod(fun3, inst, C)
fun3 <__main__.C object at 0x00AE74D0>

Note: This solution shouldn't be used as a general replacement for the inheritance mechanism! But it may be very handy in some specific situations (debugging, mocking).

Warning: This solution will not work for built-in types and for new style classes using slots.

I personally tend to prefer to leave instancemethod to classes; paticularly so that the binding behavior foo.method works normally. If I'm binding self explicitly, I'll instead use functools.partial, which achieves the same effect, but makes it a bit clearer that the binding behavior is explicit. - SingleNegationElimination
[+4] [2009-06-18 15:54:00] Markus

With a minute amount of work, the threading module becomes amazingly easy to use. This decorator changes a function so that it runs in its own thread, returning a placeholder class instance instead of its regular result. You can probe for the answer by checking placeholder.result or wait for it by calling placeholder.awaitResult()

import threading

def threadify(function):
    """exceptionally simple threading decorator. Just:
    >>> @threadify
    ... def longOperation(result):
    ...     time.sleep(3)
    ...     return result
    >>> A = longOperation("A has finished")
    >>> B = longOperation("B has finished")

    A doesn't have a result yet:
    >>> print A.result
    None

    until we wait for it:
    >>> print A.awaitResult()
    A has finished

    we could also wait manually - half a second more should be enough for B:
    >>> time.sleep(0.5); print B.result
    B has finished
    """
    class thr (threading.Thread, object):
        def __init__(self, *args, **kwargs):
            threading.Thread.__init__(self)
            self.args, self.kwargs = args, kwargs
            self.result = None
            self.start()                  # launch the thread immediately
        def awaitResult(self):
            self.join()                   # block until run() has finished
            return self.result
        def run(self):
            self.result = function(*self.args, **self.kwargs)
    return thr

You may be interested in the concurrent.futures module added in Python 3.2 - ncoghlan
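For comparison, the concurrent.futures module mentioned above covers the same ground in a few lines (long_operation is a made-up stand-in, with a short delay so the example runs quickly):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def long_operation(result, delay=0.1):
    time.sleep(delay)
    return result

with ThreadPoolExecutor(max_workers=2) as pool:
    a = pool.submit(long_operation, "A has finished")
    b = pool.submit(long_operation, "B has finished")
    # .result() blocks until the worker is done, like awaitResult() above
    assert a.result() == "A has finished"
    assert b.result() == "B has finished"
```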
[+4] [2010-02-13 22:09:00] Juanjo Conti

There are no secrets in Python ;)

[+4] [2010-04-06 00:47:03] haridsv

You can assign several variables to the same value

>>> foo = bar = baz = 1
>>> foo, bar, baz
(1, 1, 1)

Useful to initialize several variables to None in a compact way.

(1) You could also do: foo, bar, baz = [None]*3 to get the same result. - Van Nguyen
You can also compare multiple things at once, like foo == bar == baz. It's essentially the same thing as (what is right now) the top answer. - asmeurer
(4) Also be aware that this will only create the value once, and all the variables will reference that one same value. It's fine for None, though, since it is a singleton object. - asmeurer
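The caution in the last comment can be made concrete; the same one-object-many-names behaviour bites hardest with mutable values:

```python
a = b = []            # one list, two names
a.append(1)
assert b == [1]       # b sees the change, because b *is* a

grid = [[0] * 3] * 2  # the same inner list, repeated twice
grid[0][0] = 9
assert grid[1][0] == 9    # surprise: "both rows" changed

safe = [[0] * 3 for _ in range(2)]   # a fresh inner list per row
safe[0][0] = 9
assert safe[1][0] == 0
```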
[+4] [2010-07-16 18:42:51] Wayne Werner

Combine unpacking with the print function:

# in 2.6 <= python < 3.0, 3.0 + the print function is native
from __future__ import print_function 

mylist = ['foo', 'bar', 'some other value', 1,2,3,4]
print(*mylist)

(1) I prefer something like print(' '.join([str(x) for x in mylist])). Using unpacking like this is too clever. - Brian
Performance wise I think the 'clever' version is faster (after doing some completely non-scientific tests). Plus you know * means you're unpacking a list or tuple, and you can use the sep keyword. - Wayne Werner
(2) I find this clean and simple, but I always wonder why pylint insists there's too much magic in there ;) - Paweł Prażak
@Paweł Prażak: I believe PyLint simply considers * and ** to be too magical, period. - ssokolow
(1) maybe some people are just allergic to * and ** because of pointer and double pointer resemblance ;) - Paweł Prażak
(1) @Brian I would drop the list and use generator print(' '.join(word for word in mylist)) - Paweł Prażak
[+4] [2011-01-07 16:00:50] Ant

insert vs append

not a feature, but may be interesting

suppose you want to insert some data in a list, and then reverse it. the easiest thing is

count = 10 ** 5
nums = []
for x in range(count):
    nums.append(x)

then you think: what about inserting the numbers from the beginning, instead? so:

count = 10 ** 5 
nums = [] 
for x in range(count):
    nums.insert(0, x)

but it turns out to be 100 times slower! if we set count = 10 ** 6, it will be 1,000 times slower; this is because the insert loop is O(n^2) overall, while the append loop is O(n).

the reason for that difference is that insert has to shift every element after the insertion point each time it's called; append just adds at the end of the list (sometimes it has to re-allocate everything, but each append is still amortized O(1), so much faster)

Or you can use nums.reverse() and have it done by the core - without the need to use range() - rob
i don't get your point, sorry.. - Ant
(1) The fact python lists are implemented with arrays is interesting; however, the example is not that useful, because the idiomatic way to reverse a list is to use reverse method, without any additional step. - rob
(2) And that would be why collections.deque exists - you can insert and pop entries from either end in O(1) - ncoghlan
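As the last comment notes, collections.deque is the right tool when you need cheap insertion at the front; a sketch:

```python
from collections import deque

nums = deque()
for x in range(5):
    nums.appendleft(x)        # O(1), unlike list.insert(0, x) which is O(n)

assert list(nums) == [4, 3, 2, 1, 0]
assert nums.popleft() == 4    # O(1) at either end
```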
[+4] [2011-01-13 00:18:43] Apalala

A module exports EVERYTHING in its namespace

Including names imported from other modules!

# this is ""
from operator import *
from inspect  import *

Now test what's importable from the module.

>>> import answer42
>>> answer42.__dict__.keys()
['gt', 'imul', 'ge', 'setslice', 'ArgInfo', 'getfile', 'isCallable', 'getsourcelines', 'CO_OPTIMIZED', 'le', 're', 'isgenerator', 'ArgSpec', 'imp', 'lt', 'delslice', 'BlockFinder', 'getargspec', 'currentframe', 'CO_NOFREE', 'namedtuple', 'rshift', 'string', 'getframeinfo', '__file__', 'strseq', 'iconcat', 'getmro', 'mod', 'getcallargs', 'isub', 'getouterframes', 'isdatadescriptor', 'modulesbyfile', 'setitem', 'truth', 'Attribute', 'div', 'CO_NESTED', 'ixor', 'getargvalues', 'ismemberdescriptor', 'getsource', 'isMappingType', 'eq', 'index', 'xor', 'sub', 'getcomments', 'neg', 'getslice', 'isframe', '__builtins__', 'abs', 'getmembers', 'mul', 'getclasstree', 'irepeat', 'is_', 'getitem', 'indexOf', 'Traceback', 'findsource', 'ModuleInfo', 'ipow', 'TPFLAGS_IS_ABSTRACT', 'or_', 'joinseq', 'is_not', 'itruediv', 'getsourcefile', 'dis', 'os', 'iand', 'countOf', 'getinnerframes', 'pow', 'pos', 'and_', 'lshift', '__name__', 'sequenceIncludes', 'isabstract', 'isbuiltin', 'invert', 'contains', 'add', 'isSequenceType', 'irshift', 'types', 'tokenize', 'isfunction', 'not_', 'istraceback', 'getmoduleinfo', 'isgeneratorfunction', 'getargs', 'CO_GENERATOR', 'cleandoc', 'classify_class_attrs', 'EndOfBlock', 'walktree', '__doc__', 'getmodule', 'isNumberType', 'ilshift', 'ismethod', 'ifloordiv', 'formatargvalues', 'indentsize', 'getmodulename', 'inv', 'Arguments', 'iscode', 'CO_NEWLOCALS', 'formatargspec', 'iadd', 'getlineno', 'imod', 'CO_VARKEYWORDS', 'ne', 'idiv', '__package__', 'CO_VARARGS', 'attrgetter', 'methodcaller', 'truediv', 'repeat', 'trace', 'isclass', 'ior', 'ismethoddescriptor', 'sys', 'isroutine', 'delitem', 'stack', 'concat', 'getdoc', 'getabsfile', 'ismodule', 'linecache', 'floordiv', 'isgetsetdescriptor', 'itemgetter', 'getblock']
>>> from answer42 import getmembers
>>> getmembers
<function getmembers at 0xb74b2924>

That's a good reason not to use from x import *, and a good reason to define __all__.
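A self-contained Python 3 sketch of how __all__ limits star-imports (the module is built on the fly purely for demonstration):

```python
import sys
import types

# build a throwaway module to show what "from m import *" picks up
mod = types.ModuleType("answer42_demo")
exec(
    "__all__ = ['public']\n"
    "def public(): return 42\n"
    "def _hidden(): return 0\n"
    "import operator\n",
    mod.__dict__,
)
sys.modules["answer42_demo"] = mod

ns = {}
exec("from answer42_demo import *", ns)
assert "public" in ns
assert "_hidden" not in ns        # not listed in __all__
assert "operator" not in ns       # imported modules stay private too
```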

(4) How is that a hidden feature? __all__ exists to limit what's exported, and it's even in the tutorial. - Cat Plus Plus
(1) @PiotrLegnica Did you know that a module also exports what it imports, unless __all__ is used? It is unlike most languages with modules, and I haven't read about the "feature" in the documentation, so, for me, it qualifies as hidden. - Apalala
@Apalala: why shouldn't imported (sub)modules be exported given the fact they are in the namespace of the main module? - Cristian Ciupitu
@Cristian Ciupitu. In most other languages a module (or its equivalent) exports only what it defines, plus, implicitly, what's reachable from what it defines. It's traditionally part of separation of concerns and implementation hiding. A module may import, say, math to do its thing, and cope with standard arithmetic in a later version; importing modules should not know about that (traditionally). - Apalala
IOW, if you want to use math.sqrt(), then you should import it from math, not from answer42. - Apalala
@Apala: I got your point regarding other languages in the first place, but given Python's dynamic nature and object orientation, this behavior isn't quite a surprise. - Cristian Ciupitu
@Cristian Ciupitu. It is strictly a not-well-documented design choice that has nothing to do with the dynamics of the language. It is also so unusual, unexpected, and useless, that it might qualify as a design flaw. - Apalala
@Apalala: how is a module different from a class or let's say a type like int or float? All of them are objects and all of them reside in the module's namespace. The design is consistent. It lets you do stuff like def f(m, x): return m.sqrt(x); f(math, x), although this is not exactly a good coding style. The "feature" is unusual and unexpected only if you compare it with other languages (that are less "dynamic", whatever that means). - Cristian Ciupitu
@Apalala: if you want to see a good use case checkout the source code of the os module. It does stuff like import posixpath as path or import ntpath as path, posixpath and ntpath being other modules, of course. - Cristian Ciupitu
@Cristian Ciupitu I concede. It is useful, and consistently used. But I still don't like that it is so undocumented. - Apalala
[+4] [2011-03-27 02:29:50] Kabie

Unicode identifier in Python3:

>>> 'Unicode字符_تكوين_Variable'.isidentifier()
True
>>> Unicode字符_تكوين_Variable='Python3 rules!'
>>> Unicode字符_تكوين_Variable
'Python3 rules!'

(2) Of course, using non-ascii characters in python source code for any reason except to spell contributor names in the header documentation is in violation of pep-8 code style rules. - SingleNegationElimination
[+4] [2011-12-24 15:49:25] e-satis

Python has exceptions for very unexpected things:


This lets you import an alternative if a lib is missing:

try:
    import json
except ImportError:
    import simplejson as json


For loops do this internally, and catch StopIteration:

>>> iter([]).next()
Traceback (most recent call last):
  File "<pyshell#4>", line 1, in <module>
    iter([]).next()
StopIteration


>>> try:
...     assert []
... except AssertionError:
...     print "This list should not be empty"
This list should not be empty

While this is more verbose for a single check, multiple checks mixing exceptions and boolean operators with the same error message can be shortened this way.

[+3] [2008-09-19 13:51:54] Thomas Wouters

Everything is dynamic

"There is no compile-time". Everything in Python is runtime. A module is 'defined' by executing the module's source top-to-bottom, just like a script, and the resulting namespace is the module's attribute-space. Likewise, a class is 'defined' by executing the class body top-to-bottom, and the resulting namespace is the class's attribute-space. A class body can contain completely arbitrary code -- including import statements, loops and other class statements. Creating a class, function or even module 'dynamically', as is sometimes asked for, isn't hard; in fact, it's impossible to avoid, since everything is 'dynamic'.
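A small Python 3 sketch of this (the class and attribute names here are invented for illustration): a class body is just code, and type() is the same machinery used explicitly.

```python
# A class body is ordinary code executed top-to-bottom at runtime;
# the resulting namespace becomes the class's attribute-space.
class Config:
    import math                      # an import statement inside a class body
    factorials = {}
    for n in range(5):               # a loop inside a class body
        factorials[n] = math.factorial(n)

print(Config.factorials[4])          # 24
print(Config.math.pi > 3)            # True: even the import became a class attribute

# type(name, bases, namespace) builds a class dynamically, no class statement needed.
Dynamic = type("Dynamic", (object,), {"answer": 42})
print(Dynamic().answer)              # 42
```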

This gives Python the wonderful reload(). - user13876
Everything is dynamic... Except classes and modules implemented in C, which are not as dynamic as everything else. (try something like dict.x = 3, and Python won't let you) - Denilson Sá Maia
Yes, modules and types defined in C are defined at compiletime, but they're still created at runtime. Also, dict.x = 3 has nothing to do with things being dynamic, but with the dict type not allowing attributes to be assigned. You can make your own classes, in Python, that don't allow that. You can make your own type, in C, that does allow it. It's unrelated. - Thomas Wouters
How is this a hidden feature? - Alexandru
I frequently hear this, but it isn't quite true. When you import a module, the whole thing is compiled immediately. If there are any syntax errors anywhere in the module, nothing will execute. The big difference between Python and more traditionally compiled languages is that class and function definitions are statements that are executed at runtime (and hence can be skipped via if statements and exceptions, nested arbitrarily, etc). - ncoghlan
[+3] [2008-09-23 17:48:20] Constantin

Objects in boolean context

Empty tuples, lists, dicts, strings and many other objects are equivalent to False in a boolean context (and non-empty ones are equivalent to True).

empty_tuple = ()
empty_list = []
empty_dict = {}
empty_string = ''
empty_set = set()
if empty_tuple or empty_list or empty_dict or empty_string or empty_set:
  print 'Never happens!'

This allows logical operations to return one of their operands instead of True/False, which is useful in some situations:

s = t or "Default value" # s will be assigned "Default value"
                         # if t is false/empty/none

(4) actually this is discouraged, you should use the "new" s = t if t else "default value" - Tom
[+3] [2008-10-17 02:19:19] zaphod

Private methods and data hiding (encapsulation)

There's a common idiom in Python of denoting methods and other class members that are not intended to be part of the class's external API by giving them names that start with underscores. This is convenient and works very well in practice, but it gives the false impression that Python does not support true encapsulation of private code and/or data. In fact, Python automatically gives you lexical closures [1], which make it very easy to encapsulate data in a much more bulletproof way when the situation really warrants it. Here's a contrived example of a class that makes use of this technique:

class MyClass(object):
  def __init__(self):

    privateData = {}

    self.publicData = 123

    def privateMethod(k):
      print privateData[k] + self.publicData

    def privilegedMethod():
      privateData['foo'] = "hello "
      privateMethod('foo')

    self.privilegedMethod = privilegedMethod

  def publicMethod(self):
    print self.publicData

And here's a contrived example of its use:

>>> obj = MyClass()
>>> obj.publicMethod()
123
>>> obj.publicData = 'World'
>>> obj.publicMethod()
World
>>> obj.privilegedMethod()
hello World
>>> obj.privateMethod()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'MyClass' object has no attribute 'privateMethod'
>>> obj.privateData
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'MyClass' object has no attribute 'privateData'

The key is that privateMethod and privateData aren't really attributes of obj at all, so they can't be accessed from outside, nor do they show up in dir() or similar. They're local variables in the constructor, completely inaccessible outside of __init__. However, because of the magic of closures, they really are per-instance variables with the same lifetime as the object with which they're associated, even though there's no way to access them from outside except (in this example) by invoking privilegedMethod. Often this sort of very strict encapsulation is overkill, but sometimes it really can be very handy for keeping an API or a namespace squeaky clean.

In Python 2.x, the only way to have mutable private state is with a mutable object (such as the dict in this example). Many people have remarked on how annoying this can be. Python 3.x will remove this restriction by introducing the nonlocal keyword described in PEP 3104 [2].


(6) this is almost never a good idea. - Christian Oudard
(1) "They're local variables in the constructor, completely inaccessible outside of init." Not true: >>> [c.cell_contents for c in obj.privilegedMethod.func_closure] --> [{'foo': 'hello '}, <function privateMethod at 0x65530>] - Miles
The right way of preventing attribute access would be have a __getattribute__ or __getattr__ sentinal and route accepted calls accordingly. Again, secrecy and python isnt a good idea. - Jeffrey Jose
[+3] [2008-10-24 22:36:56] Karl Anderson

Functional support.

Generators and generator expressions, specifically.

Ruby made this mainstream again, but Python can do it just as well. Not as ubiquitous in the libraries as in Ruby, which is too bad, but I like the syntax better, it's simpler.

Because they're not as ubiquitous, I don't see as many examples out there on why they're useful, but they've allowed me to write cleaner, more efficient code.
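For readers who haven't seen them, a minimal sketch of both forms (the function name countdown is invented for illustration):

```python
# A generator function: yields values lazily, one per iteration step.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

print(list(countdown(3)))            # [3, 2, 1]

# A generator expression: the same laziness, written inline; nothing is
# materialized until something iterates over it.
squares = (x * x for x in range(10))
print(sum(squares))                  # 285
```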

[+3] [2009-03-02 18:23:19] lprsd

Simulating the ternary operator using and and or.

The and and or operators in Python return the operand objects themselves rather than Booleans. Thus:

In [18]: a = True

In [19]: a and 3 or 4
Out[19]: 3

In [20]: a = False

In [21]: a and 3 or 4
Out[21]: 4

However, Python 2.5 added an explicit ternary (conditional) operator:

    In [22]: a = 5 if True else '6'

    In [23]: a
    Out[23]: 5

Well, this works if you are sure that your true clause does not evaluate to False. example:

>>> def foo(): 
...     print "foo"
...     return 0
>>> def bar(): 
...     print "bar"
...     return 1
>>> 1 and foo() or bar()
foo
bar
1

To get it right, you've got to do just a little bit more:

>>> (1 and [foo()] or [bar()])[0]
foo
0

However, this isn't as pretty. If your version of Python supports it, use the conditional operator.

>>> foo() if True else bar()
foo
0

(2) Careful with that: >>> a and "" or ":(" you'll always get a frowny face back, no matter if a is true or false - Marius Gedminas
Marius, Only, if a is false. Otherwise U'd want ":(" as "" is false. - lprsd
(4) (falseValue, trueValue)[cond] is a cleaner (IMO) way to simulate a ternary operator. - Ponkadoodle
[+3] [2009-07-06 17:28:20] Steven Sproat

If you've renamed a class in your application where you're loading user-saved files via Pickle, and one of the renamed classes is stored in a user's old save, you will not be able to load in that pickled file.

However, simply add in a reference to your class definition and everything's good:

e.g., before:

class Bleh:
    pass

after:

class Blah:
    pass

so, your user's pickled save file contains a reference to Bleh, which doesn't exist due to the rename. The fix?

Bleh = Blah


A reasonable hack, but why has the class name changed? was it because it conflicts with something else? Doing this sort of negates any benefit you might have had from renaming the class in the first place. - SingleNegationElimination
I was modelling classes on "drawing" tools - pen, rectangle, select etc, and was using the class name as GUI button labels. I then changed to a class variable to represent the name, later. - Steven Sproat
[+3] [2009-08-27 02:14:14] Greg

The fact that EVERYTHING is an object, and as such is extensible. I can add member variables as metadata to a function that I define:

>>> def addInts(x,y): 
...    return x + y
>>> addInts.params = ['integer','integer']
>>> addInts.returnType = 'integer'

This can be very useful for writing dynamic unit tests, e.g.

(2) Most things are objects; and some objects do not take property assignments so happily. - user166390
[+3] [2009-10-20 06:35:25] six8

Simple way to test if a key is in a dict:

>>> 'key' in { 'key' : 1 }
True

>>> d = dict(key=1, key2=2)
>>> if 'key' in d:
...     print 'Yup'
... 
Yup

(7) This is hopefully not hidden for any non-new Python coder! - u0b34a0f6ae
Or even new ones, since it's introduced in the tutorial. - XTL
[+3] [2010-07-19 15:51:01] Don O'Donnell
Using sets to reference contents in sets of frozensets

As you probably know, sets are mutable and thus not hashable, so it's necessary to use frozensets if you want to make a set of sets (or use sets as dictionary keys):

>>> fabc = frozenset('abc')
>>> fxyz = frozenset('xyz')
>>> mset = set((fabc, fxyz))
>>> mset
{frozenset({'a', 'c', 'b'}), frozenset({'y', 'x', 'z'})}

However, it's possible to test for membership and remove/discard members using just ordinary sets:

>>> abc = set('abc')
>>> abc in mset
True
>>> mset.remove(abc)
>>> mset
{frozenset({'y', 'x', 'z'})}

To quote from the Python Standard Library docs:

Note, the elem argument to the __contains__(), remove(), and discard() methods may be a set. To support searching for an equivalent frozenset, the elem set is temporarily mutated during the search and then restored. During the search, the elem set should not be read or mutated since it does not have a meaningful value.

Unfortunately, and perhaps astonishingly, the same is not true of dictionaries:

>>> mdict = {fabc:1, fxyz:2}
>>> fabc in mdict
True
>>> abc in mdict
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
TypeError: unhashable type: 'set'

[+3] [2011-02-22 21:34:21] Ivan P

Python has "private" variables

Variables that start, but do not end, with a double underscore become private, and not just by convention. Actually __var turns into _Classname__var, where Classname is the class in which the variable was created. They are not inherited and cannot be overridden.

>>> class A:
...     def __init__(self):
...             self.__var = 5
...     def getvar(self):
...             return self.__var
>>> a = A()
>>> a.__var
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: A instance has no attribute '__var'
>>> a.getvar()
5
>>> dir(a)
['_A__var', '__doc__', '__init__', '__module__', 'getvar']

(3) umm... not quite "real private variables". Nothing stops you from accessing _A__var... - Jake
Nothing stops you from accessing private memory locations in C++ programs to twiddle private variables either, but it's seriously frowned upon. - Benson
(3) In C++, it's a limitation of how the language addresses memory. If you want to risk crashing your program or inducing someone less familiar with the code to do so, that can't be prevented without breaking the C-language compatibility. Python's "member name mangling" isn't intended for use as a private variable mechanism. It's intended for public members which need to opt out of the normal inheritance/override rules. Calling it "private variable support" because one popular language is unable to offer full variable isolation only devalues the concept. - ssokolow
You are right, I did not pick good words to describe the functionality - they are not real private variables. Edited as best as I could. - Ivan P
Who need private variables? When you're dumb nobody can prevent you from making dumb things in every programming language. When someone wants to change a constant (written in full upper case), he can do that, but he don't get his code through the code review. - Joschua
[+3] [2011-06-30 01:04:25] matchew

While not very pythonic, you can write to a file using print [1]:

print>>outFile, 'I am Being Written'

Explanation [2]:

This form is sometimes referred to as “print chevron.” In this form, the first expression after the >> must evaluate to a “file-like” object, specifically an object that has a write() method as described above. With this extended form, the subsequent expressions are printed to this file object. If the first expression evaluates to None, then sys.stdout is used as the file for output.
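For comparison, the Python 3 form mentioned in the comments takes any file-like object via the file= keyword argument; a quick sketch using io.StringIO as the target:

```python
import io

buf = io.StringIO()                  # any object with a write() method works
print("I am Being Written", file=buf)
print(repr(buf.getvalue()))          # 'I am Being Written\n'
```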


thanks for adding the explanation. I was sort of curious where I had seen that before, I've just been using it for awhile. Someone commented on it last night and I realized that others were probably unfamiliar with its usage. - matchew
(1) This syntax saw some updating in Python 3, so you can now do print('I am being writtten', file=outFile). I was just reading about the changes. So now it actually is much more pythonic. - shadowland
very nice. I've been enjoying pypy lately, which may only delay my transition to python3. - matchew
[+3] [2011-10-12 15:27:35] etuardu

Print multiline strings one screenful at a time

A not-really-useful feature hidden in the site._Printer class, of which the license object is an instance. The latter, when called, prints the Python license. One can create another object of the same type, passing a string -- e.g. the content of a file -- as the second argument, and call it:

>>> from site import _Printer
>>> p = _Printer('some_file', open('some_file.txt').read())
>>> p()

That would print the file content split a certain number of lines at a time:

file row 21
file row 22
file row 23

Hit Return for more, or q (and Return) to quit:

[+2] [2008-09-21 21:49:26] Armin Ronacher

If you use exec in a function the variable lookup rules change drastically. Closures are no longer possible but Python allows arbitrary identifiers in the function. This gives you a "modifiable locals()" and can be used to star-import identifiers. On the downside it makes every lookup slower because the variables end up in a dict rather than slots in the frame:

>>> def f():
...  exec "a = 42"
...  return a
>>> def g():
...  a = 42
...  return a
>>> import dis
>>> dis.dis(f)
  2           0 LOAD_CONST               1 ('a = 42')
              3 LOAD_CONST               0 (None)
              6 DUP_TOP             
              7 EXEC_STMT           

  3           8 LOAD_NAME                0 (a)
             11 RETURN_VALUE        
>>> dis.dis(g)
  2           0 LOAD_CONST               1 (42)
              3 STORE_FAST               0 (a)

  3           6 LOAD_FAST                0 (a)
              9 RETURN_VALUE        

(3) Just to nitpick: that only applies to bare exec. If you specify the namespace for it to use, eg "d={}; exec "a=42" in d" this won't happen. - Brian
[+2] [2009-03-17 00:56:53] jfs

The spam module in standard Python

It is used for testing purposes.

I've picked it from ctypes tutorial [1]. Try it yourself:

>>> import __hello__
Hello world...
>>> type(__hello__)
<type 'module'>
>>> from __phello__ import spam
Hello world...
Hello world...
>>> type(spam)
<type 'module'>
>>> help(spam)
Help on module __phello__.spam in __phello__:



(3) sorry, why and how would you use this? - cmcginty
@Casey: read "Accessing values exported from dlls" section from the ctypes tutorial… - jfs
(1) Your example is unclear. - Mikel
[+2] [2009-04-23 14:26:17] Mike

Memory Management

Python dynamically allocates memory and uses garbage collection to recover unused space. Once an object is out of scope, and no other variables reference it, it will be recovered. I do not have to worry about buffer overruns and slowly growing server processes. Memory management is also a feature of other dynamic languages but Python just does it so well.

Of course, we must watch out for circular references and keeping references to objects which are no longer needed, but weak references help a lot here.
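A short sketch of the weak-reference point (the class Node is invented for illustration; CPython's reference counting reclaims the object as soon as the last strong reference is dropped):

```python
import weakref

class Node:
    """Placeholder class; weakref needs a regular user-defined type."""

n = Node()
r = weakref.ref(n)        # a weak reference does not keep n alive
print(r() is n)           # True: the referent is still there
del n                     # drop the only strong reference
print(r() is None)        # True (in CPython): the object was collected
```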

[+2] [2009-09-09 13:01:23] Busted Keaton

The getattr built-in function :

>>> class C():
    def getMontys(self):
        self.montys = ['Cleese','Palin','Idle','Gilliam','Jones','Chapman']
        return self.montys

>>> c = C()
>>> getattr(c,'getMontys')()
['Cleese', 'Palin', 'Idle', 'Gilliam', 'Jones', 'Chapman']

Useful if you want to dispatch a function depending on the context. See examples in Dive Into Python ( Here [1])


[+2] [2009-10-21 18:39:29] user166390

Classes as first-class objects (shown through a dynamic class definition)

Note the use of the closure as well. If this particular example looks like a "right" approach to a problem, carefully reconsider ... several times :)

def makeMeANewClass(parent, value):
  class IAmAnObjectToo(parent):
    def theValue(self):
      return value
  return IAmAnObjectToo

Klass = makeMeANewClass(str, "fred")
o = Klass()
print isinstance(o, str)  # => True
print o.theValue()        # => fred

[+2] [2009-11-14 13:05:23] Eryk Sun

Regarding Nick Johnson's implementation of a Property class [1] (just a demonstration of descriptors, of course, not a replacement for the built-in), I'd include a setter that raises an AttributeError:

class Property(object):
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, type):
        if obj is None:
            return self
        return self.fget(obj)

    def __set__(self, obj, value):
       raise AttributeError, 'Read-only attribute'

Including the setter makes this a data descriptor as opposed to a method/non-data descriptor. A data descriptor has precedence over instance dictionaries. Now an instance can't have a different object assigned to the property name, and attempts to assign to the property will raise an attribute error.
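A usage sketch (Python 3 syntax, so the raise statement is spelled with parentheses; the Circle class is invented for illustration):

```python
class Property:
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return self.fget(obj)

    def __set__(self, obj, value):
        # Defining __set__ makes this a data descriptor, which takes
        # precedence over the instance dictionary.
        raise AttributeError('Read-only attribute')

class Circle:
    def __init__(self, r):
        self._r = r

    @Property
    def radius(self):
        return self._r

c = Circle(2)
print(c.radius)               # 2
try:
    c.radius = 5              # blocked: the data descriptor wins
except AttributeError as exc:
    print(exc)                # Read-only attribute
```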


[+2] [2011-03-09 19:24:07] Luper Rouch

Not at all a hidden feature but still nice:

import os.path as op

root_dir = op.abspath(op.join(op.dirname(__file__), ".."))

Saves lots of characters when manipulating paths !

[+2] [2011-05-27 14:08:33] Mojo_Jojo

Ever used xrange(INT) instead of range(INT)? It's lazy, so its memory usage is small and doesn't depend on the size of the range. Yey!! Isn't that good?
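As the comment notes, in Python 3 range() itself is lazy, so this distinction disappears. A quick Python 3 sketch:

```python
# range objects use constant memory and support O(1) membership and indexing.
r = range(10 ** 12)           # no trillion-element list is built
print(len(r))                 # 1000000000000
print(500 in r)               # True, computed arithmetically, not by iterating
print(r[-1])                  # 999999999999, also computed on demand
```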

(2) In Python 3, both are the same. - Wok
[+2] [2012-01-21 06:22:09] Primal Pappachan

Not really a hidden feature but something that might come in handy.

for looping through items in a list pairwise

for x, y in zip(s, s[1:]):
    print x, y
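For example (the list contents are invented for illustration; Python 3 syntax):

```python
s = [1, 2, 3, 4]
pairs = []
for x, y in zip(s, s[1:]):    # pairs (s[0], s[1]), (s[1], s[2]), ...
    pairs.append((x, y))
print(pairs)                  # [(1, 2), (2, 3), (3, 4)]
```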

[+2] [2012-02-02 17:16:36] Giampaolo Rodolà
>>> float('infinity')
inf
>>> float('NaN')
nan
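These special floats follow IEEE 754 arithmetic, which is what makes float('inf') handy as a "no limit" sentinel (as one comment points out):

```python
import math

inf = float('infinity')
nan = float('NaN')
print(inf - 1 == inf)     # True: arithmetic saturates, good for "no limit" values
print(nan == nan)         # False: NaN compares unequal even to itself
print(math.isnan(nan))    # True: the reliable way to test for NaN
```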

More info:


People forget about these far too often. I've seen things like having a "count" argument to a function, which is then decreased by one until it gets to zero, defaulting to a special value which is made to mean not to stop, when float('inf') would do nicely, not requiring any special code at all (inf - 1 == inf). - Chris Morgan
[+1] [2008-09-19 13:25:43] Kevin Little
>>> x=[1,1,2,'a','a',3]
>>> y = [ _x for _x in x if not _x in locals()['_[1]'] ]
>>> y
[1, 2, 'a', 3]

"locals()['_[1]']" is the "secret name" of the list being created. Very useful when the state of the list being built affects subsequent build decisions.

(11) Ew. This 'name' of the result list depends on too many factors to really consider it more than abuse of a specific implementation (and specific to a particular version, to boot.) On top of that it's an O(n^2) algorithm. Yuck. - Thomas Wouters
(3) Well, at least no one will claim this one isn't hidden. - I. J. Kennedy
[+1] [2011-06-19 15:35:02] jassinm

mapreduce using map and reduce functions

create a simple sumproduct this way:

def sumprod(x,y):
    return reduce(lambda a,b:a+b, map(lambda a, b: a*b,x,y))


In [2]: sumprod([1,2,3],[4,5,6])
Out[2]: 32

[+1] [2011-12-09 07:18:25] sransara

Not a programming feature, but useful when using Python with bash or shell scripts.

python -c"import os; print(os.getcwd());"

See the python documentation here [1]. Additional things to note when writing longer Python scripts can be seen in this discussion [2].


[+1] [2012-01-13 21:35:06] Perkins

Python's positional and keyword expansions can be used on the fly, not just from a stored list.

l = lambda x, y, z: x + y + z
a = [1, 2, 3]
print l(*a)
print l(*[a[0], 2, 3])

It is usually more useful with things like this:


[0] [2009-12-24 13:30:18] Martin Thurau

You can construct a functions kwargs on demand:

kwargs = {}
kwargs[str("%s__icontains" % field)] = some_value

The str() call is somehow needed, since Python complains otherwise that it is not a string. Don't know why ;) I use this for dynamic filters within Django's object model:

result = model_class.objects.filter(**kwargs)

(1) The reason is complains is probably because "field" is unicode, which makes the whole string unicode. - mthurlin
[0] [2011-05-25 10:47:42] Rabarberski

Multiply a string to get it repeated

print "SO"*5 



(1) You can also do this with lists: [3]*3 == [3, 3, 3] - inspectorG4dget
[0] [2011-05-26 18:01:29] inspectorG4dget


If you want to get the output of a function which outputs directly to stdout or stderr as is the case with os.system, commands.getoutput [1] comes to the rescue. The whole module is just made of awesome.

>>> print commands.getoutput('ls')
myFile1.txt    myFile2.txt    myFile3.txt    myFile4.txt    myFile5.txt
myFile6.txt    myFile7.txt    myFile8.txt    myFile9.txt    myFile10.txt
myFile11.txt   myFile12.txt   myFile13.txt   myFile14.txt
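As the comments note, commands is a UNIX-only module that was removed in Python 3; subprocess.check_output is the portable replacement. A minimal sketch (it runs the current interpreter as the command so the example is self-contained):

```python
import subprocess
import sys

# Run a command and capture its stdout as bytes.
out = subprocess.check_output([sys.executable, "-c", "print('hi')"])
print(out.decode().strip())   # hi
```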

(2) Given that it's basically a UNIX-only precursor to the subprocess module and has been removed in Python 3.0, shouldn't you be talking about subprocess instead of commands? - ssokolow
Touche! However, I'm using 2.7 on windows (not UNIX-only) at work. It works here and I just discovered it. Thus, I thought it was worth a mention. - inspectorG4dget
specifically, subprocess.check_output - wim
[0] [2011-09-05 07:46:47] giodamelio

Here is a helpful function I use when debugging type errors

def typePrint(object):
    print(str(object) + " - (" + str(type(object)) + ")")

It simply prints the input followed by the type, for example

>>> a = 101
>>> typePrint(a)
    101 - (<type 'int'>)

[0] [2011-10-21 16:43:06] shadowland

Interactive Debugging of Scripts (and doctest strings)

I don't think this is as widely known as it could be, but add this line to any python script:

import pdb; pdb.set_trace()

will cause the PDB debugger to pop up with the run cursor at that point in the code. What's even less known, I think, is that you can use that same line in a doctest:

>>> 1 in (1,2,3)
True
>>> import pdb; pdb.set_trace(); 1 in (1,2,3)

You can then use the debugger to checkout the doctest environment. You can't really step through a doctest because the lines are each run autonomously, but it's a great tool for debugging the doctest globs and environment.

[0] [2011-12-07 18:49:03] Giampaolo Rodolà

In Python 2 you can generate a string representation of an expression by enclosing it with backticks:

 >>> `sorted`
'<built-in function sorted>'

This is gone in python 3.X.

[0] [2012-01-09 08:57:08] Srinivas Reddy Thatiparthy

Some cool tricks with reduce and the operator module:

>>> from operator import add,mul
>>> reduce(add,[1,2,3,4])
10
>>> reduce(mul,[1,2,3,4])
24
>>> reduce(add,[[1,2,3,4],[1,2,3,4]])
[1, 2, 3, 4, 1, 2, 3, 4]
>>> reduce(add,(1,2,3,4))
10
>>> reduce(mul,(1,2,3,4))
24

[-2] [2008-11-27 03:24:04] M. Utku ALTINKAYA
is_ok() and "Yes" or "No"

That's strange. Interesting, but strange. >>> True and "Yes" or "No" 'Yes' >>> False and "Yes" or "No" 'No' >>> x = "Yes" >>> y = "No" >>> >>> False and x or y - monkut
(30) The preferred way to accomplish this in Python 2.5 or up is " 'Yes' if is_ok() else 'No' ". - Paul Fisher
whether it is preferred or not, the way is correct and I use all the time and I think it is elegant. since this is hidden features question really interesting this post has been negatively voted, - M. Utku ALTINKAYA
"preferred" argument is open to discussion, becouse this way, the execution order is the same as the logical order, while "Yes" if True else "No" is not like that. - M. Utku ALTINKAYA
(7) "Preferred" In this case means that the conditional operator works as expected for all possible operands. Specifically, True and False or True is True, but False if True else True is false, which is almost certainly what you expected. This is especially important where the operands have side effects, and the conditional operator will NEVER evaluate more than one of its conditional clauses. - SingleNegationElimination
This is a commonly used feature in many languages [especially bash, where the && || syntax is used to emulate ternary operator] - Foo Bah
[-2] [2011-09-26 19:55:45] bfontaine
for line in open('foo'):

which is equivalent (but better) to:

f = open('foo', 'r')
for line in f.readlines():
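Regarding when the file gets closed: a with-statement makes the close deterministic while keeping the lazy line-by-line iteration. A self-contained Python 3 sketch (the file name and contents are invented for illustration):

```python
import os
import tempfile

# Create a small file so the example can actually run.
path = os.path.join(tempfile.gettempdir(), "demo_lines.txt")
with open(path, "w") as f:
    f.write("one\ntwo\n")

lines = []
with open(path) as f:         # closed deterministically when the block exits
    for line in f:            # lazy iteration; readlines() would load everything
        lines.append(line.rstrip("\n"))

print(lines)                  # ['one', 'two']
```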

(2) That's not equivalent at all, because you can't predict when the file will be closed. That depends on the interpreter. As far as I know CPython garbage collects objects as soon as possible, but other interpreters might not. - Cristian Ciupitu
[-5] [2011-07-27 22:52:20] Abdelouahab

To trigger autocompletion in IDEs that support it (like IDLE, Editra, IEP): instead of typing "hi". and then hitting TAB, you can cheat and type just hi". (note: no opening quote) and hit TAB, because the IDE only looks at the most recent punctuation. It's like when you type : and hit Enter and the editor adds an indent automatically. Don't know if it will change anything, but it's a tip, no more :)

Can someone please clarify what this means? - the Tin Man
that when you hit TAB, choices can be available even if it's not a string. Just do this in IEP for example: type ". and hit TAB, and you'll get the choices offered when dealing with strings... or try this other hint: type : and hit Enter, and you'll get an indentation :) - Abdelouahab
This seems to be just a common editor feature or two. - XTL
[-9] [2010-05-13 20:23:55] L̲̳o̲̳̳n̲̳̳g̲̳̳p̲̳o̲̳̳k̲̳̳e̲̳̳


def g():
    print 'hi!'

def f(): (
    g()
)

>>> f()
hi!

(1) >>> def f(): ( ... g() ... g() File "<stdin>", line 3 g() ^ SyntaxError: invalid syntax - bukzor
(2) I was trying to show that your feature doesn't work if you have more than one statement inside the "braces". - bukzor
(34) Everyone knows that Python uses #{ and #} for braces. Subject to certain lexical constraints. - detly