Common "truisms" needing correction the most
[+43] [41] Charles Bretana
[2009-01-11 02:00:21]
[ lessons-learned ]
[ http://stackoverflow.com/questions/432167] [DELETED]

In addition to "I never met a man I didn't like", Will Rogers had another great little ditty I've always remembered. It went:

"It's not what you don't know that'll hurt you, it's what you do know that ain't so."

We all know or subscribe to many IT "truisms" that mostly have a strong basis in fact: something from our professional careers, something we learned from others, or lessons learned the hard way, whether by ourselves or by those who came before us.

Unfortunately, as these truisms spread throughout the community, the details — why they came about and the caveats that affect when they apply — tend to not spread along with them.

We all have a tendency to look for, and latch on to, small "rules" or principles that let us avoid a complete, exhaustive analysis for every decision. But even though they are correct much of the time, when we misapply them we pay a penalty that could have been avoided by understanding the details behind them.

For example, when user-defined functions were first introduced in SQL Server, it became "common knowledge" within a year or so that they had extremely bad performance (because each use required a recompilation) and should be avoided. This "truism" still feeds many database developers' aversion to UDFs, even though Microsoft's introduction of inline UDFs, which do not suffer from this problem at all, has substantially mitigated it. In recent years I have run into numerous DBAs who still believe you should "never" use UDFs because of this.

What other common not-so-"truisms" do you know that many developers believe, that are not quite as universally true as commonly understood, and that the developer community would benefit from being better educated about?

Please include why it was "true" to start with, and under what circumstances it's not. Limit responses to technical issues where the "common" application of a "rule or principle" is in fact correct most of the time, or was correct back when it was first elucidated, but can easily backfire or cause the opposite of the intended effect in edge cases, when the principle isn't thoroughly understood, or when the rule is applied today even though technology has changed since it first spread.

(2) I vote for reopening this - Gordon Wilson
@le dorfier: I wish you wouldn't close this. There are ideas loose in the community that are worth discussing because we can share wisdom. - Mike Dunlavey
I have voted to reopen because I consider the question to be worthwhile. - e.James
(2) It's way too general. You could key off half the other questions and object to some statement; and there's no context for discussion of single points. Read the postings. A big chunk of them are naive assertions of folklore, without regard to context. - le dorfier
Y'all know I like to push the window as much as anyone. But this one basically defines subjective and argumentative. You couldn't imagine a better example. - le dorfier
If you want points, take any one of the individual assertions and make it a separate question. A number of them are, I think, legitimate. - le dorfier
I was hoping to elicit comments about issues that are not subjective, but technical, only commonly misunderstood, and therefore generally or commonly misapplied. - Charles Bretana
(1) Take a look. Do you think you can find even one in the list below? It's mostly absolute statements that everyone knows don't apply in all circumstances. Maybe you can write a change request that we declare Subjective Saturdays or something. - le dorfier
I'm not sure about "subjective", but I don't think "argumentative" applies here. These are not adolescent flames. It's good to discuss disagreements so we can all learn. - Mike Dunlavey
(1) ... and I think much of the wisdom of more highly experienced and capable programmers is in areas of judgement that could be called "subjective" but are nevertheless valuable. - Mike Dunlavey
@le dorfier, Unfortunately, I agree with your assessment as to the quality of the responses. To be optimistic, perhaps it's mostly due to my lack of skill in framing the question. But I also agree with @Mike that it is still worth doing, even if it does get a bit argumentative - Charles Bretana
We'll fix it in the next version. - jasonk
[+51] [2009-01-11 02:11:32] tvanfosson

You need to know all of your requirements ahead of time because it's too expensive to change things later in development.

In reality, no one ever knows all of their requirements ahead of time, and you can develop code in such a way as to mitigate the inevitable changes and new requirements. This might not be as much a truism as it used to be, now that Agile development methods have gained currency.


I don't like down voting, but this is not a truism. Numerous large corporations use Agile methodologies now, and this answer is simply a vote in favor of Agile, which is really a completely different discussion. - Jonathan Beerhalter
Regarding requirements, it's good to get as much as possible up front, but it's not good to rely on it. - Mike Dunlavey
@WindyCityEagle -- it may not be quite as prevalent as it used to be among developers, but I find it to still be prevalent among my customers and, to a lesser extent, the managers in our organization. - tvanfosson
[+39] [2009-01-11 02:13:46] mezoid

Java is slow


(1) Beat me by 30 seconds. :) - Bill the Lizard
(15) You must be running a lot of Java - Michael Haren
(1) An aside question: where did this idea start? I swear on my first day ten years ago I was told "Here is your seat, your computer, and oh by the way, Java is slow" - Jonathan Beerhalter
(1) Java applets, I think. Java applets have always loaded slowly and froze the browser while they were loading. - Ross
(18) It started back when Java was first released. Back when it really was slow. - Bill the Lizard
(4) The JVM has come a long way since the early releases. Back then it was all interpreted, and yes... SLOW. Since then it's got a lot of snazzy features like just-in-time compilation and ever-improving garbage collection. It is definitely not slow nowadays. - madlep
(8) Knock knock. Who's there? ......... ........... ........... .......... .......... ........... ............ ........... ........... ........... .......... .......... .......... ............ .......... ............ ............ Java - Jason
@Ross: You're correct about everything except the past tense. They still load slowly and freeze the browser while loading. - Billy ONeal
[+37] [2009-01-11 02:26:11] Michael Stum

Lines of Code is a good way to track productivity of your developers and overall project health.


(7) I have to admit that I read this and the stopped and thought to myself, "When was this EVER true?" - JasCav
(2) So true. The days I'm feeling more productive I almost always end up with fewer lines of code than when I started. And that is good. - Sergio Acosta
[+32] [2009-01-11 02:17:14] Agent_9191 [ACCEPTED]

Never hard code any value.
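
Not a word-of-god rule, though. A minimal sketch of the nuance (the retry scenario and all names below are invented for illustration): name the values that carry meaning, and leave the trivial ones alone.

    import time

    MAX_RETRIES = 5              # named: a bare "5" in the loop says nothing
    RETRY_DELAY_SECONDS = 0.25   # named: units and intent are documented

    def fetch_with_retries(fetch, url):
        """Call fetch(url), retrying on transient failures."""
        for _ in range(MAX_RETRIES):
            try:
                return fetch(url)
            except OSError:
                time.sleep(RETRY_DELAY_SECONDS)
        raise RuntimeError(f"gave up after {MAX_RETRIES} attempts")

    # By contrast, nobody is helped by ZERO = 0; plain 0 and 1 in
    # idiomatic comparisons are fine as-is:
    #     if len(results) > 0: ...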


(2) Oh man, +5 if I could have given it. This one is sometimes treated as the word of god. - shoosh
(3) or, "Everything should be driven from the database... " - Charles Bretana
Incorrect #defines aren't a valid argument for using magic numbers. - Bill the Lizard
(20) #define FOURTY_TWO (42) - LiraNuna
Ugh. I hate magic numbers. I'm fine with #defining (or const, or static, whatever), but PLEASE DON'T HARD CODE PLAIN NUMBERS. - jvenema
(1) @jvenema -- not even 0? if (results.Count() > 0)... really bothers you so much that you'd use a macro or a variable to hold the value. - tvanfosson
OK, you win. 0, and potentially 1 (depending on the situation) are probably reasonable. Beyond that though...the number of times I've seen some random number hard coded in some code I'm maintaining makes me want to cry. - jvenema
@jvenema - surely #define MEANING_OF_LIFE_UNIVERSE_AND_EVERYTHING (42) is a valid use case? - Jonathan Day
I selected this one as the best answer because it is definitely a common misunderstanding, and is one of the answers most in the spirit of the question - Charles Bretana
[+31] [2009-01-11 03:24:47] BobTheBuilder

Programmers at the same level are completely interchangeable


[+27] [2009-01-11 02:07:41] Gordon Wilson

How about: Unit testing doubles development time


(15) One of the greatest related quotes I heard a bit ago freely paraphrased "I read twitter the other day and noticed that some management guy was complaining that unit tests are bad and hurt the product he's responsible for because now his team of bug hunters find only a fraction of the amount of bugs as they did before and now he feels a lot less safe allowing the release of new versions because of that." - Esko
(2) well, that's true if you don't know how to write unit tests. - Frank Schwieterman
An interesting aside is that I found my rate of bugs to be about twice as high inside my unit tests as in production code. Does unit testing catch production bugs? Yes, and that's priceless. Does it slow things down? Yeah: not only does it find more bugs for me to fix in the production code, but I also have to hunt down all kinds of unit test bugs too! :-) - Brian Knoblauch
This one depends on the kind of app you're writing. If your app is highly dependent on API calls, the time spent setting up mocks can be substantial. - Billy ONeal
[+23] [2009-01-11 02:44:05] Garry Shutler

You don't need to worry about security until later on in the project.


The same goes with multithreading. - Tadeusz A. Kadłubowski
[+22] [2009-01-11 02:15:25] mezoid

Documentation can be written after the software has been deployed. (We'll have time to do it then)


Saved by the parenthetical remark!! - jmucchiello
[+21] [2009-12-02 18:58:06] Tim

One Entry One Exit


(2) I've heard people argue that exceptions shouldn't be used, ever, because they break this rule. - Quibblesome
This actually was true back when using languages (such as BASIC) that didn't have support for actual functions/procedures, only goto. But that was a looong time ago! :-) - Rasmus Kaj
Back when I wrote BASIC programs, we didn't necessarily limit ourselves to only one RETURN for each GOSUB. - David Thornley
@Quibblesome joelonsoftware.com/items/2003/10/13.html - Kevin Panko
(3) @Kevin Panko - This is one of the few cases where Joel is talking total crap. In a managed environment exceptions are MUCH better. They are more flexible, can be moved around much more easily and can instantly return a helpful message (as opposed to a code). If done correctly they are miles better than error codes they also force the developer to deal with them as opposed to making it optional. They also make the code clearer as error handling isn't munged into same logic as the "normal case" code. Your error handling can exist separately. - Quibblesome
@Quibblesome For the record, I agree. Just wanted to point out a specific case of a person who made that argument. - Kevin Panko
[+20] [2009-01-11 02:29:11] Garry Shutler

Your user interface doesn't matter so long as the code works.


[+19] [2009-01-11 02:08:49] shoosh

C++ is slower than C


[+18] [2009-01-11 02:11:50] TheTXI

Everything should be done in stored procedures

or inversely

Never use stored procedures


(3) Never use stored procedures is a really great rule. Cases where stored procedures are a good idea are extremely unusual, it's pretty much always a very bad thing. - taw
(13) @Taw: Your statement is a broad generalization that lacks any supporting documentation whatsoever. Could you please provide some sort of statistical analysis that proves your claim? My experiences, though anecdotal at best, would argue otherwise. - Mike Hofer
(1) I hope that in the future there will be compilers that are smart enough to understand that some code needs to run as a stored proc and some code in another layer. The compiler should decide the best place to run a piece of code. Bring the data to the code, or bring the code to the data? The compiler or the JVM or the Common Language Runtime should make the optimal choice. - tuinstoel
[+16] [2009-01-11 03:05:26] dsimcha

There is one True way of programming that's suitable for everything, and any other way is always wrong. Mostly seen among OO or functional fanatics.


Actually, I think that "people who like to dictate extensive coding standards" might be a better-fitting set. - Mike DeSimone
[+16] [2009-12-02 19:47:47] Juliet

Big-O Notation: O(1) is always faster than O(n)

We all make this mistake -- especially me :)

I can't find the post, but I remember reading a microcontroller blogger who described a case where his hardware needed to store some key/value pairs. Performance was critical, and a hashtable with constant-time lookup seemed to make sense; if I remember correctly, this setup performed quite well for years.

Out of curiosity, the programmer swapped the hashtable for an unsorted linked list, which easily beat the hash table for dictionaries of fewer than 20 items. Later, a sorted array with binary search, giving O(lg n) lookup, absolutely demolished the hash table for fewer than 500 key/value pairs, though it was slightly slower than the linked list below 10 items.

Since the original hardware never stored more than 15-30 keys at any given time, a sorted array replaced the hash table, and our blogger became dev team hero for a day.
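
For the curious, a minimal sketch of the same comparison in Python (not the blogger's code; thresholds are platform-dependent, and in CPython the dict's constants are so small it usually still wins at this size - the anecdote's effect appears where hashing a key is relatively expensive, as on that microcontroller):

    import bisect
    import timeit

    n = 20
    table = {k: k for k in range(n)}       # hash table: O(1) expected lookup
    pairs = [(k, k) for k in range(n)]     # unsorted list: O(n) linear scan
    sorted_keys = sorted(range(n))         # sorted array: O(lg n) binary search

    def scan(key):
        for k, v in pairs:
            if k == key:
                return v

    def bsearch(key):
        i = bisect.bisect_left(sorted_keys, key)
        return sorted_keys[i]

    for name, stmt in [("dict", "table[7]"),
                       ("scan", "scan(7)"),
                       ("bisect", "bsearch(7)")]:
        print(name, timeit.timeit(stmt, globals=globals(), number=500_000))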


(1) Hopefully anyone who's taken even the most basic Data Structures and Algorithms course won't be fooled by this. - Cogwheel - Matthew Orlando
(3) Just to clarify, the problem here is people misapplying Big O by ignoring constants, locality of reference, and other real-world concerns. Your first sentence makes it sound like Big O notation itself is flawed. - RossFabricant
(7) Yup. O(1) means "Constant Lookup", not "fast". If an algorithm takes 5 seconds regardless of size of the lookup, it's O(1), but inferior to something that takes (50 ms * number of items) for sets below 100 items. - Michael Stum
+1 for the anecdote. - Rasmus Kaj
People misapply Big-O by ignoring that it only really applies when N is huge. It's an asymptotic analysis. Some people make the same mistake with randomness, invoking the Weak Law of Large Numbers when the sample set is too small to justify it. - Mike DeSimone
[+15] [2009-01-11 02:20:16] mezoid

Our project is going to miss its deadline!... Quick, let's throw more people onto the project! (i.e. the Mythical Man-Month)


(6) It takes 9 months for a woman to give birth. Adding more women will not accelerate the process. - LiraNuna
More specifically, if it takes 1 woman 9 months to give birth, 9 women can do so in 1 month. - Dolph
They would give birth by scrum ? Each one brings a piece? - Tom
@Tom: There are mythologies where gods were born that way, but from a software point of view the integration testing is awfully important. - David Thornley
[+13] [2009-05-18 10:27:41] Peter

Reference types live on the heap, value types on the stack


This is a good one. - peacedog
What language does this apply to? - Mike DeSimone
@Mike : fair question. I had the CLR of .NET in mind here. - Peter
[+10] [2009-12-02 15:01:54] Patrick Karcher

"SQL in code is bad! Get the SQL out, and then we're good on data access." This simplistic thinking contains some truth but causes a lot of problems. Good data access strategy is sooooo important.

  1. Unless you know how and why data layers, SQL functions, etc. can make things much better, just busting things out into procedures and functions can actually decrease the quality of your solution.
  2. Thinking simplistically that getting SQL out of your code is what really matters keeps you from really thinking through your data access scheme.
  3. SQL in code is a bad smell. In an imperfect world, though, you take shortcuts, and this can be a legitimate place to cut corners. But if you're not really going to separate your concerns properly, making 60 poorly named SQL procedures and functions just makes life harder for the guy who has to come fix the mess a few years later. I know because I've been that guy several times.

So on the money! +1 - Brian Fenton
[+9] [2009-12-03 01:13:18] wdh

Never, ever use a goto because it's harmful.

This was originally cited as "true" because it was noticed that code with lots of gotos was poor in quality.

This is an example of attacking the misused tool (anyone for try/catch?) instead of the real problem, which is being unable to recognize and prevent unmaintainable, poor-quality code.


In the original paper, it applied to static analysis techniques. - Paul Nathan
[+9] [2009-01-11 03:32:36] Mike Hofer

The one that irks me the most: Published "best practices" work for everyone.

Malarkey.

Every company is different. The staff is different, the business model is different, the clients are different, the fiscal outlook is different, the culture is different, the politics are different, the technology is different, the long and short term goals are different, and on and on and on.

What works for one company will not necessarily work for another company. And I cannot repeat this enough: There is no silver bullet. Just because some guy (or some group of guys) wrote it in a book and slapped a fancy title on it does not make it irrefutable, beyond reproach, or an iron-clad guarantee that it will work in your situation.

You should carefully review any given "best practice" (or mediocre practice, for that matter) for its suitability for what you're doing, where you are, and where you're going before you even think about putting it in place.

Two words, folks: Risk analysis.


I agree, although I think silver bullets do exist for certain very specific situations. As an example, for situations in which it applies, code generation is a silver bullet. - Mike Dunlavey
Published best practices are a fancy phrase for doing whatever everybody else is doing, or to express it in one word, "mediocrity". - David Thornley
Is emphasizing risk analysis best practice? - Arnis L.
Is applying best practices a best practice? - Brian Fenton
[+8] [2009-12-03 00:08:50] Juliet

Microsoft IIS is insecure / Apache is secure

You hear this one a lot too, but the criticisms of MS/IIS security are about 10 years outdated. Compare vulnerabilities on Secunia [1]:

  • Apache

    • Apache 1.3.x [2]: 22 advisories, 11 vulnerabilities, 1 unpatched (less critical)
    • Apache 2.0.x [3]: 41 advisories, 26 vulnerabilities, 4 unpatched (less critical)
    • Apache 2.2.x [4]: 17 advisories, 28 vulnerabilities, 2 unpatched (less critical)
  • Microsoft IIS

    • IIS 4.0 [5]: 2 advisories, 2 vulnerabilities, 0 unpatched
    • IIS 5.x [6]: 19 advisories, 10 vulnerabilities, 1 unpatched (not critical)
    • IIS 6 [7]: 8 advisories, 8 vulnerabilities, 0 unpatched
    • IIS 7 [8]: 2 advisories, 2 vulnerabilities, 0 unpatched

To look at it another way, there is a well-known article from Mar 2008 [9] which summarizes some findings by Netcraft and Zone-H. Although there are 1.66x as many Apache sites as IIS sites, Apache sites are defaced 2.32x as often, so the rate of defacements per site is about 1.4x higher for Apache (2.32 / 1.66 ≈ 1.4). The Slashdot reaction to this article [10] is worth reading.

[1] http://secunia.com/
[2] http://secunia.com/advisories/product/72/
[3] http://secunia.com/advisories/product/73/
[4] http://secunia.com/advisories/product/9633/
[5] http://secunia.com/advisories/product/38/
[6] http://secunia.com/advisories/product/39/
[7] http://secunia.com/advisories/product/1438/
[8] http://secunia.com/advisories/product/17543/
[9] http://4sysops.com/archives/iis-websites-are-14-times-more-secure-than-apache-sites/
[10] http://apache.slashdot.org/apache/08/03/15/1246248.shtml

My reaction is that the numbers are meaningless without some context. Vulnerabilities come in various severities, and more vulnerabilities will be publicly announced in F/OSS than proprietary software. The "0 unpatched" lines could mean that MS got all the bugs, or that MS kept quiet on vulnerabilities they found hard to fix. - David Thornley
(2) Apache's bug reports are public. IIS's are not. MS can have a list of 100 of them and never let you know, just like they do with IE (and we discover today zero-day exploits that they knew about for months). - e-satis
apache.slashdot.org/comments.pl?sid=488736&cid=22763474 - Of that same article. Just because the programmers themselves are terrible doesn't mean Apache is terrible. IIS is still slower and less tested than Apache. - Corey Hart
[+6] [2009-05-18 10:48:30] fgm

Use a simple editor or IDE and you will be productive at once.

Not spending your time learning hotkeys, regex-based editing and other power features of a professional tool may save you some days and will cost you hundreds of them.


[+5] [2009-12-03 00:43:54] RossFabricant

The more design patterns you use the better.

Applying design patterns can make code better, and it's great to have a shared vocabulary for developers. However, many solutions don't require patterns, and knowledge of patterns is no substitute for understanding algorithms, data structures, and the fundamentals of problem solving.


[+5] [2009-12-03 08:10:37] Michael Stum

Reflection (in .net, not sure about Java) is very expensive and therefore extremely slow, hence it should be avoided at all costs.
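
The answer is about .NET, but the shape of the claim can be shown with a rough Python analogy (a hedged sketch, not a .NET measurement): reflective access does cost more than direct access, yet the overhead is nanoseconds per call, which "avoid at all costs" rarely survives measuring.

    import timeit

    class Point:
        def __init__(self):
            self.x = 42

    p = Point()

    direct = timeit.timeit("p.x", globals=globals(), number=1_000_000)
    reflect = timeit.timeit("getattr(p, 'x')", globals=globals(), number=1_000_000)
    print(f"direct:     {direct:.3f}s")
    print(f"reflective: {reflect:.3f}s (~{reflect / direct:.1f}x slower)")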


Maybe not at all costs, but it certainly should be avoided for other reasons when it's not needed. It's good for things like unit tests and serialization, but most of the time if you're using Reflection to solve other problems there's a design issue. - Billy ONeal
[+5] [2009-12-02 21:49:42] Juliet

SQL Server specific: Stored procedures perform better than dynamic SQL because they're precompiled.

Don't know how many times I see this one, but it's wrong.

See SQL Server 2000 documentation [1]:

SQL Server 2000 and SQL Server version 7.0 incorporate a number of changes to statement processing that extend many of the performance benefits of stored procedures to all SQL statements. SQL Server 2000 and SQL Server 7.0 do not save a partially compiled plan for stored procedures when they are created. A stored procedure is compiled at execution time, like any other Transact-SQL statement. SQL Server 2000 and SQL Server 7.0 retain execution plans for all SQL statements in the procedure cache, not just stored procedure execution plans.

See SQL Server 2005/2008 documentation [2]:

When any SQL statement is executed in SQL Server 2005, the relational engine first looks through the procedure cache to verify that an existing execution plan for the same SQL statement exists. SQL Server 2005 reuses any existing plan it finds, saving the overhead of recompiling the SQL statement. If no existing execution plan exists, SQL Server 2005 generates a new execution plan for the query.

SQL Server creates an execution plan for every SQL statement on its first invocation, then caches the plan in memory for future use. Apart from edge cases where transmitting huge SQL strings adds network latency, there is no performance benefit gained by using stored procedures over dynamic SQL.
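
As a hedged illustration (the connection string and the dbo.Orders table below are hypothetical), parameterized dynamic SQL from any client gets exactly this plan reuse; pyodbc, for instance, sends ?-parameterized statements to SQL Server as parameterized batches, so one cached plan serves every call:

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=Sales;Trusted_Connection=yes"
    )
    cursor = conn.cursor()

    for customer_id in (17, 42, 99):
        # Identical statement text each call -> one cached execution plan,
        # the same benefit a stored procedure would get.
        cursor.execute(
            "SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = ?",
            customer_id,
        )
        print(cursor.fetchall())

The caveat cuts the other way: concatenating literals into the SQL string produces a distinct statement text (and thus a distinct plan) per value, which is what gave dynamic SQL its bad reputation in the first place.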

[1] http://msdn.microsoft.com/en-us/library/aa174792%28SQL.80%29.aspx
[2] http://msdn.microsoft.com/en-us/library/ms181055.aspx

That stored proc SQL and dynamic SQL are both precompiled doesn't prove that there are no performance benefits or drawbacks when using stored procs. Do we bring the data to the code or the code to the data? That's the question! - tuinstoel
[+4] [2009-12-03 00:39:01] Justin

"Premature optimization is the root of all evil" Knuth

In print it is very often used without the context of the full quote.

Additionally, neither of the two people said to have originated it (Hoare is the other) claims to have created it.

I typically associate the above quote with laziness and excuses when I hear or read it.

The full quote (whatever the origin):

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

The difference (made by the added qualification) is huge.


(1) I typically associate the above quote with genius; most developers I've worked with make code much, much less maintainable in the name of speed... that they won't ever notice or use. - Dean J
Worth noting too that in the context of this quote's original appearance Knuth follows it up with a detailed example of an (quite low-level!) optimization that he didn't find premature. - Derrick Turk
Huge? 3%......... - Paul Nathan
@Paul Nathan- The 'huge' I referred to was the difference in interpretation of the saying when comparing the short versus the long forms of the saying; there is a huge difference between the mindsets of considering optimization, performance, resource demands and resource utilization during architecture and development versus no.considerations.what.so.ever. However, if you want to quantify it, the first 10% of optimizing a program is often 100 times easier than the last 10% ;) - Justin
[+3] [2009-12-02 18:53:39] JB King

Pair programming means double the development cost!

"Pair programming: what researches say on the costs and benefits of the practice" [1] would be a source to counter that.

[1] http://agilesoftwaredevelopment.com/blog/artem/pair-programming-what-researches-say

Wouldn't it be "pair programming doubles the development cost" because you're paying two people to work while only one "actually" works? (I.e. assuming programming is like manufacturing, and idle hands are thus lost productivity.) Shoot, pair programming should pay for itself because programmers will spend less time on CW questions on SO... ^_^;; - Mike DeSimone
My pair programming days have gone through a bit of an evolution. While some of the time, the non-typing part of the pair may appear to be idle, he or she can be looking for typos or other optimizations to the code that may be worth exploring. It also prevents one developer from going rogue and spewing out tons of crappy code just to get something done. - JB King
[+3] [2009-05-18 10:47:48] Skizz

Computers are really clever and will solve any problem we encounter.

From what I've seen over the years, there appear to be two distinct groups of people: those who think computers are really clever and those who think computers are really dumb. Unfortunately, most people think the former is true when in fact computers are really dumb - they do exactly what we tell them to do, even if that is to start a global thermonuclear war.

Skizz


"I really hate this damn machine, I wish that they would sell it. It never does quite what i want, but only what I tell it!" - NVRAM
But you can't deny that they are good listeners. :) - Arnis L.
[+2] [2009-01-11 03:19:24] Mike Dunlavey

Performance-related falsisms:

  • To find performance problems you have to run the code as fast as possible and time it every which way, guessing where the problems are based on how long things take or how many times they are invoked.

That is fine for monitoring program health, but pinpointing problems is not about measuring. It's about finding cycles that are being spent for poor reasons. This does not require running fast. It requires detailed insight into what the program is doing (typically via sampling as much of the program state as possible and understanding in detail why it's doing what it's doing at each sample time).

  • To find performance problems you need a large number of samples so as to get high measurement precision.

Typical performance problems worth pursuing take from 10% to 90% of execution time. (That is how much execution time is reduced after you fix them.) The object is to find the problem, not to know precisely how big it is. Even a small number of random-time samples is virtually guaranteed to expose the problem, assuming they are taken during the overall time span when the performance problem exists. (A minimal sampling sketch follows this list.)

  • Compiler optimization matters.

It only matters in code that 1) you actually compile (as opposed to libraries), and 2) you actually spend much time in (as opposed to code that spends all its time calling functions, explicitly or implicitly).
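
A minimal sketch of the random-time sampling described above (assuming a Unix-like OS and a single-threaded CPython program; busy() is a stand-in for the real workload):

    import collections
    import random
    import signal
    import traceback

    samples = collections.Counter()

    def take_sample(signum, frame):
        # Record the whole call stack; agreement across a handful of
        # samples pinpoints the problem - timing precision is irrelevant.
        samples["".join(traceback.format_stack(frame))] += 1
        # Randomize the next interval so samples don't sync up with
        # periodic behavior in the program.
        signal.setitimer(signal.ITIMER_REAL, random.uniform(0.05, 0.15))

    signal.signal(signal.SIGALRM, take_sample)
    signal.setitimer(signal.ITIMER_REAL, 0.1)

    def busy():
        total = 0
        for i in range(30_000_000):
            total += i * i            # the "hot spot" the samples should reveal
        return total

    busy()
    signal.setitimer(signal.ITIMER_REAL, 0)   # stop sampling

    for stack, count in samples.most_common(3):
        print(f"--- seen {count} times ---\n{stack}")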


[+2] [2009-01-11 02:11:09] a'b'c'd'e'f'g'h'

Always use stored procedures.


[+2] [2009-12-02 19:00:37] Jordan Ryan Moore

Exponential-time algorithms are slower than polynomial-time algorithms.

In linear programming [1], the simplex algorithm has exponential worst-case running time, but in practice it is typically much faster than its polynomial-time ellipsoid counterpart.

[1] http://en.wikipedia.org/wiki/Linear%5Fprogramming

[+2] [2009-12-02 19:04:44] Sarah Vessels

Based on a paper from 1978, people quote that maintenance is 20% corrective, 20% adaptive, and 60% perfective. These percentages came from a survey of managers' opinions, not from empirical evidence. In 2003, another group of researchers (Stephen R. Schach, Bo Jin, Liguo Yu, Gillian Z. Heller and Jeff Offutt) challenged this by studying maintenance data for Linux, RTP, and GCC, and found wildly different numbers. See their paper here: Determining the Distribution of Maintenance Categories: Survey versus Measurement [1].

[1] http://cs.gmu.edu/~offutt/rsrch/abstracts/LST-maint03.html

[+2] [2009-12-03 00:45:50] Jeffrey Hantin

From the premature-optimizations department:

Denormalize your schema up front because normalized schemas are too slow and full of joins to be usable in the Real World.


[+2] [2010-03-18 20:22:00] Dean J

PHP isn't a language you should use for serious websites.


(9) I will still keep trashing php, just for the fun of it. - Tom
If the PHP developers whose code I had to go back and maintain had designed any of the code for maintenance, I'd think a lot better of it, too. - Dean J
[+1] [2010-03-18 20:10:53] jasonk

Staying late and working overtime is the only way to make deadlines.

...sure, until you are so bloody exhausted you can barely see straight and the excessive caffeine leads to the shakes or a mental kernel panic.

... to heck with better planning/doing actual estimates/setting more realistic expectations.


[+1] [2010-04-29 08:37:01] nikie

Low-level languages (Assembler, C) produce faster code than high-level languages (C++, Java, OCaml). Often when you show people benchmarks that prove the opposite, they even think there's some kind of "trick" involved, because "nothing can be faster than C except assembly, right?"


[+1] [2009-12-03 00:20:44] Ladlestein

Design your application from the ground up: start with the database model.


[0] [2009-12-03 00:55:49] timarmandpour

Number Of Bugs per Line Of Code measures Quality (yep, not so true or relevant in the practical world as we know it today)


[0] [2009-12-03 19:18:24] timarmandpour

Business Development Guy: "If I can write the spec, then anybody can write the spec...so anybody can build my product"


[0] [2009-12-03 19:27:35] Pillsy

Static typing and strong typing are the same thing.

There are plenty of languages that are strongly and dynamically typed out there; Python is a particularly popular example.
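
A four-line illustration of the distinction in Python:

    x = 1
    x = "now a string"    # dynamic typing: rebinding to another type is fine

    try:
        "1" + 1           # strong typing: no silent str/int coercion
    except TypeError as err:
        print(err)        # can only concatenate str (not "int") to str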


[0] [2010-03-18 20:46:28] pborenstein

We can defer this bug as long as we document it in the release notes.


[0] [2010-03-18 20:47:03] e-satis

A more recent one:

Don't bother with that, hardware is cheap, we'll buy more servers.

Yeah, hardware is cheap. But when you buy a server, you pay every month for hosting and/or electricity and/or bandwidth, and you add an extra cost to your maintenance too. You spend more time on migrations and deployments.

Yes, hardware is cheap to buy, but unless you are a cloud-computing-virtualisation-sysadmin hero, owning a new server has a significant ongoing cost.

