What are, in your opinion, the worst subjects of widespread ignorance amongst programmers, i.e. things that everyone who aspires to be a professional should know and take seriously, but doesn't?
Related to perf:
Related to new folks without history or experience in the industry (this is new programmers or "the business guys"):
Surprises me how often these repeat.
I have many, but this one makes me want to hurt myself:
"...but it was working before."
Programmers who do not unit test their code and then get upset with QA when bugs are found which obviously demonstrate this fact.
When people say "Oh, this is so simple, I know it works, there's no point of writing a test for it".
They are completely missing the point of the test: it isn't just to verify that the code works now, it's to verify that it still works when people make changes down the road.
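Even the "so simple" function deserves a test, because the test's real job is catching the day someone changes it. A minimal sketch, with a hypothetical function and plain asserts:

#include <cassert>

// Trivially "obvious" today; the test exists for whoever edits this next year.
int percentToBasisPoints(int percent) {
    return percent * 100;
}

int main() {
    assert(percentToBasisPoints(0) == 0);
    assert(percentToBasisPoints(1) == 100);
    assert(percentToBasisPoints(-5) == -500);
    return 0;
}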
Ignoring the latest community libraries/techniques, and continuing to develop software the way people did ten years ago.
People who know only one language, the one that "can do everything". They approach every problem as if their "can do everything" language were the only tool, and never stop to see what else can be done in other paradigms.
In C++:
Myth: NULL is not always zero; it depends on the null pointer address.
Reality: NULL is always zero, and it's not an address. It's an integer.
Many confuse NULL with an address, and therefore think it's not necessarily zero if the platform has a different null pointer address. But NULL is always zero and it is not an address. It's a zero constant integer expression that can be converted to pointer types. When converted, the pointer created is called a "null pointer", and its value is not necessarily the lowest address of the system. But this has nothing to do with the value of NULL.
In C++ it is always an integer, never a pointer expression. There's a reason I didn't include C in this, so if you think it deserves an entry about C with that wording - go for it. But this comment section is the wrong place. - Johannes Schaub - litb
In C, NULL can be defined as 0L, '\0' or any other zero value - in this way, 0 and NULL are not always completely identical, though it's allowed in C to have NULL be 0 too. The matter is more difficult in its details, so I omitted the C side of this on purpose, because I wanted my answer to be a rough overview, not a detailed specification :) - Johannes Schaub - litb
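A quick way to see the integer-ness of NULL in action - a sketch assuming NULL expands to plain 0 (compilers that define it as 0L, or as a built-in like GCC's __null, may warn about or reject the overloaded call):

#include <cstddef>
#include <iostream>

void f(int)  { std::cout << "f(int)\n"; }
void f(int*) { std::cout << "f(int*)\n"; }

int main() {
    f(0);          // f(int): 0 is an integer
    f(NULL);       // f(int) again - NULL is an integer constant, not a pointer
    int* p = NULL; // converting it is what yields a null pointer
    f(p);          // f(int*)
}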
Programmers that think they don't need to consistently indent code and make it as readable as possible for the next developer.
It continues to amaze me how many people believe that:
All programming languages are fundamentally the same; they just use different syntax.
Jumbo Methods
I can't stand "jumbo methods," often characterized by:
e.g.
void DoIt() {
    // if the view is valid
    if (TextBox1.Text != string.Empty && ...) {
        var sum = 0;
        // process each element
        for (var i = 0; ... ) {
            // make sure it's a good element
            if ( ... ) {
                Status.Text = "Bad Element";
                break;
            }
            // process each subelement
            for (var j = 0; ... ) {
                // add to the sum
                sum += ...
            }
        }
        // remember the sum
        ViewState["sum"] = sum;
    }
    // if the view is not valid
    else Status.Text = "Required field missing";
}
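The usual cure is to extract each commented block into a small, well-named method, so the comments become names. A hedged sketch of the shape, in C++ for the sake of something compilable (Element, doIt and friends are hypothetical names, not the ASP.NET API above):

#include <numeric>
#include <string>
#include <vector>

struct Element { bool good; std::vector<int> subelements; };

bool isValid(const std::string& input) { return !input.empty(); }

int sumSubelements(const Element& e) {
    return std::accumulate(e.subelements.begin(), e.subelements.end(), 0);
}

// The jumbo method shrinks to a readable outline: every former comment is now a name.
std::string doIt(const std::string& input, const std::vector<Element>& elements, int& sumOut) {
    if (!isValid(input)) return "Required field missing";
    sumOut = 0;
    for (const Element& e : elements) {
        if (!e.good) return "Bad Element";
        sumOut += sumSubelements(e);
    }
    return "";
}

int main() {
    int sum = 0;
    std::vector<Element> elements = {{true, {1, 2}}, {true, {3}}};
    return doIt("some input", elements, sum).empty() ? 0 : 1;
}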
Any form of cargo culting [1] ( programming [2], general computer usage, etc.).
“Hmm, the result still has “&gt;” thingies in it. I guess I have to add yet another decoding pass.”
Take the time to understand the involved specifications and standards to find out whether this is a bug in the content the program is consuming, a bug in the standard, a bug in the specification, or a bug in some other part of the system. If the data is supposed to be free of character entity references at this point, then there is a bug somewhere (unless the bug is in the specification and it really is OK to have character entity references at this point!). Find the bug to understand the problem.
“Hmm, that did not work. sudo worked on this other thing I was doing, I guess I will try it here, too.”
The solution to every permission problem is not “do it as root”.
Do not “paper over” the immediate problem with the first thing that comes to mind. Solutions from one problem should not be automatically applied to any other problems unless the solution will solve the problem in all applicable situations.
Getting a solution from “the Internet” is fine, but do not blindly apply it. Read the documentation, read the code, do some research, experiment in temporary environments. Learn from the given solution. Do not parrot it. Only with a proper understanding of a particular solution can one determine whether it is a proper solution for a specific problem.
[1] http://en.wikipedia.org/wiki/Cargo_cult
jQuery != javascript
I see a lot of questions around here asking "How do I do X in jQuery?", even when the OP has no idea whether jQuery is relevant.
This has elements of other answers: fanboyism, and using frameworks that you don't understand.
There are some great answers at the top! This really is just a personal pet peeve.
The belief that other coders are incompetent.
Pointless Tables and ASP.Net controls
Example: clean, functional code
<div>
I is in your browser
</div>
<div>
Showing your static text
</div>
Horrible beyond belief code
<asp:Table ID="table1" runat="server">
  <asp:TableRow ID="row1" runat="server">
    <asp:TableCell ID="data1" runat="server" Width="191px">
      <asp:Panel runat="server" ID="pnl1">
        <asp:Label runat="server" ID="lbl1" Text="I is in your browser" />
      </asp:Panel>
    </asp:TableCell>
  </asp:TableRow>
  <asp:TableRow ID="row2" runat="server">
    <asp:TableCell ID="data2" runat="server" Width="191px">
      <asp:Panel runat="server" ID="pnl2">
        <asp:Label runat="server" ID="lbl2" Text="Showing your static text" />
      </asp:Panel>
    </asp:TableCell>
  </asp:TableRow>
</asp:Table>
What bothers me the most is people who code to just accomplish a task without putting even a minimal amount of thought into planning what they should do.
Listening to ASP.Net (VB) developers with several years experience comment that they cannot work with a particular application because it is written in ASP.Net with C# and they "don't know how it works".
I got a new one. From a coder I worked with recently.
What do you mean standards?
He didn't know what W3C was, how to write compliant code and had database columns listed as "unecrypted_password". I die a little inside each time I meet people like this and find that they get paid more than I do. :(
Assumption that your program/driver/code is important to the user.
The user wants to check their e-mail, watch videos, listen to music, write their novel. They do not want to know that your program just finished re-evaluating its network connection, and now has the ability to launch sheep to mars.
The user does not want to memorize all of the rules that your program requires to run correctly, and they certainly do not want to learn to program some new and truly inventive language just to fill in a config file.
The user does not want to wait while you eat all of the resources building a pretty icon cache.
The user just wants to jump on, check their e-mail and leave.
Ignorance of one's own limitations. I can't stand it when someone thinks they know everything there is to know about a topic and gives useless or harmful "advice" to someone else.
Comparing floating point values from different calculations without using an epsilon.
Example:
if(sin(x) == cos(y)){ /* do something */ }
instead of
/* here epsilon is taken as 0.001; fabs (from math.h) keeps the comparison in floating point */
if(fabs(sin(x) - cos(y)) <= 0.001){ /* do something */ }
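Note that C's abs() works on ints and would silently truncate, hence fabs above. An absolute epsilon like 0.001 also misbehaves when the operands get large, so a common refinement is to scale the tolerance. A minimal sketch (the tolerance value is my assumption, not a universal constant):

#include <algorithm>
#include <cmath>
#include <iostream>

bool nearlyEqual(double a, double b, double eps = 1e-9) {
    // Scale the tolerance to the magnitude of the operands.
    double scale = std::max({1.0, std::fabs(a), std::fabs(b)});
    return std::fabs(a - b) <= eps * scale;
}

int main() {
    // cos(pi/2 - x) == sin(x), so these should compare equal.
    std::cout << nearlyEqual(std::sin(1.0), std::cos(0.5707963267948966)) << '\n'; // prints 1
}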
People who think "real-time" means fast. Real-time means guaranteed response times; usually it takes more time and effort to make sure that tasks are achieved on time.
My pet peeve is poor variable naming! Having spent some time as a maintenance programmer, I can tell you that poor variable naming is pure Hell! You should never name things after yourself, after your pets, or after anything that does not relate to what it is and/or what it is doing in your code! I should never see anything like:
if (john == 0) {
    return fido;
} else {
    return fluffy;
}
5 days from now, no one will know what you were doing, let alone 5 years from now!
Humility. Eric Evans said it in his foreword to Jimmy Nilsson's book Applying Domain-Driven Design and Patterns: the best coders have the rare combination of self-confidence and humility.
I find many developers have plenty of self-confidence but do not take well to good criticism. Dunno whether this can be blamed on ignorance of human nature.
"It seems to work for me, so I won't bother reading manual/specification to do it correctly"
This is why HTML, JavaScript, feeds and HTTP (caching, MIME types) are in such a sorry state.
Ignorance of the principles of reusable code and parameters. A ColdFusion developer I inherited code from had made several pages with names like getWallProducts.cfm, getFloorProducts.cfm, getCountertopProducts.cfm, getBacksplashProducts.cfm, etc. Each of the pages was absolutely identical except for the WHERE clause in one SQL query.
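The fix is one function (or page) with the varying part as a parameter. A language-neutral sketch, here in C++ with hypothetical names - and in real code the category would be a bound query parameter, never concatenated into the SQL:

#include <iostream>
#include <string>

std::string getProducts(const std::string& category) {
    // Stand-in for the real data access; bind `category` as a query parameter.
    return "SELECT * FROM products WHERE category = :category  -- " + category;
}

int main() {
    for (const char* c : {"wall", "floor", "countertop", "backsplash"})
        std::cout << getProducts(c) << '\n';
}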
Bad or incorrect knowledge of data structures.
"I need to find all untranslated strings in our source. I'll just build an array of all the strings, copy it and compare them to eachother."
Congrats on your n-squared solution. Some folks with modern CS degrees don't even know what a hash-map does. Or why you would ever use one as opposed to an array or list etc...
Drives me nuts.
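For the untranslated-strings example above, a hash set turns the quadratic scan into a single linear pass. A minimal sketch (hypothetical data standing in for the real string tables):

#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

int main() {
    // Stand-ins for "every string in the source" and "strings that
    // already have translations".
    std::vector<std::string> sourceStrings = {"Open", "Save", "Quit", "Help"};
    std::unordered_set<std::string> translated = {"Open", "Save"};

    // One O(1) average-time lookup per string: O(n) overall instead of
    // the O(n^2) compare-every-pair approach.
    for (const std::string& s : sourceStrings)
        if (translated.count(s) == 0)
            std::cout << "untranslated: " << s << '\n';
}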
Copying code from another application they've worked on containing the functionality they want to use, and not changing the variable names (that only make sense in context of the original application) because "the client will never see the code."
Oy. Do I try to explain that the client can and will see the code in any variety of instances, or that this will drive fellow team members crazy/confused, or that the PM will have a conniption when she requests full documentation of the system and sees processes named after other clients' products?
Displaying a message box instead of raising an exception when a method fails to do its job. For example, a Save() method in a Form simply showing a message box, instead of raising an exception, because the user hasn't filled in some required field, etc.
Because they don't raise an exception, any code calling the Save method has no freaking idea that the Save failed or why it failed!
Typically at this point I'd expect at least one person to say that exceptions should be used "exceptionally", i.e. rarely. If you follow this philosophy then you still need some way to tell your calling code that you failed, which means changing your method signature so it returns failure details either as a result or an out parameter, etc. And of course your calling code will need to tell its calling code that it failed, and so on. Ahh, hello world, this is exactly what exceptions are built for!
Maybe this thinking doesn't work in all frameworks (like web, etc) but in Delphi Windows applications it's perfect as unhandled exceptions don't crash the application, once they travel back to the main message loop the app simply shows a presentable message box to the user with the error details, they click OK and program flow continues to process messages again.
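A sketch of the idea with hypothetical names (the Delphi-flavoured reasoning above, in C++ syntax): Save() reports failure by throwing, so every caller finds out, and the message box lives in one place near the top of the call stack:

#include <iostream>
#include <stdexcept>
#include <string>

void Save(const std::string& requiredField) {
    if (requiredField.empty())
        throw std::runtime_error("Required field missing");
    // ... actually persist the record ...
}

void OnSaveClicked() {
    try {
        Save("");  // fails: the field is empty
    } catch (const std::exception& e) {
        std::cout << "message box: " << e.what() << '\n';  // UI concern, handled once
    }
}

int main() { OnSaveClicked(); }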
Overengineering, usually to make unnecessary optimizations.
These are usually done by senior developers. They usually add a lot of complexity while adding minimal (if any) speed improvements. What's worse is that after these are done, some other unlucky developer gets stuck with the "optimized" code.
Programmers that have absolutely no idea whatsoever what "malloc" means/refers to.
So far in my years of development I have found that I most resent programmers who can't keep deadlines. It's OK to go over because of some unforeseen trouble, but to look someone in the eye and say "It will be finished tomorrow" and then start coding next week is not acceptable.
I've found that a lot of programmers don't know about the for loop. They'd rather use:
Dim i as Integer = 0
Do Until i > 10
'do stuff
i = i + 1
Loop
And when I tried to tell one of them about the for loop, he got mad and said he wasn't going to rewrite all his code just to use a different kind of loop.
Always starting by writing concrete classes instead of "programming to an interface".
Cowboys who just want to write code before they have finished understanding and debugging their business rules & requirements. Once you have finished slashing your business requirements and rules with Occam's Razor [1], the code, modules, libraries, data structures etc. that you need will be bleedingly obvious.
Horse first, then cart.
[1] http://en.wikipedia.org/wiki/Occam%27s_Razor
The difference between "I need to get this done" and "I need to get this done here" (as in: I need to add code in this specific location). By far the biggest issue I have encountered in scaling systems up is code, written by various people, that puts a lot of logic that should live in separate levels of abstraction in a single place.
My favorite one is the claim that linked lists are quicker than array lists for adding and removing items in the middle. So many people fail to grasp the subtler concept and give the canned answer that everyone seems to propagate. This is in Java in particular, but the pet peeve applies to the concept in general. Say you have list.remove(2000) in a list of 4000 items: they claim it will be quicker in a linked list than in an array list. What they forget is how long the above call takes to find the 2000th item (O(n)) before removing it (O(1)). The iteration will be done in Java code many times over. With an array list, it will be a low-level memory copy which, while O(n) as well, will be quicker in most cases than iterating a linked list.
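A rough C++ analogue of the same trade-off (std::list vs std::vector): both erases are O(n) overall, but the vector's contiguous copy usually beats the list's pointer walk. Timings are machine-dependent, so treat this as a sketch, not a benchmark:

#include <chrono>
#include <iostream>
#include <iterator>
#include <list>
#include <vector>

int main() {
    const int N = 400000;
    std::vector<int> v(N, 1);
    std::list<int> l(N, 1);

    auto t0 = std::chrono::steady_clock::now();
    v.erase(v.begin() + N / 2);            // O(n), but one contiguous memory copy
    auto t1 = std::chrono::steady_clock::now();
    l.erase(std::next(l.begin(), N / 2));  // O(n) pointer walk + O(1) unlink
    auto t2 = std::chrono::steady_clock::now();

    std::cout << "vector erase: " << std::chrono::duration<double, std::milli>(t1 - t0).count() << " ms\n";
    std::cout << "list erase:   " << std::chrono::duration<double, std::milli>(t2 - t1).count() << " ms\n";
}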
With large datasets being moved between systems in XML, not understanding the merits of SAX over DOM, and the performance implications of selecting DOM simply because it is easier to implement. I have seen a number of totally unnecessary performance bottlenecks and system failures over this, with XML getting blamed rather than the lazy parser implementation.
Code is not just for communicating with the computer, but also with fellow programmers.
You can throw all sorts of rules on comments and variable names at people, but it really doesn't matter that much. Either they grok the above (and don't need rules except perhaps as guidelines) or they don't (and will only obey the letter of the rules).
Just one of the most symbolic examples of ignorance in programming (C#):
private string GetMonth(int Number)
{
    switch (Number)
    {
        case 1: return "January";
        case 2: return "February";
        //And so on...
        default: return "Invalid";
    }
}
Use CultureInfo.CurrentCulture.DateTimeFormat.GetMonthName(monthNumber) instead. - dbkk
.NET != C++
Saw this yesterday: a programmer wrote some code in VB.NET which passed all parameters ByRef between a few dozen functions. I asked him why he wrote it in that style, and he commented that .NET would make a complete copy of every array parameter before passing it to another function. I corrected him: "yes, it'll make a copy... of the pointer, but not the entire array".
He fought with me on that fact for a few minutes. I decided it wasn't worth my time to "fix" code that wasn't broken, so I left it as is.
The myth that writing the code is the main part, while debugging is just an extra.
They are both faces of the same coin: if one is shitty, the overall result will suck.
The power of Google. Or Find in Files.
This is really anal, but I abhor the use of NULL to describe the character that is used to mark the end of a string in c.
NULL is associated with pointers.
NUL is the name of the ASCII character that represents '\0'.
The ASCII character may be called NUL, but in C the terminator is called the "null character". - James McNellis
My pet peeve is programmers who don't try to understand something; they have the attitude that the "compiler" will figure it out for them.
"It doesn't matter that Java doesn't have feature X - because with a little bit of coding around (in all the places where X would be used) achieves the same effect"
Where X can be:
It's another way of saying "I get paid to do Java and it's a general purpose programming language so I don't need to bother to learn anything else"
Some prefixing of variables is just aggravating: <short_product_name> prefixes, for instance. If your product is ABC Accounting, you end up with all sorts of variables like abcWindow and abcSqlConnection. - Earlz
The belief that the ease of writing code in a language is a more important quality of the language than the ease of reading code in that language.
Using ".Net is the future" as the primary justification for rewriting existing working software. The same developer once described it as "God's will".
I read 4 pages of that and I really need to post... (hope there are no reposts ;))
I hate when developers forget that they write software to BE usable by people who are not programmers, and don't take into account that something may be difficult for a non-programmer to grasp.
Also, 'index-based programming' - a fantastic paradigm I have kept on rediscovering in many lines of code, e.g.
List<int> linesChecked
List<Rectangle> drawnRectangles
List<whatever> something
and then orchestrating all the lists using one index, because the things are ESSENTIALLY a single object (see the sketch below). Duh.
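A sketch of the cure in C++ terms (field names are hypothetical): the parallel lists collapse into one list of one struct, and the shared index disappears:

#include <string>
#include <vector>

struct Rectangle { int x, y, w, h; };

// The anti-pattern: parallel lists glued together by one shared index.
std::vector<int> linesChecked;
std::vector<Rectangle> drawnRectangles;
std::vector<std::string> something;

// The fix: the values at index i were always one object, so model them as one.
struct CheckedLine {
    int line;
    Rectangle drawn;
    std::string whatever;
};
std::vector<CheckedLine> lines;

int main() {}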
A third: leaving out the default: in a switch statement... it should be there because something CAN really happen - someone can extend the enum and recompile, whatever... duh (an infinite source of bugs for me :)
Changing code, but not updating comments
I come across code sometimes that has evolved over time, but the people who worked on it didn't update the comments around the code - so that the comments refer to what was there before, and not the current code.
Misleading comments in the code are worse than no comments at all.
Not just programmers (though they are unfortunately very much represented in this group), but I'm annoyed by people who don't understand or appreciate the role of research in driving progress in technology (I say this as an industrial programmer, not a researcher, btw) and don't understand how long it takes for something to go from an idea to mainstream reality, or how long a history each "new hot technology" really has.
The My-app-should-run-in-full-screen-by-default-even-if-you-have-a-30-inch-monitor-attitude.
My peeve is programmers that don't consider the memory footprint of their software. They develop code using STL or other data structures and they automatically pick a set or map instead of a vector or deque.
Gnome software does this a lot. One of the data structures provided is a GTree that uses GNodes that have five pointers each! Some people use this to store data items smaller than the nodes!
Now imagine what this does when built with 64-bit pointers: five pointers per node is already 40 bytes of overhead before a single byte of payload.
When programmers confuse classes and instances when talking about systems. The word object often gets used ambiguously to refer to classes and instances in both casual and formal conversations about architecture and software engineering.
somefunction(){
    try { .... put your whole code here .... }
    catch {} // empty catch!!
}
Claiming that things like if/else vs. switch will obviously improve the performance of a program.
Partly because it's premature optimization, but also because people making such claims don't know what they're talking about, and yet they feel the need to teach other people how everything works.
Other examples:
Reverting a 'small' refactoring because running the unit tests before committing takes too long.
Developers who don't understand .NET naming conventions.
Examples (from a real library that I had to use):
public delegate void FooBarVoidVoidDelegate(); // FooBar is the component name.

public enum FieldTypeEnum {
    OBJECT_NAME,
    FIELD_SIZE
}

public delegate void ConnectedDelegate(object sender_, ConnectedEventArgs args_);
(... what delegate means, and that I have done very little with .NET ...) - SamB
The remark "reinvent the wheel". Look around you, do you see one size of the wheel fitting all?
In a dynamic language, not using Duck Typing and littering the code with tonnes of switch statements!
They just develop skill in one language and try to do everything with it; they don't try to understand that there are scenarios when C++ should be preferred over C#. They don't think outside the box.
I find myself particularly irritated when I encounter programmers who act as if documenting bugs is enough to get along, instead of fixing them - people who even argue against those who discover bugs in their code, defending a defective implementation with a "working as designed" attitude.
I hate the "known problems" sections in readme files.
There are a lot of good answers here already. Here are my top four:
Continuous Learning: It's one of my most important interview areas. If you aren't learning anymore, you shouldn't be in this business.
Arrogance: I have no patience for a developer that says something should be done a particular way "because it's the only way".
Over-commenting: If your code is written so that it can be maintained by others, it doesn't need comments describing each line.
Consuming Errors: Putting a try/catch around each section or just returning from each function when an error condition occurs isn't HANDLING an error. It takes a lot more time to track down a bug if an error condition is consumed.
Programmers who decide to deviate from a standard just to make it work. This lack of concern also means they won't share this information until after they have committed their several revisions.
Indifference towards brain-dead compilation/build/project-organization methods that create an unbelievable amount of mundane work.
I kid you not, I've seen a dev environment where checking out the latest revision requires 10 (gui) steps, and so does building. Running the full test suite might well require 100+ (gui) steps.
BY FAR my biggest annoyance is: Leaving Code Commented out everywhere!
Not improving code that was fixed, just leaving it alone
Most programmers don't seem to realize that any database product based around SQL is not a relational database. Somehow the whole concept of a relational database gets smeared because of how awful SQL is. Web developers now want to use new, untested database paradigms because they just can't stand the idea of using a "relational" (that is, an SQL-based) database. Go ahead and read the SQL standard and try to find any occurrence of the word "relation" or "relational".
In reality, there has never been a mainstream relational database. There's a couple research programs (like rel) that implement the relational concepts. But it's all got this kind of grampa's suspenders air about it, that nobody wants to touch, because it's just not hip to be mathematically and logically rigorous nowadays.
In C++:
Myth: std::getline is a global function.
Reality: std::getline is not a global function, but a function defined in the namespace std.
There is a common belief that things defined in namespaces other than the global namespace are all global. In fact, that can cause confusion as to where stuff is really defined.
Here is an example of how to avoid the confusion. Instead of saying such things as
1) global int variables are initialized to zero if an initializer is omitted.
say the following, which is more correct and probably what you really want to say:
1) namespace-scope int variables are initialized to zero if an initializer is omitted.
Note that just because there is no "namespace ... { ... }" around the global scope doesn't mean that there isn't a global namespace: this namespace is not user-defined; it's implicitly created by the compiler before anything else happens.
Names in std are not global. Try int main() { cout << "a"; } and you get an error; you need to qualify the name with std:: or use a using-declaration/directive. That's what makes them very different from global names. - Johannes Schaub - litb
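A minimal sketch of the point:

#include <iostream>
#include <string>

int main() {
    std::string line;
    // getline must be qualified (or pulled in via a using-declaration):
    // it lives in namespace std; it is not a global function.
    if (std::getline(std::cin, line))
        std::cout << "read: " << line << '\n';
}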
My Pet Peeve?
Undocumented code. All the rest can be solved or worked around.
Most of my "favorites" are already up here, but here's one I just ran into again last week (from an otherwise decent programmer):
Traversing the ENTIRE XML DOM tree, when searching for a specific node (or nodes), using methods such as Children[], NextSibling(), etc.... instead of a simple call to SelectSingleNode (or SelectNodes) with a simple XPath expression. This of course resulted in many recursive calls, not to mention HORRENDOUS performance...
Of course, this can be generalized as "not using code the way it is meant to be used".
Unreadable code. And large, flat LabVIEW block diagrams that take a couple thousand pixels in both directions. And bland and ugly UI's. And noisy workplaces. And knowledge silos. (What? we can only have one?)
My pet peeve is a sort of brain-washing that most programmers don't even realize has happened to them - namely that the von Neumann machine is the only paradigm that is available when developing applications. The first data processing applications using machinery were what was called "unit record", and involved data (punched cards) flowing between processing stations, and the early computers were just another type of station in such networks.
However, as time went on, computers became more powerful. Also the von Neumann architecture had so many successes, both practical and theoretical, that people came to believe this was the way computers had to be! However, complex applications are extremely difficult to get right, especially in the area of asynchronous processing - which is exactly what the von Neumann machine has trouble with! On the other hand, since supposedly computers can do anything, if people are having trouble getting them to work, it has to be the fault of the programmers, not the paradigm...
Now, we can see the von Neumann paradigm starting to run out of steam, and programmers are going to have to be deprogrammed, and "go back to the future" - to what is both an earlier, and a more powerful, paradigm - namely that of data chunks flowing between multiple cores, multiple computers, multiple networks, world-wide, and 24/7. Paradoxically, based on our experience with FBP and similar technologies, we are finding that such systems both perform better, and are easier to develop and maintain.
Programmers writing "helper" code that can easily be found in the default language API or in a commonly used library such as Apache Commons - e.g. buggy date validation that doesn't use a regex to parse and breaks on certain dates, because they were unable to understand that testing means more than merely running it once on their machine.
"This code would be so much better if I could just re-build it from the ground up."
No, it wouldn't [1].
[1] http://www.joelonsoftware.com/articles/fog0000000069.html
Data structures... people don't know what they are first... but go on with programming.
Thinking that their code is the reason the business exists, not the other way around.
The belief "What I think is good is always best"
I've worked with many programmers who believe they know what's best for everyone because it's what's best for them. The "best" code is the best solution to the code user's problem. It may not be the most elegant, fastest, easiest to maintain, coolest, easiest to read, best documented, most advanced, newest technology, etc.
Unfortunately too many employers don't spell out their requirements for good decision making so I can't really complain too much.
Incomplete/inaccurate/missing documentation
I know this was partially answered before, but mine has two parts:
When I inherit code from another programmer, I'd like to see what the intent was behind the code. When I started my current job, I inherited a number of classes that were heavily dependent upon a knowledge of specialized business math. In order to make corrections, I had to spend a lot of time with senior management learning what the math was supposed to be, why, and how it would look on paper (immediately got documented). With proper documentation, my time spent would have been cut by 75%.
Component documentation. My company spent hundreds of dollars for your company's components a few months ago as they have the most functionality that we can use. Now you have a new release, and the documentation is incomplete and inaccurate because you did an overhaul on your methods and properties. Now I have to spend multiple hours figuring out what's wrong with my code because of your changes. Now I have to hunt through source code and keep playing with different switches because it no longer works properly. Please, is it too hard to document not just the change log, but also the help files? I know it's not. All you have to do is show sales a noose and ask them to back off their artificial deadline before the deadline gets used on them (not really, ignore last part).
As a part of number 2, why don't component developers ever document custom exceptions their code can throw? Why do I have to waste more hours testing every case to find out what exceptions can get thrown? Because I don't have the time to find every obscure exception buried 6 or 7 layers deep in the component code that might hit me, I'm forced to generic handling (and yes, this is based on my reality, no exaggeration or whining intended).
For me the worst thing someone who wants to be a programmer can ignore is the need of commenting. Everything else doesn't matter so much, as in a normal workplace somebody else can fix the error. While those issues may be time consuming, not commenting can make any of the necessary fixes or changes take much longer than necessary since any developer would have to figure out what the code does before they can make any changes. Some would say that this isn't an issue if the same developer who wrote it is making the changes, but even then people won't recognize what the code does. This is why commenting is so important. Sadly, I know of a graduate director who has only ever written one comment in 147,000 lines of code and most of it doesn't work the way it should.
Emulating sets with dictionaries, or failing to see when a set would be appropriate.
Setting the value in a dictionary to some random value instead of just using a set data structure.
d[key] = 1 # now `key` is in the set!!
Or general ignorance of set data structures, using lists/arrays instead.
Sets offer O(1) lookup and awesome operations like union, intersection, difference. Sets are applied less often than they are applicable.
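A sketch in C++: std::unordered_set is the O(1)-lookup variant, while the ordered std::set feeds the standard set algorithms directly:

#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>

int main() {
    std::set<int> a = {1, 2, 3, 4};
    std::set<int> b = {3, 4, 5};

    std::set<int> u, n, d;
    std::set_union(a.begin(), a.end(), b.begin(), b.end(), std::inserter(u, u.end()));
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(), std::inserter(n, n.end()));
    std::set_difference(a.begin(), a.end(), b.begin(), b.end(), std::inserter(d, d.end()));

    std::cout << u.size() << ' ' << n.size() << ' ' << d.size() << '\n';  // 5 2 2
}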
Version control.
comment your code
write unit tests
be user-friendly
Not putting the extra effort to be clean when committing code... Nothing annoys me more than people who commit print statements, extra spaces or end-of-line characters as a result of their debugging.
Test by try/catch
This must be bad for my heart.
Example is in J. Suppose two vectors. The dyad ,. means stitch, a.k.a. align these two vectors side-by-side in a matrix. Now suppose that for some reason you have vectorA and vectorB, and you know that vectorB can be one off. I've seen this in a function trying to alternate colours between rows: vectorA has the odd rows, vectorB the even rows, so vectorB will either be the same length or one shorter.
try. vectorA ,. vectorB catch. vectorA ,. (vectorB, a:) NB. append an empty item end.
,. is a bit of a resource monster. Using it, watching it fail, then using it again? That's a crime! - MPelletier
My pet peeve is my ISP. Especially when the support says: "Turn off your modem, wait 15 seconds and turn it on again. Is it okay now?"
I have observed that most of the so-called programmers who come from the service industry are oriented towards just providing something that does what the requirements say. They don't care:
1) whether their code is optimal (in time and space complexity), and how much so,
2) whether there are ways to improve it, and if so, what they are.
I understand that there are deadlines for a given project, so one can do a quick job to finish the task, but there are still product life cycles, and they are careless about speeding things up in the next one.
And the worst programmers give excuses about documentation and communication delays (across continents). And it pisses me off. :-|
So many things are very common. For Example, "Do you know programming in assembler?"
And so on.
That skilled programmers add more value working on business code than on technical code.
Assign your better coders to implement your domain model; they'll make it better, and that's the most important point.
Ignorance of the fact that questions like this should be community wiki.
Programmers are people and as such are expected to function as a socially responsible member of society or company.
bool finished = false;
for (i = 0; i < size; i++) {
    if (something(i))
        finished = true;
    ...
    if (finished) break;
}

Rather than

bool finished = false;
for (i = 0; i < size && !finished; i++) {
    if (something(i))
        finished = true;
    ...
}
Languages support a full conditional in loop clauses for a reason.
For Pete's sake, please don't use ALLCAPS for any form of constant in C#. Be it enums or const or ANYTHING. If your IDE doesn't tell you whether something is a const, you should find a new IDE, or failing that, a new hobby/workplace/job.
Pretentious questions like this :-)
Using Java.
You can't imagine how much I hate software written in Java. Clumsy looking GUIs with tons of display bugs, the resource hogging "java automatic updater"...
Second worst pet peeve:
Using anything other than C# / F#. There is no justification, except in extreme cases (OS development, or when you need to talk to the CPU directly to use SIMD). And then - only write as little as possible in unmanaged horror-languages from hell (C++, etc... choke) or, even worse, "dynamically typed languages", which are nothing but toys for masochists who hate understanding code and helpful tools like IntelliSense.
Third worst pet peeve:
Unneccessary P/Invoke...
Users of any application - or device - who call you up and say "It doesn't work".