I would like to hear what kinds of design decisions you have made and how they backfired. I once had a part in a bad design decision and ended up having to support it forever. That made me realize that a single design mistake can haunt you for years. I want to learn from more experienced people what blunders they have made and what they learned from them.
I'm sure this will help other programmers avoid repeating those decisions.
Thanks for sharing your experience.
Ignoring YAGNI [1], again and again ...
[1] http://en.wikipedia.org/wiki/You%5Fain%27t%5Fgonna%5Fneed%5Fit

C++, diamond-shaped multiple virtual inheritance. You get the idea.
Using a single char in databases for statuses and the like. There's no point in this at all: the overhead of a longer char() or nvarchar2() is minuscule compared to the network and parsing cost incurred by any SQL call, yet single-char codes always end up rather obfuscated, or you run out of letters (not for statuses, but for other things). Far better to store the human-readable version, and to back it in your model (Java, in my case) with an enum whose values match.
I guess this is a form of premature, unnecessary, blind optimisation, as if using a single char will save the world these days. The one exception is Y/N booleans in databases that don't support booleans/bits.
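A minimal sketch of the suggested approach, in Python rather than Java for brevity (the status names and helper are hypothetical). The enum values are exactly the human-readable strings stored in the database column:

```python
from enum import Enum

class OrderStatus(Enum):
    # The enum value is the human-readable string stored in the database,
    # so rows stay self-describing in ad-hoc queries.
    PENDING = "PENDING"
    SHIPPED = "SHIPPED"
    CANCELLED = "CANCELLED"

def status_from_db(raw: str) -> OrderStatus:
    # Raises ValueError on an unknown value instead of silently carrying
    # an obfuscated single-char code through the rest of the code.
    return OrderStatus(raw)
```

This way the database stays readable and the application still gets type-safe status handling.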
"WIll do it later"
"Later" never comes.
Configurability in an application is nice. Too much configurability is a nightmare to use and to maintain.
One of my mistakes taught me that DB normalization shouldn't be followed blindly. You can, and in some situations you MUST, flatten your tables.
I ended up managing loads of tables (via models), and performance wasn't as good as it could have been with a little flattening.
Thinking I could be Architect, Developer and PM all on the same project.
2 months of sleeping 3 hours a night taught me you just can't do it.
Choosing Microsoft Foundation Classes (MFC) for writing a Java IDE.
Not developing a proper DAL [1], and having SQL scattered everywhere in my code, just to get something "quick" up and running. Later on, as the project expanded and requirements changed, it became a nightmare. I didn't know what a DAL was at the time.
... glad I'm past that, although I still see programmers with 20+ years of "experience" doing this.
[1] http://en.wikipedia.org/wiki/Data%5Faccess%5Flayer

Reinventing the Wheel
My single worst design decision? Back in the 1980s I was working on a project where we got the bright idea to create a kind of template for our data entry screens which would be interpreted at run-time. Not a bad decision in itself: it made input screens easy to design. You just created a file that resembled the data entry screen, with some special codes to identify what was a label versus an input field, and whether input fields were alpha or numeric.

Then I decided to add more special codes to these files to identify what validations should be performed. Then I added codes to allow conditional building of the screen: field X only included when some condition was true, etc. Then I added codes to do some simple processing of the inputs. Etc. Etc.

Eventually we had turned our screen template into a new programming language, complete with expressions, control structures, and an I/O library. And what for? We did a ton of work to re-invent FORTRAN. We had a shelf full of compilers for languages that were better designed and better tested. If we'd devoted that much effort to building products where we actually had some expertise, that company might still be in business today.
It wasn't my decision (I joined the company somewhat later) but somewhere I worked took i18n a bit too far, including translating all their log messages.
Results:
Oops.
Doing too much design: creating lots of UML diagrams, particularly sequence diagrams for every single operation, much of which turned out to be useless in the end. A significant amount of time could have been saved by skipping the unnecessarily detailed design and diagrams and starting to code directly.
Trying to make something right and great with the wrong people!
Even if they're in the role of a PM.
Believing customers know what they want and then doing too much before checking with them.
Throwing in some 'funny' easter eggs into some code I wrote before going on vacation for 2 weeks. I thought I'd be the only person to read it when I got back, it'd get me chuckling and ready to re-code it.
Needless to say, my boss wasn't impressed when he reviewed it while I was away, and he was even less impressed that one of the 'easter eggs' was an ASCII-art caricature of his face.
Mmmmmm...
Not defining the deployment mechanism/model as early as possible.
Every single time I create technical debt, write procedural code, skip writing tests, etc. because I'm rushing. Almost inevitably I find this creates pain for me down the road.
Using ASP.Net Themes when just a regular ol' CSS folder would've done just fine.
Over-zealous application of YAGNI (which is termed Design by Enumeration in Pitfalls of Object Oriented Development [1]) in an environment where any sensible person could tell that the requirements were definitely going to change. And change repeatedly.
If you've hard-coded everything exactly to the current requirements (while beating down anyone who asks "couldn't this be more generic?" with your YAGNI mallet), and the requirements then change drastically, in a way that could reasonably have been anticipated, that can turn a 20-minute change into a 2-week one.
UPDATE: To clarify, here's a fictitious example that's not too far from what happened. Stack Overflow was designed to support badges, but suppose the team could only think of four badges at first. Only four, such a small number, so they hardcoded support for exactly four badges throughout all of the logic in the site: in the database, in the user info, in all the display code. Because "ya ain't gonna need" any badges that you can't think of, right? Suppose then that the site goes live, and people start suggesting new badges. Each badge takes up to two weeks to add, because there's so much hardcoding to tweak all over the place. But still, "ya ain't gonna need" any more badges than today's list, so there's never any refactoring to support a generic collection of badges. Would such a generic collection have taken any more time up front? Not much, if any.
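The contrast can be sketched in Python (the badge names come from the fictitious example above; the data layout and helper are hypothetical):

```python
# Hardcoded version: four badge fields baked into every layer.
# user = {"badge1": ..., "badge2": ..., "badge3": ..., "badge4": ...}

# Generic version: one collection. Adding a badge is a data change,
# not a code change scattered across the whole site.
BADGES = {
    "Teacher": "Answered a question with score of 1 or more",
    "Student": "Asked a question with score of 1 or more",
}

def award(user_badges: set, name: str) -> None:
    # Validate against the catalogue so typos fail loudly.
    if name not in BADGES:
        raise KeyError(f"unknown badge: {name}")
    user_badges.add(name)
```

With the generic collection, suggestion number five costs one dictionary entry instead of two weeks of hunting down hardcoded assumptions.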
YAGNI is a valuable principle, but it should not be (ab)used to excuse poor design and inappropriate hardcoding. There's a balance, and with experience, I believe I'm approaching it.
[1] http://www.amazon.com/Pitfalls-Object-Oriented-Development-Webster

I didn't take enough time to assess the business model. I did what the client asked, but 6-12 months later we both came to the conclusion it should've been done differently.
Taking the quick road to getting some code working, rather than the right road (a bit general, but we'll call it an abstraction and therefore a 'right' answer).
Back in the university I was working on my senior design project. Another guy and I were writing a web-based bug tracking system. (Nothing groundbreaking, but we both wanted to get some web experience.) We did the thing with Java servlets, and it worked reasonably well, but for some silly reason, instead of opting to use Exceptions as our error-handling mechanism, we chose to use error codes.
When we presented our project for a grade and one of the faculty asked the inevitable, "If you had to do it again, what would you do differently?" I instantly knew the answer: "I'd use exceptions, that's what they're there for."
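The difference between the two error-handling styles can be sketched in Python (the bug-tracker details here are hypothetical; the original project used Java servlets):

```python
# Error-code style: every caller must remember to check the return value,
# and nothing complains if they forget.
def save_bug_with_codes(db: list, bug: dict) -> int:
    if not bug.get("title"):
        return -1  # easy for a caller to silently ignore
    db.append(bug)
    return 0

# Exception style: a failure cannot be ignored by accident; it either
# gets handled or it stops the request.
class ValidationError(Exception):
    pass

def save_bug(db: list, bug: dict) -> None:
    if not bug.get("title"):
        raise ValidationError("bug needs a title")
    db.append(bug)
```

Error codes also tend to get swallowed as they propagate up through layers, which is exactly where exceptions earn their keep.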
Not my choice of method, but I created an XSLT to convert a row-based XML file into a column-based HTML report.
It only worked in IE, and it was completely impossible to decode how it worked. Every time we needed to extend it, it was impossibly difficult and took ages.
In the end, I replaced it with a tiny C# program that did the same thing.
My company has a waterfall-like development model, where our business users and business analysts will define requirements for projects. On one of our "big" projects, we got a stack of requirements, and I noticed a number of requirements contained implementation details, specifically information related to our database schema used by our accounting system.
I commented to the business users that implementation is my domain; it shouldn't be contained in the requirements. They were unwilling to change their requirements because, after all, they are THE BUSINESS, and it only makes sense for accountants to design accounting software. As a lowly developer too far down the totem pole, I'm paid to do instead of think. As much as I fought it, I couldn't persuade them to re-write the requirements; there is so much paperwork and red tape around changes that it's just too much of a hassle.
So, I gave them what they asked for. At the very least, it sorta works, but the database is weirdly designed:
Lots of unnecessary normalization. A single record containing 5 or 10 fields is split across 3 or 4 tables. I can deal with that, but I'd personally like to have all 1:1 fields pulled into a single table.
Lots of inappropriate denormalization. Our InvoiceData table stores more than invoice data: we keep a number of miscellaneous flags in it even when a flag isn't logically related to invoices, such that each flag gets a magic, hardcoded primary key value with all other fields nulled out. Since each flag is represented as a record in the table, I suggested pulling the flags into their own table.
Lots more inappropriate denormalization. Certain app-wide flags are stored as columns in unrelated tables, such that changing an app-wide flag requires updating every record in the table.
Primary keys contain metadata, such that if a varchar primary key ends with "D", we calculate invoices using one set of values, otherwise we calculate it with another set. It would make more sense to pull this metadata into a separate column, or pull the set of values to calculate into another table.
Foreign keys often go to more than one table, such that a foreign key ending with "M" might link to our mortgage accounts table, whereas a foreign key ending with "A" might link to our auto accounts table. It would be easier to split the data into two tables, MortgageData and AutoInsuranceData.
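To illustrate why keys carrying metadata hurt, here's a hypothetical Python sketch of the dispatch code such a schema forces on every consumer (the rate values and key formats are invented):

```python
# The last character of a key decides both the rate table and the
# target table, so this knowledge gets duplicated everywhere.
RATES = {"D": 0.05, "default": 0.03}  # hypothetical rate values

def rate_for(invoice_key: str) -> float:
    # Business logic hiding inside a primary-key suffix.
    return RATES["D"] if invoice_key.endswith("D") else RATES["default"]

def account_table(fk: str) -> str:
    # A "foreign key" that can point at two different tables.
    if fk.endswith("M"):
        return "MortgageData"
    if fk.endswith("A"):
        return "AutoInsuranceData"
    raise ValueError(f"unroutable foreign key: {fk}")
```

A separate column (or separate tables) would let the database enforce this instead of every query and every application re-implementing the suffix rules.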
All of my suggestions were shot down with much wailing and gnashing of teeth. The app works as designed, and while it's a big ball of mud, all of the nasty hacks, special cases, and weird business rules are sarcastically and humorously documented in the source code.
Trying to use all the new technologies (just to learn them) even when the project doesn't require them.
Rewriting working code. Hell, the result works great; it's got fewer bugs and it's more maintainable.
But the customer couldn't care less, and it cost me days of dev time...
Designing without a specification.
Using SQL Server Integration Services (SSIS). [1]
I don't wish it on my worst enemy.
After building several SSIS packages over the past two months, I came to find out that the packages I developed were neither distributable nor deployable, specifically in a non-web environment without a SQL Server license.
It's a very bad situation to be in, when you have less than 48 hours to re-write your SSIS packages in pure .NET POCO code or miss your targeted deadline.
It amazes me that I was able to rewrite three SSIS packages (which took me two months to develop and test) within 12 hours in pure .NET code, with OLEDB adapters and SQL adapters.
SSIS is not redistributable and will not execute packages from a client machine that does not have a SQL Server license installed (specifically DTSPipeline.dll). This would have been great to know up front. I do see the disclaimer now (in fine print) on MSDN, but that does no good when the example code all over the internet only runs on SQL-licensed machines. Basically, you have to create a web service that talks to your SQL Server in order to run your SSIS packages programmatically; you cannot execute them from pure .NET code unless the executing machine has a SQL license. How unrealistic is that? Does Microsoft really expect SSIS to be used only from machines with a SQL Server installation? What a complete waste of two months.
My company will never again use SSIS because of this small print "gotcha".
[1] http://msdn.microsoft.com/en-us/library/ms141026.aspx

Sticking to older technology because it seems like too much hassle to get your clients to upgrade to a new .NET framework version, when it actually takes more development time to build the software because you can't use the (time-saving) components of the newer framework.
I implemented a sub-section of an application according to the requirements.
It turned out that the requirements were bloated and gold-plated, and my code was over-designed. I should have designed my sub-section to work only with what I was adding at the time, while planning for all the other features without building generic support for them from the outset.
Trying to utilize the Nth element of an N-deep circular buffer shared between two processors. Now I never use more than N-1 elements, to keep it simple and reliable.
The issue: a circular buffer holding no more than N-1 elements can be made completely thread-safe (pure producer/consumer). When I optimized it to use all N elements, the queue sometimes flipped from full to empty (data loss) or from empty to full (invalid data).
Finding this in a complex system (one corruption per 100 MB of data transferred) is harder than finding a needle in a haystack.
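The N-1 rule can be sketched as a single-producer/single-consumer ring buffer; this is an illustrative Python version (the original was presumably running on embedded hardware):

```python
class RingBuffer:
    """SPSC ring that deliberately holds at most N-1 elements, so the
    full and empty states are never ambiguous:
        empty  <=>  head == tail
        full   <=>  (tail + 1) % N == head
    Using all N slots would make head == tail mean both states, which
    is exactly the full/empty confusion described above.
    """
    def __init__(self, n: int):
        self.buf = [None] * n
        self.head = 0  # consumer index, written only by the consumer
        self.tail = 0  # producer index, written only by the producer

    def put(self, item) -> bool:
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False  # full: only N-1 slots are usable
        self.buf[self.tail] = item
        self.tail = nxt  # publish after the slot is written
        return True

    def get(self):
        if self.head == self.tail:
            return None  # empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```

Because each side writes only its own index, no lock is needed between a single producer and a single consumer, which is what sacrificing the Nth slot buys you.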
Using Flash to build a site because the "designer" wanted a carousel of photos (that was years ago; jQuery didn't exist). Later it turned out that the designer wanted to change everything once a week because he kept changing his mind about the design... What a maintenance nightmare.
Listening to Joel and trying to extend a piece of software instead of rewriting it.
Tight coupling of components that, with hindsight, had very little to do with each other. Said components have since been copy-pasta'd into oblivion by other devs wanting to use only a small part of the functionality. :-(
At my last job, I wrote some largish projects using an in-house scripting language which didn't support classes or lists, and whose only "include" mechanism was including one file in another.
In hindsight I could have written it in .net or python and half my extensibility issues would have vanished.
I once designed the business tier of a client server application so that all calls would be asynchronous. I thought it would make it easier to manage scarce resources on the server side (it was 1997 and there were major bandwidth constraints). Turned out it didn't make much difference in how we managed the server but made the client hellishly complicated.
Needless to say, there was a very quick refactoring about 4 months into the project. And I learned that simple architectures that play to the strengths of your tools are always the best.
Using OO and polymorphism when a procedural approach would have worked better.
I commented out the problematic lines, and they turned out to be the login call. :(
What could be worse than that?
At my previous job, I was responsible for building an automated-testing framework. Background: We already had a "prototype" which was pretty good and had many tests written for it. It was written in TCL.
For the new project, I was supposed to adapt the prototype to fit a new project. I really disliked TCL, and had just learned Python, so I was clamoring to apply that new knowledge somewhere. So, of course, I decided to re-write the prototype in Python. Since we wanted to do lots of really cool new things (support hierarchical tests better, have all sorts of extra mechanisms to easily create new tests), I justified my decision by saying that writing all this new stuff in TCL would be a nightmare.
In the end, most of the new features (which were way too difficult anyway) were never used. We ended up reimplementing the entire prototype from scratch for almost no reward.
The best part? About a year and a half later, we had to dredge up the old prototype to run some tests on the old project (it was foreseeable that this would happen). I was worried we'd have to adapt it to some of our new tools (one of the reasons I opted to reimplement in Python). It turned out to take about 2 hours of work.
Failure to fully determine specs before starting a project on a client's server. I said PHP (meaning >= 5.2); they gave me PHP 4. I said, "I need a database," and (when they finally replied) they said, "OK, you create the table, and we'll put it in our database..." (I also failed to mention the desire for Apache and not IIS). It ballooned out of proportion, caused several sleepless nights, and is one of the worst pieces of dung I've ever built. The only benefit I received was a much better understanding of PHP 4, something I did not want to begin with.
If I could go back and do it again... I wouldn't.