When developing software I usually find myself continually questioning "Is this the best way?" and "Is there a better technology for the job?", and as a result I put more effort into investigating and researching different design patterns, technologies, and best practices than into actually developing the thing with tools I know will work!
A common thing I tend to do is think too far ahead and worry about things that may happen rather than focussing on things that will happen. I don't believe this is necessarily a bad thing; however, I find it sometimes takes up a little too much of my time.
I would like to hear whether anyone else has similar issues, and how you go about tackling them.
Deadlines.
Adopt an XP idea... YAGNI (You Ain't Gonna Need It [1]).
Code enough to solve what needs to be done now. Get it working to your customer's satisfaction; that's all that really matters anyway. Then move on to the next feature.
If you find you need to refactor something, so be it, just do it incrementally.
[1] http://en.wikipedia.org/wiki/You_ain%27t_gonna_need_it
In that order.
Yes, I've seen and experienced this problem many times. The number one solution (for me) is a schedule. Not a marketing-department schedule that gets determined by <insert Black Magic here>. I'm talking about something like a monthly/quarterly schedule that you inflict on yourself or your group, where you say "on this date, we must have a buildable, working project that, if they cancelled our project that day, they'd still have something good."
What this does is put real milestones out there that you have to commit to. Remember, milestones are rocks. They don't move. Likewise, the calendar just ticks by. If you can't commit to coming up with a good-enough solution in time, you won't have anything worthwhile on that day.
Personally, I think three weeks of development + one week of integration, testing, clean-up and final prep is a nice arrangement for small to mid-sized groups. Your mileage will vary.
Yes.
To prevent over-engineering, do this:
Build the smallest, simplest thing that solves the problem.
Investigate alternatives.
Then worry about making it better :) - James
In my experience, the simplest way to avoid over-engineering is experience. While greenhorns will struggle endlessly with their doubts and fears, a senior developer will just do it, and they will get it right (the second time, if not the first).
The second simplest way is confidence. Here lies a danger, though: without experience, confidence can be extremely dangerous.
Which leaves the question of how to acquire both in a short time. The world's greatest minds are working on it. My path is using automated tests. That kills my fears and doubts faster than anything else. It allows me to move knowledge and experience from my brain to the computer, so my poor little brain is free for the next thing that comes up and I don't have to worry about all the things I've already solved.
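A minimal sketch of what I mean (Python here; the function and names are just invented for illustration): once a behaviour is pinned down by a test, it lives in the suite instead of in my head.

    import unittest

    def normalize_name(raw):
        # Behaviour I worked out once and never want to re-derive:
        # trim surrounding whitespace and title-case the result.
        return raw.strip().title()

    class NormalizeNameTest(unittest.TestCase):
        def test_strips_and_title_cases(self):
            self.assertEqual(normalize_name("  ada LOVELACE "), "Ada Lovelace")

    if __name__ == "__main__":
        unittest.main()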
Excellent question. Here's what I've found, and it's not all easy to do:
Do prototypes. This way, I get a deeper understanding of the problem than I could ever get by just thinking about it ahead of time, and I never get myself wedded to suboptimal code.
Get experience with actual performance tuning of real software, because, at least in my experience, over-engineering of software results in massive performance problems, as in this case [1]. This teaches you which typical design approaches lead simultaneously to complexity and slowness, so you can avoid them. One example is over-emphasis on the intricacies of classes, data structures, and event-driven style.
(This is the opposite of premature optimization, in which by trying to solve problems that do not exist, you end up creating problems.)
At times, I have taken a really uncompromising view of software simplicity, and it has had the cost of being very strange, it seems, to everyone but me. For example, I stumbled upon the technique of Differential Execution [2], which shortens UI code by an order of magnitude and makes it very easy to modify but at the same time creates a learning curve that few have climbed.
Release Early, Release Often [1]
Even if you're not releasing to an external client, you can still have internal releases to a product owner or testers. This kind of work cycle makes you focus more on the task at hand and not on the future.
[1] http://toc.oreilly.com/2008/06/release-early-release-often-ag.html
Practice YAGNI (You aren't gonna need it), and your productivity may rise. Possibly a lot.
You may want to read The Duct Tape Programmer [1]; however, exercise your good judgement when you do.
[1] http://www.joelonsoftware.com/items/2009/09/23.html
Hire a bunch of software architects to form a committee to analyze all designs.
Designs will be submitted in UML to ease the analysis.
All projects will use an in-house XML-based language to avoid the specifics of a single language.
You will also need detailed coding standards to keep the workers from over-engineering things. These should only cover important things like the positioning of { }.
Test-driven development and refactoring together mean you don't need to have the best way up front, or even see how all the details fit together... it's about emergent design.
Reading about the ideas behind this might help you worry less about perfectionism: http://c2.com/cgi/wiki?EmergentDesign
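A tiny sketch of the rhythm, if it helps (Python, with an invented example): write one failing test, write just enough code to make it pass, refactor, and let the design emerge from the next test.

    import unittest

    def price_with_tax_cents(price_cents, rate_percent):
        # Just enough code to make today's tests pass -- no currency classes
        # or pluggable tax strategies until a test actually demands them.
        return round(price_cents * (100 + rate_percent) / 100)

    class PriceWithTaxTest(unittest.TestCase):
        def test_adds_twenty_percent(self):
            self.assertEqual(price_with_tax_cents(1000, 20), 1200)

        def test_rounds_to_whole_cents(self):
            self.assertEqual(price_with_tax_cents(999, 17.5), 1174)

    if __name__ == "__main__":
        unittest.main()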
Here's how I learned to do it:
Repeat... until done...
This is easier to do when you're pairing with somebody else... preferably somebody who already knows how to do XP.
In other words, learn XP ;-)
This is one of the things that a review with your peers should help determine. They should let you know if you are 'going into the weeds' (and be able to justify that assertion). On the flip side, they should also let you know if you haven't done enough and are designing something that is brittle and not resilient enough to cope with change or problems.
Use part of CMMI; measure yourself.
Figure out a way to keep a running tally of how many times you overengineer vs how many times you underengineer. Figure out how much pain underengineering something costs you on average. Wait a year or two, and look back at it.
Thinking more about it may help you in the short run, and in the long run, you'll have the data to know whether or not your fear of overengineering was justified.
There are a few ways that come to mind:
A couple of other things to notice:
You need two things: Timeboxing and Peer Review
Timeboxing is as simple as saying - I will spend N hours researching technology to make this work better.
Peer Review means you discuss the problem with other interested engineers.
It sounds as if you are working on your own, which makes Peer Review difficult. I work in a Scrum shop. Part of the process requires that we discuss all fixes and features, then get buyoff (agreement) from the other engineers before writing a line of code. This works out to be the same as 'Measure Twice, Cut Once.' We spend about half our time researching and planning, and it is worth the investment.
Keep it simple, stupid
A maxim often invoked when discussing design to fend off creeping featurism and control complexity of development
http://en.wikipedia.org/wiki/KISS_principle
It sounds like you don't have a project manager.
You need to steal techniques from the famous Rubber Duck Debugging [1] and apply them to project management. Pretend that the Rubber Duck is your project manager representing the primary customer, and explain to it that you want to take X hours researching a new technology or new architecture. Now pretend that the Rubber Duck asks you if you think that the new features would be worth X*Y of the customer's money, where Y is your hourly salary plus the cost of your desk and benefits. Next the Rubber Duck asks you if you think that the new feature is worth delaying the delivery of the product X hours.
Answer both of the Duck's questions honestly and proceed with development based on your response.
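For example, with made-up numbers: if the research would take X = 8 hours and your fully loaded rate is Y = $100/hour, the Duck is really asking whether the new technology is worth $800 of the customer's money, plus a day's slip in delivery.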
Incidentally, you should probably ask the Duck if he minds all the time you spend on Stack Overflow.
[1] http://en.wikipedia.org/wiki/Rubber_duck_debugging
As long as you meet your time goal, invest as much as you can. If you can't meet your time goal... invest only if you think it is crucial to meeting requirements, or if you think you're going in a direction that will be impossible to fix later if it's wrong...
Premature optimization and handling of what-ifs can definitely be a problem.
One general thought is simply to make sure that your code does the best you currently can, and to be willing to adopt better practices as you learn them.
In general, though, the simplest answer that solves the currently known problems is the best. Your code should, however, be able to catch unexpected error cases and log/expose them, so that you can add more robust handling of those corner cases later.
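One way to put that into practice (a rough sketch in Python; the function and format are made up): handle the inputs you actually know about today, and funnel anything unexpected into a log so the real corner cases reveal themselves before you engineer for them.

    import logging

    logger = logging.getLogger(__name__)

    def parse_order(raw):
        # Handle the one format we actually receive today: "item,quantity".
        try:
            item, quantity = raw.split(",")
            return item.strip(), int(quantity)
        except (ValueError, AttributeError):
            # Don't guess at every possible malformed input up front;
            # log the surprise so real corner cases show up in the data.
            logger.warning("Unexpected order format: %r", raw)
            return None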
I run into this often. If I need to solve something in a language I regularly use, which today is JavaScript, and I'm stuck, I try to solve the problem using a new framework. There's something about using a new tool, a new code library, even a browser I don't usually use, that helps me get past the psychological block of trying to do it right.
The idea of incremental deliverables focussing on the most critical features, from the Unified Process, was designed to solve this problem. It takes discipline to implement.
Make a list of features (use cases), prioritize them, choose the highest priority and work on that as if it's the only feature you'll ever need. When you deliver it in working form, analyze again and select the next feature set to work on.
Another approach is to periodically examine all you have done and refactor it to the bone: remove everything that is not needed. Repeat as often as needed. That makes the code leaner for the next iteration.
Regards
I avoid over- or under-engineering by trying to balance the amount of time I spend investigating and designing with what I can reasonably expect its use and lifetime to be.
Let's say I'm writing a utility that only I will ever use, and I will use it rarely or even just once. If I have a choice between coding up such a utility in Bourne shell or Perl in ten minutes that takes overnight to run, and taking three hours to write an optimized version using sophisticated and difficult algorithms in C++ that runs in one minute... electricity is cheaper than my time.
On the other hand, I've written key components of products that have been used by or affected millions of people over many years. In such instances, it's been worth it to take the time and effort to put a lot of effort into investigation, research, and design, use the absolutely best tools and techniques, and then polish the resulting code to a glossy shine.
There's no set-in-stone way to do this - it's all judgment. And as with all matters of judgment, experience is very helpful, as Aaron Digulla aptly pointed out.
When I started out as a professional software developer twenty-four years ago, I didn't know jack about how to make these decisions. I'd just write code, and if it was too slow or too buggy or something, I'd go back and fix it. But when I did that fix, I'd try to think about how I could have avoided the problem in the first place, and how I could apply that in the future. And I also have tried to listen to other programmers when they talk about problems they ran into and how they fixed them.
Now, many dozens of projects and perhaps millions of lines of code later, there are a lot of design decisions I can make almost instinctively. For instance: "If you're working in C++, and you're facing a problem that some STL template will solve, and you're not constrained to avoid using STL, then that's the way to go. That's because modern STL implementations are highly optimized, and any improvement you could get from writing your own code would just not be worth the effort."
Also, I can just look at a situation and say, "The 10% of the project where we're likely to have problems is here, and here, and here, so that's where we need to concentrate our research and design effort. And the other 90%, let's just make it work however we can do it." And it works out pretty well.
So keep coding, and keep improving your code, and keep learning from other software developers and from your own experience. If you continue paying attention and thinking about things, increasing software design mastery will come over time.
Time boxing is what we do.
We take a 3-day look into an upcoming problem, try some things out, and pick a solution.
After that, run with it and deliver early and often; once you've shown some progress, you can refactor if a better answer presents itself.
The best way that I've found to prevent over-engineering is to make sure that you only write code for as much of the specification as you currently know. If you have a class that needs to call one web service and retrieve one type of data, don't bother writing some incredibly robust system that can handle every possible case.
That said, this technique really REALLY requires that you create clean, well-written, easily understandable code, and requires that you utilize accessors everywhere. If you don't you'll end up with a refactoring nightmare.
When you write code to satisfy only what you know you'll need at the moment, you end up building functionality quickly; when the requirements change, you can go back and refactor your code to add the missing functionality. Each time you add a new feature your code will (read: should) get better and better.
There are certainly some overarching design decisions that need to be made at the start of the project, but those decisions are very high level and shouldn't affect your ability to change as the requirements do.
While using this technique you'll want to take a moment before writing any new module to ask yourself if it is needed for the current implementation; if not, leave yourself a comment and write only what is needed.
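For instance, a rough sketch of that mindset (Python; the web service, URL and field names are invented): the first version only fetches the one value today's spec needs, behind a small accessor, so the next requirement becomes a refactoring rather than a rewrite.

    import json
    from urllib.request import urlopen

    class WeatherClient:
        """Does exactly what the current spec asks for: fetch one temperature."""

        def __init__(self, base_url):
            self._base_url = base_url

        def current_temperature(self, city):
            # Callers only go through this accessor, never the raw URL or payload,
            # so adding caching or retries later is a local refactoring.
            # No retry logic, multi-provider abstraction, or config system yet --
            # write those when (and if) a real requirement shows up.
            with urlopen(f"{self._base_url}/weather?city={city}") as response:
                return json.load(response)["temperature"]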
There is one sure principle to follow: first get things done, then be smart. You must achieve the first goal; the second you don't have to.
It is not so hard to avoid over-engineering; you just have to be pragmatic...
By using TDD [2] you don't have to plan the full class hierarchy or understand the whole project; you just write one small test and piece of code at a time. This helps a lot by focusing only on the things you really need...
I am currently experimenting with the clean architecture [3], which works very well with TDD for building easily testable, developable and maintainable applications.
[1] http://en.wikipedia.org/wiki/Single_responsibility_principle