I was explaining this to some junior programmers the other day and it’s worth repeating. Failure is fine in software engineering; in fact it is expected: I get nervous if something works the first time these days, as I know my limits and expect to make mistakes. However, I think that new programmers don’t always realise that failure is acceptable, and in some cases preferable, especially during research and development tasks. Consider the following diagram representing three programmers A, B and C working on a task over a period of time, where each arrow represents one attempt at a solution.
Assume the task that has been set is complex and contains some degree of uncertainty, which means that the first few attempts will probably fail. Which programmer would you want working on the task? If the task is actually impossible then programmer A will be the cheapest from a business standpoint, as they fail earliest, followed by programmer B and finally programmer C. Also, programmer A will have five attempts at a solution compared to programmer B’s two and a half attempts and programmer C’s single attempt. As iteration leads to improved quality (Boyd’s Law), programmer A is also likely to arrive at a superior solution to programmer B or programmer C in the same time period.
The interesting thing here is that in my experience new programmers tend to work like programmer C: they will attempt a single solution, working diligently to overcome any hurdles they encounter, no matter how severe. This sounds harmless, but remember that few things are impossible in programming, so the more a programmer hammers away at a problem the more likely they are to fudge a solution, and that solution (in my experience) is not often of high quality. The counterpoint is that more productive programmers will rapidly discard an attempt as soon as it becomes clear that the solution, though possible, is not actually desirable, e.g. the dog is hungry and the solutions include: A) feed the dog B) kill the dog.
This can also be observed by monitoring how long a programmer will remain stuck before asking for assistance. The most productive programmers will get help within minutes of becoming completely stuck. However, the least productive (and often least experienced) programmers will often struggle with the problem for days before actually asking for assistance, and this delay is expensive in terms of money, deadlines and product quality.
Yesterday I wrote about optimisation workflow, and in this post I will discuss why you should not preemptively optimise your program’s source code. You may wonder what could possibly be wrong with preemptive optimisation; it just makes your program faster, right? And that is the trap: while it may make your program faster, if you do not measure the program’s performance relative to your performance goals you are effectively taking a shot in the dark, with very little real likelihood of hitting your (performance) mark and a real likelihood of unintended consequences for the quality, understandability, complexity and bug count of your program’s source code.
For example, suppose it is faster on the CPU you are targeting to compare an integer to zero than to compare an integer to another integer. Therefore you decide to write all your loops so they count down to zero instead of the normal approach of counting up from zero. This will save you a few cycles per iteration of your loop, but have you considered:
- Counting down rather than up in a loop is unconventional in most programming languages and may lead you to make math errors, which can cause hard-to-track-down bugs in your loop control logic or in the code inside your loop.
- Pointer arithmetic in languages like C/C++ is hard enough when counting up (e.g. pObj++), but counting down is even harder to understand, as it is so rarely done and requires extra pointer math to set up.
- The programmer who comes after you to develop or maintain the source code may not realise your loop counts down rather than up, and may implement functionality that relies on the loop counting up, which would again cause bugs and confusion.
- That despite your loop being invoked for thousands or millions of iterations, your loop’s control logic is nowhere near as expensive in terms of CPU operations as function call X right in the middle of your loop, which costs millions of cycles by itself every iteration!
- That it is almost unheard of for loop control logic to be a performance issue, even in the tightest inner loops.
- That most optimising compilers are perfectly able to make this optimisation for you during compilation on your target platform, with none of the above programmer confusion?
My advice here is never to optimise without measuring first: at best you will save a few CPU cycles where it really does not matter, at the cost of making your source code more complex and harder to understand, and at worst you will add subtle bugs to your program on top of that complexity! I have lost count of how many times I’ve tracked subtle bugs down to programmers making preemptive optimisations without first measuring to see whether any optimisation was required, and whether the extra complexity introduced by the optimisation was a desirable trade-off.
Occasionally there will be an exception to this rule, but in general if you are going to optimise something you should always measure first and then optimise only if it is necessary.
I have moved my RSS feeds to feedburner.google.com so you may have to update your bookmark in your RSS reader, although there should be redirection in place to redirect the old feed URL to the new location.
I believe the key to optimising any program is measurement, not writing l33t code, which seems to be what a lot of programmers think optimisation is! The optimisation process is all about finding the slow parts of your program and speeding them up by refactoring your source code to meet your target performance goals. Without measuring the code, or without target performance goals, optimisation is a waste of time: programmers are very bad at guessing where the slow parts of a program are and very good at optimising pieces of the program that do not need it.
This diagram emphasises the importance of measurement in the optimisation process: you cannot begin the process, evaluate your optimisations, discard them, or honestly finish the process without first measuring the program’s performance against your target performance. As I have mentioned before, the initial optimisations to a program tend to yield larger returns, and then the returns drop off until much more time has to be invested. These initial easy optimisations are not usually the sort of changes new programmers expect: they expect sexy l33t code like inline assembler for big performance wins, not tweaking compiler flags or removing calls to a pure virtual method on a base object in an inner loop.
Jani Hartikainen has written an excellent post in reply to my earlier post about teaching software engineering students memory management, and his post is well worth a read. I started off writing a comment on his post as a reply but I ended up writing more than I expected as I refined my ideas.
I agree that teaching a higher-level language with built-in memory management as a first programming language is the more humane option as far as first-time students are concerned, as learning your first programming language and all the associated concepts is hard enough without all the nasty memory-related gotchas of a language like C or C++. The nice thing about learning something like C or PHP (as Jani suggests) is that teaching object orientation can be avoided initially, as that particular concept does seem to be something some students struggle with a lot the first time they encounter it.
However, I do think a low-level language with manual memory management should be at least experienced by every programmer, as it is fundamental to programming effectively, and I believe that even minimal experience with some form of manual memory management would help most programmers write higher-performance programs. Perhaps I was being a bit overzealous in recommending that everyone learn a language like C/C++ as a first language, but I think at least a short course featuring a language with manual memory management would be invaluable to all programmers. The course would not even need to cover object orientation, as you can teach that in higher-level languages that are easier to work with than C++: the key point of the course would be to teach memory management and its implications for writing fast, high-quality software.
As much as I go on about knowing assembler, I admit it’s not something I write very often at all; however, it is incredibly handy to be able to read and comprehend. Even the most basic understanding of the instructions for moving data between memory and registers, branching and basic math operations will allow you to check that the compiler has actually generated the assembly code you expected. This is especially useful for debugging unexpectedly slow code in compiler-optimised builds. You also don’t need to know all the fancy vendor-specific assembly instructions: as I’ve mentioned, the basics are usually sufficient to understand roughly what a slow piece of code is doing, so you can rewrite the higher-level source code in a way that prompts the compiler to generate faster assembly. Actually writing assembly code should always be the last resort, attempted only by experts after all other higher-level refactorings have been tried, as higher-level optimisation or refactoring work is usually more effective and easier to understand. Also, assembler is not usually easily portable to other hardware platforms, and most programmers find it harder to debug assembler than normal C/C++ source code. Plus, as Jani mentions, it is scary to find a block of inline assembler in the C++ program you are working on, as it is much harder to decipher than regular source code unless it is very well documented.
This ability to check what is going on ‘under the hood’ of your language is essential for those hard-to-track-down bugs and when optimising your application for performance, especially where the compiler has reordered the program flow or generated unexpectedly slow code. This applies to high-level languages as well as C/C++: checking that the IL generated by your C# compiler or the bytecode generated by your Java compiler is doing what you expected can be very useful in understanding your program’s execution and performance.
It is not enough to react quickly to your customers’ feedback as a software engineer; if you truly want to be an excellent engineer you need to anticipate their needs (to an extent). This does not mean creating applications so generic that they can meet any user need: such systems usually suffer from feature overkill, take too long to develop and are overly complicated to use or maintain.
Anticipation takes many forms and covers many areas of software development:
- Anticipating Questions:
- Have you described your software in terms the customer can understand?
- Do they still want the software now they understand what you are proposing?
- Have you been precise with your description? Vague descriptions can lead to confusion and user disinterest, or downright resistance to your software (if they think it is something it is not).
- Do you have design documentation you can show the customer?
- Is your design documentation actually understandable (by the user) and is it user-focused, e.g. workflows, user interfaces etc.? High-quality, readable and concise design documents are an excellent way of allowing the customer to absorb the design in their own time.
- Anticipating deployment:
- How is the user going to run your software?
- Have you test run your software on the customer’s platform with a similar environment configuration before you attempt to roll it out?
- What other dependencies will need to be installed to support your software?
- How do you plan to push out those dependencies?
- What sort of configuration management (CM) system is your customer using and can you harness it?
- How will you install and configure your software?
- Anticipating integration:
- How will your software integrate with the customer’s current workflow and application stack?
- Does your software have dependencies that are shared with other applications and could require those other applications to be upgraded too? What extra cost would this introduce?
- Does your application have specific operating system configuration requirements that could cause side effects for other applications?
- Anticipating support:
- How will you support your software post roll out?
- How will you diagnose problems on customers’ computers when you don’t have your development environment available?
- Have you built any logging, metrics or error reporting tools into your software?
- Is there a help system integrated into the software and is that help system’s content usable and understandable by the target users?
- How do you intend to upgrade your software or its dependencies in the future?
- What sort of testing do you have in place to prevent upgrades breaking existing functionality?
- Anticipating retirement:
- How easy would it be to remove your software from the customer’s application stack?
- Do you have some sort of uninstallation script?
- What about your application’s dependencies: will any of them be orphaned, and how will you remove them?
- How tightly coupled is your software to the other systems it interacts with? Can it be easily and gracefully decoupled?
You don’t need to answer all of the above questions, but spending some time thinking about them and jotting down even single-sentence answers will help you anticipate potential problems. It is never fun to have to tell a customer “I hadn’t thought about that” after a show-stopping problem emerges…