In [[computer science]], '''efficiency''' describes the properties of an [[algorithm]] that relate to the amount of various types of resources it consumes. The two most frequently encountered are:
* speed or running time - the time it takes for the algorithm to complete, and
* space - the memory or 'non-volatile storage' used by the algorithm during its operation.

The process of making code as efficient as possible is known as [[Optimization (computer science)|optimization]]. Automatic optimization, performed by compilers on request or by default, usually focuses on space at the cost of speed, or vice versa. There are also quite simple programming techniques and 'avoidance strategies' that can improve both at the same time, usually irrespective of hardware, software or language.

==Speed==
The absolute speed of an algorithm for a given input can simply be measured as the duration of execution (or clock time), and the results can be averaged over several executions to eliminate possible random effects. A relative measure of an algorithm's performance can sometimes be gained from the total instruction path length, which can be determined by a run-time [[Instruction Set Simulator]] (where available). The speed of an algorithm can also be estimated analytically: the most common method uses [[time complexity]] analysis to determine an algorithm's [[Big O notation|Big-O]] classification.

==Memory==
Often it is possible to make an algorithm faster at the expense of memory, as when the result of an 'expensive' calculation is [[cache]]d rather than recalculated afresh each time it is needed. This is such a common method of improving speed that some programming languages add special features to support it, such as [[C++]]'s <code>mutable</code> keyword (a short sketch follows this section).

The memory requirement of an algorithm is actually two separate but related things:
* The memory taken up by the compiled executable code itself (on disk or equivalent, depending on the hardware and language). This can often be reduced by preferring run-time decision-making mechanisms (such as [[virtual function]]s and [[run-time type information]]) over certain compile-time decision-making mechanisms (such as [[Macro (computer science)|macro substitution]] and [[template (programming)|templates]]). This, however, comes at the cost of speed.
* The amount of temporary (dynamic) memory allocated during processing. For example, dynamically pre-caching results, as mentioned above, improves speed at the cost of this attribute. Even the depth of sub-routine calls can impact heavily on this cost, and increase path length too, especially if the functions invoked have 'heavy' dynamic memory requirements of their own.
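The caching idiom can be sketched in a few lines of C++. The <code>Circle</code> class and its members below are invented for illustration (they are not from any particular library), and the area formula merely stands in for a genuinely expensive calculation; <code>mutable</code> allows the cached members to be updated even inside a <code>const</code> member function:

<source lang="cpp">
#include <cstdio>

// A minimal sketch of result caching, assuming C++11 or later.
// The class and its members are invented for this example; the
// area formula stands in for a genuinely expensive calculation.
const double kPi = 3.141592653589793;

class Circle {
public:
    explicit Circle(double radius) : radius_(radius) {}

    // Computed once on first use; later calls return the stored value.
    // 'mutable' lets the cache members change inside a const function.
    double area() const {
        if (!cached_) {
            area_   = kPi * radius_ * radius_;  // the 'expensive' step
            cached_ = true;
        }
        return area_;
    }

private:
    double radius_;
    mutable double area_   = 0.0;   // cached result
    mutable bool   cached_ = false; // has the result been computed yet?
};

int main() {
    Circle c(2.0);
    std::printf("%f\n", c.area());  // computes and caches
    std::printf("%f\n", c.area());  // reuses the cached value
    return 0;
}
</source>

The trade-off is exactly the one described above: each object carries a little extra storage in exchange for performing the calculation at most once.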
==Optimization techniques==
===Environment specific===
Optimization of algorithms frequently depends on the properties of the machine the algorithm will be executed on, as well as the language the algorithm is written in. For example, one might optimize code for time efficiency in applications for home computers with sizable amounts of memory, while code to be embedded in small, "memory-tight" devices may have to be made to run slower to stay within the limited memory available.

===General techniques===
* Table look-ups can be very expensive in terms of execution time, but their cost can be reduced significantly through efficient techniques such as indexed arrays and [[binary search]]es. Performing the full look-up on first occurrence and using the saved index thereafter is an obvious compromise.
* Use of [[Index (information technology)|index]]ed program branching, utilizing [[branch table]]s to control program flow (rather than multiple conditional IF statements or an unoptimized CASE/SWITCH), can drastically reduce instruction path length, simultaneously reduce program size, and even make a program easier to read and maintain - in effect it becomes a 'decision table' rather than repetitive spaghetti code. A sketch follows this list.
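As a rough illustration of the branch-table technique, the following C++ sketch dispatches on a small, dense set of operation codes through an array of function pointers; the opcodes and handler names are invented for the example. Whatever the opcode, dispatch costs one bounds check and one indexed call, where an equivalent IF chain grows linearly with the number of cases:

<source lang="cpp">
#include <cstdio>

// A minimal branch-table sketch, with invented opcodes and handlers.
// The opcode indexes directly into an array of function pointers, so
// dispatch cost does not depend on which case is taken.
static void op_add(void) { std::puts("add"); }
static void op_sub(void) { std::puts("subtract"); }
static void op_mul(void) { std::puts("multiply"); }

typedef void (*Handler)(void);

// The branch table itself: one entry per opcode, in opcode order.
static const Handler kHandlers[] = { op_add, op_sub, op_mul };
static const int kNumOps = sizeof(kHandlers) / sizeof(kHandlers[0]);

// Replaces a chain of 'if (opcode == ...)' tests with a bounds
// check and a single indexed call.
static void dispatch(int opcode) {
    if (opcode >= 0 && opcode < kNumOps)
        kHandlers[opcode]();
}

int main() {
    for (int op = 0; op < kNumOps; ++op)
        dispatch(op);
    return 0;
}
</source>

Compilers often generate exactly this kind of jump table from a dense SWITCH statement; writing the table explicitly simply makes the technique visible and guaranteed.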
===Hot spot analyzers===
Special system software products known as "performance analyzers" are often available from suppliers to help diagnose "[[hot spot]]s" during actual execution of computer programs using real data. They can pinpoint sections of the program that would benefit from programmer-targeted optimization, without time necessarily being spent on the rest of the code.

===Avoiding costs===
* Unnecessary use of allocated [[dynamic storage]] when [[Static memory allocation|static storage]] would suffice can increase the processing overhead substantially, increasing both memory requirements and the associated allocation/deallocation [[path length]] overhead for each [[function call]].
* Storage defined in terms of bits, when bytes would suffice, may inadvertently involve extremely long path lengths of [[bitwise operation]]s instead of more efficient single-instruction 'multiple byte' copies. (This does not apply to 'genuine' intentional bitwise operations - used, for example, instead of multiplication or division by powers of 2.)
* Overuse of function calls for very simple functions, rather than in-line statements, can also add substantially to path lengths and [[Stack (data structure)|stack]]/unstack overheads.

===Readability, trade offs and trends===
One must be careful, in the pursuit of good coding style, not to over-emphasize efficiency. Frequently, a clean, readable and 'usable' design is much more important than a fast, efficient design that is hard to understand. There are exceptions to this 'rule' (such as [[embedded system]]s, where space is tight and processing power minimal), but these are rarer than one might expect. However, for a growing number of 'time critical' applications - such as airline reservation systems, [[point-of-sale]] applications, cash-point machines, aircraft [[guidance system]]s, anti-collision software and numerous modern web-based applications operating in a [[real-time]] environment where speed of response is fundamental - there is little alternative.

===Determining if optimization is worthwhile===
The essential criterion for using optimized code is of course dependent on the expected use of the algorithm. If the algorithm is new, is going to be in use for many years, and speed is relevant, it is worth spending some time designing the code to be as efficient as possible from the outset. If an existing algorithm is proving too slow, or memory is becoming an issue, clearly something must be done to improve it. For the average application, or for one-off applications, avoiding inefficient coding techniques and encouraging the compiler to optimize where possible may be sufficient.

One simple way (at least for mathematicians!) to decide whether an optimization is worthwhile is as follows. Let the original time and space requirements (generally in Big-O notation) of the algorithm be <math>O_1</math> and <math>O_2</math>, and let the new code require <math>N_1</math> time and <math>N_2</math> space. If <math>N_1 N_2 < O_1 O_2</math>, the optimization should be carried out. However, as mentioned above, such a rule of thumb ignores readability and other trade-offs, so it may not always give the right answer.

==See also==
* [[Binary search algorithm]] - a simple and efficient technique for searching sorted [[array]]s
* [[Branch table]] - a technique for reducing instruction path length, the size of machine code, and often memory as well
* [[Computational complexity theory]]
* [[Index (information technology)]] - a technique for fast lookup using indexes
* [[Memory Locality]]
* [[Real-time computing]] - for further examples of time-critical applications

[[Category:Information technology]]
[[Category:Analysis of algorithms]]

{{Comp-sci-stub}}

[[he:יעילות אלגוריתמית]]
[[ro:Eficienţa algoritmilor]]