Complexity of algorithms: Difference between revisions
Revision as of 17:13, 21 February 2007
In computer science, Big O Notation is a notation for expressing the worst-case execution time for an algorithm.
It is generally written as O(f(n)), where f is a function of zero or more parameters of the system. It roughly states that in the asymptotic case (i.e. as one or more of the system parameters extend towards their extremes), the actual execution time of the algorithm is bounded by K·f(n) for some positive constant K.
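As a concrete (illustrative, not from the article) check of this definition: a step count of T(n) = 3n² + 5n is O(n²), because choosing K = 4 gives 3n² + 5n ≤ 4n² whenever n ≥ 5.

```python
# Hypothetical step-count function: suppose an algorithm performs
# T(n) = 3*n**2 + 5*n elementary operations on an input of size n.
def T(n):
    return 3 * n**2 + 5 * n

# T is O(n^2): with K = 4, T(n) <= K * n**2 for all n >= 5,
# since 3n^2 + 5n <= 4n^2 exactly when 5n <= n^2, i.e. n >= 5.
K = 4
assert all(T(n) <= K * n**2 for n in range(5, 10_000))
```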
Big O Notation is attractive to computer scientists because it expresses the complexity of an algorithm without restricting the statement to a particular computer architecture or time period; even if computers get faster, or if the algorithm is moved from a PC to an Apple, the worst-case execution time of the algorithm does not change. Additionally, when comparing two algorithms, it is readily apparent which will execute faster in the extreme case.
Most commonly, one will see
- Polynomial time algorithms:
- O(1) --- Constant time --- the time necessary to perform the algorithm does not change in response to the size of the problem.
- O(n) --- Linear time --- the time grows linearly with the size (n) of the problem.
- O(n^2) --- Quadratic time --- the time grows quadratically with the size (n) of the problem. Please note that, by convention, one only writes the highest-degree term for polynomial time.
- Sub-polynomial time algorithms (grow slower than polynomial time algorithms):
- O(log n) --- Logarithmic time
- Super-polynomial time algorithms (grow faster than polynomial time algorithms).
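The classes above can be illustrated with short sketches (these function names are invented for illustration) that show how the number of basic steps scales with the input size n:

```python
# Illustrative examples of the common complexity classes.

def constant_time(items):
    # O(1): one step, regardless of len(items)
    return items[0]

def linear_time(items):
    # O(n): touches each of the n elements exactly once
    total = 0
    for x in items:
        total += x
    return total

def quadratic_time(items):
    # O(n^2): examines every ordered pair of elements,
    # performing n * n steps in total
    pairs = 0
    for x in items:
        for y in items:
            pairs += 1
    return pairs

data = list(range(100))
assert quadratic_time(data) == len(data) ** 2  # n^2 steps for n elements
```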
Although the examples above use a single parameter, Big O Notation can express execution time as a function of multiple parameters.
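For instance (a sketch, not from the article), summing the cells of an m-by-n grid takes O(m·n) time, a bound in both parameters at once:

```python
# Hypothetical multi-parameter example: the running time depends on
# both the number of rows (m) and the number of columns (n).
def grid_sum(grid):
    total = 0
    for row in grid:       # m rows
        for cell in row:   # n cells per row -> m * n steps in total
            total += cell
    return total

m, n = 3, 4
grid = [[1] * n for _ in range(m)]
assert grid_sum(grid) == m * n
```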