What is Big O Notation?

It is a very common job interview question for developers. Big O notation describes the scalability of an algorithm: how its running time grows as the size of the input grows. In short, it is a mathematical expression of how long an algorithm takes to run depending on the size of its input, usually for the worst-case scenario.

Let’s now explore the most common types of Big O notation, using JavaScript as our reference language.

Constant-Time Algorithm

O(1) — “Order 1”

An algorithm is said to run in constant time (also written as O(1) time) if its running time T(n) is bounded by a value that does not depend on the size of the input. For example, accessing any single element of an array takes constant time, as only one operation has to be performed to locate it.

// getItem uses a 1-based position, so it reads items[index - 1]
const getItem = (items, index) => items[index - 1];
const items = [1, 7, 9, 7, 4, 3, 2, 8];
const thirdItem = getItem(items, 3); // 9
const seventhItem = getItem(items, 7); // 2
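Array access by index is not the only common constant-time operation. As a quick sketch of my own (not from the original text), a lookup in a JavaScript Map is also effectively O(1), no matter how many entries the map holds:

```javascript
// Map lookups take (amortized) constant time regardless of map size:
// one hash-based lookup, not a scan of all entries.
const ages = new Map([
  ['alice', 30],
  ['bob', 25],
]);

const bobAge = ages.get('bob'); // 25
```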

Linear-Time Algorithm

O(N) — “Order N”

An algorithm is said to take linear time, or O(n) time, when its running time grows at most linearly with the size of the input: doubling the input roughly doubles the work. A linear search is a good example.

const rainbow = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'];

function linearSearch(arr, elToFind) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === elToFind) {
      return i;
    }
  }
  return null;
}

linearSearch(rainbow, 'green'); // returns 3
linearSearch(rainbow, 'white'); // returns null

Quadratic-Time Algorithm

O(N²) — “Order N squared”

Quadratic time complexity describes an algorithm whose running time is directly proportional to the square of the size of the input (think of linear, but squared). In our programs, this complexity occurs whenever we nest one iteration over a data set inside another. Example:

const buildSquareMatrix = items => {
  let matrix = [];
  for (let i = 0, total = items.length; i < total; i++) {
    matrix[i] = [];
    for (let j = 0, total = items.length; j < total; j++) {
      matrix[i].push(items[j]);
    }
  }
  return matrix;
};

buildSquareMatrix(['a', 'b', 'c']);
/* 9 iterations for 3 elements, returns:
[
  ['a', 'b', 'c'],
  ['a', 'b', 'c'],
  ['a', 'b', 'c']
]
*/
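Another common source of quadratic time is comparing every pair of elements. Bubble sort is a classic example of this nested-loop pattern; the sketch below is my own illustration, not part of the original text:

```javascript
// Bubble sort: for each element (outer loop), sweep the rest of the
// array (inner loop), swapping out-of-order neighbours — O(n²) comparisons.
function bubbleSort(arr) {
  const result = [...arr]; // copy so the input is left untouched
  for (let i = 0; i < result.length; i++) {
    for (let j = 0; j < result.length - 1 - i; j++) {
      if (result[j] > result[j + 1]) {
        [result[j], result[j + 1]] = [result[j + 1], result[j]];
      }
    }
  }
  return result;
}

bubbleSort([5, 1, 4, 2]); // [1, 2, 4, 5]
```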

Logarithmic-Time Algorithm

O(log n) — “Order log N”

O(log N) basically means the running time goes up linearly while n goes up exponentially. So if it takes 1 second to process 10 elements, it will take 2 seconds for 100 elements, 3 seconds for 1,000 elements, and so on.

We get O(log n) from divide-and-conquer style algorithms that shrink the problem by a constant factor at each step, e.g. binary search.
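To see why halving produces a logarithm, consider this small helper (my illustration, not from the article): it counts how many times n can be halved before reaching 1, which is exactly the number of steps a halving algorithm performs, i.e. roughly log₂ n.

```javascript
// Count how many times n can be halved before reaching 1.
// For n = 1024 this is 10, because 2^10 = 1024 — the running time of a
// halving algorithm grows by one step each time the input doubles.
function halvingSteps(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
}

halvingSteps(8);    // 3
halvingSteps(1024); // 10
```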

Quicksort, below, is a classic divide-and-conquer algorithm. Note, though, that its average running time is O(n log n), not O(log n): only the depth of the recursion is logarithmic.

function quicksort(array) {
  if (array.length <= 1) {
    return array;
  }
  const pivot = array[0];
  const left = [];
  const right = [];
  for (let i = 1; i < array.length; i++) {
    array[i] < pivot ? left.push(array[i]) : right.push(array[i]);
  }
  return quicksort(left).concat(pivot, quicksort(right));
}

const unsorted = [23, 45, 16, 37, 3, 99, 22];
const sorted = quicksort(unsorted);
console.log('Sorted array', sorted);

You will eventually come across linearithmic time, O(n log n), algorithms. The rule of thumb above applies again, but this time the logarithmic part has to run n times, e.g. reducing the size of a list n times, which occurs in algorithms like mergesort.
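Since mergesort is mentioned above, here is a minimal sketch of it (my own illustrative implementation, not the article's): the list is halved recursively, giving log n levels of recursion, and each level merges all n elements, for O(n log n) total work.

```javascript
// Mergesort: split the array in half recursively (log n levels),
// then merge the sorted halves back together (O(n) work per level).
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}

// Merge two already-sorted arrays into one sorted array.
function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return result.concat(left.slice(i), right.slice(j));
}

mergeSort([23, 45, 16, 37, 3, 99, 22]); // [3, 16, 22, 23, 37, 45, 99]
```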

You can often identify an O(n log n) algorithm by its loop structure. Look for an outer loop that iterates through a list (O(n)), then check the inner loop: if it cuts or reduces the data set by a constant factor on each iteration, that loop is O(log n), and the overall algorithm is O(n log n).
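The loop structure described above can be sketched directly. In this illustrative function (assumed for demonstration, not from the article), the outer loop runs n times and the inner loop halves its counter each iteration, so the total number of operations is n × log₂ n:

```javascript
// Outer loop: O(n). Inner loop: halves j each time, so O(log n).
// Total operations: n * floor(log2(n)) — the O(n log n) pattern.
function nLogNOperations(n) {
  let operations = 0;
  for (let i = 0; i < n; i++) {
    for (let j = n; j > 1; j = Math.floor(j / 2)) {
      operations++;
    }
  }
  return operations;
}

nLogNOperations(8); // 24, i.e. 8 outer iterations * 3 halvings
```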

Binary search itself is a pure O(log n) algorithm: it requires a sorted array, and it halves the remaining search range with every comparison.

function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    const mid = left + Math.floor((right - left) / 2);
    if (arr[mid] === target) {
      return mid;
    }
    if (arr[mid] < target) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }
  return -1;
}

const sortedNumbers = [3, 16, 22, 23, 37, 45, 99];
binarySearch(sortedNumbers, 37); // returns 4
binarySearch(sortedNumbers, 50); // returns -1