# Poker Face

Insertion sort iterates, consuming one input element each repetition, and grows a sorted output list. At each iteration, insertion sort removes one element from the input data, finds the location it belongs within the sorted list, and inserts it there. It repeats until no input elements remain.

Sorting is typically done in place: the algorithm iterates up the array, growing the sorted list behind it. At each position, it checks the value there against the largest value in the sorted list (which sits next to it, in the previous position checked). If larger, it leaves the element in place and…
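The in-place variant described above might be sketched like this (the function name `insertionSort` is illustrative):

```javascript
// In-place insertion sort: everything left of index `i` is already sorted.
function insertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {
    const current = arr[i];
    let j = i - 1;
    // Shift larger sorted elements one slot right to open a gap.
    while (j >= 0 && arr[j] > current) {
      arr[j + 1] = arr[j];
      j--;
    }
    arr[j + 1] = current; // drop the element into its gap
  }
  return arr;
}
```

Note that when `arr[i]` is already larger than the last sorted element, the inner loop never runs and the element stays put, matching the description above.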

# To The Left, To The Left

In computer science, selection sort is an in-place comparison sorting algorithm. It has an O(n²) time complexity, which makes it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited.

One thing which distinguishes selection sort from other sorting algorithms is that it makes the minimum possible number of swaps, n − 1 in the worst case. (Wikipedia)

Selection Sort improves on the bubble sort by making only one exchange for every pass…
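The one-exchange-per-pass behavior might look like this (the function name `selectionSort` is illustrative):

```javascript
// Selection sort: each pass finds the minimum of the unsorted region
// and performs at most one swap to place it at its final position.
function selectionSort(arr) {
  for (let i = 0; i < arr.length - 1; i++) {
    let minIndex = i;
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[j] < arr[minIndex]) minIndex = j;
    }
    // Swap only when the minimum is not already in place,
    // giving at most n - 1 swaps in total.
    if (minIndex !== i) [arr[i], arr[minIndex]] = [arr[minIndex], arr[i]];
  }
  return arr;
}
```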

# Rise To The Top

Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted. The algorithm, which is a comparison sort, is named for the way smaller or larger elements “bubble” to the top of the list.

This simple algorithm performs poorly in real world use and is used primarily as an educational tool. (Wikipedia)

Bubble Sort is not often used in production code because it uses a nested loop…
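The nested loop mentioned above might be sketched as follows (the `swapped` early exit is a common optimization, not required by the basic algorithm):

```javascript
// Bubble sort: repeatedly swap adjacent out-of-order pairs; each pass
// "bubbles" the largest remaining element to the end of the array.
function bubbleSort(arr) {
  for (let i = arr.length - 1; i > 0; i--) {
    let swapped = false;
    for (let j = 0; j < i; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps means the list is already sorted
  }
  return arr;
}
```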

# Look Both Ways

This technique is a unique form of Binary Search used to search an array for an element or condition which requires accessing the current index AND its immediate left and right neighbors' indices.

This technique uses the element’s neighbors to determine if the condition is met and to decide to go left or right. It also guarantees that the searchable subsection has at least a size of 3 at each step.

## Find Your Way — Distinguishing Syntax

• Initial Condition: `left = 0, right = nums.length - 1`
• Termination: `left + 1 === right`
• Searching Right: `left = mid`
• Searching Left…
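As one concrete instance of this template, consider peak finding (a hypothetical example problem: return an index whose value is larger than both neighbors). The `left + 1 < right` loop condition is what keeps the searchable subsection at size 3 or more:

```javascript
// "Look both ways" template applied to peak finding.
// Assumes nums.length >= 3; checks both neighbors of mid each step.
function findPeak(nums) {
  let left = 0, right = nums.length - 1;
  while (left + 1 < right) {          // terminates when left + 1 === right
    const mid = Math.floor((left + right) / 2);
    if (nums[mid] > nums[mid - 1] && nums[mid] > nums[mid + 1]) {
      return mid;                     // mid beats both neighbors: a peak
    }
    if (nums[mid] < nums[mid + 1]) {
      left = mid;                     // slope rises to the right: search right
    } else {
      right = mid;                    // otherwise search left
    }
  }
  // Only two candidates remain: return the larger one.
  return nums[left] > nums[right] ? left : right;
}
```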

This technique is an advanced form of Binary Search used to search an array for an element or condition which requires accessing the current index AND its immediate right neighbor's index.

This technique uses the element’s right neighbor to determine if the condition is met and to decide to go left or right. It also guarantees that the searchable subsection has at least a size of 2 at each step.

## Right On — Distinguishing Syntax

• Initial Condition: `left = 0, right = nums.length`
• Termination: `left === right`
• Searching Left: `right = mid`
• Searching Right: `left = mid + 1`
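One common instantiation of this template is a first-index search (the predicate `condition` and the function name are illustrative, not from the original): find the first index whose element satisfies a condition, or `nums.length` if none does.

```javascript
// "Right on" template: right starts past the end, and the loop
// terminates when left === right, which is the answer index.
function firstTrueIndex(nums, condition) {
  let left = 0, right = nums.length;
  while (left < right) {
    const mid = Math.floor((left + right) / 2);
    if (condition(nums[mid])) {
      right = mid;                // mid might be the answer: search left
    } else {
      left = mid + 1;             // answer must lie to the right
    }
  }
  return left;
}
```

Because `right` can equal `nums.length`, the template naturally expresses "not found" without a sentinel check inside the loop.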

# Half You Heard?

Binary search optimizes algorithms that need to search for an index or element in a sorted collection. Problems like these could be solved with a linear search, which produces a time complexity of O(n). Binary search improves the time complexity to O(log n) because the collection size is halved after each comparison.
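The classic halving described above might be sketched like this (the function name `binarySearch` is illustrative):

```javascript
// Classic binary search on a sorted array: returns the index of
// `target`, or -1 when it is absent. Each comparison halves the range.
function binarySearch(nums, target) {
  let left = 0, right = nums.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (nums[mid] === target) return mid;
    if (nums[mid] < target) left = mid + 1; // discard the left half
    else right = mid - 1;                   // discard the right half
  }
  return -1;
}
```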

# Call Again

In computer science, recursion is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. Such problems can generally be solved by iteration, but this needs to identify and index the smaller instances at programming time. Recursion solves such recursive problems by using functions that call themselves from within their own code. (Wikipedia)

With the recursion technique, we can replace code written iteratively with clean and simple code. Recursion does not optimize for performance; instead, it prioritizes readability. …
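A small example of a problem whose solution depends on a smaller instance of itself, summing an array (the function name `sum` is illustrative):

```javascript
// Recursive sum: the sum of n elements is the first element plus
// the sum of the remaining n - 1 elements.
function sum(nums) {
  if (nums.length === 0) return 0;      // base case: nothing left to add
  return nums[0] + sum(nums.slice(1));  // recurse on the smaller instance
}
```

Note the readability trade-off the text describes: an iterative loop would avoid the extra call frames and array copies, but the recursive version mirrors the problem's definition directly.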

# Move Along

The dynamic-size sliding window pattern optimizes algorithms that involve searching an array or string for a consecutive subsection that satisfies a given condition. Unlike the fixed-size sliding window, this window’s size changes as it moves along the data structure.

To solve problems like these, we could apply a brute force solution with a nested loop, but it would produce O(n²) time complexity at best. Applying the dynamic-size sliding window pattern can reduce the time complexity to O(n) and space complexity to O(1).

## Moving on

In this pattern, two pointers create a window that represents the current subsection of the data structure. One…
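As a sketch of the two-pointer window, here is one hypothetical example problem: the length of the shortest consecutive subsection whose sum reaches a target (function and variable names are illustrative).

```javascript
// Dynamic-size sliding window: grow from the right, shrink from the
// left whenever the condition (sum >= target) is satisfied.
function minSubarrayLen(target, nums) {
  let left = 0, windowSum = 0, best = Infinity;
  for (let right = 0; right < nums.length; right++) {
    windowSum += nums[right];               // expand the window rightward
    while (windowSum >= target) {           // condition met: try to shrink
      best = Math.min(best, right - left + 1);
      windowSum -= nums[left++];            // contract from the left
    }
  }
  return best === Infinity ? 0 : best;      // 0 when no window qualifies
}
```

Each index enters and leaves the window at most once, which is why the pattern achieves O(n) time despite the nested `while`.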

# Okay, Move Along

The fixed-size sliding window pattern optimizes algorithms that involve searching an array or string for a consecutive subsection, of a given size, that satisfies a given condition. It can also be considered a variation of the two pointers pattern.

To solve problems like these, we could apply a brute force solution with a nested loop, but it would produce O(n²) time complexity at best. Applying the fixed-size sliding window pattern can reduce the time complexity to O(n) and space complexity to O(1).

## Let’s get a move on

In this pattern, two pointers create a window that represents the current subsection of the data structure. One…
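A sketch of the fixed-size version, using a hypothetical example problem: the maximum sum of any consecutive subsection of size `k` (function and variable names are illustrative).

```javascript
// Fixed-size sliding window: compute the first window's sum, then
// slide by adding the entering element and dropping the leaving one.
function maxSumOfSizeK(nums, k) {
  let windowSum = 0;
  for (let i = 0; i < k; i++) windowSum += nums[i]; // first window of size k
  let best = windowSum;
  for (let right = k; right < nums.length; right++) {
    windowSum += nums[right] - nums[right - k];     // slide one step right
    best = Math.max(best, windowSum);
  }
  return best;
}
```

The slide replaces a full O(k) re-sum per position with O(1) work, which is where the O(n) time and O(1) space come from.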