Insertion sort iterates, consuming one input element each repetition, and grows a sorted output list. At each iteration, insertion sort removes one element from the input data, finds the location it belongs within the sorted list, and inserts it there. It repeats until no input elements remain. Sorting is typically done in-place, by iterating up the array, growing the sorted list behind it. At each array-position, it checks the value there against the largest value in the sorted list (which happens to be next to it, in the previous array-position checked). If larger, it leaves the element in place and…
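The in-place procedure described above can be sketched in JavaScript (a minimal sketch; the function name is illustrative):

```javascript
// Insertion sort: grow a sorted prefix, inserting each new element
// into its correct position by shifting larger elements one slot right.
function insertionSort(nums) {
  for (let i = 1; i < nums.length; i++) {
    const current = nums[i];
    let j = i - 1;
    // Shift sorted-prefix elements larger than current to the right
    while (j >= 0 && nums[j] > current) {
      nums[j + 1] = nums[j];
      j--;
    }
    nums[j + 1] = current; // insert current into its position
  }
  return nums;
}
```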

In computer science, selection sort is an in-place comparison sorting algorithm. It has an O(n²) time complexity, which makes it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited. One thing which distinguishes selection sort from other sorting algorithms is that it makes the minimum possible number of swaps, n − 1 in the worst case. (Wikipedia)

Selection Sort improves on the bubble sort by making only one exchange for every pass…
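A minimal JavaScript sketch of selection sort, showing the single exchange per pass (the function name is illustrative):

```javascript
// Selection sort: each pass finds the minimum of the unsorted suffix
// and swaps it into place — at most one swap per pass.
function selectionSort(nums) {
  for (let i = 0; i < nums.length - 1; i++) {
    let minIndex = i;
    for (let j = i + 1; j < nums.length; j++) {
      if (nums[j] < nums[minIndex]) minIndex = j;
    }
    if (minIndex !== i) {
      [nums[i], nums[minIndex]] = [nums[minIndex], nums[i]];
    }
  }
  return nums;
}
```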

Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted. The algorithm, which is a comparison sort, is named for the way smaller or larger elements “bubble” to the top of the list. This simple algorithm performs poorly in real world use and is used primarily as an educational tool. (Wikipedia)

Bubble Sort is not often used in production code because it uses a nested loop…
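For reference, a minimal sketch of bubble sort in JavaScript, including the common early-exit optimization (stop once a full pass makes no swaps):

```javascript
// Bubble sort: repeatedly swap adjacent out-of-order pairs, so the
// largest remaining element "bubbles" to the end of each pass.
function bubbleSort(nums) {
  for (let end = nums.length - 1; end > 0; end--) {
    let swapped = false;
    for (let i = 0; i < end; i++) {
      if (nums[i] > nums[i + 1]) {
        [nums[i], nums[i + 1]] = [nums[i + 1], nums[i]];
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps means the list is already sorted
  }
  return nums;
}
```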

This technique is a unique form of Binary Search used to search an array for an element or condition which requires:

**accessing the current index AND its immediate left and right neighbors' indices**.

This technique uses the element’s neighbors to determine if the condition is met and to decide to go left or right. It also guarantees that the searchable subsection has at least a size of 3 at each step.

- Initial Condition:
`left = 0, right = nums.length - 1`

- Termination:
`left + 1 === right`

- Searching Right:
`left = mid`

- Searching Left…
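As a sketch, this template can be applied to the classic peak-finding problem (an element greater than both neighbors). The truncated left-search step is assumed to mirror the right-search step, i.e. `right = mid`; the function name is illustrative:

```javascript
// Three-element binary search template: the loop condition keeps the
// window at size >= 3, so mid always has both neighbors in bounds.
function findPeak(nums) {
  let left = 0, right = nums.length - 1;
  while (left + 1 < right) { // terminate when left + 1 === right
    const mid = Math.floor((left + right) / 2);
    if (nums[mid] > nums[mid - 1] && nums[mid] > nums[mid + 1]) {
      return mid; // condition met: mid is a peak
    }
    if (nums[mid] < nums[mid + 1]) left = mid;  // searching right
    else right = mid;                           // searching left (assumed)
  }
  // Window is down to two candidates: pick the larger
  return nums[left] > nums[right] ? left : right;
}
```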

This technique is an advanced form of Binary Search used to search an array for an element or condition which requires:

**accessing the current index AND its immediate right neighbor's index**.

This technique uses the element’s right neighbor to determine if the condition is met and to decide to go left or right. It also guarantees that the searchable subsection has at least a size of 2 at each step.

- Initial Condition:
`left = 0, right = nums.length`

- Termination:
`left === right`

- Searching Left:
`right = mid`

- Searching Right:
`left = mid + 1`
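A sketch applying this template to a lower-bound search: finding the first index whose value is at least `target` (the function name is illustrative):

```javascript
// Two-element binary search template: right starts one past the end,
// and the loop terminates when left === right.
function lowerBound(nums, target) {
  let left = 0, right = nums.length;
  while (left < right) {
    const mid = Math.floor((left + right) / 2);
    if (nums[mid] >= target) right = mid; // condition met: searching left
    else left = mid + 1;                  // searching right
  }
  return left; // first index with nums[index] >= target, or nums.length
}
```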

Binary search optimizes algorithms that need to search for an index or element in a sorted collection. Problems like these could be solved with a linear search, which produces a time complexity of O(n). Binary search improves the time complexity to O(log n) because the collection size is halved after each comparison.
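For reference, a classic binary search over a sorted array, halving the search space after each comparison (a minimal sketch):

```javascript
// Classic binary search: returns the index of target, or -1 if absent.
function binarySearch(nums, target) {
  let left = 0, right = nums.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (nums[mid] === target) return mid;
    if (nums[mid] < target) left = mid + 1; // discard the left half
    else right = mid - 1;                   // discard the right half
  }
  return -1;
}
```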

In computer science, recursion is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. Such problems can generally be solved by iteration, but this needs to identify and index the smaller instances at programming time. Recursion solves such recursive problems by using functions that call themselves from within their own code. (Wikipedia)

With the recursion technique, we can replace code written iteratively with clean and simple code. Recursion does not optimize for performance; instead, it prioritizes readability. …
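A minimal illustration: computing a factorial recursively, where each call depends on the solution to a smaller instance of the same problem:

```javascript
// Recursion requires a base case (to stop) and a recursive case
// (the same problem on a smaller input).
function factorial(n) {
  if (n <= 1) return 1;        // base case
  return n * factorial(n - 1); // recursive case: smaller instance
}
```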

The dynamic-size sliding window pattern optimizes algorithms that involve searching an array or string for a consecutive subsection that satisfies a given condition. Unlike the fixed-size sliding window, this window’s size **changes** as it moves along the data structure.

To solve problems like these, we could apply a brute force solution with a nested loop, but it would produce O(n²) time complexity at best. Applying the dynamic-size sliding window pattern can reduce the time complexity to O(n) and space complexity to O(1).

In this pattern, two pointers create a window that represents the current subsection of the data structure. One…
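As a sketch, the dynamic-size window applied to a common example problem: the shortest subarray whose sum is at least `target` (names are illustrative):

```javascript
// Dynamic-size sliding window: grow from the right, shrink from the
// left while the condition (sum >= target) still holds.
function minSubarrayLen(target, nums) {
  let left = 0, sum = 0, best = Infinity;
  for (let right = 0; right < nums.length; right++) {
    sum += nums[right];        // expand the window
    while (sum >= target) {    // condition met: try to shrink it
      best = Math.min(best, right - left + 1);
      sum -= nums[left++];
    }
  }
  return best === Infinity ? 0 : best;
}
```

Each element enters and leaves the window at most once, which is why the total work stays O(n).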

The fixed-size sliding window pattern optimizes algorithms that involve searching an array or string for a consecutive subsection, of a given size, that satisfies a given condition. It can also be considered a variation of the two pointers pattern.

To solve problems like these, we could apply a brute force solution with a nested loop, but it would produce O(n²) time complexity at best. Applying the fixed-size sliding window pattern can reduce the time complexity to O(n) and space complexity to O(1).

In this pattern, two pointers create a window that represents the current subsection of the data structure. One…
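As a sketch, the fixed-size window applied to a common example: the maximum sum of any subarray of size `k`. Instead of re-summing every window, the sum is updated in O(1) by adding the entering element and removing the leaving one (names are illustrative):

```javascript
// Fixed-size sliding window: the window always spans k elements.
function maxSumOfK(nums, k) {
  let windowSum = 0;
  for (let i = 0; i < k; i++) windowSum += nums[i]; // first window
  let best = windowSum;
  for (let i = k; i < nums.length; i++) {
    windowSum += nums[i] - nums[i - k]; // slide: add new, drop old
    best = Math.max(best, windowSum);
  }
  return best;
}
```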

The two pointers pattern optimizes algorithms that handle strings, sorted arrays, or linked lists by using two pointers to keep track of indices, simplify the logic, and save time and space. In this pattern, two pointers iterate through the data structure, in tandem, until one or both of the pointers hit a certain condition.

This pattern is useful when we need to analyze each element of a collection compared to its other elements.

To solve problems like these, we could start from the first index and loop through one or more times. While a brute force or naive solution with…
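As a sketch, the classic two-pointer example: finding a pair that sums to a target in a sorted array, with the pointers converging from both ends (names are illustrative):

```javascript
// Two pointers on a sorted array: move whichever pointer brings the
// pair sum closer to the target, until the pointers meet.
function twoSumSorted(nums, target) {
  let left = 0, right = nums.length - 1;
  while (left < right) {
    const sum = nums[left] + nums[right];
    if (sum === target) return [left, right];
    if (sum < target) left++; // need a larger sum
    else right--;             // need a smaller sum
  }
  return [-1, -1]; // no such pair
}
```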