Photo by Гоар Авдалян on Unsplash

Searching Algorithms — Binary Search: Technique I

Binary search optimizes algorithms that need to search for an index or element in a sorted collection. Problems like these could be solved with a linear search, which produces a time complexity of O(n). Binary search improves the time complexity to O(log n) because the search range is halved after each comparison.
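A minimal sketch of the idea (the function name and parameters here are illustrative, not from the article):

```javascript
// Iterative binary search: returns the index of `target` in the
// sorted array `arr`, or -1 if it is not present.
function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    // Midpoint of the current search range
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) {
      left = mid + 1; // discard the left half
    } else {
      right = mid - 1; // discard the right half
    }
  }
  return -1; // target not found
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3
```

Each pass through the loop throws away half of the remaining range, which is where the O(log n) comes from.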


Photo by Wesley Hilario on Unsplash

The Recursion Technique — JavaScript

In computer science, recursion is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. Such problems can generally be solved by iteration, but this needs to identify and index the smaller instances at programming time. Recursion solves such recursive problems by using functions that call themselves from within their own code. (Wikipedia)

With the recursion technique, we can replace code written iteratively with clean and simple code. Recursion does not optimize for performance; instead, it prioritizes readability. …
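As a quick sketch of the technique (factorial is a hypothetical example, not taken from the article), a function calls itself on a smaller instance of the same problem until it hits a base case:

```javascript
// Recursive factorial: the solution for n depends on the
// solution for the smaller instance n - 1.
function factorial(n) {
  if (n <= 1) return 1; // base case stops the recursion
  return n * factorial(n - 1); // recursive case: smaller instance
}

console.log(factorial(5)); // 120
```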


Photo by Alain Wong on Unsplash

Dynamic-Size Sliding Window Pattern/Technique

The dynamic-size sliding window pattern optimizes algorithms that involve searching an array or string for a consecutive subsection that satisfies a given condition. Unlike the fixed-size sliding window, this window’s size changes as it moves along the data structure.

To solve problems like these, we could apply a brute force solution with a nested loop, but it would produce at least O(n²) time complexity. Applying the dynamic-size sliding window pattern can reduce the time complexity to O(n) and space complexity to O(1).
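A minimal sketch of the pattern (the function name and problem choice are illustrative): find the length of the shortest contiguous subarray whose sum is at least a target.

```javascript
// Dynamic-size sliding window: the window grows from the right and
// shrinks from the left, so its size changes as it moves.
function minSubarrayLength(nums, target) {
  let minLen = Infinity;
  let windowSum = 0;
  let start = 0;
  for (let end = 0; end < nums.length; end++) {
    windowSum += nums[end]; // grow the window to the right
    while (windowSum >= target) {
      // shrink from the left while the condition still holds
      minLen = Math.min(minLen, end - start + 1);
      windowSum -= nums[start];
      start++;
    }
  }
  return minLen === Infinity ? 0 : minLen;
}

console.log(minSubarrayLength([2, 3, 1, 2, 4, 3], 7)); // 2 (subarray [4, 3])
```

Both pointers only ever move forward, so each element is visited at most twice: O(n) time, O(1) space.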

Moving on

In this pattern, two pointers create a window that represents the current subsection of the data structure. One…


Photo by Eric Prouzet on Unsplash

The Fixed-Size Sliding Window Pattern/Technique

The fixed-size sliding window pattern optimizes algorithms that involve searching an array or string for a consecutive subsection of a given size that satisfies a given condition. It can also be considered a variation of the two pointers pattern.

To solve problems like these, we could apply a brute force solution with a nested loop, but it would produce at least O(n²) time complexity. Applying the fixed-size sliding window pattern can reduce the time complexity to O(n) and space complexity to O(1).
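A minimal sketch (the function name and problem choice are illustrative): the maximum sum of any contiguous subarray of size k.

```javascript
// Fixed-size sliding window: instead of re-summing each window,
// add the element entering on the right and subtract the one
// leaving on the left.
function maxSubarraySum(nums, k) {
  let windowSum = 0;
  // Sum the first window of size k
  for (let i = 0; i < k; i++) windowSum += nums[i];
  let maxSum = windowSum;
  // Slide the window one position at a time
  for (let end = k; end < nums.length; end++) {
    windowSum += nums[end] - nums[end - k];
    maxSum = Math.max(maxSum, windowSum);
  }
  return maxSum;
}

console.log(maxSubarraySum([2, 1, 5, 1, 3, 2], 3)); // 9 (subarray [5, 1, 3])
```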

Let’s get a move on

In this pattern, two pointers create a window that represents the current subsection of the data structure. One…


Photo by JESHOOTS.COM on Unsplash

The Two Pointers Pattern/Technique

The two pointers pattern optimizes algorithms that handle strings, sorted arrays, or linked lists by using two pointers to keep track of indices, simplify the logic, and save time and space. In this pattern, two pointers iterate through the data structure in tandem until one or both of the pointers hit a certain condition.

What’s the point?

This pattern is useful when we need to analyze each element of a collection in relation to the other elements.
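A minimal sketch (the function name and problem choice are illustrative): find a pair in a sorted array that adds up to a target, walking two pointers toward each other in a single pass.

```javascript
// Two pointers on a sorted array: returns the indices of a pair
// summing to `target`, or null if no such pair exists.
function pairWithSum(sorted, target) {
  let left = 0;
  let right = sorted.length - 1;
  while (left < right) {
    const sum = sorted[left] + sorted[right];
    if (sum === target) return [left, right];
    if (sum < target) {
      left++; // need a larger sum
    } else {
      right--; // need a smaller sum
    }
  }
  return null; // no pair found
}

console.log(pairWithSum([1, 2, 4, 6, 10], 8)); // [1, 3] (2 + 6)
```

Because the array is sorted, each comparison safely rules out one element, so the whole scan is O(n) instead of the O(n²) of checking every pair.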

To solve problems like these, we could start from the first index and loop through one or more times. While a brute force or naive solution with…


Photo by Jeff Fielitz on Unsplash

Big O Notation — JavaScript

In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.

We use Big O Notation to analyze the performance of an algorithm.

Big O Notation:

  • gives us a high-level understanding of an algorithm’s complexity
  • gives us a generalized way to talk about how efficient an algorithm is
  • only cares about general trends (e.g. linear? quadratic? constant?)
  • gives us the worst-case scenario/upper bound
  • depends only on the algorithm, not the hardware used to run it
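To make those general trends concrete, here is a small sketch (hypothetical example, not from the article) of two algorithms for the same task with different complexities:

```javascript
// Two ways to sum the integers 1..n.

// O(n): the number of additions grows linearly with n.
function sumLoop(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i; // n additions
  return total;
}

// O(1): one calculation, regardless of how large n is.
function sumFormula(n) {
  return (n * (n + 1)) / 2;
}

console.log(sumLoop(100)); // 5050
console.log(sumFormula(100)); // 5050
```

Both produce the same answer, but Big O tells us the second one's run time stays flat as n grows while the first one's grows linearly.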

Time Complexity

How can we analyze the run time of an algorithm…


Photo by Jon Tyson on Unsplash

JavaScript Increment/Decrement Operators — Prefix vs. Postfix

The increment and decrement operators are used to change a variable's value by 1. The increment operator (++) increases the value by 1 and the decrement operator (--) decreases the value by 1.

Prefix

If used as a prefix (++i or --i): FIRST the variable's value is changed, THEN the variable is used. The expression returns the value AFTER incrementing/decrementing.

let count = 7
console.log(++count) // 8

Postfix

If used as a postfix (i++ or i--): FIRST the variable is used, THEN the variable's value is changed. The expression returns the value BEFORE incrementing/decrementing.

let count = 7
console.log(count++) // 7

What makes these operators so smooth?

Let’s look at a JavaScript…


Photo by Sharon McCutcheon on Unsplash

Ruby on Rails — ActiveRecord Scopes

In computer science, syntactic sugar is syntax within a programming language that is designed to make things easier to read or to express. It makes the language “sweeter” for human use: things can be expressed more clearly, more concisely, or in an alternative style that some may prefer. (Wikipedia)

Scopes are syntactic sugar for defining class methods in Rails.

ActiveRecord scopes are custom queries defined inside a model and are available as class methods. Scopes take 2 arguments:

  1. a name
  2. a lambda
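As a quick sketch (the Post model and its published column are hypothetical):

```ruby
class Post < ApplicationRecord
  # scope takes a name (:published) and a lambda holding the query
  scope :published, -> { where(published: true) }

  # The scope above is sugar for defining this class method:
  # def self.published
  #   where(published: true)
  # end
end

# Usage — chainable like any ActiveRecord relation:
# Post.published.order(:created_at)
```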

But what is a lambda?

Does anyone else immediately think of ‘Revenge of the Nerds’? The Lambda Lambda Lambda fraternity. No? Just me?

A lambda is an object that represents a block and…

Jamie Berrier
