# How do I understand algorithms better - Part 1

## Introduction

I recently published two articles, **Overview of Data Structures in Java Programming Language** and **ES6 JavaScript Data Structures you should Know**, whose aim was to document the data structures available in the Java and JavaScript programming languages, respectively, and the types of software engineering problems where such data structures are useful.

While data structures are strongly related to algorithms, algorithms present interesting concepts of their own that software engineers need to know to be good at problem-solving.

To build a large-scale computer system (such as those run by Google, Microsoft, Facebook, Netflix, Uber, Amazon, etc.), the concept of algorithms is crucial. *So what is an algorithm*?

An algorithm is a finite set of instructions that, when followed, accomplishes a particular task.

This definition is good, but it does not touch on all the *attributes of a good algorithm*. An algorithm is expected to have the following characteristics:

i) **Input** - An algorithm is supplied with zero or more quantities.

ii) **Output** - At least one quantity must be produced by the algorithm's steps.

iii) **Finiteness** - An algorithm must terminate after a finite number of steps.

iv) **Non-ambiguity** - Also called definiteness; each step in an algorithm must be clear and mean only one thing.

v) **Correctness** - The output from an algorithm must be correct, i.e. it should be the intended output.

Apart from the characteristics we have mentioned, all the algorithm's instructions must be simple enough to be carried out by a person using a pen and paper.
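To make these characteristics concrete, here is a small illustrative sketch of my own (the `MaxFinder` class name is hypothetical, not from the article): a method that finds the largest element of an array and satisfies all five characteristics.

```java
public class MaxFinder {
    // Input: zero or more quantities (here, a non-empty int array).
    // Output: at least one quantity (the largest element).
    // Finiteness: the loop runs exactly a.length - 1 times.
    // Non-ambiguity: every step means exactly one thing.
    // Correctness: the returned value is the intended output.
    public static int max(int[] a) {
        int best = a[0];
        for (int i = 1; i < a.length; i++) {
            if (a[i] > best) {
                best = a[i];
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(max(new int[]{3, 7, 2})); // prints 7
    }
}
```

Each step here is simple enough to carry out with a pen and paper, which is exactly the point.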

## Objectives of this walk-through

- To highlight algorithm design strategies (categories of algorithms) by solving sample software engineering problems.
- To highlight some tips you need when solving software engineering problems.

## Who is this walk-through for?

- Computer Science students.
- Professional software engineers.
- Software engineering job seekers.
- Computer Science teachers/lecturers/professors.

## Prerequisites

- A basic understanding of data structures and algorithms.
- Basic knowledge of the Java programming language.

Now let us dive deeper and look at the things you must consider while trying to improve your software engineering skills. For each item we look at, we will solve a sample problem to illustrate our explanation.

## 1. Mastering data structures is a must.

You always interact with data when solving problems. Data structures provide a way to store complex and big datasets, enabling one to design efficient algorithms. Most programming languages have inbuilt data structures, and you should master data structures in the programming language that you love.

For Java and JavaScript engineers, you can check out the two articles I published, **Overview of Data Structures in Java Programming Language** and **ES6 JavaScript Data Structures you should Know**. They give an extensive discussion of the data structures you need to know.

**Problem 1:**

Given an unsorted array, return `true` if it has duplicates (numbers appearing more than once), and return `false` otherwise.

**Example**:

**Input**: `{1, 2, 3, 1}`

**Output**: `true`

**Explanation**: `1` appears twice.

**Input**: `{1, 2, 3, 4}`

**Output**: `false`

**Explanation**: Each of the elements in the array appears once, so there are no duplicates.

If you know nothing about data structures, you may still solve the problem, but your solution will not be the most efficient.

**The algorithm**:

Take each of the numbers in the array and check if it appears again, i.e. for `{1, 2, 3, 4}`: compare `1` and `2`, `1` and `3`, `1` and `4`; compare `2` and `3`, `2` and `4`; compare `3` and `4`.

We will use two `for` loops as shown below. The resultant algorithm has constant space complexity, `O(1)`, and quadratic running time complexity, `O(n^2)`.

```
public boolean hasDuplicates(int a[]) {
    int n = a.length;
    if (n <= 1) {
        return false;
    }
    for (int i = 0; i < a.length; i++) {
        for (int j = i + 1; j < a.length; j++) {
            if (a[i] == a[j]) {
                return true;
            }
        }
    }
    return false;
}
```

Can we solve this problem in linear time, `O(n)`? Well, yes. We need to introduce a data structure called a `HashSet`. This data structure has two advantages:

- It does not accept `duplicates`, so it will always contain unique values.
- It affords `constant lookup time`, so we will not waste any time when we want to check if a given value exists.

The solution below has *linear run time*, `O(n)`, and *linear space*, `O(n)`.

```
public boolean hasDuplicates(int a[]) {
    int n = a.length;
    if (n <= 1) {
        return false;
    }
    Set<Integer> set = new HashSet<Integer>();
    for (int i = 0; i < a.length; i++) {
        if (set.contains(a[i])) {
            return true;
        } else {
            set.add(a[i]);
        }
    }
    // If we successfully added all the elements into the set, there are no duplicates
    return false;
}
```

There is a simple **Leetcode** problem called **Two Sum**, which you can attempt to see if you can identify the appropriate data structure to solve the problem in `linear time`.

## 2. You must understand data types properly.

We are aware that data structures are mostly built from simple or primitive data types such as **boolean**, **byte**, **char**, **short**, **int**, **long**, **float**, and **double**.

The understanding we are talking about here covers the structure of the data type, how it is stored in memory, how much space it occupies, its value ranges (especially for numeric data types), etc.

For instance, for a Java integer, `int`, the following needs to be known at a minimum:

- We use `32 bits` to represent the number.
- The number is stored in binary, i.e. as `bits`.
- A signed `32-bit` integer has a maximum value of `2^31 - 1` and a minimum of `-2^31`. Java's `int` is always signed, but treated as unsigned (via the `Integer` class's unsigned operations) it ranges from `0` to `2^32 - 1`.
- The Java programming language gives you access to a signed integer's maximum and minimum values as `Integer.MAX_VALUE` and `Integer.MIN_VALUE`, respectively.
- You need to understand the bit-wise operations on integers.
- If your values fall outside this range, you can use the `long` data type, which uses `64 bits`.
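The points above can be demonstrated with a small sketch of my own (the `IntRanges` class name is hypothetical) showing the limits and the wrap-around behavior of `int` arithmetic:

```java
public class IntRanges {
    public static void main(String[] args) {
        // A signed 32-bit int ranges from -2^31 to 2^31 - 1.
        System.out.println(Integer.MIN_VALUE); // -2147483648
        System.out.println(Integer.MAX_VALUE); // 2147483647

        // Adding 1 to the maximum silently overflows and wraps to the minimum.
        int overflowed = Integer.MAX_VALUE + 1;
        System.out.println(overflowed); // -2147483648

        // Widening to long before the arithmetic avoids the overflow.
        long safe = (long) Integer.MAX_VALUE + 1;
        System.out.println(safe); // 2147483648
    }
}
```

This silent wrap-around is exactly what causes the wrong answer in problem 2 below.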

**Problem 2**:

You are given an integer array, `nums`, of size `n`, and you are required to compute the `sum` of all the integers in the array and then return the sum.

*Additional information:*

- `-2^31 <= nums[i] <= 2^31 - 1`
- `0 <= n <= 1000`

**The algorithm**

Computing the sum of integers in an array is a trivial task, so you must wonder why we are being asked this question. Look at the additional information:

- The array can have big numbers; in fact, it can contain `2^31 - 1`, the biggest number an `int` can accommodate. This means that if the array has just two such numbers, the resultant sum cannot be accommodated in an `int`. This is possible because we are told that the array can be as big as `1000` elements.

The trick here is to understand the additional information, and in our case, we will have to change our return type. If we write a function called `solution`, then it will be something like this:

```
public long solution(int nums[]) {
    long sum = 0;
    for (int i = 0; i < nums.length; i++) {
        sum = sum + nums[i];
    }
    return sum;
}
```

Notice that we return `long` instead of `int` (a `long` uses 64 bits, so it can hold the sum without overflowing).

The following solution looks okay, but it will output a wrong result for some inputs. You can test it with this array: `{2, 4, 78, 5, 2147483647}`

```
public int solution(int nums[]) {
    int sum = 0;
    for (int i = 0; i < nums.length; i++) {
        sum = sum + nums[i];
    }
    return sum;
}
```

The correct sum is `2147483736`, but it will output `-2147483560`!

## 3. Different programming languages may implement similar data structures differently.

I once again refer you to read the two articles **Overview of Data Structures in Java Programming Language**, and **ES6 JavaScript Data Structures you should Know.** You will notice that similar data structures will behave differently in different programming languages.

For instance, a Java array needs to be declared with a size, whereas a JavaScript array expands to accommodate more items. Thus a JavaScript array can be used as an ordinary `list`, a `queue`, and, at the same time, a `stack`.

In addition, a **Java HashMap** does not guarantee that the order of items put in the hashmap will remain constant over time, but a **JavaScript Map** remembers the original insertion order of the keys.
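The same distinction exists within Java itself. Here is a small sketch of my own (the `OrderDemo` class name is hypothetical) showing that a `LinkedHashMap` preserves insertion order, which a plain `HashMap` does not guarantee:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    // Builds a map and returns its keys in iteration order.
    public static String keyOrder() {
        // LinkedHashMap iterates in insertion order; a plain HashMap
        // makes no such guarantee, so its iteration order may differ.
        Map<String, Integer> ordered = new LinkedHashMap<>();
        ordered.put("banana", 1);
        ordered.put("apple", 2);
        ordered.put("cherry", 3);
        return ordered.keySet().toString();
    }

    public static void main(String[] args) {
        System.out.println(keyOrder()); // [banana, apple, cherry]
    }
}
```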

**Problem 3:**

You are given an integer array, `nums`, of size `n`, and you are required to store the values and their frequencies in an appropriate data structure. The order in which the values appear in the array must be preserved.

**Example**:

**input**: `{3,4,3,3,5,8,2,1,5,2,2,4,3}`

**output**: `3,4`; `4,2`; `5,2`; `8,1`; `2,3`; `1,1`

**Explanation**: 3 appears four times, 4 appears twice, 5 appears twice, 8 appears once, 2 appears thrice, and 1 appears once.

**The algorithm**

The idea is to think of a data structure that can store the information as `key, value` pairs (the number is the key and the frequency is the value) while maintaining insertion order.

With JavaScript, we can use a `Map`. However, because a Java `HashMap` does not maintain insertion order, we will need to use a Java `LinkedHashMap`. You can see that a `Map` worked in JavaScript but a `HashMap` would not in Java.

Java solution:

```
//Java solution
public LinkedHashMap<Integer, Integer> solution(int nums[]) {
    LinkedHashMap<Integer, Integer> orderedMap = new LinkedHashMap<Integer, Integer>();
    for (int i = 0; i < nums.length; i++) {
        if (orderedMap.containsKey(nums[i])) {
            orderedMap.put(nums[i], orderedMap.get(nums[i]) + 1);
        } else {
            orderedMap.put(nums[i], 1);
        }
    }
    return orderedMap;
}
```

JavaScript solution:

```
solution(nums) {
    let orderedMap = new Map();
    for (let i = 0; i < nums.length; i++) {
        if (orderedMap.has(nums[i])) {
            orderedMap.set(nums[i], orderedMap.get(nums[i]) + 1);
        } else {
            orderedMap.set(nums[i], 1);
        }
    }
    return orderedMap;
}
```

## 4. Think of a brute force solution first, then optimize your algorithm

More often than not, software engineering problems come with constraints, which guide the engineer toward an efficient solution. However, it is advisable to provide a brute force (naive) solution to a problem before optimizing it.

This is advantageous because it makes you understand the problem. Better still, if you were in an interview situation, providing a brute force solution is better than providing no solution at all.

Please do not rush to optimize because it may take longer, or you may even fail to crack the efficient solution.

**Problem 4:**

Compute the `sum` of all the integers between two numbers (the two numbers are included in the sum). You should solve this in constant time, `O(1)`. It is guaranteed that `10 <= sum <= 2^31 - 1`.

**Example**:

**input**: `2` and `10`

**output**: `54`

**The algorithm**

For sure, it might be challenging to figure out how to solve this problem in constant time, but as we have agreed, let's first provide a brute force solution.

We write a loop which sums all the integers between `start`, say `2`, and `end`, say `10`.

Something like this:

```
public int solution(int start, int end) {
    int sum = 0;
    for (int i = start; i <= end; i++) {
        sum = sum + i;
    }
    return sum;
}
```

This is a correct solution, but its running time complexity is linear, `O(n)`.

Next, let's figure out how we can optimize this.

The sum of the integers between `2` and `10` is found by adding all the numbers, including `2` and `10`, i.e. `2+3+4+5+6+7+8+9+10`.

If you are keen, what you see is the sum of an `arithmetic progression` (AP), and we have a formula for it: `s = n(a + l)/2`, where *n* is the number of terms, *a* is the first term, and *l* is the last term.

Thus, the solution with constant running time, `O(1)`, is shown below:

```
//Time complexity- O(1), space complexity - O(1)
public int solution(int start, int end) {
    // Number of terms, n
    int n = (end - start) + 1;
    return n * (start + end) / 2;
}
```

To add to this explanation, there is a post on **finding a pair with a given sum in an array**, which gives a step-by-step explanation of improving a naive approach into an efficient one.

## 5. Characters are equally important, know their ASCII values

ASCII stands for American Standard Code for Information Interchange. Computers can only understand numbers, so an ASCII code is the numerical representation of a character such as `'a'` or `'A'`, or a special character such as `'@'`.

Understanding ASCII codes can be helpful in solving certain problems. For instance, how do you know if an input is a `lower case letter`, an `upper case letter`, or a `number`, and not a `special character`?

**Problem 5**:

Given a character input, determine if it is a `number`, an `upper case character`, a `lower case character`, or `a special character/other character`.

**The algorithm:**

The approach we use is to check the ASCII value of the input. Thus, we need to know the following:

- Upper case letters `(A-Z)` lie in the range `65-90` of the ASCII values.
- Lower case letters `(a-z)` lie in the range `97-122` of the ASCII values.
- The digits `0` to `9` lie in the range `48-57`.
- Any other ASCII value is for a non-alphanumeric character, which includes the special characters.

We can thus write a function which returns a character description as shown below

```
public String checkCharacter(char input) {
    if (input >= 'A' && input <= 'Z') {
        return "It is upper case";
    } else if (input >= 'a' && input <= 'z') {
        return "It is lower case";
    } else if (input >= '0' && input <= '9') {
        return "It is a number";
    } else {
        return "Other character";
    }
}
```

Another problem which you can solve using this technique is: **Given a string, check if it is a pangram or not**. You must solve this problem in linear time, `O(n)`, and constant space, `O(1)`. You can find a comprehensive discussion and solution here.

## 6. You must love Math, it is at the heart of problem solving

The art of designing and optimizing algorithms is aided by math.

If you go back to **problem 4**, you will notice that without understanding `number series`, it would have been nearly impossible to optimize our solution.

Not all algorithms you write will require strong math principles, but understanding math topics such as __number theory__, __recurrence relations__, __probability__, __calculus__, __set theory__, and __proof techniques__ would be advantageous. In the next sections, you will see how we solve __recurrence relations__ using the backward substitution method to compute the running time complexities of __recursive algorithms__.

**Problem 6:**

Given a positive integer `n`, determine the smallest positive integer `m` which has the same number of digits as `n`.

**Example:**

`n = 1567, m = 1000`; both `m` and `n` have the same number of digits, and `m` is the smallest number in that group.

**The algorithm:**

The problem is to determine the number of digits in `n` and use that to compute the smallest value with the same number of digits; we raise `10` to the power `number of digits - 1`.

We can use a string-based approach, i.e. convert the integer into a string, then use the `length` function to compute the number of digits. The solution would be something like this:

```
public int solution(int n) {
    if (n == 0) {
        return 0;
    }
    int numberOfDigits = String.valueOf(n).length();
    return (int) Math.pow(10, numberOfDigits - 1);
}
```

If we only evaluate a few values, this solution is okay, but if the number of values grows, it becomes inefficient, because the statement `int numberOfDigits = String.valueOf(n).length();` involves **memory allocation for a String on each evaluation**. In the case of Java, the `JVM` must first parse our number and copy its digits into a separate *String*, and it performs a number of other operations as well (keeping temporary copies, handling Unicode conversions, etc.).

We could use the logarithm approach instead. If `a` is equal to `b` to the power `c`, i.e. `a = b^c`, we say that `c` is the logarithm of `a` to the base `b` (meaning `c` is the power to which we have to raise `b` in order to get `a`), and we write `log_b(a) = c`.

Thus, the number of digits can be obtained as follows:

```
int numberOfDigits = (int) (Math.log10(n) + 1);
```

Using this approach, the solution to our problem would be as shown below:

```
public int solution(int n) {
    if (n == 0) {
        return 0;
    }
    int numberOfDigits = (int) Math.log10(n) + 1;
    return (int) Math.pow(10, numberOfDigits - 1);
}
```

Now you see why you have to *love* Math!

## 7. Take additional information or constraints seriously!

When you are given a problem to solve, more often than not it will include additional information or constraints. A good example is **problem 2**, which we solved earlier (recall how the additional information changed the way we designed the solution).

Additional information can describe the nature of the data you expect, or it can give you a clue on how to solve the problem.

**Problem 7:**

Let us see a modified version of a problem which appears as number 1 on Leetcode:

Given an array of integers, `nums`, of size `n`, and an integer `target`, return *the indices of the two numbers such that they add up to the target*.

You may assume that each input has **exactly one solution**, and you may not use the *same* element twice.

You can return the answer in any order. Can you solve this problem in linear time, `O(n)`, and constant space, `O(1)`?

**Example**:

**Input:** `nums = [2,7,11,15]`, `target = 9`

**Output:** `[0,1]`

**Explanation:** Because `nums[0] + nums[1] == 9`, we return `[0, 1]`.

Constraints/Additional information:

- `1 <= nums.length <= 10^5`
- `-10^9 <= nums[i] <= 10^9`
- `-10^9 <= target <= 10^9`
- Only one valid answer exists.
- The array `nums` is sorted.

**The algorithm**:

The question is easy to understand. There are additional information and some information is given in the problem description. Most importantly, we are told to solve this problem in linear time, `O(n)`

and constant space, `O(1)`

, we also know that the array is `sorted`

and that a valid answer exists in the input array.

We can solve this problem in a number of ways:

- Generate all possible pairs and pick the one whose values add up to the
`target`

. Unfortunately, this solution ignores that fact that the array is sorted and its running time complexity is`O(n`

^{2}) - Traverse the array and store
`target-nums[i]`

as key and`i`

as value in a hash map. At any point of the traversal check if`nums[i]`

exists in the hash map and if it does, you know you got the pair. Unfortunately, this solution ignores that fact that the array is sorted and its space complexity is`O(n)`

- Because the array is sorted, we know the smaller number is in the left side of the bigger number (or are side by side if the two numbers are equal). We use two pointers,
`rPointer`

and`lPointer`

initialized at`n-1`

and`0`

respectively, (we will see the`2-pointer`

approach in the next sections), which moves as follows: if`nums[lPointer] + nums[rPointer] == target`

, we found the pair, if`nums[lPointer] + nums[rPointer] > target`

, decrement the`rPointer`

, otherwise we increment the`lPointer`

. Also`lPointer`

is always less than`rPointer`

.

The preferred solution will look as shown below.

```
//Linear time and constant space.
public int[] solution(int nums[], int target, int n) {
    int lPointer = 0;
    int rPointer = n - 1;
    int[] output = new int[2];
    while (lPointer < rPointer) {
        if (nums[lPointer] + nums[rPointer] == target) {
            output[0] = lPointer;
            output[1] = rPointer;
            return output;
        } else if (nums[lPointer] + nums[rPointer] > target) {
            rPointer--;
        } else {
            lPointer++;
        }
    }
    return output;
}
```

## 8. You must show the running time complexity and space complexity of your solutions

Running time complexity and space complexity are often a daunting topic for beginners in data structures and algorithms, but the topic becomes clear once you understand its role and get enough practice.

In fact, the running time and space complexity of certain algorithms should be at your fingertips, i.e. it should be straightforward to answer even if you just woke up from a deep sleep.

**A simple teaser**

Can you write down the complexities of the following operations (just stick to the worst case and use Big-O notation)? Each item should not take you more than `30 seconds`:

- Running time and space complexity of bubble sort?
- Running time and space complexity of binary search?
- Running time and space complexity of determining the largest number in a sorted integer array?
- Running time and space complexity of finding a value in a map?
- Running time and space complexity of merge sort and quick sort?
- Running time and space complexity of determining if a sorted array has duplicates?
- Running time and space complexity of determining if an unsorted array has duplicates?
- Running time complexity of node deletion in a doubly linked list?

Alternatively, can you give a known example of an algorithm with each of the following running time complexities (can you do it in less than 30 seconds for each item)?

- Constant, `O(1)`
- Logarithmic, `O(log n)`
- Linear, `O(n)`
- Linearithmic, `O(n log n)`
- Quadratic, `O(n^2)`
- Exponential, `O(2^n)`

Did you manage? If you did not, then you will need to go back and understand this topic better. If you did, you still need to know how to arrive at the answer.

Let us try an algorithm example which gives us `O(n^2)` and another which gives us `O(log n)` running time complexity.

Refer to **problem 1's** first solution, which had quadratic running time before we optimized it. It is shown below:

```
public boolean hasDuplicates(int a[]) {
    int n = a.length;
    if (n <= 1) {
        return false;
    }
    for (int i = 0; i < a.length; i++) {
        for (int j = i + 1; j < a.length; j++) {
            if (a[i] == a[j]) {
                return true;
            }
        }
    }
    return false;
}
```

Using the example `{1, 2, 3, 4, 5}` as input, there are no duplicates, so we will experience the `worst case` for this algorithm.

The size of the array (input), `n`, is `5`.

We will perform the following comparisons:

__Pass 1:__ Compare `1` and `2`, `1` and `3`, `1` and `4`, `1` and `5` => `4` comparisons (expressed as `n-1` comparisons).

__Pass 2:__ Compare `2` and `3`, `2` and `4`, `2` and `5` => `3` comparisons (expressed as `n-2` comparisons).

__Pass 3:__ Compare `3` and `4`, `3` and `5` => `2` comparisons (expressed as `n-3` comparisons).

__Pass 4:__ Compare `4` and `5` => `1` comparison (expressed as `n-4` comparisons).

Notice that we performed `4` passes (done by the `i` loop in the algorithm) when `n` is `5` (you can try more examples, and you will realize you always perform `n-1` passes).

In each pass, we perform comparisons (see the `j` loop in the algorithm), and you can see that the number of comparisons decreases as we increase `i`. Let us summarize our stats in a table.

For `n = 5` (you can try this with different values of `n`, and you will notice a similar pattern):

Table a: What is the relationship between the number of passes and the input n, for n = 5?

| Pass | Comparisons | Comparisons in terms of n |
|---|---|---|
| 1 | 4 | n-1 |
| 2 | 3 | n-2 |
| 3 | 2 | n-3 |
| 4 | 1 | n-4 |
| ... | ... | ... |

You are aware that we basically count the number of operations in an algorithm, *T(n)*, and then determine the relationship between these operations and the input, i.e.

`T(n) = 4 + 3 + 2 + 1` for `n = 5` ----- (i), or

`T(n) = 5 + 4 + 3 + 2 + 1` for `n = 6` ----- (ii), etc.,

or generally,

`T(n) = (n-1) + (n-2) + (n-3) + (n-4)` for `n = 5` ----- (iii), or

`T(n) = (n-1) + (n-2) + ... + 2 + 1` for any input `n` ----- (iv).

Have you noticed something? This is an arithmetic progression (AP), and we are trying to find the sum of the terms in the `AP`, whose equation is as shown below:

`S_k = T(n) = k(a + l)/2`, where `k` is the number of terms, `a` is the first term, and `l` (this is the letter *el*) is the last term in the series.

Here, the number of terms is `k = n-1`, the first term is `a = n-1`, and the last term is `l = 1`.

Do the substitution and you will find that the final equation is `T(n) = (n-1)(n)/2 = n^2/2 - n/2`.

Extracting the running time complexity: drop any constants and lower-order terms, and drop the coefficient of the highest term.

The running time complexity thus is **O(n^2)**.
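To double-check the algebra, here is a small instrumented sketch of my own (the `ComparisonCounter` class name is hypothetical) that counts exactly the comparisons the nested loops perform in the worst case:

```java
public class ComparisonCounter {
    // Counts the comparisons the nested loops from problem 1 perform
    // in the worst case (no duplicate is ever found, so no early return).
    public static int countComparisons(int n) {
        int comparisons = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                comparisons++;
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        // For n = 5 we expect 4 + 3 + 2 + 1 = 10 = n(n-1)/2 comparisons.
        System.out.println(countComparisons(5)); // 10
    }
}
```

The count matches the formula n(n-1)/2 for any n you try, confirming the derivation.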

**Problem 8**:

Let us see another example, which gives us `logarithmic` running time complexity.

You are given an integer `n` of the form `2^k` `(k = 1, 2, 3, 4, 5, ...)`, and you are required to divide `n` by `2` continuously, stopping when `n` becomes `1`. Your algorithm should return the number of divisions. What is the running time and space complexity of your solution?

**Additional information:**

- You are guaranteed that the input is of the form `2^k`.

**Example**:

**input**: `32`

**output**: `5`

**Explanation**:

`32/2 = 16`

`16/2 = 8`

`8/2 = 4`

`4/2 = 2`

`2/2 = 1`

(We stop when `n` is reduced to `1`; we have done `5` divisions.)

We use a while loop as shown below:

```
public int solution(int n) {
    int counter = 0;
    while (n > 1) {
        n = n / 2;
        counter++;
    }
    return counter;
}
```

Reading off the answer from the algorithm may be tricky, but if you have been writing algorithms for a while, you can probably already tell what it is. We use a table, as we did before, to see how different values of `n` behave.

Table b: What is the relationship between the input n and the number of primitive operations, i.e. the number of divisions, counter?

| Input (n) | No. of divisions, counter (T(n)) | n in terms of T(n) |
|---|---|---|
| 4 | 2 | 2^2 |
| 8 | 3 | 2^3 |
| 16 | 4 | 2^4 |
| 64 | 6 | 2^6 |
| ... | ... | ... |

As a general equation, the relationship between `T(n)` and `n` is:

`n = 2^T(n)`

We need to make `T(n)` the subject of the formula.

Apply *log* (base `2`) on both sides: `log n = log 2^T(n)`, so `log n = T(n) log 2`, and *log* of `2` to base `2` is equal to `1`.

Thus `T(n) = log n`.

Running time complexity: **O(log n)**
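We can also check the derivation numerically. This is a small sketch of my own (the `LogDemo` class name is hypothetical) that reuses the halving loop and compares the count against log2(n):

```java
public class LogDemo {
    // Repeatedly halves n and counts the divisions (the loop from problem 8).
    public static int divisions(int n) {
        int counter = 0;
        while (n > 1) {
            n = n / 2;
            counter++;
        }
        return counter;
    }

    public static void main(String[] args) {
        // For n = 2^k, the number of halvings equals k, i.e. log2(n).
        for (int k = 1; k <= 5; k++) {
            int n = 1 << k; // 2^k
            System.out.println(n + " -> " + divisions(n) + " divisions");
        }
    }
}
```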

In the next steps, we will discuss how to use __recurrence relations__ to compute running time complexity for recursive algorithms.

Apart from computing running time complexity, you should be able to identify the scenarios in your algorithm which give the `best case` and `worst case` running times.

## 9. You must know the running time complexity and space complexity of inbuilt functions in a programming language you use

It is always advisable to use the __inbuilt functions__ of the programming languages you use. At the same time, it is not advisable to use inbuilt functions for problems with certain constraints if you do not understand how those functions are implemented internally.

Examples of inbuilt functions in Java which you might have used frequently include:

- `Arrays.fill()` - fills an array with a specified value
- `Arrays.sort()` - sorts an array
- `Math.max()` - returns the larger of two given values

Do you know the time complexities of the above functions?
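As a rough reference, here is a sketch of my own demonstrating the three functions; the complexities in the comments reflect my understanding of the standard OpenJDK implementations:

```java
import java.util.Arrays;

public class BuiltinsDemo {
    public static void main(String[] args) {
        // Arrays.fill: linear in the array length, O(n).
        int[] a = new int[5];
        Arrays.fill(a, 7);
        System.out.println(Arrays.toString(a)); // [7, 7, 7, 7, 7]

        // Arrays.sort on primitives: dual-pivot quicksort, O(n log n) on average.
        int[] b = {5, 1, 4, 2, 3};
        Arrays.sort(b);
        System.out.println(Arrays.toString(b)); // [1, 2, 3, 4, 5]

        // Math.max compares exactly two values, O(1).
        System.out.println(Math.max(3, 9)); // 9
    }
}
```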

**Problem 9:**

Given an unsorted integer array, `nums`, of size `n`, and an integer `target`, return two numbers from the array which add up to the *target*.

You may assume that each input has **exactly one solution**, and you may not use the *same* element twice. You may also return the numbers in any order.

What is the running time complexity of your solution?

**Example**:

**Input:** `nums = [2,11,7,15]`, `target = 9`

**Output:** `[2,7]` or `[7,2]` is acceptable.

**Explanation:** Because `2 + 7 = 9`, we return `[2, 7]` or `[7, 2]`.

**The algorithm**:

Unlike in **problem 7**, we are not required to return indices, so we can change the order/positions of the numbers in the original array.

We will sort the array using `Arrays.sort()`, then use the `2-pointer` approach to determine the two numbers which add up to the provided `target`.

The solution is as shown below:

```
public int[] solution(int nums[], int target, int n) {
    // Sort the array
    Arrays.sort(nums);
    int lPointer = 0;      // Left pointer
    int rPointer = n - 1;  // Right pointer
    int[] output = new int[2];
    while (lPointer < rPointer) {
        if (nums[lPointer] + nums[rPointer] == target) {
            output[0] = nums[lPointer];
            output[1] = nums[rPointer];
            return output;
        } else if (nums[lPointer] + nums[rPointer] > target) {
            rPointer--;
        } else {
            lPointer++;
        }
    }
    return output;
}
```

So what is the running time complexity of our solution?

Well, the answer is `O(n log n)`. But how?

We used `Arrays.sort()`, which (for primitive arrays) uses a dual-pivot `quicksort` whose average running time complexity is `O(n log n)`.

We then used two pointers to traverse the sorted array to find the numbers adding up to the target, and that takes linear time, `O(n)`.

So the equation looks like this:

`T(n) = O(n log n) + O(n)`; the higher-order of the two terms is `O(n log n)`, and hence it becomes the running time complexity of our solution.

You can see that if you did not know how `Arrays.sort()` works, it would not be easy to determine the running time complexity of the solution.

**Problem 10**:

Given a string, `phrase`, check if it is a pangram or not. A __pangram__ is a sentence containing every letter of the English alphabet. You may assume that the input contains __only letters of the alphabet__, which are __all lower case__. What is the running time complexity of your algorithm?

__Example__:

**Input**: the quick brown fox jumps over the lazy dog

**Output**: true

**Explanation**: every letter of the alphabet appears at least once.

**Algorithm**:

We will use the concept of ASCII values of characters, which we discussed earlier.

We know `'a'-'a' = 0, 'b'-'a' = 1, 'c'-'a' = 2`, etc., so if we create an array of size `26`, we can use such ASCII arithmetic to increment counts in the array, skipping empty spaces.

We initialize all the array values to `0` using the `Arrays.fill()` function.

The solution is as shown below;-

```
//See comments within the code.
public boolean solution(String phrase) {
    // Remove all white spaces
    String newPhrase = phrase.replaceAll("\\s", "").toLowerCase();
    // Prefill the array with zeros
    int countingArray[] = new int[26];
    Arrays.fill(countingArray, 0);
    for (int i = 0; i < newPhrase.length(); i++) {
        // Increment the value at a given index whenever you encounter that character
        countingArray[newPhrase.charAt(i) - 'a'] = countingArray[newPhrase.charAt(i) - 'a'] + 1;
    }
    // Check to see if there is still a zero in the array,
    // which means a given character was not encountered
    for (int number : countingArray) {
        if (number == 0) {
            // If you see a zero, then the string is not a pangram
            return false;
        }
    }
    return true;
}
```

There are __three main sections__ in the algorithm, but only one is important in determining the running time complexity.

**(i)** Filling the array of size `26` using `Arrays.fill()` - The array is of size `26` regardless of the size of the phrase. Although we need to know that the running time complexity of `Arrays.fill()` is `O(n)`, where `n` is the size of the array, it has no effect here, since the size of the array is constant (for any `phrase` size).

**(ii)** Traversing the characters of the sentence - This process depends on the size of the input, `phrase`; thus its running time is linear, `O(n)`. This is the __only part__ which determines the running time complexity.

**(iii)** Checking the array to see if there is still any zero - This part does not affect the running time, for the same reason as in **(i)**.

Thus, the running time complexity of the algorithm is `O(n)`. Can you see that the space complexity of the algorithm is constant, `O(1)`?

## 10. Improve efficiency using early function termination.

We have used a function in all the algorithms we have written, so it should be fair to assume that you understand the effect of the keyword `return` in a function.

Simply put, a `return` statement ends the execution of a function. So what do we mean by "*improve efficiency using early function termination*"? Let us use an example.

**Problem 11**:

Given an unsorted integer array `nums` of size `n`, and a `target`, write an algorithm to determine if the `target` is present in `nums`.

Additional information:

`10 <= n <= 2.1 * 10^9`

**The algorithm**:

We see that the size of the array can be very big.

The algorithm is simple: traverse the array elements one by one to check if the array has the `target`.

We will have two solutions, as follows. Notice the position where the `return` statement appears in each of the solutions. The function returns the `target`, or `-1` if the target is not found.

```
public int findA(int nums[], int target, int n) {
    for (int i = 0; i < n; i++) {
        if (nums[i] == target) {
            //Early termination: return as soon as the target is found
            return nums[i];
        }
    }
    return -1;
}
```

```
public int findB(int nums[], int target, int n) {
    int result = -1;
    for (int i = 0; i < n; i++) {
        if (nums[i] == target) {
            //No early termination: the loop always runs to the end
            result = nums[i];
        }
    }
    return result;
}
```

If we had an array whose `target` value is close to the beginning (best-case scenario), `findA` would return the `target` and terminate quickly. Unfortunately, for `findB`, the position of `target` does not matter; it performs the same number of operations for a target near the beginning, a target towards the end, or no target at all (worst case).

**Rule of thumb**: for an algorithm's execution time (not necessarily running time complexity), if possible, the **best case** scenario should be better than the **worst case**.
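To make the difference concrete, here is a minimal sketch (the class, method names, and `comparisons` counter are mine, not part of the problem) that counts how many element comparisons each style of search performs:

```java
//Counts comparisons to show the effect of early function termination.
public class EarlyReturnDemo {
    static int comparisons;

    //Returns as soon as the target is found, like findA
    static int findWithEarlyReturn(int[] nums, int target) {
        for (int i = 0; i < nums.length; i++) {
            comparisons++;
            if (nums[i] == target) {
                return nums[i];
            }
        }
        return -1;
    }

    //Always scans the whole array, like findB
    static int findWithoutEarlyReturn(int[] nums, int target) {
        int result = -1;
        for (int i = 0; i < nums.length; i++) {
            comparisons++;
            if (nums[i] == target) {
                result = nums[i];
            }
        }
        return result;
    }

    public static void main(String[] args) {
        int[] nums = {7, 2, 9, 4, 6, 1, 8};
        comparisons = 0;
        findWithEarlyReturn(nums, 2);
        System.out.println(comparisons); // 2: stops right after index 1
        comparisons = 0;
        findWithoutEarlyReturn(nums, 2);
        System.out.println(comparisons); // 7: scans the whole array regardless
    }
}
```

Both versions are `O(n)` in the worst case; only the early-return version benefits when the target sits near the front.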

## 11. A 2-pointer or multiple pointer approach is useful

The __two pointer__ (sometimes known as __multiple pointer__) approach is a really easy and effective technique, which is typically used for searching pairs in a sorted array.

If you refer to **problem 7**, where we were supposed to return *the indices of two numbers such that they add up to a target*, we described 3 approaches, and the third one, which uses the 2-pointer approach, was the most efficient.

Another problem, where you can use a 2-pointer approach (which also looks like a __sliding window__ technique) is in a Leetcode problem, Subarray Sum Equals K - You can try it out.
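For reference, here is a minimal sketch of the 2-pointer technique on a sorted array (the names `TwoPointerDemo` and `pairSum` are hypothetical): one pointer starts at each end, and whichever side makes the sum "wrong" moves inward.

```java
//Find the indices of two numbers in a sorted array that add up to a target,
//using one pointer at each end of the array.
public class TwoPointerDemo {
    public static int[] pairSum(int[] sorted, int target) {
        int left = 0;                  //pointer at the start
        int right = sorted.length - 1; //pointer at the end
        while (left < right) {
            int sum = sorted[left] + sorted[right];
            if (sum == target) {
                return new int[]{left, right};
            } else if (sum < target) {
                left++;                //sum too small: move the left pointer up
            } else {
                right--;               //sum too big: move the right pointer down
            }
        }
        return new int[]{-1, -1};      //no such pair
    }

    public static void main(String[] args) {
        int[] nums = {1, 3, 5, 8, 11};
        int[] indices = pairSum(nums, 13);
        System.out.println(indices[0] + "," + indices[1]); // 2,3 because 5 + 8 = 13
    }
}
```

Each iteration moves exactly one pointer inward, so the search is a single `O(n)` pass instead of the `O(n^2)` enumeration of all pairs.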

## 12. Know how to compute running time complexity for recursive algorithms

We will discuss __recursion__ and __dynamic programming__ techniques in another post. In this part though, I will show you how to develop __recurrence relation__ from a recursive solution and how to solve the recurrence relation using __backward substitution__.

I just need to list the laws of recursion before we look at an example.

- A recursive algorithm must have a **base case**.
- A recursive algorithm must **call itself**, recursively.
- A recursive algorithm must change its state and **move toward the base case** (recall where a function calls itself by passing a smaller version of the input).

**Problem 12**:

The algorithm for computing the factorial of a number, `n!`, is shown below. Write down the recurrence relation, solve it, and hence write down the running time complexity of the algorithm.

```
public int factorial(int n) {
    //Base case
    if (n == 1)
        return 1;
    return n * factorial(n - 1);
}
```

**Solution**:

We can see that in the algorithm (**also by definition**), the base case is `1! = 1` or `0! = 1`.

Thus, if `T(n)` represents the running time complexity, then `T(1) = 1` or `T(0) = 1` (our base case).

If the running time complexity for input `n` is `T(n)`, then that of `n-1` is `T(n-1)`.

We also see that, every time, we check the condition `if (n == 1)`, which happens in constant time (say `C`), and multiply `n` by the result of `(n-1)!`, which also happens in constant time, say `C`.

Thus, the recurrence relation is:

`T(n) = T(n-1) + C`, and we also know that `T(1) = 1`.

Let's solve it!

`T(n) = T(n-1) + C .............................. (i)`

If we know what `T(n-1)` is equivalent to, we can solve equation `(i)`.

If we subtract `1` from `n` on both sides of the equation, we can see what `T(n-1)` is equivalent to:

`T(n-1) = T(n-2) + C`; substitute this equation in `(i)`:

`T(n) = T(n-2) + 2C ............................... (ii)`

We still did not manage to eliminate the `T` term on the right side of the equation.

Let's find out what `T(n-2)` is equivalent to.

We subtract `2` from `n` on both sides of equation `(i)`:

`T(n-2) = T(n-3) + C`; substitute this equation in `(ii)`:

`T(n) = T(n-3) + 3C .................................. (iii)`

If we continued with backward substitution, you can guess that our next equation would be as shown below;-

`T(n) = T(n-4) + 4C .................................. (iv)`

The effect of continuously subtracting `1` from `n` is that `n` decreases to `1`. What we need to know is the number of operations that will reduce `n` to `1`.

Assuming we need `k` subtractions, then the general equation would be;-

`T(n) = T(n-k) + kC` (notice the trend in equations `(i) - (iv)`), and at this point (the recursion base case), `T(n-k) = T(1) = 1`.

Thus `n-k = 1`; `k = n-1` or `n = k+1`.

`T(n) = 1 + C(n-1)`

`T(n) = 1 + Cn - C`

Drop all the constants (`1` and `-C`) as well as the coefficient of the highest term, `C`.

The running time complexity is **O(n)**.
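As a hedged sanity check of that result, we can count how many times the recursive function is entered (the `calls` counter and class name are mine, not part of the original problem); for input `n`, there are exactly `n` calls, i.e. a linear number of constant-time steps:

```java
//Counts the recursive calls made by the factorial algorithm.
public class FactorialCalls {
    static int calls;

    static int factorial(int n) {
        calls++;                     //one constant-time frame per call
        if (n == 1) {
            return 1;
        }
        return n * factorial(n - 1);
    }

    public static void main(String[] args) {
        calls = 0;
        System.out.println(factorial(5)); // 120
        System.out.println(calls);        // 5 calls for n = 5, i.e. linear in n
    }
}
```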

**Problem 13**:

Refer to **problem 8**, which we solved using iteration. Its __recursive solution__ is as shown below;-

```
//counter is declared at class level
//The base case, n == 1, is the condition to stop
public int solution(int n) {
    if (n == 1) {
        return counter;
    }
    counter++;
    return solution(n / 2);
}
```

Although the problem does not merit a recursive solution, we solved it this way for illustration purposes.

What will the recurrence relation look like?

`T(n) = T(n/2) + C ............................................... (i)`

What is `T(n/2)` equivalent to?

`T(n/2) = T(n/4) + C` (we divided `n` by `2` on both sides of equation `(i)`). Substitute this equation in `(i)`:

`T(n) = T(n/4) + 2C`, which can be written as `T(n) = T(n/2^2) + 2C ......... (ii)`

What is `T(n/4)` equivalent to?

`T(n/4) = T(n/8) + C` (we divided `n` by `4` on both sides of equation `(i)`). Substitute this equation in `(ii)`:

`T(n) = T(n/8) + 3C`, which can be written as `T(n) = T(n/2^3) + 3C ......... (iii)`

We can go on and on doing backward substitution, but what is our base case?

We know that when `n = 2`, we need `1` division, hence `T(2) = 1` (or still, if `n = 1`, `T(1) = 1`).

How many divisions do we need to reach the base case? Assuming that we need `k` divisions, then the general recurrence equation is;-

`T(n) = T(n/2^k) + kC` (see the pattern in equations `(i) - (iii)`) `................. (iv)`

But, `T(2) = 1 = T(n/2^k)`

Thus, `2 = n/2^k`; `2^(k+1) = n`; `k = log n - 1`; substitute in equation `(iv)`:

`T(n) = 1 + C(log n - 1)`

`T(n) = 1 + C log n - C`

The running time complexity is **O(log n)**, similar to the iterative algorithm.
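We can confirm the logarithmic division count the same way, by restating the recursive solution as a self-contained sketch (`counter` is a class-level field, as in the original; the class name is mine) and reading the counter back; for `n = 1024 = 2^10`, exactly `10` divisions occur:

```java
//Counts the halving steps performed by the recursive solution.
public class HalvingCalls {
    static int counter; //counter is declared at class level, as in the original

    static int solution(int n) {
        if (n == 1) {
            return counter;
        }
        counter++;
        return solution(n / 2);
    }

    public static void main(String[] args) {
        counter = 0;
        System.out.println(solution(16));   // 4, since 16 -> 8 -> 4 -> 2 -> 1
        counter = 0;
        System.out.println(solution(1024)); // 10, i.e. log2(1024) divisions
    }
}
```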

There is another example in the following video, which will cement your understanding of solving recurrence relations using backward substitution.

The objective of this walk-through was to review algorithm concepts and consequently solve problems that use those concepts.

But you can see this post is getting too long! We will cover the following concepts in another post:-

- More on recursive algorithms
- Dynamic programming algorithms
- Moving sum approach
- Identifying algorithm edge cases
- Sliding window approach
- Backtracking algorithm technique
- Divide and conquer technique
- The greedy approach
- And more.....

I hope you enjoyed reading through this walk-through and it was helpful.

Do not hesitate to leave your questions in the comments section.

See you in the next post!