Before talking about algorithms themselves, we need to talk about the data structures that algorithms typically use. Some algorithms even rely on a specific data structure to achieve their performance. In this chapter, I’ll explain only the most common data structures.
Array and list
Array
To store a collection of data, we need a data structure that can both return any specific piece of data on demand and store new data into its storage. The simplest structure with these properties is the array. As the name suggests, it stores data in a contiguous run of storage. Its advantage is that any element can be read in constant time, because all you need is its index; in mathematical notation this is usually written as $a[0]$ or $a_0$. However, there is a big disadvantage: to use an array, you need to know the exact amount of data in advance. Otherwise, you may end up accessing data you never meant to touch. In other words, an array is fixed-size. It can still be extended, but only by allocating a new array and copying every element into it. The complexity of an array is therefore as follows.
Time complexity | Array |
---|---|
Search/Change | $O(1)$ |
Add (Front) | $O(n)$ |
Add (Random) | $O(n)$ |
Add (Back) | $O(n)$ |
Delete (Front) | $O(n)$ |
Delete (Random) | $O(n)$ |
Delete (Back) | $O(n)$ |
Merge | $O(n)$ |
Space complexity | $O(n)$ |
Notice that adding and deleting change the size of the array. Merge means merging two arrays into a single array; both arrays are assumed to have the same size $n$.
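A rough sketch of the copy-and-grow step described above, assuming a plain heap-allocated buffer (the function name `grow` is just illustrative):

```cpp
#include <cstddef>

// Grow a fixed-size buffer by allocating a larger one and copying every
// element over: this is the O(n) cost hidden behind "extending" an array.
int* grow(int* old_data, std::size_t old_size, std::size_t new_size) {
    int* new_data = new int[new_size];
    for (std::size_t i = 0; i < old_size; ++i)   // copy all existing elements
        new_data[i] = old_data[i];
    delete[] old_data;                           // release the old storage
    return new_data;
}

int main() {
    std::size_t size = 4;
    int* a = new int[size]{1, 2, 3, 4};
    int x = a[2];                     // O(1) read by index
    a[2] = 42;                        // O(1) change by index
    a = grow(a, size, size * 2);      // O(n) copy to get more room
    a[4] = x;
    delete[] a;
}
```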
List 1
To avoid the fixed-size problem, there is an alternative structure known as a list. A list consists of nodes, and each node holds a piece of data and a pointer to the next node. From any node, you can therefore reach the next one. However, search is slow, because only the next node is directly reachable; reading an element takes linear time.
Time complexity | List 1 |
---|---|
Search/Change | $O(n)$ |
Add (Front) | $O(1)$ |
Add (Random) | $O(n)$ |
Add (Back) | $O(n)$ |
Delete (Front) | $O(1)$ |
Delete (Random) | $O(n)$ |
Delete (Back) | $O(n)$ |
Merge | $O(n)$ |
Space complexity | $O(n)$ |
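A minimal sketch of such a node-based list with only a head pointer (the struct and method names are illustrative):

```cpp
// Singly linked list with only a head pointer ("List 1").
struct Node {
    int data;
    Node* next;
};

struct List {
    Node* head = nullptr;

    // O(1): the new node simply becomes the new head.
    void push_front(int value) { head = new Node{value, head}; }

    // O(n): the only way to reach a node is to walk from the head.
    Node* find(int value) {
        for (Node* cur = head; cur != nullptr; cur = cur->next)
            if (cur->data == value) return cur;
        return nullptr;
    }

    // O(n): reaching the last node requires a full traversal.
    void push_back(int value) {
        Node** cur = &head;
        while (*cur != nullptr) cur = &(*cur)->next;
        *cur = new Node{value, nullptr};
    }
};
```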
One remaining problem is that new data can be added without overhead only at the front of the list. Therefore, there are other ways to build a list.
List 2
What if we also keep a pointer to the last node of the list? That makes the end of the list directly accessible, so operations at the back perform better. It also gives an easy way to merge two lists, because the end of one list can simply be connected to the head of the other. (To also delete the last node in constant time, each node additionally needs a pointer back to its predecessor, i.e. the list becomes doubly linked.)
Time complexity | List 2 |
---|---|
Search/Change | $O(n)$ |
Add (Front) | $O(1)$ |
Add (Random) | $O(n)$ |
Add (Back) | $O(1)$ |
Delete (Front) | $O(1)$ |
Delete (Random) | $O(n)$ |
Delete (Back) | $O(1)$ |
Merge | $O(1)$ |
Space complexity | $O(n)$ |
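A minimal sketch of a list that also keeps a tail pointer, showing the constant-time append and merge (the names are illustrative; as noted above, constant-time deletion at the back would additionally need backward links):

```cpp
// List with both head and tail pointers ("List 2").
struct Node2 {
    int data;
    Node2* next;
};

struct List2 {
    Node2* head = nullptr;
    Node2* tail = nullptr;

    // O(1): the tail pointer gives direct access to the end.
    void push_back(int value) {
        Node2* n = new Node2{value, nullptr};
        if (tail) tail->next = n; else head = n;
        tail = n;
    }

    // O(1): splice the other list onto the end; no traversal, no copying.
    void merge(List2& other) {
        if (!other.head) return;
        if (tail) tail->next = other.head; else head = other.head;
        tail = other.tail;
        other.head = other.tail = nullptr;   // the other list becomes empty
    }
};
```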
However, it still has $O(n)$ complexity for search/change operations. Therefore, it is rarely used directly in practice, although the idea of a list is the basis of many other data structures.
Other lists
There are other ways to implement a list. The following are common variants.
Double sizing array
A double-sizing array works like an ordinary array, but it doubles its capacity whenever it needs more room. This gives good performance: adding and deleting data at the back of the array costs constant time. Notice that this is an amortized bound, so an individual insertion occasionally takes long (whenever the copy happens).
Time complexity | Double sizing array |
---|---|
Search/Change | $O(1)$ |
Add (Front) | $O(n)$ |
Add (Random) | $O(n)$ |
Add (Back) | Amortized $O(1)$ |
Delete (Front) | $O(n)$ |
Delete (Random) | $O(n)$ |
Delete (Back) | Amortized $O(1)$ |
Merge | $O(n)$ |
Space complexity | $O(n)$ |
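A minimal sketch of the doubling strategy, in the spirit of std::vector (the struct name `Vec` and its fields are illustrative):

```cpp
#include <cstddef>

// Double-sizing array: push_back is amortized O(1) because the O(n) copy only
// happens when the capacity is exhausted, and each doubling pays for the many
// cheap pushes that follow it.
struct Vec {
    int* data = nullptr;
    std::size_t size = 0, capacity = 0;

    void push_back(int value) {
        if (size == capacity) {                       // out of room: double
            std::size_t new_cap = capacity ? capacity * 2 : 1;
            int* bigger = new int[new_cap];
            for (std::size_t i = 0; i < size; ++i)    // O(n) copy, but rare
                bigger[i] = data[i];
            delete[] data;
            data = bigger;
            capacity = new_cap;
        }
        data[size++] = value;                         // the usual case: O(1)
    }

    void pop_back() { if (size) --size; }             // O(1): no copy needed
};
```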
Circular array
Instead of accessing the data by its index directly, we can use a virtual index map. In other words, use $a[f(i)]$ instead of $a[i]$ when accessing array $a$ at index $i$, where $f$ is the virtual map function. There are many possible maps, but let’s consider one of the simplest: $f(i) = (i + from) \bmod size$, where $from$ is an offset that can change. With this map, we can insert and remove data at both the front and the back in constant time, just like List 2 above, because only the offset moves, not the elements. The complexity then works out as below.
Time complexity | Circular array |
---|---|
Search/Change | $O(1)$ |
Add (Front) | $O(1)$ |
Add (Random) | $O(n)$ |
Add (Back) | $O(1)$ |
Delete (Front) | $O(1)$ |
Delete (Random) | $O(n)$ |
Delete (Back) | $O(1)$ |
Merge | $O(n)$ |
Space complexity | $O(n)$ |
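A minimal sketch of the circular map $f(i) = (i + from) \bmod size$ over a fixed buffer (the class name and methods are illustrative, and growing is ignored: the sketch assumes the buffer never fills up):

```cpp
#include <cstddef>

// Circular array: every access goes through f(i) = (from + i) % capacity,
// so pushing or popping at either end only moves `from` or `count`,
// never the elements themselves.
struct Circular {
    int* data;
    std::size_t capacity;
    std::size_t from = 0;    // physical index of the logical front
    std::size_t count = 0;   // number of stored elements

    explicit Circular(std::size_t cap) : data(new int[cap]), capacity(cap) {}
    ~Circular() { delete[] data; }

    int& at(std::size_t i) { return data[(from + i) % capacity]; }   // f(i)

    void push_back(int v)  { at(count) = v; ++count; }               // O(1)
    void push_front(int v) {                                         // O(1)
        from = (from + capacity - 1) % capacity;  // step back, wrapping around
        data[from] = v;
        ++count;
    }
    void pop_front() { from = (from + 1) % capacity; --count; }      // O(1)
    void pop_back()  { --count; }                                    // O(1)
};
```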
Comparison
Complexity
DSA stands for the double sizing array and CA for the circular array.
Time complexity | Array | List 1 | List 2 | DSA | CA |
---|---|---|---|---|---|
Search/Change | $O(1)$ | $O(n)$ | $O(n)$ | $O(1)$ | $O(1)$ |
Add (Front) | $O(n)$ | $O(1)$ | $O(1)$ | $O(n)$ | $O(1)$ |
Add (Random) | $O(n)$ | $O(n)$ | $O(n)$ | $O(n)$ | $O(n)$ |
Add (Back) | $O(n)$ | $O(n)$ | $O(1)$ | Amortized $O(1)$ | $O(1)$ |
Delete (Front) | $O(n)$ | $O(1)$ | $O(1)$ | $O(n)$ | $O(1)$ |
Delete (Random) | $O(n)$ | $O(n)$ | $O(n)$ | $O(n)$ | $O(n)$ |
Delete (Back) | $O(n)$ | $O(n)$ | $O(1)$ | Amortized $O(1)$ | $O(1)$ |
Merge | $O(n)$ | $O(n)$ | $O(1)$ | $O(n)$ | $O(n)$ |
Space complexity | $O(n)$ | $O(n)$ | $O(n)$ | $O(n)$ | $O(n)$ |
Example code
Here is a simple example of arrays.
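Since the operations measured below include inserting and erasing at the front, here is a minimal sketch of what “insert first” costs on a plain array compared with std::vector (the helper `insert_front` is illustrative, not taken from the benchmarked code):

```cpp
#include <cstddef>
#include <initializer_list>
#include <vector>

// Inserting at the front of a plain array: every existing element has to be
// shifted one slot to the right, which is where the O(n) cost comes from.
void insert_front(int* data, std::size_t& size, int value) {
    for (std::size_t i = size; i > 0; --i)   // shift right, back to front
        data[i] = data[i - 1];
    data[0] = value;
    ++size;
}

int main() {
    int a[8] = {};
    std::size_t size = 0;
    for (int v : {1, 2, 3})
        insert_front(a, size, v);            // a = {3, 2, 1, ...}

    std::vector<int> v = {1, 2, 3};
    v.insert(v.begin(), 0);                  // std::vector shifts internally too
}
```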
Performance
Performance depends on the experimental environment. I tested on my Raspberry Pi, and the results look as follows.
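The exact harness is not shown here; a rough sketch of how such numbers can be collected with std::chrono (the operation count and the nanosecond unit are assumptions, not the original setup):

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Run one operation many times and report the elapsed wall-clock time.
int main() {
    constexpr int kOps = 100000;              // assumed iteration count
    std::vector<int> v;

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kOps; ++i)
        v.insert(v.begin(), i);               // "insert first" on std::vector
    auto end = std::chrono::steady_clock::now();

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start);
    std::printf("insert first: %lld ns\n", static_cast<long long>(ns.count()));
}
```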
Without compiler optimization option
Time consumption | Basic array | Circular array | std::vector | Basic / Circular | std::vector / Circular |
---|---|---|---|---|---|
Insert first | 5.43E+08 | 730299 | 80218289 | 743.5641 | 109.8431 |
Erase first | 5.07E+08 | 249735 | 77053027 | 2030.535 | 308.5392 |
Insert middle | 2.72E+08 | 2.63E+08 | 64868023 | 1.03441 | 0.246638 |
Erase middle | 3.69E+08 | 93046427 | 60962537 | 3.963127 | 0.655184 |
Insert random | 2.69E+08 | 1.32E+08 | 67554999 | 2.028211 | 0.510193 |
Erase random | 2.79E+08 | 1.34E+08 | 61448193 | 2.081033 | 0.457566 |
With compiler optimization option
Time consumption | Basic array | Circular array | std::vector | Basic / Circular | std::vector / Circular |
---|---|---|---|---|---|
Insert first | 69745452 | 243124 | 72715663 | 286.8719 | 299.0888 |
Erase first | 19728613 | 352 | 70335698 | 56047.2 | 199817.3 |
Insert middle | 28044463 | 33800611 | 58150249 | 0.829703 | 1.72039 |
Erase middle | 13088558 | 7911878 | 58191397 | 1.654292 | 7.354941 |
Insert random | 29029662 | 11617777 | 59405666 | 2.498728 | 5.113342 |
Erase random | 9968517 | 11553333 | 57890608 | 0.862826 | 5.010728 |