Learning Algorithms and Data Structures: Arrays

Lareen Melo ✨ ・5 min read

An array is a structure that allocates data of the same type contiguously in memory.

This means an array stores its data sequentially in memory, making its elements easy to access: each one can be located efficiently by its index. Array indices start at 0, so an array of size 3 has indices 0, 1, and 2.

Analogy 🏘

Imagine an array as a street with houses. The houses are built one next to another, and each house has a number that identifies it (just like arrays have indices).

Pros and cons

Pros 👍

• Space efficiency: an array consists purely of its data, so no extra memory is spent on per-element pointers or metadata.
• Accessing elements of the array is a constant-time operation as long as the index of the element is known.

Cons 👎

• Arrays are fixed-size, which can be a big constraint.
• Costly inserts and deletes: in the average and worst cases, these operations are O(N).

Complexity

• Look-ups: O(1)

Look-ups on arrays are very efficient: as long as we know the index of the element we want, retrieving it takes constant time.

• Inserts: O(N)

Insertions are a little more complex. Inserting an element into an array can force other elements to change position. For instance, say I have the array [1, 5, 6, 8, 3, x] (where x is an empty slot) and I'd like to insert a 4 between 6 and 8. We'd have to scoot the elements 8 and 3 one slot to the right and put the 4 right where 8 was.

Doing that is not constant time: the number of elements you'd have to shift to the right depends on the size of the array, which is exactly what makes the operation linear rather than constant.

We have to shift the elements because arrays store them sequentially in memory. That contiguity is the same property that makes arrays space-efficient.

• Appends: O(1)

Insertions at the end are constant time because we only need the index of the first empty slot of the array (if the array is not full, that's the position right after the last element).

• Deletions: O(N)

Just as with insertions, deleting an element from the array sometimes requires scooting other elements over so everything stays sequential.

Arrays are good.

Computers love arrays because they work so well with the hardware. However, being fixed-size is a big con, especially when dealing with data that isn't fixed and is prone to growth.

This is why we have dynamic arrays ✨.

Dynamic Arrays

Dynamic arrays have all of the characteristics of a normal array, with one exception: they automatically resize to a bigger array once their capacity has been filled.

Pros and cons

Pros 👍

• Accessing the elements within the array is a constant-time operation.
• Dynamically sized
• Space efficiency

Cons 👎

• Just like normal arrays, costly inserts and deletes.
• Slow appends: whenever the array is at max capacity and needs to append another element, a resize has to happen.

When a resize happens, a new, bigger array is created in memory; the elements of the current array are copied into it; the old array is deleted; and the new, bigger array becomes our current array... and finally, we append. Obviously, our constant-time appends just became more complex.

Complexity

• Look-ups: O(1)
• Insertions: O(N)
• Deletions: O(N)
• Appends: O(N)

So here's the thing: worst-case appends on dynamic arrays are O(N), but in the average case we can say they're O(1). Crazy, I know, hear me out. There's a concept called amortized analysis, which invites us to take into account that the worst-case append happens rarely: every time we double the array, we earn that many cheap, constant-time appends before the next resize is needed. So the plain worst-case analysis is quite pessimistic, because one big, expensive operation is compensated for by many simple ones. With that said: yes, a single worst-case append is O(N), but the amortized cost per append is O(1).

🔥🔥 Implementation 🔥🔥

So I implemented a dynamic array in C++, and it was a challenge. I'm not well-versed in C++, which is a big disadvantage, but I enjoy working with it, and learning it makes me happy.

Anyways, here's my initial implementation of a simple Dynamic Array in C++.

Community support

One of the beautiful things about forums and the programming community is the number of resources available online to improve and learn. So I asked my engineer friends, and posted on the Code Review section of StackExchange, for tips on how to improve my code.

Here's a shortened list of the points raised as my code was critiqued and analyzed by more experienced developers:

• C++ guidelines

As I mentioned, I'm not well-versed in C++, so a lot of the critiques of my implementation were about the way I handled C++ resources.

• Off-by-one error in my delete function.

Yup, still struggling in life 😩.

A suggestion I found interesting was using a load factor when resizing the array: instead of growing only when capacity is completely full, grow once the array is, say, 80% full (the threshold can be any logically determined number; I just randomly used 80, don't be like me).

Here's the final result after the improvements:

Problem-solving 👩‍💻

Here's where I've been struggling.

Problem-solving is hard for me. My initial goal was to solve 10 array problems of mixed difficulty. However, I ended up solving 4 easy-labeled problems on Leetcode, and boy, did I struggle.

Having the "coding is the easy part" quote as a mantra really helped me think before jumping straight into code, but in all honesty, I still need to practice this skill with a passion. Hopefully next time I can solve problems more smoothly.

ON TO THE NEXT ONE: Strings.

Follow my competitive programming journey on: https://github.com/lareenmelo/algorithms 💓