Nitin-bhatt46

Day 20: Unveiling the Magic of Programming in Java, C++, Python, and Kotlin! 🚀

DAY - 20

For more tech content, join us on LinkedIn: click here

All the code snippets in this journey are available on my GitHub repository. 📂 Feel free to explore and collaborate: Git Repository

Today's Learning :-

Today we will again go deep into Time and Space Complexity :-

"The understanding of time and space complexity is paramount for crafting exceptional products. Every programmer must grasp these concepts to elevate their coding prowess."

Time Complexity:
Time complexity measures the amount of time required by an algorithm to produce the output, based on the size of its input. It's all about analysing how the runtime of an algorithm scales with the size of the problem.

Time complexity is usually expressed using Big O notation (O()), which represents an upper bound on the growth rate of an algorithm as the input size increases.

Big O Notation (O()):
Big O notation describes the upper bound or worst-case scenario of an algorithm's time complexity.

It provides an asymptotic upper bound on the growth rate of an algorithm.

For example, an algorithm with time complexity O(n) means that the runtime of the algorithm grows linearly with the size of the input (n).

Common time complexities expressed using Big O notation include
O(1) (constant time), O(log n) (logarithmic time), O(n) (linear time), O(n log n) (linearithmic time), O(n^2) (quadratic time), O(2^n) (exponential time), and O(n!) (factorial time).
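
To make a few of these growth rates concrete, here is a minimal Java sketch (the method names are hypothetical, written just for this post):

```java
public class ComplexityExamples {

    // O(1): constant time - a single array access, regardless of input size
    static int getFirst(int[] arr) {
        return arr[0];
    }

    // O(n): linear time - visits every element exactly once
    static int findMax(int[] arr) {
        int max = arr[0];
        for (int value : arr) {
            if (value > max) max = value;
        }
        return max;
    }

    // O(n^2): quadratic time - a nested loop over the same input
    static int countPairs(int[] arr) {
        int count = 0;
        for (int i = 0; i < arr.length; i++) {
            for (int j = i + 1; j < arr.length; j++) {
                count++; // one unit of work per pair of elements
            }
        }
        return count;
    }
}
```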

Omega Notation (Ω()):

Omega notation describes the lower bound or best-case scenario of an algorithm's time complexity.
It provides an asymptotic lower bound on the growth rate of an algorithm.
For example, an algorithm with time complexity Ω(n^2) means that the runtime of the algorithm is at least quadratic as the input size increases.
Omega notation is less commonly used than Big O notation but can provide insights into the best-case performance of an algorithm.
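
As a quick illustration, a plain linear search (a hypothetical Java sketch, not from the repository) has a best case of Ω(1) when the target sits at index 0, even though its worst case is O(n):

```java
// Linear search: worst case O(n), best case Ω(1)
static int linearSearch(int[] arr, int target) {
    for (int i = 0; i < arr.length; i++) {
        if (arr[i] == target) {
            return i; // best case: found on the very first comparison
        }
    }
    return -1; // worst case: every element was checked
}
```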

Theta Notation (Θ()):

Theta notation describes both the upper and lower bounds of an algorithm's time complexity, providing a tight bound on its growth rate.
It indicates that an algorithm's runtime grows at the same rate as the function represented by Θ().
For example, an algorithm with time complexity Θ(n) means that its runtime grows linearly with the input size, neither faster nor slower.
Theta notation is useful for describing algorithms with deterministic behaviour where the best and worst cases are the same.
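
Summing an array is one such algorithm: every element must be read exactly once, so best, average, and worst cases all grow linearly, i.e. Θ(n). A minimal sketch:

```java
// Θ(n): the loop always runs exactly arr.length times,
// so the best and worst cases are identical.
static long arraySum(int[] arr) {
    long total = 0;
    for (int value : arr) {
        total += value;
    }
    return total;
}
```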

Industry Standard: Big O Notation
In the industry, we commonly use Big O notation because it represents the worst-case scenario of an algorithm's performance. It provides an upper bound on the time or space complexity, helping us understand how the algorithm behaves as the input size increases.

For example, let's consider searching for an element in an array:
Linear Search: O(n) - The worst-case time complexity is linear, meaning the time it takes grows linearly with the size of the array.
Binary Search: O(log n) - In a sorted array, binary search exhibits logarithmic time complexity, making it much faster for large datasets.
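
One common way to write binary search in Java (a sketch, assuming the array is sorted in ascending order; linearSearch above covers the O(n) case):

```java
// Binary search: O(log n) - each comparison halves the remaining range.
static int binarySearch(int[] arr, int target) {
    int low = 0, high = arr.length - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // avoids overflow of (low + high)
        if (arr[mid] == target) {
            return mid;
        } else if (arr[mid] < target) {
            low = mid + 1;  // target lies in the upper half
        } else {
            high = mid - 1; // target lies in the lower half
        }
    }
    return -1; // target not present
}
```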

Rules for time complexity :

Ignore constant terms.
Always keep the maximum (dominant) term: from N^3 + N^2, we choose N^3, so the overall complexity is O(N^3).
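
As a small illustrative sketch (the method and its operation counter are hypothetical), the code below performs roughly N^2 + N + 1 units of work, and only the N^2 term survives, giving O(N^2):

```java
// Work done: n^2 (nested loops) + n (single loop) + 1 (constant).
// Dropping constants and lower-order terms leaves O(n^2).
static int dominantTermDemo(int n) {
    int ops = 0;
    for (int i = 0; i < n; i++) {   // contributes n^2 operations
        for (int j = 0; j < n; j++) {
            ops++;
        }
    }
    for (int i = 0; i < n; i++) {   // contributes n operations (dropped)
        ops++;
    }
    return ops + 1;                 // constant term (dropped)
}
```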

Space Complexity:
Space complexity evaluates the amount of memory space an algorithm requires as a function of its input size. It focuses on understanding how much memory is needed for the algorithm to run efficiently.

Types of Space Complexity:
Just like time complexity, space complexity can be categorised into different notations:
Big O (O): Represents the upper bound on the amount of memory space an algorithm needs. It gives us an estimate of the maximum space used.

Omega (Ω): Denotes the lower bound on space usage, indicating the minimum memory required by an algorithm.

Theta (Θ): Provides a tight bound on space complexity, showing both the upper and lower limits.

Why Space Complexity Matters:
Optimising space usage is crucial, especially in memory-constrained environments like embedded systems or mobile devices. Understanding space complexity helps in designing algorithms that minimise memory consumption while maintaining optimal performance.

Consider a simple example in Java:
A declaration like int[] arr; only creates a reference; memory is actually allocated when we write new int[n], which costs O(n) space whether or not every slot is used. Allocating a far larger array than needed therefore adds to the space complexity unnecessarily, as sketched below.
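
To illustrate the trade-off, here is a hypothetical Java sketch contrasting O(n) extra space with O(1) extra space:

```java
// O(n) extra space: allocates a whole new array for the result.
static int[] reversedCopy(int[] arr) {
    int[] result = new int[arr.length]; // n ints allocated up front
    for (int i = 0; i < arr.length; i++) {
        result[i] = arr[arr.length - 1 - i];
    }
    return result;
}

// O(1) extra space: reverses in place, using only two index variables.
static void reverseInPlace(int[] arr) {
    for (int i = 0, j = arr.length - 1; i < j; i++, j--) {
        int tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
    }
}
```
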
Remember, choosing the right algorithm with optimal time and space complexity is key to efficient problem-solving and system performance.

Feel free to reshare this post to enhance awareness and understanding of these fundamental concepts.
Code snippets are in the Git repository.

๐Ÿ™ Thank you all for your time and support! ๐Ÿ™
