******************* Big O *********************
- Time Complexity: the idea is that if code1's runtime is less than code2's, then code1 is better. The interesting thing about time complexity is that it is not measured in time, because if you took the same code and ran it on a computer that runs twice as fast, it would complete twice as fast; that doesn't make the code any better, it just means the computer is better. So it is measured in the number of operations it takes to complete something, and we'll look at examples of that as we go along.
- Space Complexity: let's say that code one, while it runs very fast comparatively, takes up a lot of memory when it runs, and code two, even though it takes much longer to finish, takes up less memory. If preserving memory space is your most important priority and you don't mind having some extra time complexity, maybe code two is better.
- While dealing with time complexity and space complexity there are three Greek letters: Omega, Theta, and Omicron. Omicron is better known as O, as in Big O. Omega describes the best case and Theta the average case, but technically, when we talk about Big O, we are always talking about the worst case.
#############Types of Big O
1. O(n) / Drop constants: simply, we can say O(2n), O(3n), O({any constant}n) = O(n). On the graph, the bottom axis is n and the other axis is the number of operations, so O(n) is always going to be a straight line. It is what is called proportional.
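A minimal sketch of an O(n) function (the name print_items is illustrative, not from the course): the loop body runs n times, so the number of operations grows in proportion to n.

def print_items(n):
    for i in range(n):  # the loop body runs n times -> O(n)
        print(i)

print_items(10)  # prints 0 through 9: 10 operations for n = 10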
2. Drop non-dominants: to explain this, start with the nested for loop we saw in the last video and follow it with another, separate loop; not inside the nested for loops above, but a completely separate one. In VS Code, calling that function with the number 10, the nested for loop prints pairs starting at (0, 0) and going to (9, 9), and then the separate for loop prints 0 through 9.
The nested for loops ran O(n^2) times and the single for loop ran O(n) times, so the total number of operations is n^2 + n, and we could write that as O(n^2 + n). But the main thing we're concerned about is what happens when n becomes a very large number. Even with a not-so-large number like 100, n^2 is 10,000, while the n that stands alone is only 100. As a percentage of the total number of operations it is insignificant, and as n becomes a thousand or a million, that standalone n is a very small portion of the total. In this expression, n^2 is the dominant term and the standalone n is the non-dominant term, so we just drop the n: we drop the non-dominants. A sketch of this code follows the summary below.
In short:
O(n^2 + n), or anything else like it, simplifies to O(n^2)
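A hedged sketch of the code described above (the course's exact code isn't shown here, so this shape is assumed):

def print_items(n):
    for i in range(n):
        for j in range(n):  # nested loop: n * n = n^2 operations
            print(i, j)
    for k in range(n):      # separate loop: n more operations
        print(k)

print_items(10)  # prints the 100 pairs (0, 0) through (9, 9),
                 # then 0 through 9: n^2 + n operations -> O(n^2)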
3. O(1): we'll explain this with a function called add_items. We pass it an n, and all we do is return n + n. With the previous Big Os we've seen, the number of operations increases as n gets bigger, but in this case the only operation is the addition. If n is one, you have one operation; if it's a million, you still have one operation. That is O(1). But what if you have two operations, say return n + n + n? Does that become O(2)? As you may have guessed, even in this situation we call it O(1). O(1) is also called constant time, meaning that as n increases, the number of operations remains constant. On the graph, O(1) is just a flat line across the bottom, because as n increases we are not increasing the number of operations. It is the most efficient Big O: any time you can make anything O(1), that is as optimal as you can make it. That is our overview of O(1); a sketch follows the summary below.
In short:
O(2), O(3), O(any constant number of operations) = O(1)
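A minimal sketch of constant time, using the add_items function from the description above:

def add_items(n):
    return n + n + n  # two additions, but still O(1): the number of
                      # operations does not grow with n

print(add_items(10))       # 30
print(add_items(1000000))  # a much larger n, same number of operations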
4. O(log n): take a list that has the items one through eight in it; for this demo you have to have sorted data. What we're going to do is find the most efficient way of finding a number in this list. We'll look for the number one, and of course we won't know where it is in the list, so we need the most efficient approach that would work for finding any number.
Here's what we do: cut the list in half and ask, is it in the first half or the second half? It's not in the second half, so we remove that, and now we only have to search through half as many numbers. Do it again: it's not in the second half, so we remove those. Do it a third time, and we find the number we're looking for.
Bringing all the numbers back and counting, it took three steps to find the item among eight items, and it just so happens that two to the third power is eight. We're not going to get heavy into the math of logarithms; we'll just do something basic and turn that equation into a logarithm: 2^3 = 8 is the same as saying log2(8) = 3. What you're basically asking is, two to the what power equals eight? It's two to the third power. Another way to look at it: how many times do you have to take the number eight and divide it by two to get down to one item? Three times.
The real power of this shows with very large numbers. Say we take log2 of a number over a billion (2^31 is 2,147,483,648): how many times do you have to cut that number in half to get down to one item? 31 times. So if you think about a list with a billion items in it, iterating through it linearly from beginning to end could take over a billion operations, but with this process of repeatedly cutting it in half, you can find any number in that list in 31 steps. That is the power of O(log n).
On the graph it is very flat and very efficient: not as flat as O(1), of course, but far more efficient than O(n) or O(n^2). These four are what we'll see for most of the course, but there's one other worth showing: O(n log n), which is used by some sorting algorithms, and we'll see a couple of those in this course, merge sort and quick sort. O(n log n) is the most efficient you can make a sorting algorithm that sorts various types of data, such as strings; there are some sorting algorithms for numbers only that are more efficient. But for the rest of the course, other than those sorting algorithms, these four will be the ones we see. That is our overview of O(log n); a binary-search sketch follows the summary below.
In short:
log2(n) = the number of times you have to cut n in half to get down to one item
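A hedged sketch of binary search, the divide-in-half process described above (the course hasn't shown this code, so the names are illustrative):

def binary_search(sorted_list, target):
    low, high = 0, len(sorted_list) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2  # cut the remaining range in half
        if sorted_list[mid] == target:
            return mid, steps
        elif sorted_list[mid] < target:
            low = mid + 1        # discard the first half
        else:
            high = mid - 1       # discard the second half
    return -1, steps

index, steps = binary_search([1, 2, 3, 4, 5, 6, 7, 8], 1)
print(index, steps)  # index 0, found in log2(8) = 3 steps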
5. Different terms for inputs: if a function takes two separate inputs a and b, a loop over a followed by a loop over b is O(a + b), while a loop over b nested inside a loop over a is O(a * b). Because a and b are independent, these cannot be simplified to O(n).
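A minimal sketch of both cases (the function names are mine):

def separate_loops(a, b):
    for i in range(a):      # a operations...
        print(i)
    for j in range(b):      # ...plus b operations -> O(a + b)
        print(j)

def nested_loops(a, b):
    for i in range(a):
        for j in range(b):  # a * b operations -> O(a * b)
            print(i, j)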
6. Big O of lists: since the list is a built-in data structure, it's very common to compare other data structures, like the ones we're going to build, against lists. We'll start with a list of a few random numbers, 11, 3, 23, and 7, represented with their indexes as well, because the indexes are important in talking about the Big O of lists.
Let's look at a few things you can do with this list, which we'll call my_list. If we append the number 17, that is a simple operation: there is no re-indexing of any of the other items all the way down the list. Now take that same item and remove it with a pop. Once again, there is no re-indexing down the line; all we have to do is add or remove at the end. So append and pop are O(1).
But if we say my_list.pop(0), removing that first item, the problem is that the remaining indexes are now incorrect: the item at index one has to be re-indexed to zero, and so on all the way down the line for the entire list. The same is true if we bring that item back with an insert at index zero of the value 11: we have to re-index everything down the line in order to put it at the front. So whether you're removing from or adding to the front of a list, it is O(n), where n is the number of items in the list.
As far as the Big O of lists goes, the most important things to remember are: on one end, adding or removing is O(1); on the other end, removing (because of the re-indexing that has to happen) and adding are O(n). So O(1) on one end and O(n) on the other.
What if we insert something in the middle, say my_list.insert(1, 'Hi')? The indexes after that point are no longer correct, so we have to re-index them all the way down the list. Therefore it is O(n). Now you might say, hey, that's n/2, because if it's somewhere in the middle you're only going through half the items. There are two problems with that logic: first, Big O measures the worst case, not the average case; second, one half is a constant, and we drop constants. Either way, this is O(n). The same is true if we remove the item at that index: we have to do all of that re-indexing, so it is O(n).
How about looking for an item by value, say the number seven? We start at the beginning and iterate through the list until we get to the seven, so that is O(n). However, if you look it up by index, say the value at the index of three, you can go directly to that place in memory based on the index, and that is O(1). And that is our overview of the Big O of lists; a summary sketch follows below.
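A summary sketch of the list operations above with their complexities (the values come from the example):

my_list = [11, 3, 23, 7]

my_list.append(17)       # O(1): add at the end, no re-indexing
my_list.pop()            # O(1): remove from the end, no re-indexing
my_list.pop(0)           # O(n): every remaining item is re-indexed
my_list.insert(0, 11)    # O(n): every item shifts to make room
my_list.insert(1, 'Hi')  # O(n): half the items shift, but we drop constants
print(7 in my_list)      # O(n): searching by value iterates the list
print(my_list[3])        # O(1): access by index goes straight to memory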
7. Wrap up: bring up the graph and say n is 100, and look at what each of these four are. When n is 100, O(1), at the bottom, equals one. O(log n) is approximately seven. O(n) is 100, because it is n. And O(n^2) is 10,000. You can already see there's a very big difference between each of these, and that O(n^2), compared to the other three, is very inefficient.
This spread becomes even bigger when n becomes larger. When n is a thousand, O(1), down at the bottom, is still one: it's not affected by n becoming larger. O(log n) is approximately 10, because two to the tenth power is 1,024 and a thousand is pretty close to that; but notice it only went from seven to 10, even though n went from 100 to a thousand. O(n) is a thousand, and O(n^2) goes all the way to a million. So as n grows, n^2 grows very, very fast; it's very inefficient compared to any of the others. As we get into the course, there will be situations where you do something that is O(n^2) and you can rewrite the code to make it O(n). That is a huge increase in efficiency when you can do it. A quick computation of these numbers follows below.
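A quick computation of the numbers above (assuming base-2 logs, as in the divide-and-conquer discussion):

import math

for n in (100, 1000):
    print(f"n = {n}: O(1) = 1, O(log n) ~ {math.log2(n):.0f}, "
          f"O(n) = {n}, O(n^2) = {n * n}")

# n = 100:  O(1) = 1, O(log n) ~ 7,  O(n) = 100,  O(n^2) = 10000
# n = 1000: O(1) = 1, O(log n) ~ 10, O(n) = 1000, O(n^2) = 1000000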
Now let's look at some terminology for all of these. O(n^2) is a loop within a loop. O(n) is proportional; it will always be a straight line. O(log n) is divide and conquer; when you hear divide and conquer, that is O(log n). And O(1) is constant time. Those are our four terms for the four Big Os we'll see for most of the course.
Now let's flip over to a website called bigocheatsheet.com. At the very top of the page there is a chart. Two of the complexities on it we're not going to see in this course; in fact O(n!), the one all the way to the left, is something you would have to intentionally write bad code to achieve. The worst one we will see in this course is O(n^2), which is in the "horrible" category. Making our way across, there is O(n log n), which, remember, we'll see in a couple of sorting algorithms, and then down at the bottom is where we want to stay, with these three: O(n), O(log n), and O(1).
Below the chart, if you scroll, they have common data structure operations: a variety of data structures on the left, and a whole area for time complexity broken into average on one side (notice these all start with the Greek letter Theta) and worst on the other (these use Big O). Over to the right is space complexity, which just has Big O, not Omega or Theta. You can see that, except for one, the skip list, which we're not going to build in this course, they're all O(n). This is one of the reasons we're going to spend much more time on time complexity than space complexity: in the entire data structure section, all the space complexities are going to be the same.
If you scroll again, you get to array sorting algorithms. This has best case with Omega, average with Theta, and worst with Big O for time complexity, and then the space complexities are all over the place; this is where we will actually talk about space complexity. One interesting thing is that quick sort and merge sort appear at the top, for the most part O(n log n), but their space complexity is not the best. Whereas down here are the three we're going to build, bubble sort, insertion sort, and selection sort, and their time complexities, for the most part, are not good: they're all O(n^2). But all three of them have a space complexity of O(1). So these three are considered to be, shall we say, primitive sorting algorithms, but from a space complexity standpoint they're great.
Also, in their best possible scenario, they're more efficient than quick sort or merge sort. That scenario, by the way, is when you have already-sorted or almost-sorted data: then they're O(n). And that's also the worst possible scenario for quick sort. Say you have sorted or almost-sorted data; that could happen if you have a sorted list and you add one item to the end and then sort it again, so it is almost sorted, or it could be that the item on the end is the highest number and it's completely sorted. Quick sort is going to be terrible in a situation like that, but bubble sort and insertion sort are going to be very good.
These are the kinds of questions you're going to get in an interview. This is why you have to know the Big O, and for that matter the Theta or the Omega for the time complexity, and understand the space complexity, to be able to answer them. I would also encourage you to go to this website and spend some time looking it over after you start getting an idea of how this works in the course. I will add a link to this website in the resources for this video. And that is our wrap-up for Big O.
Classes:
class Cookie:
    def __init__(self, color):  # constructor: runs when a Cookie is created
        self.color = color

    def get_color(self):  # getter
        return self.color

    def set_color(self, color):  # setter
        self.color = color

cookie_one = Cookie('green')
cookie_two = Cookie('blue')

print('Cookie one is', cookie_one.get_color())
print('Cookie two is', cookie_two.get_color())

cookie_one.set_color('yellow')

print('\nCookie one is now', cookie_one.get_color())
print('Cookie two is still', cookie_two.get_color())
Pointers:
# Integers are immutable: num2 = num1 makes both names point at the
# same 11 in memory, and reassigning num2 points it at a new object.
num1 = 11
num2 = num1

print("Before num2 value is updated:")
print("num1 =", num1)
print("num2 =", num2)

print("\nnum1 points to:", id(num1))
print("num2 points to:", id(num2))

num2 = 22

print("\nAfter num2 value is updated:")
print("num1 =", num1)
print("num2 =", num2)

print("\nnum1 points to:", id(num1))
print("num2 points to:", id(num2))

# Dictionaries are mutable: dict2 = dict1 makes both names point at the
# same dictionary, so a change made through dict2 shows through dict1.
dict1 = {
    'value': 11
}
dict2 = dict1

print("\n\nBefore value is updated:")
print("dict1 =", dict1)
print("dict2 =", dict2)

print("\ndict1 points to:", id(dict1))
print("dict2 points to:", id(dict2))

dict2['value'] = 22

print("\nAfter value is updated:")
print("dict1 =", dict1)
print("dict2 =", dict2)

print("\ndict1 points to:", id(dict1))
print("dict2 points to:", id(dict2))
Pointers work differently with different data types. We'll start with a variable called num1 and set it equal to the integer 11, then create a second variable called num2 and set it equal to num1. When we do this, we create an integer 11 somewhere in memory and point the variable num1 at it. But the question is what happens when we set num2 equal to num1: are we creating a separate 11 that num2 points to, or does num2 point to the same 11 in memory that num1 is pointing to?
To test this out, flip over to VS Code. There are num1 = 11 and num2 = num1 at the top, and then we print out the values of num1 and num2. But it's the two id() lines that we're really interested in: they give us the address where num1 and num2 are stored. Running this, you can see that num1 and num2 point to the exact same address; they are pointing to the same 11 in memory.
The next question is what happens if we set num2 equal to 22. Does the 22 overwrite the 11, in which case num1 and num2 would now both be pointing at 22? Or do we create a separate integer somewhere else in memory and have num2 point to that? To test this, add a few more lines: set num2 equal to 22, show what num1 and num2 are equal to, and show the addresses num1 and num2 point to. Running this, the top part is what printed before, and below it is what printed after num2 was updated. You can see that num1 and num2 now hold two different values, and that num1's address after the change is the same as it was before the change.
Flipping back, the diagram shows why this works: integers are immutable, which means they cannot be changed. Once you put the number 11 into a particular place in memory, you cannot change that 11. You can point num1 to a different integer stored somewhere else, but you can't actually change the 11 once you create it.
Now let's look at another data type that works in a different way than integers do. Create a dictionary called dict1 and set it equal to {'value': 11}; this variable points at that dictionary in memory. Now create another variable called dict2 and set it equal to dict1. The question is, do we create an identical additional dictionary someplace else in memory and have dict2 point at it, or does dict2 point at the same dictionary in memory? The answer is that both point to the same dictionary, the same as with integers. But what if we say dict2['value'] = 22? Do we take that 11 and overwrite it, or do we create a second dictionary and have dict2 point at that?
To test this, flip back to VS Code. That's where we create dict1 with a value of 11, set dict2 equal to dict1, print out both dictionaries, and look at the addresses where dict1 and dict2 are stored. Running this, you can see that dict1 and dict2 are both pointing to the same address in memory. Now add a few lines of code that change the value in dict2 to 22, print the values of both dictionaries again, and print both addresses. Running this, after the value is updated, both dict1 and dict2 have the value 22; they are the same. This is different from what we had with integers. Also, dict1 and dict2 are still pointing to the same address, identical to what it was before the update.
So with dictionaries, we can take that 11 and change it to a 22, where integers are immutable and can't be changed. Dictionaries can be changed, so dict1 and dict2 continue pointing to the same place, but we are able to change the value.
So why is this important for a data structures and algorithms course? Because we're going to see this concept come up a lot. For example, the nodes of a linked list operate very much like a dictionary: if you have two variables that point at the same node, you can change the value of that node, say from a 22 to a 44, and the variables continue to point at the same node even though we changed the value, as the sketch below shows.
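A minimal sketch of two variables pointing at the same node, represented here as a dictionary as the section suggests (the variable names are mine):

node = {'value': 22}
head = node
tail = node            # both names point at the same node

head['value'] = 44     # change it through one name...
print(tail['value'])   # ...and it is visible through the other: 44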
Now change that back to a 22, and back to a dictionary, to show a couple of other concepts.
Let's say we have another dictionary called dict3, with a value of 33. If we set dict2 equal to dict3, we point dict2 at the new dictionary: we can change where these variables point. You'll see this quite a bit in the next section on linked lists, where we might want to move head ahead to point to a new node; when we do that, we just set head equal to something else that is pointing at that node.
There's one more concept to explain. Set dict1 equal to dict2, which points dict1 at that same dictionary, so all three variables point at it. Now, what happens to the first dictionary? Nothing points to it anymore, and if you don't have a variable pointing at it, you don't have any way to access it. What Python does in this situation is remove it through a process called garbage collection. And that is our overview of pointers; a sketch follows below.
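A hedged sketch of the repointing and garbage-collection discussion above (the names are illustrative):

dict1 = {'value': 22}
dict2 = dict1    # both point at the same dictionary
dict3 = {'value': 33}

dict2 = dict3    # repoint dict2 at the new dictionary
dict1 = dict2    # now all three names point at {'value': 33}

# Nothing points at {'value': 22} anymore, so Python's garbage
# collector is free to reclaim it.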