I am always happy to see people exploring and explaining theoretical aspects of programming, so thank you very much for that!
I'd like to add one aspect though. "Objects" themselves are implemented with other data structures, depending on the programming language used. If you can use them like key-value pairs, they will often be backed by a hash table or dictionary structure. This implies that accessing and removing items isn't necessarily O(1), but can rather be O(log n), like a search in a dictionary. Most likely, adding won't be O(1) either, as hashing is involved and this can lead to collisions, which need to be handled.
If you are using a compiled language (e.g. Java or C#), this will be different, as the compiler knows how to access each of the known items.
I think it can be very insightful to explore the internals of your programming language of choice. Knowing this helps a lot in finding hidden complexity in your algorithms.
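To make that concrete, here is a minimal sketch of a separate-chaining map in TypeScript (the class name, hash function and bucket count are purely illustrative, not how any particular engine or standard library implements it):

```typescript
// A deliberately naive separate-chaining map, just to show where the
// hidden work lives: hashing the key, picking a bucket, and scanning
// that bucket to handle collisions.
type Entry<V> = { key: string; value: V };

class NaiveMap<V> {
  private buckets: Entry<V>[][];

  constructor(private bucketCount = 16) {
    this.buckets = Array.from({ length: bucketCount }, () => []);
  }

  // Hashing itself already costs time proportional to the key length.
  private hash(key: string): number {
    let h = 0;
    for (let i = 0; i < key.length; i++) {
      h = (h * 31 + key.charCodeAt(i)) | 0;
    }
    return Math.abs(h) % this.bucketCount;
  }

  set(key: string, value: V): void {
    const bucket = this.buckets[this.hash(key)];
    // Collision handling: linear scan of the bucket.
    const existing = bucket.find((e) => e.key === key);
    if (existing) {
      existing.value = value;
    } else {
      bucket.push({ key, value });
    }
  }

  get(key: string): V | undefined {
    const bucket = this.buckets[this.hash(key)];
    return bucket.find((e) => e.key === key)?.value;
  }
}
```

Both the hashing loop and the collision scan inside the bucket are exactly the kind of hidden work that a blanket O(1) label glosses over.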
Isn't the amortized insertion/lookup/delete complexity for hash tables O(1)? But yeah, you are right, it is more complicated.
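The usual intuition for the amortized part is the same as for a dynamic array that doubles its capacity when it fills up. A rough sketch (the class and the copy counter are just for illustration, not how any real hash table resizes):

```typescript
// Why insertion is amortized O(1): occasional resizes are O(n),
// but the total copying work over n pushes stays proportional to n.
class GrowableArray<T> {
  private items: T[] = new Array(1);
  private size = 0;
  copies = 0; // counts elements moved during resizes

  push(item: T): void {
    if (this.size === this.items.length) {
      const bigger = new Array<T>(this.items.length * 2);
      for (let i = 0; i < this.size; i++) {
        bigger[i] = this.items[i];
        this.copies++;
      }
      this.items = bigger;
    }
    this.items[this.size++] = item;
  }
}

const arr = new GrowableArray<number>();
for (let i = 0; i < 1_000_000; i++) arr.push(i);
console.log(arr.copies); // 1,048,575: proportional to n, not n^2
```

Individual pushes that trigger a resize are expensive, but averaged over all n pushes the cost per operation is constant. Hash tables amortize their rehashing/resizing steps the same way.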
In the average case (and practically), it really is constant access. But since Big-O analysis is usually about the worst case, it comes down to constructing some obscure cases. Hash-based implementations often use buckets chosen from the resulting hash. The worst case would be that all values have the same hash, so we basically end up with a plain list of values, with the complexity of lists.
But this doesn't really apply in practice, and the implementations are usually so optimized that it should be very hard to even construct such a case.
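In that degenerate case the map effectively collapses into something like this (an illustrative sketch, assuming every key ends up in the same bucket):

```typescript
// If every key hashes to the same bucket, the map is effectively just
// a plain array of entries, and every lookup is a linear O(n) scan.
const bucket: { key: string; value: number }[] = [];

function set(key: string, value: number): void {
  const existing = bucket.find((e) => e.key === key);
  if (existing) existing.value = value;
  else bucket.push({ key, value });
}

function get(key: string): number | undefined {
  return bucket.find((e) => e.key === key)?.value; // scans all entries
}
```

That is the theoretical worst case the O(n) bound describes, even though well-designed hash functions and resizing strategies make it extremely unlikely in practice.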