Coding since 11yo, that makes it over 30 years now ~~~
Have a PhD in Comp Sci ~~~
Love to go on bike tours ~~~
I try to stay as generalist as I can in this crazy wide place coding is at now.
It's always nice to avoid a second iteration over the elements when it's possible and efficient to do so. It lets the function be applied to a stream or a very large string, makes the function easier to analyse, and avoids doing unnecessary work on elements it doesn't strictly need to touch.
Here's a single-loop TS implementation that will early exit as soon as it finds an element that...

- has been seen before, or...
- makes the new max char code too distant from the new min char code
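A sketch of the idea (lowercase ASCII input assumed; `isAdjacentDistinct` is a made-up name):

```typescript
const isAdjacentDistinct = (s: string): boolean => {
  const seen = new Set<number>();
  let min = Infinity;
  let max = -Infinity;
  // [...s] arrayifies the string so we get every's early-exit behaviour
  return [...s].every((ch) => {
    const code = ch.charCodeAt(0);
    if (seen.has(code)) return false; // seen before: early exit
    seen.add(code);
    min = Math.min(min, code);
    max = Math.max(max, code);
    return max - min < s.length; // too distant: early exit
  });
};
```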
If the whole string has been covered and the call to `every` didn't early exit, then all letters must be distinct (rule two covered), and the max code minus the min code must be less than the length of the string; n distinct codes can only squeeze into a range that tight by filling every value in it, so the letters must be adjacent (rule one covered).
Tested in kata.
Edit: refactor to get rid of that sneaky arrayifying spread operator...
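The refactored shape might look like this (same assumptions as the sketch above; a `for...of` loop gives a genuine single pass with no intermediate array):

```typescript
const isAdjacentDistinct = (s: string): boolean => {
  const seen = new Set<number>();
  let min = Infinity;
  let max = -Infinity;
  for (const ch of s) {
    const code = ch.charCodeAt(0);
    if (seen.has(code)) return false; // duplicate: rule two broken
    seen.add(code);
    min = Math.min(min, code);
    max = Math.max(max, code);
    if (max - min >= s.length) return false; // too distant: rule one can't hold
  }
  return true;
};
```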
> It's always nice to avoid a second iteration

We also have space complexity.
You're right, space is part of the equation too. It's all about what tradeoffs you're happy to make.
In this case, we're checking for duplicates in the string so we're either storing a memo hash (O(n) on space) or iterating over the pairs of elements (O(n^2) on time).
This one is O(n) on space and time, but you could defn make a fn that was O(1) on space if you were ok with O(n^2) on time.
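For illustration, an O(1)-space version might look like this (made-up name; it trades the memo Set for a pairwise scan):

```typescript
// O(1) space, O(n^2) time: compare each char against every earlier one
const isAdjacentDistinctNoMemo = (s: string): boolean => {
  let min = Infinity;
  let max = -Infinity;
  for (let i = 0; i < s.length; i++) {
    const code = s.charCodeAt(i);
    for (let j = 0; j < i; j++) {
      if (s.charCodeAt(j) === code) return false; // duplicate found
    }
    min = Math.min(min, code);
    max = Math.max(max, code);
  }
  return max - min < s.length; // distinct + tight range => adjacent
};
```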
(O(n) == O(2n) btw. The notation allows an arbitrary constant multiplier, precisely so that constant factors like that 2 don't change the class. So the top function up there, where `[...s]` implicitly loops through the string before even hitting the `every`, actually has the same complexity as the lower function that is genuinely just one loop.)