I think the tricky thing about this particular thought experiment is the 'arbitrary' part: setting up a proper test generally requires being specific about the test case and having a tangible case to test. This problem subverts that by stating that the arrays (in this context) can be arbitrarily deep.
One approach that might actually help is to split this problem into two layers. The first layer is identifying two arrays that need to be merged (where one is contained within the other). The second layer is actually combining the two arrays. That may seem a bit arbitrary, but splitting these two concepts will actually help us test.
Let's get more specific. Let's say that `ResponsibilityA` does one thing: given an array as its scope, it determines that we've found an array within it that needs to be decomposed. For the sake of tangible examples, this could be `foo = [1, 2, [bar]]`, and `ResponsibilityA(foo)` would determine that `foo` needs `[bar]` decomposed at position 2. That's it. It doesn't care what `bar` itself is or what it contains.
Now, with that basis we can define `ResponsibilityB` as the decomposer. Its job is also fairly simple: do a single denesting. Again for tangibility, if we have `baz = [1, 2, 3]` and we call `ResponsibilityB(baz)`, it ought to return `1, 2, 3`. This may seem like a small distinction since all we did was remove the square brackets, but the difference here is that it turned `baz` from a single object into three objects. If this seems hard to understand from a practical standpoint, that's because it is! Most languages don't support a function that can return an arbitrary number of objects, so you can either think of this as returning an enumerable / enumeration, or you can abstract the entire concept with JavaScript's spread operator: `...baz`.
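To make that tangible, here's a minimal JavaScript sketch of what `ResponsibilityB` could look like (the lowercase name follows JS convention, and modeling the "multiple return values" as an array the caller spreads is just one possible choice, not a prescription):

```js
// ResponsibilityB: a single denesting. Since a JS function can only
// return one value, we model "returning 1, 2, 3" as returning a list
// of elements that the caller is expected to spread with `...`.
function responsibilityB(arr) {
  return [...arr]; // hands back the elements, not the wrapper
}

// Spreading the result is what turns one object into three:
const baz = [1, 2, 3];
console.log([0, ...responsibilityB(baz), 4]); // [0, 1, 2, 3, 4]
```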
The last step to making these two machines work together is setting up the structure of `ResponsibilityA` a little more. Let's say that as `ResponsibilityA` crawls through an array, once it finds a subarray that needs to be decomposed, it actually calls out to `ResponsibilityB` to decompose that subarray in real time and in place, then reassesses that index before moving forward (a code sketch of this crawl follows the walkthrough below). To put that into a visual, let's say `foo = [9, 9, [foo, bar, [x, y, z]], 9]`. `ResponsibilityA(foo)` would then begin crawling:
- Element at position 0 is `9`, which is not an array; move forward
- Element at position 1 is `9`, which is not an array; move forward
- Element at position 2 is `[foo, bar, [x, y, z]]`, which is an array:
  - Call `ResponsibilityB` and pass `[foo, bar, [x, y, z]]`
  - `ResponsibilityB` returns `foo, bar, [x, y, z]`
  - `ResponsibilityA` replaces the element at position 2 with the returned values
  - `ResponsibilityA` augments its loop or iteration to reassess element at position 2 next
  - (for mental debugging purposes, `foo` is now `[9, 9, foo, bar, [x, y, z], 9]`)
- Element at position 2 is `foo`, which is not an array; move forward
- Element at position 3 is `bar`, which is not an array; move forward
- Element at position 4 is `[x, y, z]`, which is an array:
  - Call `ResponsibilityB` and pass `[x, y, z]`
  - `ResponsibilityB` returns `x, y, z`
  - `ResponsibilityA` replaces the element at position 4 with the returned values
  - `ResponsibilityA` augments its loop or iteration to reassess element at position 4 next
  - (for mental debugging purposes, `foo` is now `[9, 9, foo, bar, x, y, z, 9]`)
- Element at position 4 is `x`, which is not an array; move forward
- Element at position 5 is `y`, which is not an array; move forward
- Element at position 6 is `z`, which is not an array; move forward
- Element at position 7 is `9`, which is not an array; move forward
- Index == length; complete; return `foo` (`[9, 9, foo, bar, x, y, z, 9]`)
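Here's one way that crawl could look in JavaScript. This is a sketch under my own assumptions: the in-place `splice` and the injectable `denest` parameter (defaulting to `responsibilityB`, which will make mocking easier later) are illustrative choices, not the only valid shape.

```js
// ResponsibilityA: crawl the array; when an element is itself an array,
// call out to ResponsibilityB, splice the returned elements in place of
// the subarray, and reassess the same index before moving forward.
function responsibilityA(arr, denest = responsibilityB) {
  let i = 0;
  while (i < arr.length) {
    if (Array.isArray(arr[i])) {
      // Replace in place with whatever the decomposer hands back;
      // do NOT advance i, so this index gets reassessed next.
      arr.splice(i, 1, ...denest(arr[i]));
    } else {
      i++; // not an array, move forward
    }
  }
  return arr; // index == length; complete
}
```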
So what's the point of splitting these two concepts? Each one is individually testable! As I mentioned above, the "arbitrary" bit of the problem statement prevents us from writing a full test for the whole problem, since you can't write a test with arbitrary data; that would require infinite cases. What we can do is write a test for each of the individual responsibilities above to make sure they work independently, then just a simple test case to prove that they work together (the above walkthrough is a perfect 'simple test case to prove that they work together', so we'll use it below).
(For the math geeks out there, this process is effectively similar to constructing a math proof on the basis of induction.)
So let's do it. `ResponsibilityA` is a single-layer-deep concern, meaning that if you call `ResponsibilityA([1, 2, [foo, [bar]]])`, it will recognize that `[foo, [bar]]` needs to be decomposed at position 2, but it will not dig further into that array to also determine that `[bar]` needs to be decomposed too. Cool? Let's write a test then. In order to cover the cases of a nested array being found at the first position, the middle, and the last position, let's wrap this into a single test of inputs and outputs:
If I give `ResponsibilityA` an argument of `[[1, 2], [[a], b], [#, $]]`, it should identify that array decompositions need to occur at positions 0, 1, and 2. How do we test that? We mock `ResponsibilityB` and expect it to receive a call with argument `[1, 2]`. Let's also mock it to return `foo` instead of `1, 2` so we can prove that the "replace in-place" bit is working too. So overall (a runnable sketch of this test follows below), we will expect `ResponsibilityB` to:
1. (mentioned above) Receive a call with argument `[1, 2]`, and we'll mock it to return `foo`
2. Receive a call with argument `[[a], b]`, and we'll mock it to return `[bar], baz`
3. Receive a call with argument `[bar]`, and we'll mock it to return `bar`
4. Receive a call with argument `[#, $]`, and we'll mock it to return `qux`
If we run that test, we can rely on the expectations of `ResponsibilityB` receiving those four calls with those specific arguments, and the mocked returns should guarantee that `ResponsibilityA` ultimately returns a final product of `[foo, bar, baz, qux]`. That's a perfectly valid test of `ResponsibilityA` that proves it is:
- Identifying subarrays
- Calling `ResponsibilityB` and passing the subarrays as it encounters them (indicated by receiving call #3 after getting response #2, instead of jumping straight from #2 to #4)
- Replacing the subarray object at that index in-place with whatever `ResponsibilityB` gives back
- Reassessing the index of the subarray it passed to `ResponsibilityB` (since #2 above sends back an array in the first position) after it does the replace-in-place
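In code, that test could look something like this Jest-flavored sketch. The framework choice, the strings standing in for the placeholder values (`a`, `#`, etc. aren't valid bare JS), and the injectable collaborator from the sketch above are all assumptions of mine:

```js
// Mock ResponsibilityB, pin its four return values, and assert both
// the calls it received and the final in-place-assembled product.
test('ResponsibilityA identifies, delegates, replaces, and reassesses', () => {
  const mockB = jest.fn()
    .mockReturnValueOnce(['foo'])           // call #1, for [1, 2]
    .mockReturnValueOnce([['bar'], 'baz'])  // call #2, for [['a'], 'b']
    .mockReturnValueOnce(['bar'])           // call #3, for ['bar']
    .mockReturnValueOnce(['qux']);          // call #4, for ['#', '$']

  const result = responsibilityA([[1, 2], [['a'], 'b'], ['#', '$']], mockB);

  // The four calls arrive in encounter order; #3 landing before #4
  // is what proves the reassess-in-place behavior.
  expect(mockB.mock.calls).toEqual([
    [[1, 2]],
    [[['a'], 'b']],
    [['bar']],
    [['#', '$']],
  ]);
  expect(result).toEqual(['foo', 'bar', 'baz', 'qux']);
});
```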
That's awesome. That test proves that ResponsibilityA does exactly what it's intended to do. Now we just need to test that ResponsibilityB actually does what it's supposed to.
For the sake of brevity, let's just say that if I pass `ResponsibilityB` the argument `[1, 2, 3]`, it should return `1, 2, 3`, and if I pass it `[1, [2], 3]`, it should return `1, [2], 3` (it just takes off the outer brackets).
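As a quick sketch (using the same array-of-elements modeling as before):

```js
// ResponsibilityB strips exactly one layer of brackets and never recurses.
test('ResponsibilityB performs a single denesting', () => {
  expect(responsibilityB([1, 2, 3])).toEqual([1, 2, 3]);
  expect(responsibilityB([1, [2], 3])).toEqual([1, [2], 3]); // inner [2] untouched
});
```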
Since I've proven that `ResponsibilityA` correctly identifies and replaces subarrays in place (then reassesses the same index) without determining how to denest an array itself, and I've proven that `ResponsibilityB` can denest a single array, putting those together does indeed prove that denesting at arbitrary length and depth is achieved. If that's hard to understand, that's totally okay! Induction is a really tough concept to wrap your head around. We're effectively proving that each responsibility works on its own arbitrarily, and therefore putting them together will work arbitrarily too.
Technically we ought to also have at least one test of the two things actually working together, so without mocking what `ResponsibilityB` should expect or what it will return, we can just write a simple test that says:
If I pass `[9, 9, [foo, bar, [x, y, z]], 9]` to `ResponsibilityA`, it should return `[9, 9, foo, bar, x, y, z, 9]`. That's the same example from the walkthrough above, and the idea is the same: prove both responsibilities individually, then use a simple test to prove that they work together, and you've proven the whole thing.
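A minimal sketch, no mocks this time:

```js
// Integration: the real ResponsibilityA driving the real ResponsibilityB.
test('the two responsibilities flatten nested input together', () => {
  const input = [9, 9, ['foo', 'bar', ['x', 'y', 'z']], 9];
  expect(responsibilityA(input)).toEqual([9, 9, 'foo', 'bar', 'x', 'y', 'z', 9]);
});
```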
I hope that makes sense... sorry it turned into an essay!

JonSullivanDev
Also thanks for the inspiration; might turn this into a full blog post
Well Jon, this is more a proof than an implementation. While technically correct, this approach will, however, fail if you have an infinite (i.e. unknown-in-advance) array, or a stream.
What you call "responsibilities" I call "the different cases when inspecting next element". My first solution didn't yield a lazy solution like I wanted, however; I should try again and see if the same subdivision that you envision emerges from that approach or not.
The length and depth of your answer, however, raises a question: what really is a test? Is the goal of TDD to prove a piece of code "correct", or just that "it works"?
You do prove that your approach is "correct" (for finite-length arrays); but is this a "unit test"? Would you call this TDD?
Thanks Jon for taking the time to think through such a detailed solution, but the core of my question is TDD.
I wonder if this problem can be solved in a series of micro red-green-refactor cycles (~30 seconds each).
I don't want a solution; I want you to try it and hear your thoughts.
I'm questioning the usage of TDD, not the problem.
Well, forgive me for being a rather wordy fellow, but the tail end of my solution does outline the very specific red/green tests you could write to TDD the problem... I just give a lot of foundational theory for why I chose those tests ;)
You can TDD anything given the right mindset :D
@michelemauro
I didn't read streams or infinite lists as being part of the parameters of the problem, but the same solution could be slightly adjusted (really just in `ResponsibilityB`) to handle infinite sequences and/or streams pretty readily.
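For instance (purely my own sketch, not part of the original problem), a generator makes the single denesting lazy, so it can consume a stream or an infinite iterable one element at a time:

```js
// A lazy take on ResponsibilityB: yield elements on demand instead of
// materializing them, so an infinite iterable never has to be realized.
function* responsibilityBLazy(iterable) {
  yield* iterable; // one element at a time, on demand
}

// e.g. denesting an endless sequence without hanging:
function* naturals() { for (let n = 0; ; n++) yield n; }
const denested = responsibilityBLazy(naturals());
console.log(denested.next().value, denested.next().value); // 0 1
```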
Anyway, all around good conversation, guys. Cheers!

Ok, as soon as I have some free time I'll try to follow your tests and tell you back what my experience was.
thank you!
Reading again all of your answer: so you are describing an algorithm which possibly solves the problem, and you identify two sub-problems that can be tested individually.
This is a valid technique for solving a problem, but it is totally the opposite of TDD.
Because in TDD you don't know the solution in advance; as you resolve every micro test with the minimum amount of code to satisfy the test, you discover the algorithm.
So you don't know the algorithm in advance.