<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lucas Lira Gomes</title>
    <description>The latest articles on DEV Community by Lucas Lira Gomes (@x8lucas8x).</description>
    <link>https://dev.to/x8lucas8x</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F360701%2Fff16a6ab-858e-4571-8c9f-8a70c75fa8d8.jpeg</url>
      <title>DEV Community: Lucas Lira Gomes</title>
      <link>https://dev.to/x8lucas8x</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/x8lucas8x"/>
    <language>en</language>
    <item>
      <title>What to put on the Redux state</title>
      <dc:creator>Lucas Lira Gomes</dc:creator>
      <pubDate>Wed, 20 May 2020 20:02:51 +0000</pubDate>
      <link>https://dev.to/x8lucas8x/what-to-put-on-the-redux-state-44g0</link>
      <guid>https://dev.to/x8lucas8x/what-to-put-on-the-redux-state-44g0</guid>
      <description>&lt;p&gt;Following up the &lt;a href="https://reactjs.org/"&gt;react&lt;/a&gt;/&lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; community these past years, I've seen many inquiring whether &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt;, or any equivalent state management solution, was even needed for achieving X or Z in a SPA. Alternatives such as using the &lt;a href="https://reactjs.org/docs/context.html"&gt;context API&lt;/a&gt;, as a simpler &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; clone, or even ignoring it altogether and going for &lt;a href="https://reactjs.org/docs/state-and-lifecycle.html"&gt;component state&lt;/a&gt; were also common place.&lt;/p&gt;

&lt;p&gt;Not being one to favour dichotomies, I won't try to convince you that &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; is the way to go, given that I myself have come to regret using &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; for some use cases. In hindsight, one of those bad decisions involved adopting &lt;a href="https://github.com/redux-form/redux-form"&gt;redux form&lt;/a&gt; for handling form state. If you had asked me, a couple of years ago, whether I had any need to keep form data in &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; specifically, I certainly couldn't have raised one valid point for it. Of course, &lt;a href="https://github.com/redux-form/redux-form"&gt;redux form&lt;/a&gt;'s API was nice and facilitated form handling, but it also created its own share of problems.&lt;/p&gt;

&lt;p&gt;Unfortunately &lt;a href="https://github.com/final-form/react-final-form"&gt;react final form&lt;/a&gt;, &lt;a href="https://jaredpalmer.com/formik/"&gt;formik&lt;/a&gt; and others weren't available at the time, so &lt;a href="https://github.com/redux-form/redux-form"&gt;redux form&lt;/a&gt; just seemed right. That said, I had my own share of issues with it, especially when dealing with big forms. More specifically, it dispatches a &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; action per field registration during the initial render, which made forms with many fields feel sluggish. That issue is still present, and libraries relying on &lt;a href="https://reactjs.org/docs/state-and-lifecycle.html"&gt;component state&lt;/a&gt;, such as &lt;a href="https://github.com/final-form/react-final-form"&gt;react final form&lt;/a&gt; and &lt;a href="https://jaredpalmer.com/formik/"&gt;formik&lt;/a&gt;, don't have it. In fact, &lt;a href="https://github.com/redux-form/redux-form"&gt;redux form&lt;/a&gt;'s creator recommends &lt;a href="https://github.com/final-form/react-final-form"&gt;react final form&lt;/a&gt; nowadays, if you have no need to keep form data in &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt;. It's worth mentioning that both libraries were created by the same person.&lt;/p&gt;

&lt;p&gt;With that I intend to exemplify that being adamant about a technical or methodological choice might cause you trouble anyway. Oh, I won't have such problems because I opted for the &lt;a href="https://reactjs.org/docs/context.html"&gt;context API&lt;/a&gt; instead of &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt;, you might say. But relying on the &lt;a href="https://reactjs.org/docs/context.html"&gt;context API&lt;/a&gt; has its drawbacks. Most noticeably, the fact that any change to the value of a given context provider will trigger a re-render of its underlying component tree, even if the change affects something your component isn't even using. Of course you can have your &lt;a href="https://reactjs.org/docs/react-api.html#reactmemo"&gt;React.memo&lt;/a&gt;, &lt;a href="https://reactjs.org/docs/react-api.html#reactpurecomponent"&gt;PureComponent&lt;/a&gt; or &lt;a href="https://reactjs.org/docs/react-component.html#shouldcomponentupdate"&gt;shouldComponentUpdate&lt;/a&gt; life-cycle method in place, but that wouldn't be an issue with &lt;a href="https://react-redux.js.org/"&gt;react redux&lt;/a&gt;. With &lt;a href="https://react-redux.js.org/"&gt;react redux&lt;/a&gt;, assuming default behaviour, a re-render will only happen when the &lt;a href="https://react-redux.js.org/using-react-redux/connect-mapstate#return-values-determine-if-your-component-re-renders"&gt;shallow comparison of the fields of the object returned by mapStateToProps&lt;/a&gt; fails or when the &lt;a href="https://react-redux.js.org/7.1/api/hooks#equality-comparisons-and-updates"&gt;strict equality check on the value returned by the useSelector hook&lt;/a&gt; fails. So, as you can see, &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; is better optimized when it comes to avoiding re-renders, which is paramount when you are connecting components at many levels of the component tree, as you should.&lt;/p&gt;
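&lt;p&gt;As a rough illustration, a shallow comparison along those lines can be sketched in a few lines of plain JavaScript (a simplified sketch, not react redux's actual implementation; the state objects are made up):&lt;/p&gt;

```javascript
// Sketch of a shallow equality check, similar in spirit to what
// react redux applies to the object returned by mapStateToProps.
function shallowEqual(a, b) {
  if (Object.is(a, b)) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  // every field must be reference-equal (or an identical primitive)
  return keysA.every((key) => Object.is(a[key], b[key]));
}

const prev = { name: 'Ada', todos: ['buy milk'] };
const next = { name: 'Ada', todos: ['buy milk'] };

console.log(shallowEqual(prev, prev)); // true: same references, no re-render
console.log(shallowEqual(prev, next)); // false: fresh todos array, re-render
```

&lt;p&gt;Returning fresh object or array references on every call is thus a common source of unnecessary re-renders, regardless of the state management choice.&lt;/p&gt;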

&lt;p&gt;For those reasons, I'd recommend the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; for interactions with the backend, which are tied to pages, not shareable components. Most GETs, PUTs, PATCHes and DELETEs should apply.&lt;/li&gt;
&lt;li&gt;Use the &lt;a href="https://reactjs.org/docs/context.html"&gt;context API&lt;/a&gt; for global level configuration, such as styles, internationalization, authentication info, user preferences and what not.&lt;/li&gt;
&lt;li&gt;Use the &lt;a href="https://reactjs.org/docs/state-and-lifecycle.html"&gt;component state&lt;/a&gt; for the kind of components that are shared across distinct pages (e.g. form handling, dialogs), even when those require backend interactions (e.g. autocomplete selects). Usually if you can't justify why you need &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt;, that's what you need.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What's your take on it? Do you agree? Do you have your own set of guidelines? Please follow up in the comments below.&lt;/p&gt;

</description>
      <category>redux</category>
      <category>frontend</category>
    </item>
    <item>
      <title>Shaping the Redux state</title>
      <dc:creator>Lucas Lira Gomes</dc:creator>
      <pubDate>Sat, 04 Apr 2020 14:46:46 +0000</pubDate>
      <link>https://dev.to/x8lucas8x/shaping-the-redux-state-480l</link>
      <guid>https://dev.to/x8lucas8x/shaping-the-redux-state-480l</guid>
      <description>&lt;p&gt;I've being using &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; since early 2016 and no doubt I learned a lot through the process. Transitioning from &lt;a href="https://github.com/reduxjs/redux-thunk"&gt;thunk&lt;/a&gt; to &lt;a href="https://github.com/redux-saga/redux-saga"&gt;redux saga&lt;/a&gt; for easier testing and greater flexibility, adopting &lt;a href="https://github.com/reduxjs/reselect"&gt;reselect&lt;/a&gt; to prevent costly re-renders, using&lt;a href="https://github.com/immerjs/immer"&gt;immer&lt;/a&gt; to tame our reducers when plain destructuring and &lt;a href="https://github.com/ramda/ramda"&gt;ramda&lt;/a&gt; revealed their shortcomings, including &lt;a href="https://github.com/paularmstrong/normalizr"&gt;normalizr&lt;/a&gt; to facilitate data normalization across reducers sharing action types, and even materialising past learnings through my own &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; abstraction layer (aka &lt;a href="https://github.com/kayak/redux-data-model"&gt;redux data models&lt;/a&gt;). More on &lt;a href="https://github.com/kayak/redux-data-model"&gt;redux data models&lt;/a&gt; on another post.&lt;/p&gt;

&lt;p&gt;One thing, though, required a particularly iterative process: how our team has been shaping reducers' state. As you might know, there is no convention for that. So one day you might find yourself with a requirement for a user list page. After creating an endpoint for retrieving the &lt;em&gt;list&lt;/em&gt; data, you might think you could simply represent all that with the following reducer state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  list: [
    {...},
    ...
  ],
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Time passes by and a user details page is now in need. You could go ahead and filter the &lt;em&gt;list&lt;/em&gt; array all the time for a given user entry, assuming a non-paginated endpoint, but it turns out the details page requires a few extra fields that you don't need for the list page. Perhaps those extra fields are expensive to generate for multiple objects, so you'd rather not include them in the list endpoint, or you would rather keep the list endpoint as lean as possible. So now, although you could be altering the &lt;em&gt;list&lt;/em&gt; array, you are more likely to just add a &lt;em&gt;data&lt;/em&gt; object, as in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  list: [
    {...},
    ...
  ],
  data: {
    ...
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;A &lt;em&gt;data&lt;/em&gt; object that is likely to be used in the user details page only, as if &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; were some sort of database. Been there, done that, and I'm not particularly proud of it.&lt;/p&gt;

&lt;p&gt;One of the many problems of just using &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; as a database, and therefore tending to duplicate data in multiple places when convenient (after all, &lt;a href="https://redux.js.org/"&gt;redux&lt;/a&gt; is just a means to an end), is clearly the fact that you no longer have a single source of truth. In the event you create yet another user-related page, should you read from &lt;em&gt;list&lt;/em&gt; or &lt;em&gt;data&lt;/em&gt; when looking for the most up-to-date version of something? Oh, but I'm ensuring that my reducers update both when interacting with the backend, you might say. Good, but that's definitely not foolproof. One of those other reducers of yours might be lacking the proper logic to keep them in sync. And now you have outdated data out there.&lt;/p&gt;

&lt;p&gt;Let's say you re-fetch the data while navigating through different pages. Perhaps this addresses the outdated-data issue, at the expense of extra burden on the backend, but not everything is about the backend state. What about all the intermittent state your UI might need, such as draft changes on top of the backend data or auto-save snapshots? Those might need to be kept, even after you navigate to a different page. What about re-fetching data only when the backend data has changed? That still doesn't address the fact that, given duplication, not all state mirrors the backend's data.&lt;/p&gt;

&lt;p&gt;So it seems we might be better off avoiding data duplication by referencing ids in the &lt;em&gt;list&lt;/em&gt; array, instead of simply storing whatever payload we get from the backend. That way a &lt;em&gt;data&lt;/em&gt; entry is our single representation of any given user. It goes without saying that memoization is assumed when pursuing this path, which you should be employing nevertheless. Back to the reducer, one could rightly assume a shape such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  list: ['oneId', 'anotherId'],
  data: {
    oneId: {
      ...
    },
    anotherId: {
      ...
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That's already an improvement, but there's more to be done. Naming things properly, for one. The &lt;em&gt;list&lt;/em&gt; and &lt;em&gt;data&lt;/em&gt; keys are not very descriptive. If they were named after the domain they represent, that would be much better. Say:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  // userIdOrder, userIdOrdering, ..., are all good alternatives!
  userIds: ['oneId', 'anotherId'],
  userById: {
    oneId: {
      ...
    },
    anotherId: {
      ...
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Small changes, but ones that make it easier to reason about the state's content. I particularly appreciate the &lt;em&gt;someThingByOtherThing&lt;/em&gt; pattern, especially because you might need to reference users by things other than ids. I've had that need once, namely to reference an entry by its alias. So it made sense to maintain a &lt;em&gt;somethingByAliases&lt;/em&gt;, which naturally would just reference an id in &lt;em&gt;somethingById&lt;/em&gt;, so as not to duplicate data. As in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  userIds: ['oneId', 'anotherId'],
  userByAliases: {
    aliasForOneId: 'oneId',
    aliasForAnotherId: 'anotherId',
  },
  userById: {
    oneId: {
      ...
    },
    anotherId: {
      ...
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
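&lt;p&gt;For instance, hypothetical selectors over that shape (the ids, aliases and fields below are made up) would always resolve entries through &lt;em&gt;userById&lt;/em&gt;, our single source of truth:&lt;/p&gt;

```javascript
// A sketch of selectors over the normalized shape above.
// All reads resolve through userById, so there is a single source of truth.
const state = {
  userIds: ['oneId', 'anotherId'],
  userByAliases: {
    aliasForOneId: 'oneId',
    aliasForAnotherId: 'anotherId',
  },
  userById: {
    oneId: { id: 'oneId', name: 'One' },
    anotherId: { id: 'anotherId', name: 'Another' },
  },
};

const selectUserList = (s) => s.userIds.map((id) => s.userById[id]);
const selectUserByAlias = (s, alias) => s.userById[s.userByAliases[alias]];

console.log(selectUserList(state).map((u) => u.name)); // [ 'One', 'Another' ]
console.log(selectUserByAlias(state, 'aliasForOneId').name); // One
```

&lt;p&gt;In practice you would likely memoize selectors like these, e.g. with &lt;a href="https://github.com/reduxjs/reselect"&gt;reselect&lt;/a&gt;, so that the derived list is only rebuilt when &lt;em&gt;userIds&lt;/em&gt; or &lt;em&gt;userById&lt;/em&gt; actually change.&lt;/p&gt;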



&lt;p&gt;Naming changes aside, another thing to take into account is where to place metadata, perhaps without a counterpart in the backend. Think boolean flags, pagination parameters, and what not. More often than not, the first one that comes to mind is a loading flag or a current state enum. Although those might be part of the original &lt;em&gt;userById&lt;/em&gt; content, I'd rather keep them separate. If for nothing else, for the fact that changes to them shouldn't invalidate components relying on the &lt;em&gt;userById&lt;/em&gt; content, especially when memoization is in place.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  userIds: ['oneId', 'anotherId'],
  userByAliases: {
    'aliasForOneId': 'oneId',
    'aliasForAnotherId': 'anotherId',
  },
  loadingById: {
    ...
  },
  userById: {
    oneId: {
      ...
    },
    anotherId: {
      ...
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
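&lt;p&gt;A quick sketch of why that separation pays off (with made-up data): updating &lt;em&gt;loadingById&lt;/em&gt; in a reducer leaves the &lt;em&gt;userById&lt;/em&gt; reference untouched, so memoized selectors over &lt;em&gt;userById&lt;/em&gt; keep returning their cached results:&lt;/p&gt;

```javascript
// Flipping a loading flag without touching userById.
const state = {
  loadingById: { oneId: false },
  userById: { oneId: { id: 'oneId', name: 'One' } },
};

// A reducer-style immutable update of the loading flag only.
const next = {
  ...state,
  loadingById: { ...state.loadingById, oneId: true },
};

console.log(next.userById === state.userById); // true: memoized selectors stay cached
console.log(next.loadingById === state.loadingById); // false: only this slice changed
```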



&lt;p&gt;All simple guidelines I follow, which have proved to be useful and future-proof in my own use cases. Do they make sense to you? Do you have your own tips to share? Please follow up in the comments below.&lt;/p&gt;

</description>
      <category>redux</category>
      <category>frontend</category>
    </item>
    <item>
      <title>On Integration Testing and Microservices</title>
      <dc:creator>Lucas Lira Gomes</dc:creator>
      <pubDate>Sat, 04 Apr 2020 14:26:02 +0000</pubDate>
      <link>https://dev.to/x8lucas8x/on-integration-testing-and-microservices-4pcl</link>
      <guid>https://dev.to/x8lucas8x/on-integration-testing-and-microservices-4pcl</guid>
      <description>&lt;p&gt;Some weeks ago I was asked how to properly do integration testing in a microservices environment. At that time, I was aware that testing basic behaviour with integration tests was not a smart move. Guaranteeing compatibility among services' interfaces, however, was something I could see the value of. The dos and don'ts, in such effort, were not familiar to me, though. Therefore, I decided to dive in and find some answers.&lt;/p&gt;

&lt;p&gt;To begin with, higher-level (e.g. end-to-end, integration) testing lacks several benefits of unit testing, many of which we have come to value as an industry. On the other hand, not all bugs are apparent at a unit level. They could also happen in the wiring between components or even in those off-the-shelf solutions that you employed to speed up your development. Yet you often hear the agile community endorsing unit tests as the backbone of a solid testing strategy. People like &lt;a href="https://twitter.com/mikewcohn"&gt;@mikewcohn&lt;/a&gt;, who established the initial model of the &lt;a href="http://martinfowler.com/bliki/TestPyramid.html"&gt;testing pyramid&lt;/a&gt;, were key in developing the notion that the proportion of a particular kind of test in your test suite should be inversely proportional to the breadth of the tested scope. A principle that helps a solid test suite be built in the most cost-effective way. And beware, since going in the opposite direction may result in an instance of the &lt;a href="http://watirmelon.com/2012/01/31/introducing-the-software-testing-ice-cream-cone/"&gt;ice cream cone&lt;/a&gt; anti-pattern. So, even though you could certainly find several definitions of the properties of good unit tests out there, they roughly translate to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast&lt;/li&gt;
&lt;li&gt;Automated&lt;/li&gt;
&lt;li&gt;Isolated&lt;/li&gt;
&lt;li&gt;Informative&lt;/li&gt;
&lt;li&gt;Idempotent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first one is easy to explain, as long-running tests are the easiest way to make programmers develop the bad habit of not running tests frequently. And, if that happens, what is the point anyway? Automated, since the intention is to facilitate adoption, not to make people do repetitive work. Isolated, because they should not overlap; otherwise you would have more places to look when something fails. Informative, because the context of the failure should be explicit. Yep, you probably do not want people to analyse your test's source so that they can understand what went wrong. Finally, idempotency implies that they should behave the same, no matter in which order or how many times they are run. Believe me, tests that randomly fail are a recipe for madness. They are worse than no tests at all, as they undermine developers' trust in their test suite.&lt;/p&gt;

&lt;p&gt;So, how do those properties apply to higher-level (e.g. end-to-end, integration) testing? First, they are not as fast as unit tests, especially if you are testing two &lt;a href="http://martinfowler.com/articles/microservices.html"&gt;microservices&lt;/a&gt;, a process that involves exchanging packets over the network (latency sucks :/). They are not idempotent either, as there are many ways they could go wrong. Units that rely on global state (e.g. the singleton pattern) can also suffer from that unpredictability, but a proper use of dependency injection can fix the problem. As for lost packets and network partitions, good luck with those. Informative? Well, you only know that something in between a set of components/services is not working well. Isolated? Nope; even though you can be cautious enough to avoid chatty components/services, one bug in one of them and you would suddenly find yourself in a situation in which every code path along the way fails. But hey, at least they can be automated.&lt;/p&gt;

&lt;p&gt;Did you find my point of view a bit extreme? Then try &lt;a href="https://twitter.com/jbrains"&gt;@jbrains&lt;/a&gt;' amazing talk titled &lt;a href="https://vimeo.com/80533536"&gt;Integrated Tests Are a Scam&lt;/a&gt;. Seriously, if you want to laugh a bit at integrated tests' infamous positive feedback loop of negative emotions, watch it. Additionally, a written equivalent is available as &lt;a href="http://www.jbrains.ca/permalink/integrated-tests-are-a-scam-part-1"&gt;Integrated Tests are a Scam: Part 1&lt;/a&gt;. In the aforementioned talk, &lt;a href="https://twitter.com/jbrains"&gt;@jbrains&lt;/a&gt; shows how the promises of high-level testing lure developers into writing ever more tests of the same kind that, in the end, provide very little coverage, due to the combinatorial explosion of tests required by the continuously increasing number of code paths. Instead, he advocates that we should spend our time on worthwhile tests. By worthwhile he means tests that help assess the quality of our architecture, allowing us to improve its design in the long run. That position is understandable, since he is a test-driven development (TDD) practitioner. After all, TDD is not about testing; it is about design. As for the tests, they are solely a pleasant by-product.&lt;/p&gt;

&lt;p&gt;In &lt;a href="http://www.jbrains.ca/permalink/part-2-some-hidden-costs-of-integration-tests"&gt;Part 2: Some Hidden Costs of Integration Tests&lt;/a&gt;, he also discusses an important side effect of slow tests: they destroy developers' productivity. Waiting for a few seconds is OK, but it is not rare to find test suites that take ten minutes or more. Unfortunately, one cannot simply return to one's peak performance right after such a long interruption. In &lt;a href="http://www.jbrains.ca/permalink/part-3-the-risks-associated-with-lengthy-tests"&gt;Part 3: The risks associated with lengthy tests&lt;/a&gt;, the focus changes to the insidious consequences of frequent false alerts, given the lack of isolation when things fail. And with less trust in the test suite, a fear of change starts to evolve among developers. Individuals that tend to justify their behaviour by mentioning an old engineering saying: "If it works, why change it?". A rationale that will ultimately lead to architectural stagnation and to high interest rates in the form of &lt;a href="http://martinfowler.com/bliki/TechnicalDebt.html"&gt;technical debt&lt;/a&gt;. Quite the contrary of what you expected when you bought into the practice of &lt;a href="http://martinfowler.com/bliki/SelfTestingCode.html"&gt;self testing code&lt;/a&gt;, right?&lt;/p&gt;

&lt;p&gt;But do not get him wrong; his disregard for integration testing limits itself to cases in which they are used for assessing basic correctness. A role that is better suited to unit tests in the first place. In &lt;a href="http://www.jbrains.ca/permalink/using-integration-tests-mindfully-a-case-study"&gt;Using integration tests mindfully: a case study&lt;/a&gt;, for instance, he does see value in employing integration tests to identify system-level issues such as a broken database schema, a mistaken cache integration, and other, more complex problems. That is, using integration tests to check the presence of an expected feature is perfectly fine.&lt;/p&gt;

&lt;p&gt;Still, in &lt;a href="https://vimeo.com/80533536"&gt;Integrated Tests Are a Scam&lt;/a&gt;, &lt;a href="https://twitter.com/jbrains"&gt;@jbrains&lt;/a&gt; proposes an alternative for testing the interaction between components without resorting to integration testing. He suggests combining collaboration and contract tests. Collaboration tests are a well-known practice, usually built on &lt;a href="http://martinfowler.com/bliki/TestDouble.html"&gt;test doubles&lt;/a&gt;. More specifically, stubs are the kind of doubles we are interested in. Stubs mimic others' interfaces, but instead of doing real work, they return pre-computed results. Behaviour that is really useful when non-deterministic or slow operations (e.g. IO) are at stake, as we can employ fast and predictable unit-like tests to achieve a similar end. As for contract tests, they check the format of a component/service response, not its data. So, in the case of &lt;a href="http://martinfowler.com/articles/microservices.html"&gt;microservices&lt;/a&gt;, you would be testing whether the outcome of a particular call has the fields you expected and, if so, whether they comply with your use cases. Similarly, &lt;a href="https://twitter.com/martinfowler"&gt;@martinfowler&lt;/a&gt; also sees the combination of stubs and contract tests as a good way to tackle the slowness and unreliability of integration tests, as stated in &lt;a href="http://martinfowler.com/bliki/IntegrationContractTest.html"&gt;integration contract test&lt;/a&gt;.&lt;/p&gt;
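&lt;p&gt;In essence, a contract test can be as simple as checking a stubbed response against the fields and types a consumer relies on. A minimal JavaScript sketch (the contract, fields and response below are all made up):&lt;/p&gt;

```javascript
// The consumer states which fields it relies on and their expected types.
const userContract = {
  id: 'string',
  name: 'string',
};

// Check only the format of a response, never its concrete data.
function satisfiesContract(response, contract) {
  return Object.entries(contract).every(
    ([field, type]) => typeof response[field] === type
  );
}

// A stub returns pre-computed results; extra fields are tolerated.
const stubbedResponse = { id: '42', name: 'Ada', unusedField: true };

console.log(satisfiesContract(stubbedResponse, userContract)); // true
console.log(satisfiesContract({ id: '42' }, userContract)); // false: name missing
```

&lt;p&gt;Real frameworks obviously apply far richer matching rules, but the gist is the same: assert on the shape, not on the concrete data.&lt;/p&gt;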

&lt;p&gt;One challenge, though, is that testing against a double does not guarantee that the external component/service is being accurately represented. And even if so, future changes would require the double to be updated accordingly. One alternative for streamlining the updates of stubs would be what &lt;a href="https://twitter.com/martinfowler"&gt;@martinfowler&lt;/a&gt; calls &lt;a href="http://martinfowler.com/bliki/SelfInitializingFake.html"&gt;self initialising fakes&lt;/a&gt;. Contract testing suffers from the same synchronisation burden; however, &lt;a href="http://martinfowler.com/bliki/SelfInitializingFake.html"&gt;self initialising fakes&lt;/a&gt; cannot help contract tests in the same manner. Additionally, there is also the possibility of contracts and stubs getting out of sync. A problem that could be mitigated by a shared metadata file or data structure that specifies the available calls and what should be received in response, so that you do not have to concern yourself with it.&lt;/p&gt;

&lt;p&gt;To reduce the odds of getting out of sync, and therefore breaking your test cases, or even worse, being misled by passing tests that should have failed, it is recommended to adopt a consumer-driven contract approach. In &lt;a href="http://martinfowler.com/articles/consumerDrivenContracts.html"&gt;Consumer-Driven Contracts: A Service Evolution Pattern&lt;/a&gt;, the concept is explained in a relatively implementation-agnostic fashion. In a nutshell, consumer-driven contracts are a means of applying "just enough" validation, as proposed by &lt;a href="https://en.wikipedia.org/wiki/Robustness_principle"&gt;John Postel's Law&lt;/a&gt;, by putting on the clients the responsibility of specifying what the service provider must comply with. The service provider must then check the union of its consumers' expectations, in order to verify that there were no regressions. Additionally, that approach has some notable design advantages. First, it facilitates evolving your interface, as you wouldn't have to rely on schema extension points for adding new fields to your messages. Second, since what is consumed is explicitly stated, deprecating a field that nobody is using is way easier.&lt;/p&gt;
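&lt;p&gt;As a small hypothetical sketch of that verification step, a provider could check its response format against the union of its consumers' contracts, so that dropping a field only fails when some consumer still depends on it (all names below are made up):&lt;/p&gt;

```javascript
// Each consumer declares the fields it relies on and their expected types.
const consumerContracts = {
  webApp: { id: 'string', name: 'string' },
  mobileApp: { id: 'string', avatarUrl: 'string' },
};

// The provider must satisfy the union of all consumers' expectations.
function requiredFields(contracts) {
  return Object.values(contracts).reduce(
    (union, contract) => Object.assign(union, contract),
    {}
  );
}

function providerSatisfies(response, contracts) {
  return Object.entries(requiredFields(contracts)).every(
    ([field, type]) => typeof response[field] === type
  );
}

const response = { id: '1', name: 'Ada', avatarUrl: 'http://example.com/a.png' };

console.log(providerSatisfies(response, consumerContracts)); // true
// Dropping a field some consumer still uses fails the verification.
console.log(providerSatisfies({ id: '1', name: 'Ada' }, consumerContracts)); // false
```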

&lt;p&gt;Fortunately, the usefulness of mixing stubs and consumer-driven contract tests has led to the development of frameworks such as &lt;a href="https://github.com/realestate-com-au/pact"&gt;Pact&lt;/a&gt; and &lt;a href="https://thoughtworks.github.io/pacto/"&gt;Pacto&lt;/a&gt;, both written in Ruby. More importantly, they help keep your stubs and contract tests in sync. Personally, I think that frameworks like these are a really promising way of guaranteeing compatibility among services' interfaces, while maintaining many of the unit testing properties. So, next time you find yourself considering testing some &lt;a href="http://martinfowler.com/articles/microservices.html"&gt;microservices&lt;/a&gt; with integration tests, think twice. If you just want to check compatibility among services' interfaces, invest in stubs and consumer-driven contract testing instead.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>testing</category>
    </item>
    <item>
      <title>Pythonic interfaces in Go Generators</title>
      <dc:creator>Lucas Lira Gomes</dc:creator>
      <pubDate>Sat, 04 Apr 2020 14:21:12 +0000</pubDate>
      <link>https://dev.to/x8lucas8x/pythonic-interfaces-in-go-generators-6eo</link>
      <guid>https://dev.to/x8lucas8x/pythonic-interfaces-in-go-generators-6eo</guid>
      <description>&lt;p&gt;One of the amazing things about Python is that once you embody the so called &lt;a href="https://www.python.org/dev/peps/pep-0020/"&gt;Zen of Python&lt;/a&gt;, no matter which language you are using, the philosophy will prevail. Or, in other words, even if you are learning the Go language, the extent of the pythonistas' ethos will probably find its way in. So, in order to contextualise a bit, this post will dwell on the implementation of a lazy list evaluation mechanism, equivalent to Python's generators, in the Go language.&lt;/p&gt;

&lt;p&gt;Among &lt;a href="https://www.python.org/"&gt;Python&lt;/a&gt;'s many superpowers, generators are clearly a major one. They were brought from functional languages, like &lt;a href="https://www.haskell.org/"&gt;Haskell&lt;/a&gt;, which have demonstrated how much better laziness is in terms of speed and memory management, especially when collections' sizes can grow indefinitely.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://www.python.org/"&gt;Python&lt;/a&gt;, besides list comprehensions, there are also the lesser-known dictionary and generator comprehensions. The latter is what we are interested in within the scope of this post. So, consider we want a program that receives a user input and then prints all the multiples of 2 up to a certain limit, unknown a priori. For that, we could write:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from itertools import (count, takewhile)

limit = int(input())
multiples_of_2 = takewhile(lambda x: x &amp;lt;= limit, (x*2 for x in count()))

for x in multiples_of_2:
    print(x)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For the sake of simplicity, note that no error checking concerning the user input was done in the previous example. The same rule will apply to the next examples. That aside, one could use the yield statement instead of employing a generator comprehension:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from itertools import count

def multiples_of_2(limit):
    for x in (x*2 for x in count()):
        if x &amp;gt; limit:
            return
        yield x

limit = int(input())

for x in multiples_of_2(limit):
    print(x)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For your information, these examples were meant for &lt;a href="https://www.python.org/"&gt;Python&lt;/a&gt; 3. As some of you may know, the input() function in &lt;a href="https://www.python.org/"&gt;Python&lt;/a&gt; 2 also evals the input string, a behaviour that can lead to serious security flaws. Therefore, if you still use &lt;a href="https://www.python.org/"&gt;Python&lt;/a&gt; 2, favour the raw_input() function instead.&lt;/p&gt;

&lt;p&gt;Even though we do not have the same constructs in Go, we can still employ &lt;a href="https://golang.org/"&gt;Go&lt;/a&gt;'s channels to achieve a very similar effect. For that, two channels could be used: one for passing the data per se and another to signal that the upper bound was reached, therefore closing the data channel. That is needed to emulate &lt;a href="https://www.python.org/"&gt;Python&lt;/a&gt;'s StopIteration exception, which signals that the generator is now empty. So, without further ado:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import "fmt"

func multiples_of_2(c chan int, quit chan struct{}, limit int) {
    for x := 0; true; x += 2 {
        if x &amp;gt; limit {
            quit &amp;lt;- struct{}{}
            break
        }

        c &amp;lt;- x
    }
}

func main() {
    var limit int
    fmt.Scan(&amp;amp;limit)

    c := make(chan int)
    quit := make(chan struct{})

    defer close(c)
    defer close(quit)

    go multiples_of_2(c, quit, limit)
    for {
        select {
        case x := &amp;lt;-c:
            fmt.Println(x)
        case &amp;lt;-quit:
            return
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that the quit channel uses an empty struct. The reason is twofold. First, empty structs do not occupy memory space, an amount that could be substantial at scale. Second, as the &lt;a href="https://www.python.org/dev/peps/pep-0020/"&gt;Zen of Python&lt;/a&gt; states: "explicit is better than implicit". By passing an empty struct we make it clear that the whole point of that particular channel is signalling only, thus sparing users from wondering whether there is a difference between true and false, as they might if we had declared it as a bool channel, for instance.&lt;/p&gt;

&lt;p&gt;Besides that, multiples_of_2's interface exposes a lot about the business logic of our custom generator. Moreover, the whole process of initialising/closing a channel is quite repetitive. And as the &lt;a href="https://en.wikipedia.org/wiki/Don't_repeat_yourself"&gt;DRY&lt;/a&gt; principle preaches, repetition is the root of all evil. Not to mention the fact that we could solve this problem with a single channel instead of two. But fear not: the required channel can be encapsulated inside multiples_of_2, leading to an interface that is very similar to the pythonic one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import "fmt"

func multiples_of_2(limit int) chan int {
    c := make(chan int)

    go func() {
        defer close(c)
        for x := 0; x &amp;lt;= limit; x += 2 {
            c &amp;lt;- x
        }
    }()

    return c
}

func main() {
    var limit int
    fmt.Scan(&amp;amp;limit)

    for x := range multiples_of_2(limit) {
        fmt.Println(x)
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So, that is all. At the expense of some extra work in our APIs, we can provide very pythonic interfaces for lazily evaluated lists in &lt;a href="https://golang.org/"&gt;Go&lt;/a&gt;, now without the need to worry about channels or concurrency at all. As a matter of fact, synchronous APIs, like the one in the last example, should be favoured in &lt;a href="https://golang.org/"&gt;Go&lt;/a&gt;, given that using a synchronous API in an asynchronous manner is easy in &lt;a href="https://golang.org/"&gt;Go&lt;/a&gt;, while the contrary is not.&lt;/p&gt;

</description>
      <category>python</category>
      <category>go</category>
    </item>
    <item>
      <title>Connected, but still not interoperable</title>
      <dc:creator>Lucas Lira Gomes</dc:creator>
      <pubDate>Sat, 04 Apr 2020 14:20:17 +0000</pubDate>
      <link>https://dev.to/x8lucas8x/connected-but-still-not-interoperable-2dd0</link>
      <guid>https://dev.to/x8lucas8x/connected-but-still-not-interoperable-2dd0</guid>
      <description>&lt;p&gt;Cisco call it the Internet of Everything (IoE), most players would rather name it the Internet of Things (IoT) and, although less common, you probably heard the term the Industrial Internet too. And if you ever read any post or whitepaper about one of those buzzwords, especially those written by big players in the market (e.g. Cisco, IBM), you probably saw a trend in depicting scenarios where your things (e.g. car, appliances, lighting, HVAC) are interconnected and, somehow, interacting among themselves independently of human interference. Great vision, but how far are we from it?&lt;/p&gt;

&lt;p&gt;Right now IoT is composed of a jungle of different solutions. You can probably outline those that seem more promising. In the networking spectrum there are whole-stack solutions, which try to provide you not only data link layer features but also routing, addressing and, in some cases, even encryption; those are mainly Zigbee, Z-Wave, Bluetooth and WirelessHart. On the other hand you also have Wi-Fi, raw IEEE 802.15.4, GPRS and all sorts of radios operating in sub-gigahertz frequency ranges. Each has a use case of its own. Z-Wave, for instance, is more present in home automation, Bluetooth V4 is usually the right one for wearables, WirelessHart is an adaptation of the Highway Addressable Remote Transducer (HART) protocol for industrial wireless networks and Zigbee is the wild card among them (i.e. thanks to DIGI's amazing AT programming interface).&lt;/p&gt;

&lt;p&gt;Along with that there are also efforts to bring the IP protocol to constrained devices, like those that use IEEE 802.15.4 and its variations. Well, the advantages are many. First, there is the seamless exchange of information between devices utilizing any IP-enabled MAC/PHY (e.g. Wi-Fi, Ethernet). Second, we cannot forget the battle-tested tooling all those years of IP predominance have provided us (e.g. ping, traceroute, netcat, wireshark, tcpdump). The &lt;a href="http://www.ipso-alliance.org/"&gt;IPSO Alliance&lt;/a&gt; is one of the major advocates in this matter. They have several whitepapers publicising standards like &lt;a href="http://www.ipso-alliance.org/downloads/6LoWPAN"&gt;6LoWPAN&lt;/a&gt;, an IPv6-compatible addressing scheme with better header compression, and &lt;a href="http://www.ipso-alliance.org/downloads/RPL"&gt;RPL&lt;/a&gt;, a mesh-enabled routing protocol for low-power and lossy networks. The &lt;a href="http://www.zigbee.org/"&gt;Zigbee Alliance&lt;/a&gt; also realised the advantages of IPv6-based wireless mesh networking and created ZigBee IP, an open standard built on top of IEEE 802.15.4 that provides end-to-end IPv6 networking.&lt;/p&gt;

&lt;p&gt;On top of IP, at the application level, &lt;a href="http://mqtt.org/news"&gt;MQTT&lt;/a&gt; and &lt;a href="https://tools.ietf.org/html/rfc7252"&gt;CoAP&lt;/a&gt; shine. The first is a lightweight PUB-SUB protocol based on TCP. Now you may wonder if &lt;a href="http://mqtt.org/news"&gt;MQTT&lt;/a&gt; is appropriate for wireless sensor networks. In fact, anything TCP-based is not by design, but in such cases you can use &lt;a href="http://mqtt.org/news"&gt;MQTT-SN&lt;/a&gt;, a UDP-based variation of &lt;a href="http://mqtt.org/news"&gt;MQTT&lt;/a&gt; that is especially tailored for low-cost and low-power sensor devices that run over bandwidth-constrained wireless networks. &lt;a href="https://tools.ietf.org/html/rfc7252"&gt;CoAP&lt;/a&gt;, in turn, is a lightweight HTTP-compatible protocol, based on UDP, with support for multicasting and service discovery. Both of them are quite popular and you can probably find an implementation for your favourite programming language or IoT platform (e.g. &lt;a href="http://contiki-os.org/"&gt;Contiki OS&lt;/a&gt;, &lt;a href="https://tools.ietf.org/html/rfc7390"&gt;Arduino&lt;/a&gt;). Unfortunately, given a non-IP network, developing a gateway to map your custom protocol onto the interface your backend/server uses, and vice versa, is a necessary burden.&lt;/p&gt;

&lt;p&gt;So, there is certainly no doubt there was major progress on the connectivity front, but something is still absent from this equation. Connectivity is certainly necessary, but IoT is as much about connectivity as the internet is about the web. That vision those big players describe, of smart Xs (X being anything) autonomously interacting among themselves, is heavily dependent on those devices being able to discover each other and access each other's functionalities without being explicitly pre-programmed to do so. You probably saw companies like &lt;a href="http://www.smartthings.com/"&gt;Smart Things&lt;/a&gt;, &lt;a href="https://ninjablocks.com/"&gt;Ninja Blocks&lt;/a&gt; or the former &lt;a href="http://revolv.com/"&gt;Revolv&lt;/a&gt;, bought by &lt;a href="https://nest.com/"&gt;Nest&lt;/a&gt;, stating that their platforms/hubs support different vendors or "play well with others", which is great but has its own limitations.&lt;/p&gt;

&lt;p&gt;Up to now, in platforms like the aforementioned, integration of new products occurs in an incremental fashion. So if you want to support Philips' &lt;a href="http://www2.meethue.com/pt-br/"&gt;Hue&lt;/a&gt; or &lt;a href="https://www.lifx.com/"&gt;LIFX&lt;/a&gt; lamps, you will have to read the documentation of their REST APIs. That seems great, given REST APIs are easy to integrate, but the crude reality of IoT is way less welcoming. In most cases, you will find yourself with vertically integrated systems that do not permit easy third-party integration. And even if they did, the manual process of integrating with new devices and/or platforms has two problems.&lt;/p&gt;

&lt;p&gt;First, a great deal of products do not provide public, documented APIs for third parties. The reason is that, currently, most vendors tend to sell a whole solution, from hardware to user interface, therefore not caring for those who want to use their products differently from what they envisaged (i.e. makers suffer :/). Consider &lt;a href="https://www.plugwise.com/"&gt;Plugwise&lt;/a&gt;, for instance: they have one of the most complete energy management solutions out there, but no consistent effort to provide a public API or SDK. You may even find unofficial libraries, made by someone who probably had to sniff &lt;a href="https://www.plugwise.com/"&gt;Plugwise&lt;/a&gt;'s devices in order to reverse engineer their proprietary protocol. But using those, you would have no guarantee of future support. Besides, it is common for unofficial libraries not to be feature complete, so good luck if you want to use the most recent capabilities.&lt;/p&gt;

&lt;p&gt;Second, manual integration does not scale. Vendors may try to pinpoint the most popular products to focus their integration efforts, or form partnerships, but that degree of interoperability will come at the expense of tight vendor integration with specific partners.&lt;/p&gt;

&lt;p&gt;To solve those problems, devices need to discover and access each other's functionalities, though not necessarily directly, as M2M scenarios portray. For that, two things are required. First, a data model that can explicitly state what each piece of data is about, so that you do not have to read a manual to realise that a sensor is measuring temperature in Celsius. Ontologies are usually the answer in such cases, but &lt;a href="http://www.w3.org/2001/sw/wiki/OWL"&gt;OWL&lt;/a&gt; and &lt;a href="http://www.w3.org/2001/sw/wiki/RDF"&gt;RDF&lt;/a&gt; are not appropriate given the bandwidth limitations. The &lt;a href="http://www.ipso-alliance.org/"&gt;IPSO Alliance&lt;/a&gt; tried to fill this gap with its &lt;a href="http://www.ipso-alliance.org/smart-object-guidelines"&gt;Smart Objects&lt;/a&gt; specification, which describes a reusable data model for IoT. That data model defines a set of data types and structures that can be used by different devices, enabling them to interoperate since the semantics is now in the data itself. Still, despite the &lt;a href="http://www.ipso-alliance.org/smart-object-guidelines"&gt;Smart Objects&lt;/a&gt; specification, ontologies have an important role at the user level, as the tooling from semantic web technologies (e.g. &lt;a href="http://www.w3.org/2001/sw/wiki/SPARQL"&gt;SPARQL&lt;/a&gt;, &lt;a href="http://www.w3.org/2001/sw/wiki/OWL"&gt;OWL&lt;/a&gt;, &lt;a href="http://www.w3.org/2001/sw/wiki/RDF"&gt;RDF&lt;/a&gt;) can provide great value for those interested in composing their own IoT solutions by accessing higher-level services.&lt;/p&gt;
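&lt;p&gt;To give a flavour of what such a self-describing data model looks like, here is a hypothetical IPSO-style reading sketched in Python. The object/resource IDs carry the semantics, so a consumer knows it is a temperature in Celsius without reading a vendor manual; treat the exact numeric IDs below as an assumption on my part:&lt;/p&gt;

```python
# Hypothetical IPSO-style temperature reading: the numeric IDs, not a
# vendor manual, tell the consumer what the data means.
reading = {
    "object_id": 3303,   # assumed IPSO Temperature object ID
    "resources": {
        5700: 23.5,      # assumed "Sensor Value" resource
        5701: "Cel",     # assumed "Sensor Units" resource (Celsius)
    },
}

print(reading["resources"][5700])
```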

&lt;p&gt;Although being able to determine the content of the messages sent by sensors is important, no equivalent exists in terms of actuation. And that is key to a large adoption of IoT, especially in the end-consumer market. Right now, businesses can get value by tracking trends and analysing data, but for end consumers automation is the real killer application. And by automation I mean not only actuation, which translates itself into an event in the physical world, but also remote configuration of these devices. All that, provided without devices being pre-programmed to do so, would be huge. But, unfortunately, no lightweight UPnP exists for the IoT yet.&lt;/p&gt;

</description>
      <category>iot</category>
    </item>
    <item>
      <title>Zeroless</title>
      <dc:creator>Lucas Lira Gomes</dc:creator>
      <pubDate>Sat, 04 Apr 2020 14:18:15 +0000</pubDate>
      <link>https://dev.to/x8lucas8x/zeroless-4o4</link>
      <guid>https://dev.to/x8lucas8x/zeroless-4o4</guid>
      <description>&lt;p&gt;I have been an enthusiast of ZeroMQ for quite some time. If there was an opportunity that required some sockets on steroids, I would not think twice. Ah, how those three messaging pattern were useful (i.e. Request/Reply, Push/Pull, Publish/Subscribe). They had the amazing trait of being able to drastically streamline the development of distributed systems. And its portability, with wrappers for more than 30 languages, brokerlessness and amazing documentation made it a no brainer for me to favour ZeroMQ over other messaging alternatives.&lt;/p&gt;

&lt;p&gt;Using &lt;a href="http://zeromq.org/"&gt;ZeroMQ&lt;/a&gt; in Python with &lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt;, however, always made me feel like I was coding in C/C++, which I also love, by the way. Unfortunately, that lack of "pythonicity", if I may say, in the &lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt; API just felt wrong to me. So, by the end of last January, I decided to do something about it. That is how &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt; was born.&lt;/p&gt;

&lt;p&gt;My mission was to leverage &lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt; to build a more elegant wrapper for &lt;a href="http://zeromq.org/"&gt;ZeroMQ&lt;/a&gt;. Something more aligned with the Python way of doing things. And, to a certain degree, I have succeeded. However, I have never made a comprehensive effort to publicise &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt; in any way, so, in this post, I hope not only to explain how &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt; differs from &lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt; but also to reach a greater audience that may be as enthusiastic about &lt;a href="http://zeromq.org/"&gt;ZeroMQ&lt;/a&gt; as I am. Therefore, without further ado, here go some of the design decisions I made for &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt;:&lt;/p&gt;

&lt;h5&gt;
  
  
  TCP only for the win
&lt;/h5&gt;

&lt;p&gt;Ok, I know PGM, INPROC and IPC have their use cases. PGM, for instance, provides a Publish/Subscribe-specific transport that scales better than TCP in the Publish/Subscribe use case, as it cuts out the ACK flood publishers get on every new message. There is also some extra reliability that you cannot find in TCP. IPC, on the other hand, is a pattern-agnostic way of providing more efficient inter-process communication than traditional networking, but it is Unix-like only. As for INPROC's particular case, whose efficient applicability is hindered by Python's GIL, I do not see why bother with it.&lt;/p&gt;

&lt;p&gt;Nevertheless, I have a feeling that the vast majority of users, like myself, are quite fine with just TCP. Which is exactly what you need when building truly horizontally scalable networked services, especially in this time of a renewed vision for SOA, with microservices getting a lot of attention. So let us just use TCP and free our minds to think about other matters.&lt;/p&gt;

&lt;h5&gt;
  
  
  No more contexts
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt; applications require users to create a context in order to instantiate sockets. Technically, a context serves as a container for all your sockets, and one of them per process is usually all you need. As a matter of fact, you could have more, but why bother your runtime with extra event loops for your socket stuff when one suffices? Also, if you are using INPROC as transport, you may need to share a context for the communication to happen. But again, if INPROC is not that useful in Python, as aforementioned, do we really need to explicitly manage contexts?&lt;/p&gt;

&lt;p&gt;Not at all, so that is why in &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt; you just have to manage Clients, sockets that connect, and Servers, sockets that bind, without concerning yourself with contexts ;). For instance, in order to instantiate a client you would:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client = Client()

# You could use connect_local(port=12345) as well
client.connect(ip='127.0.0.1', port=12345)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Similarly, for servers, you would:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server = Server(port=12345) # No need to call bind here
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Finally, note that no real connect/bind will occur unless you instantiate a messaging pattern, which is the subject of our next topic.&lt;/p&gt;
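&lt;p&gt;A toy illustration of that lazy behaviour, with made-up internals that are not the real Zeroless implementation:&lt;/p&gt;

```python
class Server:
    # Stand-in for Zeroless's Server: the constructor only records the
    # port; the actual bind is deferred until a pattern is requested.
    def __init__(self, port):
        self._port = port
        self._bound = False

    def pull(self):
        if not self._bound:
            # Real Zeroless would bind the underlying ZeroMQ socket here.
            self._bound = True
        return iter([])  # stand-in for the receiving generator

server = Server(port=12345)  # nothing bound yet
server.pull()                # binding happens on first pattern use
```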

&lt;h5&gt;
  
  
  Like a factory method pattern
&lt;/h5&gt;

&lt;p&gt;One thing I never liked about &lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt;'s socket instantiation is that we have to call a method called socket, which receives an enum representing the type of the socket. Why not just provide a separate method for every possible socket, as in a factory method pattern kind of interface? That would allow a more straightforward experience for developers, who could then rely on their favourite IDE's code completion to quickly understand what kinds of sockets and parameters they could set. The enum approach, however, will probably send your users to the documentation, solely because of the way &lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt;'s interface is designed.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt;, we fixed that, so you don't need to check the documentation every time you want to instantiate a socket; decent code completion support is all you need. For instance, see how you would instantiate a publisher socket with &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pub = Server(port=12345).pub(topic=b'')
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h5&gt;
  
  
  Connections awareness
&lt;/h5&gt;

&lt;p&gt;One of the questions you may ask is to whom your clients are connected. And with that, &lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt; cannot help you: unless you manage that list of connections yourself, you would not be able to get it afterwards. Therefore, in &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt;, we provide an addresses property, so that you can retrieve all your IP and port pairs as a list of tuples. That is only for clients, of course, as there is no way to know which sockets are connected to your server without building some sort of infrastructure for that yourself.&lt;/p&gt;
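&lt;p&gt;A minimal stand-in, with assumed internals rather than the real Zeroless Client, illustrating how such an addresses property can be kept by the client itself:&lt;/p&gt;

```python
class Client:
    # Sketch only: records every connect() call so the (ip, port)
    # pairs can be retrieved later through the addresses property.
    def __init__(self):
        self._addresses = []

    def connect(self, ip, port):
        # Real Zeroless would also connect the underlying socket here.
        self._addresses.append((ip, port))

    @property
    def addresses(self):
        return list(self._addresses)

client = Client()
client.connect(ip='127.0.0.1', port=12345)
client.connect(ip='127.0.0.1', port=12346)
print(client.addresses)  # [('127.0.0.1', 12345), ('127.0.0.1', 12346)]
```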

&lt;h5&gt;
  
  
  Subscribe should not be tricky
&lt;/h5&gt;

&lt;p&gt;In terms of interface, the subscribe case is particularly problematic in &lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt;. One must use the not-so-intuitive &lt;a href="https://zeromq.github.io/pyzmq/api/zmq.html#zmq.Context.setsockopt"&gt;setsockopt()&lt;/a&gt; method in order to define the topics to subscribe to. Like in the following snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;socket = context.socket(zmq.SUB)
socket.setsockopt(zmq.SUBSCRIBE, b"") # Subscribe to all topics
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I believe most new &lt;a href="http://zeromq.org/"&gt;ZeroMQ&lt;/a&gt; users get this wrong at first, as they suppose no topic means being subscribed to all topics, and keep asking themselves why that damn subscriber socket does not receive their published messages.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt;, we fixed that, so you don't have to instantiate your socket and then set something as essential as a topic, in the subscribe case, via some kind of "obscure" method. Just compare how you would instantiate a subscriber socket with &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;listen_for_pub = client.sub(topics=[b''])
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h5&gt;
  
  
  Generators and high-order functions as first class citizens
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt; sockets tend to use &lt;a href="https://zeromq.github.io/pyzmq/api/zmq.html#zmq.Socket.send"&gt;send()&lt;/a&gt; and &lt;a href="https://zeromq.github.io/pyzmq/api/zmq.html#zmq.Socket.recv"&gt;recv()&lt;/a&gt; methods for the message exchange part. However, it always felt wrong to me to do stuff like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while True:
    data = socket.recv()
    # do something with data
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That is, if Python has built-in support for iterables, or generators if you prefer, why don't we just do something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;listen_for_push = Server(port=12345).pull()

for data in listen_for_push:
    # do something with data
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Way more idiomatic to read incoming messages that way, right? As for sending them, I also followed a different path.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;push = client.push()
push(b"Msg1")
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Therefore, in &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt;, every time you instantiate a messaging pattern that is supposed to send messages, use it as a function. Otherwise, treat it as a generator.&lt;/p&gt;
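&lt;p&gt;That duality can be illustrated with a toy, in-memory pair of classes; these are not the real Zeroless internals, just a sketch of the interface idea:&lt;/p&gt;

```python
from collections import deque

class Push:
    # Sending side: the pattern object is simply callable.
    def __init__(self, pipe):
        self._pipe = pipe

    def __call__(self, *frames):
        # Extra positional arguments become the parts of a
        # multi-part message, kept together as a tuple.
        self._pipe.append(frames if len(frames) > 1 else frames[0])

class Pull:
    # Receiving side: the pattern object is simply iterable.
    def __init__(self, pipe):
        self._pipe = pipe

    def __iter__(self):
        while self._pipe:
            yield self._pipe.popleft()

pipe = deque()
push, pull = Push(pipe), Pull(pipe)
push(b'Msg1')
push(b'1', b'OK')
print(list(pull))  # [b'Msg1', (b'1', b'OK')]
```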

&lt;h5&gt;
  
  
  Multi-part made easy
&lt;/h5&gt;

&lt;p&gt;In &lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt;, if you want to send a multi-part message, you have to use the &lt;a href="https://zeromq.github.io/pyzmq/api/zmq.html#zmq.Socket.recv_multipart"&gt;recv_multipart()&lt;/a&gt; and &lt;a href="https://zeromq.github.io/pyzmq/api/zmq.html#zmq.Socket.send_multipart"&gt;send_multipart()&lt;/a&gt; methods, which deal with a list of messages instead of a single one. In &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt;, I favoured consistency, for a quicker and easier learning path, so there is no difference between the single-part and multi-part APIs.&lt;/p&gt;

&lt;p&gt;If you want to send a multi-part message, just consider that your send function has a printf-like interface and you are set. So, for instance, if you want to send an id separated from your message body, you could:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;push = client.push()
push(b'1', b'OK')
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Additionally, if someone sends you a multi-part message, your generator will return a tuple with all of its parts. As a result, to get the message from the previous example you would need to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;listen_for_push = Server(port=12345).pull()
for id, msg in listen_for_push:
    # do something with id and msg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h5&gt;
  
  
  The future
&lt;/h5&gt;

&lt;p&gt;Although feature parity was never part of my plans, there are still some of &lt;a href="https://github.com/zeromq/pyzmq"&gt;PyZMQ&lt;/a&gt;'s functionalities I would like to provide in &lt;a href="https://github.com/zmqless/python-zeroless"&gt;Zeroless&lt;/a&gt;, like both the &lt;a href="https://zeromq.github.io/pyzmq/api/zmq.html#poller"&gt;poller&lt;/a&gt; and &lt;a href="https://zeromq.github.io/pyzmq/api/zmq.devices.html"&gt;devices&lt;/a&gt; APIs, for instance. So expect more on the way o/. In the meantime, if you feel compelled to help shape this project, please clone our &lt;a href="https://github.com/zmqless/python-zeroless.git"&gt;repository&lt;/a&gt; and see our &lt;a href="http://python-zeroless.readthedocs.org/en/latest/development.html#contributing"&gt;guidelines&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>zeromq</category>
      <category>python</category>
    </item>
  </channel>
</rss>
