<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ITHAKA</title>
    <description>The latest articles on DEV Community by ITHAKA (@ithaka).</description>
    <link>https://dev.to/ithaka</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F450%2F86f70541-ccd3-4b7e-b3fd-bf8f17adc121.png</url>
      <title>DEV Community: ITHAKA</title>
      <link>https://dev.to/ithaka</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ithaka"/>
    <language>en</language>
    <item>
      <title>Choose Boring Releases</title>
      <dc:creator>Dane Hillard</dc:creator>
      <pubDate>Thu, 26 Sep 2024 16:52:04 +0000</pubDate>
      <link>https://dev.to/ithaka/choose-boring-releases-3h7k</link>
      <guid>https://dev.to/ithaka/choose-boring-releases-3h7k</guid>
      <description>&lt;p&gt;Academic research increasingly relies on diverse content types, including gray literature and primary source materials, alongside traditional peer-reviewed works. ITHAKA supports this evolving research landscape through initiatives that foster cross-content connections on &lt;a href="https://www.jstor.org/" rel="noopener noreferrer"&gt;JSTOR&lt;/a&gt;, our digital library that supports research, teaching, and learning. These include &lt;a href="https://www.about.jstor.org/whats-in-jstor/infrastructure/" rel="noopener noreferrer"&gt;infrastructure services&lt;/a&gt; that enable institutions to make their digital archives and special collections discoverable on the platform, and a years-long effort to integrate Artstor — a vast collection of images and multimedia for educational use — onto the JSTOR platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj66dv73cd7vhuj2wmu1k.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj66dv73cd7vhuj2wmu1k.jpeg" width="800" height="610"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Albrecht Dürer (German, Nuremberg 1471–1528 Nuremberg). “Alberti Dvreri Pictoris et Architecti Praestantissimi De Vrbibvs…” 1535. Illustrated book, 78 pp.; H: 13 3/4 in. (35 cm). The Metropolitan Museum of Art. &lt;a href="https://jstor.org/stable/community.34718827" rel="noopener noreferrer"&gt;https://jstor.org/stable/community.34718827&lt;/a&gt;. Just one of the millions of items now available on JSTOR.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;On August 1, 2024, the &lt;a href="https://www.about.jstor.org/news/jstor-announces-artstor-on-jstor/" rel="noopener noreferrer"&gt;Artstor migration to the JSTOR platform&lt;/a&gt; culminated in the need to smoothly guide users from the Artstor Image Workspace (AIW) at library.artstor.org to appropriate counterparts on &lt;a href="http://www.jstor.org" rel="noopener noreferrer"&gt;www.jstor.org&lt;/a&gt;. The amount of work this required was daunting; the AIW platform was as large and complex as JSTOR, and covered many different product areas with ownership spread across several different teams. To achieve this transition without drawing focus away from other objectives, we had to adopt a strategic approach.&lt;/p&gt;

&lt;h3&gt;The challenge&lt;/h3&gt;

&lt;p&gt;Our two main constituents for this transition were our users — primarily art history researchers and faculty — and search engine web crawlers. Like JSTOR users, many Artstor users start their searches outside of our platforms and need to be able to find things no matter what platform changes we make.&lt;/p&gt;

&lt;p&gt;The first piece of the challenge was technical: Client-side routing was used extensively on the AIW platform. That routing wasn’t built with web crawlers in mind, and many crawlers couldn’t handle it well. Worse, these client-side routes were generally invisible to our servers because they used hash-based routing rather than the history API; the fragment portion of a URL is never sent to the server. Some pages did support server-side rendering so web crawlers could index them, which gave us an additional scenario to support.&lt;/p&gt;
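&lt;p&gt;To make the crawler problem concrete: with hash-based routing, the part of the URL that identifies a page lives entirely in the fragment, which browsers never transmit to the server. A minimal illustration in Python (the URL here is a made-up example):&lt;/p&gt;

```python
from urllib.parse import urlsplit

# A hash-based route keeps the application state in the URL fragment,
# which the browser never sends in the HTTP request.
url = "https://library.artstor.org/#/asset/12345"
parts = urlsplit(url)

print(parts.path)      # "/" is all the server ever sees
print(parts.fragment)  # "/asset/12345" exists only in the browser
```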

&lt;p&gt;The second challenge was human: Domain knowledge about the details of all this routing and the user value of various pages was spread throughout the organization, and the people with the domain knowledge didn’t always have the technical knowledge to deal with the redirects.&lt;/p&gt;

&lt;p&gt;Finally, there was the sheer scope of the work to be done. AIW itself was a huge platform, but on top of that we also had to redirect supporting areas like the marketing and support sites. Very early on, we had to set some parameters to rein in the scope:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We wouldn’t guarantee or aim for 100% of traffic to be redirected. Some pages and requests simply couldn’t be redirected anywhere reasonable, and redirecting to generic informational pages can lead to search engine ranking penalties. More importantly for our user focus, it would be disorienting to click a bookmark or a search engine result and get redirected to something irrelevant.&lt;/li&gt;
&lt;li&gt;We would create a central, extensible service designed for loose coupling of the URL patterns to match and the behavior when such a match occurred. This would reduce unnecessary coupling and dependencies between teams as well as the context those teams had to hold onto during the migration project.&lt;/li&gt;
&lt;li&gt;We would enable teams to implement redirects for the broadest-used page types (such as individual items), but would equally enable implementing redirects for the long tail of irregular page types. This would create a familiar implementation pattern for teams that they wouldn’t have to abandon for more exotic use cases.&lt;/li&gt;
&lt;li&gt;We would enable visibility into the behavior of the system as early as possible, so that we could observe the trend over time. This fostered an experimentation mindset: We could develop hypotheses about how the traffic would be handled, implement a change, and quickly confirm or disconfirm the desired effect after deploying the new change. This also gave us opportunities to leverage parts of our deployment platform we hadn’t exercised much previously, adding some greenfield work to an otherwise brownfield kind of activity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So instead of perfection, we defined our scope as removing as much risk as possible and building in as much confidence as we could ahead of our launch date. Our goal was to coast through the finish line instead of scrambling.&lt;/p&gt;

&lt;h3&gt;The approach&lt;/h3&gt;

&lt;p&gt;So how did we do that? A few key tactics, applied consistently throughout the project, made it possible.&lt;/p&gt;

&lt;h4&gt;Practice product-minded engineering&lt;/h4&gt;

&lt;p&gt;We approached the project using product-minded engineering principles, which combine user-focused agile techniques with strong cross-disciplinary skills to identify problems worth solving. By considering user needs and architectural needs together, we arrived at a solution closer to a global maximum without oversimplifying or overengineering.&lt;/p&gt;

&lt;h4&gt;Use traffic shadowing&lt;/h4&gt;

&lt;p&gt;Traffic shadowing allows inbound requests to be sent to two or more destinations, each able to take some action. Although one system is still responsible for sending a response back to the user, other systems can also make observations about or take action on requests. Our Capstan platform made this easy to try, and it worked very well. We continued serving AIW pages to users while also sending requests to our new redirect service, so we could see how it would behave and perform.&lt;/p&gt;
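&lt;p&gt;The essential contract of shadowing can be sketched in a few lines of Python. This is only an illustration; our Capstan platform implements shadowing at the infrastructure layer, and the function names below are invented:&lt;/p&gt;

```python
# Sketch of traffic shadowing: the shadow system observes every request,
# but only the primary system's response ever reaches the user.
def handle(request, primary, shadow):
    try:
        shadow(request)       # shadow observes; its response is discarded
    except Exception:
        pass                  # a shadow failure must never affect real users
    return primary(request)   # only the primary answers the user

seen = []
response = handle(
    "/artstor/asset/1",
    primary=lambda req: "AIW page",
    shadow=seen.append,       # the new redirect service just records what it saw
)
```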

&lt;h4&gt;Measure early and often&lt;/h4&gt;

&lt;p&gt;We built metrics into the redirect service so that as we started implementing and deploying redirect behaviors we could assess the total traffic volume and understand what portion of that traffic was being handled by a known redirect. That was a huge confidence builder, as we could actually demonstrate not only that the service &lt;em&gt;should&lt;/em&gt; redirect the implemented URLs once turned on, but that &lt;em&gt;it was in fact doing so&lt;/em&gt; — with the shadow traffic.&lt;/p&gt;
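&lt;p&gt;The core metric was simple: of all shadowed traffic, how much matched a known redirect? A hedged sketch of that bookkeeping (counter names and paths are invented for illustration):&lt;/p&gt;

```python
from collections import Counter

# Illustrative coverage metrics for the redirect service.
metrics = Counter()

def observe(path, destination):
    """Record one shadowed request and whether a redirect matched it."""
    metrics["total"] += 1
    if destination is not None:
        metrics["known_redirect"] += 1

def coverage():
    return metrics["known_redirect"] / max(metrics["total"], 1)

observe("/#/asset/1", "https://www.jstor.org/stable/1")
observe("/#/legacy-page", None)
print(f"{coverage():.0%} of shadowed traffic has a known redirect")
```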

&lt;h4&gt;Use feature flags&lt;/h4&gt;

&lt;p&gt;Where we couldn’t implement traffic shadowing for some supporting pages, we used &lt;a href="https://medium.com/ithaka-tech/deploying-features-under-cover-of-darkness-f112ce444bba" rel="noopener noreferrer"&gt;feature flagging&lt;/a&gt;. With feature flags, we can ship two different behaviors and then allow staff and automated tests to toggle those behaviors on or off before exposing that new behavior to users. Even under normal circumstances we have many teams developing across many areas of the platform, and a feature flagging strategy brings &lt;a href="https://medium.com/ithaka-tech/find-bugs-before-your-users-do-closing-the-software-development-risk-exposure-gap-cace5dbd19d2" rel="noopener noreferrer"&gt;speed and safety&lt;/a&gt; by ensuring we don’t have to perform a “big bang” release at the very end of a project.&lt;/p&gt;
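&lt;p&gt;In miniature, a flag-gated redirect looks like the sketch below. The flag name and URLs are hypothetical; in our platform, staff and automated tests toggle flags at runtime rather than editing a dictionary:&lt;/p&gt;

```python
# Minimal feature-flag gate: both behaviors ship, and release day is
# just flipping the flag.
flags = {"redirect-support-site": False}

def handle(path):
    if flags["redirect-support-site"]:
        return ("redirect", "https://support.jstor.org" + path)
    return ("serve-legacy", path)  # old behavior until the flag flips

before = handle("/faq")                 # ('serve-legacy', '/faq')
flags["redirect-support-site"] = True   # flipped on release day
after = handle("/faq")                  # ('redirect', 'https://support.jstor.org/faq')
```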

&lt;h4&gt;Think in transition architectures&lt;/h4&gt;

&lt;p&gt;In effect, all of the redirect architecture we built to support this migration was a transition architecture. But we also built some smaller transition systems to move the control of traffic from external vendors into our own platform, making it easier to flip a switch on release day instead of having to handle several moving parts in a row.&lt;/p&gt;

&lt;p&gt;Easing the transition architecture burden made a big difference on several of our support sites. For example, the DNS records for these sites were initially pointed directly at third-party vendors; before we could control the traffic, the traffic would have to come to us. If we waited until release day for that to happen, the DNS change could take longer than expected to propagate, or it could result in unforeseen consequences, or the logic we implemented for it could be wrong. Instead, we designed these constraints away. We worked to bring the DNS under our control earlier, and although it initially continued to point to the third-party vendors, we could now make immediate changes to its behavior.&lt;/p&gt;

&lt;h4&gt;Create a loosely coupled design&lt;/h4&gt;

&lt;p&gt;Because we had to collect and implement so much widely scattered domain knowledge into these redirects, we had to build a system that made it easy to contribute and hard to cross wires or step on other people’s toes. The system we created allowed teams to come in with nothing more than a URL pattern to match on AIW and a destination URL to send the traffic to. If necessary, they could also contact downstream services for data to help them decide where to redirect that traffic.&lt;/p&gt;
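&lt;p&gt;A team’s contribution boiled down to a URL pattern and a destination. The sketch below shows the shape of that loose coupling; the patterns and destination URLs are invented examples, not our actual routes:&lt;/p&gt;

```python
import re

# Illustrative registry: teams contribute (pattern, destination) pairs
# without needing to know how the redirect machinery works.
REDIRECTS = [
    (re.compile(r"^/asset/(?P<id>\w+)$"),
     "https://www.jstor.org/stable/community.{id}"),
]

def resolve(route):
    for pattern, destination in REDIRECTS:
        match = pattern.match(route)
        if match:
            return destination.format(**match.groupdict())
    return None  # the long tail we intentionally chose not to handle

print(resolve("/asset/34718827"))  # a matching route gets a destination
print(resolve("/no/such/page"))    # everything else falls through
```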

&lt;p&gt;Basically, we decided to meet teams where they are, with their domain knowledge, so they didn’t have to think too much about the technical aspects of how redirection works. This also allowed the team working on the redirect service to do that work without having to worry too much about the domain knowledge. This separated responsibility in a nice way and ensured better outcomes for us and our users.&lt;/p&gt;

&lt;h4&gt;Have regular, valuable trade-off discussions&lt;/h4&gt;

&lt;p&gt;Finally, we engaged throughout this whole project in high-quality trade-off discussions. We talked about the level of effort a particular change required compared to the expected user impact. We talked about specific pages and how to handle them. And because we had such a rich cross-section of technical and domain knowledge in those discussions, we made better decisions about how to simplify things or adjust to reduce our overall risk and costs.&lt;/p&gt;

&lt;p&gt;This is really a reprise of product-minded engineering, but it highlights one of the practice’s most impactful outcomes: giving us context about what’s possible, what gives us the best ROI, and how we might deliver better outcomes for ourselves and our users.&lt;/p&gt;

&lt;h3&gt;The result&lt;/h3&gt;

&lt;p&gt;We worked hard leading up to our launch date, August 1. But release day itself was one of the most boring of all time. We did about 30 minutes of real work to flip on some of those feature flags and deploy a couple of applications, all of which we were highly confident in because of our previous metrics and testing. In that short time we started redirecting over 95% of all AIW traffic to known locations on JSTOR and its supporting sites. The remaining requests were for things we had intentionally decided not to handle.&lt;/p&gt;

&lt;p&gt;We only shipped one bug that we know of, which we spotted quickly. Because of our system design we were able to revert immediately to the old system while we worked on a short-term fix, and within a couple of weeks put a long-term fix in place so we could sunset the old system.&lt;/p&gt;

&lt;p&gt;Meanwhile, we’re listening for user experience feedback in case anyone is confused or notices broken redirects, and we’re watching how our search engine crawling and ranking responds. As Google and Bing and others recrawl our content, they’ll start sending people directly to JSTOR. &lt;a href="https://groups.niso.org/higherlogic/ws/public/download/26321" rel="noopener noreferrer"&gt;NISO recommends&lt;/a&gt; leaving these redirects in place for at least a year, so we’ve committed to that. Next year we’ll evaluate whether this architecture has run its course and act accordingly.&lt;/p&gt;

&lt;h3&gt;The takeaways&lt;/h3&gt;

&lt;p&gt;While the technical tools and architecture we used were obviously important, adopting a loosely coupled, extensible design was key to our success. By bringing teams solutions that let them focus on their domain knowledge, we played to everyone’s strengths and kept teams from colliding with each other. It may not be possible in all cases, but for this project it was one of our best and most productive decisions.&lt;/p&gt;

&lt;p&gt;By thinking in transition architectures and building key visibility into the system at the outset, we were able to create significant confidence in our progress and readiness for release day, making the finale just a blip on the radar. This raised an unexpected consequence — it would have been easy to say, “Look, we did the thing,” and part ways on release day, because it was so uneventful. Choose boring releases, but be sure to keep the afterparty exciting.&lt;/p&gt;

&lt;p&gt;Interested in learning more about working at ITHAKA? Contact recruiting to learn more about &lt;a href="https://www.ithaka.org/careers/?gh_src=1ba9e9eb5us" rel="noopener noreferrer"&gt;ITHAKA tech jobs&lt;/a&gt;.&lt;/p&gt;




</description>
      <category>integrationtesting</category>
      <category>releasemanagement</category>
      <category>agile</category>
      <category>migration</category>
    </item>
    <item>
      <title>Our Open Source Software in the Wild</title>
      <dc:creator>Dane Hillard</dc:creator>
      <pubDate>Fri, 21 Jun 2019 17:20:42 +0000</pubDate>
      <link>https://dev.to/ithaka/our-open-source-software-in-the-wild-3kh2</link>
      <guid>https://dev.to/ithaka/our-open-source-software-in-the-wild-3kh2</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F882%2F1%2A_COUuOHLmKfILvbwWp873w.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F882%2F1%2A_COUuOHLmKfILvbwWp873w.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ithaka.org/" rel="noopener noreferrer"&gt;ITHAKA&lt;/a&gt; has a mission to make knowledge widely available, accessible, and sustainable. This mission runs parallel to the open source approach to software development. In making software, systems, and methodologies public, organizations enable others to build on their successes as well as critically examine these artifacts for bugs or security flaws.&lt;/p&gt;

&lt;p&gt;The collaborative nature that open source strives toward acts as a tide that raises all ships; the community can produce improvements that benefit everyone. These software changes are tracked over time and the historical record that arises also acts as an audit trail for reproducibility or as a catalyst for someone hoping to learn how something is done.&lt;/p&gt;

&lt;p&gt;Our front-end developers who work on &lt;a href="https://www.jstor.org/" rel="noopener noreferrer"&gt;JSTOR&lt;/a&gt; consume data from a myriad of back-end systems, from those we manage in-house for authentication and search to third-party APIs for fetching data from Wikipedia. As the list of services we interact with grew over the last few years, we found it difficult to manage the services, let alone all the endpoints we communicate with in each of them. We built a &lt;a href="https://www.python.org/" rel="noopener noreferrer"&gt;Python&lt;/a&gt; project, &lt;a href="https://github.com/ithaka/apiron" rel="noopener noreferrer"&gt;ithaka/apiron&lt;/a&gt;, to ease this burden. Soon after, we decided apiron might be useful enough to others to warrant publication and got to work on all the legal and marketing threads involved.&lt;/p&gt;
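&lt;p&gt;The idea behind apiron is declarative: describe a service and its endpoints once, then build calls from that description. Here is a plain-Python sketch of that style; this is not apiron’s actual API, and the service and endpoint names are invented:&lt;/p&gt;

```python
# A declarative service description in the spirit of apiron (sketch only):
# the service's domain and endpoint paths live in one place instead of
# being scattered through the code base.
class Service:
    def __init__(self, domain, **endpoints):
        self.domain = domain
        self.endpoints = endpoints

    def url_for(self, name, **path_kwargs):
        return self.domain + self.endpoints[name].format(**path_kwargs)

search = Service(
    "https://search.example.com",   # hypothetical back-end service
    results="/v2/search/{index}",
)
print(search.url_for("results", index="articles"))
```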

&lt;p&gt;Since open sourcing apiron, it’s seen moderate but intriguing interest from the Python community. Jeff Triplett, a Director of the Python Software Foundation, was &lt;a href="https://github.com/ithaka/apiron/pull/12" rel="noopener noreferrer"&gt;the first third-party contributor&lt;/a&gt; to the apiron project. Jeff lives in Lawrence, KS, the birthplace of the &lt;a href="https://www.djangoproject.com/" rel="noopener noreferrer"&gt;Django web framework&lt;/a&gt; that we make heavy use of on JSTOR. He organizes the DjangoCon US conference as the president and co-founder of the Django Events Foundation North America.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ithaka/apiron/pull/14" rel="noopener noreferrer"&gt;Another third-party contributor&lt;/a&gt;, who we know only by the cryptic handle teffalump, used apiron to build a Python package for communicating with servers managing DICOM, a standard for working with medical imaging records. Excitingly, their work went on to be &lt;a href="https://github.com/brown-bnc/bnctools" rel="noopener noreferrer"&gt;used by folks&lt;/a&gt; in the Brown University Behavioral and Neuroimaging team.&lt;/p&gt;

&lt;p&gt;One beautiful consequence of the freedom of open source software is that you never quite know where it will end up. We built apiron to improve how we talk to a search service, and already it’s spread to use cases we couldn’t have imagined at the outset if we’d tried. Exploring these use cases as opportunities to learn will ultimately help us improve our software and ourselves.&lt;/p&gt;

&lt;p&gt;As ITHAKA publishes software that generates reach and impact, those we engage with may develop an interest in working with us more directly, perhaps as collaborators or even as employees. It’s also a new way to pursue our mission to expand access to knowledge (in all its forms).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you’re curious about the steps involved in open sourcing projects, my experience with it, how to effectively manage an open source project, or are thinking about using apiron, please leave a comment or reach out to me on Twitter at&lt;/em&gt; &lt;a href="https://twitter.com/easyaspython" rel="noopener noreferrer"&gt;&lt;em&gt;@easyaspython&lt;/em&gt;&lt;/a&gt;&lt;em&gt;!&lt;/em&gt;&lt;/p&gt;




</description>
      <category>softwaredevelopment</category>
      <category>opensource</category>
      <category>webdev</category>
      <category>python</category>
    </item>
    <item>
      <title>Blazing fast Python</title>
      <dc:creator>Dane Hillard</dc:creator>
      <pubDate>Wed, 19 Dec 2018 15:01:01 +0000</pubDate>
      <link>https://dev.to/ithaka/blazing-fast-python-2kkl</link>
      <guid>https://dev.to/ithaka/blazing-fast-python-2kkl</guid>
      <description>

&lt;h4&gt;Profiling Python applications using Pyflame&lt;/h4&gt;

&lt;p&gt;This post originally appeared &lt;a href="https://medium.com/build-smarter/blazing-fast-python-40a2b25b0495"&gt;on Build Smarter&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;Perhaps you’ve faced the fortunate challenge of scaling up a Python application to accommodate a steadily increasing user base. Though most cloud hosting providers make it easier than ever to throw more hardware at a problem, there comes a point when the cost outweighs convenience.&lt;/p&gt;

&lt;p&gt;Around the time scaling horizontally starts looking less attractive, developers turn to performance tuning to make applications more efficient. In the Python community there are a number of tools to help in this arena; from the built-in &lt;code&gt;timeit&lt;/code&gt; module to profiling tools like &lt;code&gt;cProfile&lt;/code&gt;, there are quick ways to test the difference between a particular line of code and any of its alternatives.&lt;/p&gt;
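&lt;p&gt;For instance, the built-in &lt;code&gt;timeit&lt;/code&gt; module makes it quick to compare a line of code against an alternative (the expressions below are just an example):&lt;/p&gt;

```python
from timeit import timeit

# Micro-benchmark: compare a generator expression against a list
# comprehension feeding str.join.
gen = timeit("''.join(str(n) for n in range(100))", number=2000)
lst = timeit("''.join([str(n) for n in range(100)])", number=2000)
print(f"generator: {gen:.4f}s  list: {lst:.4f}s")
```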

&lt;p&gt;Although profiling tools help you see important information about which calls in your application are time consuming, it’s difficult to exercise an application during local development the same way your users exercise it in real life. The solution to bridging this gap? Profile in production!&lt;/p&gt;

&lt;h3&gt;Pyflame&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://pyflame.readthedocs.io/en/latest/"&gt;Pyflame&lt;/a&gt; is a profiling tool for Python applications that makes use of the &lt;a href="http://man7.org/linux/man-pages/man2/ptrace.2.html"&gt;&lt;code&gt;ptrace(2)&lt;/code&gt;&lt;/a&gt; system call on Linux. &lt;code&gt;ptrace&lt;/code&gt; allows processes to observe (and in some use cases, manipulate) another process’ memory. Pyflame ultimately uses &lt;code&gt;ptrace&lt;/code&gt; to aggregate statistics about a running Python process’ stack in a format that’s helpful for visualizing where that process spends the majority of its time. Since Pyflame runs as a separate process and inspects existing stack data, its overhead is small compared to solutions that run as part of the process itself. This also means you can be sure the profile analysis you end up with faithfully represents your application’s behavior and isn’t greatly skewed by the profiling work.&lt;/p&gt;
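&lt;p&gt;To see what “aggregating statistics about a running process’ stack” means, here is an in-process sampling sketch in pure Python. Pyflame does the equivalent from &lt;em&gt;outside&lt;/em&gt; the process via &lt;code&gt;ptrace&lt;/code&gt;, which is exactly why its overhead stays low; this sketch is only a conceptual model, not how Pyflame is implemented:&lt;/p&gt;

```python
import sys
import threading
import time
from collections import Counter

# Periodically capture a thread's call stack and count how often each
# unique stack appears; hot code paths accumulate the most samples.
samples = Counter()

def sample(thread_id, interval=0.005, duration=0.1):
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        frame = sys._current_frames().get(thread_id)
        stack = []
        while frame is not None:
            stack.append(frame.f_code.co_name)
            frame = frame.f_back
        if stack:
            samples[";".join(reversed(stack))] += 1
        time.sleep(interval)

stop = threading.Event()

def busy_work():
    while not stop.is_set():
        sum(range(100))

worker = threading.Thread(target=busy_work)
worker.start()
sample(worker.ident)
stop.set()
worker.join()
print(samples.most_common(1))  # the hottest stack, with its sample count
```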

&lt;p&gt;After running Pyflame and a bit of post-processing you’ll be able to generate visualizations like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cdn-images-1.medium.com/max/798/1*yMBtTA-vHj0TZDzPxCik7A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://cdn-images-1.medium.com/max/798/1*yMBtTA-vHj0TZDzPxCik7A.png" alt=""&gt;&lt;/a&gt;The y-axis is the call stack; the x-axis represents the proportion of CPU time spent in a particular call&lt;/p&gt;

&lt;h3&gt;But how?&lt;/h3&gt;

&lt;p&gt;As &lt;code&gt;ptrace&lt;/code&gt; is specific to Linux, Pyflame can only target applications running in a Linux environment. Pyflame may also need &lt;a href="https://github.com/ithaka/pyflame/commit/2ec9e28e6caf671feba29549a18eef04e55bc2ba"&gt;some help finding your Python&lt;/a&gt;; it defaults to trying to link against Python 2.7 if it isn’t sure what Python to use. Given these caveats, installation takes just a few steps that will produce a &lt;code&gt;pyflame&lt;/code&gt; executable:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone https://github.com/uber/pyflame
$ cd pyflame/
$ ./autogen.sh
$ ./configure
$ make
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;pyflame&lt;/code&gt; executable can be found in the &lt;code&gt;src&lt;/code&gt; directory after the build.&lt;/p&gt;

&lt;p&gt;Now that you’ve built the tool, it should be ready to analyze your Python application. To do so, start by snagging the process ID (PID) of the application you’re interested in. If you’re running a Django application, for instance, this would probably be the PID of one of the WSGI workers running the Django app. After you’ve got a PID handy, you’ll run &lt;code&gt;pyflame&lt;/code&gt; with options for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How long to sample the process, in seconds&lt;/li&gt;
&lt;li&gt;How often to sample the stack of the process, in seconds&lt;/li&gt;
&lt;li&gt;Removing the idle time of the application from the data to simplify the resulting visualization&lt;/li&gt;
&lt;li&gt;The name of the output file&lt;/li&gt;
&lt;li&gt;The PID of the process&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This will look something like&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./src/pyflame -s 300 -r 0.01 -x -o my-app-20181212.samp -p 9876
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Pyflame will sample the process with PID &lt;code&gt;9876&lt;/code&gt; every ten milliseconds for five minutes, producing the &lt;code&gt;my-app-20181212.samp&lt;/code&gt; data file. Trying to read this file yourself won’t make a lot of sense, but fortunately it’s in just the right format for &lt;a href="https://github.com/brendangregg/FlameGraph"&gt;FlameGraph&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;FlameGraph&lt;/h3&gt;

&lt;p&gt;FlameGraph is an interactive visualization tool for exploring CPU, memory, and other trace data. Since you’ve got your trace data, you can run it through FlameGraph in a couple of additional steps:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone https://github.com/brendangregg/FlameGraph
$ cd FlameGraph/
$ cat /path/to/my-app-20181212.samp | ./flamegraph.pl &amp;gt; my-app.svg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Open up the &lt;code&gt;my-app.svg&lt;/code&gt; file in a browser and you’ll see something resembling the screenshot from earlier. You can hover over any segment to see what call it is and how much of the CPU time it represents. Clicking on a segment zooms into it so that you only see it and its children. Most usefully, you can perform a regex-capable search using the “Search” button in the top right. Searching for a particular pattern will highlight any matches:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cdn-images-1.medium.com/max/798/1*MLzC2ZtG-bH2v4H89GxC0w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://cdn-images-1.medium.com/max/798/1*MLzC2ZtG-bH2v4H89GxC0w.png" alt=""&gt;&lt;/a&gt;Pink segments match the search query&lt;/p&gt;

&lt;p&gt;By exploring a flame graph, you can more easily find slow and common calls in your application that aren’t always obvious from poring over loads of raw text. And by identifying these calls, you can start to understand how you might effectively pare things down into a leaner application.&lt;/p&gt;

&lt;p&gt;The pink segments above represent instances of a particular method call within our application; this call isn’t particularly expensive, but occurs many times on nearly every request. We realized that while we had code that seemed like it was caching these calls, the TTL was only set to one second. By updating the TTL to ten seconds, we were able to reduce how often this call occurred, which was confirmed in the “after” graph:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cdn-images-1.medium.com/max/1024/1*SYk6RmNZiT4QQQMauyQJDQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://cdn-images-1.medium.com/max/1024/1*SYk6RmNZiT4QQQMauyQJDQ.png" alt=""&gt;&lt;/a&gt;Less pink means less overall CPU time spent on matching calls&lt;/p&gt;
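&lt;p&gt;The fix itself amounted to raising one TTL value. A minimal sketch of the kind of TTL cache involved (class and variable names here are illustrative, not our actual code):&lt;/p&gt;

```python
import time

# Tiny TTL cache: recompute a value only after its entry expires.
# Raising ttl_seconds from 1 to 10 is the shape of the fix described above.
class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key, compute):
        value, expires = self._store.get(key, (None, 0.0))
        if time.monotonic() >= expires:
            value = compute()  # only pay the cost after expiry
            self._store[key] = (value, time.monotonic() + self.ttl)
        return value

calls = []
cache = TTLCache(ttl_seconds=10)
lookup = lambda: calls.append(1) or "metadata"
first = cache.get("item", lookup)
second = cache.get("item", lookup)  # within the TTL: served from cache
```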

&lt;h3&gt;So what?&lt;/h3&gt;

&lt;p&gt;The call we were examining now consumes less of the total CPU time, but how does this translate to the real world? We’ve seen a noticeable drop in actual CPU load per request, shown by the increased gap between the two lines below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cdn-images-1.medium.com/max/892/1*H8Pz-snDWoKbSEeAjdhO-w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://cdn-images-1.medium.com/max/892/1*H8Pz-snDWoKbSEeAjdhO-w.png" alt=""&gt;&lt;/a&gt;Requests/minute in yellow, CPU load in green (the big dip at 10:40 is an artifact between deployments)&lt;/p&gt;

&lt;p&gt;This drop in CPU load translated to increased throughput and a significant drop in response time:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cdn-images-1.medium.com/max/730/1*nb9EKJcJ_yCGmc4qhniB4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://cdn-images-1.medium.com/max/730/1*nb9EKJcJ_yCGmc4qhniB4w.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before this change, our application was largely CPU bound. We tune Apache to throttle connections in tandem with this behavior so that instances aren’t overwhelmed. After this change, there was enough headroom to reduce the application’s footprint by two server instances. We’re now connection bound and can look into opening that throttling up a bit, which could allow us to shed another instance or two. Across all our applications, especially those requiring significant computing resources, these kinds of optimizations can lead to thousands in savings annually.&lt;/p&gt;

&lt;p&gt;Pyflame is helpful not just for identifying opportunities but also for confirming hypotheses. Optimizing the calls that appear most prominently in the flame graph can yield tangible benefits, which can ultimately reduce cost. I recommend adding this tool to your arsenal for performance testing and monitoring.&lt;/p&gt;

&lt;p&gt;To see more about flame graphs, Pyflame, and profiling tools, check out these resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=D53T1Ejig1Q"&gt;“Visualizing Performance with Flame Graphs”&lt;/a&gt; by Brendan Gregg, USENIX ATC ’17&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rbspy/rbspy"&gt;rbspy&lt;/a&gt;, a sampling profiler for Ruby (by Julia Evans)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.deconstructconf.com/2018/julia-evans-build-impossible-programs"&gt;“Build Impossible Programs”&lt;/a&gt; by Julia Evans, Deconstruct 2018&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=w97I5q0hbkw"&gt;“Flamegraph that! Self-service profiling tool for Node and Python services”&lt;/a&gt; by Ruth Grace Wong, PyCon Canada 2017&lt;/li&gt;
&lt;/ul&gt;





</description>
      <category>softwaredevelopment</category>
      <category>python</category>
      <category>performance</category>
    </item>
    <item>
      <title>Open sourcing apiron: A Python package for declarative RESTful API interaction</title>
      <dc:creator>Dane Hillard</dc:creator>
      <pubDate>Mon, 01 Oct 2018 17:22:32 +0000</pubDate>
      <link>https://dev.to/ithaka/open-sourcing-apiron-a-python-package-for-declarative-restful-api-interaction-5eo5</link>
      <guid>https://dev.to/ithaka/open-sourcing-apiron-a-python-package-for-declarative-restful-api-interaction-5eo5</guid>
      <description>

&lt;p&gt;This post originally appeared &lt;a href="https://medium.com/build-smarter/open-sourcing-apiron-1f2010393675"&gt;on Build Smarter&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;At ITHAKA our web teams write applications that each interact with a large handful of services—sometimes as many as ten. Each of those services provides multiple endpoints, each with its own set of path variables and query parameters.&lt;/p&gt;

&lt;p&gt;Gathering data from multiple services has become a ubiquitous task for web application developers. The complexity can grow quickly: calling an API endpoint with multiple parameter sets, calling multiple API endpoints, calling multiple endpoints in multiple APIs. While the business logic can get hairy, the code to interact with those APIs doesn’t have to.&lt;/p&gt;

&lt;p&gt;We created a module some time ago for low-level HTTP interactions, and use it throughout our code base. For a good while, though, the actual details of each service call—the service name, endpoint path, query parameters—were scattered throughout the code. This inevitably led to duplication as well as a bug or two when we made an update in one place and forgot about the other.&lt;/p&gt;

&lt;p&gt;To reduce this pain, we eventually took stock of these scattered configurations and centralized them in a single registry module. This module essentially contains a giant dictionary of all the services we interact with:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;service_endpoints&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;'CONTENT_SERVICE'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;'SERVICE'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'content-service'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'METADATA_ENDPOINT'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'/content/{id}'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'CITATION_ENDPOINT'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'/citation/{citation_type}/{id}'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="s"&gt;'SEARCH_SERVICE'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;'SERVICE'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'search'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'SEARCH_ENDPOINT'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'/search'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'EXCERPTS_ENDPOINT'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'/excerpt?contentId={content_id}'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="o"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Each service has a &lt;code&gt;'SERVICE'&lt;/code&gt; key containing the name of the service used to discover hosts, and some number of &lt;code&gt;'*_ENDPOINT'&lt;/code&gt; keys that describe an endpoint and its parameters. Calling these services looks like this:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;http&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;make_get_request_with_timeout&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;services.registry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;service_endpoints&lt;/span&gt;

&lt;span class="n"&gt;CONTENT_SERVICE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;service_endpoints&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'CONTENT_SERVICE'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{})&lt;/span&gt;
&lt;span class="n"&gt;METADATA_ENDPOINT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;CONTENT_SERVICE&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'METADATA_ENDPOINT'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# determine content_id...
&lt;/span&gt;
&lt;span class="n"&gt;metadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;make_get_request_with_timeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;service_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;CONTENT_SERVICE&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'SERVICE'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;METADATA_ENDPOINT&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;content_id&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;'Accept'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'application/json'&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;request_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see, these endpoints come in a variety of shapes. This solved the issue of duplication across the codebase, but we still faced a few problems with this approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Strings as endpoint descriptors don’t result in structured data, which makes them difficult to introspect or validate.&lt;/li&gt;
&lt;li&gt;Even with fully formattable strings, sometimes a call needed to exclude a parameter altogether, or add a new one. This had to be done ad hoc after the fact.&lt;/li&gt;
&lt;li&gt;Our HTTP module still had a laundry list of methods, each with slightly different behavior and unclear names like &lt;code&gt;make_get_request_fast&lt;/code&gt; (&lt;em&gt;how&lt;/em&gt; fast?). Many of these methods called the same underlying methods with different default parameters, and the stack got pretty deep sometimes. Choosing the right method for a call was hard.&lt;/li&gt;
&lt;/ol&gt;
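&lt;p&gt;To make the first two problems concrete, here is a small, self-contained illustration (using the excerpts endpoint string from the registry above) of why bare format strings are hard to introspect and awkward to vary at call time:&lt;/p&gt;

```python
# A string-based endpoint descriptor, as in the registry above.
EXCERPTS_ENDPOINT = '/excerpt?contentId={content_id}'

# Formatting works when every placeholder is supplied...
url = EXCERPTS_ENDPOINT.format(content_id='123')
assert url == '/excerpt?contentId=123'

# ...but there is no structured way to ask "which parameters does this
# endpoint take?" short of parsing the string, and omitting a placeholder
# fails outright rather than simply dropping the parameter:
try:
    EXCERPTS_ENDPOINT.format()
except KeyError:
    pass  # excluding contentId requires ad hoc string surgery instead

# Likewise, adding a query parameter the template never anticipated
# means hand-editing the formatted URL after the fact.
```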

&lt;p&gt;To address this high variability of behavior and lack of structured data, we built a new paradigm for HTTP interactions that provided a declarative interface for configuring services. We wanted a few things out of it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Code describes how a service interaction looks, not the details of how to make the underlying HTTP call happen.&lt;/li&gt;
&lt;li&gt;The endpoint descriptors are structured and support introspection.&lt;/li&gt;
&lt;li&gt;Default behaviors can be declared in the service configuration, but can also be easily overridden dynamically at call time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With these desires in mind, &lt;a href="https://github.com/ithaka/apiron"&gt;we came up with &lt;code&gt;apiron&lt;/code&gt;&lt;/a&gt;. With &lt;code&gt;apiron&lt;/code&gt; the same definition from above looks more like this:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;services&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;IthakaDiscoverableService&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;apiron.endpoint&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Endpoint&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ContentService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;IthakaDiscoverableService&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;service_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'content-service'&lt;/span&gt;

    &lt;span class="n"&gt;metadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Endpoint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'/content/{id}'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;citation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Endpoint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'/citation/{citation_type}/{id}'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And the code to call the service looks more like this:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;apiron.client&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ServiceCaller&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Timeout&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;services&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ContentService&lt;/span&gt;

&lt;span class="n"&gt;CONTENT_SERVICE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ContentService&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# determine content_id...
&lt;/span&gt;
&lt;span class="n"&gt;metadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ServiceCaller&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;CONTENT_SERVICE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;CONTENT_SERVICE&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;path_kwargs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;'content_id'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content_id&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;'Accept'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'application/json'&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;timeout_spec&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Timeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;read_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We can now define what &lt;code&gt;ContentService&lt;/code&gt; looks like and easily refer back to that class whenever we need to understand its shape. Service discovery is now a plugin system. Endpoints can be introspected and have their parameters validated and enforced.&lt;/p&gt;
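&lt;p&gt;As one illustration of that introspection (a standard-library sketch, not &lt;code&gt;apiron&lt;/code&gt;’s actual internals): once an endpoint’s path template is data attached to an object rather than a string buried in a registry, extracting its expected parameters is straightforward:&lt;/p&gt;

```python
from string import Formatter

def path_placeholders(path_template):
    """Return the placeholder names in a path template, in order."""
    return [name for _, name, _, _ in Formatter().parse(path_template) if name]

# The citation endpoint's path from the ContentService definition above:
assert path_placeholders('/citation/{citation_type}/{id}') == ['citation_type', 'id']
```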

&lt;p&gt;With &lt;code&gt;apiron&lt;/code&gt; we’ve been able to replace many of our existing service calls quickly and with little pain. The code has become clearer, and with that cognitive load out of the way we can focus on other gains like streaming responses and data compression. It’s been nice for us, and we’d like to make it nice for you too.&lt;/p&gt;

&lt;p&gt;You can install &lt;code&gt;apiron&lt;/code&gt; from PyPI with &lt;code&gt;pip&lt;/code&gt; (or your favorite package manager):&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pip install apiron
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;There are a few other helpful tools in the package, so give it a try today!&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qF2jUiUG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-6a5bca60a4ebf959a6df7f08217acd07ac2bc285164fae041eacb8a148b1bab9.svg"&gt;&lt;a href="https://github.com/ithaka"&gt;ithaka&lt;/a&gt; / &lt;a href="https://github.com/ithaka/apiron"&gt;apiron&lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;🍳 apiron is a Python package that helps you cook a tasty client for RESTful APIs. Just don't wash it with SOAP.&lt;/h3&gt;
  &lt;/div&gt;
&lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="instapaper_body md"&gt;
&lt;h1&gt;
apiron&lt;/h1&gt;
&lt;p&gt;&lt;a href="https://apiron.readthedocs.io/en/latest/?badge=latest" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/4b7957c4f1805c5408706f74d009b959c05e8dc5/68747470733a2f2f72656164746865646f63732e6f72672f70726f6a656374732f617069726f6e2f62616467652f3f76657273696f6e3d6c6174657374" alt="Documentation Status"&gt;&lt;/a&gt;
&lt;a href="https://badge.fury.io/py/apiron" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/283a51345b3a4a4f16e2324714dbe3f0d365f865/68747470733a2f2f62616467652e667572792e696f2f70792f617069726f6e2e737667" alt="PyPI version"&gt;&lt;/a&gt;
&lt;a href="https://travis-ci.org/ithaka/apiron" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/b829e1137459025316689281124fbcaf044f909f/68747470733a2f2f7472617669732d63692e6f72672f697468616b612f617069726f6e2e7376673f6272616e63683d646576" alt="Build Status"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;apiron&lt;/code&gt; helps you cook a tasty client for RESTful APIs. Just don't wash it with SOAP.&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/ithaka/apiron/raw/master/docs/_static/cast-iron-skillet.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qg-rxGyX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/ithaka/apiron/raw/master/docs/_static/cast-iron-skillet.png" alt="Pie in a cast iron skillet" width="200"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Gathering data from multiple services has become a ubiquitous task for web application developers
The complexity can grow quickly
calling an API endpoint with multiple parameter sets
calling multiple API endpoints,
calling multiple endpoints in multiple APIs.
While the business logic can get hairy,
the code to interact with those APIs doesn't have to.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;apiron&lt;/code&gt; provides declarative, structured configuration of services and endpoints
with a unified interface for interacting with them.&lt;/p&gt;
&lt;h2&gt;
Defining a service&lt;/h2&gt;
&lt;p&gt;A service definition requires a domain
and one or more endpoints with which to interact:&lt;/p&gt;
&lt;div class="highlight highlight-source-python"&gt;&lt;pre&gt;&lt;span class="pl-k"&gt;from&lt;/span&gt; apiron &lt;span class="pl-k"&gt;import&lt;/span&gt; JsonEndpoint, Service

&lt;span class="pl-k"&gt;class&lt;/span&gt; &lt;span class="pl-en"&gt;GitHub&lt;/span&gt;(&lt;span class="pl-e"&gt;Service&lt;/span&gt;):
    domain &lt;span class="pl-k"&gt;=&lt;/span&gt; &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;https://api.github.com&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;
    user &lt;span class="pl-k"&gt;=&lt;/span&gt; JsonEndpoint(&lt;span class="pl-v"&gt;path&lt;/span&gt;&lt;span class="pl-k"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;/users/&lt;span class="pl-c1"&gt;{username}&lt;/span&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;)
    repo &lt;span class="pl-k"&gt;=&lt;/span&gt; JsonEndpoint(&lt;span class="pl-v"&gt;path&lt;/span&gt;&lt;span class="pl-k"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;/repos/&lt;span class="pl-c1"&gt;{org}&lt;/span&gt;/&lt;span class="pl-c1"&gt;{repo}&lt;/span&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;)&lt;/pre&gt;&lt;/div&gt;
&lt;h2&gt;
Interacting with a service&lt;/h2&gt;
&lt;p&gt;Once your service…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
&lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/ithaka/apiron"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



</description>
      <category>python</category>
      <category>api</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
