<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: a little Fe2O3.nH2O-y</title>
    <description>The latest articles on DEV Community by a little Fe2O3.nH2O-y (@psedge).</description>
    <link>https://dev.to/psedge</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F127250%2F04370edb-e8cb-4900-95d0-6f32086d83b5.jpg</url>
      <title>DEV Community: a little Fe2O3.nH2O-y</title>
      <link>https://dev.to/psedge</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/psedge"/>
    <language>en</language>
    <item>
      <title>Why people don't make bi-directional code/modelling programs</title>
      <dc:creator>a little Fe2O3.nH2O-y</dc:creator>
      <pubDate>Thu, 19 Mar 2026 15:41:19 +0000</pubDate>
      <link>https://dev.to/psedge/why-people-dont-make-bi-directional-codemodelling-programs-53le</link>
      <guid>https://dev.to/psedge/why-people-dont-make-bi-directional-codemodelling-programs-53le</guid>
      <description>&lt;p&gt;There's a bug in Draw.io that means a call to  &lt;code&gt;app.editor.setGraphXml(app.editor.getGraphXml))&lt;/code&gt; isn't cleanly reproducing the diagram. I wonder why that is, possibly there's additional processing or cleaning on either a full file load or write. Individual nodes (proto &lt;code&gt;MxRectange&lt;/code&gt;, &lt;code&gt;MxCircle&lt;/code&gt;) appear to be recreated very well, but their relationships aren't (represented internally by both a top-level node in the model, as well as an additional object referring to the top level as a property of the sender or receiver in &lt;code&gt;MxNode.edges&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;The closest examples of what I'm trying to accomplish are swimlanes.io and dbdiagram.io - both amazing tools for taking something defined as code (giving developers all the power of copy-paste etc., as well as uber-easy readability and updates) and producing something visual from it. Interestingly, they focus on code as the input method: both right-hand sides (RHS) are essentially read-only. I wonder, why is that?&lt;/p&gt;

&lt;p&gt;After looking at Draw.io for a few hours, to the point where I have two panes (code LHS, local draw.io iframe RHS), I think I vaguely understand the problem and am at a point where I can put it down on paper.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Start with a blank slate, LHS+RHS. If we add code, we can add a new node to the diagram model and update the view. Maybe our DSL allows for some metadata to be set; take that and represent it in the model. We store a reference to the model against the token in the AST, and move on. Let's say we now add a node on the RHS: similar procedure, except we need to check in our global map whether we have a reference to it in code, and if not, add a declarative line. Hopefully our translation knows enough about the available metadata to express it through the generated code.&lt;/p&gt;
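&lt;p&gt;The bookkeeping above can be sketched as a two-way registry. Everything here (&lt;code&gt;SyncRegistry&lt;/code&gt;, the callback arguments) is hypothetical illustration, not draw.io's or any real tool's API:&lt;/p&gt;

```python
# Hypothetical sketch: a two-way registry linking DSL tokens to diagram model
# nodes. SyncRegistry and the callbacks are illustrative names only.

class SyncRegistry:
    def __init__(self):
        self.code_to_model = {}   # AST token id -> model node id
        self.model_to_code = {}   # model node id -> AST token id

    def add_from_code(self, token_id, make_node):
        """Code changed: ensure a model node exists for this token."""
        if token_id not in self.code_to_model:
            node_id = make_node()              # create the node in the diagram model
            self.code_to_model[token_id] = node_id
            self.model_to_code[node_id] = token_id
        return self.code_to_model[token_id]

    def add_from_diagram(self, node_id, emit_line):
        """Diagram changed: ensure a declarative DSL line exists for this node."""
        if node_id not in self.model_to_code:
            token_id = emit_line()             # generate a line of DSL, get its token
            self.model_to_code[node_id] = token_id
            self.code_to_model[token_id] = node_id
        return self.model_to_code[node_id]
```

&lt;p&gt;The key property is idempotence: re-running either direction against an already-linked pair must not create duplicate nodes or duplicate lines of DSL.&lt;/p&gt;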

&lt;h2&gt;
  
  
  Enough
&lt;/h2&gt;

&lt;p&gt;I just made public a repo from 2022, with some Claude edits to productionise it. It feels good to be able to get out of the weeds, or just get unblocked on a fun idea I had back then. These projects are a bit of a graveyard for me - each spun out into a business in my head and I miss the guy who spent hours demo'ing to friends on Zoom during covid, talking through MVPs and ICPs for something with logically zero chance of ever launching.&lt;/p&gt;

&lt;p&gt;"If only the world had a modelling tooling for engineers, it would be the next X!"&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/psedge" rel="noopener noreferrer"&gt;
        psedge
      &lt;/a&gt; / &lt;a href="https://github.com/psedge/modeld" rel="noopener noreferrer"&gt;
        modeld
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      nobody gives a fuck about your models, pal
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/psedge/modeld/./static/modeld.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fpsedge%2Fmodeld%2F.%2Fstatic%2Fmodeld.png" alt="modeld"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Make every model interactive, declarative, and programmable.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;A bi-directional, dual-representation modeling tool built on &lt;code&gt;draw.io&lt;/code&gt; and YAML; editing one updates the other in real time! Comes with an MCP server to assist with no/low-human workflows.&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/psedge/modeld/./example.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fpsedge%2Fmodeld%2F.%2Fexample.png" alt="example diagram"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt; ▐▛███▜▌   Claude Code v2.1.76
▝▜█████▛▘  Sonnet 4.6 · Claude Pro
  ▘▘ ▝▝    &lt;span class="pl-k"&gt;~&lt;/span&gt;/modeld
❯ ▎ Using the modeld MCP tools, create a minimal house security threat model with these elements:

  ▎ - A Thief (actor) outside the house, attempting entry through the Front Door (app)
  ▎ - A House boundary containing the Front Door and a Bedroom (boundary, trust: high) — the bedroom represents a locked trust zone
  ▎ - A Safe (app, trust: critical) inside the Bedroom, containing the family heirlooms

  ▎ Connections: Thief → Front Door (&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;attempts entry&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;), Front Door → Bedroom (&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;path through&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;).

  ▎ Follow the CLAUDE.md layout guidance to plan coordinates before&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/psedge/modeld" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;I'll Venmo $10 to the first person to run the Docker image and open a GitHub issue.&lt;/p&gt;

</description>
      <category>design</category>
      <category>programming</category>
      <category>softwareengineering</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Authority in Security</title>
      <dc:creator>a little Fe2O3.nH2O-y</dc:creator>
      <pubDate>Thu, 21 Oct 2021 10:03:05 +0000</pubDate>
      <link>https://dev.to/psedge/authority-in-security-1632</link>
      <guid>https://dev.to/psedge/authority-in-security-1632</guid>
      <description>&lt;p&gt;I've run into this several times now and I'm trying to formulate my thoughts on the problem. There's a strange dichotomy working in Security: you're tasked to review and ask questions about systems you don't own, and to give requirements or suggest improvements. I want to be a collaborator with owners, working together in the design of a service, hopefully thinking about a problem that they were too close to see - the equivalent of proof-checking an essay. I'm seen though, as someone who enforces rules of their own creation, coming down on owners to exercise authority - the teacher with a birch ruler.&lt;/p&gt;

&lt;p&gt;Some of the interactions I've had felt jarring because I got responses implying I wasn't a positive influence but rather a blocking one - or at worst a detriment: loading on work where it wasn't needed, asking questions I didn't have the right or seniority to ask, adding requirements where the team was already stretched. The conversation goes something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Me: hey, what do you think about doing X?
PO: no, it works like this and there's no problem here
Me: maybe X would be good, it would prevent Z
PO: we can't do X, it's not &amp;lt;needed|workable|a good solution&amp;gt;
Me: ok, X might not be right then, how about Y instead?
PO: where is this coming from, we didn't do X or Y in &amp;lt;other service|other company&amp;gt;, why should we do it here?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That exchange is my fault. It's difficult to stop yourself taking it personally, to avoid internalising the idea that you're blocking or nagging them. To do so is to forget that the response could be the result of any number of things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how Security is seen has a long history: maybe this person worked with a difficult or unsympathetic Security team before and was scarred by the experience&lt;/li&gt;
&lt;li&gt;maybe they're worried that doing X or Y will lead to missed deadlines or wasted time&lt;/li&gt;
&lt;li&gt;they felt that we didn't provide valuable input on a previous problem, and so lost some faith in our ability&lt;/li&gt;
&lt;li&gt;they have external objectives and pressures that don't align with security's right now, and plan to come back to it.&lt;/li&gt;
&lt;li&gt;maybe your idea is &lt;em&gt;genuinely&lt;/em&gt; misfounded, and they're just trying to tell you why, but they're swamped and don't really have time to walk through it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Taken verbatim from a recent conversation I had, after I had dropped into a channel to ask an implementation question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Part of the problem is that security has a lot of authority and it's hard to be casual at the same time." &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I don't want to exercise any authority; it's assumed and projected as a function of being in this team. I empathise that speaking to someone who can escalate an issue to get their way over yours is threatening or unpleasant - I'd be defensive with that person too. &lt;/p&gt;

&lt;p&gt;I need to be more aware of that prejudice, and explain myself better - understanding that I can't just appear and ask questions that imply I might be &lt;em&gt;requiring&lt;/em&gt; changes. I've started doing this more but it's fairly hard to sound natural: "Hey, I came across X and was just wondering how it works", then expanding the conversation to the point I'm concerned about. To be fair, it's leading to more amicable conversations, and hopefully I'm changing the reputation of the team a little in the process. The desired end state is that product teams see us as a resource they can use to enhance or validate the quality of their own output. That's a difficult state to reach, and it absolutely doesn't happen accidentally.&lt;/p&gt;

&lt;p&gt;It's funny that engineering has lore around feedback loops, observability, abstraction etc. - which I have to be aware of and understand to take part in the process - but engineers aren't aware of any thought work from Security in the last 10 years. We know about giving feedback in automation and not dropping requirements last minute; we made up our own terms for how to work with teams, and people have written textbooks about how Security can integrate more tightly and cause less friction while keeping standards up. It's well understood that being too adversarial and combative leads to employees and engineers taking shortcuts to get around controls, increasing risk - we absolutely think and care about UX and developer psychology! The golden rule applies: "Don't be an asshole", and maybe that's the area that needs some work.&lt;/p&gt;

&lt;p&gt;We have requirements and targets like any team: we are required to take responsibility for the security of things we have minimal exposure to, with limits on the depth we can explore in a fixed-term engagement. That breeds misunderstandings and communication issues, and I guess this has been a visceral topic of experience and growth for me personally. My takeaway is that I need to exude curiosity, not authority.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;EDIT (5yrs on).&lt;/em&gt;&lt;br&gt;
This never got any easier, maybe it's arrogance, maybe just dogmatism setting in deeper every year.&lt;/p&gt;

&lt;p&gt;AI seems like a whole new tempo of changes, projects, and concepts to stay relevant with - demanding ever-deeper understanding of everything from Figma plugin permissions to KSQLdb optimisations to the intricacies of exponent handling in forex markups. The industry is accelerating, not maintaining - and it's hell to keep up. But the core of the problem is this: being expected to be current enough to walk into an arbitrary room and give an informed decision on security primitives, whilst not asking questions or giving recommendations with weight, seems like a fallacy to me.&lt;/p&gt;

</description>
      <category>culture</category>
      <category>security</category>
    </item>
    <item>
      <title>Fundamentals of Vulnerability Management with Open Source Tools</title>
      <dc:creator>a little Fe2O3.nH2O-y</dc:creator>
      <pubDate>Thu, 27 Aug 2020 14:49:26 +0000</pubDate>
      <link>https://dev.to/psedge/fundamentals-of-vulnerability-management-with-open-source-tools-56n9</link>
      <guid>https://dev.to/psedge/fundamentals-of-vulnerability-management-with-open-source-tools-56n9</guid>
      <description>&lt;p&gt;tl; dr: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Teams of 1: Use minimal images, add only the software you need, add a cronjob to auto-update using the package manager, or in the case of containers: pin your Dockerfile to the latest tag and redeploy as often as possible.&lt;/li&gt;
&lt;li&gt;Teams of 10: Automate your build process to create a golden image or container base image with best security practices.&lt;/li&gt;
&lt;li&gt;Teams of 100: Automate and monitor as much as possible. Try to keep your developers excited about patching, and start getting strict about not letting anything but your approved images go into production. Security team responsible for updates and patching strategy.&lt;/li&gt;
&lt;li&gt;Teams of 1000: Dedicated team for building, updating, and pentesting base images. Demand full E2E automation. Monitor in realtime and define RRDs with consequences.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Lately I've spent some time thinking about Vulnerability Management, hereafter 'vulnmgmt' - a major blue-team responsibility covering keeping packages, kernels, and OSes up-to-date with patches. Generally, if you deploy a VM with the latest distribution of all software, there will be a public CVE for some of it within a few weeks, which leaves you vulnerable. While this sounds like a huge problem, I don't believe vulnmgmt should be anywhere near the top of the priority list for a team just starting to improve their security posture - there can be as many 10.0 CVEs on a box as you like if it is airgapped, sat in a closet somewhere collecting dust. Like all of security, this is a game of risk considerations - I would prefer to spend time and energy on ensuring strong network segmentation and good appsec than on vulnmgmt. Inevitably though, it does become a priority - especially because it's an easy business sell to auditors, customers, etc.&lt;/p&gt;

&lt;p&gt;This is a huge industry, with numerous products and solutions offered by most major vendors in the Cyber space - which of course means there's a lot of bullsh*t. I'm a big believer in build-not-buy as a general approach, although managers and senior engineers seem keen to tell me this will change as I get older/higher up. In short, I think Cyber is stuck in the 2000s era of product development, trying to come up with catch-all solutions which offer a silver bullet, rather than keeping products and feature sets in line with the Unix philosophy of 'do one thing and do it well' and promoting interoperability. We should try to kill the idea that spending €100,000/yr on a product means we have good security.&lt;/p&gt;

&lt;p&gt;For a brief primer on vulnmgmt in an engineering-led organisation, we have several types of compute resources we want to secure: likely bare-metal or virtual, and containers. For each of those we have two states, pre- and post-deploy. Some of these resources may have very short lifetimes eg. EC2 instances in an autoscaling group, while some might be long-running eg. a database instance for some back-office legacy app. N.B. Most cloud-native organisations will have a reasonable amount of serverless code as well, which I won't touch on here.&lt;/p&gt;

&lt;h3&gt;
  
  
  VMs: Pre-Deploy
&lt;/h3&gt;

&lt;p&gt;Bare-metal and virtual instances will be deployed from an image, either from a generic OS distribution or with a 'Golden Image' AMI/snapshot (take a generic, use something like Packer or Puppet to run some setup steps, pickle it into a reusable artifact). In this state, the possible vulnerability sources are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From the generic base image, more likely if it is out-of-date&lt;/li&gt;
&lt;li&gt;From any packages or modifications made to the base during initialization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Containers are conceptually similar at this stage, except the base image isn't a single artifact but multiple layers comprising the container image that we're extending. Many applications tend to extend 'minimal' images (see alpine, ubuntu-minimal, debian-buster etc.) which focus on small image size, but it is entirely possible that by the time we reach application images we have 10+ layers, each of which is a new opportunity to have packages pinned to a specific, vulnerable version.&lt;/p&gt;

&lt;p&gt;At this stage we should be focusing on a few things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We do not use public / non-hardened base images.

&lt;ul&gt;
&lt;li&gt;They're unlikely to be set up with defaults which are applicable for our use-case&lt;/li&gt;
&lt;li&gt;It is so cheap to maintain a clone of a public image, and it ensures we start in a clean, healthy state. The further along in the process we apply controls, the more likely they are to fail. Catch problems early.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;We should be publishing our own base images as frequently as possible, pre-updated and upgraded, running the latest OS version and package upgrades.&lt;/li&gt;
&lt;li&gt;These images should be pre-enrolled into whatever monitoring/SIEM programs we're running, reducing workload for the end-user of them.&lt;/li&gt;
&lt;li&gt;We should use static scanners during this process, and prevent the publishing of images which contain &lt;em&gt;fixable&lt;/em&gt; vulnerabilities. Here is an awesome description of &lt;a href="https://tech.ovoenergy.com/catching-security-vulnerabilities-in-the-build-pipeline/"&gt;OVO's approach&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
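&lt;p&gt;To make the last point concrete, here's a minimal sketch of a publish gate over a scanner report. The field names assume a Trivy-style JSON report (&lt;code&gt;Results[].Vulnerabilities[]&lt;/code&gt;, each entry carrying &lt;code&gt;Severity&lt;/code&gt; and &lt;code&gt;FixedVersion&lt;/code&gt;) - verify them against the output of whichever scanner and version you actually run:&lt;/p&gt;

```python
# Sketch of a CI gate over a static scanner's JSON report. The report shape
# is assumed (Trivy-style); check field names against your scanner's output.

SEVERITIES = ["UNKNOWN", "LOW", "MEDIUM", "HIGH", "CRITICAL"]

def fixable_vulns(report, min_severity="HIGH"):
    """Return IDs of vulnerabilities at/above min_severity that have a fix available."""
    threshold = SEVERITIES.index(min_severity)
    found = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            has_fix = bool(vuln.get("FixedVersion"))
            severe = SEVERITIES.index(vuln.get("Severity", "UNKNOWN")) >= threshold
            if has_fix and severe:
                found.append(vuln["VulnerabilityID"])
    return found
```

&lt;p&gt;In CI you'd &lt;code&gt;json.load&lt;/code&gt; the report file and fail the build whenever the returned list is non-empty - blocking only &lt;em&gt;fixable&lt;/em&gt; findings keeps the gate actionable.&lt;/p&gt;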

&lt;p&gt;Luckily there's a multitude of tools at our disposal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt;, &lt;a href="https://puppet.com/"&gt;Puppet&lt;/a&gt;, &lt;a href="https://www.chef.io/"&gt;Chef&lt;/a&gt; - build-as-code providing strong repeatability and consistency.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.packer.io/"&gt;Hashicorp Packer&lt;/a&gt;, &lt;a href="https://www.vagrantup.com/"&gt;Vagrant&lt;/a&gt;, &lt;a href="https://aws.amazon.com/codebuild/"&gt;AWS CodeBuild&lt;/a&gt; - create Golden Images or deploy during CI and publish snapshots.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cloudinit.readthedocs.io/en/latest/index.html"&gt;cloud-init&lt;/a&gt; - the gold standard of consistent Unix initialization.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://vuls.io/"&gt;Vuls&lt;/a&gt; - agentless scanner for Unix systems, checking package versions against NVD. People will get tired of me talking about this project but it's such a great concept.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/osquery/osquery"&gt;osquery/osquery&lt;/a&gt; - query your VM like it's a SQL db&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/wazuh/wazuh"&gt;wazuh/wazuh&lt;/a&gt; - I haven't used this personally, I've heard good things.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/Jsitech/JShielder"&gt;jsitech/JShielder&lt;/a&gt;, &lt;a href="https://github.com/CISOfy/Lynis"&gt;CISOfy/Lynis&lt;/a&gt;, &lt;a href="https://github.com/lateralblast/lunar"&gt;lateralblast/lunar&lt;/a&gt; - automated Linux hardening/compliance checkers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My perfect scenario looks something like this: we have a hardened base image which is rebuilt on a daily/weekly basis using Packer. When that gets published to staging, we use Lambda to spin up an instance with it and perform whatever scans we want against it, using either Vuls or Lynis. If those tools pass, we continue the build, publishing the image to production. If not, we report the results and remediate the issues. We should also validate that the instance connected successfully to our SIEM, and maybe we could attempt a portscan or try to drop a shell to verify it's catching low-hanging fruit.&lt;/p&gt;

&lt;h3&gt;
  
  
  VMs: Post-deploy
&lt;/h3&gt;

&lt;p&gt;This is where things get more complex, because our assets are now in the wild, becoming more outdated and unpatched by the day. The longer we are in this state, the further we deviate from our nice, safe, clean starting point - so a lot of effort should go into reducing the expected lifetime of any single asset before redeploy. I would preach more for ensuring repeatable infrastructure than for perfect monitoring and patching of assets, but unfortunately that's just not a reality in a lot of contexts. Some guy will always keep a pet server that you can't upgrade because 'it might break something and it's mission-critical'.&lt;/p&gt;

&lt;p&gt;For VMs we will be relying on some solution to continuously monitor drift from the initial state and report back, so that we can keep track of it. Previously I've used Vuls for this purpose, but if you have something like the &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html"&gt;AWS SSM&lt;/a&gt; agent installed on the instance then it's possible to run whichever tools best fit. This can be a minefield, as you'll have to either 1) enable inbound access to the machine, increasing risk, or 2) upload results from the box to some shared location. I'd prefer #2, as it's less complex from a networking ACL standpoint - but there could be complications with that too. &lt;/p&gt;

&lt;h2&gt;
  
  
  Containers
&lt;/h2&gt;

&lt;p&gt;Containers at runtime are slightly harder: if you've got a reasonably hardened base image and runtime environment, then inbound shell access is likely forbidden, and if it's not, you're unlikely to have access to the tools you need to perform runtime heuristic tests. If the containers are running within something like &lt;a href="https://istio.io/"&gt;Istio&lt;/a&gt; it is easier to extract logs and metrics, so it would be a good idea to integrate these into whatever alerting engine we're using. There are numerous k8s and Docker scanners which check configuration against CIS benchmarks to determine the health of a pod/container:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/aquasecurity/kube-bench"&gt;KubeBench&lt;/a&gt; is particularly popular&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/octarinesec/kube-scan"&gt;KubeScan&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As well as hardening the environment, we can scan our images for vulnerabilities at any point. The results of a static container image analysis tool will differ between build time and some point in the future when it's running, so we can periodically re-run the checks we ran at build time to establish whether that container has become vulnerable to any new CVEs since we launched it. If so, we probably want to rotate in a version which has been patched. After trying &lt;code&gt;docker scan&lt;/code&gt; (backed by Snyk) and some inbuilt Docker Trusted Registry (DTR) scanners (ECR+GCR), I strongly prefer &lt;a href="https://github.com/quay/clair"&gt;quay/clair&lt;/a&gt; and &lt;a href="https://github.com/aquasecurity/trivy"&gt;aquasec/trivy&lt;/a&gt;. They're only part of the solution though, telling you what vulnerabilities exist at the surface - which is great for measuring overall progress but not for determining where you should focus. Container images are composed of a series of layers, each executing some command or installing a set of files etc. A vulnerability can either be added or removed by a layer: essentially, we could have 0 vulnerabilities in our base and add them later in the image, or we could have 100s in the base which are all fixed later in the image.&lt;/p&gt;

&lt;p&gt;When it comes to operationalising the fixing of images, there seem to be two approaches:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Go to teams, show them their results, and tell them to fix it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This could either be by nudging teams with alerts, or by creating dashboards and giving POs ownership of their metrics.&lt;/li&gt;
&lt;li&gt;By putting responsibility directly on teams for the images they run, you get close to the problem. &lt;/li&gt;
&lt;li&gt;It's likely there will be some duplicated effort, and it requires strong communication about the problem and education on how to fix it.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Determine which base images are used, and fix those.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To do this, we need a way to link a leaf image (one that runs) to its parents. We can do that by inspecting the manifest and keeping a trie of which images have layer sets that are subsets of ours. That's quite expensive, and I haven't seen any open-source solution.&lt;/li&gt;
&lt;li&gt;Once you have that information though, you can focus efforts. Depending on the team who owns a base image you can delegate the work, and have a large impact very quickly (as many images likely inherit from one or two vulnerable base images).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
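&lt;p&gt;As a sketch of the second approach: if each image is reduced to its ordered list of layer digests (as in a registry manifest), base images can be recovered by prefix matching, since a base's layers always appear, in order, at the start of any image built from it. This is illustrative, not an existing tool:&lt;/p&gt;

```python
# Illustrative sketch: recover base images by layer-digest prefix matching.
# Assumes each image has been reduced to an ordered list of layer digests,
# e.g. taken from a Docker v2 manifest's "layers" field.

def base_images(leaf_layers, candidates):
    """Return candidate image names whose layer list is a prefix of the leaf's,
    most-derived (longest) base first."""
    matches = [
        name for name, layers in candidates.items()
        if len(layers) <= len(leaf_layers) and leaf_layers[:len(layers)] == layers
    ]
    # Longest match first: the most-derived base is the most useful one to fix.
    return sorted(matches, key=lambda name: len(candidates[name]), reverse=True)
```

&lt;p&gt;Fixing the longest match first is the high-leverage move, since every image downstream of it inherits the fix on its next rebuild.&lt;/p&gt;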

&lt;p&gt;Whether it's an engineering or a wider business effort to fix container vulnerabilities, it should be visible. When I started looking at this problem I thought that engineers would understand the risks associated with vulnerabilities in production: how attack chains work, and the theories of defense-in-depth. That's probably not the situation, and education is the most time-consuming part of all of this.&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;Rather than submitting to extortionate subscription fees and vendor lock-in, we can achieve a great security posture for VMs and containers using open-source tools and a little engineering. As a result we'll be a lot more confident in our claims, and will have developed a deeper understanding of our environments, allowing us to deploy tools which are genuinely extensible and well-suited to our use-cases. These are hard problems involving a combination of technical and meat work, and they require planning and careful execution on both fronts.&lt;/p&gt;

</description>
      <category>vulnerabilitymanagement</category>
      <category>security</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Automating my personal finance management</title>
      <dc:creator>a little Fe2O3.nH2O-y</dc:creator>
      <pubDate>Wed, 20 May 2020 10:16:29 +0000</pubDate>
      <link>https://dev.to/psedge/automating-my-personal-finance-management-516b</link>
      <guid>https://dev.to/psedge/automating-my-personal-finance-management-516b</guid>
      <description>&lt;p&gt;For the last few years I've been using &lt;a href="https://www.yolt.com/"&gt;Yolt&lt;/a&gt;, which is an awesome mobile-only account aggregation tool from ING, which works by connecting to your various accounts via Open Banking APIs and pulling all your spending / balances into a single coherent view. This has been working great, until recently as I've moved to Sweden which has caused several problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The integrations with Swedish banks are non-existent; they haven't announced any plan to add Nordea, SEB, or Swedbank.&lt;/li&gt;
&lt;li&gt;I'm actually doing a lot of my spending via TransferWise Borderless accounts, because I've procrastinated setting up a Swedish bank account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So my budget tracking has gone out of the window and I've resorted to manually entering transactions from TransferWise into a Google Sheet at the end of every month, categorising spending as best as I can. After doing this a few times I've decided it's time to automate it. My broad goal is to have a single dashboard, ideally web-based, where I can see all balances from my various current and savings accounts in both the UK and Sweden, as well as the value of any other assets I own. I looked into YNAB which seems like a great fit, except that I'd still need to write some of the integrations myself on top of paying the subscription fee, which at $80 a year seems like a cop-out. I'm doing this to save money, not commit to another recurring outgoing. &lt;/p&gt;

&lt;p&gt;Enter &lt;a href="https://www.firefly-iii.org/"&gt;Firefly III&lt;/a&gt; - an open-source budgeting tool that you self-host. Of course, paying for a $5/month DigitalOcean VM is nearly as bad as the YNAB subscription, and I'm only going to be checking this a few times a week, so having the box available all the time seems pretty wasteful. Hosting it on Heroku on a free-tier dyno, however, means that I'm only going to be charged for the time it's in use, and only after the first 550 hours/month. Caveats with this model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It'll take a few seconds for the instance to start when I make a request, which could cause issues with the automation. If the calls time out because Heroku took too long, I'll have to make the request again.&lt;/li&gt;
&lt;li&gt;I'm not sure about persistence. Heroku gives a free-tier Postgres instance as well, and according to the Firefly documentation that's good for about 10,000 transactions. It might be necessary at some point to pay for a bigger DB, potentially AWS RDS which has a substantially more generous free-tier (at least for the first year).&lt;/li&gt;
&lt;/ul&gt;
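&lt;p&gt;The first caveat is easy to paper over with a small retry wrapper around whatever HTTP call the automation makes - the names here are illustrative:&lt;/p&gt;

```python
import time

# Illustrative retry wrapper for the dyno cold-start problem: the first call
# may time out while Heroku boots the instance, so back off and try again.

def call_with_retry(fn, attempts=3, base_delay=1.0):
    """Call fn(); on failure wait base_delay * 2**attempt and retry."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                      # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))
```

&lt;p&gt;Wrapping the calls as e.g. &lt;code&gt;call_with_retry(lambda: requests.get(url, timeout=10))&lt;/code&gt; covers the wake-up window.&lt;/p&gt;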

&lt;h3&gt;
  
  
  Integrations
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Monzo
&lt;/h4&gt;

&lt;p&gt;I have been a loyal customer of Monzo for some time: I love their UI and customer-centric feature development, Pots is a great concept, and I love the flexibility of it all. When I was actively spending with them, the push notifications were a breath of fresh air coming from old-school banks. They also have a really extensive API, which I'll be querying from AWS Lambda (free invocations), pushing the resulting TXs and balances to Heroku.&lt;/p&gt;
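&lt;p&gt;The core of that Lambda is just a field mapping. This sketch assumes Monzo's transaction shape (integer amounts in minor units, negative for spending) and Firefly III's &lt;code&gt;POST /api/v1/transactions&lt;/code&gt; payload - double-check both against the current docs before trusting it:&lt;/p&gt;

```python
# Sketch of the Lambda body: map one Monzo transaction onto a Firefly III
# payload. Field names follow each API's public docs as I understand them -
# Monzo amounts are integer minor units (pence), negative for spending.

def monzo_to_firefly(tx):
    amount = tx["amount"] / 100                    # pence -> pounds
    return {
        "transactions": [{
            "type": "withdrawal" if amount < 0 else "deposit",
            "amount": f"{abs(amount):.2f}",        # Firefly takes a string amount
            "description": tx.get("description", ""),
            "date": tx["created"][:10],            # ISO timestamp -> YYYY-MM-DD
            "category_name": tx.get("category", ""),
        }]
    }
```

&lt;p&gt;Posting is then a single authenticated &lt;code&gt;POST&lt;/code&gt; of the returned payload to the Firefly instance.&lt;/p&gt;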

&lt;h4&gt;
  
  
  2. Wealthify
&lt;/h4&gt;

&lt;p&gt;This is a new addition to my portfolio which I'm really happy with, as they offer an ethical S&amp;amp;S ISA with a decent return. Sadly, there's no API and very little in the way of integrations beyond Yolt, with whom they have an exclusive partnership. They do however have a web interface, so I'll be writing a Selenium bot to log in and scrape the balances of my ISA.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. TransferWise
&lt;/h4&gt;

&lt;p&gt;Although this has been a lifesaver in terms of flexibility during my migration to Sweden, I don't plan on using them forever - the fees are kind of high and there is nothing in terms of personal finance management; they really are just holding money, swapping it in and out of their reserves, and passing the costs on to the consumer. They have an API which I'll be querying a few times a week and pushing into Firefly. They are missing any kind of transaction categorisation, which is a shame, but I'll come back to this later.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Stocks
&lt;/h4&gt;

&lt;p&gt;It's not really necessary to integrate directly with brokers as I know how much of each stock I own, and only rarely make significant changes, so I'm happy for this to be a manual process of logging into the AWS console and changing some symbol-quantity pairs I have set as ENV variables. Once a week I can fetch those amounts in their currencies from the Yahoo Finance API, convert into local currency, and push to Firefly.&lt;/p&gt;
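&lt;p&gt;A sketch of that weekly job - note the Yahoo Finance quote endpoint is unofficial and changes without notice, and the symbol-quantity pairs below are hypothetical stand-ins for the real ENV variables:&lt;/p&gt;

```shell
# Hypothetical holdings, in the same symbol:quantity format as the ENV vars
HOLDINGS="AAPL:3 VWRL.L:10"

# Unofficial, undocumented endpoint - treat as an assumption, not a contract
quote_url() {
  echo "https://query1.finance.yahoo.com/v7/finance/quote?symbols=${1}"
}

for pair in $HOLDINGS; do
  sym=${pair%%:*}    # everything before the first colon
  qty=${pair##*:}    # everything after the last colon
  echo "fetch $(quote_url "$sym"), multiply price by $qty, convert, push to Firefly"
done
# real fetch would look like:
#   curl -s "$(quote_url AAPL)" | jq '.quoteResponse.result[0].regularMarketPrice'
```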

&lt;h4&gt;
  
  
  &lt;em&gt;5. Slack&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;All of the above worked really nicely, and it was a super fun 2-day project to get it all set up and working. I've kind of achieved my goal of unifying my budgets which is honestly really useful. Of course, no project would be complete without a Slack integration - so this is what gets pinged to me each evening:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--du-lLdRd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/w98507adn7f7v19hi8m3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--du-lLdRd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/w98507adn7f7v19hi8m3.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Moving forwards, I'd love to figure out some way to determine when it was worthwhile making transfers between Monzo and TransferWise in different currencies, but this is a whole project in itself. I find it amazing that it takes this much work to get to a point where I have a single-pane-of-glass view of my finances, and wonder how normal people are expected to have any kind of oversight of it all considering the lack of interoperable APIs or systems. This is the premise of OpenBanking, but there's a long way to go with that.&lt;/p&gt;

</description>
      <category>budgeting</category>
      <category>pfm</category>
      <category>automation</category>
      <category>openbanking</category>
    </item>
    <item>
      <title>DevSecCon 2019: CI/CD write-up</title>
      <dc:creator>a little Fe2O3.nH2O-y</dc:creator>
      <pubDate>Mon, 06 Jan 2020 17:56:18 +0000</pubDate>
      <link>https://dev.to/psedge/devseccon-2019-ci-cd-write-up-k51</link>
      <guid>https://dev.to/psedge/devseccon-2019-ci-cd-write-up-k51</guid>
      <description>&lt;p&gt;In December I was lucky enough to attend DevSecCon 2019 in London through work, and had a blast. It was my first non-language/framework conference and it was really interesting seeing the variety of topics that were on the agenda.&lt;/p&gt;

&lt;p&gt;My favourite session, though, was &lt;strong&gt;Securing the Sugar out of Azure DevOps&lt;/strong&gt;, given by Colin Domoney of Veracode. I hadn't used Azure Pipelines before, so a lot of it was me just getting used to the ACL system, putting a basic pipeline together, and hearing Colin talk about some of the possibilities they'd explored regarding different security practices in CI/CD pipelines. I took notes and thought I'd share some of the learning from it here.&lt;/p&gt;

&lt;p&gt;Our two aims are to shift left as far as possible (bad news doesn't age well) and automate absolutely everything we can (don't do anything manually three times).&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets Checking
&lt;/h2&gt;

&lt;p&gt;Public GitHub repos are constantly scanned for credentials and I've personally committed a few myself, only to have my (friend's, sorry Chris) account for some service get locked because of it. We can use a tool like &lt;a href="https://github.com/dxa4481/truffleHog"&gt;TruffleHog&lt;/a&gt; in our pre-commit hook to make sure we haven't committed anything personal. Of course, our .gitignores should be checked and could go through manual approval for changes.&lt;/p&gt;
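&lt;p&gt;Wiring TruffleHog into a pre-commit hook only takes a few lines. A sketch, assuming truffleHog v2 is installed via pip and on the PATH:&lt;/p&gt;

```shell
# Install a pre-commit hook that scans the local repo for secrets.
# Assumption: truffleHog v2 (pip install truffleHog) is available on PATH.
mkdir -p .git/hooks
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# A non-zero exit from the hook blocks the commit.
trufflehog --regex --entropy=True "file://$(pwd)" || {
  echo "possible secret detected - commit aborted" >&2
  exit 1
}
EOF
chmod +x .git/hooks/pre-commit
```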

&lt;h2&gt;
  
  
  GitOps: Push all the Things
&lt;/h2&gt;

&lt;p&gt;Briefly touched on in that session, but something I picked up at another DevSec meetup was the concept of GitOps, or infrastructure-as-code taken to the extreme. The more we have defined as code in our repositories, the more we can instantly and easily validate and verify earlier in the process. The same goes for using services which have good APIs - if we can pull a VPC config from the AWS CLI and validate that only the expected ports are open, great!&lt;/p&gt;
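&lt;p&gt;That port check can be a few lines of shell in a pipeline step. The group ID and allow-list here are hypothetical, and the live data would come from the AWS CLI call shown in the comment:&lt;/p&gt;

```shell
# In CI the live list would come from (assuming credentials/region are set):
#   aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
#     --query 'SecurityGroups[0].IpPermissions[].FromPort' --output text
OPEN_PORTS="443 22"    # stand-in for the CLI output above
ALLOWED="443"          # the ports we expect to be open

FAIL=0
for port in $OPEN_PORTS; do
  case " $ALLOWED " in
    *" $port "*) ;;                                # allow-listed, fine
    *) echo "unexpected open port: $port"; FAIL=1 ;;
  esac
done
echo "validation result: FAIL=$FAIL"               # non-zero fails the build
```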

&lt;p&gt;On the topic of secrets, one tool that I haven't played with but got an honourable mention was &lt;a href="https://github.com/AGWA/git-crypt"&gt;AGWA/git-crypt&lt;/a&gt;, which uses entries in the repository's .gitattributes file to configure which files should be transparently encrypted on commit and decrypted on checkout. A really cool concept: our devs can develop and push application secrets like any other files, and they'll be encrypted in our repository, staying that way for anyone without the key!&lt;/p&gt;
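&lt;p&gt;The setup is mostly a couple of .gitattributes lines; the paths below are illustrative, and the key itself is created by running &lt;code&gt;git-crypt init&lt;/code&gt; inside the repo:&lt;/p&gt;

```shell
# Mark paths for transparent encryption via .gitattributes.
# (The key is generated separately with `git-crypt init`.)
cat >> .gitattributes <<'EOF'
secrets/** filter=git-crypt diff=git-crypt
*.pem filter=git-crypt diff=git-crypt
EOF
cat .gitattributes
```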

&lt;h2&gt;
  
  
  Open-Source Scanning
&lt;/h2&gt;

&lt;p&gt;Done to death, but essential. These are tools which check the versions of any open-source components we're using, and if a signature has been found to contain vulnerabilities we stop the build. These broadly fall into the categories of image scanners like &lt;a href="https://github.com/docker/docker-bench-security"&gt;docker/docker-bench-security&lt;/a&gt; and dependency checkers like &lt;a href="https://www.owasp.org/index.php/OWASP_Dependency_Check"&gt;OWASP Dependency Checker&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I swear there's another NPM/GitHub/X account takeover or malicious injection article topping HackerNews every other week. We want to take as much community-sourced intelligence as possible, and these are a great source of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Static Application Security Testing (SAST)
&lt;/h2&gt;

&lt;p&gt;I was recently asked how I'd do SAST in an environment where your company can't simply throw money at the problem and I was speechless. It hadn't occurred to me that budget was a concern for some Cyber departments, and I cobbled together an answer about open-source alternatives to Fortify and Checkmarx, making a note to look more into this scene. Depending heavily on language, some &lt;a href="https://www.owasp.org/index.php/Source_Code_Analysis_Tools"&gt;contenders&lt;/a&gt; would be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;go: &lt;a href="https://github.com/securego/gosec"&gt;securego/gosec&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;java: &lt;a href="https://find-sec-bugs.github.io/"&gt;find-sec-bugs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;python: &lt;a href="https://pypi.org/project/bandit/"&gt;bandit&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;php: &lt;a href="https://www.ripstech.com/"&gt;RIPS&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;js: ??&lt;/li&gt;
&lt;/ul&gt;
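&lt;p&gt;Any of these can gate a build step. A sketch using gosec as the example - the install path and report name are assumptions, not a prescribed setup:&lt;/p&gt;

```shell
# Run gosec over a Go module and fail the step on findings.
# Assumption: gosec installed via
#   go install github.com/securego/gosec/v2/cmd/gosec@latest
run_sast() {
  # non-zero exit on findings; JSON report kept as a build artifact
  gosec -fmt=json -out=gosec-report.json ./...
}
# in the pipeline step:
#   run_sast || exit 1
```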

&lt;p&gt;It goes without saying, but we should focus heavily on unit test-based philosophies in our software development for a lot of reasons - our SAST stack is just there to catch the leftovers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dynamic Application Security Testing (DAST)
&lt;/h2&gt;

&lt;p&gt;At my company we have an amazing pentest team whose time gets booked out for every significant release of a project, unless a member of the sec team pre-approves the change. DAST tools automate some of that work: they typically involve spinning up a container with a version of the application and attacking it, fuzzing input or looking for changes in route responses from previous scans.&lt;/p&gt;

&lt;p&gt;Functional tests can be prepared by the developers and integrated in their CI/CD - they know their code best and can protect against common attacks early. Of course, we don't want to presume they added a check on something they might not have, so coming back to the resource problem, we can go some of the way with solutions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.owasp.org/index.php/ZAP"&gt;OWASP ZAP&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://sqlmap.org/"&gt;SQLMap&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nmap.org/"&gt;nmap&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cirt.net/Nikto2"&gt;Nikto2&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
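&lt;p&gt;As one example, ZAP ships a baseline scan that drops straight into CI. A sketch, assuming Docker is available on the runner and with a hypothetical staging URL:&lt;/p&gt;

```shell
# Hypothetical staging deploy to scan
TARGET="https://staging.example.com"

zap_baseline() {
  # Passive baseline scan; exits non-zero when warnings are raised,
  # which fails the build step.
  docker run -t owasp/zap2docker-stable zap-baseline.py -t "$1"
}
# in CI:  zap_baseline "$TARGET" || exit 1
echo "would scan: $TARGET"
```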

&lt;p&gt;Commercial or open-source, we broadly have three strategies for adding SAST/DAST into CI/CD:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;em&gt;Synchronous&lt;/em&gt; - on build we run our tool and simply wait for it to finish. This is great because we can fail or succeed our build on the back of results, but not great if our tool takes an hour and we want to release multiple times a day&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Asynchronous&lt;/em&gt; - on build we kick off our tool in another process and proceed with the build. In the event of failure we flag the build (depending on CI/CD tool) as failed and rollback the release to the last stable build.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Mixed&lt;/em&gt; - we select some balance of the two, potentially running file analysis or faster tools in-band and slower tools out-of-band.&lt;/li&gt;
&lt;/ol&gt;
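&lt;p&gt;In a build script, the mixed strategy above boils down to what runs in the foreground versus the background. A sketch with stand-in functions (both tool names are hypothetical):&lt;/p&gt;

```shell
# fast_checks and slow_scan stand in for real tools.
fast_checks() { echo "lint + SAST (seconds)"; }
slow_scan()   { echo "full DAST against staging (an hour)"; }

fast_checks || exit 1      # synchronous: gate the build on the quick tools
slow_scan &                # asynchronous: flag/rollback later if it fails
SCAN_PID=$!
echo "build continues while the scan runs (pid $SCAN_PID)"
wait "$SCAN_PID"           # in real CI this wait lives in a separate job
```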

&lt;h2&gt;
  
  
  Secrets Management
&lt;/h2&gt;

&lt;p&gt;Having worked around Data Protection and Applied Cryptography for the last 2 years, I love talking to whoever will listen about the applications of &lt;a href="https://www.vaultproject.io/"&gt;HashiCorp Vault&lt;/a&gt;, &lt;a href="https://aws.amazon.com/kms/"&gt;AWS KMS&lt;/a&gt;, or &lt;a href="https://azure.microsoft.com/en-us/services/key-vault/"&gt;Azure Key Vault&lt;/a&gt;. These solutions wrap all of our application secrets in a central service, cloud or self-hosted, which we can harden and control access to. We can enforce minimum key lengths, require TLS, maintain an actual ACL for our secrets, and record activity statistics. There are lots of reasons to use at least some of HashiCorp Vault's engines; personally, my favourite use case I've heard is ultra-short-lived TLS certificates, with validity periods reduced to minutes. If the service still meets the policy's criteria, it makes a new request to Vault, where a new certificate is signed and returned, and the application starts serving it. Amazing!&lt;/p&gt;
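&lt;p&gt;With Vault's PKI engine, issuing one of those short-lived certificates is a single call. A sketch - the mount point, role name, and common name are all hypothetical, and VAULT_ADDR/VAULT_TOKEN are assumed to be configured:&lt;/p&gt;

```shell
# Issue a 10-minute certificate from Vault's PKI secrets engine.
# Assumptions: a PKI engine mounted at pki/ with a role named "web";
# all names here are placeholders.
issue_cert() {
  vault write pki/issue/web common_name="app.example.com" ttl="10m"
}
# the service re-runs this shortly before each expiry:
#   issue_cert > cert.json
```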

&lt;p&gt;There's also Secrets plugins for most self-hosted CI/CD solutions, including &lt;a href="https://plugins.jenkins.io/credentials"&gt;Jenkins Credentials&lt;/a&gt;, &lt;a href="https://github.com/gocd/gocd-file-based-secrets-plugin/releases"&gt;GoCD File-based Secrets&lt;/a&gt; and &lt;a href="https://concourse-ci.org/creds.html"&gt;ConcourseCI Credential Management&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;Thanks again to Colin, and to the organisers - it was a great event and I'd love to attend again in 2020.&lt;/p&gt;

</description>
      <category>devsecops</category>
      <category>cicd</category>
      <category>sast</category>
      <category>dast</category>
    </item>
    <item>
      <title>cron, what a load of * * * * *</title>
      <dc:creator>a little Fe2O3.nH2O-y</dc:creator>
      <pubDate>Sat, 04 Jan 2020 16:04:07 +0000</pubDate>
      <link>https://dev.to/psedge/cron-what-a-load-of-1cj</link>
      <guid>https://dev.to/psedge/cron-what-a-load-of-1cj</guid>
      <description>&lt;p&gt;During university I took on freelance jobs on PeoplePerHour to pay rent. I had a client who I developed a piece of software for. This software, now referred to as 'the script', scrapes a series of websites looking for new job adverts and regex'ing out the good stuff. Once the script completes, I hit the MailGun API with a zipped archive of CSVs. Run-of-the-mill, nothing-complex, easyjob10minutes. &lt;/p&gt;

&lt;p&gt;Nothing's ever easy though. Running a ~20min (concurrent reqs) python script and manually making the ZIP was one thing, so automating that process shouldn't be a problem, right? Wrong. Beyond the fact that the sites naturally change the markup of their job adverts which requires rewriting the script each time, I just could not figure out why cron wasn't reliable. There's a .sh to kick off the script and wait for completion... technically, each site is a different python script, written in a micro-framework style to speed up development of any future scraping work that might come my way. Broadly it looks something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;rm &lt;/span&gt;results/&lt;span class="k"&gt;*&lt;/span&gt;

python3 scrapers/job1.py &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; python3 scrapers/job2.py &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ...

&lt;span class="nb"&gt;rm &lt;/span&gt;results.zip 

zip &lt;span class="nt"&gt;-r&lt;/span&gt; results.zip results/

sh email.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Not elegant, not clever, sequential - but this is running once a week and a failure in one step fails the whole thing and alerts me, at which point I can debug and fix. Each job came to me sequentially, so that structure made sense when it was one, and two, and I've just never refactored it. If it ain't broke. Recently I interviewed at a bank which gave me pretty much the exact same assignment as a code challenge, and my proposed solution was a little tighter, to say the least.&lt;/p&gt;

&lt;p&gt;In my mind, running crontab -e and adding the following should mean that I can sip back with a cup of tea:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;30 23 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; 7 &lt;span class="nb"&gt;cd&lt;/span&gt; /root &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; sh thescript.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alas, upon :wq and a service cron restart, the BCC'd email did not greet me first thing Monday morning. Why crond? Have I not satisfactorily commanded you? No but really, what the hell. That script is being run right? I'm just going to confirm locally that I understand the core concepts here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"hi"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /var/log/thescript
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;!#service crond restart :wq&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt;: /var/log/thescript: No such file or directory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;spits tea&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"sanity"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; service cron status 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sanity
● cron.service - Regular background program processing daemon
   Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/lib/systemd/system/cron.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; vendor preset: enabled&lt;span class="o"&gt;)&lt;/span&gt;
   Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Fri 2020-01-03 16:27:59 GMT&lt;span class="p"&gt;;&lt;/span&gt; 1h 43min ago
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Still nothing in that log. Cron doesn't run? The cronfile is active, loaded, and the daemon is started and happy. Why am I not being greeted? It's great that this is locally reproducible but still, I swear this is how it works right? At this point we're not even running scripts, there's no environment necessary, and initd is root so it has write perms on /var/log/*, what the hell is going on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"hi"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /var/log/thescript
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bash: /var/log/thescript: Permission denied
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yep. Nice one psedge, you've been using Linux now as a daily driver for what, 8 years? cron can write to /tmp, owned by root:root, but not to /var/log/ owned by root:syslog. Again, initd being run as root, executing the cronjob should indeed be able to write there.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;crontab &lt;span class="nt"&gt;-e&lt;/span&gt;
&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="nb"&gt;whoami&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /tmp/who
watch &lt;span class="nt"&gt;-n0&lt;/span&gt;.1 &lt;span class="nb"&gt;cat&lt;/span&gt; /tmp/who
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;peter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Oh. Each user gets their own crontab, which generates a user-specific file in /var/spool/cron/crontabs - and the cronjob gets executed as that user. I feel like cron should have some log though, right? I remember something like /var/log/cron.log, I'm not insane. If the job (run as peter) exited with a non-0 exit code, why didn't /var/log/cron.log get created? &lt;/p&gt;

&lt;p&gt;/var/log/cron is indeed the default cron log path... on CentOS. Which is incidentally where I've done most of my sysadmin tasks, just because that was what was used by the companies I'd worked at. But for personal tasks I choose Ubuntu, especially now that they have their Minimal image for container contexts. In Ubuntu, cronjob executions get logged to /var/log/syslog by default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /var/log/syslog | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"cron|CRON"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No exit codes, so that'll have to be managed by the script being executed. The other lesson I'm taking from this is to wrap all the logic in the shell script being executed, as opposed to having multiple commands in the cronjob itself.&lt;/p&gt;
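&lt;p&gt;Wrapping everything that way might look like this - a sketch, where scrape_and_mail.sh is a hypothetical stand-in for the existing pipeline:&lt;/p&gt;

```shell
# Write a wrapper so the crontab entry stays trivial and output is logged.
cat > thescript.sh <<'EOF'
#!/bin/sh
set -eu                               # die loudly on the first failure
exec >> /tmp/thescript.log 2>&1       # cron mails are easy to miss; log instead
echo "run started: $(date)"
cd /root
sh scrape_and_mail.sh                 # hypothetical: the existing pipeline
echo "run finished: $(date)"
EOF
chmod +x thescript.sh
# the crontab entry then becomes just:
#   30 23 * * 7 /root/thescript.sh
```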

&lt;p&gt;This is kind of what I enjoy about Linux in general; you can use it as a daily driver for years and still find core gaps in your knowledge when you need to get something like this done. Although basic, it's good to go back and solidify your understanding, rather than only knowing things like AWS CloudWatch Events or GCP Function cron. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;update&lt;/strong&gt;: since writing this I've had one or two more infuriating crond experiences and find myself wishing I had a different tool - maybe one with better logging, or some kind of dry-run output. I'm not the only one, it seems &lt;a href="https://github.com/dshearer/jobber"&gt;1&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cron</category>
      <category>scheduling</category>
      <category>scraping</category>
    </item>
    <item>
      <title>Why is data valuable? Ethics of Data Privacy for GAFA</title>
      <dc:creator>a little Fe2O3.nH2O-y</dc:creator>
      <pubDate>Fri, 13 Sep 2019 13:43:55 +0000</pubDate>
      <link>https://dev.to/psedge/why-is-data-valuable-ethics-of-data-privacy-for-gafa-56f5</link>
      <guid>https://dev.to/psedge/why-is-data-valuable-ethics-of-data-privacy-for-gafa-56f5</guid>
      <description>&lt;p&gt;In the cafe my team were having a discussion about what measures each of us took to protect our data online. Some of us used popular extensions like PrivacyBadger or uBlock Origin to anonymize or remove trackers, some talked about using VPNs on public WiFi, others talked about bad experiences with NoScript, etc, and there seemed to be a general confusion about what purpose each of these things &lt;em&gt;actually&lt;/em&gt; offered. Fair enough! Even for a group of people working in Cybersecurity, the field is complex and ranging, and not everyone is super technical or interested in web specifically.&lt;/p&gt;

&lt;p&gt;I'm going to fall back to thinking about web threats like any other malware, as they fit the definition of "devices or wares that are used against the will of the user", and sort the tools my team was talking about into 3 categories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Anti-malware&lt;/strong&gt;: &lt;em&gt;eg. &lt;a href="https://noscript.net/" rel="noopener noreferrer"&gt;NoScript&lt;/a&gt;, &lt;a href="https://www.torproject.org/" rel="noopener noreferrer"&gt;TOR&lt;/a&gt;&lt;/em&gt; - 
any tool which prevents the involuntary execution of code in your environment. Whilst this is more vague, this category of tool might stop attacks like cryptominers which operate solely in the context of a web browser whilst on a website, or may aid in the deployment of an exploit kit aiming to trigger the next stage of an attack.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anti-spyware&lt;/strong&gt;: &lt;em&gt;eg. &lt;a href="https://nordvpn.com/" rel="noopener noreferrer"&gt;VPNs, &lt;/a&gt;&lt;a href="https://www.hidemyass.com/en-gb/index" rel="noopener noreferrer"&gt;proxies&lt;/a&gt;, &lt;a href="https://developers.cloudflare.com/1.1.1.1/dns-over-https/" rel="noopener noreferrer"&gt;DNS-over-HTTPS&lt;/a&gt;&lt;/em&gt; - 
any tool which prevents a 3rd party, illegal or not, from observing your traffic or otherwise sensitive information. Many of these tools seek to prevent man-in-the-middle attacks, which are typically performed by an entity with physical access to your network, and may be sold under the anti-censorship umbrella. I don't count trackers in this category, as I believe they deserve their own definition:&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anti-adware&lt;/strong&gt;: &lt;em&gt;eg. &lt;a href="https://www.eff.org/privacybadger" rel="noopener noreferrer"&gt;PrivacyBadger&lt;/a&gt;, &lt;a href="https://allaboutdnt.com/" rel="noopener noreferrer"&gt;DNT&lt;/a&gt;, &lt;a href="https://github.com/gorhill/uBlock" rel="noopener noreferrer"&gt;uBlock Origin&lt;/a&gt;, &lt;a href="https://www.mozilla.org/en-GB/firefox/facebookcontainer/" rel="noopener noreferrer"&gt;Firefox Containers&lt;/a&gt;, &lt;a href="https://duckduckgo.com" rel="noopener noreferrer"&gt;DuckDuckGo&lt;/a&gt;&lt;/em&gt; - 
any tool which prevents the display of advertising, or which deploys or retrieves devices which help in unique tracking across sites for reasons of commercial gain.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Generally it's important to know when each of these tools is necessary; you might not need to use a VPN on a safe network unless you're doing something shady, but in an airport it should be a necessity&lt;sup&gt;&lt;a href="https://www.forbes.com/sites/johnnyjet/2018/04/18/how-to-stay-safe-when-you-use-airport-wifi/" rel="noopener noreferrer"&gt;1&lt;/a&gt;&lt;/sup&gt;; you might not be able to use NoScript on a phone, but try to stick to trusted websites. For the moment I'm going to focus on the 3rd category and what those tools seek to improve: privacy. It's a hot topic at the moment, with consortiums and international bodies writing legislation left and right, and it's really been thrust into the public conversation in the last 5 years - a change for the better. If you'd told someone about tracking cookies in 2010, they'd have laughed in your face with a "who cares"!&lt;/p&gt;

&lt;p&gt;So who does care? The argument I hear most, and even partially agree with, is that software giants like Google, Amazon, and Facebook provide a service that is far from free to operate and maintain - the provision of those things we love so much is dependent on there being a business model behind it, right? "If you're not the customer, you're the product." The options here seem to be: those companies develop and provide services at a loss to drive sales of other products; they charge money for all services; or they don't provide the products at all. Let's take Google Maps as an example. The core product generates no revenue; people do not pay for maps or directions, but revenue comes from advertising businesses and making recommendations&lt;sup&gt;&lt;a href="https://www.investopedia.com/articles/investing/061115/how-does-google-maps-makes-money.asp" rel="noopener noreferrer"&gt;2&lt;/a&gt;&lt;/sup&gt;. If a company wants to appear higher in the search term 'pub' in a certain area, they pay. If a company wants to have their branding or logo appear on the map, they pay, etc. Everyone's happy; maybe the user has been steered towards Shannigan's Irish Pub rather than Harry's the local, but welcome to capitalism baby. &lt;em&gt;Critically&lt;/em&gt;, these are typically businesses the user is already interested in and searching out of their own volition - they were thirsty regardless, Google just tipped them in the direction of one and not the other. In essence, the user gave &lt;strong&gt;Direct Consent&lt;/strong&gt; to accept the suggestions of Google when they asked for results. &lt;/p&gt;

&lt;p&gt;Amazon have perfected the e-commerce experience, offering customers thousands of options from competing sellers at the lowest prices, available at speeds which are unrivaled anywhere else in the industry, and they have dominated the landscape as a result. You can browse for the products you like, read reviews, see how popular items have been with other users, and perhaps buy something. Numerous products are advertised along the way. Here, the user has given an &lt;strong&gt;Implied Consent&lt;/strong&gt;; they didn't ask to be shown products they might be interested in but ultimately you were here buying products and Amazon is suggesting more of the same, or closely related items based on what other users did&lt;sup&gt;&lt;a href="http://rejoiner.com/resources/amazon-recommendations-secret-selling-online/" rel="noopener noreferrer"&gt;3&lt;/a&gt;&lt;/sup&gt;. The option to buy the product is yours, and the relationship remains clear.&lt;/p&gt;

&lt;p&gt;Companies have the right to use user information to better inform how they run  the business through which they acquired that information. Personally, I think Amazon is the best example of this; I wouldn't be mad if a bartender noticed me coming in every Thursday at 5PM to order a Bloody Mary, and started preparing it a few minutes early if he had spare time so that he could deal with other customers. He has used customer information to improve processes and become a better business. Amazon use huge amounts of data&lt;sup&gt;&lt;a href="https://www.investopedia.com/articles/insights/090716/7-ways-amazon-uses-big-data-stalk-you-amzn.asp" rel="noopener noreferrer"&gt;4&lt;/a&gt;&lt;/sup&gt; to better sell products and services. There is still an issue of how comfortable you are with a company holding that sort of personally identifiable information about you, but as long as the context of its use does not change then I don't see a challenge. When you give Amazon information through the products you buy, and it uses that data to offer you deals on products it thinks you want to buy in the future then the context has remained the same. However, if Amazon see you buying a yoga mat, and suggest local yoga studios to attend, the context has shifted from 'what I buy' to 'where I go', crossing an ethical boundary.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fu9i0oqdpkccn5zd2xe0s.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fu9i0oqdpkccn5zd2xe0s.jpg" alt="Billboard vandalised with "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How about Facebook? Similarly to Google Maps, a social network where users share their lives with their friends and family is provided for free at the expense of having to endure some adverts. It is common knowledge that Facebook monetizes&lt;sup&gt;&lt;a href="https://sproutsocial.com/insights/facebook-advertising-strategy/" rel="noopener noreferrer"&gt;5&lt;/a&gt;&lt;/sup&gt; this 'life' information by catering adverts to very specific demographics, offering advertisers an incredibly comprehensive target marketing landscape. I believe the reason Facebook has attracted so much of a 'creepy' reputation&lt;sup&gt;&lt;a href="https://www.cnet.com/news/facebooks-ad-targeting-has-created-a-creepy-image-problem-it-cant-shake/" rel="noopener noreferrer"&gt;6&lt;/a&gt;&lt;/sup&gt; is that it acquires information in one realm (photos, statuses, events) and uses it in another (general advertising). I personally would feel much less violated if Facebook advertised only things which were &lt;em&gt;directly&lt;/em&gt; related to things I have openly posted: locations of photos and check-ins, content of statuses, events similar to ones I've been to before, etc. In that way they would be maintaining the context in which I'd volunteered my data with my implied consent. However, this is not how they operate; instead, inferences are made about you as a person, and you are targeted based on those generalities. Not only is this level of complexity more difficult for a consumer to understand, removing the 'understanding' requisite of consent, but it is often wrong and leads to unintended consequences. The adverts I see are not related to the context of the service I am using, and so there is no consent, implied or direct. I have no way of predicting what kind of adverts I might see, and whilst I can complain about certain adverts, there is no way to use the service without my data being used in ways that are unclear to me. Transparency is an issue Facebook has struggled with massively, and they have abused users' trust.&lt;/p&gt;

&lt;p&gt;Why does this matter to the end consumer? As long as you don't look at the adverts, the companies don't win, right? Wrong. The ways that things are advertised to us are incredibly pervasive, and even if you hide every ad as 'Not relevant to me' you are inevitably influenced by the things you see. When you read your Facebook Feed you are in a state of mind of taking in information, making snap social judgements that affect your subconscious - and unfortunately the ads you're shown at the same time get processed in the same way. Stealth adverts&lt;sup&gt;&lt;a href=""&gt;7&lt;/a&gt;&lt;/sup&gt; &lt;sup&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2016/09/the-uncanny-valley-of-instagram-ads/501077/" rel="noopener noreferrer"&gt;8&lt;/a&gt;&lt;/sup&gt; - adverts which match the visual style of normal content - are more effective&lt;sup&gt;&lt;a href="https://journals.sagepub.com/doi/abs/10.1177/0002764216660140" rel="noopener noreferrer"&gt;9&lt;/a&gt;&lt;/sup&gt;&lt;sup&gt; &lt;a href="https://www.tandfonline.com/doi/abs/10.1080/00913367.2015.1115380" rel="noopener noreferrer"&gt;10&lt;/a&gt;&lt;/sup&gt; for exactly this reason, and that is no accident. &lt;/p&gt;

&lt;p&gt;The danger comes when the types of adverts shown are weakly controlled or regulated. It is one thing to be subverted towards buying a product you would otherwise not have been interested in, removing an element of free will about how you spend your money; but as a society we hold religious and political freedoms sacred above that. The abuse of data in advertising campaigns promoting political parties or groups with religious affiliations is potentially the most blatant violation of personal liberty possible in the area of data ethics, with huge implications for the free will of the individual.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F832fphqd60ossgm4y6r7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F832fphqd60ossgm4y6r7.jpg" alt="Sign on the tube reads: "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Malicious advertising is an assault on your psyche, and should be treated as non-consensual data abuse. Definitions remain challenging, but the framework we have to discuss these issues is becoming more rigorous with every scandal, select committee, and piece of legislation. Ultimately how much time you invest in protecting yourself from adverts intended to alter your tastes or opinions is your choice, but the tools listed at the start of this article are a great place to start.&lt;/p&gt;

</description>
      <category>dataprivacy</category>
      <category>advertising</category>
      <category>dataownership</category>
    </item>
    <item>
      <title>Towards an OpenGL Music Visualiser</title>
      <dc:creator>a little Fe2O3.nH2O-y</dc:creator>
      <pubDate>Wed, 04 Sep 2019 13:41:49 +0000</pubDate>
      <link>https://dev.to/psedge/towards-an-opengl-music-visualiser-155g</link>
      <guid>https://dev.to/psedge/towards-an-opengl-music-visualiser-155g</guid>
      <description>&lt;p&gt;Computer graphics are amazing. I adore the &lt;a href="https://www.geeks3d.com/20190423/demoscene-revision-2019/"&gt;demoscene&lt;/a&gt; aesthetic and one day want to get to the point where I'm not just able to make something functional, but it becomes expressive.&lt;/p&gt;

&lt;p&gt;My side-piece at the moment is OpenGL, with the aim of creating a 3d demo which responds to music in realtime, inspired by the visuals of Tame Impala and deadmau5 which add a whole other dimension to the gigs.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Step one&lt;/em&gt;: find a way to create a basic demo. This turns out to be so insanely complex it's laughable - even starting out with a cube in C++ using OpenGL took hours, because it's not a cube: it's 12 individual triangles (two per face) with their own vertex and fragment shader instances, buffers and bindings. And the camera, hah! Exactly what is the difference between model and world space, and how do I know whether I need an ortho or perspective camera? Let alone trying to find the right dimensions in this x,y,z space I'm trying to imagine - with every rebuild taking ~10 seconds. In the end I gave up on C++ because of my lack of basic understanding of how the APIs worked and, really, of core GPU architecture. It's not like you can dynamically pass around object references and write to multiple buffer arrays simultaneously like I'd expected; you have to do everything procedurally when interacting with the GPU - which is fine, but I found myself spending so long looking up the correct way to manage these things that it was aggravating. This is meant to be creative.&lt;/p&gt;

&lt;p&gt;I'd come across &lt;a href="https://threejs.org/"&gt;three.js&lt;/a&gt; a few times, and always thought it was amazing that you could get that kind of performance from WebGL so I flocked to that. We still have the same core concepts - cameras, geometries, materials, meshes, scenes; but now it's all in a familiar language and instantly interpreted. This move took me from "damn that point is a little bit out" to "looking good, what do I want it to do next" - which is to be expected as I'd just gone up several layers in the stack. The end result is &lt;a href="https://psedge.github.io/demo"&gt;psedge.github.io/demo&lt;/a&gt; - a real working web 'demo'.&lt;/p&gt;

&lt;p&gt;Getting to that point wasn't as easy as I make out. The main sticking points for me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Frame rates&lt;/em&gt;: it turns out different shaders have different performance impacts, and even though I'm only dealing with ~100 meshes at a time, using anything except BasicMaterial slowed renders down to a potato. This was simply solved by reducing either the material quality or the number of objects in view at any one time, and changing from Spotlights to DirectionalLight (much less intensive to calculate).&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Animation speeds&lt;/em&gt;: I still don't understand what role clocks play in requesting animation frames / renders. Let's say I have an animation that rotates a cube 90 degrees - I change the mesh rotation and request a frame, waiting until 1000/{FRAME_CAP}ms has passed if it hasn't already. However, in the situation where FPS &amp;lt; FRAME_CAP the animation is going to take ages. Eg. if we can only get 30FPS then the animations will take twice as long, which is unsuitable for time-sensitive animations. I think the solution to this is to make the rotation a function of the time elapsed since we started and desired total. My problem is that I understand the problem and know it must be a really common question but don't know how to ask. &lt;em&gt;edit&lt;/em&gt; Yep, this is the correct way.&lt;/li&gt;
&lt;/ul&gt;
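&lt;p&gt;The elapsed-time fix from the second bullet can be sketched as a pure function. The names here are illustrative, not three.js API; in a real render loop you'd feed in something like performance.now() minus the start time and assign the result to &lt;code&gt;mesh.rotation.y&lt;/code&gt;:&lt;/p&gt;

```javascript
// Framerate-independent animation: the rotation is a function of wall-clock
// time elapsed since the animation started, not of how many frames have
// been drawn so far.
function rotationAt(elapsedMs, durationMs, targetRadians) {
  // Clamp progress to [0, 1] so the animation stops exactly on target.
  const progress = Math.min(elapsedMs / durationMs, 1);
  return targetRadians * progress;
}

// Halfway through a 1s quarter-turn we are at 45 degrees, whether the
// renderer managed 15 or 30 frames in that time; only smoothness differs.
console.log(rotationAt(500, 1000, Math.PI / 2));  // Math.PI / 4
console.log(rotationAt(2000, 1000, Math.PI / 2)); // clamped at Math.PI / 2
```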

&lt;p&gt;&lt;em&gt;Step two&lt;/em&gt;: get audio. I've looked at &lt;a href="https://www.freedesktop.org/wiki/Software/PulseAudio/"&gt;pulseaudio&lt;/a&gt; for a previous project, but that was only for controlling audio levels on a Ubuntu machine over WiFi, not for taking audio input. At this point it's going to be necessary to re-evaluate the usefulness of three.js and WebGL, because the HTML5 Audio APIs might be a limiting factor. I've come across examples which request an audio context and start listening in a Web Worker, passing frequency data back to the graphics thread, but the lack of stackoverflow/reddit posts makes me hesitant that this is going to work out of the box. I'm not sure whether to do this on a local server and use WebSockets to communicate with three.js, or to move the graphics to a proper client program as well.&lt;/p&gt;
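&lt;p&gt;Whichever transport wins (Web Worker messages, WebSockets, or a local server), samples will arrive in arbitrary-sized chunks while the analysis step wants fixed-size windows. A minimal sketch of the buffering that sits in between - this accumulator is an assumed design of mine, not an existing API:&lt;/p&gt;

```javascript
// Collect incoming sample chunks of any size and emit fixed-size windows.
// onWindow fires once per complete window; leftover samples wait for more.
function makeWindower(windowSize, onWindow) {
  let pending = [];
  return function push(chunk) {
    pending = pending.concat(chunk);
    while (pending.length >= windowSize) {
      onWindow(pending.slice(0, windowSize));
      pending = pending.slice(windowSize);
    }
  };
}

const windows = [];
const push = makeWindower(4, function (w) { windows.push(w); });
push([1, 2, 3]);
push([4, 5, 6, 7, 8, 9]);
console.log(windows); // two complete windows: [1,2,3,4] and [5,6,7,8]
```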

&lt;p&gt;&lt;em&gt;Step three&lt;/em&gt;: analyse audio. This is where my complete lack of signal processing knowledge comes into full view. I understand that from a technical stance I need to record the audio from the mic for a set time (sample window) then run a Fast Fourier Transform on it to get amplitudes of different frequencies. I have no idea about what type of values I'm going to get from that, or how to choose frequency bands - whether that should be fixed or dynamic etc. There are apparently a load of libraries available in most languages for this, some specifically for audio analysis - but more research is needed.&lt;/p&gt;
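&lt;p&gt;As a starting point for the band question: bin i of an N-point FFT at sample rate sr sits around i*sr/N Hz, and the simplest (fixed) banding is just averaging runs of adjacent bins. A sketch, assuming the FFT library already hands back an array of magnitudes:&lt;/p&gt;

```javascript
// Collapse FFT magnitude bins into a few frequency bands by averaging
// equal-width runs of adjacent bins. Real visualisers often use
// logarithmic band edges instead, since pitch perception is logarithmic.
function bandAverages(magnitudes, bandCount) {
  const binsPerBand = Math.floor(magnitudes.length / bandCount);
  const bands = [];
  let band, sum, j;
  for (band = 0; band !== bandCount; band += 1) {
    sum = 0;
    for (j = 0; j !== binsPerBand; j += 1) {
      sum += magnitudes[band * binsPerBand + j];
    }
    bands.push(sum / binsPerBand);
  }
  return bands;
}

// e.g. 8 magnitude bins collapsed into 2 bands (bass-ish / treble-ish):
console.log(bandAverages([4, 4, 2, 2, 1, 1, 1, 1], 2)); // [ 3, 1 ]
```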

&lt;p&gt;&lt;em&gt;Step four&lt;/em&gt;: take the analysed audio and use it in the demo. At this point the functional requirements are met: we have the component parts of a working visualiser and just need to decide what to do with them. The problem is that apparently, for the warm "realtime" feeling, &lt;a href="https://www.soundonsound.com/techniques/optimising-latency-pc-audio-interface#7"&gt;we need to achieve a sub-10ms delay&lt;/a&gt; from audio heard to frames being drawn. At our likely 60fps cap a new frame is only drawn every ~16.7ms, so the frame interval alone blows the budget before the server and WS connection have done anything at all /s. The way gig visuals probably get around this is to analyse the audio offline and pre-render video, then sync on play.&lt;/p&gt;

&lt;p&gt;Needless to say, I've really started to appreciate the games I play a lot more. I had no idea how complex this world was and just how much prerequisite knowledge was required across so many areas - not just soft framework knowledge but hard maths/mechanics understanding. I always used to see Game Design as a meta-topic on old PHPbb forums and wonder why it was so common to see a dedicated section for it - now I know. Also, I've decided that I enjoy using three.js at least for the rapid prototyping stage of visual development, even if it then requires porting to C++ for performance reasons - for me it's the intermediate stage between mental visualisation and productionising.&lt;/p&gt;

</description>
      <category>opengl</category>
      <category>demoscene</category>
      <category>visualizer</category>
    </item>
    <item>
      <title>Teaching Javascript to 10 year olds</title>
      <dc:creator>a little Fe2O3.nH2O-y</dc:creator>
      <pubDate>Thu, 21 Mar 2019 16:03:14 +0000</pubDate>
      <link>https://dev.to/psedge/teaching-javascript-to-10-year-olds-33la</link>
      <guid>https://dev.to/psedge/teaching-javascript-to-10-year-olds-33la</guid>
      <description>

&lt;p&gt;tl;dr: I volunteer at a primary school teaching kids to program. Some of them are pretty good at it. I've made a GitHub repo with the stuff I'm trying to get them to learn here: &lt;a href="https://github.com/psedge/codeclub/"&gt;https://github.com/psedge/codeclub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've never volunteered in any real capacity before, except to scalp some free festival tickets. Now I'm working as a full-time backend developer in an organisation that generously gives me time off for a cause of my choice - I thought there was no better time to start.&lt;/p&gt;

&lt;p&gt;I'd heard about CodeClub through my friend, Jess, and her Mum, who teaches IT at secondary school level in our old hometown. In essence, the charity puts programmers and schools in touch and gives them resources to run after-school clubs teaching programming in Scratch (a visual language and integrated IDE developed at MIT) and Python. They also organise coding camps during the summer called CoderDojos, sponsored by a few companies. Naturally, it was the second charity to come to mind when I wanted to volunteer (the Battersea Dogs Home waitlist is super long, it turns out) - so I sent off for my DBS check and contacted a few schools in my area. A few weeks later, after meeting with Ellie (the teacher), we had the first few weeks of lessons planned and kids signed up.&lt;/p&gt;

&lt;p&gt;It was the Friday before my first Monday's teaching that I went to a meetup at the V&amp;amp;A run by CodeClub; an opportunity to ask questions and meet other volunteers/teachers/parents. Having never done this before, I'd asked for tips running a club, which was met with enthusiasm and genuine interest by the crowd - pretty much dominating the conversation for the next half hour. Great community.&lt;/p&gt;

&lt;p&gt;Monday rolled around and I jogged out of work at 2.30pm, equal parts fear that the kids might jump me, and mischief of leaving this soon after lunch. The weirdest part of this experience was by far being met on the playground and escorted up to the classroom by a boy no older than 6 or 7. I felt like he was going to yell "stranger danger" at any second and I'd have to explain why I'd just walked off the street into a primary school. I got to the class and was greeted by 20 pairs of eyes staring at me. I walked in and said hi, and stood nervously at the front. After 10 seconds I asked one of the girls if I should introduce myself, and she nodded. "Hi I'm Peter and I'm a programmer..."&lt;/p&gt;

&lt;p&gt;After that initial awkwardness, the lesson flew by pretty much according to plan. Things I learned in those 45 minutes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In an hour-long lesson, usable time is probably 30 minutes. It takes kids a long time to get out laptops, find their seats, get logged on etc.&lt;/li&gt;
&lt;li&gt;There is a huge variety in skill level, even in kids in the same year. Even if Scratch is taught and they have all been exposed to it, basic skills like navigating tabs or how to open a minimised window are not universal.&lt;/li&gt;
&lt;li&gt;On that note, an instruction like 'copy and paste that URL into Chrome' probably won't do. "Highlight that link, right click, copy, press the Windows key, no that's shift, now type Google, click, right click at the top, and paste" encompasses everyone's abilities.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After a few classes using the provided Scratch exercises, 2 or 3 of the kids asked me about Javascript - something I'd mentioned in the introduction in the first week. I'd actually been recommended against teaching it by CodeClub, as their examples use Raspberry Pis and Python - but at this school there was a mixture of laptops and devices (ThinkPads, Chromebooks, and iPads), so it was better to teach something browser-only. I'd initially thought I had to design the entire curriculum, so had several weeks' examples premade. I set them off on it and they crushed my 'Traffic Lights' introductory class. A few notes from the first Javascript lessons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debugging and useful error messages are super important for them - 'token undefined' makes complete sense to me once I notice a case error in a variable name, but it confuses learners and breaks their flow - they need more information.&lt;/li&gt;
&lt;li&gt;You need to give room to explore beyond the initial purpose of the class; they managed to finish it and wanted to do some other stuff. Adding functions like 'shake' and 'wobble' with parameters means they can break things.&lt;/li&gt;
&lt;li&gt;Having something like an API reference helps a lot, as they can see the available functions and how they should be used - otherwise they'll need a lot of one-on-one help.&lt;/li&gt;
&lt;li&gt;Bugs are not only a pain for the kids, but they're a pain for the guy trying to answer 5 tiny hands in the air asking for help. Fixing the bugs in the GUI and weekly exercises saves you more than time: probably a decent lesson to take forward into general software development as well.&lt;/li&gt;
&lt;li&gt;To do this, you need to have a reasonable understanding of the content - I've actually had some good questions from the kids: "why do we need to put quotes around a word", "why isn't this working..." etc. &lt;/li&gt;
&lt;/ul&gt;
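&lt;p&gt;For flavour, a minimal sketch of the kind of thing the 'Traffic Lights' class could involve - the names and sequence here are illustrative, not taken from the actual repo:&lt;/p&gt;

```javascript
// Cycle a traffic light through its states each time nextLight is called.
// The kids call this with the current colour and draw the result on screen.
const SEQUENCE = ['red', 'green', 'amber'];

function nextLight(current) {
  const index = SEQUENCE.indexOf(current);
  // The modulo wraps back around to 'red' after 'amber'.
  return SEQUENCE[(index + 1) % SEQUENCE.length];
}

console.log(nextLight('red'));   // 'green'
console.log(nextLight('amber')); // 'red'
```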

&lt;p&gt;I've signed up to run another class, albeit with fewer kids, next term. I'm going to be more ambitious about encouraging more of them to do the Javascript exercises, but will still set up the Scratch stuff for those that want it. Reducing class size should mean I have more time with each student, which has been a problem this term - 10-15 is definitely an easier class size. The interface is improved now, and hopefully some of the bugs which were taking time to remedy in-class have been squashed as well.&lt;/p&gt;

&lt;p&gt;All in all, teaching's been hard - I finish lessons and need to sit in a quiet place for a little while with a G&amp;amp;T - but it's incredibly fun, and the kids in my class are funny, original people. I've given up on trying to garner much respect from them - best left for Ellie anyway.&lt;/p&gt;


</description>
      <category>teachingprogramming</category>
    </item>
  </channel>
</rss>
