<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: asrf</title>
    <description>The latest articles on DEV Community by asrf (@asdf).</description>
    <link>https://dev.to/asdf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F384138%2F7f1da771-a034-4914-9d5e-abefb2eb58b5.png</url>
      <title>DEV Community: asrf</title>
      <link>https://dev.to/asdf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/asdf"/>
    <language>en</language>
    <item>
      <title>'Welcome to the Era of Cyber-Education'</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Thu, 12 May 2022 07:29:44 +0000</pubDate>
      <link>https://dev.to/asdf/welcome-to-the-era-of-cyber-education-1lfb</link>
      <guid>https://dev.to/asdf/welcome-to-the-era-of-cyber-education-1lfb</guid>
      <description>&lt;h2&gt;
  
  
  Student loans, failed interviews, regretful job experiences, attempts to measure creativity … they all can lead one to break down in tears.
&lt;/h2&gt;

&lt;p&gt;Certain other elements of social existence can add on to this, things you may not immediately consider: “friendships,” “experience,” and “growing up”.&lt;/p&gt;

&lt;p&gt;Personal conditions are the key here, and there is no one external factor that truly unlocks the potential one holds.&lt;/p&gt;

&lt;p&gt;The rise of the internet, and also its bubble, has enabled autodidacts to learn more than they ever could at any point in human history. And yes, this also applies to the hands-on and technical degrees.&lt;/p&gt;

&lt;p&gt;Welcome to the era of cyber-education, an age where YouTube serves up the best lectures that the best professors have ever given, at the click of a button. The platform also offers knowledge freely to anyone in the world, just like Twitter offering free alpha to traders, investors, thinkers…&lt;/p&gt;

&lt;p&gt;In many ways, Vitalik was the exception that proved the rule: crypto is a savage space where young prodigies are likely to have issues similar to those of child actors. It is a controversial topic that should be brought to public attention.&lt;/p&gt;

&lt;p&gt;Overall, it depends on maturity, personality, company ethos, expectations, internal vs external pressure… all of that leading to different outcomes. The recruitment process in crypto is doomed, but there is no way around it.&lt;/p&gt;

&lt;p&gt;Just a pinch of luck and you will find like-minded builders that will help you run away from the cage of “expertism”. In many ways, “expertism” is a disease that has already infected many corners of modern life, and traditional media sites are the best example.&lt;/p&gt;

&lt;p&gt;But the productivity game is way more involved than that. The know-it-all critics are also starving for knowledge in the information era; they just don’t have what it takes. Their self-proclaimed victory is far from true.&lt;/p&gt;

&lt;p&gt;Nobody cares about the badge of authority, especially in an era where free-speech pitches for the most influential social media site are reprimanded once the site shows its own signs of decay and proves its owners wrong.&lt;/p&gt;

&lt;p&gt;Almost-never-heard-of degrees perhaps don’t deserve a place when it comes to filling the gaps that we can detect in any given industry. Sometimes it even appears that incompetence is displayed on purpose.&lt;/p&gt;

&lt;p&gt;Maybe it all goes back to the fundamentals, to the shared common good of knowledge for the sake of curiosity satisfaction, regardless of observers and the image we portray for the sake of being observed.&lt;/p&gt;

&lt;p&gt;Job preparation, indeed, is just not viable when it comes to innovation and job roles that appear on the go.&lt;/p&gt;

&lt;p&gt;Coding is not innate, it is just a set of skills that leverage thinking with the goal of solving a certain problem.&lt;/p&gt;

&lt;p&gt;Yes, it is that simple. Education is not a numbers game, neither is it a chaotic environment of finger-pointing or blaming for the sake of having something to say. Work in silence, and let the drums be the whistle.&lt;/p&gt;

&lt;p&gt;Furthermore, there is a blurry line between teaching online, learning in public, and being wrong in public.&lt;/p&gt;

&lt;p&gt;Not all of those are zero-sum games, and they share almost no yield with each other. Anyone can teach by learning in public, but at the same time, journals and impulsive note-taking can be a distraction from unconscious pattern recognition, which kills intuition and invalidates all binary decisions. Beware the man of one book. Beware the man of metrics and the experts of statistics.&lt;/p&gt;

&lt;p&gt;As time moves on, one has to feel comfortable removing emotions from decision-making and start thinking about the long-term consequences of decisions that were made unconsciously, whether in an intuitive or an impulsive manner.&lt;/p&gt;

&lt;p&gt;Excess data floods one’s own mind with noise. But there will also be random noise caused by inevitable variation; that’s why the analogy between poker and trading only works for those who have never played poker seriously.&lt;/p&gt;

&lt;p&gt;That’s also the reason why most traders do well on their simulation accounts and then fail to execute when real money is at stake.&lt;/p&gt;

&lt;p&gt;In ‘fact,’ part-time traders could be better off not only in health but also in monetary terms, compared to the mental pressure reflected by 10 monitors pointing at the tired eyes of a trader.&lt;/p&gt;

&lt;p&gt;Learning in public makes no sense when the audience does not have the full picture of the actual product/idea.&lt;/p&gt;

&lt;p&gt;It overlooks the achievements and progress of more advanced people while inducing newbies to become ambitious and compound their mistakes while being disoriented. Same for investing narratives.&lt;/p&gt;

&lt;p&gt;Take a simple idea and take it seriously. Taking this seriously means recognizing that edges are incredibly valuable.&lt;/p&gt;

&lt;p&gt;The edges and alphas require complex skills, such that the potential for progress can be not only accessible, but also learnable for anyone, regardless of their economic or socio-cultural conditions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decentralized Education
&lt;/h2&gt;

&lt;p&gt;Confidence in public schools is at historic lows, and new generations no longer trust the old, antiquated system. Home-schooling is no longer lurking at a remote slacker’s house.&lt;/p&gt;

&lt;p&gt;Apprenticeship, indeed, is collaboration, aka a win-win and feedback-driven experience. Mentorship, on its own, is not an excuse for asking others to tell you what you have to do. Instead, it is a full-blown alternative to the legacy learning methods, eschewing media and influential corporations.&lt;/p&gt;

&lt;p&gt;Online first, and physical next. That’s the only way to leverage and scale the potential of network effects. Each individual goes straight to the point. If that’s the end of the rabbit hole, what is there for you to lose? Others missed it and will FOMO in later.&lt;/p&gt;

&lt;p&gt;Crypto and full-time don’t add up. It is not about time measurements, nor is it about timing. It is all about building a true inner metaverse inside our minds.&lt;/p&gt;

&lt;p&gt;I don’t know if it is one-dimensional, 3D, or 4K, I know that it comes from within and I want to be surrounded by like-minded people, whether they bought a house in their dream’s land, they are still at their parents’ place, or they are digital nomads.&lt;/p&gt;

&lt;p&gt;Full-time leads to misconceptions; it is time to move on and find apprenticeship in the small sprints that turn our lives into a marathon. One can target the maximalist social movement of the hodlers. Killing the software protocol that guides them seems unfeasible at this point in time. The cyberspace already has a digital currency; when will it have decentralized education? When a one-dimensional, transparent switch between work and life?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Microservices Interactions</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Wed, 26 Aug 2020 19:05:44 +0000</pubDate>
      <link>https://dev.to/asdf/microservices-interactions-38g7</link>
      <guid>https://dev.to/asdf/microservices-interactions-38g7</guid>
      <description>&lt;p&gt;First, let's start by describing the differences between Orchestration and Choreography:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Orchestration: An orchestrator controls the interactions between services. This orchestrator dictates the control flow of the business logic and makes sure that everything is executed according to this workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choreography: Every service does its work independently. There are no hard dependencies between services, and they are loosely coupled only through shared events. Each service is interested in certain events and will only listen to the specific events assigned to it. For example, this would be a good scenario for event-driven architectures that use Lambda functions. That is the reason why this approach is extremely popular for serverless projects that benefit from SNS, SQS, Kinesis, EventBridge...&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As we can see, the advantages of a choreography approach are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There is no single point of failure&lt;/li&gt;
&lt;li&gt;Each step of the workflow can be modified and scaled independently from the rest of the services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, it is important to note that it will be very difficult to monitor all of those services and implement timeouts between events.&lt;/p&gt;

&lt;p&gt;In order to solve that problem, one of the main paradigms that come to mind is the implementation of a state machine, which is itself another serverless service. As always, depending on the context one approach is more suitable than the other. It would be a mistake to conclude that all serverless projects should lean towards a choreography approach.&lt;/p&gt;

&lt;p&gt;By using an orchestration approach with Step Functions and EventBridge, monitoring becomes trivial because of the built-in visualizations and audit histories included with Step Functions. It also gets easier to implement timeouts between state transitions, and all the business logic is stored in one central unit, namely a state machine that is easy to maintain and manage.&lt;/p&gt;
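&lt;p&gt;As a rough in-process sketch (the service and event names are made up for illustration, not taken from any real system), the two styles can be contrasted in a few lines of Python:&lt;/p&gt;

```python
# Minimal in-process sketch contrasting the two interaction styles.
# Service/event names (order_placed, payment_charged) are illustrative only.

# --- Choreography: services react to shared events, no central controller ---
subscribers = {}

def subscribe(event_name, handler):
    subscribers.setdefault(event_name, []).append(handler)

def publish(event_name, payload):
    # Every handler interested in this event reacts independently.
    for handler in subscribers.get(event_name, []):
        handler(payload)

log = []
subscribe("order_placed", lambda p: (log.append("payment charged"),
                                     publish("payment_charged", p)))
subscribe("payment_charged", lambda p: log.append("order shipped"))

publish("order_placed", {"order_id": 1})

# --- Orchestration: one workflow dictates the control flow explicitly ---
def orchestrate(order):
    # The orchestrator owns the business logic and calls each step in order.
    log.append("charge step")
    log.append("ship step")

orchestrate({"order_id": 2})
print(log)
```

&lt;p&gt;In the choreographed half there is no central coordinator: each handler only knows the event it subscribes to. In the orchestrated half, one function owns the whole control flow, which is what Step Functions models as a state machine.&lt;/p&gt;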

</description>
      <category>aws</category>
      <category>microservices</category>
    </item>
    <item>
      <title>In-Demand Skills vs Top IT certs</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Tue, 16 Jun 2020 17:48:20 +0000</pubDate>
      <link>https://dev.to/asdf/in-demand-skills-vs-top-it-certs-57eo</link>
      <guid>https://dev.to/asdf/in-demand-skills-vs-top-it-certs-57eo</guid>
      <description>&lt;p&gt;By chance, I recently stumbled upon a Tech Report carried out by a well-known staffing company. In fact, it was more a salary guide than a tech report. I believe these guides offer valuable insight into the latest trends though.&lt;/p&gt;

&lt;p&gt;Despite not being a networking expert, I have always been drawn to its underlying principles, especially since I started digging into cloud computing and setting up VPCs, subnets... For me, it was no surprise that Cisco had 3 certifications in the top certifications list. However, when I took a look at the in-demand skills list, almost nothing was related to networking except for cloud computing. Among the top skills one could find programming languages, cloud computing, AI, Linux...&lt;/p&gt;

&lt;p&gt;I soon thought to myself that the reason behind this was the network automation that is going on nowadays. As a matter of fact, more and more network engineers are starting to incorporate Python as one of their skills. Now, you must be wondering: is cloud computing the end of traditional networking? Do those most-valued certifications, such as the CCNP, CCIE... hold any value for finding a job in the current market? I would love to see opinions about this in the comment section.&lt;/p&gt;

&lt;p&gt;We must take into account that underneath the fancy facade of all cloud providers, there is still real TCP/IP networking going on. This means that the principles of good networking are not going to go away just because cloud vendors are hosting somebody else's infrastructure. Most top skills were programming-related, which does not necessarily mean that network engineers will soon disappear. What it says is that software developers are in demand inside the tech industry, even though most of them are not network infrastructure people and have no clue what a secure, scalable, elastic and resilient environment looks like.&lt;/p&gt;

&lt;p&gt;I have always admired people who go after IT certifications one after the other, especially because they are aware that experience is key in the IT industry. Passing an exam is not the same as having the skills to carry out a certain task. Whilst a certification on a resume will be a boost, there are multiple ways to communicate that exact information to a company (include a section on your resume, build a home lab, work on side projects...). For instance, imagine for a second that you failed your CCNA exam. This can be an opportunity to relearn and consolidate your skills, and you should not lose your motivation because of it. Wouldn't it also be a boost on your resume to share your experience and the process you went through while preparing for the exam? Just think about it for a second: your new company might even end up paying for your cert if you can show them that you have the skills!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Network Automation Challenges and the Cloud</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Tue, 16 Jun 2020 17:46:33 +0000</pubDate>
      <link>https://dev.to/asdf/network-automation-challenges-and-the-cloud-1bn5</link>
      <guid>https://dev.to/asdf/network-automation-challenges-and-the-cloud-1bn5</guid>
      <description>&lt;p&gt;With the advent of cloud computing, the promise of huge cost savings, big data processing, new security and networking needs... network automation has become a mandatory solution to allow interaction between services and customers. First, companies' data centers shifted to Infrastructure as a Service. Hence, customers were looking for a self-managed hosted environment and readily available solutions installed automatically on top of an automated infrastructure (Platform as a Service).&lt;/p&gt;

&lt;p&gt;We cannot overlook the importance of Network as a Service in use cases such as network parameters, service-level agreements, bandwidth aggregation, redundant and secure connections, BGP on large-scale networks... While the NaaS (Network as a Service) model is gaining momentum, there is still a need for even more simplicity in data center cloud facilities. These automation requirements still face challenges in providing higher scale and availability:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mostly uses basic and common networking features and capabilities&lt;/li&gt;
&lt;li&gt;Highly focused on data center hosting services use cases&lt;/li&gt;
&lt;li&gt;Requires highly flexible networking service, and highly scalable, reproducible protocol capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Among the most important Network automation requirements we find:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic configurations of data center service appliances such as Layer 2 switching services (overlay networks), layer 3 routing services, traffic protection through firewall and authentication services, load-balancing, and IP services such as Network Address Translation (NAT), Dynamic Host Configuration Protocol (DHCP), Domain Name System (DNS), and so on&lt;/li&gt;
&lt;li&gt;Interconnection of virtual appliances to form a data-center virtual topology. For instance, load-balanced networks, network routing according to firewall requirements...&lt;/li&gt;
&lt;li&gt;CRUD (Create, Retrieve, Update and Delete) operations for all data center resources to provide the ability to scale, provide multitenancy, monitoring...&lt;/li&gt;
&lt;/ul&gt;
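&lt;p&gt;As a toy illustration of those CRUD requirements (the resource kinds and fields are hypothetical, and a real automation platform would expose them through an API rather than in memory), a resource store could look like this:&lt;/p&gt;

```python
# Hypothetical sketch of the CRUD operations a data-center automation
# layer might expose; resource kinds and fields are illustrative only.

class VdcInventory:
    """In-memory stand-in for a virtual data center resource store."""

    def __init__(self):
        self._resources = {}
        self._next_id = 1

    def create(self, kind, config):
        # Provision a new resource and return its identifier.
        rid = self._next_id
        self._next_id += 1
        self._resources[rid] = {"kind": kind, "config": config}
        return rid

    def retrieve(self, rid):
        # Read back the current state of a resource.
        return self._resources[rid]

    def update(self, rid, config):
        # Apply a partial configuration change.
        self._resources[rid]["config"].update(config)

    def delete(self, rid):
        # Tear the resource down and release it.
        del self._resources[rid]

inventory = VdcInventory()
lb = inventory.create("load_balancer", {"vips": 1})
inventory.update(lb, {"vips": 2})
print(inventory.retrieve(lb))
inventory.delete(lb)
```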

&lt;p&gt;Long story short, the Virtual Data Center Service nowadays can be summarized with network automation requirements that satisfy a VDC (Virtual Data Center) hosted in the cloud that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamically provisions network service appliances and configurations.&lt;/li&gt;
&lt;li&gt;Establishes the required interconnects for end hosts and network services&lt;/li&gt;
&lt;li&gt;Manages and monitors all resources demanded and consumed by users.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In contrast to on-demand services in the cloud, which offer great flexibility, cloud network capabilities are more rigid in nature and prevent the offering of more complex applications and VDC requirements. As a matter of fact, a VDC is known as the collection of virtual machines, their compute resources, their applications, and also their networking resources. The success of networking service providers will be measured by the number of end-customer deployments on their infrastructure, which means that in order to attract customers, service providers will need to support the full variety of end customers' application needs and underlying VDC types.&lt;/p&gt;

&lt;p&gt;To conclude, Network automation challenges span the service configurations, interconnections, and the resource management of both physical and virtual resources. To overcome those challenges a cloud services facility should&lt;br&gt;
consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Predictable and replicable PoDs (Point of Delivery)&lt;/li&gt;
&lt;li&gt;A flexible object-based virtual data center model&lt;/li&gt;
&lt;li&gt;Network Service Interface of preference&lt;/li&gt;
&lt;li&gt;A replicable and stable network systems configuration&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Microservices are not the latest fad... but maybe we do not need them</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Mon, 25 May 2020 16:23:47 +0000</pubDate>
      <link>https://dev.to/asdf/microservices-are-not-the-latest-fad-but-maybe-we-do-not-need-them-43c2</link>
      <guid>https://dev.to/asdf/microservices-are-not-the-latest-fad-but-maybe-we-do-not-need-them-43c2</guid>
      <description>&lt;p&gt;Nowadays it is not uncommon to be asked about what microservices are, usually at job interviews. The answer to that should be fairly easy. Perhaps we do not provide the most technical answer that the interviewer might expect from us, but we can get through it just by recalling some experiences, talks, anecdotes... In the end, most of us are overwhelmed with success stories about microservices. Aren't they the 'panacea'?&lt;/p&gt;

&lt;p&gt;Most of the time, after answering the above-mentioned question, a software design question follows, usually entailing software architecture concepts. Back to our example, the interviewer asks: 'Should we implement microservices for our solution?'&lt;/p&gt;

&lt;p&gt;Perhaps it would be a little bit disappointing for the interviewer to hear an answer that contradicts his/her assumptions. Shouldn't we just nod in agreement and jump on the microservices bandwagon? What if a microservices solution was implemented in a real project without knowing its future repercussions?&lt;/p&gt;

&lt;p&gt;My first exposure to microservices was listening to a YouTube video at 2x speed on a rainy afternoon. It felt disappointing to be presented with such a strong opinion about something that I had never heard of and that had emerged only a few years earlier. I quickly realized the effectiveness and the broad possibilities that such a codebase offers. By that time I was afraid of 'DevOps' technical concepts, but, at the same time, I felt intrigued by the modularity of designs that avoided monolithic software architectures.&lt;/p&gt;

&lt;p&gt;What kind of manager, designer or engineer does not want to develop a modular solution to a large problem or application? Microservices are great for large solutions because they are just a collection of smaller independent services, each serving a different purpose. Ideally, each service would be a single application in itself. However, what if our application is not large enough to be split into smaller pieces? This is just an example where, even though our codebase might eventually grow, a microservices approach will incur a greater computational cost per line of code.&lt;/p&gt;

&lt;p&gt;For instance, microservices are not that good if there is no need to scale each individual component of the application. Would it really make sense to develop an HR application and split it into microservices such as 'Employees', 'Payroll', 'Incidents and Reports'...? Will those individual components really end up scaling considerably?&lt;/p&gt;

&lt;p&gt;Let's look at another scenario where there is a need for frequent communication between services. If we had a monolithic application there would be no latency, because the communication between modules would take place in memory. However, if we moved from an in-memory transaction to network-based communication, our latency would increase. This added layer of communication could impact applications that deal with real-time data processing.&lt;/p&gt;
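&lt;p&gt;The difference is easy to see even on a single machine. The sketch below (a toy endpoint invented for this comparison, not a real service) calls the same pure function 100 times directly and 100 times over a local HTTP hop; exact timings will vary, but the network path is consistently slower by orders of magnitude:&lt;/p&gt;

```python
# Rough, self-contained comparison of an in-memory call vs the same call
# made over a local HTTP hop; the endpoint is a toy made up for this demo.
import http.server
import threading
import time
import urllib.request

def add_one(n):
    return n + 1

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The "service" just applies add_one to the number in the path.
        n = int(self.path.rsplit("/", 1)[-1])
        body = str(add_one(n)).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

t0 = time.perf_counter()
for i in range(100):
    add_one(i)                       # in-memory "module" call
in_memory = time.perf_counter() - t0

t0 = time.perf_counter()
for i in range(100):                 # same work, but over the network
    urllib.request.urlopen(f"http://127.0.0.1:{port}/{i}").read()
over_network = time.perf_counter() - t0

server.shutdown()
print(f"in-memory: {in_memory:.6f}s, local HTTP: {over_network:.6f}s")
```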

&lt;p&gt;We might even take this further and analyze additional weaknesses of microservices. To name a few: there is added complexity in terms of maintaining and deploying a microservice-based application. Since we are dealing with distributed services, each service needs to be deployed to its own container. If we had a monolithic application this would be as easy as deploying to one large virtual machine. Besides, microservices have not always proven to be a feasible solution for teams with little DevOps integration; microservices can hardly be maintained or monitored without a DevOps team, and in small companies, building one can cause more damage than progress. End-to-end testing would also require more time. Finally, coping with legacy code is the bread and butter of our routine activity as developers, which would end up causing more debugging distress for our development team.&lt;/p&gt;

&lt;p&gt;To sum up, we all love integrating the latest technologies into our projects. However, in the real world, ego is the enemy. Being too enthusiastic about a certain technology in an important decision might end up having very negative consequences for our company. &lt;/p&gt;

</description>
      <category>microservices</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS Certifications</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Mon, 18 May 2020 18:48:59 +0000</pubDate>
      <link>https://dev.to/asdf/aws-certifications-2gcj</link>
      <guid>https://dev.to/asdf/aws-certifications-2gcj</guid>
      <description>&lt;p&gt;I just finished my second year in college studying Computer Science and due to the current situation I will not be enrolled in any job or internship. Do you recommend getting AWS Certified Cloud Practitioner, AWS Certified Architect Associate and AWS Certified Developer Associate in these 3 months until end August when the Fall Semester restarts?&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Lambda to the rescue of multi-tenant environments</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Sun, 17 May 2020 18:31:27 +0000</pubDate>
      <link>https://dev.to/asdf/lambda-to-the-rescue-of-multi-tenant-environments-4h80</link>
      <guid>https://dev.to/asdf/lambda-to-the-rescue-of-multi-tenant-environments-4h80</guid>
      <description>&lt;p&gt;The architecture of the cloud is built upon a multi-tenant model where a single software instance provides a service for multiple tenants. Multi-tenant solutions enable growth for large businesses that keep on expanding over the years. Before the cloud, it was common for organizations to run into several problems ranging from scalability, reliability and security issued.&lt;/p&gt;

&lt;p&gt;However, with the rise of services such as AWS Lambda, these companies have managed to solve most of their problems on their SAP systems. It turns out that multi-tenancy is a win-win situation since it creates an efficient cost-model for software and cloud vendors while providing customers with a service that allows them to deploy code without internally hosting their own data-centers.&lt;/p&gt;

&lt;p&gt;While most PaaS offerings are designed to run 24/7, Lambda is completely event-driven; it will only run when invoked. Furthermore, Lambda can instantly scale up to a large number of parallel executions. Scaling down is also handled automatically (once a function terminates, all of its associated resources are destroyed). The support for multiple languages and frameworks (Java, Go, PowerShell, Node.js, C#, Python, and Ruby) is quite useful for operating serverless websites, cleaning up data, predictive page rendering based on user preferences, integration with external services, log analysis on the fly... It turns out that this service gives us unprecedented flexibility. We can also do backend cleaning, real-time data processing, and automated backups.&lt;/p&gt;
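&lt;p&gt;The event-driven model boils down to a single entry point per function. Below is a minimal handler sketch (the event shape, with a tenant_id and a records list, is an assumption made up for this illustration): each invocation starts from a clean slate, which is part of what keeps tenants isolated from one another.&lt;/p&gt;

```python
# Minimal sketch of an event-driven Lambda-style handler; the event shape
# ("tenant_id" plus "records") is a hypothetical example, not a real schema.

def handler(event, context):
    """Process one tenant's records in isolation; no state is shared
    between invocations, so tenants cannot step on each other's data."""
    tenant = event["tenant_id"]
    processed = [record.upper() for record in event.get("records", [])]
    return {"tenant_id": tenant, "count": len(processed), "records": processed}

# Local invocation with a fake event (the context argument is unused here).
result = handler({"tenant_id": "acme", "records": ["a", "b"]}, None)
print(result)
```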

&lt;p&gt;Lambda should be our first alternative when it comes to performing repetitive tasks, thus allowing our services to keep performing their job (such as responding rapidly to user requests). In summary, it will offload many processes that would otherwise slow down our system.&lt;/p&gt;

&lt;p&gt;However, we should keep in mind that there are some limitations associated with Lambda. Despite being charged only for compute time, disk space is limited to 512 MB, memory can vary from 128 MB to 3 GB, and there is an execution timeout of 15 minutes per function.&lt;/p&gt;

&lt;p&gt;Nevertheless, we can reduce the impact of the above-mentioned limitations. For example, if we are running a Lambda function for a long time, we should consider moving it to an EC2 instance, or even to Elastic Beanstalk.&lt;/p&gt;

&lt;p&gt;In terms of multi-tenancy, a customer should not have to think about how their actions might impact other tenants. However, reliability is a primary cause for concern. Take as an example a customer who runs a load test against a production endpoint and causes an outage for the rest of the customers on the shared environment.&lt;/p&gt;

&lt;p&gt;Regarding security, since customers are co-hosted on the same virtual hardware, traditional scaling techniques, such as spinning up new virtual hosts, might increase the risk of a breach or cyber threat.&lt;/p&gt;

&lt;p&gt;Even the most sophisticated cloud environment could experience a sudden spike in traffic, and the multi-tenant space might become overwhelmed, thus increasing latency or downtime.&lt;/p&gt;

&lt;p&gt;Should companies pay for additional hardware even though it will be sitting idle most of the time? Or should they create an access endpoint where they can upload a ZIP file full of code that will be run via functions? In short, Lambda will provide that isolation that each tenant is looking for. Each customer will benefit from having a unique environment where all of their data will be separated from other customers and data integrity will no longer be a concern. To sum up, with Lambda, each customer will pay for the amount that they demand and use, rather than paying for the maximum amount that they might need in the worst case scenario.&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Parallelism != Concurrency</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Sun, 17 May 2020 14:58:04 +0000</pubDate>
      <link>https://dev.to/asdf/parallelism-concurrency-1ha4</link>
      <guid>https://dev.to/asdf/parallelism-concurrency-1ha4</guid>
      <description>&lt;p&gt;Parallel Programming always seems to be an esoteric topic whose complexity does not actually represent the general case of computing. Whenever we hear about parallel programming our mind is flooded with extremely difficult programming paradigms that result from deadlocks, monitors, mutual exclusion... As a consequence we tend to associate 2 concepts that are closely related but that are not the same. As the title states, parallelism and concurrency are 2 different concepts.&lt;/p&gt;

&lt;p&gt;First of all, parallelism has nothing to do with concurrency. In fact, parallelism involves a nondeterministic composition of programs that results in deterministic behavior. This end result can be achieved through different methods, and we want to explore all of the available alternatives to our advantage. On the other hand, concurrency is all about managing the unmanageable: events arrive for reasons beyond our control, and we must react to them.&lt;/p&gt;

&lt;p&gt;And yes, concurrency is a requirement for parallelism. However, concurrency is also a requirement for serialization and sequential programs. Indeed, the timing signal on a processor chip also acts as a synchronization mechanism that we have to coordinate with other processing units. The point is that, even if we have to deal with concurrency, it is not relevant to the implementation of a parallel computation.&lt;/p&gt;

&lt;p&gt;In parallel programming, rather than focusing on the exact order in which tasks will be executed, we think in terms of dependencies among operations and we assign a cost to the steps of the program that we are actually writing. This means that we should not bother thinking about how to schedule the workload onto processors. Instead, we should think about the dependencies among the different stages of a large computation.&lt;/p&gt;

&lt;p&gt;Think about it in terms of functional programming, where we transform a given sequence into another sequence without destroying the original. Therefore, we need not worry about interference between two different operations. All that we care about is that we can run those operations in parallel.&lt;/p&gt;
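&lt;p&gt;A small sketch of that idea using Python's standard thread pool: the transformation is pure, so we only declare the data dependency (each output depends on exactly one input) and let the executor pick the schedule. The execution order is nondeterministic, but the result is deterministic.&lt;/p&gt;

```python
# Dependency-free parallelism: a pure function mapped over a sequence,
# so tasks cannot interfere and the execution order is irrelevant.
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # Pure function: no shared state, so parallel execution cannot interfere.
    return n * n

data = [1, 2, 3, 4, 5]

# Sequential baseline.
sequential = [square(n) for n in data]

# Parallel map: we state only the dependency structure (one output per
# input); the executor decides how to schedule the work onto workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, data))

# Nondeterministic execution order, deterministic result.
assert parallel == sequential
print(parallel)
```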

&lt;p&gt;To put it in a nutshell, functional programming is one of the fundamental pillars of parallel programming. One reason is that the fewer dependencies we have, the more opportunities for parallelism can be exploited.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>TOR? VPNs? Proxies?</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Mon, 11 May 2020 18:35:31 +0000</pubDate>
      <link>https://dev.to/asdf/tor-vpns-proxies-3gpi</link>
      <guid>https://dev.to/asdf/tor-vpns-proxies-3gpi</guid>
      <description>&lt;p&gt;Should you combine TOR withe networks and technologies such as proxies or VPNs? Does that even have negative consequences?&lt;/p&gt;

&lt;p&gt;Tor works by sending network traffic through 3 volunteer-run nodes. Thus, your packets will be encrypted 3 times, but each node only has the key to remove its own layer of encryption. As a consequence, Tor allows you to establish a connection to a server without third parties knowing the entire path of your request.&lt;/p&gt;
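&lt;p&gt;A toy model of that layering (this is NOT real cryptography; base64 plus a relay label stands in for encryption purely to show the structure): the client wraps one layer per relay, and each relay can peel only the layer addressed to it.&lt;/p&gt;

```python
# Toy illustration of onion layering (NOT real cryptography): each relay
# removes exactly one layer, so no single relay sees the whole path.
import base64

def wrap(message, keys):
    data = message.encode()
    # Wrap for the exit node first, then middle, then guard, so the
    # guard's layer ends up outermost.
    for key in reversed(keys):
        data = base64.b64encode(key.encode() + b"|" + data)
    return data

def peel(data, key):
    decoded = base64.b64decode(data)
    layer_key, payload = decoded.split(b"|", 1)
    assert layer_key == key.encode(), "wrong relay for this layer"
    return payload

keys = ["guard", "middle", "exit"]
cell = wrap("hello", keys)
for key in keys:          # each relay peels exactly one layer
    cell = peel(cell, key)
print(cell.decode())
```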

&lt;p&gt;A proxy, however, acts as an intermediary server for requests from our devices to other devices or services. Therefore, the proxy owner knows both where you are connecting from (your IP address) and what you are connecting to (the destination web server). Besides, proxies do not necessarily protect your connection with SSL encryption and HTTPS.&lt;/p&gt;

&lt;p&gt;As far as VPNs go, these are the extension of a private network across a public one. VPN traffic is encrypted to keep the content of the private network from being accessible outside the VPN. One example would be a connection to a corporate network over the Internet to access applications and files at your workplace. In fact, the VPNs that we usually use as consumers tend to be just groups of encrypted proxies that we can pick from instead of connecting directly to destination sites. Your IP and destination will be hidden from your local network and Internet Service Provider, but not from your VPN provider. Also, remember that HTTPS is still required to hide your content from them.&lt;/p&gt;

&lt;p&gt;We can infer that the weakness of proxies and VPNs is that our anonymity is only as strong as our provider's promise. In a way, that is the same as letting a stranger hold a gun against your head simply because they have promised not to pull the trigger as long as you pay them a yearly fee. Tor, on the other hand, will hide your entire path, and your trust will be distributed across different nodes. Continuing the previous example, that would be like giving one person the gun and another the bullet... and without paying a dime! In fact, there are more than 6,000 nodes spread across the globe.&lt;/p&gt;

&lt;p&gt;Tor hides your IP, encrypts traffic, prevents ISP snooping, and is free to use (and consequently leaves no financial trace). It is easy to use, and most censorship can be evaded using bridges. The drawbacks are its slowness, the fact that exit nodes can snoop on plain HTTP traffic, and that torrenting breaks your anonymity. Tor does not support UDP either.&lt;/p&gt;

&lt;p&gt;VPNs are faster in general. They also hide your IP and encrypt traffic to the VPN server, thus preventing local network and ISP snooping. However, your trust is placed on a single point of failure: your provider can still see who you are and where you are going, and can even see your content when HTTPS is not enabled.&lt;/p&gt;

&lt;p&gt;Using both a VPN and Tor together is not something I would encourage. In this case, 2 is not better than 1. Tor is not meant to be run in conjunction with a VPN or another service. By doing so you essentially create a permanent entry or exit node, which often has a money trail as well. You would be increasing your risk for no theoretical benefit.&lt;/p&gt;

&lt;p&gt;Instead of using Tor over VPN (connecting first to your VPN and then to Tor), you could evade censorship by using the bridges that are included with Tor, or even request additional bridges, which, contrary to VPNs, leave no money trail. Even if you ended up on a watch list, Tor has more than 2 million users a day. Do you really believe that a $5/month VPN service would prevent you from being traced?&lt;/p&gt;

&lt;p&gt;Conversely, first establishing a connection to Tor before connecting to the VPN service would allow you to reach services that block Tor nodes. This setup makes it easier to access those services, but it increases your risk of being de-anonymized for 2 reasons. First, VPN providers often know who you are from your money trail. Second, Tor splits your data streams across different circuits precisely to prevent traffic correlation from de-anonymizing users; since all your traffic would now exit from the VPN provider's IP, that correlation becomes a lot easier.&lt;/p&gt;

&lt;p&gt;In conclusion, if privacy is your goal, you should generally be using TOR. After all, an IP address is just one more way to track users. We should still be careful about other threats that come in the form of fingerprinting.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Debunking Myths: Tor - VPN</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Mon, 11 May 2020 18:34:10 +0000</pubDate>
      <link>https://dev.to/asdf/debunking-myths-tor-vpn-1ej4</link>
      <guid>https://dev.to/asdf/debunking-myths-tor-vpn-1ej4</guid>
<description>&lt;p&gt;The Tor network is a free and decentralized service that was designed to protect the anonymity and privacy of its users.&lt;/p&gt;

&lt;p&gt;Tor works by sending your network traffic over thousands of voluntarily run nodes. Every time you connect to Tor, it will build a path of encrypted layers using 3 nodes. The entry node sees your IP address, but does not see the destination of your connection. The middle node can determine where its traffic came from and where it goes to, but it can see neither your IP address nor the domain you are trying to access. Finally, the exit node will forward your traffic to the destination domain.&lt;/p&gt;

&lt;p&gt;It is important to note that you will stick with the same entry node for 2 to 3 months, while the middle and exit nodes are randomly chosen from the pool of Tor nodes.&lt;/p&gt;

&lt;p&gt;There is a common misconception that, since the US government provides funding for the Tor project, it could have a back door and even keep users on a watch list. This idea does not hold, because the US government uses Tor to hide its own activities online; if Tor had a back door, it would not be safe for them to use. Also, analysis shows that around 2 million users access Tor each day, a number far too large for blanket surveillance of Tor users to be practical. To de-anonymize a Tor user, an attacker would need control over both the entry and the exit node of that user's circuit. Since the entry node changes only every 2 to 3 months, the attacker would have to keep their nodes running for at least that long before they get another chance of becoming the entry node, which is prohibitively expensive.&lt;/p&gt;

&lt;p&gt;The belief that the exit node can access your traffic is partially true. Your traffic is encrypted while entering and travelling through the Tor network, but the connection between the exit node and the website is not. However, with HTTPS in place, the exit node can only see an encrypted HTTPS packet that needs to be forwarded.&lt;/p&gt;

&lt;p&gt;Another myth is that people use Tor only to access the dark web. The truth is that if Tor were taken away, these criminals would simply move to another medium to conduct their business. It is a reality that certain individuals host illegal websites, and many of them have been caught doing so. In each and every one of these cases, however, Tor itself was not the point of failure. This means that even with an anonymous connection, we must take into account other factors such as software bugs, government infiltration, human error, and data leaks.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Networking security on IPv4 and IPv6</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Mon, 11 May 2020 18:32:54 +0000</pubDate>
      <link>https://dev.to/asdf/networking-security-on-ipv4-and-ipv6-1db0</link>
      <guid>https://dev.to/asdf/networking-security-on-ipv4-and-ipv6-1db0</guid>
<description>&lt;p&gt;Imagine yourself working for a big corporation, envision thousands of computers interconnected, and visualize all the data flowing through the network. Having access to multiple applications makes our lives easier. We can connect with other companies through mail services, buy from our houses, withdraw money from online banks…&lt;/p&gt;

&lt;p&gt;As daily users of the Internet, the truth is that we do not need much imagination to see this functionality, which started to be conceived around the '60s and '70s. As a matter of fact, the OSI (Open Systems Interconnection) model, forged in 1984, was set as a reference for the future of systems connectivity. However, years later, the TCP/IP model prevailed despite its errors at a theoretical level. Its theoretical bases are the cornerstone of both local networks and the global network (the Internet).&lt;/p&gt;

&lt;p&gt;However, due to the relatively slow adoption of new protocols such as IPv6, we must be aware of the new attack vectors introduced by the services and functionality of this protocol. Its differences from IPv4 demand a new learning paradigm rather than a mere process of adjustment.&lt;/p&gt;

&lt;p&gt;It is likely that most users will not pay attention to the implementation of IPv6. Nonetheless, it is undeniable that if their online communication is intercepted, their data will be leaked and exposed. Any company or user is vulnerable to these attacks; there are several examples of credential theft against web services, private individuals, and big companies such as SONY.&lt;/p&gt;

&lt;p&gt;If we analyze these hacks, we come to the conclusion that most attacks take place within the company, where security measures are looser. Also, many IT technicians focus too much on functionality and end up overlooking security. If we take a look at the past, we realize that network protocols were implemented to make our communications faster, with more services and a greater number of PCs, but not necessarily more secure.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Pass by reference vs pass by value</title>
      <dc:creator>asrf</dc:creator>
      <pubDate>Mon, 11 May 2020 18:30:45 +0000</pubDate>
      <link>https://dev.to/asdf/pass-by-reference-vs-pass-by-value-3ggg</link>
      <guid>https://dev.to/asdf/pass-by-reference-vs-pass-by-value-3ggg</guid>
<description>&lt;p&gt;After being introduced to pointers, structs, arrays, and so on in the C language, you quickly realize that, in the strictest sense of the word, everything in C is pass-by-value.&lt;/p&gt;

&lt;p&gt;Pass-by-value means passing a copy of a variable to a function. On the other hand, pass-by-reference means passing an alias of the variable to the function. C can pass a pointer into a function, but that is still pass-by-value: we are copying the value of the pointer (an address in memory) into the function. Also, when we pass a pointer in C, the syntax requires a dereference to get the value. In languages like Pascal and C++, however, that is not a requirement.&lt;/p&gt;

&lt;p&gt;Let’s take swap as an example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#include &amp;lt;stdio.h&amp;gt;&lt;br&gt;
    &lt;br&gt;
    void swap(int a, int b) {&lt;br&gt;
      int tmp = a;&lt;br&gt;
      a = b;&lt;br&gt;
      b = tmp;&lt;br&gt;
    }&lt;br&gt;
    &lt;br&gt;
    int main(void)&lt;br&gt;
    {&lt;br&gt;
        int a = 1;&lt;br&gt;
        int b = 2;&lt;br&gt;
        printf("before swap a = %d\n", a);&lt;br&gt;
        printf("before swap b = %d\n", b);&lt;br&gt;
        swap(a, b);&lt;br&gt;
        printf("after swap a = %d\n", a);&lt;br&gt;
        printf("after swap b = %d\n", b);&lt;br&gt;
        return 0;&lt;br&gt;
    }&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here is the output:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;before swap a = 1&lt;br&gt;
    before swap b = 2&lt;br&gt;
    after swap a = 1&lt;br&gt;
    after swap b = 2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can see that the swap has had no effect on our variables. This is because in pass-by-value, copies of the values are passed as parameters, which means we are swapping the copies inside the function instead of the original variables.&lt;/p&gt;

&lt;p&gt;Now, what if we used pointers?&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#include &amp;lt;stdio.h&amp;gt;&lt;br&gt;
    &lt;br&gt;
    void swap(int *a, int *b) {&lt;br&gt;
      int tmp = *a;&lt;br&gt;
      *a = *b;&lt;br&gt;
      *b = tmp;&lt;br&gt;
    }&lt;br&gt;
    &lt;br&gt;
    int main(void)&lt;br&gt;
    {&lt;br&gt;
        int a = 1;&lt;br&gt;
        int b = 2;&lt;br&gt;
        printf("before swap a = %d\n", a);&lt;br&gt;
        printf("before swap b = %d\n", b);&lt;br&gt;
        swap(&amp;amp;a, &amp;amp;b);&lt;br&gt;
        printf("after swap a = %d\n", a);&lt;br&gt;
        printf("after swap b = %d\n", b);&lt;br&gt;
        return 0;&lt;br&gt;
    }&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here is the output:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;before swap a = 1&lt;br&gt;
    before swap b = 2&lt;br&gt;
    after swap a = 2&lt;br&gt;
    after swap b = 1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The output shows that the original a and b values have been successfully swapped. However, it is important to note that even though we are using pointers, we are still passing by value: we pass copies of the pointers to a and b into the function. The function changes the values stored at those addresses, but the pointer itself is copied into the function, which is pass-by-value.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
