Happy Friday, dev.to
Let's get introspective.
We all know this situation: Someone is discussing a tech topic and, suddenly, bam!, the...
I really need to understand software distribution licenses, especially in the context of open-source software. I just don't know what's up with all this MIT, FreeBSD, Apache 2.0, and GPL stuff.
Ooh! This is one I might be able to help with! Do you have any specific questions? Or do you just need a general run-down?
I think the best place to understand licenses is: choosealicense.com/licenses/
Easy and quick to understand, without all the confusing legal terminology.
Hope it helps.
Ooh that is a great site for comparison! Thanks for the share!
Indeed I do. Thanks! I desperately need a general run-down of how each of them differ from each other.
Sweet! So basically there are three types of software licenses, at least as I see it.
Commercial licenses
These are your standard "pay for it, use it" licenses. They can have lots of different conditions and are as numerous as the companies that use them.
Open, but ideological
These licenses open the source and the use of software, but with extra protections around derivatives or use cases. All GPL variants fall into this - they require any software that uses GPL code to also be GPL licensed. It's a literal viral license - using GPL code makes your code GPL.
Fully open
These licenses exist to share software far and wide, with very few restrictions. I personally believe these to be the only true "free" software licenses. Social cost is still a cost! MIT, WTFPL, BSD, and a lot of others fall into this. Often the protections in these are around copyright (you can't just take this software and claim you made it) and legal protections for misuse (if you use this software and it causes damage to your business, you can't sue the creator). Apache also falls into this category for me, but includes protection for patents.
There are tons of variants on these licenses, for pretty much anything you want to do. For some real world context, here is a great piece on the React license changing from BSD+Patents to MIT.
My recommendation is this: unless you have a great reason to use something else, you probably want MIT. It's the most permissive, has good protections, and is widely used.
If you want to see what other licenses do, check out TL;DRLegal. It is a great site for all things TOS, licenses, and EULAs.
Hope this helps!
The Saturday before we went open source with DEV, I had the sudden realization of all the last-minute details with our license to get right. I knew enough to know we had to settle on some things, but we had punted on nailing down the details and nobody in our company had a better understanding than me.
And I was at a wedding that weekend. No environment for a lot of last minute research.
And then someone walks into the wedding ceremony wearing a GNU t-shirt. It's a bold fashion choice for a wedding. Nobody wears a GNU t-shirt to a wedding without also being seriously willing to talk about open source licenses. I knew we wanted to go GPL, but I had a lot of questions.
I don't believe in god, but when that man walked into that church I felt some belief wash over me.
I'd love to know more about the decision to use AGPL over MIT! What was the primary motivating feature of AGPL that made you go with it?
User authentication. It just hasn't fully clicked with me yet.
Any specific spot where you hit a snag? Or the technical side of the concept in general?
I get how it's supposed to work (in theory), but I've never fully understood how interfacing with an authentication service (Auth0, Cognito, etc.) works with user roles on the backend. I've been gun shy to start several projects because I psych myself out about dealing with authentication.
I find the hardest part about authentication is that somebody will use a method, and then somebody else will tell you that that method is incredibly insecure.
Like, lots of people do stuff others deem "incredibly insecure". It can be really hard to settle on any understanding of what appropriate levels of security are in this space.
Services and top libraries tend to give some direction, but any time you want to change your approach or try something new, you're met with a lot of competing beliefs.
I'll throw something in.
Something that took a while for me to get, and I think I see some of that in your query, is the difference between authentication and authorization.
Authentication: How the system trusts you are who you say you are (OAuth, user/pass, federation, saml, 2factor, etc)
Authorization: What are you allowed to do (Roles, rules, permissions)
These two concepts are often built together but benefit from being treated independently.
Authentication would give you an identity, and whatever rules you've built in your software would allow or prohibit that identity from doing various things.
Or another way. Dev.To knows who you are because of how you log in (Delegated or user/pass). But the fact that you can edit only your own profile and not mine, is what your identity is authorized to do.
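To make that split concrete, here's a tiny sketch in JavaScript (all names are invented for illustration): authentication turns a token into an identity, and authorization is a separate rule check on that identity.

```javascript
// Hypothetical sketch: authentication yields an identity;
// authorization is a separate rule check on that identity.

// "Authentication": look the token up and return who it belongs to.
const sessions = { "token-abc": { user: "max", roles: ["member"] } };
function authenticate(token) {
  return sessions[token] || null; // null = we don't trust this claim
}

// "Authorization": given an identity, decide what it may do.
function authorize(identity, action, resourceOwner) {
  if (!identity) return false;
  if (action === "edit-profile") return identity.user === resourceOwner;
  return false;
}

const me = authenticate("token-abc");
console.log(authorize(me, "edit-profile", "max"));   // true: my own profile
console.log(authorize(me, "edit-profile", "devon")); // false: someone else's
```

The same split holds when an outside service does the authentication part: it hands you the identity, and your own rules still decide what that identity may do.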
Not sure if this is close to helpful, but it's what I'll offer.
It definitely helps! And yes, bridging the gap between authentication and authorization has been a sticking point. I can totally understand how to do local username/password authentication and pair it with authorization... but once we get into Identity/Authentication as a Service, I haven't been able to make the mental connection.
I think I'm with you. I'd bring the whole auth service thing back to trust in this case.
Let me put this on a spectrum. What we're talking about is trusting that someone is who they claim they are.
At the end of the day, each of these approaches makes a claim about who the user is. Each one (hopefully) instills more trust than the one before.
When you use something like Auth0, or Google, or Github to log in, you're essentially saying, "Look, I trust that you solved this problem of knowing who people claim they are, so you tell me who they are and I'll consider that good"
The mechanics often obscure this, because there's a lot of back and forth. That's part of the dance of trust. Each step in the mechanism exists to prohibit untrusted sources from getting through.
Excellent! That's enough of a good start to give me the confidence to try and implement something this weekend. Thanks!
Keep at it. You'll knock it out in no time. When you do, you'll seem like a wizard to everyone.
I'm following yours - I think I'm pretty good on Monads, finally. That understanding came through use alone, though, not blog posts. That said, have you seen Functors, Applicatives, And Monads In Pictures?
I still can't quite figure out how to use lenses. They look like something I should be able to get my head around but I'm not quite there yet.
A lens is just a way to get and set a part of a data structure. Seems like a "why would anyone need this?" situation at first. However, lenses can be composed to provide shortcut access to deeply nested state. The main place I've found this useful is in doing functional UIs, because I end up with a deep state hierarchy to represent UI pieces, consisting of both records and unions. Even if it were just records, updating a nested property is pretty gross with immutability. E.g.
So if this is something you do a lot, you can construct a lens to simplify updating R. A lens has `get` and `set` operations, but this is only `set`. And you could even compose from other lenses. Here is a naive example of only the `set` part. Of course, lens libraries exist to make it nicer to construct these with both `get` and `set` operations. However, I find I do not really use lens libraries; I usually just construct my own helper-function "lenses" as needed.

Wow, thank you! This is really helpful. I've definitely run into nested property hell. I think you're right, I've been overthinking it. I'm pretty sure I could apply this pattern to a recent project of mine - practical use is the best way to learn it.
I really appreciate the examples!
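To make the lens idea above concrete for anyone else reading, here's a rough JavaScript sketch (my own helper names, not from any lens library): a lens pairs a `get` with an immutable `set`, and composing lenses gives shortcut access to nested state.

```javascript
// A lens is just a {get, set} pair; set returns a new object (no mutation).
const lens = (get, set) => ({ get, set });
const prop = (key) => lens(
  (obj) => obj[key],
  (obj, value) => ({ ...obj, [key]: value })
);

// Composing two lenses focuses deeper into the structure.
const compose = (outer, inner) => lens(
  (obj) => inner.get(outer.get(obj)),
  (obj, value) => outer.set(obj, inner.set(outer.get(obj), value))
);

const theme = compose(prop("settings"), prop("theme"));
const state = { settings: { theme: "light", lang: "en" }, user: "max" };

const next = theme.set(state, "dark");
console.log(theme.get(next));      // "dark"
console.log(state.settings.theme); // "light" -- the original is untouched
```

The win is that updating a deeply nested property becomes one call instead of a pyramid of spread operators.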
Where did you use the Functor, Applicative, and Monad patterns? Can you give some examples as well?
Here's a great overview of what a parser could look like using Applicatives, for example.
As a tester and someone involved with tech, I should understand performance and security testing...
But... uhh... I'm good at making sure features meet business requirements, at least?
What is it that escapes you about such types of testing? Maybe I can help with a general idea.
I'll preface this by saying that in my 4-year career, I've never had non-functional requirements. Beyond "make it work", there's never been an official guideline for how secure or performant an app I was working on had to be. Until it isn't performant enough, and then everyone's pissed off that users are complaining despite the app working to spec.
Performance
What does it mean to be performant? How do you know if it can scale well? In my current project (2 years), I think we're now vaguely starting to look at benchmarking our current database (x1) and x2, x5, and x10 by making those data sets and letting JMeter run loose on all our endpoints. But it's not like we have a goal of being able to get back all the responses in under 500ms with the x10 set.
Security
Aside from hearing something like OWASP once, I have no clue how to check that an app is secure against common attacks or how to prevent that. I suppose I could Bobby Tables all our inputs, but that seems... naive? Like I'm trying to do something without actually knowing how to do it.
Like I said at the start, everything I've worked with at this point has been very "I want it to do x" "K, it does x" "Cool, ship it" but I've never had that set up with performance or security so I have no clue what I don't know.
This is the million dollar question. If you can't answer it maybe the system is already performant enough. You'll know if the performance is not satisfactory (user frustration in case the system has a UI or slow processing time to obtain a response from an inquiry).
You measure it. You'll never know with 100% certainty, but, for example, if you have 1000 users per minute and your system is at peak CPU most of that time, and the RAM baseline isn't great either, you know it's not going to perform well if you double that.
You also obviously need to be realistic, especially if you control how many users use your system (if it has a subscription model and you know the subscription rate). Yeah, one day you might be hit with 10x the traffic, but if you optimize for 2, 3, 4 or 5x you are already learning something about your system.
I don't know if this system we are talking about is a webapp or something else but you need to figure out the correct metrics and decide what's acceptable for you. This is a good intro:
The case for performance (Part 1)
Joshua Nelson ✨
If you're using a decent ORM, you're already protected against a few of those vulnerabilities (mainly SQL injection).
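To see why the parameterization an ORM gives you matters, here's a deliberately unsafe sketch (names invented) showing what string-built SQL lets an attacker do.

```javascript
// DON'T do this: building SQL by string concatenation.
function unsafeUserQuery(username) {
  return "SELECT * FROM users WHERE name = '" + username + "'";
}

const evil = "x' OR '1'='1";
console.log(unsafeUserQuery(evil));
// SELECT * FROM users WHERE name = 'x' OR '1'='1'
// The injected clause makes the WHERE match every row.
// A parameterized call -- e.g. query("... WHERE name = ?", [username]) in most
// drivers/ORMs -- sends the value out-of-band, so it can't rewrite the SQL.
```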
The OWASP Top Ten PDF is the most boring thing ever to read, but you're in luck (😂): Troy Hunt, a security specialist and the author of Have I Been Pwned?, has a 1-hour course on the OWASP Top 10. His blog, troyhunt.com, is also quite interesting.
I haven't watched it but Hunt knows his stuff.
Also check this "checklist" out:
Web Developer Security Checklist V2
Michael O'Brien
Oh man, awesome response! Lots of things I can look into the next time I get a chance.
Though I just found out I need to stopwatch every call in our app because the CEO complained during a demo that it was slow, so... yeah... performance? :P But manual testing is both my least favorite thing and a huge timesink, so I won't have time for a while.
That's why you need to start automating this stuff :)
The timeline for when this needs to be done is Tuesday, apparently, to prioritize performance fixes. And they already barely approve of me reading my JMeter book.
And even though it literally has "testing" in the name, the project manager is making the devs do the automated testing of it. Maybe so the "real" engineers will do it right? Whatever. But when shit breaks, I suddenly need to drop all the real work and manually click things for a week.
What's the easiest thing to do that can give you some value? Going through the whole JMeter book is going to take time.
What about employing a third party load testing tool or a tool like wrk to have a baseline of a few endpoints?
Maybe you can also use Cypress to record those use cases you usually do manually...
I've written apps with sessions and cookies, and seem to figure it out just enough to get it to work, but the connection between sessions and cookies still baffles me. I've had it explained to me probably a dozen times and it just won't click. For example, both expire. Why?
Are you talking about browser side session storage or server side session storage?
So, I get that a session gets created server-side and a cookie gets placed client-side. The cookie has an ID that authenticates you to the server so you don't have to login. Is the session data stored in the session or the cookie? Is it fetched from the server every time you need it? Does the session expire? Does the cookie expire? How are cookies shared securely? I realize this is a lot of questions, but I feel like I'm fundamentally missing the underlying process.
Hi Max,
I'll try to give you an answer.
A cookie is just a particular type of HTTP header: `Cookie:`. Cookies were invented to store a little bit of information sent back and forth between stateless clients and servers (HTTP holds no state).

So, now you have this header you can store stuff in: the list of items you added to your cart, a username, whatever. Cookies have no actual knowledge of what they carry.
In addition to the actual data, a cookie has a bunch of optional metadata (attributes): expiration date, path, domain, security, http only and others.
There's a bit of a misplaced naming convention because technically a "session cookie" is a cookie that dies after the user has finished their own session (closed the tab or the window of the browser). These cookies do not have an expiration date, which signals the browser to delete them after the user's session is finished.
Unfortunately, because we're programmers and we're trash at naming things, other people started using the words "session cookie" also to identify the particular set of values and attributes used to recognize a user's session against a webserver over a span of time. To make it simple: the thing that keeps you identified to a single website even when you close the browser, turn off the computer, come back two days later, reopen the browser and voila, you're still known by the server.
This is not magic, it's just using a couple of tricks to overcome the fact that HTTP is stateless.
Let's go over the simplest scenario, using a session cookie to log in. You go to website.com for the first time, type username and password, hit submit, and then the server (other than checking the credentials) does one particular thing: it adds the following header to its HTTP response:
Set-Cookie: token=supersecret; Expires=Wed, 19 Nov 2019 12:00:00 GMT
What this tells the browser is the following: store a cookie, for website.com, with these values (token=supersecret) until a year from now.
In addition to sending the cookie to the client, the server stores somewhere (usually a database, but not necessarily) a sort of key-value entry recording that this particular token is associated with the newly identified user.
Then you go about your business. The next time you go to website.com, the browser checks its storage, sees it has a cookie, and sends it with the HTTP request. The server verifies it's still valid and then lets you in.
Once the cookie expires, the server will bounce you :)
The identification, the storage of the token on the client and the identifying info on the server is what constitutes the concept of "session".
This is a way to build state on top of a stateless protocol.
I'll try to answer your questions now:

"Is the session data stored in the session or the cookie?"
You can, but IMHO you shouldn't store the actual session data in the cookie; you should store an identifier, and keep the data in a temporary store (a cache) or a persistent one (a database).

"Is it fetched from the server every time you need it?"
It's the client that keeps sending the cookie to the server every time you hit a URL. Keep in mind that HTTP is stateless, so the server has no idea that you are the same user that asked for a page two seconds ago.

"Does the session expire?"
If the session is stored in the cookie, it will disappear when the cookie expires. If you store it on the server in a time-limited key in a cache, the cache server will remove it for you. If you store it in a database, make sure your server framework has the ability to clean up stale sessions.

"Does the cookie expire?"
Yes.

"How are cookies shared securely?"
The cookie specification has at least a couple of countermeasures: `Secure` and `HttpOnly`. The first one means the cookie can only be transmitted over HTTPS; the second one makes the cookie invisible to JavaScript. A newer option is to set the `SameSite` attribute to prevent cross-site attempts to steal the cookie.

Hope this helps! Let me know if you have any more questions!
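To make the whole dance concrete, here's a minimal sketch (in-memory store, invented names, toy token generation - don't use this as-is) of the two halves: issuing the `Set-Cookie` on login, and resolving the `Cookie` header back to a user on later requests.

```javascript
const sessions = new Map(); // server-side store: token -> session data

// On successful login: remember the token and tell the browser to keep it.
function createSession(user) {
  const token = Math.random().toString(36).slice(2); // toy token, NOT secure
  sessions.set(token, { user, createdAt: Date.now() });
  return { setCookieHeader: `token=${token}; HttpOnly; Secure`, token };
}

// On every later request: parse the Cookie header and look the token up.
function resolveSession(cookieHeader) {
  const match = /(?:^|;\s*)token=([^;]+)/.exec(cookieHeader || "");
  return match ? sessions.get(match[1]) || null : null;
}

const { setCookieHeader, token } = createSession("max");
console.log(setCookieHeader);                  // what the server sends back
console.log(resolveSession(`token=${token}`)); // { user: 'max', ... }
console.log(resolveSession("token=forged"));   // null -- server never issued it
```

The cookie only ever carries the token (the VIP pass); the actual session data never leaves the server.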
Thank you so much! This was a seriously fantastic explanation at just the right level of complexity. I'm going to go over it a couple of times when I get home from work, but here's the ELI5 version, if I understand correctly: a cookie is a special HTTP request header that contains a secret string and acts like a sort of VIP pass. Every time we send a request to the server, as long as the cookie hasn't expired yet, this "pass" is shown to the server, which grants us user data without having to supply user authentication info.
When we talk about session data, is that just the data that gets returned from the server? Is it a subset of all user data?
Your description is correct.
If you store actual session data about the user inside the cookie, then the session data is exactly that. It's not a great idea, because if the cookie gets spoofed, the third party will learn information about your user.
If instead you only store the token (the VIP pass) in the cookie, the session data resides on the server, and it will be used to populate the next HTML response (e.g. "Hi Max, access your profile") or any other response format the client asks for. That's when the info about the user gets sent back to the client.
You, in your application, decide what "user data" is. It could be just a name, it could be anything attached to a single user
When people ask me if something is scalable, I don't know what they mean.
They probably don't either. There are many levels at which something can be scalable. An algorithm can be scalable (to processing high numbers of items in a given time period) if it has low Big-O complexity. A service can scale vertically if you can get increased performance by adding more resources (CPU/memory/storage), or horizontally if you can deploy more copies of the service to improve performance. An architecture can be scalable in some ways but not others. Usually the person has to tell you which performance goals are important to be able to judge whether or not something scales in a way that is meaningful to the problem.
ahhaha most definitely
@socratesdz : scalability is a little bit of a buzzword, as Kasey said, it really depends on the context
As a predominantly backend developer, I do not get a lot of frontend technologies. Here are a few:
Virtual DOM
It's a simulation of the DOM that isn't immediately rendered. By not rendering immediately you can make changes quicker and more selectively.
WebAssembly (wasm)
Essentially assembly code for the web. Other languages might compile to this.
asm.js
Predecessor of wasm that uses much of the same principles but is just JS. Uses some tricks to optimize code. No longer relevant.
Web Workers
Afaik just non-memory-sharing threads for the web.
I find my relationship with versioning, versioned releases, etc. to be pretty wonky.
As a developer mostly of web apps, which had no practical use for named releases and versions, I feel a sense of unknown unknowns around versioning best practices.
I like to use semantic versioning. I feel as a developer (and also a user) it's the most straightforward
semver.org/
This is actually somewhat controversial. E.g. semver considers certain things backwards compatible, but how does it prove this? (it doesn't)
So unknown unknowns is technically the right feeling to have :-)
I recently dug into versioning because I was working on a Chrome Extension, which requires versioning. All I can say is that I definitely feel the sense of unknown unknowns and that my short deep dive left me feeling like everyone feels that way about versioning. It's a black box of magic and mystery.
git! Too often if I'm trying to do something tricky beyond the basics in git, it will destroy code somehow.
I feel like I kind of get git, but I've been mostly using it as a glorified backup and "work on the same code on different machines" kind of tool, and definitely not to its full potential.
I feel like I would need to actually use it as a collaborative coding tool before being able to fully grasp more of it.
I agree with this. Once you join a project of 30+ devs, git's real power shines, and I am slowly getting better at the more complex stuff.
Have you seen The Git Parable? It's a smooth read that explains why git does what it does pretty well. With that knowledge, you can use git more consciously, i.e. less destructively.
I think next step to unlocking more of the power of git would be learning about branching strategies. Atlassian's and GitHub's articles are very informative. These are must-reads though:
Good luck!
This is a big challenge! It takes a while to learn to navigate `git` un-destructively! There are some good resources out there, but the gist, at least of how I use git, is: avoid `push -f`, it's dangerous!

I'd love to know if you have any specific cases where your code gets clobbered by git. I run into them every now and then, but for the most part I manage to avoid it!
I think rebasing always feels painfully awkward and my brain doesn't quite understand what git is trying to do. Like having to fix the same conflicts over and over. Especially when you have multiple people working on the same branches or base branches it gets to be overwhelming fast.
While it's true that git kind of demands you understand it on some level before it plays nice with you, there are basically 3 things that will destroy work (and even then, a wizard can perform a ritual to bring it back).

`push -f`
If you get asked to do that, it's because you've done something to re-write history in a way that is inconsistent. Stop there.

`rebase`
rebase is a command that allows you to re-write history. You'll see lots of advice to use it to clean up commits, or to do a `pull --rebase`. Either way, history is re-written. `pull --rebase` is actually pretty benign and can be used most of the time. When you get a conflict, though, it can get messy, as you'll be re-writing your already finished commits.

`reset --hard <commit>`
This little one basically has you go back in time. Harmless when you use it on a commit that you haven't pushed. Uncool when you do it to commits that have already been pushed/pulled. It'll likely trigger a prompt to do a `push -f`.
Git does take time to understand and get comfortable with. It supports a variety of workflows and styles. That flexibility often means it's pretty confusing to get used to. For the overwhelming majority of use cases you'll rarely need any of the commands I mentioned.
Promises vs Callbacks. Mostly I Google my way out of callback hell, or save the stuff I need from API calls straight to the state in React. And I really don't even know what callback hell is, so also 'Callback Hell'. I am proud that I do know that callback hell exists, even though I don't know what it is.
Oh hey, I can actually answer this! Callback Hell is pretty simple - it's when you have a bunch of nested callbacks, making the code difficult to read. Here is an example of what it looks like:
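(The function names below are made up, but the shape is the classic one: each async step only hands its result to a callback, so every dependent step nests one level deeper.)

```javascript
// Each function only exposes its result via a callback,
// so every dependent step nests one level deeper.
function getUser(id, callback) { callback({ id, name: "max" }); }
function getPosts(user, callback) { callback(["post1", "post2"]); }
function getComments(post, callback) { callback(["nice!", "+1"]); }

getUser(1, (user) => {
  getPosts(user, (posts) => {
    getComments(posts[0], (comments) => {
      console.log(comments); // three levels deep just to reach the data
    });
  });
});
```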
The main difference between callbacks and promises is how they land in the call stack. In the example above, the call stack looks kinda like this:
Each function has to finish running before the next one will complete. Each of those is a frame in the call stack.
Promises make functions asynchronous and chainable, which is what happens when you call `.then()`. A promise tells the browser: "Hey, this function will return at some point in the future. When it does, run this next function that I defined in `.then`."

Does that make sense?
Indeed it does. Functions in a callback chain finish inside to outward. When doing async API calls, I've usually been saving the response (after JSON.parse() ing it) to a global variable like the State in React.
In your example if you want funcWithCallBack to return the results of yetAnotherFuncWithCallback, do you just chain return statements from the inside function to the calling functions?
Like:

```javascript
funcWithCallback(someVar, () => {
  anotherFuncWithCallback(someOtherVar, () => {
    yetAnotherFuncWithCallback(aDifferentVar, () => {
      return "result";
    });
  });
});
```
How do I get result from the innermost function to the top?
You can make each callback function return the function before it - like this:
And then those functions would need to return their callback:
Then you should have it outside of the callbacks!
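Here's a hedged sketch (invented names) of the return-chaining idea, next to the promise version that sidesteps it:

```javascript
// Synchronous case: each layer must return its callback's return value.
function inner(value, callback) { return callback(value + "!"); }
function outer(value, callback) { return inner(value, callback); }

const result = outer("result", (v) => v); // each call returns the callback's result
console.log(result); // "result!"

// With promises, the value just flows out of the chain instead:
Promise.resolve("result")
  .then((v) => v + "!")
  .then((v) => console.log(v)); // "result!"

// Caveat: returning through callbacks only works when the callbacks are
// synchronous. For truly async work (timers, network), you can't "return"
// the value at all -- that's exactly what promises (or async/await) solve.
```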
Wow thanks for taking the time to explain that. Much clearer now!
Just to briefly answer two of the design patterns you mentioned: with the factory pattern, the `new` keyword is not needed to instantiate objects, because the factory internally does that for you.
Derek Banas has a playlist on YouTube about design patterns. It's in Java, but the concepts are the same in most languages. It greatly helped me through some coursework. Best thing is, you could just watch and not practice along, and you'd still grasp it.
OOP. Until recently I didn't know things like Design Patterns and Uncle Bob's Clean Code existed.
I'm still fighting with the University 'knowledge' learned years ago vs. all these 'new' patterns.
Sure, I have a better understanding of OOP now, but I feel I'm not yet thinking in POO naturally, the way Design Patterns describe.
What's POO?
Sorry, my mistake. I had written in Spanish, is OOP (Object Oriented Programming)
ahah don't worry. This other answer of mine might help:
But if you have questions, ask them :)
The synonym for monad is callback. You can start by thinking of JS promises where you specify callbacks based on the future result state. However, JS promises are actually a monad (success or failure) within a monad (evaluated or not yet evaluated). Some languages represent these as separate concepts that you can combine yourself.
Another kind of monad is nullability. For example, C# has `?.` to only access a property or call a method on an object if it is not null. Typed FP languages usually make you explicitly represent nullability with Maybe or Option types and otherwise disallow it... so you don't have null guard clauses everywhere, but you are required to handle null cases where you have explicitly allowed it.
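A rough JavaScript sketch of that nullability idea (a homemade Maybe, not any particular library): `map` transforms the value if one is present, while `flatMap` is for functions that themselves return a Maybe.

```javascript
// A homemade Maybe: either wraps a value or is "nothing".
const Just = (value) => ({
  map: (f) => Just(f(value)),
  flatMap: (f) => f(value),   // f already returns a Maybe
  getOrElse: () => value,
});
const Nothing = {
  map: () => Nothing,         // nothing in, nothing out
  flatMap: () => Nothing,
  getOrElse: (fallback) => fallback,
};

// A lookup that may fail returns a Maybe instead of null:
const users = { max: { city: "Oslo" } };
const findUser = (name) => (users[name] ? Just(users[name]) : Nothing);

console.log(findUser("max").map((u) => u.city).getOrElse("unknown"));   // "Oslo"
console.log(findUser("devon").map((u) => u.city).getOrElse("unknown")); // "unknown"
```

Note how the missing-user case never needs an explicit null check: `Nothing` just absorbs every subsequent `map`.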
Other things can also be represented as monads, such as Lists (run a callback for each item in the list -- like `Array.map` in JS).

I assume you are not interested in the precise mathematical definition, because most of what I said above is not that, although true in spirit. Also, many attempts by OO languages to use monads seem to mix different operations. For example, `?.` in C# accepts return values that are nullable (reference types) and not nullable (value types). In FP, these are considered two different operations: `flatMap` and `map`. For a perhaps more-relatable illustration of the difference, compare JS `Array.flatMap` vs `Array.map`.

For me the biggest thing has to be RegExp. I've seen it being discussed and have read a tutorial or two about the subject, but I can't quite fully grasp it, and the fact that I don't really need it or use it much makes it even less likely I'll "get" it anytime soon.
If you do need it in the future, The Coding Train has a great series on it. It's not so bad despite it being the most difficult thing in the world to read. I mean just look at this Stack Overflow discussion regarding email validation.
Thanks mate! The coding train definitely has some good vids. And I agree with you, reading those expressions is probably the most alien thing there is in programming.
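If it helps, regexes get much less alien when read one token at a time. A small JavaScript example (the date pattern is just my pick for illustration):

```javascript
// \d{4}  four digits        -    a literal dash
// \d{2}  two digits         ^ $  anchors: match the whole string
const isoDate = /^\d{4}-\d{2}-\d{2}$/;

console.log(isoDate.test("2019-01-31")); // true
console.log(isoDate.test("31/01/2019")); // false

// Capture groups () let you pull the pieces back out:
const [, year, month, day] = "2019-01-31".match(/^(\d{4})-(\d{2})-(\d{2})$/);
console.log(year, month, day); // 2019 01 31
```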
Each language has a mechanism by which you can write a function `foo()`, so that `let bar = foo("bar")`, and then test that `bar == "blee"` and fail if it doesn't match.

And by each language, I am including bash shell scripting. There, it's called bats.

What I don't get is how to write functions so that they're testable like that. I see the extremes of `let x = make_an_h1_tag("test"); if (x != "<h1>test</h1>") { console.log("WRONG!") }` and cases where you have to stub a DB driver and two calls to REST APIs.

Once I have a sense of what's reasonable to test, I can use the tools available to write decent ones, but right now, I don't have that.
One good piece of advice I can give you is to never test the library (unless you don't really trust it, but then you should probably change libraries :D).
So, if you have a pure function that adds two numbers, your test should make sure that by giving 3 and 2 you get 5; maybe also test what happens with 0, negative numbers, and floating-point numbers, and, if you want to feel sure, see how it behaves with something that's not a number (if you are operating in a loosely typed language).
If your function calls a database and a REST API the first thing you have to ask yourself is: am I here to test the database and the API or the function itself? I guess in this case you're trying to test the function. So DB and API should be black boxes in this case. You should stub those two, make the stubs return reasonable values and see what happens to your own function.
Not really sure if it falls under a topic, but I really want to be able to understand Ruby. From what I can tell it's a great and powerful language that is widely used in a lot of areas, but I haven't had the time to teach myself. While I have a basic understanding of the language, I'd like to expand it by a lot.
I learned Ruby by doing, which was a good way, but I had some great people to ask questions to. You might check out Upcase by Thoughtbot. They used to charge for the service, but all lessons are free now! They have a lot of good RoR stuff, but if you are just starting out it might be a bit above your understanding currently. Definitely recommend them when you feel comfortable with Ruby though!
I would start with the small poignant guide to Ruby: poignant.guide/book/chapter-1.html
:-)
So, for monads, I wrote up How to think about monads some time back. Here's the gist of it:
In most imperative programming languages, I would expect that writing `a = 5; f(a);` is the same as writing `f(5);`. And, a little more obscurely, that `g = f; g(5);` is the same as `f(5);`.

There are lots of particular languages that obey those rules, but, just like Java has an interface `Map` and lots of particular classes that implement it, we want a name for the abstract thing that is a language with those properties. That name is "monad."

So, networking is the interfacing of devices to share information.
Ports are very much like street numbers (or, even better, apartment numbers). Fun fact: in Italian, the words for "door" and "port" are the same word.
Packets go out from one device which has an address and a port, to a destination address and a port.
The combination address and port identify a particular application on the sending or receiving end (the socket).
Good object-oriented architecture. Countless articles and discussions later, I still struggle to find ways to put classes together without overusing singletons, etc.
Maybe your brain was made for functional programming :)
I remember that one of the inventors of OOP regretted the name, and said that he should have called it "object message passing" or something like that.
OOP at its core is that, sending messages to objects so they can perform actions on their state.
Inheritance is a way for a family of objects to share some of that behavior (and/or some of that state). There's not a single way to implement inheritance of behavior/state.
Encapsulation is a way to hide away some of that state from prying eyes. Not all languages effectively have encapsulation.
I would say the tenet of OOP is really what the author said: sending messages to objects so that they can act.
I really need to understand Regex
Skiplists are an interesting data structure. I don't understand how they help.
Git rebase - when I have currently checkout out branch1 and I do git rebase branch2, please explain which branch's code is copied and applied into which branch.
So, you're working on branch2 and you make commits, A, B, and C. You create branch1 and make commits D and E. You checkout branch2 and make commits F and G. You checkout branch1 and rebase branch2.
This means that you will make it so that branch1 contains all of the changes on branch2 plus the changes you made at the point you created the branch.
So originally, branch1 would be A, B, C, D, E. And branch2 would be A, B, C, F, G. After rebasing branch1 would be A, B, C, F, G, D, E.
In my head I think of it as git going back through your commits till it finds the point you branched, applying all the commits from the specified branch, then re-applying all of your branch's commits.
P.S. `git help rebase` has a really nice illustration of this, I now realize.

Networking. I am a veteran programmer of 22+ years and expert level in tons of things.
I still struggle with HTTP Headers and Requests.