I sat down with some Ruby friends in Hiroshima last year to have a conversation about just-in-time compilation for Ruby, specifically the new MJIT method-based implementation. Those of you who are already familiar with JITs and how they work might want to skip directly to the interview; the rest of us are going to hang out for a minute and learn about how things presently work in Ruby, and what it is exactly that the MJIT would change.
Computers don’t speak Ruby or any other high-level language, they only speak machine language. In a compiled language like C++ you use a compiler to convert all of your C++ code into machine language directly before you run your program.
When you’re using an interpreted language like Ruby, your code is compiled into some other intermediary language, which is then run by the language virtual machine (VM).
You can think of these language-specific VMs as computers that have learned another, slightly more complicated language. The creators of the language built a new, easier to use language, and then built another computer, the VM, which is capable of running those instructions directly, because it already includes machine language for all possible VM instructions. These languages are known as intermediate representations (IRs).
In the specific case of Ruby, the intermediate representation is written with YARV instructions.
YARV is a stack-oriented virtual machine written by Koichi Sasada and merged into Ruby back in 2007, for version 1.9. Ruby code is interpreted into YARV instructions, and the Ruby VM is able to run those instructions directly, because every VM instruction has already been translated into machine language, and those bits of translation code are included in the VM.
A stack-oriented VM architecture, as opposed to a register-oriented architecture, uses a Last-In-First-Out (LIFO) stack to store operands. Operations are handled by popping data off of the stack, processing them, and pushing back the result.
It can be difficult to reason about something so abstract, so let’s start with some example code and take a look at how Ruby would turn that code into a YARV instruction sequence.
18 + 24
So we have our code; how do we get the YARV instructions? Thankfully Ruby as a language makes introspection fairly easy. We can use RubyVM::InstructionSequence to compile our Ruby code in an IRB console and see the resulting YARV instructions.
The code below compiles our Ruby program into an instruction sequence and then disassembles it into human readable instructions:
$> puts RubyVM::InstructionSequence.compile("18 + 24").disasm
== disasm: <RubyVM::InstructionSequence:<compiled>@<compiled>>==========
0000 trace            1                                               (   1)
0002 putobject        18
0004 putobject        24
0006 opt_plus         <callinfo!mid:+, argc:1, ARGS_SIMPLE>
0008 leave
=> nil
We can ignore the trace and leave instructions for our example, so these are the relevant lines of code:
putobject 18
putobject 24
opt_plus
These instructions tell the Ruby interpreter how to calculate our result. The first putobject tells the interpreter to put 18 on the stack, the next one will then put 24 on the stack, and finally we see an opt_plus instruction, telling the interpreter to add the previous two objects together.
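To make the stack model concrete, here is a toy evaluator in Ruby that mimics how these three instructions are processed. The instruction names mirror YARV's for readability, but the evaluator itself is an illustrative sketch, not YARV's actual implementation:

```ruby
# A toy stack-oriented evaluator. Each instruction is a [name, argument] pair.
def run_stack_machine(instructions)
  stack = []
  instructions.each do |op, arg|
    case op
    when :putobject        # push a literal operand onto the stack
      stack.push(arg)
    when :opt_plus         # pop the top two operands, push their sum
      right = stack.pop
      left  = stack.pop
      stack.push(left + right)
    end
  end
  stack.pop                # the result is left on top of the stack
end

run_stack_machine([[:putobject, 18], [:putobject, 24], [:opt_plus]])
# => 42
```

Note that the add instruction names no operands at all: it finds everything it needs implicitly, on top of the stack. That implicitness is what keeps stack instructions short.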
This stack-oriented architecture is a very common way to build virtual machines; the JVM is another popular example of a stack-oriented VM. The alternative to a stack-oriented architecture is called a register-oriented architecture, and it takes an entirely different approach.
CPUs include a small number of onboard memory locations called registers. They’re the fastest storage locations available to a CPU as they are physically closest to the execution engines. Accessing data from a register is 2 or 3 times faster than accessing it from the next closest storage location, the L1 cache, which takes about half a nanosecond; orders of magnitude faster than going out to RAM.
In order to take advantage of faster access times, compilers like GCC assign commonly accessed data to registers. In a register-oriented architecture, the compiled instructions then reference the data in the CPU registers directly.
This is an example of what our previous code might look like using a register-oriented architecture:
plus <operand 1 address> <operand 2 address> <result address>
The instruction above starts with the operation “plus” and supplies memory addresses for our two operands (18 and 24), along with an address to store the result. We’re able to accomplish what we did previously with a single instruction, instead of the 3 instructions required with a stack-based architecture.
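For comparison, here is a hypothetical register-style evaluator in Ruby. The addresses are modeled as array indices, and the three-operand plus instruction is an invented stand-in for the three-address form shown above:

```ruby
# A toy register-oriented evaluator. "Registers" are modeled as an array;
# each instruction names its operand and result locations explicitly, so a
# single plus does the work of three stack instructions.
def run_register_machine(registers, instructions)
  instructions.each do |op, src1, src2, dest|
    case op
    when :plus
      registers[dest] = registers[src1] + registers[src2]
    end
  end
  registers
end

regs = [18, 24, nil]                      # operands preloaded into registers 0 and 1
run_register_machine(regs, [[:plus, 0, 1, 2]])
regs[2]
# => 42
```

The trade-off is visible even in this sketch: one instruction instead of three, but each instruction carries three addresses that a real machine would have to decode.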
While we generate fewer instructions with this approach they are individually longer than the stack-based instructions; in certain circumstances this could lengthen the compilation step, as the longer instructions increase decoding overhead and take up more space in memory, decreasing data locality. Keeping data close together in memory has significant advantages, as modern processors are better able to make use of caching to accelerate compilation.
Even though the individual instructions may compile more quickly using a stack-oriented IR, there will be more of them, which is in part why a register-oriented IR will be faster in most cases.
An important benefit of using a register-oriented architecture is that an optimizing compiler such as GCC is able to more effectively optimize those instructions, as these compilers are designed to work with register-based IRs. During compilation GCC makes additional passes over your instructions once they make it to an intermediate representation, so the CPU will be able to execute your instructions more quickly.
A good example of one of these optimization steps is the mode switching optimization. Modern processors operate in a number of distinct modes that place restrictions on the types of operations that can be performed, so any given operation requires that the processor be in the correct mode to execute that instruction.
The CPU can change modes while executing our compiled program, but it accrues overhead every time it’s required to change modes. GCC optimizes instructions to minimize the number of mode changes, by grouping and rearranging instructions that are able to be executed in the same mode.
Internally GCC uses an IR called Register Transfer Language (RTL), and after converting your instructions to RTL, GCC will perform more than 20 different optimization passes that take advantage of opportunities like CPU mode switching to speed up your compiled code.
If you weren’t paying close attention to Ruby’s progress over the last year, you may not yet have heard of Vladimir Makarov. Vladimir is a developer at Red Hat working in the tools group, primarily focusing on register allocation for GCC: the process by which a compiler determines which variables in a bit of code will end up stored in CPU registers during compilation.
At RubyKaigi in September of last year Vladimir detailed some of the work he’d been doing with MJIT in his presentation, Towards Ruby 3x3 Performance. As a part of that work Vlad created an IR for Ruby named RTL, after the GCC IR that it closely resembles. While the name is the same, the RTL Vlad proposed for Ruby is distinct from the representation GCC uses internally.
The RTL Vlad has created is generated by the existing Ruby interpreter, Matz's Ruby Interpreter (MRI), in place of the YARV instructions it would previously have generated.
Using Vlad’s RTL generator allows MRI to make speculative optimizations on your Ruby code; it can make assumptions about the operands and generate operand-specific RTL instructions.
As an example, if a method is run the first several times with only integer operands, MRI can replace the plus instruction in the RTL with an integer-specific iplus instruction, which will run faster than a universal plus instruction. If the instruction is later surprised to find it’s been given float operands, the RTL will revert back to the universal plus instruction, and it won’t attempt to speculate on that portion of code again.
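The speculate-then-deoptimize cycle can be sketched in Ruby. The iplus/plus names follow the article, but the class and its bookkeeping are invented purely for illustration and don't reflect MJIT's actual machinery:

```ruby
# A sketch of a speculative instruction with one-way deoptimization.
class SpeculativePlus
  def initialize
    @specialized = true              # speculate: assume integer operands at first
  end

  def call(a, b)
    if @specialized
      # Fast "iplus"-style path, valid only while the speculation holds.
      return a + b if a.is_a?(Integer) && b.is_a?(Integer)
      @specialized = false           # deoptimize; never speculate here again
    end
    a + b                            # universal "plus"-style path
  end

  def specialized?
    @specialized
  end
end

op = SpeculativePlus.new
op.call(18, 24)    # => 42, via the specialized path
op.call(1.5, 2.5)  # => 4.0, triggers deoptimization
op.specialized?    # => false; the universal path is used from now on
```

The key property the sketch preserves is that deoptimization is permanent for that site: once the assumption is violated, the instruction stays universal rather than flip-flopping.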
These operand-specific RTL instructions can later be converted into C code that, once compiled, runs faster than its universal counterpart.
The generation and compilation of the C code in Vlad’s proposal is accomplished by a just-in-time compiler named MJIT.
During his RubyKaigi presentation Vlad proposed that Ruby merge MJIT, a just-in-time compiler that uses GCC. It’s so named because it’s a method-based JIT: a JIT that optimizes hot code paths using the method as the smallest optimization target. A method-based JIT will recognize when a method is being called repeatedly and optimize that path. In Vlad’s proposed MJIT that optimization generates C code.
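The hot-path detection a method-based JIT relies on can be sketched as a per-method call counter. The class and threshold below are invented for illustration; MJIT's real heuristics and counters differ:

```ruby
# A sketch of hot-method detection: count calls per method and mark a method
# "compiled" once its call count crosses a threshold.
class HotMethodCounter
  THRESHOLD = 5                       # illustrative; not MJIT's actual value

  def initialize
    @calls    = Hash.new(0)
    @compiled = {}
  end

  def record_call(method_name)
    @calls[method_name] += 1
    if @calls[method_name] >= THRESHOLD && !@compiled[method_name]
      @compiled[method_name] = true   # here a real JIT would emit and compile code
    end
  end

  def hot?(method_name)
    @compiled.fetch(method_name, false)
  end
end

counter = HotMethodCounter.new
6.times { counter.record_call(:render) }
counter.hot?(:render)  # => true
counter.hot?(:setup)   # => false
```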
Vlad’s vision for Ruby is one where MRI converts Ruby code into RTL (the RTL Vlad wrote for Ruby, NOT the version in GCC), which is then converted by MJIT into C code to be compiled by GCC. GCC compiles the C code into a C shared object file (an “.so” file), which is linked to MRI. The end result of this entire process is that MRI is able to use the C code that MJIT already compiled the next time that method is called, significantly speeding up execution.
While Vlad’s proposal does speed up Ruby, it does so with some pretty hefty changes to MRI itself. This is perceived by some as an introduction of risk, primarily because it increases the surface area of the feature.
In the time since RubyKaigi an alternative JIT implementation was developed to address these concerns, one that generates C code from the existing YARV instructions, omitting Vlad’s RTL changes to MRI. This JIT implementation by Takashi Kokubun [k0kubun] is known as YARV-MJIT and it was merged in Ruby 2.6.
Takashi Kokubun’s YARV-MJIT compiler uses some of the JIT infrastructure introduced from Vlad’s proposed MJIT, but it was otherwise largely developed in parallel.
Kokubun’s JIT increases the speed of many Ruby programs, but for the time being Rails applications do not seem to enjoy that benefit. For this reason, among others, the JIT is not enabled by default in Ruby 2.6.
If you’d like to play with it you can enable it with the --jit flag:

ruby --jit your_program.rb
In his pull request Kokubun shows the results of an optcarrot benchmark indicating a speed increase of about 27%. In a completely unscientific experiment where I paid no attention to the other work my MacBook was doing (3.5 GHz Intel Core i7, 16 GB 2133 MHz LPDDR3) I was able to get about 16% running locally:
> ruby -v -Ilib -r./tools/shim bin/optcarrot --benchmark examples/Lan_Master.nes
fps: 34.79269354352727
> ruby --jit -v -Ilib -r./tools/shim bin/optcarrot --benchmark examples/Lan_Master.nes
fps: 41.55450796571314
While Kokubun’s JIT implementation clearly has some room for growth, especially for Rails, it seems likely to significantly accelerate progress towards the Ruby team’s 3x3 goal.
Jonan: We've just finished RubyKaigi 2017, and I'm sitting here with Matz (Yukihiro Matsumoto) and our friend Vlad (Vladimir Makarov). We also have Aaron Patterson, sometimes known as Tenderlove, and Koichi Sasada with us today. We're sitting down to talk about implementing a JIT for Ruby, and specifically how that's going to impact progress for Ruby 3x3; the announced plan for Ruby 3.0 to be three times faster than 2.0. So by perfect chance, we've discovered Vlad, a very helpful man who has been working on GCC for about 20 years. Vlad popped into the community about two years ago and started contributing in a way that makes it very likely that we will achieve our 3x3 goal, I think, but I'm curious to hear what you all think, because I am not a committer, and it's not up to me. Matz, do you think that we're going to make our 3x3 goal?
Matz: I really, really hope so, and Vlad's work looks pretty promising. We set some criteria for Ruby 3x3, one of which is that we don't want to use an existing JIT library; LLVM or something like that, because we don't control those things. Their API might change, or the project may be abandoned. Another criterion for the 3x3 project was to be as portable as possible, which is kinda difficult. You know, other languages like, say, Crystal or Julia are using LLVM, but we cannot use that kind of thing if we want portability. The Ruby language itself is a pretty long-running project. Ruby itself is 25 years old; we can't rely on comparatively short-lived projects.
Jonan: Some of the other solutions, like LLRB, show how you could do what Evan Phoenix proposed a couple years ago, using LLVM to accomplish just-in-time compilation. Do you think that projects like that are unlikely to succeed for Ruby because we need to preserve the portability of the language? Do you think maybe something closer to Vlad's approach is likely to get merged instead of another approach?
Matz: Yes, that's my opinion.
Jonan: What do you think, Koichi?
Koichi: I like this approach, as Matz says, it's portable and it doesn't require us to use additional software we can't control. We tried to write a Ruby to C translator (Ruby2c, a project by Ryan Davis and Eric Hodel), but it didn't work out on paper, so we stopped. I'm very happy to see such a practical use case. Now Vlad is working on a product-ready JIT compiler. I'm very happy to see such great software.
Jonan: So Vlad, I was surprised to see so much progress so quickly after your recent hash changes. You're spending about half of your time on Ruby right now, correct?
Vlad: That's right. I still need to maintain a lot of very complicated code in GCC's register allocator. I'm in maintenance mode right now, so I don't develop new optimizations for the register allocator. I have an agreement with my manager that allows me to work on Ruby. Our tools group focuses on static languages, not on dynamic languages. That's another department in Red Hat.
Jonan: So your manager at RedHat is kind enough to give you the opportunity to work on projects that you find interesting. You were talking a little bit earlier about how you chose Ruby. Could you recap that for us?
Koichi: Yeah, he chose Ruby mostly for the code.
Jonan: The code by Koichi resembles the code you've worked with in GCC? It's the style of C that you prefer?
Vlad: That's true. Actually, someone asked me at this conference why I chose MRI, because C code is horrible.
Koichi: It is horrible, yeah.
Vlad: But I told him it's quite the opposite. The code is very good, so that's why I like it.
Jonan: Let's talk about your current implementation. I understand it's quite early in the MJIT project. MJIT is what it's called, right?
Vlad: Yes, Method JIT.
Jonan: It's been about six months that you've been working on this.
Vlad: No, it's about a year. I started last summer. I've been working on RTL and MJIT because it's a related project.
Jonan: Aaron, could you explain what makes this a Method JIT specifically?
Aaron: It's only JITing methods. So basically what it does is it just says, "Okay, I'm gonna take a method and convert it into whatever, in this case, C code. I'll compile it, and then use that."
Matz: In contrast to a Tracing JIT; there are two kinds of JIT. One is the Method JIT, the other is the Tracing JIT. So a Tracing JIT traces the execution path inside of the method, and those sometimes go away, so you compile these things into machine language at runtime. A Method JIT uses the method as a unit of compilation.
Aaron: It could include method calls or whatever, but it's actually recording the path through the code that gets executed and using that as a way to determine what to JIT. In a Method JIT, the method is the unit that gets compiled.
Vlad: The Method JIT is a superior JIT, but it's harder to implement. It's easier to optimize linear code, and therefore more popular, because you need fewer parts to implement optimization on linear code.
Matz: Yeah, that's the reason some JIT compilers are switching from a Tracing JIT to a Method JIT. For example, the JIT team at Firefox just abandoned a Tracing JIT and are moving toward a Method JIT.
Jonan: So a Method JIT is a strictly superior method of accomplishing this?
Matz: Yes. Sometimes a Tracing JIT is faster, but it consumes more memory, and sometimes it is more difficult to optimize.
Jonan: So the downside in using a Method JIT is just the complexity. Koichi, do you feel like a Method JIT is the way to go?
Koichi: Yeah, it is much simpler, so I think it is good to try. Peak performance for a Tracing JIT is with something like an inner loop; it will be one instruction and it will be high performance. There are only a few places to get such peak performance with Ruby, so I think a Method JIT is better. For example, in a Ruby on Rails application, we don't have a single hot spot, so a Method JIT is more straightforward.
Jonan: I see, so this is a better approach for Ruby and hopefully for Rails as well. One of your goals with this particular implementation of the Method JIT is simplicity; your goal is not necessarily to have the fastest JIT, but to have the easiest-to-understand JIT.
Vlad: Yeah, because we have no resources. I mean that the Ruby community doesn't likely have the resources to implement something complex. We would need a lot of compiler specialists.
Jonan: So if we could just get 50 or 100 compiler specialists to come and volunteer on Ruby, we'd be all set. I think that's all of them in the world?
Vlad: Intel has several hundred people working on their compilers. I heard of one optimization from a friend at Intel, 1 optimization out of maybe 300 or 500, and there are 3 people working on that software optimization. It's not even the most important optimization.
Matz: You talked to someone at ICC (Intel C++ Compiler)?
Vlad: Yeah, sure. They're always very cautious because they can't share all the information. At Intel the compiler specialists have an advantage because they know the internals of the processor. Modern processors are very complicated interpreters, and even the specialists don't generally know the internals. Even if you are very familiar with an interpreter, it's hard to predict its behavior.
Jonan: So you're often working against a black-box implementation to try and make things run, experimenting a lot of the time?
Vlad: Yes. There's a guide called the Intel Optimization Guide, and they have some recommendations for what optimizations should be implemented in what way, but this is approximate. Intel employees themselves have much more information, therefore the Intel compiler is the fastest compiler for Intel processors.
Jonan: That's cheating.
Vlad: And they actually have been sued and lost. In that case they implemented a special check in the compiler for AMD processors, and they switched off some optimizations. (Intel's "cripple AMD" function described by Agner Fog)
Aaron: I think I remember this.
Vlad: AMD sued them for this.
Jonan: So that would make any code run on an AMD processor slower no matter what?
Aaron: Slower, yep.
Jonan: ...than the same code running on an Intel processor?
Vlad: Yeah. I don't remember exactly but that was about 10 years ago. I don't know the current situation.
Jonan: Aaron, do you think that the American Ruby community is interested in helping with this kind of work? Contributing to JIT implementations? I know that Vlad has been working very hard on this, but at this point it's still very early, and I think he may need help to make this the best it can be, especially if it gets merged.
Vlad: First of all, I need to stabilize this. It's actually in the very early stages of development. I was quite surprised that the Ruby community took it so seriously so quickly.
Jonan: Japanese Rubyists are very excited about this, and I think Rubyists around the world are as well. It's a very timely change.
Aaron: I think everybody in the Ruby community is interested in a JIT, and after your hash patches, they absolutely take your work seriously.
Vlad: Actually, I picked the hash tables to introduce myself. You can't start working on something like MJIT in a project you've never worked on before.
Jonan: It was a good introduction.
Vlad: I'm not sure about that, as you know, there was serious competition from Yura Sokolov. (Yura and Vlad had competing hash table implementations)
Jonan: Is there a history there? Did you know him before that interaction?
Vlad: No, I didn't know him.
Jonan: Well, I think you both, ultimately, handled yourselves very gracefully with that competition. I think a lot of times in programming communities you can see discussions like that devolve very quickly, and I appreciated the professionalism in reading through those posts. I thought you both handled yourselves quite well. I'm very curious about when this work might be merged, but I know it's all very early. Out of the existing potential JITs that are out there now, is this the approach that you like the most, Matz?
Matz: As far as I know, yeah.
Jonan: Koichi, you feel the same?
Jonan: Aaron, you think this is the best of the options we have right now?
Aaron: Yeah, I think so. I've seen some people complaining that it generates C code, but I think that's a good idea.
Jonan: You like that it generates C code?
Vlad: I think that might be beneficial for GCC, too. I'm already thinking about some new extensions for GCC that could help the MJIT implementation.
Koichi: Is it an optimization around strings?
Vlad: It's some inlining stuff, so...
Matz: I see. Wow.
Vlad: Right now, there is huge competition between GCC and LLVM. Everything implemented in GCC is then implemented in LLVM right away, and vice-versa. So if I implement this for GCC, most probably it will be implemented in LLVM.
Jonan: I think we would all like that very much. You're an incredibly valuable contributor to have stumbled upon our project. I'm very thankful that you chose Ruby. If MJIT continues to evolve as it has, it looks likely to someday be a part of Ruby. You've said you want to keep it simple because it needs to be maintained by the existing committers in the community. Are there people who have reached out to you to talk about the work you've done so far, or maybe contributed to it in conversation at least?
Vlad: Sure. We had talks with Koichi about this, and of course Koichi is very interested in this. Another committer, Kokubun, has asked me a lot of questions. I didn't know why at the time, but I checked yesterday and I found out that he's the author of LLRB.
Jonan: Yes, the LLVM based JIT for Ruby. So I think Koichi has been quite busy with another project lately, he had his first child about a year ago. He has just-in-time children to interpret right now, but I'm glad to hear that you've had some time to help with the project. Koichi having children is really inconvenient for Ruby as a language. I think just one is good Koichi, maybe no more children. I'm joking of course, please have all of the children you want. We need more Rubyists in the future.
Matz: Yeah, we do.
Jonan: Do any of you have anything you'd like to share?
Aaron: I have so many questions!
Jonan: Please, go ahead.
Aaron: This isn't about MJIT in particular, but there are so many people working on optimization passes in GCC. Does it ever happen that one team's optimization messes up somebody else's optimization?
Vlad: Oh, every time.
Aaron: Every time?
Vlad: There are very many conflicts between, for example, the instruction scheduling and the register allocator. Conflicts that will never be solved.
Jonan: So you just end up racing against each other, trying to optimize over the other?
Matz: So you see fighting amongst teams?
Vlad: Actually, I also work on instruction scheduling. Some optimizations are simply contradictory. If, for example, the instruction scheduling team implements code for an optimization, it can actually negate some of the register allocator optimizations.
Jonan: I see. What other questions do you have, Aaron?
Aaron: I know you said that RTL has speculative instructions, so do you have to perform deoptimizations if you make a mistake?
Vlad: Sure, yeah.
Aaron: So first off, what is that process? Second, how much is the overhead in terms of speed and memory for doing that?
Vlad: The process is simple, actually. When instructions change after we go from a speculative instruction to a universal instruction, we need to generate a new version of machine code. That's how it works. As for the overhead, there should not be significantly increased overhead when MJIT is used. Of course, there is some small overhead, but it should be tiny. When MJIT is used, we shouldn't see performance degradation in comparison to when MJIT is not used.
Jonan: So MJIT would always be faster in every case if we were to do this well? For example, sometimes a particular hotspot in the code ends up being deoptimized rather frequently and gets locked out of this type of Method JIT.
Vlad: The switching is very fast right now. If we execute only part of this code, and that part is much faster than interpretation, it will consume this overhead for switching from a JITed code execution to RTL interpretation. Of course, there are always corner cases.
Jonan: Do you think it's possible that the MJIT would not always be faster Koichi?
Koichi: We can always find some edge case.
Jonan: I see.
Koichi: Always is not a technical term.
Jonan: Right, always is not a technical term, we'll just say "often" then. We should talk about benchmarking! I'm just kidding, I don't want to talk about benchmarking. There are a lot of benchmarking discussions around this though, and I thought you addressed them very well in your talk today, by saying that there are different ways of benchmarking everything. No matter how you come up with your benchmarks, there will always be distrust around those. I personally think that's healthy, it's good for the community to have different approaches and to highlight different pieces of the data.
Aaron: I think benchmarking JITs is hard. It's a hard thing to do. Some benchmarks can favor one implementation over another. So, of course, the other implementation is going to complain if you don't use their way of doing things.
Jonan: Then they'll post back to you, and you'll go back and forth forever. So instead of writing benchmarking blog posts, I think that we should write JITs.
Aaron: I think part of the problem is that when you're doing the benchmarking you have to apply the same tests to everything, right? So you have to say "I'm gonna run this one test, and this is how I'm gonna run it. And I'm gonna run it in that same way across all of these versions", but that one way that you run it could be bad news for one implementation. If you run that same test in a slightly different way, it may be really good news for that implementation. So, of course, they're going to complain, but you have to apply that test the same way. Otherwise, it's not really a good comparison.
Matz: Yeah, not a benchmark.
Jonan: So the other competing implementations may, for example, have a slow startup time, kind of a burn-in period when they start a benchmark, but then, overall, come out ahead of MJIT. That doesn't necessarily mean that it's a better JIT; it just means that it is faster under those circumstances.
Aaron: Honestly, those type of tests make me wonder; how long is acceptable for a warm-up time for a particular application server. At GitHub, we restart our application server maybe every 10 minutes. It's not because we have to restart it every 10 minutes, it's because we're deploying code maybe every 10 or 15 minutes, so we have to restart the server. Of course at night it stays up longer, but during the daytime it's always rolling over. So it makes me wonder, how beneficial would that really be?
Jonan: I see. I was thinking about this in the terms of short-lived processes, maybe a background process, where the time to warm up is more impactful. With the movement to cloud services now, you may be spinning up servers on demand, dynamically in response to traffic needs. Each of those new Ruby processes on those servers is going to need this start-up time. I think some of the benchmarking I was looking at was in the microseconds range, some very small period of time, so maybe the impact wouldn't be that large. Since you've stated that performance is not necessarily the number one goal of MJIT, maybe the primary goal of making it simple and easy to maintain means we don't need to pay that much attention to that style of measurement?
Koichi: Yeah, actually for me the JVM or JRuby is not really our competitor. The one comparison that matters is the comparison between the Ruby 2.0, which is the virtual machine, and the MJIT, the compilation. That is the real competition. We should not focus too much on comparisons with the JVM.
Aaron: Correct me if I'm wrong, but I think the point of MJIT is essentially that we can achieve a 3x speed improvement without giving up simplicity or ease of maintenance.
Vlad: Yeah. This is the simplest way to implement a high-quality JIT.
Jonan: That's exactly how Ruby has gotten to where it is now, right? Many times, we make decisions for simplicity's sake or ease of use over speed. I think Ruby programmers embrace that choice when they use this language. I know it's hard to predict what will happen in a year or two, but do you think that this MJIT will approximately meet our 3x3 goal by itself? Will we need other changes to get us there? Vlad, what do you think?
Vlad: It's, again, about benchmarking. If you ask me about optcarrot, it's already there, but some applications will never be 3 times faster.
Jonan: I've heard of this one project called Rails...
Vlad: I don't know. I didn't try it.
Aaron: Do we even have benchmarks for Rails, besides rubybench.org?
Matz: Yeah, Noah is working on it as a standard benchmark for Rails applications.
Jonan: Which I think the community desperately needs, right? It would at least be valuable to have. I know it's frustrating sometimes to benchmark using Rails, but given that there are so many Rails programmers out there in the world who are looking for ways to make their stack faster, it could be a big help. I did promise you all that I only needed 30 minutes of your time, and we are at the end of it. Do you want to share some final thoughts?
Matz: Yeah. MJIT is pretty promising, so we can expect the future Ruby 3 will be three times faster than Ruby 2.0, as predicted, due to Vlad's work. We have a bright future before us, the future of Ruby.
Jonan: Anything you'd like to share Koichi?
Koichi: I'm very happy to see such a great project. I want to collaborate with Vlad to evaluate it. I have question for Vlad, when will you become a committer?
Vlad: I am not a committer.
Koichi: No, so when will you become one?
Matz: When are you going to? Do you have any preference? Right now?
Vlad: When I implement MJIT I will be ready.
Matz: By the way, do you accept pull requests to the MJIT repository on GitHub?
Vlad: I don't know. Actually, I'm an SVN programmer because that's what GCC uses.
Matz: Yeah, older projects.
Jonan: So if I were to go and make a pull request on GitHub, that would not be the ideal method for you. You'd rather someone sent you a patch?
Vlad: I need to look into accepting pull requests.
Jonan: There's a chance that someday in the future you will be a committer Vlad. Is that what you were saying Matz?
Jonan: That will be a good day. We'll get a cake. Aaron, what do you think about MJIT?
Aaron: I think that this design is a "le-JIT" approach.
Jonan: I should have known better than to ask you about it. I teed that up nicely for you.
Aaron: Thank you. I really like the internals of it, I can "C" what he did there.
Jonan: You can "C" it...
Jonan: Vlad, do you have anything else you'd like to add for the Ruby programmers of the world?
Vlad: Actually, I'm new to the Ruby world, but I've already found that the Ruby community gives a very strong first impression; it's a pleasure.
Jonan: Well, it's a pleasure to have you. Thank you so much for all of your help.
Aaron: I think you're probably the first person to join the Ruby project because you like the C code.
Matz: He probably knows this.
Jonan: I think he might. Thank you all for your time.
If you enjoyed this post, check out Heroku's Engineering Blog for more from the Heroku developer community.