
[pull] master from ruby:master #859

Merged
pull[bot] merged 33 commits into turkdevops:master from ruby:master
Mar 17, 2026

Conversation


@pull pull bot commented Mar 17, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)

Can you help keep this open source service alive? 💖 Please sponsor : )

Nuzair46 and others added 30 commits March 17, 2026 23:50
nobu and others added 3 commits March 18, 2026 02:20
* add rpo to LIR cfg
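A reverse postorder (RPO) visits each block before its successors (back edges aside), which is the usual iteration order for forward passes over a CFG. A minimal sketch of computing it over an adjacency-list CFG — not the actual ZJIT code:

```rust
// Compute reverse postorder over a CFG given as successor lists.
// Postorder pushes a block after all its successors are done; reversing
// that yields an order where each block precedes its successors.
fn rpo(succs: &[Vec<usize>], entry: usize) -> Vec<usize> {
    fn dfs(n: usize, succs: &[Vec<usize>], seen: &mut Vec<bool>, post: &mut Vec<usize>) {
        seen[n] = true;
        for &s in &succs[n] {
            if !seen[s] {
                dfs(s, succs, seen, post);
            }
        }
        post.push(n);
    }
    let mut seen = vec![false; succs.len()];
    let mut post = Vec::new();
    dfs(entry, succs, &mut seen, &mut post);
    post.reverse();
    post
}

fn main() {
    // Diamond CFG: 0 -> {1, 2}, 1 -> 3, 2 -> 3.
    let order = rpo(&[vec![1, 2], vec![3], vec![3], vec![]], 0);
    assert_eq!(order[0], 0); // entry first
    assert_eq!(*order.last().unwrap(), 3); // join block last
}
```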

* add instruction ids to instructions along with start / end indexes on blocks

* Analyze liveness of vregs

* We don't need to check kill set before adding to gen set

Since we're processing instructions in reverse and our IR is SSA, a vreg's single definition can't already be in the kill set when we record one of its uses
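The reasoning above can be sketched as a backward scan over one block — a simplified model with hypothetical types, not ZJIT's: walking in reverse, a vreg's unique def is scanned only after all of its in-block uses, so a use goes straight into the gen set with no kill-set check.

```rust
use std::collections::HashSet;

// Hypothetical minimal instruction shape: the vregs it defines and uses.
struct Insn {
    defs: Vec<usize>,
    uses: Vec<usize>,
}

// Walk the block backward. Under SSA each vreg has exactly one def, so
// when we see a use, its def (if any in this block) hasn't been scanned
// yet -- gen.insert needs no kill-set check.
fn gen_kill(block: &[Insn]) -> (HashSet<usize>, HashSet<usize>) {
    let mut gen = HashSet::new();
    let mut kill = HashSet::new();
    for insn in block.iter().rev() {
        for &d in &insn.defs {
            kill.insert(d);
            gen.remove(&d); // the def shadows later in-block uses
        }
        for &u in &insn.uses {
            gen.insert(u);
        }
    }
    (gen, kill)
}

fn main() {
    // v0 = f(v1); use(v0)
    let block = vec![
        Insn { defs: vec![0], uses: vec![1] },
        Insn { defs: vec![], uses: vec![0] },
    ];
    let (gen, kill) = gen_kill(&block);
    assert!(gen.contains(&1) && !gen.contains(&0));
    assert!(kill.contains(&0));
}
```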

* make assertions against LIR output

* Add live ranges and a function to get output vregs

* filter out vregs from block params

* add an iterator for iterating over each ON bit in a bitset
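One common way to implement such an iterator — sketched here over a single u64 word, assuming the real bitset spans multiple words — uses `trailing_zeros` and clears the lowest set bit on each step:

```rust
// Iterator over the indices of set bits in a u64 word.
struct OnBits {
    word: u64,
}

impl Iterator for OnBits {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        if self.word == 0 {
            return None;
        }
        let idx = self.word.trailing_zeros();
        self.word &= self.word - 1; // clear the lowest set bit
        Some(idx)
    }
}

fn main() {
    let bits: Vec<u32> = OnBits { word: 0b1010_0100 }.collect();
    assert_eq!(bits, vec![2, 5, 7]);
}
```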

* Extract VRegId from a usize

We would like to do type matching on the VRegId. Extracting the VRegId
from a usize makes the code a bit easier to understand and refactor.
MemBase uses a VReg, and there is also a VReg in Opnd. We should be
sharing types between these two, so this is a step toward sharing a
type.
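A sketch of the idea with hypothetical types (the real Opnd and MemBase definitions differ): wrapping the raw index in a newtype lets both operand positions share one id type and makes pattern matching explicit.

```rust
// Hypothetical newtype over the raw vreg index.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct VRegId(usize);

// Both operand kinds can now share the same id type.
#[derive(Debug)]
enum Opnd {
    VReg(VRegId),
    Imm(i64),
}

fn main() {
    let opnd = Opnd::VReg(VRegId(3));
    // Type matching extracts the index without raw-usize ambiguity.
    match opnd {
        Opnd::VReg(VRegId(idx)) => assert_eq!(idx, 3),
        Opnd::Imm(_) => unreachable!(),
    }
}
```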

* add build_intervals and tests for it

* reduce diff

* live range wip

* fix up live range debugging output

* print comments

* fix up live range debugging output

* split blocks in check ints

* we have split special guards into basic blocks

* we are pushing block parameters as vregs now

* WIP

* wipwipwip linear scan somewhat working

* register allocation seems to be working (without spills)

* add test for spilling with linear scan

* porting spill less

* adding resolve_ssa function

* add a comment

* rewrite instructions to use pregs

* registers seem to be working somewhat

* clear block edges after inserting movs

* fix debug printer

* take memory operands into account when calculating live ranges

* add missing label

* add assertion message

* put markers around ccalls. Dummy blocks are not part of a CFG, so they return empty edges

* make sure all dummy blocks start with a label

* handle MemBase::Stack in arm64_scratch_split and fix spare register in parcopy

* fix spill code

* Immediate moves to memory or registers need to happen before
register-to-register moves

* fixing scratch split

* add some debugging output

* refactor parallel movs

* remove Insn::ParallelMov

* remove some prints

* Use CARG regs instead of alloc regs when calling C funcs

Also convert vregs to pregs before parallel copying them to c func args

* Use JIT regs instead of CC regs when preserving registers

We were accidentally indexing into the CC regs when trying to preserve
JIT regs.

* unvibing this code

* fix c calling convention regs

* make sure to parcopy rewritten opnds so we do not look at vregs

* assert that we only ever pass non-vregs to parcopy

* print vregs at the top of each block

* Don't rewrite jump params

Jump params are handled by parallel copy and critical edge splitting
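A parallel copy must read all of its sources before writing any destination. A minimal sequentialization sketch — plain `usize` registers and a caller-supplied spare, not the actual parcopy implementation: emit any move whose destination is not still needed as a source, and when only cycles remain, route one source through the spare.

```rust
// Sequentialize a parallel copy {dst <- src} into ordered moves.
fn sequentialize(par: &[(usize, usize)], spare: usize) -> Vec<(usize, usize)> {
    let mut pending: Vec<(usize, usize)> = par
        .iter()
        .copied()
        .filter(|(d, s)| d != s) // drop useless copies
        .collect();
    let mut out = Vec::new();
    while !pending.is_empty() {
        // A move is safe if its destination is not a pending source.
        if let Some(i) = pending
            .iter()
            .position(|&(d, _)| !pending.iter().any(|&(_, s)| s == d))
        {
            out.push(pending.remove(i));
        } else {
            // Only cycles remain: save one source in the spare register
            // and redirect its pending readers to the spare.
            let (_, s) = pending[0];
            out.push((spare, s));
            for m in pending.iter_mut() {
                if m.1 == s {
                    m.1 = spare;
                }
            }
        }
    }
    out
}

fn main() {
    // Swap r0 and r1 using r9 as the spare, then simulate the result.
    let moves = sequentialize(&[(0, 1), (1, 0)], 9);
    let mut regs = [10, 20, 0, 0, 0, 0, 0, 0, 0, 0];
    for (d, s) in moves {
        regs[d] = regs[s];
    }
    assert_eq!((regs[0], regs[1]), (20, 10));
}
```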

* btest is passing, thanks Claude

* correctly append exit code after scratch split

* wipwipwip

* fix csel and loads between jumps

* make sure immediates fit in the operand on x86, otherwise emit movabs

* fix split

* fix alignment for calls on x86

* Fix output register for ccall alignment

When we pop an output register for alignment, the popped value has to
land somewhere, so make sure the destination isn't clobbering anything
live.
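For context — an x86-64 SysV assumption, not ZJIT's exact logic — rsp must be 16-byte aligned at the call instruction, so an odd count of 8-byte pushes needs one extra slot, and whatever is later popped to undo that padding must not overwrite a live value.

```rust
// Extra 8-byte padding slots needed so rsp is 16-byte aligned at the
// call, given how many 8-byte registers were pushed since the frame
// was aligned.
fn pad_slots(pushed: usize) -> usize {
    pushed % 2
}

fn main() {
    assert_eq!(pad_slots(0), 0);
    assert_eq!(pad_slots(3), 1); // odd push count needs one pad slot
}
```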

* fix pops around ccalls

* fix survivors / alignment around calls

* [TODO] Refuse to compile if we want to allocate too many stack slots

If a method needs too many stack slots, refuse to compile it. We're
getting stack misalignment errors on Rosetta.

* fix zjit-check under rosetta

Update backend tests and snapshots to match current allocator/SSA behavior and restore strict checks where possible.

test_build_intervals numbering changed because block traversal order changed in 8761a33 (po_from now visits edge1 first), which changes block_order -> number_instructions IDs.

Also document why linear_scan handles num_registers == 0: several backend tests intentionally exercise all-stack allocation paths.

* fix warnings

* fix the rustdoc warning

* make sure we have labels

* Implement register preferences and skip useless copies

This patch implements register preferences. We're adding preferred
registers for very short-lived intervals that move to a physical
register.

For example

```
1: sub v0, sp, 123
2: mov sp, v0
```

We teach the allocator that v0 prefers `sp` because v0 ends up in `sp`:
it comes to life at instruction 1 and dies at instruction 2.
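A sketch of how an allocator might honor such a hint — hypothetical names, not the ZJIT implementation: try the interval's preferred register first, and only fall back to an arbitrary free register when the preference is taken.

```rust
// Pick a register for an interval: honor the preference when that
// register is free, otherwise take any free register.
fn assign(free: &mut Vec<&'static str>, preferred: Option<&'static str>) -> Option<&'static str> {
    if let Some(p) = preferred {
        if let Some(i) = free.iter().position(|&r| r == p) {
            return Some(free.remove(i));
        }
    }
    free.pop()
}

fn main() {
    let mut free = vec!["x0", "x1", "sp"];
    // The interval prefers `sp`, so it gets `sp` while it's free...
    assert_eq!(assign(&mut free, Some("sp")), Some("sp"));
    // ...and once `sp` is taken, we fall back to another register.
    assert_eq!(assign(&mut free, Some("sp")), Some("x1"));
}
```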

* remove useless copies before calling parcopy

* Fix register preservation around ccall

We need to push pairs of registers so that the code is more compact.
Also, don't try to preserve the return value of a ccall if its live
range is dead.
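The pairing idea can be sketched with a hypothetical helper: arm64's stp stores two registers in one instruction, which is why grouping the preserved registers into pairs makes the code more compact, with at most one leftover register saved on its own.

```rust
// Group registers into pairs for paired stores; a leftover register
// has to be saved alone.
fn pair_up(regs: &[u32]) -> (Vec<(u32, u32)>, Option<u32>) {
    let mut pairs = Vec::new();
    let mut chunks = regs.chunks_exact(2);
    for c in &mut chunks {
        pairs.push((c[0], c[1]));
    }
    (pairs, chunks.remainder().first().copied())
}

fn main() {
    let (pairs, rest) = pair_up(&[19, 20, 21]);
    assert_eq!(pairs, vec![(19, 20)]);
    assert_eq!(rest, Some(21)); // odd one out gets a single store
}
```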

* great job

* refactoring on sequentialize, remove intermediate vec

* check born / dies

* use LoadInto instead of Mov for VALUE operands

* update encoding

* update encoding

* ws

* remove old allocator

* fix clippy

* deal with memory operands on block edges

* Update zjit/src/codegen.rs

Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>

* Update zjit/Cargo.toml

Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>

* address PR review on split jumps and C call helpers

* address more feedback

---------

Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
@pull pull bot locked and limited conversation to collaborators Mar 17, 2026
@pull pull bot added the ⤵️ pull label Mar 17, 2026
@pull pull bot merged commit b7e4d57 into turkdevops:master Mar 17, 2026
1 of 2 checks passed