author    Mike Danes <onemihaid@hotmail.com>  2018-01-20 14:08:37 +0200
committer Mike Danes <onemihaid@hotmail.com>  2018-01-20 14:08:37 +0200
commit    ca397e5f57a649ad3bcc621cbb02354670f87a08 (patch)
tree      4a494f97f3fdf53603483950aa61dd103dc62a8d /src/vm/i386
parent    c7c2869ca0def15c25b8043ac78a378e0145bac8 (diff)
download  coreclr-ca397e5f57a649ad3bcc621cbb02354670f87a08.tar.gz coreclr-ca397e5f57a649ad3bcc621cbb02354670f87a08.tar.bz2 coreclr-ca397e5f57a649ad3bcc621cbb02354670f87a08.zip
Fix 64 bit shift inconsistencies (on 32 bit targets)
Recent shift changes made the JIT_LLsh helper mask the shift count to 6 bits. The other 2 helpers (JIT_LRsh and JIT_LRsz) do not, so now we get inconsistencies such as `(x >> 64) != (x << 64)`.
The ECMA spec says that "the return value is unspecified if shiftAmount is greater than or equal to the width of value", so the JIT has no obligation to implement a particular behavior. But it seems preferable to have all shift instructions behave similarly; it avoids complications and reduces risks.
This also changes `ValueNumStore::EvalOpIntegral` to mask the shift count for 64 bit shifts so it matches `gtFoldExprConst`. Otherwise the produced value depends on the C/C++ compiler's behavior.
Diffstat (limited to 'src/vm/i386')
-rw-r--r--  src/vm/i386/jithelp.asm | 4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/src/vm/i386/jithelp.asm b/src/vm/i386/jithelp.asm
index a4bbe1ccf7..27b866881a 100644
--- a/src/vm/i386/jithelp.asm
+++ b/src/vm/i386/jithelp.asm
@@ -520,6 +520,8 @@ JIT_LLsh ENDP
         ALIGN 16
 PUBLIC JIT_LRsh
 JIT_LRsh PROC
+; Reduce shift amount mod 64
+        and     ecx, 63
 ; Handle shifts of between bits 0 and 31
         cmp     ecx, 32
         jae     short LRshMORE32
@@ -554,6 +556,8 @@ JIT_LRsh ENDP
         ALIGN 16
 PUBLIC JIT_LRsz
 JIT_LRsz PROC
+; Reduce shift amount mod 64
+        and     ecx, 63
 ; Handle shifts of between bits 0 and 31
         cmp     ecx, 32
         jae     short LRszMORE32