Fix bug when atomic load is fused with an arith op (and not in the entry BB)
Normally, the FakeUse that preserves an atomic load ends
up on the load's Dest. However, for a fused load+add, the load
instruction is deleted, so its Dest is never defined. That trips
up the liveness analysis when it happens in a non-entry
block. In that case, the FakeUse should reference the add's
dest instead.
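
As a rough standalone illustration (not Subzero code: the Inst struct, the
DefsReachUses() check, and the single-block model are invented for this
sketch), this is the shape of the problem. With the unfused lowering, the
FakeUse reads a variable the load defines; once the load is folded into the
add, that variable has no definition at all, which is exactly the dangling
use that liveness analysis rejects in a non-entry block:

  // Toy model: each instruction defines at most one variable and reads some
  // others; a "FakeUse" is a dest-less instruction reading the value to keep.
  #include <cassert>
  #include <set>
  #include <string>
  #include <vector>

  struct Inst {
    std::string Dest;              // "" if the instruction defines nothing
    std::vector<std::string> Srcs; // variables read by this instruction
  };

  // Within a single non-entry block, every use must see an earlier definition.
  bool DefsReachUses(const std::vector<Inst> &Block) {
    std::set<std::string> Defined;
    for (const Inst &I : Block) {
      for (const std::string &S : I.Srcs)
        if (!Defined.count(S))
          return false; // a use with no reaching definition
      if (!I.Dest.empty())
        Defined.insert(I.Dest);
    }
    return true;
  }

  int main() {
    // Unfused lowering: the load defines t0, so FakeUse(t0) is well-formed.
    std::vector<Inst> Unfused = {
        {"addr", {}},     // addr = ...
        {"t0", {"addr"}}, // t0 = load addr      (the atomic load)
        {"", {"t0"}},     // FakeUse t0          (keeps the load from dying)
    };
    // Fused lowering: the load is folded into the add and deleted, so t0 is
    // never defined and FakeUse(t0) dangles; FakeUse(t1) is what we want.
    std::vector<Inst> FusedBad = {
        {"addr", {}},
        {"x", {}},
        {"t1", {"x", "addr"}}, // t1 = add x, [addr] (load folded into the add)
        {"", {"t0"}},          // FakeUse t0         (t0 no longer exists)
    };
    std::vector<Inst> FusedFixed = {
        {"addr", {}},
        {"x", {}},
        {"t1", {"x", "addr"}},
        {"", {"t1"}},          // FakeUse t1         (the add's dest)
    };
    assert(DefsReachUses(Unfused));
    assert(!DefsReachUses(FusedBad));
    assert(DefsReachUses(FusedFixed));
    return 0;
  }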
We have no direct access to the add at that point, so introduce a
getLastInserted() helper (a sketch of the first option follows
below). A couple of ways to implement it:
- modify insert() to track the last-inserted instruction explicitly
- rewind one step from Next
The alternative would be to disable the fusing for atomic loads.
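
A minimal sketch of the first option, assuming a context object that owns an
insertion point (LoweringCtx and its members below are illustrative stand-ins,
not the actual Subzero classes):

  #include <cassert>
  #include <list>
  #include <string>
  #include <utility>

  struct Inst {
    std::string Dest;
    explicit Inst(std::string D) : Dest(std::move(D)) {}
    const std::string &getDest() const { return Dest; }
  };

  class LoweringCtx {
    std::list<Inst *> Insts;
    std::list<Inst *>::iterator Next = Insts.end(); // current insertion point
    Inst *LastInserted = nullptr;                   // updated on every insert()

  public:
    void insert(Inst *I) {
      Insts.insert(Next, I); // insert before Next, as Context.insert() does
      LastInserted = I;
    }
    // The helper: returns whatever went in last, whether that was the load
    // itself or the arithmetic instruction the load was folded into.
    Inst *getLastInserted() const {
      assert(LastInserted && "nothing inserted yet");
      return LastInserted;
    }
  };

  int main() {
    LoweringCtx Ctx;
    Inst Add("t1"); // stands in for the fused load+add produced by lowerLoad()
    Ctx.insert(&Add);
    // Place the FakeUse on a dest that actually exists after lowering:
    assert(Ctx.getLastInserted()->getDest() == "t1");
    return 0;
  }

The rewind-from-Next variant avoids the extra member, at the cost of assuming
that at least one instruction has already been inserted when getLastInserted()
is called.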
BUG= https://code.google.com/p/nativeclient/issues/detail?id=3882
R=stichnot@chromium.org
Review URL: https://codereview.chromium.org/417353003
diff --git a/src/IceTargetLoweringX8632.cpp b/src/IceTargetLoweringX8632.cpp
index 2db795b..00db25a 100644
--- a/src/IceTargetLoweringX8632.cpp
+++ b/src/IceTargetLoweringX8632.cpp
@@ -2724,15 +2724,18 @@
       // Then cast the bits back out of the XMM register to the i64 Dest.
       InstCast *Cast = InstCast::create(Func, InstCast::Bitcast, Dest, T);
       lowerCast(Cast);
-      // Make sure that the atomic load isn't elided.
+      // Make sure that the atomic load isn't elided when unused.
       Context.insert(InstFakeUse::create(Func, Dest->getLo()));
       Context.insert(InstFakeUse::create(Func, Dest->getHi()));
       return;
     }
     InstLoad *Load = InstLoad::create(Func, Dest, Instr->getArg(0));
     lowerLoad(Load);
-    // Make sure the atomic load isn't elided.
-    Context.insert(InstFakeUse::create(Func, Dest));
+    // Make sure the atomic load isn't elided when unused, by adding a FakeUse.
+    // Since lowerLoad may fuse the load w/ an arithmetic instruction,
+    // insert the FakeUse on the last-inserted instruction's dest.
+    Context.insert(InstFakeUse::create(Func,
+                                       Context.getLastInserted()->getDest()));
     return;
   }
   case Intrinsics::AtomicRMW: