Eliminate Subzero sandboxing support

Subzero supported sandboxing for the PNaCl platform. Reactor does not
support sandboxing at the JIT level, and we have no need for it, since
Chromium provides sandboxing as part of the "GPU process". We can
therefore remove it and reduce code complexity.

Note that Subzero's sandboxing implementation comes at a performance
penalty. Project Bunker provides a better solution for SwiftShader,
and one that is likely also more secure in light of speculative
execution vulnerabilities.
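
For a concrete picture of that penalty, consider the sandboxed return
sequence on ARM32, quoted from the code this change deletes from
IceTargetLoweringARM32.cpp; every return had to be masked and
bundle-locked rather than being a plain "bx lr" (the annotation is
ours, based on the deleted comments):

  bundle_lock
  bic lr, #0xc000000f   @ mask to bundle boundary, restrict to low 1GB
  bx lr
  bundle_unlock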

If we ever do need sandboxing support in Reactor itself (e.g. outside
of SwiftShader, when process isolation is not feasible), it is best to
use an actively developed JIT compiler where security always takes
priority over performance, like Chromium's WebAssembly JIT.

Bug: b/179832693
Change-Id: I7364d22183e123c5145caae9f546d3855012d73e
Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/55488
Kokoro-Result: kokoro <noreply+kokoro@google.com>
Tested-by: Nicolas Capens <nicolascapens@google.com>
Reviewed-by: Alexis Hétu <sugoi@google.com>
Commit-Queue: Nicolas Capens <nicolascapens@google.com>
diff --git a/third_party/subzero/docs/DESIGN.rst b/third_party/subzero/docs/DESIGN.rst
index bf3e905..50f6548 100644
--- a/third_party/subzero/docs/DESIGN.rst
+++ b/third_party/subzero/docs/DESIGN.rst
@@ -252,8 +252,7 @@
 low-level instructions correspond to individual machine instructions.  The
 high-level ICE instruction space includes a few additional instruction kinds
 that are not part of LLVM but are generally useful (e.g., an Assignment
-instruction), or are useful across targets (e.g., BundleLock and BundleUnlock
-instructions for sandboxing).
+instruction), or are useful across targets.
 
 Specifically, high-level ICE instructions that derive from LLVM (but with PNaCl
 ABI restrictions as documented in the `PNaCl Bitcode Reference Manual
@@ -299,10 +298,6 @@
 - Assign: a simple ``A=B`` assignment.  This is useful for e.g. lowering Phi
   instructions to non-SSA assignments, before lowering to machine code.
 
-- BundleLock, BundleUnlock.  These are markers used for sandboxing, but are
-  common across all targets and so they are elevated to the high-level
-  instruction set.
-
 - FakeDef, FakeUse, FakeKill.  These are tools used to preserve consistency in
   liveness analysis, elevated to the high-level because they are used by all
   targets.  They are described in more detail at the end of this section.
@@ -954,36 +949,6 @@
 consumers include branch, select (the ternary operator from the C language), and
 sign-extend and zero-extend when the source has bool type.
 
-Sandboxing
-^^^^^^^^^^
-
-Native Client's sandbox model uses software fault isolation (SFI) to provide
-safety when running untrusted code in a browser or other environment.  Subzero
-implements Native Client's `sandboxing
-<https://developer.chrome.com/native-client/reference/sandbox_internals/index>`_
-to enable Subzero-translated executables to be run inside Chrome.  Subzero also
-provides a fairly simple framework for investigating alternative sandbox models
-or other restrictions on the sandbox model.
-
-Sandboxing in Subzero is not actually implemented as a separate pass, but is
-integrated into lowering and assembly.
-
-- Indirect branches, including the ret instruction, are masked to a bundle
-  boundary and bundle-locked.
-
-- Call instructions are aligned to the end of the bundle so that the return
-  address is bundle-aligned.
-
-- Indirect branch targets, including function entry and targets in a switch
-  statement jump table, are bundle-aligned.
-
-- The intrinsic for reading the thread pointer is inlined appropriately.
-
-- For x86-64, non-stack memory accesses are with respect to the reserved sandbox
-  base register.  We reduce the aggressiveness of address mode inference to
-  leave room for the sandbox base register during lowering.  There are no memory
-  sandboxing changes for x86-32.
-
 Code emission
 -------------
 
diff --git a/third_party/subzero/src/IceAssemblerARM32.cpp b/third_party/subzero/src/IceAssemblerARM32.cpp
index 0557e21..26cf64d 100644
--- a/third_party/subzero/src/IceAssemblerARM32.cpp
+++ b/third_party/subzero/src/IceAssemblerARM32.cpp
@@ -686,10 +686,8 @@
                                                       const Constant *Value) {
   MoveRelocatableFixup *F =
       new (allocate<MoveRelocatableFixup>()) MoveRelocatableFixup();
-  F->set_kind(IsMovW ? (IsNonsfi ? llvm::ELF::R_ARM_MOVW_PREL_NC
-                                 : llvm::ELF::R_ARM_MOVW_ABS_NC)
-                     : (IsNonsfi ? llvm::ELF::R_ARM_MOVT_PREL
-                                 : llvm::ELF::R_ARM_MOVT_ABS));
+  F->set_kind(IsMovW ? llvm::ELF::R_ARM_MOVW_ABS_NC
+                     : llvm::ELF::R_ARM_MOVT_ABS);
   F->set_value(Value);
   Buffer.installFixup(F);
   return F;
diff --git a/third_party/subzero/src/IceAssemblerARM32.h b/third_party/subzero/src/IceAssemblerARM32.h
index 43c3f56..2c0a089 100644
--- a/third_party/subzero/src/IceAssemblerARM32.h
+++ b/third_party/subzero/src/IceAssemblerARM32.h
@@ -95,11 +95,7 @@
     const RegNumT FrameOrStackReg;
   };
 
-  explicit AssemblerARM32(bool IsNonsfi, bool use_far_branches = false)
-      : Assembler(Asm_ARM32), IsNonsfi(IsNonsfi) {
-    // TODO(kschimpf): Add mode if needed when branches are handled.
-    (void)use_far_branches;
-  }
+  AssemblerARM32() : Assembler(Asm_ARM32) {}
   ~AssemblerARM32() override {
     if (BuildDefs::asserts()) {
       for (const Label *Label : CfgNodeLabels) {
@@ -678,8 +674,6 @@
 private:
   ENABLE_MAKE_UNIQUE;
 
-  const bool IsNonsfi;
-
   // A vector of pool-allocated x86 labels for CFG nodes.
   using LabelVector = std::vector<Label *>;
   LabelVector CfgNodeLabels;
diff --git a/third_party/subzero/src/IceAssemblerX86BaseImpl.h b/third_party/subzero/src/IceAssemblerX86BaseImpl.h
index aa3da61..1774409 100644
--- a/third_party/subzero/src/IceAssemblerX86BaseImpl.h
+++ b/third_party/subzero/src/IceAssemblerX86BaseImpl.h
@@ -3490,13 +3490,6 @@
     intptr_t offset = label->getPosition() - Buffer.size();
     assert(offset <= 0);
     if (Utils::IsInt(8, offset - kShortSize)) {
-      // TODO(stichnot): Here and in jmp(), we may need to be more
-      // conservative about the backward branch distance if the branch
-      // instruction is within a bundle_lock sequence, because the
-      // distance may increase when padding is added. This isn't an issue for
-      // branches outside a bundle_lock, because if padding is added, the retry
-      // may change it to a long backward branch without affecting any of the
-      // bookkeeping.
       emitUint8(0x70 + condition);
       emitUint8((offset - kShortSize) & 0xFF);
     } else {
diff --git a/third_party/subzero/src/IceCfg.cpp b/third_party/subzero/src/IceCfg.cpp
index 2bd610d..570987f 100644
--- a/third_party/subzero/src/IceCfg.cpp
+++ b/third_party/subzero/src/IceCfg.cpp
@@ -1608,12 +1608,6 @@
   }
 }
 
-void Cfg::markNodesForSandboxing() {
-  for (const InstJumpTable *JT : JumpTables)
-    for (SizeT I = 0; I < JT->getNumTargets(); ++I)
-      JT->getTarget(I)->setNeedsAlignment();
-}
-
 // ======================== Dump routines ======================== //
 
 // emitTextHeader() is not target-specific (apart from what is abstracted by
@@ -1670,7 +1664,6 @@
   OstreamLocker L(Ctx);
   Ostream &Str = Ctx->getStrEmit();
   const Assembler *Asm = getAssembler<>();
-  const bool NeedSandboxing = getFlags().getUseSandboxing();
 
   emitTextHeader(FunctionName, Ctx, Asm);
   if (getFlags().getDecorateAsm()) {
@@ -1682,10 +1675,6 @@
     }
   }
   for (CfgNode *Node : Nodes) {
-    if (NeedSandboxing && Node->needsAlignment()) {
-      Str << "\t" << Asm->getAlignDirective() << " "
-          << Asm->getBundleAlignLog2Bytes() << "\n";
-    }
     Node->emit(this);
   }
   emitJumpTables();
@@ -1696,10 +1685,7 @@
   TimerMarker T(TimerStack::TT_emitAsm, this);
   // The emitIAS() routines emit into the internal assembler buffer, so there's
   // no need to lock the streams.
-  const bool NeedSandboxing = getFlags().getUseSandboxing();
   for (CfgNode *Node : Nodes) {
-    if (NeedSandboxing && Node->needsAlignment())
-      getAssembler()->alignCfgNode();
     Node->emitIAS(this);
   }
   emitJumpTables();
diff --git a/third_party/subzero/src/IceCfg.h b/third_party/subzero/src/IceCfg.h
index 3729e3c..c06fb99 100644
--- a/third_party/subzero/src/IceCfg.h
+++ b/third_party/subzero/src/IceCfg.h
@@ -219,7 +219,6 @@
   bool validateLiveness() const;
   void contractEmptyNodes();
   void doBranchOpt();
-  void markNodesForSandboxing();
 
   /// \name  Manage the CurrentNode field.
   /// CurrentNode is used for validating the Variable::DefNode field during
diff --git a/third_party/subzero/src/IceCfgNode.cpp b/third_party/subzero/src/IceCfgNode.cpp
index 26991bb..cb8ddce 100644
--- a/third_party/subzero/src/IceCfgNode.cpp
+++ b/third_party/subzero/src/IceCfgNode.cpp
@@ -1127,136 +1127,9 @@
   }
 }
 
-// Helper class for emitIAS().
-namespace {
-class BundleEmitHelper {
-  BundleEmitHelper() = delete;
-  BundleEmitHelper(const BundleEmitHelper &) = delete;
-  BundleEmitHelper &operator=(const BundleEmitHelper &) = delete;
-
-public:
-  BundleEmitHelper(Assembler *Asm, const InstList &Insts)
-      : Asm(Asm), End(Insts.end()), BundleLockStart(End),
-        BundleSize(1 << Asm->getBundleAlignLog2Bytes()),
-        BundleMaskLo(BundleSize - 1), BundleMaskHi(~BundleMaskLo) {}
-  // Check whether we're currently within a bundle_lock region.
-  bool isInBundleLockRegion() const { return BundleLockStart != End; }
-  // Check whether the current bundle_lock region has the align_to_end option.
-  bool isAlignToEnd() const {
-    assert(isInBundleLockRegion());
-    return llvm::cast<InstBundleLock>(getBundleLockStart())->getOption() ==
-           InstBundleLock::Opt_AlignToEnd;
-  }
-  bool isPadToEnd() const {
-    assert(isInBundleLockRegion());
-    return llvm::cast<InstBundleLock>(getBundleLockStart())->getOption() ==
-           InstBundleLock::Opt_PadToEnd;
-  }
-  // Check whether the entire bundle_lock region falls within the same bundle.
-  bool isSameBundle() const {
-    assert(isInBundleLockRegion());
-    return SizeSnapshotPre == SizeSnapshotPost ||
-           (SizeSnapshotPre & BundleMaskHi) ==
-               ((SizeSnapshotPost - 1) & BundleMaskHi);
-  }
-  // Get the bundle alignment of the first instruction of the bundle_lock
-  // region.
-  intptr_t getPreAlignment() const {
-    assert(isInBundleLockRegion());
-    return SizeSnapshotPre & BundleMaskLo;
-  }
-  // Get the bundle alignment of the first instruction past the bundle_lock
-  // region.
-  intptr_t getPostAlignment() const {
-    assert(isInBundleLockRegion());
-    return SizeSnapshotPost & BundleMaskLo;
-  }
-  // Get the iterator pointing to the bundle_lock instruction, e.g. to roll
-  // back the instruction iteration to that point.
-  InstList::const_iterator getBundleLockStart() const {
-    assert(isInBundleLockRegion());
-    return BundleLockStart;
-  }
-  // Set up bookkeeping when the bundle_lock instruction is first processed.
-  void enterBundleLock(InstList::const_iterator I) {
-    assert(!isInBundleLockRegion());
-    BundleLockStart = I;
-    SizeSnapshotPre = Asm->getBufferSize();
-    Asm->setPreliminary(true);
-    assert(isInBundleLockRegion());
-  }
-  // Update bookkeeping when the bundle_unlock instruction is processed.
-  void enterBundleUnlock() {
-    assert(isInBundleLockRegion());
-    SizeSnapshotPost = Asm->getBufferSize();
-  }
-  // Update bookkeeping when we are completely finished with the bundle_lock
-  // region.
-  void leaveBundleLockRegion() { BundleLockStart = End; }
-  // Check whether the instruction sequence fits within the current bundle, and
-  // if not, add nop padding to the end of the current bundle.
-  void padToNextBundle() {
-    assert(isInBundleLockRegion());
-    if (!isSameBundle()) {
-      intptr_t PadToNextBundle = BundleSize - getPreAlignment();
-      Asm->padWithNop(PadToNextBundle);
-      SizeSnapshotPre += PadToNextBundle;
-      SizeSnapshotPost += PadToNextBundle;
-      assert((Asm->getBufferSize() & BundleMaskLo) == 0);
-      assert(Asm->getBufferSize() == SizeSnapshotPre);
-    }
-  }
-  // If align_to_end is specified, add padding such that the instruction
-  // sequences ends precisely at a bundle boundary.
-  void padForAlignToEnd() {
-    assert(isInBundleLockRegion());
-    if (isAlignToEnd()) {
-      if (intptr_t Offset = getPostAlignment()) {
-        Asm->padWithNop(BundleSize - Offset);
-        SizeSnapshotPre = Asm->getBufferSize();
-      }
-    }
-  }
-  // If pad_to_end is specified, add padding such that the first instruction
-  // after the instruction sequence starts at a bundle boundary.
-  void padForPadToEnd() {
-    assert(isInBundleLockRegion());
-    if (isPadToEnd()) {
-      if (intptr_t Offset = getPostAlignment()) {
-        Asm->padWithNop(BundleSize - Offset);
-        SizeSnapshotPre = Asm->getBufferSize();
-      }
-    }
-  } // Update bookkeeping when rolling back for the second pass.
-  void rollback() {
-    assert(isInBundleLockRegion());
-    Asm->setBufferSize(SizeSnapshotPre);
-    Asm->setPreliminary(false);
-  }
-
-private:
-  Assembler *const Asm;
-  // End is a sentinel value such that BundleLockStart==End implies that we are
-  // not in a bundle_lock region.
-  const InstList::const_iterator End;
-  InstList::const_iterator BundleLockStart;
-  const intptr_t BundleSize;
-  // Masking with BundleMaskLo identifies an address's bundle offset.
-  const intptr_t BundleMaskLo;
-  // Masking with BundleMaskHi identifies an address's bundle.
-  const intptr_t BundleMaskHi;
-  intptr_t SizeSnapshotPre = 0;
-  intptr_t SizeSnapshotPost = 0;
-};
-
-} // end of anonymous namespace
-
 void CfgNode::emitIAS(Cfg *Func) const {
   Func->setCurrentNode(this);
   Assembler *Asm = Func->getAssembler<>();
-  // TODO(stichnot): When sandboxing, defer binding the node label until just
-  // before the first instruction is emitted, to reduce the chance that a
-  // padding nop is a branch target.
   Asm->bindCfgNodeLabel(this);
   for (const Inst &I : Phis) {
     if (I.isDeleted())
@@ -1265,99 +1138,12 @@
     I.emitIAS(Func);
   }
 
-  // Do the simple emission if not sandboxed.
-  if (!getFlags().getUseSandboxing()) {
-    for (const Inst &I : Insts) {
-      if (!I.isDeleted() && !I.isRedundantAssign()) {
-        I.emitIAS(Func);
-        updateStats(Func, &I);
-      }
-    }
-    return;
-  }
-
-  // The remainder of the function handles emission with sandboxing. There are
-  // explicit bundle_lock regions delimited by bundle_lock and bundle_unlock
-  // instructions. All other instructions are treated as an implicit
-  // one-instruction bundle_lock region. Emission is done twice for each
-  // bundle_lock region. The first pass is a preliminary pass, after which we
-  // can figure out what nop padding is needed, then roll back, and make the
-  // final pass.
-  //
-  // Ideally, the first pass would be speculative and the second pass would
-  // only be done if nop padding were needed, but the structure of the
-  // integrated assembler makes it hard to roll back the state of label
-  // bindings, label links, and relocation fixups. Instead, the first pass just
-  // disables all mutation of that state.
-
-  BundleEmitHelper Helper(Asm, Insts);
-  InstList::const_iterator End = Insts.end();
-  // Retrying indicates that we had to roll back to the bundle_lock instruction
-  // to apply padding before the bundle_lock sequence.
-  bool Retrying = false;
-  for (InstList::const_iterator I = Insts.begin(); I != End; ++I) {
-    if (I->isDeleted() || I->isRedundantAssign())
-      continue;
-
-    if (llvm::isa<InstBundleLock>(I)) {
-      // Set up the initial bundle_lock state. This should not happen while
-      // retrying, because the retry rolls back to the instruction following
-      // the bundle_lock instruction.
-      assert(!Retrying);
-      Helper.enterBundleLock(I);
-      continue;
-    }
-
-    if (llvm::isa<InstBundleUnlock>(I)) {
-      Helper.enterBundleUnlock();
-      if (Retrying) {
-        // Make sure all instructions are in the same bundle.
-        assert(Helper.isSameBundle());
-        // If align_to_end is specified, make sure the next instruction begins
-        // the bundle.
-        assert(!Helper.isAlignToEnd() || Helper.getPostAlignment() == 0);
-        Helper.padForPadToEnd();
-        Helper.leaveBundleLockRegion();
-        Retrying = false;
-      } else {
-        // This is the first pass, so roll back for the retry pass.
-        Helper.rollback();
-        // Pad to the next bundle if the instruction sequence crossed a bundle
-        // boundary.
-        Helper.padToNextBundle();
-        // Insert additional padding to make AlignToEnd work.
-        Helper.padForAlignToEnd();
-        // Prepare for the retry pass after padding is done.
-        Retrying = true;
-        I = Helper.getBundleLockStart();
-      }
-      continue;
-    }
-
-    // I points to a non bundle_lock/bundle_unlock instruction.
-    if (Helper.isInBundleLockRegion()) {
-      I->emitIAS(Func);
-      // Only update stats during the final pass.
-      if (Retrying)
-        updateStats(Func, iteratorToInst(I));
-    } else {
-      // Treat it as though there were an implicit bundle_lock and
-      // bundle_unlock wrapping the instruction.
-      Helper.enterBundleLock(I);
-      I->emitIAS(Func);
-      Helper.enterBundleUnlock();
-      Helper.rollback();
-      Helper.padToNextBundle();
-      I->emitIAS(Func);
-      updateStats(Func, iteratorToInst(I));
-      Helper.leaveBundleLockRegion();
+  for (const Inst &I : Insts) {
+    if (!I.isDeleted() && !I.isRedundantAssign()) {
+      I.emitIAS(Func);
+      updateStats(Func, &I);
     }
   }
-
-  // Don't allow bundle locking across basic blocks, to keep the backtracking
-  // mechanism simple.
-  assert(!Helper.isInBundleLockRegion());
-  assert(!Retrying);
 }
 
 void CfgNode::dump(Cfg *Func) const {
diff --git a/third_party/subzero/src/IceClFlags.def b/third_party/subzero/src/IceClFlags.def
index 02469de..0a4b7b6 100644
--- a/third_party/subzero/src/IceClFlags.def
+++ b/third_party/subzero/src/IceClFlags.def
@@ -28,7 +28,7 @@
 // Multi-value flag, not available in a non-LLVM_CL build.
 struct dev_list_flag {};
 
-} // end of namespace detail
+} // namespace cl_detail
 
 #define COMMAND_LINE_FLAGS                                                     \
   /* Name, Type, ClType, <<flag declaration ctor arguments>> */                \
@@ -65,7 +65,7 @@
         clEnumValN(Ice::Target_ARM64, "arm64", "arm64"),                       \
         clEnumValN(Ice::Target_MIPS32, "mips", "mips32"),                      \
         clEnumValN(Ice::Target_MIPS32, "mips32", "mips32 (same as mips)")      \
-        CLENUMVALEND))                                                         \
+            CLENUMVALEND))                                                     \
                                                                                \
   /* The following are development flags, and ideally should not appear in a   \
    * release build. */                                                         \
@@ -134,10 +134,8 @@
   X(DumpStats, bool, dev_opt_flag, "szstats",                                  \
     cl::desc("Print statistics after translating each function"))              \
                                                                                \
-  X(DumpStrings, bool, dev_opt_flag,                                           \
-    "dump-strings",                                                            \
-    cl::desc("Dump string pools during compilation"),                          \
-    cl::init(false))                                                           \
+  X(DumpStrings, bool, dev_opt_flag, "dump-strings",                           \
+    cl::desc("Dump string pools during compilation"), cl::init(false))         \
                                                                                \
   X(EnableBlockProfile, bool, dev_opt_flag, "enable-block-profile",            \
     cl::desc("Instrument basic blocks, and output profiling "                  \
@@ -147,11 +145,10 @@
   X(LocalCSE, Ice::LCSEOptions, dev_opt_flag, "lcse",                          \
     cl::desc("Local common subexpression elimination"),                        \
     cl::init(Ice::LCSE_EnabledSSA),                                            \
-    cl::values(                                                                \
-      clEnumValN(Ice::LCSE_Disabled, "0", "disabled"),                         \
-      clEnumValN(Ice::LCSE_EnabledSSA, "enabled", "assume-ssa"),               \
-      clEnumValN(Ice::LCSE_EnabledNoSSA, "no-ssa", "no-assume-ssa")            \
-      CLENUMVALEND))                                                           \
+    cl::values(clEnumValN(Ice::LCSE_Disabled, "0", "disabled"),                \
+               clEnumValN(Ice::LCSE_EnabledSSA, "enabled", "assume-ssa"),      \
+               clEnumValN(Ice::LCSE_EnabledNoSSA, "no-ssa", "no-assume-ssa")   \
+                   CLENUMVALEND))                                              \
                                                                                \
   X(EmitRevision, bool, dev_opt_flag, "emit-revision",                         \
     cl::desc("Emit Subzero revision string into the output"), cl::init(true))  \
@@ -183,14 +180,13 @@
     cl::init(false))                                                           \
                                                                                \
   X(SplitGlobalVars, bool, dev_opt_flag, "split-global-vars",                  \
-    cl::desc("Global live range splitting"),                                   \
-    cl::init(false))                                                           \
+    cl::desc("Global live range splitting"), cl::init(false))                  \
                                                                                \
   X(InputFileFormat, llvm::NaClFileFormat, dev_opt_flag, "bitcode-format",     \
     cl::desc("Define format of input file:"),                                  \
     cl::values(clEnumValN(llvm::LLVMFormat, "llvm", "LLVM file (default)"),    \
                clEnumValN(llvm::PNaClFormat, "pnacl", "PNaCl bitcode file")    \
-               CLENUMVALEND),                                                  \
+                   CLENUMVALEND),                                              \
     cl::init(llvm::LLVMFormat))                                                \
                                                                                \
   X(KeepDeletedInsts, bool, dev_opt_flag, "keep-deleted-insts",                \
@@ -202,7 +198,7 @@
              "building LLVM IR first"),                                        \
     cl::init(false))                                                           \
                                                                                \
-   X(LocalCseMaxIterations, uint32_t, dev_opt_flag, "lcse-max-iters",          \
+  X(LocalCseMaxIterations, uint32_t, dev_opt_flag, "lcse-max-iters",           \
     cl::desc("Number of times local-cse is run on a block"), cl::init(1))      \
                                                                                \
   X(LoopInvariantCodeMotion, bool, dev_opt_flag, "licm",                       \
@@ -267,7 +263,7 @@
                    "Enable ARM Neon instructions"),                            \
         clEnumValN(Ice::ARM32InstructionSet_HWDivArm, "hwdiv-arm",             \
                    "Enable ARM integer divide instructions in ARM mode")       \
-        CLENUMVALEND))                                                         \
+            CLENUMVALEND))                                                     \
                                                                                \
   X(TestPrefix, std::string, dev_opt_flag, "prefix",                           \
     cl::desc("Prepend a prefix to symbol names for testing"), cl::init(""),    \
@@ -291,15 +287,11 @@
   X(TranslateOnlyString, std::string, dev_opt_flag, "translate-only",          \
     cl::desc("Translate only the given functions"), cl::init(":"))             \
                                                                                \
-  X(UseNonsfi, bool, dev_opt_flag, "nonsfi", cl::desc("Enable Non-SFI mode"))  \
-                                                                               \
   X(UseRestrictedRegisters, std::string, dev_list_flag, "reg-use",             \
     cl::CommaSeparated,                                                        \
     cl::desc("Only use specified registers for corresponding register "        \
              "classes"))                                                       \
                                                                                \
-  X(UseSandboxing, bool, dev_opt_flag, "sandbox", cl::desc("Use sandboxing"))  \
-                                                                               \
   X(Verbose, Ice::VerboseItem, dev_list_flag, "verbose", cl::CommaSeparated,   \
     cl::desc("Verbose options (can be comma-separated):"),                     \
     cl::values(                                                                \
@@ -340,8 +332,7 @@
     cl::init(":"))                                                             \
                                                                                \
   X(WasmBoundsCheck, bool, dev_opt_flag, "wasm-bounds-check",                  \
-    cl::desc("Add bounds checking code in WASM frontend"),                     \
-    cl::init(true))
+    cl::desc("Add bounds checking code in WASM frontend"), cl::init(true))
 
 //#define X(Name, Type, ClType, ...)
 
diff --git a/third_party/subzero/src/IceFixups.cpp b/third_party/subzero/src/IceFixups.cpp
index b7e8031..1aa6743 100644
--- a/third_party/subzero/src/IceFixups.cpp
+++ b/third_party/subzero/src/IceFixups.cpp
@@ -57,12 +57,6 @@
     Symbol = symbol().toString();
     Str << Symbol;
     assert(!ValueIsSymbol);
-    if (const auto *CR = llvm::dyn_cast<ConstantRelocatable>(ConstValue)) {
-      if (!Asm.fixupIsPCRel(kind()) && getFlags().getUseNonsfi() &&
-          CR->getName().toString() != GlobalOffsetTable) {
-        Str << "@GOTOFF";
-      }
-    }
   }
 
   assert(Asm.load<RelocOffsetT>(position()) == 0);
diff --git a/third_party/subzero/src/IceInst.cpp b/third_party/subzero/src/IceInst.cpp
index 556965c..4887cfd 100644
--- a/third_party/subzero/src/IceInst.cpp
+++ b/third_party/subzero/src/IceInst.cpp
@@ -101,8 +101,6 @@
     X(Switch, "switch");
     X(Assign, "assign");
     X(Breakpoint, "break");
-    X(BundleLock, "bundlelock");
-    X(BundleUnlock, "bundleunlock");
     X(FakeDef, "fakedef");
     X(FakeUse, "fakeuse");
     X(FakeKill, "fakekill");
@@ -551,13 +549,6 @@
 InstUnreachable::InstUnreachable(Cfg *Func)
     : InstHighLevel(Func, Inst::Unreachable, 0, nullptr) {}
 
-InstBundleLock::InstBundleLock(Cfg *Func, InstBundleLock::Option BundleOption)
-    : InstHighLevel(Func, Inst::BundleLock, 0, nullptr),
-      BundleOption(BundleOption) {}
-
-InstBundleUnlock::InstBundleUnlock(Cfg *Func)
-    : InstHighLevel(Func, Inst::BundleUnlock, 0, nullptr) {}
-
 InstFakeDef::InstFakeDef(Cfg *Func, Variable *Dest, Variable *Src)
     : InstHighLevel(Func, Inst::FakeDef, Src ? 1 : 0, Dest) {
   assert(Dest);
@@ -946,58 +937,6 @@
   Str << "unreachable";
 }
 
-void InstBundleLock::emit(const Cfg *Func) const {
-  if (!BuildDefs::dump())
-    return;
-  Ostream &Str = Func->getContext()->getStrEmit();
-  Str << "\t.bundle_lock";
-  switch (BundleOption) {
-  case Opt_None:
-    break;
-  case Opt_AlignToEnd:
-    Str << "\t"
-           "align_to_end";
-    break;
-  case Opt_PadToEnd:
-    Str << "\t"
-           "align_to_end /* pad_to_end */";
-    break;
-  }
-  Str << "\n";
-}
-
-void InstBundleLock::dump(const Cfg *Func) const {
-  if (!BuildDefs::dump())
-    return;
-  Ostream &Str = Func->getContext()->getStrDump();
-  Str << "bundle_lock";
-  switch (BundleOption) {
-  case Opt_None:
-    break;
-  case Opt_AlignToEnd:
-    Str << " align_to_end";
-    break;
-  case Opt_PadToEnd:
-    Str << " pad_to_end";
-    break;
-  }
-}
-
-void InstBundleUnlock::emit(const Cfg *Func) const {
-  if (!BuildDefs::dump())
-    return;
-  Ostream &Str = Func->getContext()->getStrEmit();
-  Str << "\t.bundle_unlock";
-  Str << "\n";
-}
-
-void InstBundleUnlock::dump(const Cfg *Func) const {
-  if (!BuildDefs::dump())
-    return;
-  Ostream &Str = Func->getContext()->getStrDump();
-  Str << "bundle_unlock";
-}
-
 void InstFakeDef::emit(const Cfg *Func) const {
   if (!BuildDefs::dump())
     return;
diff --git a/third_party/subzero/src/IceInst.h b/third_party/subzero/src/IceInst.h
index 0e2c039..ab48656 100644
--- a/third_party/subzero/src/IceInst.h
+++ b/third_party/subzero/src/IceInst.h
@@ -64,8 +64,6 @@
     Switch,
     Assign,        // not part of LLVM/PNaCl bitcode
     Breakpoint,    // not part of LLVM/PNaCl bitcode
-    BundleLock,    // not part of LLVM/PNaCl bitcode
-    BundleUnlock,  // not part of LLVM/PNaCl bitcode
     FakeDef,       // not part of LLVM/PNaCl bitcode
     FakeUse,       // not part of LLVM/PNaCl bitcode
     FakeKill,      // not part of LLVM/PNaCl bitcode
@@ -841,55 +839,6 @@
   explicit InstUnreachable(Cfg *Func);
 };
 
-/// BundleLock instruction.  There are no operands. Contains an option
-/// indicating whether align_to_end is specified.
-class InstBundleLock : public InstHighLevel {
-  InstBundleLock() = delete;
-  InstBundleLock(const InstBundleLock &) = delete;
-  InstBundleLock &operator=(const InstBundleLock &) = delete;
-
-public:
-  enum Option { Opt_None, Opt_AlignToEnd, Opt_PadToEnd };
-  static InstBundleLock *create(Cfg *Func, Option BundleOption) {
-    return new (Func->allocate<InstBundleLock>())
-        InstBundleLock(Func, BundleOption);
-  }
-  void emit(const Cfg *Func) const override;
-  void emitIAS(const Cfg * /* Func */) const override {}
-  bool isMemoryWrite() const override { return false; }
-  void dump(const Cfg *Func) const override;
-  Option getOption() const { return BundleOption; }
-  static bool classof(const Inst *Instr) {
-    return Instr->getKind() == BundleLock;
-  }
-
-private:
-  Option BundleOption;
-  InstBundleLock(Cfg *Func, Option BundleOption);
-};
-
-/// BundleUnlock instruction. There are no operands.
-class InstBundleUnlock : public InstHighLevel {
-  InstBundleUnlock() = delete;
-  InstBundleUnlock(const InstBundleUnlock &) = delete;
-  InstBundleUnlock &operator=(const InstBundleUnlock &) = delete;
-
-public:
-  static InstBundleUnlock *create(Cfg *Func) {
-    return new (Func->allocate<InstBundleUnlock>()) InstBundleUnlock(Func);
-  }
-  void emit(const Cfg *Func) const override;
-  void emitIAS(const Cfg * /* Func */) const override {}
-  bool isMemoryWrite() const override { return false; }
-  void dump(const Cfg *Func) const override;
-  static bool classof(const Inst *Instr) {
-    return Instr->getKind() == BundleUnlock;
-  }
-
-private:
-  explicit InstBundleUnlock(Cfg *Func);
-};
-
 /// FakeDef instruction. This creates a fake definition of a variable, which is
 /// how we represent the case when an instruction produces multiple results.
 /// This doesn't happen with high-level ICE instructions, but might with lowered
diff --git a/third_party/subzero/src/IceInstARM32.cpp b/third_party/subzero/src/IceInstARM32.cpp
index 2320c79..353e737 100644
--- a/third_party/subzero/src/IceInstARM32.cpp
+++ b/third_party/subzero/src/IceInstARM32.cpp
@@ -2438,9 +2438,6 @@
   if (auto *CR = llvm::dyn_cast<ConstantRelocatable>(Src0)) {
     Str << "#:lower16:";
     CR->emitWithoutPrefix(Func->getTarget());
-    if (getFlags().getUseNonsfi()) {
-      Str << " - .";
-    }
   } else {
     Src0->emit(Func);
   }
@@ -2467,9 +2464,6 @@
   if (auto *CR = llvm::dyn_cast<ConstantRelocatable>(Src1)) {
     Str << "#:upper16:";
     CR->emitWithoutPrefix(Func->getTarget());
-    if (getFlags().getUseNonsfi()) {
-      Str << " - .";
-    }
   } else {
     Src1->emit(Func);
   }
diff --git a/third_party/subzero/src/IceInstARM32.h b/third_party/subzero/src/IceInstARM32.h
index add44fb..29e435d 100644
--- a/third_party/subzero/src/IceInstARM32.h
+++ b/third_party/subzero/src/IceInstARM32.h
@@ -108,8 +108,7 @@
     return new (Func->allocate<OperandARM32Mem>())
         OperandARM32Mem(Func, Ty, Base, ImmOffset, Mode);
   }
-  /// (2) Reg +/- Reg with an optional shift of some kind and amount. Note that
-  /// this mode is disallowed in the NaCl sandbox.
+  /// (2) Reg +/- Reg with an optional shift of some kind and amount.
   static OperandARM32Mem *create(Cfg *Func, Type Ty, Variable *Base,
                                  Variable *Index, ShiftKind ShiftOp = kNoShift,
                                  uint16_t ShiftAmt = 0,
diff --git a/third_party/subzero/src/IceInstX8632.cpp b/third_party/subzero/src/IceInstX8632.cpp
index aba0a27..fb284ef 100644
--- a/third_party/subzero/src/IceInstX8632.cpp
+++ b/third_party/subzero/src/IceInstX8632.cpp
@@ -106,8 +106,7 @@
   return Disp;
 }
 
-void validateMemOperandPIC(const TargetX8632Traits::X86OperandMem *Mem,
-                           bool UseNonsfi) {
+void validateMemOperandPIC(const TargetX8632Traits::X86OperandMem *Mem) {
   if (!BuildDefs::asserts())
     return;
   const bool HasCR =
@@ -115,10 +114,7 @@
   (void)HasCR;
   const bool IsRebased = Mem->getIsRebased();
   (void)IsRebased;
-  if (UseNonsfi)
-    assert(HasCR == IsRebased);
-  else
-    assert(!IsRebased);
+  assert(!IsRebased);
 }
 
 } // end of anonymous namespace
@@ -126,8 +122,7 @@
 void TargetX8632Traits::X86OperandMem::emit(const Cfg *Func) const {
   if (!BuildDefs::dump())
     return;
-  const bool UseNonsfi = getFlags().getUseNonsfi();
-  validateMemOperandPIC(this, UseNonsfi);
+  validateMemOperandPIC(this);
   const auto *Target =
       static_cast<const ::Ice::X8632::TargetX8632 *>(Func->getTarget());
   // If the base is rematerializable, we need to replace it with the correct
@@ -161,7 +156,7 @@
     // TODO(sehr): ConstantRelocatable still needs updating for
     // rematerializable base/index and Disp.
     assert(Disp == 0);
-    CR->emitWithoutPrefix(Target, UseNonsfi ? "@GOTOFF" : "");
+    CR->emitWithoutPrefix(Target);
   } else {
     llvm_unreachable("Invalid offset type for x86 mem operand");
   }
@@ -258,8 +253,7 @@
     const Ice::TargetLowering *TargetLowering, bool /*IsLeaAddr*/) const {
   const auto *Target =
       static_cast<const ::Ice::X8632::TargetX8632 *>(TargetLowering);
-  const bool UseNonsfi = getFlags().getUseNonsfi();
-  validateMemOperandPIC(this, UseNonsfi);
+  validateMemOperandPIC(this);
   int32_t Disp = 0;
   if (getBase() && getBase()->isRematerializable()) {
     Disp += getRematerializableOffset(getBase(), Target);
diff --git a/third_party/subzero/src/IceInstX8664.cpp b/third_party/subzero/src/IceInstX8664.cpp
index 1a330e1..303859d 100644
--- a/third_party/subzero/src/IceInstX8664.cpp
+++ b/third_party/subzero/src/IceInstX8664.cpp
@@ -104,7 +104,6 @@
       static_cast<const ::Ice::X8664::TargetX8664 *>(Func->getTarget());
   // If the base is rematerializable, we need to replace it with the correct
   // physical register (stack or base pointer), and update the Offset.
-  const bool NeedSandboxing = Target->needSandboxing();
   int32_t Disp = 0;
   if (getBase() && getBase()->isRematerializable()) {
     Disp += getRematerializableOffset(getBase(), Target);
@@ -129,16 +128,10 @@
     // TODO(sehr): ConstantRelocatable still needs updating for
     // rematerializable base/index and Disp.
     assert(Disp == 0);
-    const bool UseNonsfi = getFlags().getUseNonsfi();
-    CR->emitWithoutPrefix(Target, UseNonsfi ? "@GOTOFF" : "");
-    assert(!UseNonsfi);
+    CR->emitWithoutPrefix(Target);
     if (Base == nullptr && Index == nullptr) {
       // rip-relative addressing.
-      if (NeedSandboxing) {
-        Str << "(%rip)";
-      } else {
-        Str << "(%eip)";
-      }
+      Str << "(%rip)";
     }
   } else {
     llvm_unreachable("Invalid offset type for x86 mem operand");
@@ -258,12 +251,6 @@
 
   // Now convert to the various possible forms.
   if (getBase() && getIndex()) {
-    const bool NeedSandboxing = Target->needSandboxing();
-    (void)NeedSandboxing;
-    assert(!NeedSandboxing || IsLeaAddr ||
-           (getBase()->getRegNum() == Traits::RegisterSet::Reg_r15) ||
-           (getBase()->getRegNum() == Traits::RegisterSet::Reg_rsp) ||
-           (getBase()->getRegNum() == Traits::RegisterSet::Reg_rbp));
     return X8664::Traits::Address(getEncodedGPR(getBase()->getRegNum()),
                                   getEncodedGPR(getIndex()->getRegNum()),
                                   X8664::Traits::ScaleFactor(getShift()), Disp,
@@ -282,10 +269,6 @@
   }
 
   if (Fixup == nullptr) {
-    // Absolute addresses are not allowed in Nexes -- they must be rebased
-    // w.r.t. %r15.
-    // Exception: LEAs are fine because they do not touch memory.
-    assert(!Target->needSandboxing() || IsLeaAddr);
     return X8664::Traits::Address::Absolute(Disp);
   }
 
diff --git a/third_party/subzero/src/IceInstX86Base.h b/third_party/subzero/src/IceInstX86Base.h
index 1a13358..c20f394 100644
--- a/third_party/subzero/src/IceInstX86Base.h
+++ b/third_party/subzero/src/IceInstX86Base.h
@@ -101,7 +101,6 @@
       FakeRMW,
       Fld,
       Fstp,
-      GetIP,
       Icmp,
       Idiv,
       Imul,
@@ -284,26 +283,6 @@
                    InstArithmetic::OpKind Op, Variable *Beacon);
   };
 
-  class InstX86GetIP final : public InstX86Base {
-    InstX86GetIP() = delete;
-    InstX86GetIP(const InstX86GetIP &) = delete;
-    InstX86GetIP &operator=(const InstX86GetIP &) = delete;
-
-  public:
-    static InstX86GetIP *create(Cfg *Func, Variable *Dest) {
-      return new (Func->allocate<InstX86GetIP>()) InstX86GetIP(Func, Dest);
-    }
-    void emit(const Cfg *Func) const override;
-    void emitIAS(const Cfg *Func) const override;
-    void dump(const Cfg *Func) const override;
-    static bool classof(const Inst *Instr) {
-      return InstX86Base::isClassof(Instr, InstX86Base::GetIP);
-    }
-
-  private:
-    InstX86GetIP(Cfg *Func, Variable *Dest);
-  };
-
   /// InstX86Label represents an intra-block label that is the target of an
   /// intra-block branch. The offset between the label and the branch must be
   /// fit into one byte (considered "near"). These are used for lowering i1
@@ -3189,7 +3168,6 @@
 ///
 /// using Insts = ::Ice::X86NAMESPACE::Insts<TraitsType>;
 template <typename TraitsType> struct Insts {
-  using GetIP = typename InstImpl<TraitsType>::InstX86GetIP;
   using FakeRMW = typename InstImpl<TraitsType>::InstX86FakeRMW;
   using Label = typename InstImpl<TraitsType>::InstX86Label;
 
diff --git a/third_party/subzero/src/IceInstX86BaseImpl.h b/third_party/subzero/src/IceInstX86BaseImpl.h
index 20636ea..99e5993 100644
--- a/third_party/subzero/src/IceInstX86BaseImpl.h
+++ b/third_party/subzero/src/IceInstX86BaseImpl.h
@@ -58,10 +58,6 @@
 }
 
 template <typename TraitsType>
-InstImpl<TraitsType>::InstX86GetIP::InstX86GetIP(Cfg *Func, Variable *Dest)
-    : InstX86Base(Func, InstX86Base::GetIP, 0, Dest) {}
-
-template <typename TraitsType>
 InstImpl<TraitsType>::InstX86Mul::InstX86Mul(Cfg *Func, Variable *Dest,
                                              Variable *Source1,
                                              Operand *Source2)
@@ -421,38 +417,6 @@
 }
 
 template <typename TraitsType>
-void InstImpl<TraitsType>::InstX86GetIP::emit(const Cfg *Func) const {
-  if (!BuildDefs::dump())
-    return;
-  const auto *Dest = this->getDest();
-  assert(Dest->hasReg());
-  Ostream &Str = Func->getContext()->getStrEmit();
-  Str << "\t"
-         "call"
-         "\t";
-  auto *Target = static_cast<TargetLowering *>(Func->getTarget());
-  Target->emitWithoutPrefix(Target->createGetIPForRegister(Dest));
-}
-
-template <typename TraitsType>
-void InstImpl<TraitsType>::InstX86GetIP::emitIAS(const Cfg *Func) const {
-  const auto *Dest = this->getDest();
-  Assembler *Asm = Func->getAssembler<Assembler>();
-  assert(Dest->hasReg());
-  Asm->call(static_cast<TargetLowering *>(Func->getTarget())
-                ->createGetIPForRegister(Dest));
-}
-
-template <typename TraitsType>
-void InstImpl<TraitsType>::InstX86GetIP::dump(const Cfg *Func) const {
-  if (!BuildDefs::dump())
-    return;
-  Ostream &Str = Func->getContext()->getStrDump();
-  this->getDest()->dump(Func);
-  Str << " = call getIP";
-}
-
-template <typename TraitsType>
 void InstImpl<TraitsType>::InstX86Label::emit(const Cfg *Func) const {
   if (!BuildDefs::dump())
     return;
diff --git a/third_party/subzero/src/IceIntrinsics.h b/third_party/subzero/src/IceIntrinsics.h
index 447ac4d..b18eef4 100644
--- a/third_party/subzero/src/IceIntrinsics.h
+++ b/third_party/subzero/src/IceIntrinsics.h
@@ -49,7 +49,6 @@
   Memcpy,
   Memmove,
   Memset,
-  NaClReadTP,
   Setjmp,
   Sqrt,
   Stacksave,
diff --git a/third_party/subzero/src/IceTargetLowering.cpp b/third_party/subzero/src/IceTargetLowering.cpp
index 1a7500e..ab09b48 100644
--- a/third_party/subzero/src/IceTargetLowering.cpp
+++ b/third_party/subzero/src/IceTargetLowering.cpp
@@ -312,39 +312,8 @@
   }
 }
 
-TargetLowering::SandboxType
-TargetLowering::determineSandboxTypeFromFlags(const ClFlags &Flags) {
-  assert(!Flags.getUseSandboxing() || !Flags.getUseNonsfi());
-  if (Flags.getUseNonsfi()) {
-    return TargetLowering::ST_Nonsfi;
-  }
-  if (Flags.getUseSandboxing()) {
-    return TargetLowering::ST_NaCl;
-  }
-  return TargetLowering::ST_None;
-}
-
 TargetLowering::TargetLowering(Cfg *Func)
-    : Func(Func), Ctx(Func->getContext()),
-      SandboxingType(determineSandboxTypeFromFlags(getFlags())) {}
-
-TargetLowering::AutoBundle::AutoBundle(TargetLowering *Target,
-                                       InstBundleLock::Option Option)
-    : Target(Target), NeedSandboxing(getFlags().getUseSandboxing()) {
-  assert(!Target->AutoBundling);
-  Target->AutoBundling = true;
-  if (NeedSandboxing) {
-    Target->_bundle_lock(Option);
-  }
-}
-
-TargetLowering::AutoBundle::~AutoBundle() {
-  assert(Target->AutoBundling);
-  Target->AutoBundling = false;
-  if (NeedSandboxing) {
-    Target->_bundle_unlock();
-  }
-}
+    : Func(Func), Ctx(Func->getContext()) {}
 
 void TargetLowering::genTargetHelperCalls() {
   TimerMarker T(TimerStack::TT_genHelpers, Func);
@@ -1029,12 +998,9 @@
   Str << "\t.type\t" << Name << ",%object\n";
 
   const bool UseDataSections = getFlags().getDataSections();
-  const bool UseNonsfi = getFlags().getUseNonsfi();
   const std::string Suffix =
       dataSectionSuffix(SectionSuffix, Name, UseDataSections);
-  if (IsConstant && UseNonsfi)
-    Str << "\t.section\t.data.rel.ro" << Suffix << ",\"aw\",%progbits\n";
-  else if (IsConstant)
+  if (IsConstant)
     Str << "\t.section\t.rodata" << Suffix << ",\"a\",%progbits\n";
   else if (HasNonzeroInitializer)
     Str << "\t.section\t.data" << Suffix << ",\"aw\",%progbits\n";
diff --git a/third_party/subzero/src/IceTargetLowering.h b/third_party/subzero/src/IceTargetLowering.h
index a62e26e..d55e48c 100644
--- a/third_party/subzero/src/IceTargetLowering.h
+++ b/third_party/subzero/src/IceTargetLowering.h
@@ -94,7 +94,7 @@
   InstList::iterator getNext() const { return Next; }
   InstList::iterator getEnd() const { return End; }
   void insert(Inst *Instr);
-  template <typename Inst, typename... Args> Inst *insert(Args &&... A) {
+  template <typename Inst, typename... Args> Inst *insert(Args &&...A) {
     auto *New = Inst::create(Node->getCfg(), std::forward<Args>(A)...);
     insert(New);
     return New;
@@ -343,46 +343,11 @@
   virtual ~TargetLowering() = default;
 
 private:
-  // This control variable is used by AutoBundle (RAII-style bundle
-  // locking/unlocking) to prevent nested bundles.
-  bool AutoBundling = false;
-
   /// This indicates whether we are in the genTargetHelperCalls phase, and
   /// therefore can do things like scalarization.
   bool GeneratingTargetHelpers = false;
 
-  // _bundle_lock(), and _bundle_unlock(), were made private to force subtargets
-  // to use the AutoBundle helper.
-  void
-  _bundle_lock(InstBundleLock::Option BundleOption = InstBundleLock::Opt_None) {
-    Context.insert<InstBundleLock>(BundleOption);
-  }
-  void _bundle_unlock() { Context.insert<InstBundleUnlock>(); }
-
 protected:
-  /// AutoBundle provides RIAA-style bundling. Sub-targets are expected to use
-  /// it when emitting NaCl Bundles to ensure proper bundle_unlocking, and
-  /// prevent nested bundles.
-  ///
-  /// AutoBundle objects will emit a _bundle_lock during construction (but only
-  /// if sandboxed code generation was requested), and a bundle_unlock() during
-  /// destruction. By carefully scoping objects of this type, Subtargets can
-  /// ensure proper bundle emission.
-  class AutoBundle {
-    AutoBundle() = delete;
-    AutoBundle(const AutoBundle &) = delete;
-    AutoBundle &operator=(const AutoBundle &) = delete;
-
-  public:
-    explicit AutoBundle(TargetLowering *Target, InstBundleLock::Option Option =
-                                                    InstBundleLock::Opt_None);
-    ~AutoBundle();
-
-  private:
-    TargetLowering *const Target;
-    const bool NeedSandboxing;
-  };
-
   explicit TargetLowering(Cfg *Func);
   // Applies command line filters to TypeToRegisterSet array.
   static void filterTypeToRegisterSet(
@@ -501,7 +466,7 @@
   template <typename... Operands,
             typename F = std::function<Inst *(Variable *, Operands *...)>>
   void scalarizeInstruction(Variable *Dest, F insertScalarInstruction,
-                            Operands *... Srcs) {
+                            Operands *...Srcs) {
     assert(GeneratingTargetHelpers &&
            "scalarizeInstruction called during incorrect phase");
     const Type DestTy = Dest->getType();
@@ -580,15 +545,6 @@
     return insertScalarInstruction(Res, Src0, Src1, Src2);
   }
 
-  /// SandboxType enumerates all possible sandboxing strategies that
-  enum SandboxType {
-    ST_None,
-    ST_NaCl,
-    ST_Nonsfi,
-  };
-
-  static SandboxType determineSandboxTypeFromFlags(const ClFlags &Flags);
-
   Cfg *Func;
   GlobalContext *Ctx;
   bool HasComputedFrame = false;
@@ -596,9 +552,6 @@
   SizeT NextLabelNumber = 0;
   SizeT NextJumpTableNumber = 0;
   LoweringContext Context;
-  const SandboxType SandboxingType = ST_None;
-
-  const static constexpr char *H_getIP_prefix = "__Sz_getIP_";
 };
 
 /// TargetDataLowering is used for "lowering" data including initializers for
diff --git a/third_party/subzero/src/IceTargetLoweringARM32.cpp b/third_party/subzero/src/IceTargetLoweringARM32.cpp
index c479feb..ef588ec 100644
--- a/third_party/subzero/src/IceTargetLoweringARM32.cpp
+++ b/third_party/subzero/src/IceTargetLoweringARM32.cpp
@@ -52,14 +52,6 @@
 
 void staticInit(::Ice::GlobalContext *Ctx) {
   ::Ice::ARM32::TargetARM32::staticInit(Ctx);
-  if (Ice::getFlags().getUseNonsfi()) {
-    // In nonsfi, we need to reference the _GLOBAL_OFFSET_TABLE_ for accessing
-    // globals. The GOT is an external symbol (i.e., it is not defined in the
-    // pexe) so we need to register it as such so that ELF emission won't barf
-    // on an "unknown" symbol. The GOT is added to the External symbols list
-    // here because staticInit() is invoked in a single-thread context.
-    Ctx->getConstantExternSym(Ctx->getGlobalString(::Ice::GlobalOffsetTable));
-  }
 }
 
 bool shouldBePooled(const ::Ice::Constant *C) {
@@ -304,8 +296,7 @@
 } // end of anonymous namespace
 
 TargetARM32::TargetARM32(Cfg *Func)
-    : TargetLowering(Func), NeedSandboxing(SandboxingType == ST_NaCl),
-      CPUFeatures(getFlags()) {}
+    : TargetLowering(Func), CPUFeatures(getFlags()) {}
 
 void TargetARM32::staticInit(GlobalContext *Ctx) {
   RegNumT::setLimit(RegARM32::Reg_NUM);
@@ -806,21 +797,6 @@
       Instr->setDeleted();
       return;
     }
-    case Intrinsics::NaClReadTP: {
-      if (SandboxingType == ST_NaCl) {
-        return;
-      }
-      static constexpr SizeT MaxArgs = 0;
-      Operand *TargetHelper =
-          SandboxingType == ST_Nonsfi
-              ? Ctx->getConstantExternSym(
-                    Ctx->getGlobalString("__aeabi_read_tp"))
-              : Ctx->getRuntimeHelperFunc(RuntimeHelper::H_call_read_tp);
-      Context.insert<InstCall>(MaxArgs, Dest, TargetHelper, NoTailCall,
-                               IsTargetHelperCall);
-      Instr->setDeleted();
-      return;
-    }
     case Intrinsics::Setjmp: {
       static constexpr SizeT MaxArgs = 1;
       Operand *TargetHelper =
@@ -855,34 +831,6 @@
   }
 }
 
-void TargetARM32::createGotPtr() {
-  if (SandboxingType != ST_Nonsfi) {
-    return;
-  }
-  GotPtr = Func->makeVariable(IceType_i32);
-}
-
-void TargetARM32::insertGotPtrInitPlaceholder() {
-  if (SandboxingType != ST_Nonsfi) {
-    return;
-  }
-  assert(GotPtr != nullptr);
-  // We add the two placeholder instructions here. The first fakedefs T, an
-  // infinite-weight temporary, while the second fakedefs the GotPtr "using" T.
-  // This is needed because the GotPtr initialization, if needed, will require
-  // a register:
-  //
-  //   movw     reg, _GLOBAL_OFFSET_TABLE_ - 16 - .
-  //   movt     reg, _GLOBAL_OFFSET_TABLE_ - 12 - .
-  //   add      reg, pc, reg
-  //   mov      GotPtr, reg
-  //
-  // If GotPtr is not used, then both these pseudo-instructions are dce'd.
-  Variable *T = makeReg(IceType_i32);
-  Context.insert<InstFakeDef>(T);
-  Context.insert<InstFakeDef>(GotPtr, T);
-}
-
 GlobalString
 TargetARM32::createGotoffRelocation(const ConstantRelocatable *CR) {
   GlobalString CRName = CR->getName();
@@ -910,95 +858,9 @@
   return CRGotoffName;
 }
 
-void TargetARM32::materializeGotAddr(CfgNode *Node) {
-  if (SandboxingType != ST_Nonsfi) {
-    return;
-  }
-
-  // At first, we try to find the
-  //    GotPtr = def T
-  // pseudo-instruction that we placed for defining the got ptr. That
-  // instruction is not just a place-holder for defining the GotPtr (thus
-  // keeping liveness consistent), but it is also located at a point where it is
-  // safe to materialize the got addr -- i.e., before loading parameters to
-  // registers, but after moving register parameters from their home location.
-  InstFakeDef *DefGotPtr = nullptr;
-  for (auto &Inst : Node->getInsts()) {
-    auto *FakeDef = llvm::dyn_cast<InstFakeDef>(&Inst);
-    if (FakeDef != nullptr && FakeDef->getDest() == GotPtr) {
-      DefGotPtr = FakeDef;
-      break;
-    }
-  }
-
-  if (DefGotPtr == nullptr || DefGotPtr->isDeleted()) {
-    return;
-  }
-
-  // The got addr needs to be materialized at the same point where DefGotPtr
-  // lives.
-  Context.setInsertPoint(instToIterator(DefGotPtr));
-  assert(DefGotPtr->getSrcSize() == 1);
-  auto *T = llvm::cast<Variable>(DefGotPtr->getSrc(0));
-  loadNamedConstantRelocatablePIC(Ctx->getGlobalString(GlobalOffsetTable), T,
-                                  [this, T](Variable *PC) { _add(T, PC, T); });
-  _mov(GotPtr, T);
-  DefGotPtr->setDeleted();
-}
-
-void TargetARM32::loadNamedConstantRelocatablePIC(
-    GlobalString Name, Variable *Register,
-    std::function<void(Variable *PC)> Finish) {
-  assert(SandboxingType == ST_Nonsfi);
-  // We makeReg() here instead of getPhysicalRegister() because the latter ends
-  // up creating multi-blocks temporaries that liveness fails to validate.
-  auto *PC = makeReg(IceType_i32, RegARM32::Reg_pc);
-
-  auto *AddPcReloc = RelocOffset::create(Ctx);
-  AddPcReloc->setSubtract(true);
-  auto *AddPcLabel = InstARM32Label::create(Func, this);
-  AddPcLabel->setRelocOffset(AddPcReloc);
-
-  auto *MovwReloc = RelocOffset::create(Ctx);
-  auto *MovwLabel = InstARM32Label::create(Func, this);
-  MovwLabel->setRelocOffset(MovwReloc);
-
-  auto *MovtReloc = RelocOffset::create(Ctx);
-  auto *MovtLabel = InstARM32Label::create(Func, this);
-  MovtLabel->setRelocOffset(MovtReloc);
-
-  // The EmitString for these constant relocatables have hardcoded offsets
-  // attached to them. This could be dangerous if, e.g., we ever implemented
-  // instruction scheduling but llvm-mc currently does not support
-  //
-  //   movw reg, #:lower16:(Symbol - Label - Number)
-  //   movt reg, #:upper16:(Symbol - Label - Number)
-  //
-  // relocations.
-  static constexpr RelocOffsetT PcOffset = -8;
-  auto *CRLower = Ctx->getConstantSymWithEmitString(
-      PcOffset, {MovwReloc, AddPcReloc}, Name, Name + " -16");
-  auto *CRUpper = Ctx->getConstantSymWithEmitString(
-      PcOffset, {MovtReloc, AddPcReloc}, Name, Name + " -12");
-
-  Context.insert(MovwLabel);
-  _movw(Register, CRLower);
-  Context.insert(MovtLabel);
-  _movt(Register, CRUpper);
-  // PC = fake-def to keep liveness consistent.
-  Context.insert<InstFakeDef>(PC);
-  Context.insert(AddPcLabel);
-  Finish(PC);
-}
-
 void TargetARM32::translateO2() {
   TimerMarker T(TimerStack::TT_O2, Func);
 
-  // TODO(stichnot): share passes with other targets?
-  // https://code.google.com/p/nativeclient/issues/detail?id=4094
-  if (SandboxingType == ST_Nonsfi) {
-    createGotPtr();
-  }
   genTargetHelperCalls();
   findMaxStackOutArgsSize();
 
@@ -1046,9 +908,6 @@
     return;
   Func->dump("After ARM32 address mode opt");
 
-  if (SandboxingType == ST_Nonsfi) {
-    insertGotPtrInitPlaceholder();
-  }
   Func->genCode();
   if (Func->hasError())
     return;
@@ -1108,11 +967,6 @@
 void TargetARM32::translateOm1() {
   TimerMarker T(TimerStack::TT_Om1, Func);
 
-  // TODO(stichnot): share passes with other targets?
-  if (SandboxingType == ST_Nonsfi) {
-    createGotPtr();
-  }
-
   genTargetHelperCalls();
   findMaxStackOutArgsSize();
 
@@ -1134,9 +988,6 @@
 
   Func->doArgLowering();
 
-  if (SandboxingType == ST_Nonsfi) {
-    insertGotPtrInitPlaceholder();
-  }
   Func->genCode();
   if (Func->hasError())
     return;
@@ -1526,10 +1377,6 @@
   // because their uses are recorded as S regs uses.
   SmallBitVector ToPreserve(RegARM32::Reg_NUM);
   for (SizeT i = 0; i < CalleeSaves.size(); ++i) {
-    if (NeedSandboxing && i == RegARM32::Reg_r9) {
-      // r9 is never updated in sandboxed code.
-      continue;
-    }
     if (CalleeSaves[i] && RegsUsed[i]) {
       ToPreserve |= RegisterAliases[i];
     }
@@ -1611,12 +1458,13 @@
 
   // Generate "sub sp, SpillAreaSizeBytes"
   if (SpillAreaSizeBytes) {
+    Variable *SP = getPhysicalRegister(RegARM32::Reg_sp);
     // Use the scratch register if needed to legalize the immediate.
     Operand *SubAmount = legalize(Ctx->getConstantInt32(SpillAreaSizeBytes),
                                   Legal_Reg | Legal_Flex, getReservedTmpReg());
-    Sandboxer(this).sub_sp(SubAmount);
+    _sub(SP, SP, SubAmount);
     if (FixedAllocaAlignBytes > ARM32_STACK_ALIGNMENT_BYTES) {
-      Sandboxer(this).align_sp(FixedAllocaAlignBytes);
+      alignRegisterPow2(SP, FixedAllocaAlignBytes);
     }
   }
 
@@ -1630,8 +1478,6 @@
   if (!UsesFramePointer)
     BasicFrameOffset += SpillAreaSizeBytes;
 
-  materializeGotAddr(Node);
-
   const VarList &Args = Func->getArgs();
   size_t InArgsSizeBytes = 0;
   TargetARM32::CallingConv CC;
@@ -1710,7 +1556,7 @@
     // use of SP before the assignment of SP=FP keeps previous SP adjustments
     // from being dead-code eliminated.
     Context.insert<InstFakeUse>(SP);
-    Sandboxer(this).reset_sp(FP);
+    _mov_redefined(SP, FP);
   } else {
     // add SP, SpillAreaSizeBytes
     if (SpillAreaSizeBytes) {
@@ -1718,7 +1564,7 @@
       Operand *AddAmount =
           legalize(Ctx->getConstantInt32(SpillAreaSizeBytes),
                    Legal_Reg | Legal_Flex, getReservedTmpReg());
-      Sandboxer(this).add_sp(AddAmount);
+      _add(SP, SP, AddAmount);
     }
   }
 
@@ -1726,27 +1572,6 @@
     _pop(PreservedGPRs);
   if (!PreservedSRegs.empty())
     _pop(PreservedSRegs);
-
-  if (!getFlags().getUseSandboxing())
-    return;
-
-  // Change the original ret instruction into a sandboxed return sequence.
-  //
-  // bundle_lock
-  // bic lr, #0xc000000f
-  // bx lr
-  // bundle_unlock
-  //
-  // This isn't just aligning to the getBundleAlignLog2Bytes(). It needs to
-  // restrict to the lower 1GB as well.
-  Variable *LR = getPhysicalRegister(RegARM32::Reg_lr);
-  Variable *RetValue = nullptr;
-  if (RI->getSrcSize())
-    RetValue = llvm::cast<Variable>(RI->getSrc(0));
-
-  Sandboxer(this).ret(LR, RetValue);
-
-  RI->setDeleted();
 }
 
 bool TargetARM32::isLegalMemOffset(Type Ty, int32_t Offset) const {
@@ -1869,9 +1694,8 @@
     assert(!SrcR->isRematerializable());
     const int32_t Offset = Dest->getStackOffset();
     // This is a _mov(Mem(), Variable), i.e., a store.
-    TargetARM32::Sandboxer(Target).str(
-        SrcR, createMemOperand(DestTy, StackOrFrameReg, Offset),
-        MovInstr->getPredicate());
+    Target->_str(SrcR, createMemOperand(DestTy, StackOrFrameReg, Offset),
+                 MovInstr->getPredicate());
     // _str() does not have a Dest, so we add a fake-def(Dest).
     Target->Context.insert<InstFakeDef>(Dest);
     Legalized = true;
@@ -1894,9 +1718,8 @@
       if (!Var->hasReg()) {
         // This is a _mov(Variable, Mem()), i.e., a load.
         const int32_t Offset = Var->getStackOffset();
-        TargetARM32::Sandboxer(Target).ldr(
-            Dest, createMemOperand(DestTy, StackOrFrameReg, Offset),
-            MovInstr->getPredicate());
+        Target->_ldr(Dest, createMemOperand(DestTy, StackOrFrameReg, Offset),
+                     MovInstr->getPredicate());
         Legalized = true;
       }
     }
@@ -1965,7 +1788,7 @@
     Legalized = true;
   }
 
-  if (!Legalized && !Target->NeedSandboxing) {
+  if (!Legalized) {
     return nullptr;
   }
 
@@ -1973,10 +1796,6 @@
     return createMemOperand(Mem->getType(), Base, Offset, AllowOffsets);
   }
 
-  if (Target->NeedSandboxing) {
-    llvm::report_fatal_error("Reg-Reg address mode is not allowed.");
-  }
-
   assert(MemTraits[Mem->getType()].CanHaveIndex);
 
   if (Offset != 0) {
@@ -2048,8 +1867,7 @@
       } else if (auto *LdrInstr = llvm::dyn_cast<InstARM32Ldr>(CurInstr)) {
         if (OperandARM32Mem *LegalMem = Legalizer.legalizeMemOperand(
                 llvm::cast<OperandARM32Mem>(LdrInstr->getSrc(0)))) {
-          Sandboxer(this).ldr(CurInstr->getDest(), LegalMem,
-                              LdrInstr->getPredicate());
+          _ldr(CurInstr->getDest(), LegalMem, LdrInstr->getPredicate());
           CurInstr->setDeleted();
         }
       } else if (auto *LdrexInstr = llvm::dyn_cast<InstARM32Ldrex>(CurInstr)) {
@@ -2057,15 +1875,14 @@
         if (OperandARM32Mem *LegalMem = Legalizer.legalizeMemOperand(
                 llvm::cast<OperandARM32Mem>(LdrexInstr->getSrc(0)),
                 DisallowOffsetsBecauseLdrex)) {
-          Sandboxer(this).ldrex(CurInstr->getDest(), LegalMem,
-                                LdrexInstr->getPredicate());
+          _ldrex(CurInstr->getDest(), LegalMem, LdrexInstr->getPredicate());
           CurInstr->setDeleted();
         }
       } else if (auto *StrInstr = llvm::dyn_cast<InstARM32Str>(CurInstr)) {
         if (OperandARM32Mem *LegalMem = Legalizer.legalizeMemOperand(
                 llvm::cast<OperandARM32Mem>(StrInstr->getSrc(1)))) {
-          Sandboxer(this).str(llvm::cast<Variable>(CurInstr->getSrc(0)),
-                              LegalMem, StrInstr->getPredicate());
+          _str(llvm::cast<Variable>(CurInstr->getSrc(0)), LegalMem,
+               StrInstr->getPredicate());
           CurInstr->setDeleted();
         }
       } else if (auto *StrexInstr = llvm::dyn_cast<InstARM32Strex>(CurInstr)) {
@@ -2073,9 +1890,8 @@
         if (OperandARM32Mem *LegalMem = Legalizer.legalizeMemOperand(
                 llvm::cast<OperandARM32Mem>(StrexInstr->getSrc(1)),
                 DisallowOffsetsBecauseStrex)) {
-          Sandboxer(this).strex(CurInstr->getDest(),
-                                llvm::cast<Variable>(CurInstr->getSrc(0)),
-                                LegalMem, StrexInstr->getPredicate());
+          _strex(CurInstr->getDest(), llvm::cast<Variable>(CurInstr->getSrc(0)),
+                 LegalMem, StrexInstr->getPredicate());
           CurInstr->setDeleted();
         }
       }
@@ -2230,7 +2046,7 @@
 
   Variable *SP = getPhysicalRegister(RegARM32::Reg_sp);
   if (OverAligned) {
-    Sandboxer(this).align_sp(Alignment);
+    alignRegisterPow2(SP, Alignment);
   }
 
   Variable *Dest = Instr->getDest();
@@ -2255,7 +2071,7 @@
     // in Dest.
     Operand *SubAmountRF =
         legalize(Ctx->getConstantInt32(Value), Legal_Reg | Legal_Flex);
-    Sandboxer(this).sub_sp(SubAmountRF);
+    _sub(SP, SP, SubAmountRF);
   } else {
     // Non-constant sizes need to be adjusted to the next highest multiple of
     // the required alignment at runtime.
@@ -2265,7 +2081,7 @@
     Operand *AddAmount = legalize(Ctx->getConstantInt32(Alignment - 1));
     _add(T, T, AddAmount);
     alignRegisterPow2(T, Alignment);
-    Sandboxer(this).sub_sp(T);
+    _sub(SP, SP, T);
   }
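
For the non-constant path above, rounding a size up to the next multiple
of a power-of-two alignment is the usual add-then-mask idiom. A small
self-contained sketch (names are illustrative, not Subzero APIs):

  #include <cassert>
  #include <cstdint>

  // Mirrors the _add + alignRegisterPow2 pair emitted above:
  // T = (T + Alignment - 1) & ~(Alignment - 1).
  static uint32_t roundUpToPow2(uint32_t Size, uint32_t Alignment) {
    return (Size + Alignment - 1) & ~(Alignment - 1);
  }

  int main() {
    assert(roundUpToPow2(20, 16) == 32);
    assert(roundUpToPow2(32, 16) == 32);
    return 0;
  }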
 
   // Adds back a few bytes to SP to account for the out args area.
@@ -3837,8 +3653,7 @@
     Context.insert<InstFakeUse>(RegArg);
   }
 
-  InstARM32Call *NewCall =
-      Sandboxer(this, InstBundleLock::Opt_AlignToEnd).bl(ReturnReg, CallTarget);
+  InstARM32Call *NewCall = Context.insert<InstARM32Call>(ReturnReg, CallTarget);
 
   if (ReturnRegHi)
     Context.insert<InstFakeDef>(ReturnRegHi);
@@ -5282,16 +5097,6 @@
   case Intrinsics::Memset: {
     llvm::report_fatal_error("memmove should have been prelowered.");
   }
-  case Intrinsics::NaClReadTP: {
-    if (SandboxingType != ST_NaCl) {
-      llvm::report_fatal_error("nacl-read-tp should have been prelowered.");
-    }
-    Variable *TP = legalizeToReg(OperandARM32Mem::create(
-        Func, getPointerType(), getPhysicalRegister(RegARM32::Reg_r9),
-        llvm::cast<ConstantInteger32>(Ctx->getConstantZero(IceType_i32))));
-    _mov(Dest, TP);
-    return;
-  }
   case Intrinsics::Setjmp: {
     llvm::report_fatal_error("setjmp should have been prelowered.");
   }
@@ -5308,8 +5113,9 @@
     return;
   }
   case Intrinsics::Stackrestore: {
+    Variable *SP = getPhysicalRegister(RegARM32::Reg_sp);
     Variable *Val = legalizeToReg(Instr->getArg(0));
-    Sandboxer(this).reset_sp(Val);
+    _mov_redefined(SP, Val);
     return;
   }
   case Intrinsics::Trap:
@@ -5783,9 +5589,8 @@
   (void)MemTraitsSize;
   assert(Ty < MemTraitsSize);
   auto *TypeTraits = &MemTraits[Ty];
-  const bool CanHaveIndex = !NeedSandboxing && TypeTraits->CanHaveIndex;
-  const bool CanHaveShiftedIndex =
-      !NeedSandboxing && TypeTraits->CanHaveShiftedIndex;
+  const bool CanHaveIndex = TypeTraits->CanHaveIndex;
+  const bool CanHaveShiftedIndex = TypeTraits->CanHaveShiftedIndex;
   const bool CanHaveImm = TypeTraits->CanHaveImm;
   const int32_t ValidImmMask = TypeTraits->ValidImmMask;
   (void)ValidImmMask;
@@ -6228,71 +6033,8 @@
   _trap();
 }
 
-namespace {
-// Returns whether Opnd needs the GOT address. Currently,
-// ConstantRelocatables and fp constants need access to the GOT address.
-bool operandNeedsGot(const Operand *Opnd) {
-  if (llvm::isa<ConstantRelocatable>(Opnd)) {
-    return true;
-  }
-
-  if (llvm::isa<ConstantFloat>(Opnd)) {
-    uint32_t _;
-    return !OperandARM32FlexFpImm::canHoldImm(Opnd, &_);
-  }
-
-  const auto *F64 = llvm::dyn_cast<ConstantDouble>(Opnd);
-  if (F64 != nullptr) {
-    uint32_t _;
-    return !OperandARM32FlexFpImm::canHoldImm(Opnd, &_) &&
-           !isFloatingPointZero(F64);
-  }
-
-  return false;
-}
-
-// Returns whether Phi needs the GOT address (which it does if any of its
-// operands needs the GOT address).
-bool phiNeedsGot(const InstPhi *Phi) {
-  if (Phi->isDeleted()) {
-    return false;
-  }
-
-  for (SizeT I = 0; I < Phi->getSrcSize(); ++I) {
-    if (operandNeedsGot(Phi->getSrc(I))) {
-      return true;
-    }
-  }
-
-  return false;
-}
-
-// Returns whether **any** phi in Node needs the GOT address.
-bool anyPhiInNodeNeedsGot(CfgNode *Node) {
-  for (auto &Inst : Node->getPhis()) {
-    if (phiNeedsGot(llvm::cast<InstPhi>(&Inst))) {
-      return true;
-    }
-  }
-  return false;
-}
-
-} // end of anonymous namespace
-
 void TargetARM32::prelowerPhis() {
   CfgNode *Node = Context.getNode();
-
-  if (SandboxingType == ST_Nonsfi) {
-    assert(GotPtr != nullptr);
-    if (anyPhiInNodeNeedsGot(Node)) {
-      // If any phi instruction needs the GOT address, we place a
-      //   fake-use GotPtr
-      // in Node to prevent the GotPtr's initialization from being dead code
-      // eliminated.
-      Node->getInsts().push_front(InstFakeUse::create(Func, GotPtr));
-    }
-  }
-
   PhiLowering::prelowerPhis32Bit(this, Node, Func);
 }
 
@@ -6456,18 +6198,8 @@
       }
     } else if (auto *C = llvm::dyn_cast<ConstantRelocatable>(From)) {
       Variable *Reg = makeReg(Ty, RegNum);
-      if (SandboxingType != ST_Nonsfi) {
-        _movw(Reg, C);
-        _movt(Reg, C);
-      } else {
-        auto *GotAddr = legalizeToReg(GotPtr);
-        GlobalString CGotoffName = createGotoffRelocation(C);
-        loadNamedConstantRelocatablePIC(
-            CGotoffName, Reg, [this, Reg](Variable *PC) {
-              _ldr(Reg, OperandARM32Mem::create(Func, IceType_i32, PC, Reg));
-            });
-        _add(Reg, GotAddr, Reg);
-      }
+      _movw(Reg, C);
+      _movt(Reg, C);
       return Reg;
     } else {
       assert(isScalarFloatingType(Ty));
@@ -6492,17 +6224,9 @@
       auto *CFrom = llvm::cast<Constant>(From);
       assert(CFrom->getShouldBePooled());
       Constant *Offset = Ctx->getConstantSym(0, CFrom->getLabelName());
-      Variable *BaseReg = nullptr;
-      if (SandboxingType == ST_Nonsfi) {
-        // vldr does not support the [base, index] addressing mode, so we need
-        // to legalize Offset to a register. Otherwise, we could simply
-        //   vldr dest, [got, reg(Offset)]
-        BaseReg = legalizeToReg(Offset);
-      } else {
-        BaseReg = makeReg(getPointerType());
-        _movw(BaseReg, Offset);
-        _movt(BaseReg, Offset);
-      }
+      Variable *BaseReg = makeReg(getPointerType());
+      _movw(BaseReg, Offset);
+      _movt(BaseReg, Offset);
       From = formMemoryOperand(BaseReg, Ty);
       return copyToReg(From, RegNum);
     }
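
The movw/movt pair now used unconditionally materializes a 32-bit
address in two 16-bit halves. A quick sketch of the equivalent
arithmetic, with a made-up address:

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t Addr = 0x12345678;            // hypothetical label address
    uint32_t Reg = Addr & 0xFFFF;                // movw Reg, #:lower16:...
    Reg = (Addr & 0xFFFF0000u) | (Reg & 0xFFFF); // movt Reg, #:upper16:...
    assert(Reg == Addr);
    return 0;
  }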
@@ -7059,149 +6783,12 @@
   }
 }
 
-TargetARM32::Sandboxer::Sandboxer(TargetARM32 *Target,
-                                  InstBundleLock::Option BundleOption)
-    : Target(Target), BundleOption(BundleOption) {}
-
-TargetARM32::Sandboxer::~Sandboxer() {}
-
-namespace {
-OperandARM32FlexImm *indirectBranchBicMask(Cfg *Func) {
-  constexpr uint32_t Imm8 = 0xFC; // 0xC000000F
-  constexpr uint32_t RotateAmt = 2;
-  return OperandARM32FlexImm::create(Func, IceType_i32, Imm8, RotateAmt);
-}
-
-OperandARM32FlexImm *memOpBicMask(Cfg *Func) {
-  constexpr uint32_t Imm8 = 0x0C; // 0xC0000000
-  constexpr uint32_t RotateAmt = 2;
-  return OperandARM32FlexImm::create(Func, IceType_i32, Imm8, RotateAmt);
-}
-
-static bool baseNeedsBic(Variable *Base) {
-  return Base->getRegNum() != RegARM32::Reg_r9 &&
-         Base->getRegNum() != RegARM32::Reg_sp;
-}
-} // end of anonymous namespace
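
The two masks above are encoded as ARM modified immediates. Assuming the
stored RotateAmt follows ARM's convention of rotating right by twice the
4-bit field, the encodings decode as follows (a sketch, not Subzero
code):

  #include <cassert>
  #include <cstdint>

  static uint32_t decodeFlexImm(uint32_t Imm8, uint32_t RotateAmt) {
    const uint32_t R = (2 * RotateAmt) % 32;
    return R == 0 ? Imm8 : ((Imm8 >> R) | (Imm8 << (32 - R)));
  }

  int main() {
    assert(decodeFlexImm(0xFC, 2) == 0xC000000F); // indirect-branch mask
    assert(decodeFlexImm(0x0C, 2) == 0xC0000000); // memory-access mask
    return 0;
  }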
-
-void TargetARM32::Sandboxer::createAutoBundle() {
-  Bundler = makeUnique<AutoBundle>(Target, BundleOption);
-}
-
-void TargetARM32::Sandboxer::add_sp(Operand *AddAmount) {
-  Variable *SP = Target->getPhysicalRegister(RegARM32::Reg_sp);
-  if (!Target->NeedSandboxing) {
-    Target->_add(SP, SP, AddAmount);
-    return;
-  }
-  createAutoBundle();
-  Target->_add(SP, SP, AddAmount);
-  Target->_bic(SP, SP, memOpBicMask(Target->Func));
-}
-
-void TargetARM32::Sandboxer::align_sp(size_t Alignment) {
-  Variable *SP = Target->getPhysicalRegister(RegARM32::Reg_sp);
-  if (!Target->NeedSandboxing) {
-    Target->alignRegisterPow2(SP, Alignment);
-    return;
-  }
-  createAutoBundle();
-  Target->alignRegisterPow2(SP, Alignment);
-  Target->_bic(SP, SP, memOpBicMask(Target->Func));
-}
-
-InstARM32Call *TargetARM32::Sandboxer::bl(Variable *ReturnReg,
-                                          Operand *CallTarget) {
-  if (Target->NeedSandboxing) {
-    createAutoBundle();
-    if (auto *CallTargetR = llvm::dyn_cast<Variable>(CallTarget)) {
-      Target->_bic(CallTargetR, CallTargetR,
-                   indirectBranchBicMask(Target->Func));
-    }
-  }
-  return Target->Context.insert<InstARM32Call>(ReturnReg, CallTarget);
-}
-
-void TargetARM32::Sandboxer::ldr(Variable *Dest, OperandARM32Mem *Mem,
-                                 CondARM32::Cond Pred) {
-  Variable *MemBase = Mem->getBase();
-  if (Target->NeedSandboxing && baseNeedsBic(MemBase)) {
-    createAutoBundle();
-    assert(!Mem->isRegReg());
-    Target->_bic(MemBase, MemBase, memOpBicMask(Target->Func), Pred);
-  }
-  Target->_ldr(Dest, Mem, Pred);
-}
-
-void TargetARM32::Sandboxer::ldrex(Variable *Dest, OperandARM32Mem *Mem,
-                                   CondARM32::Cond Pred) {
-  Variable *MemBase = Mem->getBase();
-  if (Target->NeedSandboxing && baseNeedsBic(MemBase)) {
-    createAutoBundle();
-    assert(!Mem->isRegReg());
-    Target->_bic(MemBase, MemBase, memOpBicMask(Target->Func), Pred);
-  }
-  Target->_ldrex(Dest, Mem, Pred);
-}
-
-void TargetARM32::Sandboxer::reset_sp(Variable *Src) {
-  Variable *SP = Target->getPhysicalRegister(RegARM32::Reg_sp);
-  if (!Target->NeedSandboxing) {
-    Target->_mov_redefined(SP, Src);
-    return;
-  }
-  createAutoBundle();
-  Target->_mov_redefined(SP, Src);
-  Target->_bic(SP, SP, memOpBicMask(Target->Func));
-}
-
-void TargetARM32::Sandboxer::ret(Variable *RetAddr, Variable *RetValue) {
-  if (Target->NeedSandboxing) {
-    createAutoBundle();
-    Target->_bic(RetAddr, RetAddr, indirectBranchBicMask(Target->Func));
-  }
-  Target->_ret(RetAddr, RetValue);
-}
-
-void TargetARM32::Sandboxer::str(Variable *Src, OperandARM32Mem *Mem,
-                                 CondARM32::Cond Pred) {
-  Variable *MemBase = Mem->getBase();
-  if (Target->NeedSandboxing && baseNeedsBic(MemBase)) {
-    createAutoBundle();
-    assert(!Mem->isRegReg());
-    Target->_bic(MemBase, MemBase, memOpBicMask(Target->Func), Pred);
-  }
-  Target->_str(Src, Mem, Pred);
-}
-
-void TargetARM32::Sandboxer::strex(Variable *Dest, Variable *Src,
-                                   OperandARM32Mem *Mem, CondARM32::Cond Pred) {
-  Variable *MemBase = Mem->getBase();
-  if (Target->NeedSandboxing && baseNeedsBic(MemBase)) {
-    createAutoBundle();
-    assert(!Mem->isRegReg());
-    Target->_bic(MemBase, MemBase, memOpBicMask(Target->Func), Pred);
-  }
-  Target->_strex(Dest, Src, Mem, Pred);
-}
-
-void TargetARM32::Sandboxer::sub_sp(Operand *SubAmount) {
-  Variable *SP = Target->getPhysicalRegister(RegARM32::Reg_sp);
-  if (!Target->NeedSandboxing) {
-    Target->_sub(SP, SP, SubAmount);
-    return;
-  }
-  createAutoBundle();
-  Target->_sub(SP, SP, SubAmount);
-  Target->_bic(SP, SP, memOpBicMask(Target->Func));
-}
-
 TargetDataARM32::TargetDataARM32(GlobalContext *Ctx)
     : TargetDataLowering(Ctx) {}
 
 void TargetDataARM32::lowerGlobals(const VariableDeclarationList &Vars,
                                    const std::string &SectionSuffix) {
-  const bool IsPIC = getFlags().getUseNonsfi();
+  const bool IsPIC = false;
   switch (getFlags().getOutFileType()) {
   case FT_Elf: {
     ELFObjectWriter *Writer = Ctx->getObjectWriter();
diff --git a/third_party/subzero/src/IceTargetLoweringARM32.h b/third_party/subzero/src/IceTargetLoweringARM32.h
index 6ae3055..d938f79 100644
--- a/third_party/subzero/src/IceTargetLoweringARM32.h
+++ b/third_party/subzero/src/IceTargetLoweringARM32.h
@@ -76,8 +76,7 @@
   }
 
   std::unique_ptr<::Ice::Assembler> createAssembler() const override {
-    const bool IsNonsfi = SandboxingType == ST_Nonsfi;
-    return makeUnique<ARM32::AssemblerARM32>(IsNonsfi);
+    return makeUnique<ARM32::AssemblerARM32>();
   }
 
   void initNodeForLowering(CfgNode *Node) override {
@@ -991,16 +990,6 @@
 
   void postLowerLegalization();
 
-  /// Manages the GotPtr variable, which is used for Nonsfi sandboxing.
-  /// @{
-  void createGotPtr();
-  void insertGotPtrInitPlaceholder();
-  VariableDeclaration *createGotRelocation(RelocOffset *AddPcReloc);
-  void materializeGotAddr(CfgNode *Node);
-  Variable *GotPtr = nullptr;
-  // TODO(jpp): use CfgLocalAllocator.
-  /// @}
-
   /// Manages the Gotoff relocations created during the function lowering. A
   /// single Gotoff relocation is created for each global variable used by the
   /// function being lowered.
@@ -1011,156 +1000,6 @@
   CfgUnorderedSet<GlobalString> KnownGotoffs;
   /// @}
 
-  /// Loads the constant relocatable Name to Register, then invokes Finish to
-  /// complete the relocatable lowering. Finish **must** use PC in its first
-  /// emitted instruction, or the relocatable in Register will contain the wrong
-  /// value.
-  //
-  // Lowered sequence:
-  //
-  // Movw:
-  //     movw Register, #:lower16:Name - (End - Movw) - 8 .
-  // Movt:
-  //     movt Register, #:upper16:Name - (End - Movt) - 8 .
-  //     PC = fake-def
-  // End:
-  //     Finish(PC)
-  //
-  // The -8 in movw/movt above is to account for the PC value that the first
-  // instruction emitted by Finish(PC) will read.
-  void
-  loadNamedConstantRelocatablePIC(GlobalString Name, Variable *Register,
-                                  std::function<void(Variable *PC)> Finish);
-
-  /// Sandboxer defines methods for ensuring that "dangerous" operations are
-  /// masked during sandboxed code emission. For regular, non-sandboxed code
-  /// emission, its methods are simple pass-throughs.
-  ///
-  /// The Sandboxer also emits BundleLock/BundleUnlock pseudo-instructions
-  /// in the constructor/destructor during sandboxed code emission. Therefore,
-  /// it is a bad idea to create an object of this type and "keep it around."
-  /// The recommended usage is:
-  ///
-  /// Sandboxer(this).<<operation>>(...);
-  ///
-  /// This usage ensures that no other instructions are inadvertently added to
-  /// the bundle.
-  class Sandboxer {
-    Sandboxer() = delete;
-    Sandboxer(const Sandboxer &) = delete;
-    Sandboxer &operator=(const Sandboxer &) = delete;
-
-  public:
-    explicit Sandboxer(
-        TargetARM32 *Target,
-        InstBundleLock::Option BundleOption = InstBundleLock::Opt_None);
-    ~Sandboxer();
-
-    /// Increments sp:
-    ///
-    ///   add sp, sp, AddAmount
-    ///   bic sp, sp, 0xc0000000
-    ///
-    /// (for the rationale, see the ARM 32-bit Sandbox Specification.)
-    void add_sp(Operand *AddAmount);
-
-    /// Emits code to align sp to the specified alignment:
-    ///
-    ///   bic/and sp, sp, Alignment
-    ///   bic sp, sp, 0xc0000000
-    void align_sp(size_t Alignment);
-
-    /// Emits a call instruction. If CallTarget is a Variable, it emits
-    ///
-    ///   bic CallTarget, CallTarget, 0xc000000f
-    ///   bl CallTarget
-    ///
-    /// Otherwise, it emits
-    ///
-    ///   bl CallTarget
-    ///
-    /// Note: in sandboxed code calls are always emitted at addresses 12 mod 16.
-    InstARM32Call *bl(Variable *ReturnReg, Operand *CallTarget);
-
-    /// Emits a load:
-    ///
-    ///   bic rBase, rBase, 0xc0000000
-    ///   ldr rDest, [rBase, #Offset]
-    ///
-    /// Exception: if rBase is r9 or sp, then the load is emitted as:
-    ///
-    ///   ldr rDest, [rBase, #Offset]
-    ///
-    /// because the NaCl ARM 32-bit Sandbox Specification guarantees they are
-    /// always valid.
-    void ldr(Variable *Dest, OperandARM32Mem *Mem, CondARM32::Cond Pred);
-
-    /// Emits a load exclusive:
-    ///
-    ///   bic rBase, rBase, 0xc0000000
-    ///   ldrex rDest, [rBase]
-    ///
-    /// Exception: if rBase is r9 or sp, then the load is emitted as:
-    ///
-    ///   ldrex rDest, [rBase]
-    ///
-    /// because the NaCl ARM 32-bit Sandbox Specification guarantees they are
-    /// always valid.
-    void ldrex(Variable *Dest, OperandARM32Mem *Mem, CondARM32::Cond Pred);
-
-    /// Resets sp to Src:
-    ///
-    ///   mov sp, Src
-    ///   bic sp, sp, 0xc0000000
-    void reset_sp(Variable *Src);
-
-    /// Emits code to return from a function:
-    ///
-    ///   bic lr, lr, 0xc000000f
-    ///   bx lr
-    void ret(Variable *RetAddr, Variable *RetValue);
-
-    /// Emits a store:
-    ///
-    ///   bic rBase, rBase, 0xc0000000
-    ///   str rSrc, [rBase, #Offset]
-    ///
-    /// Exception: if rBase is r9 or sp, then the store is emitted as:
-    ///
-    ///   str rSrc, [rBase, #Offset]
-    ///
-    /// because the NaCl ARM 32-bit Sandbox Specification guarantees they are
-    /// always valid.
-    void str(Variable *Src, OperandARM32Mem *Mem, CondARM32::Cond Pred);
-
-    /// Emits a store exclusive:
-    ///
-    ///   bic rBase, rBase, 0xc0000000
-    ///   strex rDest, rSrc, [rBase]
-    ///
-    /// Exception: if rBase is r9 or sp, then the store is emitted as:
-    ///
-    ///   strex rDest, rSrc, [rBase]
-    ///
-    /// because the NaCl ARM 32-bit Sandbox Specification guarantees they are
-    /// always valid.
-    void strex(Variable *Dest, Variable *Src, OperandARM32Mem *Mem,
-               CondARM32::Cond Pred);
-
-    /// Decrements sp:
-    ///
-    ///   sub sp, sp, SubAmount
-    ///   bic sp, sp, 0xc0000000
-    void sub_sp(Operand *SubAmount);
-
-  private:
-    TargetARM32 *const Target;
-    const InstBundleLock::Option BundleOption;
-    std::unique_ptr<AutoBundle> Bundler;
-
-    void createAutoBundle();
-  };
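
The scoping described in the class comment above relies on a C++
temporary whose destructor closes the bundle at the end of the full
expression. A generic sketch of the pattern (names hypothetical, not the
removed API):

  struct BundleScope {
    BundleScope() { /* emit bundle_lock */ }
    ~BundleScope() { /* emit bundle_unlock */ }
    void op() { /* emit the masked instruction(s) */ }
  };

  void example() {
    BundleScope().op(); // bundle closes when the temporary is destroyed
  }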
-
   class PostLoweringLegalizer {
     PostLoweringLegalizer() = delete;
     PostLoweringLegalizer(const PostLoweringLegalizer &) = delete;
@@ -1217,7 +1056,6 @@
     int32_t TempBaseOffset = 0;
   };
 
-  const bool NeedSandboxing;
   TargetARM32Features CPUFeatures;
   bool UsesFramePointer = false;
   bool NeedsStackAlignment = false;
diff --git a/third_party/subzero/src/IceTargetLoweringMIPS32.cpp b/third_party/subzero/src/IceTargetLoweringMIPS32.cpp
index 0ceb41e..94bf0c0 100644
--- a/third_party/subzero/src/IceTargetLoweringMIPS32.cpp
+++ b/third_party/subzero/src/IceTargetLoweringMIPS32.cpp
@@ -109,8 +109,7 @@
 
 } // end of anonymous namespace
 
-TargetMIPS32::TargetMIPS32(Cfg *Func)
-    : TargetLowering(Func), NeedSandboxing(SandboxingType == ST_NaCl) {}
+TargetMIPS32::TargetMIPS32(Cfg *Func) : TargetLowering(Func) {}
 
 void TargetMIPS32::assignVarStackSlots(VarList &SortedSpilledVariables,
                                        size_t SpillAreaPaddingBytes,
@@ -803,19 +802,6 @@
       Instr->setDeleted();
       return;
     }
-    case Intrinsics::NaClReadTP: {
-      if (SandboxingType == ST_NaCl) {
-        return;
-      }
-      static constexpr SizeT MaxArgs = 0;
-      assert(SandboxingType != ST_Nonsfi);
-      Operand *TargetHelper =
-          Ctx->getRuntimeHelperFunc(RuntimeHelper::H_call_read_tp);
-      Context.insert<InstCall>(MaxArgs, Dest, TargetHelper, NoTailCall,
-                               IsTargetHelperCall);
-      Instr->setDeleted();
-      return;
-    }
     case Intrinsics::Setjmp: {
       static constexpr SizeT MaxArgs = 1;
       Operand *TargetHelper =
@@ -1631,7 +1617,8 @@
   // Generate "addiu sp, sp, -TotalStackSizeBytes"
   if (TotalStackSizeBytes) {
     // Use the scratch register if needed to legalize the immediate.
-    Sandboxer(this).addiu_sp(-TotalStackSizeBytes);
+    Variable *SP = getPhysicalRegister(RegMIPS32::Reg_SP);
+    _addiu(SP, SP, -TotalStackSizeBytes);
   }
 
   Ctx->statsUpdateFrameBytes(TotalStackSizeBytes);
@@ -1650,7 +1637,7 @@
       OperandMIPS32Mem *MemoryLocation = OperandMIPS32Mem::create(
           Func, RegType, SP,
           llvm::cast<ConstantInteger32>(Ctx->getConstantInt32(StackOffset)));
-      Sandboxer(this).sw(PhysicalRegister, MemoryLocation);
+      _sw(PhysicalRegister, MemoryLocation);
     }
   }
 
@@ -1754,7 +1741,7 @@
     // use of SP before the assignment of SP=FP keeps previous SP adjustments
     // from being dead-code eliminated.
     Context.insert<InstFakeUse>(SP);
-    Sandboxer(this).reset_sp(FP);
+    _mov(SP, FP);
   }
 
   VarList::reverse_iterator RIter, END;
@@ -1779,19 +1766,8 @@
   }
 
   if (TotalStackSizeBytes) {
-    Sandboxer(this).addiu_sp(TotalStackSizeBytes);
+    _addiu(SP, SP, TotalStackSizeBytes);
   }
-  if (!getFlags().getUseSandboxing())
-    return;
-
-  Variable *RA = getPhysicalRegister(RegMIPS32::Reg_RA);
-  Variable *RetValue = nullptr;
-  if (RI->getSrcSize())
-    RetValue = llvm::cast<Variable>(RI->getSrc(0));
-
-  Sandboxer(this).ret(RA, RetValue);
-
-  RI->setDeleted();
 }
 
 Variable *TargetMIPS32::PostLoweringLegalizer::newBaseRegister(
@@ -1841,7 +1817,7 @@
       SrcR = Target->makeReg(
           IceType_f32, RegMIPS32::get64PairSecondRegNum(SrcV->getRegNum()));
     }
-    Sandboxer(Target).sw(SrcR, Addr);
+    Target->_sw(SrcR, Addr);
     if (MovInstr->isDestRedefined()) {
       Target->_set_dest_redefined();
     }
@@ -1978,15 +1954,15 @@
     const bool IsSrcGPReg = RegMIPS32::isGPRReg(SrcR->getRegNum());
     if (SrcTy == IceType_f32 && IsSrcGPReg) {
       Variable *SrcGPR = Target->makeReg(IceType_i32, RegNum);
-      Sandboxer(Target).sw(SrcGPR, Addr);
+      Target->_sw(SrcGPR, Addr);
     } else if (SrcTy == IceType_f64 && IsSrcGPReg) {
       Variable *SrcGPRHi =
           Target->makeReg(IceType_i32, RegMIPS32::get64PairFirstRegNum(RegNum));
       Variable *SrcGPRLo = Target->makeReg(
           IceType_i32, RegMIPS32::get64PairSecondRegNum(RegNum));
-      Sandboxer(Target).sw(SrcGPRHi, Addr);
+      Target->_sw(SrcGPRHi, Addr);
       OperandMIPS32Mem *AddrHi = legalizeMemOperand(TAddrHi);
-      Sandboxer(Target).sw(SrcGPRLo, AddrHi);
+      Target->_sw(SrcGPRLo, AddrHi);
     } else if (DestTy == IceType_f64 && IsSrcGPReg) {
       const auto FirstReg =
           (llvm::cast<Variable>(MovInstr->getSrc(0)))->getRegNum();
@@ -1994,11 +1970,11 @@
           (llvm::cast<Variable>(MovInstr->getSrc(1)))->getRegNum();
       Variable *SrcGPRHi = Target->makeReg(IceType_i32, FirstReg);
       Variable *SrcGPRLo = Target->makeReg(IceType_i32, SecondReg);
-      Sandboxer(Target).sw(SrcGPRLo, Addr);
+      Target->_sw(SrcGPRLo, Addr);
       OperandMIPS32Mem *AddrHi = legalizeMemOperand(TAddrHi);
-      Sandboxer(Target).sw(SrcGPRHi, AddrHi);
+      Target->_sw(SrcGPRHi, AddrHi);
     } else {
-      Sandboxer(Target).sw(SrcR, Addr);
+      Target->_sw(SrcR, Addr);
     }
 
     Target->Context.insert<InstFakeDef>(Dest);
@@ -2046,9 +2022,9 @@
               Target->Func, IceType_i32, Base,
               llvm::cast<ConstantInteger32>(
                   Target->Ctx->getConstantInt32(Offset + 4)));
-          Sandboxer(Target).lw(Reg, AddrLo);
+          Target->_lw(Reg, AddrLo);
           Target->_mov(DestLo, Reg);
-          Sandboxer(Target).lw(Reg, AddrHi);
+          Target->_lw(Reg, AddrHi);
           Target->_mov(DestHi, Reg);
         } else {
           OperandMIPS32Mem *TAddr = OperandMIPS32Mem::create(
@@ -2065,15 +2041,15 @@
           // explicitly generate lw instead of lwc1.
           if (DestTy == IceType_f32 && IsDstGPReg) {
             Variable *DstGPR = Target->makeReg(IceType_i32, RegNum);
-            Sandboxer(Target).lw(DstGPR, Addr);
+            Target->_lw(DstGPR, Addr);
           } else if (DestTy == IceType_f64 && IsDstGPReg) {
             Variable *DstGPRHi = Target->makeReg(
                 IceType_i32, RegMIPS32::get64PairFirstRegNum(RegNum));
             Variable *DstGPRLo = Target->makeReg(
                 IceType_i32, RegMIPS32::get64PairSecondRegNum(RegNum));
-            Sandboxer(Target).lw(DstGPRHi, Addr);
+            Target->_lw(DstGPRHi, Addr);
             OperandMIPS32Mem *AddrHi = legalizeMemOperand(TAddrHi);
-            Sandboxer(Target).lw(DstGPRLo, AddrHi);
+            Target->_lw(DstGPRLo, AddrHi);
           } else if (DestTy == IceType_f64 && IsDstGPReg) {
             const auto FirstReg =
                 (llvm::cast<Variable>(MovInstr->getSrc(0)))->getRegNum();
@@ -2081,11 +2057,11 @@
                 (llvm::cast<Variable>(MovInstr->getSrc(1)))->getRegNum();
             Variable *DstGPRHi = Target->makeReg(IceType_i32, FirstReg);
             Variable *DstGPRLo = Target->makeReg(IceType_i32, SecondReg);
-            Sandboxer(Target).lw(DstGPRLo, Addr);
+            Target->_lw(DstGPRLo, Addr);
             OperandMIPS32Mem *AddrHi = legalizeMemOperand(TAddrHi);
-            Sandboxer(Target).lw(DstGPRHi, AddrHi);
+            Target->_lw(DstGPRHi, AddrHi);
           } else {
-            Sandboxer(Target).lw(Dest, Addr);
+            Target->_lw(Dest, Addr);
           }
         }
         Legalized = true;
@@ -2174,7 +2150,7 @@
       }
       if (llvm::isa<InstMIPS32Sw>(CurInstr)) {
         if (auto *LegalMem = Legalizer.legalizeMemOperand(Src1M)) {
-          Sandboxer(this).sw(Src0V, LegalMem);
+          _sw(Src0V, LegalMem);
           CurInstr->setDeleted();
         }
         continue;
@@ -2195,7 +2171,7 @@
       }
       if (llvm::isa<InstMIPS32Lw>(CurInstr)) {
         if (auto *LegalMem = Legalizer.legalizeMemOperand(Src0M)) {
-          Sandboxer(this).lw(Dst, LegalMem);
+          _lw(Dst, LegalMem);
           CurInstr->setDeleted();
         }
         continue;
@@ -2349,11 +2325,6 @@
 
 #undef X
 
-  if (NeedSandboxing) {
-    Registers[RegMIPS32::Reg_T6] = false;
-    Registers[RegMIPS32::Reg_T7] = false;
-    Registers[RegMIPS32::Reg_T8] = false;
-  }
   return Registers;
 }
 
@@ -2438,10 +2409,7 @@
     } else {
       _mov(Dest, T4);
     }
-    if (OptM1)
-      _mov(SP, Dest);
-    else
-      Sandboxer(this).reset_sp(Dest);
+    _mov(SP, Dest);
     return;
   }
 }
@@ -3535,7 +3503,7 @@
   // If variable alloca is used the extra 16 bytes for argument build area
   // will be allocated on stack before a call.
   if (VariableAllocaUsed)
-    Sandboxer(this).addiu_sp(-MaxOutArgsSizeBytes);
+    _addiu(SP, SP, -MaxOutArgsSizeBytes);
 
   Inst *NewCall;
 
@@ -3546,12 +3514,11 @@
     NewCall = InstMIPS32Call::create(Func, RetReg, CallTarget);
     Context.insert(NewCall);
   } else {
-    NewCall = Sandboxer(this, InstBundleLock::Opt_AlignToEnd)
-                  .jal(ReturnReg, CallTarget);
+    NewCall = Context.insert<InstMIPS32Call>(ReturnReg, CallTarget);
   }
 
   if (VariableAllocaUsed)
-    Sandboxer(this).addiu_sp(MaxOutArgsSizeBytes);
+    _addiu(SP, SP, MaxOutArgsSizeBytes);
 
   // Insert a fake use of the stack pointer to avoid dead-code elimination of
   // the addiu instruction.
@@ -4573,10 +4540,10 @@
       constexpr CfgNode *NoTarget = nullptr;
       _sync();
       Context.insert(Retry);
-      Sandboxer(this).ll(T1, Addr);
+      _ll(T1, Addr);
       _br(NoTarget, NoTarget, T1, getZero(), Exit, CondMIPS32::Cond::NE);
       _addiu(RegAt, getZero(), 0); // Loaded value is zero here, writeback zero
-      Sandboxer(this).sc(RegAt, Addr);
+      _sc(RegAt, Addr);
       _br(NoTarget, NoTarget, RegAt, getZero(), Retry, CondMIPS32::Cond::EQ);
       Context.insert(Exit);
       _sync();
@@ -4606,11 +4573,11 @@
       _sllv(SrcMask, T5, T4); // Source mask
       auto *Addr = formMemoryOperand(T3, IceType_i32);
       Context.insert(Retry);
-      Sandboxer(this).ll(T6, Addr);
+      _ll(T6, Addr);
       _and(Tdest, T6, SrcMask);
       _br(NoTarget, NoTarget, T6, getZero(), Exit, CondMIPS32::Cond::NE);
       _addiu(RegAt, getZero(), 0); // Loaded value is zero here, writeback zero
-      Sandboxer(this).sc(RegAt, Addr);
+      _sc(RegAt, Addr);
       _br(NoTarget, NoTarget, RegAt, getZero(), Retry, CondMIPS32::Cond::EQ);
       Context.insert(Exit);
       auto *T7 = makeReg(IceType_i32);
@@ -4647,9 +4614,9 @@
       auto *RegAt = getPhysicalRegister(RegMIPS32::Reg_AT);
       _sync();
       Context.insert(Retry);
-      Sandboxer(this).ll(T1, Addr);
+      _ll(T1, Addr);
       _mov(RegAt, Val);
-      Sandboxer(this).sc(RegAt, Addr);
+      _sc(RegAt, Addr);
       _br(NoTarget, NoTarget, RegAt, getZero(), Retry, CondMIPS32::Cond::EQ);
       Context.insert<InstFakeUse>(T1); // To keep LL alive
       _sync();
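
The retry loops above follow the standard load-linked/store-conditional
shape. A single-threaded C++ model, just to show the loop structure
(loadLinked/storeConditional are stand-ins for the ll/sc instructions,
not real functions; real sc can fail spuriously, which is why the
lowering branches back to Retry):

  #include <cassert>
  #include <cstdint>

  static uint32_t loadLinked(volatile uint32_t *Addr) { return *Addr; }
  static bool storeConditional(volatile uint32_t *Addr, uint32_t V) {
    *Addr = V;
    return true; // hardware may report failure and force a retry
  }

  static uint32_t atomicExchange(volatile uint32_t *Addr, uint32_t Val) {
    uint32_t Old;
    do {
      Old = loadLinked(Addr);               // ll T1, (Addr)
    } while (!storeConditional(Addr, Val)); // sc RegAt, (Addr)
    return Old;                             // _mov(Dest, T1)
  }

  int main() {
    volatile uint32_t X = 1;
    assert(atomicExchange(&X, 7) == 1);
    assert(X == 7);
    return 0;
  }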
@@ -4681,10 +4648,10 @@
       _nor(SrcMask, getZero(), T5);
       _and(DstMask, T6, T5);
       Context.insert(Retry);
-      Sandboxer(this).ll(RegAt, Addr);
+      _ll(RegAt, Addr);
       _and(RegAt, RegAt, SrcMask);
       _or(RegAt, RegAt, DstMask);
-      Sandboxer(this).sc(RegAt, Addr);
+      _sc(RegAt, Addr);
       _br(NoTarget, NoTarget, RegAt, getZero(), Retry, CondMIPS32::Cond::EQ);
       Context.insert<InstFakeUse>(SrcMask);
       Context.insert<InstFakeUse>(DstMask);
@@ -4745,12 +4712,12 @@
       _sllv(T6, RegAt, T2);
       _sync();
       Context.insert(Retry);
-      Sandboxer(this).ll(T7, Addr);
+      _ll(T7, Addr);
       _and(T8, T7, T3);
       _br(NoTarget, NoTarget, T8, T5, Exit, CondMIPS32::Cond::NE);
       _and(RegAt, T7, T4);
       _or(T9, RegAt, T6);
-      Sandboxer(this).sc(T9, Addr);
+      _sc(T9, Addr);
       _br(NoTarget, NoTarget, getZero(), T9, Retry, CondMIPS32::Cond::EQ);
       Context.insert<InstFakeUse>(getZero());
       Context.insert(Exit);
@@ -4774,10 +4741,10 @@
       auto *ActualAddressR = legalizeToReg(ActualAddress);
       _sync();
       Context.insert(Retry);
-      Sandboxer(this).ll(T1, formMemoryOperand(ActualAddressR, DestTy));
+      _ll(T1, formMemoryOperand(ActualAddressR, DestTy));
       _br(NoTarget, NoTarget, T1, ExpectedR, Exit, CondMIPS32::Cond::NE);
       _mov(T2, NewR);
-      Sandboxer(this).sc(T2, formMemoryOperand(ActualAddressR, DestTy));
+      _sc(T2, formMemoryOperand(ActualAddressR, DestTy));
       _br(NoTarget, NoTarget, T2, getZero(), Retry, CondMIPS32::Cond::EQ);
       Context.insert<InstFakeUse>(getZero());
       Context.insert(Exit);
@@ -4832,7 +4799,7 @@
       _nor(T4, getZero(), T3);
       _sllv(T5, NewR, T2);
       Context.insert(Retry);
-      Sandboxer(this).ll(T6, formMemoryOperand(T1, DestTy));
+      _ll(T6, formMemoryOperand(T1, DestTy));
       if (Operation != Intrinsics::AtomicExchange) {
         createArithInst(Operation, RegAt, T6, T5);
         _and(RegAt, RegAt, T3);
@@ -4843,7 +4810,7 @@
       } else {
         _or(RegAt, T7, RegAt);
       }
-      Sandboxer(this).sc(RegAt, formMemoryOperand(T1, DestTy));
+      _sc(RegAt, formMemoryOperand(T1, DestTy));
       _br(NoTarget, NoTarget, RegAt, getZero(), Retry, CondMIPS32::Cond::EQ);
       Context.insert<InstFakeUse>(getZero());
       _and(RegAt, T6, T3);
@@ -4861,13 +4828,13 @@
       auto *ActualAddressR = legalizeToReg(ActualAddress);
       _sync();
       Context.insert(Retry);
-      Sandboxer(this).ll(T1, formMemoryOperand(ActualAddressR, DestTy));
+      _ll(T1, formMemoryOperand(ActualAddressR, DestTy));
       if (Operation == Intrinsics::AtomicExchange) {
         _mov(T2, NewR);
       } else {
         createArithInst(Operation, T2, T1, NewR);
       }
-      Sandboxer(this).sc(T2, formMemoryOperand(ActualAddressR, DestTy));
+      _sc(T2, formMemoryOperand(ActualAddressR, DestTy));
       _br(NoTarget, NoTarget, T2, getZero(), Retry, CondMIPS32::Cond::EQ);
       Context.insert<InstFakeUse>(getZero());
       _mov(Dest, T1);
@@ -5128,19 +5095,6 @@
     llvm::report_fatal_error("memset should have been prelowered.");
     return;
   }
-  case Intrinsics::NaClReadTP: {
-    if (SandboxingType != ST_NaCl)
-      llvm::report_fatal_error("nacl-read-tp should have been prelowered.");
-    else {
-      auto *T8 = makeReg(IceType_i32, RegMIPS32::Reg_T8);
-      Context.insert<InstFakeDef>(T8);
-      Variable *TP = legalizeToReg(OperandMIPS32Mem::create(
-          Func, getPointerType(), T8,
-          llvm::cast<ConstantInteger32>(Ctx->getConstantZero(IceType_i32))));
-      _mov(Dest, TP);
-    }
-    return;
-  }
   case Intrinsics::Setjmp: {
     llvm::report_fatal_error("setjmp should have been prelowered.");
     return;
@@ -5165,8 +5119,9 @@
     return;
   }
   case Intrinsics::Stackrestore: {
+    Variable *SP = getPhysicalRegister(RegMIPS32::Reg_SP);
     Variable *Val = legalizeToReg(Instr->getArg(0));
-    Sandboxer(this).reset_sp(Val);
+    _mov(SP, Val);
     return;
   }
   case Intrinsics::Trap: {
@@ -5710,7 +5665,7 @@
 
 void TargetDataMIPS32::lowerGlobals(const VariableDeclarationList &Vars,
                                     const std::string &SectionSuffix) {
-  const bool IsPIC = getFlags().getUseNonsfi();
+  const bool IsPIC = false;
   switch (getFlags().getOutFileType()) {
   case FT_Elf: {
     ELFObjectWriter *Writer = Ctx->getObjectWriter();
@@ -5954,9 +5909,9 @@
         OperandMIPS32Mem *Addr =
             OperandMIPS32Mem::create(Func, Ty, TReg1, Offset);
         if (Ty == IceType_f32)
-          Sandboxer(this).lwc1(TReg, Addr, RO_Lo);
+          _lwc1(TReg, Addr, RO_Lo);
         else
-          Sandboxer(this).ldc1(TReg, Addr, RO_Lo);
+          _ldc1(TReg, Addr, RO_Lo);
       }
       return copyToReg(TReg, RegNum);
     }
@@ -6069,163 +6024,11 @@
       << "nomips16\n";
   Str << "\t.set\t"
       << "noat\n";
-  if (getFlags().getUseSandboxing())
-    Str << "\t.bundle_align_mode 4\n";
 }
 
 SmallBitVector TargetMIPS32::TypeToRegisterSet[RCMIPS32_NUM];
 SmallBitVector TargetMIPS32::TypeToRegisterSetUnfiltered[RCMIPS32_NUM];
 SmallBitVector TargetMIPS32::RegisterAliases[RegMIPS32::Reg_NUM];
 
-TargetMIPS32::Sandboxer::Sandboxer(TargetMIPS32 *Target,
-                                   InstBundleLock::Option BundleOption)
-    : Target(Target), BundleOption(BundleOption) {}
-
-TargetMIPS32::Sandboxer::~Sandboxer() {}
-
-void TargetMIPS32::Sandboxer::createAutoBundle() {
-  Bundler = makeUnique<AutoBundle>(Target, BundleOption);
-}
-
-void TargetMIPS32::Sandboxer::addiu_sp(uint32_t StackOffset) {
-  Variable *SP = Target->getPhysicalRegister(RegMIPS32::Reg_SP);
-  if (!Target->NeedSandboxing) {
-    Target->_addiu(SP, SP, StackOffset);
-    return;
-  }
-  auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-  Target->Context.insert<InstFakeDef>(T7);
-  createAutoBundle();
-  Target->_addiu(SP, SP, StackOffset);
-  Target->_and(SP, SP, T7);
-}
-
-void TargetMIPS32::Sandboxer::lw(Variable *Dest, OperandMIPS32Mem *Mem) {
-  Variable *Base = Mem->getBase();
-  if (Target->NeedSandboxing && (Target->getStackReg() != Base->getRegNum()) &&
-      (RegMIPS32::Reg_T8 != Base->getRegNum())) {
-    auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-    Target->Context.insert<InstFakeDef>(T7);
-    createAutoBundle();
-    Target->_and(Base, Base, T7);
-  }
-  Target->_lw(Dest, Mem);
-  if (Target->NeedSandboxing && (Dest->getRegNum() == Target->getStackReg())) {
-    auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-    Target->Context.insert<InstFakeDef>(T7);
-    Target->_and(Dest, Dest, T7);
-  }
-}
-
-void TargetMIPS32::Sandboxer::ll(Variable *Dest, OperandMIPS32Mem *Mem) {
-  Variable *Base = Mem->getBase();
-  if (Target->NeedSandboxing && (Target->getStackReg() != Base->getRegNum())) {
-    auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-    Target->Context.insert<InstFakeDef>(T7);
-    createAutoBundle();
-    Target->_and(Base, Base, T7);
-  }
-  Target->_ll(Dest, Mem);
-  if (Target->NeedSandboxing && (Dest->getRegNum() == Target->getStackReg())) {
-    auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-    Target->Context.insert<InstFakeDef>(T7);
-    Target->_and(Dest, Dest, T7);
-  }
-}
-
-void TargetMIPS32::Sandboxer::sc(Variable *Dest, OperandMIPS32Mem *Mem) {
-  Variable *Base = Mem->getBase();
-  if (Target->NeedSandboxing && (Target->getStackReg() != Base->getRegNum())) {
-    auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-    Target->Context.insert<InstFakeDef>(T7);
-    createAutoBundle();
-    Target->_and(Base, Base, T7);
-  }
-  Target->_sc(Dest, Mem);
-}
-
-void TargetMIPS32::Sandboxer::sw(Variable *Dest, OperandMIPS32Mem *Mem) {
-  Variable *Base = Mem->getBase();
-  if (Target->NeedSandboxing && (Target->getStackReg() != Base->getRegNum())) {
-    auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-    Target->Context.insert<InstFakeDef>(T7);
-    createAutoBundle();
-    Target->_and(Base, Base, T7);
-  }
-  Target->_sw(Dest, Mem);
-}
-
-void TargetMIPS32::Sandboxer::lwc1(Variable *Dest, OperandMIPS32Mem *Mem,
-                                   RelocOp Reloc) {
-  Variable *Base = Mem->getBase();
-  if (Target->NeedSandboxing && (Target->getStackReg() != Base->getRegNum())) {
-    auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-    Target->Context.insert<InstFakeDef>(T7);
-    createAutoBundle();
-    Target->_and(Base, Base, T7);
-  }
-  Target->_lwc1(Dest, Mem, Reloc);
-  if (Target->NeedSandboxing && (Dest->getRegNum() == Target->getStackReg())) {
-    auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-    Target->Context.insert<InstFakeDef>(T7);
-    Target->_and(Dest, Dest, T7);
-  }
-}
-
-void TargetMIPS32::Sandboxer::ldc1(Variable *Dest, OperandMIPS32Mem *Mem,
-                                   RelocOp Reloc) {
-  Variable *Base = Mem->getBase();
-  if (Target->NeedSandboxing && (Target->getStackReg() != Base->getRegNum())) {
-    auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-    Target->Context.insert<InstFakeDef>(T7);
-    createAutoBundle();
-    Target->_and(Base, Base, T7);
-  }
-  Target->_ldc1(Dest, Mem, Reloc);
-  if (Target->NeedSandboxing && (Dest->getRegNum() == Target->getStackReg())) {
-    auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-    Target->Context.insert<InstFakeDef>(T7);
-    Target->_and(Dest, Dest, T7);
-  }
-}
-
-void TargetMIPS32::Sandboxer::ret(Variable *RetAddr, Variable *RetValue) {
-  if (!Target->NeedSandboxing) {
-    Target->_ret(RetAddr, RetValue);
-    return;
-  }
-  auto *T6 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T6);
-  Target->Context.insert<InstFakeDef>(T6);
-  createAutoBundle();
-  Target->_and(RetAddr, RetAddr, T6);
-  Target->_ret(RetAddr, RetValue);
-}
-
-void TargetMIPS32::Sandboxer::reset_sp(Variable *Src) {
-  Variable *SP = Target->getPhysicalRegister(RegMIPS32::Reg_SP);
-  if (!Target->NeedSandboxing) {
-    Target->_mov(SP, Src);
-    return;
-  }
-  auto *T7 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T7);
-  Target->Context.insert<InstFakeDef>(T7);
-  createAutoBundle();
-  Target->_mov(SP, Src);
-  Target->_and(SP, SP, T7);
-  Target->getContext().insert<InstFakeUse>(SP);
-}
-
-InstMIPS32Call *TargetMIPS32::Sandboxer::jal(Variable *ReturnReg,
-                                             Operand *CallTarget) {
-  if (Target->NeedSandboxing) {
-    createAutoBundle();
-    if (auto *CallTargetR = llvm::dyn_cast<Variable>(CallTarget)) {
-      auto *T6 = Target->makeReg(IceType_i32, RegMIPS32::Reg_T6);
-      Target->Context.insert<InstFakeDef>(T6);
-      Target->_and(CallTargetR, CallTargetR, T6);
-    }
-  }
-  return Target->Context.insert<InstMIPS32Call>(ReturnReg, CallTarget);
-}
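
One difference from the ARM32 sandboxer: MIPS cannot encode masks like
0xC0000000 as and-immediates (andi zero-extends a 16-bit immediate), so
the removed code kept them in the reserved registers T6 (code addresses,
used by jal/ret) and T7 (data addresses) and applied them with a plain
and. The diff only shows the masks being applied; assuming the same 1GB
data region as on ARM, the effect is:

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t T7 = ~0xC0000000u; // data mask, held in a register
    uint32_t Base = 0xDEADBEEF;       // hypothetical base address
    Base &= T7;                       // and Base, Base, T7
    assert(Base < (1u << 30));        // confined to the data region
    return 0;
  }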
-
 } // end of namespace MIPS32
 } // end of namespace Ice
diff --git a/third_party/subzero/src/IceTargetLoweringMIPS32.h b/third_party/subzero/src/IceTargetLoweringMIPS32.h
index 858879c..832f711 100644
--- a/third_party/subzero/src/IceTargetLoweringMIPS32.h
+++ b/third_party/subzero/src/IceTargetLoweringMIPS32.h
@@ -605,38 +605,6 @@
 
   void lowerArguments() override;
 
-  class Sandboxer {
-    Sandboxer() = delete;
-    Sandboxer(const Sandboxer &) = delete;
-    Sandboxer &operator=(const Sandboxer &) = delete;
-
-  public:
-    explicit Sandboxer(
-        TargetMIPS32 *Target,
-        InstBundleLock::Option BundleOption = InstBundleLock::Opt_None);
-    ~Sandboxer();
-
-    void addiu_sp(uint32_t StackOffset);
-    void lw(Variable *Dest, OperandMIPS32Mem *Mem);
-    void sw(Variable *Dest, OperandMIPS32Mem *Mem);
-    void ll(Variable *Dest, OperandMIPS32Mem *Mem);
-    void sc(Variable *Dest, OperandMIPS32Mem *Mem);
-    void lwc1(Variable *Dest, OperandMIPS32Mem *Mem, RelocOp Reloc = RO_No);
-    void ldc1(Variable *Dest, OperandMIPS32Mem *Mem, RelocOp Reloc = RO_No);
-    void ret(Variable *RetAddr, Variable *RetValue);
-    void reset_sp(Variable *Src);
-    InstMIPS32Call *jal(Variable *ReturnReg, Operand *CallTarget);
-
-  private:
-    TargetMIPS32 *const Target;
-    const InstBundleLock::Option BundleOption;
-    std::unique_ptr<AutoBundle> Bundler;
-
-    void createAutoBundle();
-  };
-
-  const bool NeedSandboxing;
-
   /// Make a pass through the SortedSpilledVariables and actually assign stack
   /// slots. SpillAreaPaddingBytes takes into account stack alignment padding.
   /// The SpillArea starts after that amount of padding. This matches the scheme
diff --git a/third_party/subzero/src/IceTargetLoweringX8632.cpp b/third_party/subzero/src/IceTargetLoweringX8632.cpp
index a1434f1..212d656 100644
--- a/third_party/subzero/src/IceTargetLoweringX8632.cpp
+++ b/third_party/subzero/src/IceTargetLoweringX8632.cpp
@@ -39,14 +39,6 @@
 
 void staticInit(::Ice::GlobalContext *Ctx) {
   ::Ice::X8632::TargetX8632::staticInit(Ctx);
-  if (Ice::getFlags().getUseNonsfi()) {
-    // In nonsfi, we need to reference the _GLOBAL_OFFSET_TABLE_ for accessing
-    // globals. The GOT is an external symbol (i.e., it is not defined in the
-    // pexe) so we need to register it as such so that ELF emission won't barf
-    // on an "unknown" symbol. The GOT is added to the External symbols list
-    // here because staticInit() is invoked in a single-thread context.
-    Ctx->getConstantExternSym(Ctx->getGlobalString(::Ice::GlobalOffsetTable));
-  }
 }
 
 bool shouldBePooled(const class ::Ice::Constant *C) {
@@ -136,14 +128,6 @@
            TargetX86Base<X8632::Traits>::Traits::RegisterSet::Reg_NUM>
     TargetX86Base<X8632::Traits>::RegisterAliases = {{}};
 
-template <>
-FixupKind TargetX86Base<X8632::Traits>::PcRelFixup =
-    TargetX86Base<X8632::Traits>::Traits::FK_PcRel;
-
-template <>
-FixupKind TargetX86Base<X8632::Traits>::AbsFixup =
-    TargetX86Base<X8632::Traits>::Traits::FK_Abs;
-
 //------------------------------------------------------------------------------
 //     __      ______  __     __  ______  ______  __  __   __  ______
 //    /\ \    /\  __ \/\ \  _ \ \/\  ___\/\  == \/\ \/\ "-.\ \/\  ___\
@@ -162,45 +146,6 @@
   _redefined(_mov(esp, NewValue));
 }
 
-Traits::X86OperandMem *TargetX8632::_sandbox_mem_reference(X86OperandMem *Mem) {
-  switch (SandboxingType) {
-  case ST_None:
-  case ST_NaCl:
-    return Mem;
-  case ST_Nonsfi: {
-    if (Mem->getIsRebased()) {
-      return Mem;
-    }
-    // For Non-SFI mode, if the Offset field is a ConstantRelocatable, we
-    // replace either Base or Index with a legalized RebasePtr. At emission
-    // time, the ConstantRelocatable will be emitted with the @GOTOFF
-    // relocation.
-    if (llvm::dyn_cast_or_null<ConstantRelocatable>(Mem->getOffset()) ==
-        nullptr) {
-      return Mem;
-    }
-    Variable *T;
-    uint16_t Shift = 0;
-    if (Mem->getIndex() == nullptr) {
-      T = Mem->getBase();
-    } else if (Mem->getBase() == nullptr) {
-      T = Mem->getIndex();
-      Shift = Mem->getShift();
-    } else {
-      llvm::report_fatal_error(
-          "Either Base or Index must be unused in Non-SFI mode");
-    }
-    Variable *RebasePtrR = legalizeToReg(RebasePtr);
-    static constexpr bool IsRebased = true;
-    return Traits::X86OperandMem::create(
-        Func, Mem->getType(), RebasePtrR, Mem->getOffset(), T, Shift,
-        Traits::X86OperandMem::DefaultSegment, IsRebased);
-  }
-  }
-  llvm::report_fatal_error("Unhandled sandboxing type: " +
-                           std::to_string(SandboxingType));
-}
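
For reference, the rebasing removed above produced an ordinary x86
[base + (index << shift) + disp] operand with the GOT pointer as base,
the displacement being emitted @GOTOFF so the linker resolves it
relative to the GOT. As plain arithmetic (all values hypothetical):

  #include <cassert>
  #include <cstdint>

  static uint32_t effectiveAddr(uint32_t Base, uint32_t Index,
                                uint16_t Shift, int32_t Disp) {
    return Base + (Index << Shift) + Disp;
  }

  int main() {
    const uint32_t Got = 0x08049000; // hypothetical RebasePtr value
    const int32_t GotOff = 0x120;    // hypothetical @GOTOFF displacement
    assert(effectiveAddr(Got, /*Index=*/4, /*Shift=*/2, GotOff) ==
           Got + (4u << 2) + 0x120);
    return 0;
  }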
-
 void TargetX8632::_sub_sp(Operand *Adjustment) {
   Variable *esp = getPhysicalRegister(Traits::RegisterSet::Reg_esp);
   _sub(esp, Adjustment);
@@ -237,114 +182,7 @@
   _pop(getPhysicalRegister(RegNum, Traits::WordType));
 }
 
-void TargetX8632::emitGetIP(CfgNode *Node) {
-  // If there is a non-deleted InstX86GetIP instruction, we need to move it to
-  // the point after the stack frame has stabilized but before
-  // register-allocated in-args are copied into their home registers.  It would
-  // be slightly faster to search for the GetIP instruction before other prolog
-  // instructions are inserted, but it's more clear to do the whole
-  // transformation in a single place.
-  Traits::Insts::GetIP *GetIPInst = nullptr;
-  if (getFlags().getUseNonsfi()) {
-    for (Inst &Instr : Node->getInsts()) {
-      if (auto *GetIP = llvm::dyn_cast<Traits::Insts::GetIP>(&Instr)) {
-        if (!Instr.isDeleted())
-          GetIPInst = GetIP;
-        break;
-      }
-    }
-  }
-  // Delete any existing InstX86GetIP instruction and reinsert it here.  Also,
-  // insert the call to the helper function and the spill to the stack, to
-  // simplify emission.
-  if (GetIPInst) {
-    GetIPInst->setDeleted();
-    Variable *Dest = GetIPInst->getDest();
-    Variable *CallDest =
-        Dest->hasReg() ? Dest
-                       : getPhysicalRegister(Traits::RegisterSet::Reg_eax);
-    auto *BeforeAddReloc = RelocOffset::create(Ctx);
-    BeforeAddReloc->setSubtract(true);
-    auto *BeforeAdd = InstX86Label::create(Func, this);
-    BeforeAdd->setRelocOffset(BeforeAddReloc);
-
-    auto *AfterAddReloc = RelocOffset::create(Ctx);
-    auto *AfterAdd = InstX86Label::create(Func, this);
-    AfterAdd->setRelocOffset(AfterAddReloc);
-
-    const RelocOffsetT ImmSize = -typeWidthInBytes(IceType_i32);
-
-    auto *GotFromPc =
-        llvm::cast<ConstantRelocatable>(Ctx->getConstantSymWithEmitString(
-            ImmSize, {AfterAddReloc, BeforeAddReloc},
-            Ctx->getGlobalString(GlobalOffsetTable), GlobalOffsetTable));
-
-    // Insert a new version of InstX86GetIP.
-    Context.insert<Traits::Insts::GetIP>(CallDest);
-
-    Context.insert(BeforeAdd);
-    _add(CallDest, GotFromPc);
-    Context.insert(AfterAdd);
-
-    // Spill the register to its home stack location if necessary.
-    if (Dest != CallDest) {
-      _mov(Dest, CallDest);
-    }
-  }
-}
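
The sequence removed here was x86-32's substitute for PC-relative
addressing: materialize the current PC into a register, then add the
linker-supplied distance from that point to _GLOBAL_OFFSET_TABLE_.
Numerically (all addresses hypothetical):

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t Pc = 0x08048100;  // value the GetIP pseudo-op produces
    const uint32_t Got = 0x08049F00; // where the linker placed the GOT
    const uint32_t Imm = Got - Pc;   // relocated immediate of the add
    assert(Pc + Imm == Got);         // _add(CallDest, GotFromPc)
    return 0;
  }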
-
-void TargetX8632::lowerIndirectJump(Variable *JumpTarget) {
-  AutoBundle _(this);
-
-  if (NeedSandboxing) {
-    const SizeT BundleSize =
-        1 << Func->getAssembler<>()->getBundleAlignLog2Bytes();
-    _and(JumpTarget, Ctx->getConstantInt32(~(BundleSize - 1)));
-  }
-
-  _jmp(JumpTarget);
-}
-
-void TargetX8632::initRebasePtr() {
-  if (SandboxingType == ST_Nonsfi) {
-    RebasePtr = Func->makeVariable(IceType_i32);
-  }
-}
-
-void TargetX8632::initSandbox() {
-  if (SandboxingType != ST_Nonsfi) {
-    return;
-  }
-  // Insert the RebasePtr assignment as the very first lowered instruction.
-  // Later, it will be moved into the right place - after the stack frame is set
-  // up but before in-args are copied into registers.
-  Context.init(Func->getEntryNode());
-  Context.setInsertPoint(Context.getCur());
-  Context.insert<Traits::Insts::GetIP>(RebasePtr);
-}
-
-bool TargetX8632::legalizeOptAddrForSandbox(OptAddr *Addr) {
-  if (Addr->Relocatable == nullptr || SandboxingType != ST_Nonsfi) {
-    return true;
-  }
-
-  if (Addr->Base == RebasePtr || Addr->Index == RebasePtr) {
-    return true;
-  }
-
-  if (Addr->Base == nullptr) {
-    Addr->Base = RebasePtr;
-    return true;
-  }
-
-  if (Addr->Index == nullptr) {
-    Addr->Index = RebasePtr;
-    Addr->Shift = 0;
-    return true;
-  }
-
-  return false;
-}
+void TargetX8632::lowerIndirectJump(Variable *JumpTarget) { _jmp(JumpTarget); }
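
Contrast with the removed sandboxed version, which first cleared the low
bits of the target so the jump could only land on a bundle boundary.
Assuming 32-byte bundles (matching the "and t, ~31" in the
sandboxed-return sequence removed below):

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t BundleSize = 32;   // 1 << getBundleAlignLog2Bytes()
    uint32_t Target = 0x1234ABCD;     // hypothetical jump target
    Target &= ~(BundleSize - 1);      // and JumpTarget, ~31
    assert(Target % BundleSize == 0); // can only enter at a bundle start
    return 0;
  }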
 
 Inst *TargetX8632::emitCallToTarget(Operand *CallTarget, Variable *ReturnReg,
                                     size_t NumVariadicFpArgs) {
@@ -353,20 +191,6 @@
   // calls, because floating point arguments are passed via vector registers,
   // whereas for x86-32, all args are passed via the stack.
 
-  std::unique_ptr<AutoBundle> Bundle;
-  if (NeedSandboxing) {
-    if (llvm::isa<Constant>(CallTarget)) {
-      Bundle = makeUnique<AutoBundle>(this, InstBundleLock::Opt_AlignToEnd);
-    } else {
-      Variable *CallTargetVar = nullptr;
-      _mov(CallTargetVar, CallTarget);
-      Bundle = makeUnique<AutoBundle>(this, InstBundleLock::Opt_AlignToEnd);
-      const SizeT BundleSize =
-          1 << Func->getAssembler<>()->getBundleAlignLog2Bytes();
-      _and(CallTargetVar, Ctx->getConstantInt32(~(BundleSize - 1)));
-      CallTarget = CallTargetVar;
-    }
-  }
   return Context.insert<Traits::Insts::Call>(ReturnReg, CallTarget);
 }
 
@@ -394,19 +218,6 @@
   }
 }
 
-void TargetX8632::emitSandboxedReturn() {
-  // Change the original ret instruction into a sandboxed return sequence.
-  // t:ecx = pop
-  // bundle_lock
-  // and t, ~31
-  // jmp *t
-  // bundle_unlock
-  // FakeUse <original_ret_operand>
-  Variable *T_ecx = makeReg(IceType_i32, Traits::RegisterSet::Reg_ecx);
-  _pop(T_ecx);
-  lowerIndirectJump(T_ecx);
-}
-
 void TargetX8632::emitStackProbe(size_t StackSizeBytes) {
 #if defined(_WIN32)
   if (StackSizeBytes >= 4096) {
diff --git a/third_party/subzero/src/IceTargetLoweringX8632.h b/third_party/subzero/src/IceTargetLoweringX8632.h
index d7a25e8..c057647 100644
--- a/third_party/subzero/src/IceTargetLoweringX8632.h
+++ b/third_party/subzero/src/IceTargetLoweringX8632.h
@@ -48,20 +48,14 @@
 protected:
   void _add_sp(Operand *Adjustment);
   void _mov_sp(Operand *NewValue);
-  Traits::X86OperandMem *_sandbox_mem_reference(X86OperandMem *Mem);
   void _sub_sp(Operand *Adjustment);
   void _link_bp();
   void _unlink_bp();
   void _push_reg(RegNumT RegNum);
   void _pop_reg(RegNumT RegNum);
 
-  void initRebasePtr();
-  void initSandbox();
-  bool legalizeOptAddrForSandbox(OptAddr *Addr);
-  void emitSandboxedReturn();
   void emitStackProbe(size_t StackSizeBytes);
   void lowerIndirectJump(Variable *JumpTarget);
-  void emitGetIP(CfgNode *Node);
   Inst *emitCallToTarget(Operand *CallTarget, Variable *ReturnReg,
                          size_t NumVariadicFpArgs = 0) override;
   Variable *moveReturnValueToRegister(Operand *Value, Type ReturnType) override;
@@ -71,13 +65,6 @@
   friend class X8632::TargetX86Base<X8632::Traits>;
 
   explicit TargetX8632(Cfg *Func) : TargetX86Base(Func) {}
-
-  Operand *createNaClReadTPSrcOperand() {
-    Constant *Zero = Ctx->getConstantZero(IceType_i32);
-    return Traits::X86OperandMem::create(Func, IceType_i32, nullptr, Zero,
-                                         nullptr, 0,
-                                         Traits::X86OperandMem::SegReg_GS);
-  }
 };
 
 // The -Wundefined-var-template warning requires to forward-declare static
@@ -101,10 +88,6 @@
 std::array<SmallBitVector,
            TargetX86Base<X8632::Traits>::Traits::RegisterSet::Reg_NUM>
     TargetX86Base<X8632::Traits>::RegisterAliases;
-
-template <> FixupKind TargetX86Base<X8632::Traits>::PcRelFixup;
-
-template <> FixupKind TargetX86Base<X8632::Traits>::AbsFixup;
 #endif // defined(__clang__)
 
 } // end of namespace X8632
diff --git a/third_party/subzero/src/IceTargetLoweringX8664.cpp b/third_party/subzero/src/IceTargetLoweringX8664.cpp
index df10d4d..54c8c8c 100644
--- a/third_party/subzero/src/IceTargetLoweringX8664.cpp
+++ b/third_party/subzero/src/IceTargetLoweringX8664.cpp
@@ -123,14 +123,6 @@
            TargetX86Base<X8664::Traits>::Traits::RegisterSet::Reg_NUM>
     TargetX86Base<X8664::Traits>::RegisterAliases = {{}};
 
-template <>
-FixupKind TargetX86Base<X8664::Traits>::PcRelFixup =
-    TargetX86Base<X8664::Traits>::Traits::FK_PcRel;
-
-template <>
-FixupKind TargetX86Base<X8664::Traits>::AbsFixup =
-    TargetX86Base<X8664::Traits>::Traits::FK_Abs;
-
 //------------------------------------------------------------------------------
 //     __      ______  __     __  ______  ______  __  __   __  ______
 //    /\ \    /\  __ \/\ \  _ \ \/\  ___\/\  == \/\ \/\ "-.\ \/\  ___\
@@ -142,43 +134,7 @@
 void TargetX8664::_add_sp(Operand *Adjustment) {
   Variable *rsp =
       getPhysicalRegister(Traits::RegisterSet::Reg_rsp, IceType_i64);
-  if (!NeedSandboxing) {
-    _add(rsp, Adjustment);
-    return;
-  }
-
-  Variable *esp =
-      getPhysicalRegister(Traits::RegisterSet::Reg_esp, IceType_i32);
-  Variable *r15 =
-      getPhysicalRegister(Traits::RegisterSet::Reg_r15, IceType_i64);
-
-  // When incrementing rsp, NaCl sandboxing requires the following sequence
-  //
-  // .bundle_start
-  // add Adjustment, %esp
-  // add %r15, %rsp
-  // .bundle_end
-  //
-  // In Subzero, even though rsp and esp alias each other, defining one does not
-  // define the other. Therefore, we must emit
-  //
-  // .bundle_start
-  // %esp = fake-def %rsp
-  // add Adjustment, %esp
-  // %rsp = fake-def %esp
-  // add %r15, %rsp
-  // .bundle_end
-  //
-  // The fake-defs ensure that the
-  //
-  // add Adjustment, %esp
-  //
-  // instruction is not DCE'd.
-  AutoBundle _(this);
-  _redefined(Context.insert<InstFakeDef>(esp, rsp));
-  _add(esp, Adjustment);
-  _redefined(Context.insert<InstFakeDef>(rsp, esp));
-  _add(rsp, r15);
+  _add(rsp, Adjustment);
 }
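
The removed x86-64 sequence maintained the NaCl invariant that rsp is
the sandbox base in r15 plus an untrusted 32-bit offset: the adjustment
happens on esp (a 32-bit write, which zero-extends into rsp), and r15 is
added back on top. With hypothetical values:

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint64_t R15 = 0x0000004400000000; // hypothetical sandbox base
    uint32_t Esp = 0x10000020;               // untrusted 32-bit offset
    Esp += 16;                               // add Adjustment, %esp
    const uint64_t Rsp = R15 + Esp;          // add %r15, %rsp
    assert((Rsp >> 32) == (R15 >> 32));      // still inside the sandbox
    return 0;
  }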
 
 void TargetX8664::_mov_sp(Operand *NewValue) {
@@ -188,71 +144,20 @@
   Variable *rsp =
       getPhysicalRegister(Traits::RegisterSet::Reg_rsp, IceType_i64);
 
-  AutoBundle _(this);
-
   _redefined(Context.insert<InstFakeDef>(esp, rsp));
   _redefined(_mov(esp, NewValue));
   _redefined(Context.insert<InstFakeDef>(rsp, esp));
-
-  if (!NeedSandboxing) {
-    return;
-  }
-
-  Variable *r15 =
-      getPhysicalRegister(Traits::RegisterSet::Reg_r15, IceType_i64);
-  _add(rsp, r15);
-}
-
-void TargetX8664::_push_rbp() {
-  assert(NeedSandboxing);
-
-  Constant *_0 = Ctx->getConstantZero(IceType_i32);
-  Variable *ebp =
-      getPhysicalRegister(Traits::RegisterSet::Reg_ebp, IceType_i32);
-  Variable *rsp =
-      getPhysicalRegister(Traits::RegisterSet::Reg_rsp, IceType_i64);
-  auto *TopOfStack = llvm::cast<X86OperandMem>(
-      legalize(X86OperandMem::create(Func, IceType_i32, rsp, _0),
-               Legal_Reg | Legal_Mem));
-
-  // Emits a sequence:
-  //
-  //   .bundle_start
-  //   push 0
-  //   mov %ebp, %(rsp)
-  //   .bundle_end
-  //
-  // to avoid leaking the upper 32-bits (i.e., the sandbox address.)
-  AutoBundle _(this);
-  _push(_0);
-  Context.insert<typename Traits::Insts::Store>(ebp, TopOfStack);
 }
 
 void TargetX8664::_link_bp() {
-  Variable *esp =
-      getPhysicalRegister(Traits::RegisterSet::Reg_esp, IceType_i32);
   Variable *rsp =
       getPhysicalRegister(Traits::RegisterSet::Reg_rsp, Traits::WordType);
-  Variable *ebp =
-      getPhysicalRegister(Traits::RegisterSet::Reg_ebp, IceType_i32);
   Variable *rbp =
       getPhysicalRegister(Traits::RegisterSet::Reg_rbp, Traits::WordType);
-  Variable *r15 =
-      getPhysicalRegister(Traits::RegisterSet::Reg_r15, Traits::WordType);
 
-  if (!NeedSandboxing) {
-    _push(rbp);
-    _mov(rbp, rsp);
-  } else {
-    _push_rbp();
+  _push(rbp);
+  _mov(rbp, rsp);
 
-    AutoBundle _(this);
-    _redefined(Context.insert<InstFakeDef>(ebp, rbp));
-    _redefined(Context.insert<InstFakeDef>(esp, rsp));
-    _mov(ebp, esp);
-    _redefined(Context.insert<InstFakeDef>(rsp, esp));
-    _add(rbp, r15);
-  }
   // Keep rbp live for late-stage liveness analysis (e.g. asm-verbose mode).
   Context.insert<InstFakeUse>(rbp);
 }
@@ -262,33 +167,13 @@
       getPhysicalRegister(Traits::RegisterSet::Reg_rsp, IceType_i64);
   Variable *rbp =
       getPhysicalRegister(Traits::RegisterSet::Reg_rbp, IceType_i64);
-  Variable *ebp =
-      getPhysicalRegister(Traits::RegisterSet::Reg_ebp, IceType_i32);
+
   // For late-stage liveness analysis (e.g. asm-verbose mode), adding a fake
   // use of rsp before the assignment of rsp=rbp keeps previous rsp
   // adjustments from being dead-code eliminated.
   Context.insert<InstFakeUse>(rsp);
-  if (!NeedSandboxing) {
-    _mov(rsp, rbp);
-    _pop(rbp);
-  } else {
-    _mov_sp(ebp);
-
-    Variable *r15 =
-        getPhysicalRegister(Traits::RegisterSet::Reg_r15, IceType_i64);
-    Variable *rcx =
-        getPhysicalRegister(Traits::RegisterSet::Reg_rcx, IceType_i64);
-    Variable *ecx =
-        getPhysicalRegister(Traits::RegisterSet::Reg_ecx, IceType_i32);
-
-    _pop(rcx);
-    Context.insert<InstFakeDef>(ecx, rcx);
-    AutoBundle _(this);
-    _mov(ebp, ecx);
-
-    _redefined(Context.insert<InstFakeDef>(rbp, ebp));
-    _add(rbp, r15);
-  }
+  _mov(rsp, rbp);
+  _pop(rbp);
 }
 
 void TargetX8664::_push_reg(RegNumT RegNum) {
@@ -302,10 +187,8 @@
         Ctx->getConstantInt32(16)); // TODO(capn): accumulate all the offsets
                                     // and adjust the stack pointer once.
     _storep(reg, address);
-  } else if (RegNum != Traits::RegisterSet::Reg_rbp || !NeedSandboxing) {
-    _push(getPhysicalRegister(RegNum, Traits::WordType));
   } else {
-    _push_rbp();
+    _push(getPhysicalRegister(RegNum, Traits::WordType));
   }
 }
 
@@ -325,315 +208,22 @@
   }
 }
 
-void TargetX8664::emitGetIP(CfgNode *Node) {
-  // No IP base register is needed on X86-64.
-  (void)Node;
-}
-
-namespace {
-bool isAssignedToRspOrRbp(const Variable *Var) {
-  if (Var == nullptr) {
-    return false;
-  }
-
-  if (Var->isRematerializable()) {
-    return true;
-  }
-
-  if (!Var->hasReg()) {
-    return false;
-  }
-
-  const auto RegNum = Var->getRegNum();
-  if ((RegNum == Traits::RegisterSet::Reg_rsp) ||
-      (RegNum == Traits::RegisterSet::Reg_rbp)) {
-    return true;
-  }
-
-  return false;
-}
-} // end of anonymous namespace
-
-Traits::X86OperandMem *TargetX8664::_sandbox_mem_reference(X86OperandMem *Mem) {
-  if (SandboxingType == ST_None) {
-    return Mem;
-  }
-
-  if (SandboxingType == ST_Nonsfi) {
-    llvm::report_fatal_error(
-        "_sandbox_mem_reference not implemented for nonsfi");
-  }
-
-  // In x86_64-nacl, all memory references are relative to a base register
-  // (%r15, %rsp, %rbp, or %rip).
-
-  Variable *Base = Mem->getBase();
-  Variable *Index = Mem->getIndex();
-  uint16_t Shift = 0;
-  Variable *ZeroReg = RebasePtr;
-  Constant *Offset = Mem->getOffset();
-  Variable *T = nullptr;
-
-  bool AbsoluteAddress = false;
-  if (Base == nullptr && Index == nullptr) {
-    if (llvm::isa<ConstantRelocatable>(Offset)) {
-      // Mem is RIP-relative. There's no need to rebase it.
-      return Mem;
-    }
-    // Offset is an absolute address, so we need to emit
-    //   Offset(%r15)
-    AbsoluteAddress = true;
-  }
-
-  if (Mem->getIsRebased()) {
-    // If Mem.IsRebased, then we don't need to update Mem, as it's already been
-    // updated to contain a reference to one of %rsp, %rbp, or %r15.
-    // We don't return early because we still need to zero extend Index.
-    assert(ZeroReg == Base || AbsoluteAddress || isAssignedToRspOrRbp(Base));
-    if (!AbsoluteAddress) {
-      // If Mem is an absolute address, no need to update ZeroReg (which is
-      // already set to %r15.)
-      ZeroReg = Base;
-    }
-    if (Index != nullptr) {
-      T = makeReg(IceType_i32);
-      _mov(T, Index);
-      Shift = Mem->getShift();
-    }
-  } else {
-    if (Base != nullptr) {
-      // If Base is a valid base pointer we don't need to use the RebasePtr. By
-      // doing this we might save us the need to zero extend the memory operand.
-      if (isAssignedToRspOrRbp(Base)) {
-        ZeroReg = Base;
-      } else {
-        T = Base;
-      }
-    }
-
-    if (Index != nullptr) {
-      assert(!Index->isRematerializable());
-      // If Index is not nullptr, it is mandatory that T is a nullptr.
-      // Otherwise, the lowering generated a memory operand with two registers.
-      // Note that Base might still be non-nullptr, but it must be a valid
-      // base register.
-      if (T != nullptr) {
-        llvm::report_fatal_error("memory reference contains base and index.");
-      }
-      // If the Index is not shifted, and it is a Valid Base, and the ZeroReg is
-      // still RebasePtr, then we do ZeroReg = Index, and hopefully prevent the
-      // need to zero-extend the memory operand (which may still happen -- see
-      // NeedLea below.)
-      if (Shift == 0 && isAssignedToRspOrRbp(Index) && ZeroReg == RebasePtr) {
-        ZeroReg = Index;
-      } else {
-        T = Index;
-        Shift = Mem->getShift();
-      }
-    }
-  }
-
-  // NeedsLea is a flag indicating whether Mem needs to be materialized to a GPR
-  // prior to being used. A LEA is needed if Mem.Offset is a constant
-  // relocatable with a nonzero offset, or if Mem.Offset is a nonzero immediate;
-  // but only when the address mode contains a "user" register other than the
-  // rsp/rbp/r15 base. In both these cases, the LEA is needed to ensure the
-  // sandboxed memory operand will only use the lower 32-bits of T+Offset.
-  bool NeedsLea = false;
-  if (!Mem->getIsRebased()) {
-    bool IsOffsetZero = false;
-    if (Offset == nullptr) {
-      IsOffsetZero = true;
-    } else if (const auto *CR = llvm::dyn_cast<ConstantRelocatable>(Offset)) {
-      IsOffsetZero = (CR->getOffset() == 0);
-    } else if (const auto *Imm = llvm::dyn_cast<ConstantInteger32>(Offset)) {
-      IsOffsetZero = (Imm->getValue() == 0);
-    } else {
-      llvm::report_fatal_error("Unexpected Offset type.");
-    }
-    if (!IsOffsetZero) {
-      if (Base != nullptr && Base != ZeroReg)
-        NeedsLea = true;
-      if (Index != nullptr && Index != ZeroReg)
-        NeedsLea = true;
-    }
-  }
-
-  RegNumT RegNum, RegNum32;
-  if (T != nullptr) {
-    if (T->hasReg()) {
-      RegNum = Traits::getGprForType(IceType_i64, T->getRegNum());
-      RegNum32 = Traits::getGprForType(IceType_i32, RegNum);
-      // At this point, if T was assigned to rsp/rbp, then we would have already
-      // made this the ZeroReg.
-      assert(RegNum != Traits::RegisterSet::Reg_rsp);
-      assert(RegNum != Traits::RegisterSet::Reg_rbp);
-    }
-
-    switch (T->getType()) {
-    default:
-      llvm::report_fatal_error("Mem pointer should be a 32-bit GPR.");
-    case IceType_i64:
-      // Even though "default:" would also catch T.Type == IceType_i64, an
-      // explicit 'case IceType_i64' shows that memory operands are always
-      // supposed to be 32-bits.
-      llvm::report_fatal_error("Mem pointer should not be a 64-bit GPR.");
-    case IceType_i32: {
-      Variable *T64 = makeReg(IceType_i64, RegNum);
-      auto *Movzx = _movzx(T64, T);
-      if (!NeedsLea) {
-        // This movzx is only needed when Mem does not need to be lea'd into a
-        // temporary. If an lea is going to be emitted, then eliding this movzx
-        // is safe because the emitted lea will write a 32-bit result --
-        // implicitly zero-extended to 64-bit.
-        Movzx->setMustKeep();
-      }
-      T = T64;
-    } break;
-    }
-  }
-
-  if (NeedsLea) {
-    Variable *NewT = makeReg(IceType_i32, RegNum32);
-    Variable *Base = T;
-    Variable *Index = T;
-    static constexpr bool NotRebased = false;
-    if (Shift == 0) {
-      Index = nullptr;
-    } else {
-      Base = nullptr;
-    }
-    _lea(NewT, Traits::X86OperandMem::create(
-                   Func, Mem->getType(), Base, Offset, Index, Shift,
-                   Traits::X86OperandMem::DefaultSegment, NotRebased));
-
-    T = makeReg(IceType_i64, RegNum);
-    _movzx(T, NewT);
-    Shift = 0;
-    Offset = nullptr;
-  }
-
-  static constexpr bool IsRebased = true;
-  return Traits::X86OperandMem::create(
-      Func, Mem->getType(), ZeroReg, Offset, T, Shift,
-      Traits::X86OperandMem::DefaultSegment, IsRebased);
-}
-
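
A note on the invariant the deleted _sandbox_mem_reference maintained: every
non-RIP-relative access had to be rebased so that only the low 32 bits of the
untrusted address components contribute, with %r15 (or a trusted %rsp/%rbp)
supplying the 64-bit sandbox base. A minimal arithmetic model of the lea+movzx
path, using plain integers instead of Subzero operands (the function and
parameter names here are illustrative only):

    #include <cstdint>

    // Models "Offset(ZeroReg, T, Shift)" after rebasing: the untrusted parts
    // (index and offset) are folded into a 32-bit value, zero-extended, and
    // added to the trusted base, so the access stays within 4 GiB of it.
    uint64_t sandboxedAddress(uint64_t ZeroRegBase, // %r15, %rsp, or %rbp
                              uint32_t Index, unsigned Shift, int32_t Offset) {
      const uint32_t Low32 = (Index << Shift) + static_cast<uint32_t>(Offset);
      return ZeroRegBase + Low32;
    }
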
 void TargetX8664::_sub_sp(Operand *Adjustment) {
   Variable *rsp =
       getPhysicalRegister(Traits::RegisterSet::Reg_rsp, Traits::WordType);
 
-  if (NeedSandboxing) {
-    Variable *esp =
-        getPhysicalRegister(Traits::RegisterSet::Reg_esp, IceType_i32);
-    Variable *r15 =
-        getPhysicalRegister(Traits::RegisterSet::Reg_r15, IceType_i64);
-
-    // .bundle_start
-    // sub Adjustment, %esp
-    // add %r15, %rsp
-    // .bundle_end
-    AutoBundle _(this);
-    _redefined(Context.insert<InstFakeDef>(esp, rsp));
-    _sub(esp, Adjustment);
-    _redefined(Context.insert<InstFakeDef>(rsp, esp));
-    _add(rsp, r15);
-  } else {
-    _sub(rsp, Adjustment);
-  }
+  _sub(rsp, Adjustment);
 
   // Add a fake use of the stack pointer to prevent the stack pointer
   // adjustment from being dead-code eliminated in a function that doesn't
   // return.
   Context.insert<InstFakeUse>(rsp);
 }
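
The fake-use idiom right above (and throughout this change) exists because
liveness-based dead-code elimination deletes any def that is never read. A
self-contained model of that interaction, with ad-hoc types standing in for
Subzero's Inst and Variable classes:

    #include <set>
    #include <string>
    #include <vector>

    struct Inst {
      std::vector<std::string> Defs, Uses;
      bool HasSideEffects = false;
      bool Deleted = false;
    };

    // Backward liveness scan: an instruction survives if it has side effects
    // or one of its defs is read later. A fake use of rsp counts as a read,
    // so a preceding stack-pointer adjustment is never considered dead.
    void eliminateDeadInsts(std::vector<Inst> &Insts) {
      std::set<std::string> Live;
      for (auto It = Insts.rbegin(); It != Insts.rend(); ++It) {
        bool Needed = It->HasSideEffects;
        for (const auto &D : It->Defs)
          Needed = Needed || Live.count(D) != 0;
        if (!Needed) {
          It->Deleted = true;
          continue;
        }
        for (const auto &D : It->Defs)
          Live.erase(D); // a def kills liveness above this point
        for (const auto &U : It->Uses)
          Live.insert(U);
      }
    }
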
 
-void TargetX8664::initRebasePtr() {
-  switch (SandboxingType) {
-  case ST_Nonsfi:
-    // Probably no implementation is needed, but error to be safe for now.
-    llvm::report_fatal_error(
-        "initRebasePtr() is not yet implemented on x32-nonsfi.");
-  case ST_NaCl:
-    RebasePtr = getPhysicalRegister(Traits::RegisterSet::Reg_r15, IceType_i64);
-    break;
-  case ST_None:
-    // nothing.
-    break;
-  }
-}
-
-void TargetX8664::initSandbox() {
-  assert(SandboxingType == ST_NaCl);
-  Context.init(Func->getEntryNode());
-  Context.setInsertPoint(Context.getCur());
-  Variable *r15 =
-      getPhysicalRegister(Traits::RegisterSet::Reg_r15, IceType_i64);
-  Context.insert<InstFakeDef>(r15);
-  Context.insert<InstFakeUse>(r15);
-}
-
-namespace {
-bool isRematerializable(const Variable *Var) {
-  return Var != nullptr && Var->isRematerializable();
-}
-} // end of anonymous namespace
-
-bool TargetX8664::legalizeOptAddrForSandbox(OptAddr *Addr) {
-  if (SandboxingType == ST_Nonsfi) {
-    llvm::report_fatal_error("Nonsfi not yet implemented for x8664.");
-  }
-
-  if (isRematerializable(Addr->Base)) {
-    if (Addr->Index == RebasePtr) {
-      Addr->Index = nullptr;
-      Addr->Shift = 0;
-    }
-    return true;
-  }
-
-  if (isRematerializable(Addr->Index)) {
-    if (Addr->Base == RebasePtr) {
-      Addr->Base = nullptr;
-    }
-    return true;
-  }
-
-  assert(Addr->Base != RebasePtr && Addr->Index != RebasePtr);
-
-  if (Addr->Base == nullptr) {
-    return true;
-  }
-
-  if (Addr->Index == nullptr) {
-    return true;
-  }
-
-  return false;
-}
-
 void TargetX8664::lowerIndirectJump(Variable *JumpTarget) {
-  std::unique_ptr<AutoBundle> Bundler;
-
-  if (!NeedSandboxing) {
-    if (JumpTarget->getType() != IceType_i64) {
-      Variable *T = makeReg(IceType_i64);
-      _movzx(T, JumpTarget);
-      JumpTarget = T;
-    }
-  } else {
-    Variable *T = makeReg(IceType_i32);
-    Variable *T64 = makeReg(IceType_i64);
-    Variable *r15 =
-        getPhysicalRegister(Traits::RegisterSet::Reg_r15, IceType_i64);
-
-    _mov(T, JumpTarget);
-    Bundler = makeUnique<AutoBundle>(this);
-    const SizeT BundleSize =
-        1 << Func->getAssembler<>()->getBundleAlignLog2Bytes();
-    _and(T, Ctx->getConstantInt32(~(BundleSize - 1)));
-    _movzx(T64, T);
-    _add(T64, r15);
-    JumpTarget = T64;
+  if (JumpTarget->getType() != IceType_i64) {
+    Variable *T = makeReg(IceType_i64);
+    _movzx(T, JumpTarget);
+    JumpTarget = T;
   }
 
   _jmp(JumpTarget);
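
The deleted branch above implemented NaCl's control-flow rule for x86-64: an
indirect jump target is truncated to 32 bits, masked down to a bundle
boundary, and rebased on %r15. The same computation as standalone arithmetic
(a 32-byte bundle is assumed here, i.e. 1 << getBundleAlignLog2Bytes()):

    #include <cstdint>

    uint64_t maskedJumpTarget(uint32_t Target, uint64_t R15SandboxBase) {
      constexpr uint32_t BundleSize = 32;                  // assumed
      const uint32_t Aligned = Target & ~(BundleSize - 1); // the _and above
      return R15SandboxBase + Aligned;                     // _movzx + _add
    }

A jump through this value can only land on a bundle boundary inside the
sandbox, which is the property NaCl's validator relied on.
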
@@ -642,109 +232,44 @@
 Inst *TargetX8664::emitCallToTarget(Operand *CallTarget, Variable *ReturnReg,
                                     size_t NumVariadicFpArgs) {
   Inst *NewCall = nullptr;
-  auto *CallTargetR = llvm::dyn_cast<Variable>(CallTarget);
-  if (NeedSandboxing) {
-    // In NaCl sandbox, calls are replaced by a push/jmp pair:
-    //
-    //     push .after_call
-    //     jmp CallTarget
-    //     .align bundle_size
-    // after_call:
-    //
-    // In order to emit this sequence, we need a temporary label ("after_call",
-    // in this example.)
-    //
-    // The operand to push is a ConstantRelocatable. The easy way to implement
-    // this sequence is to create a ConstantRelocatable(0, "after_call"), but
-    // this ends up creating more relocations for the linker to resolve.
-    // Therefore, we create a ConstantRelocatable from the name of the function
-    // being compiled (i.e., ConstantRelocatable(after_call - Func, Func).
-    //
-    // By default, ConstantRelocatables are emitted (in textual output) as
-    //
-    //  ConstantName + Offset
-    //
-    // ReturnReloc has an offset that is only known during binary emission.
-    // Therefore, we set a custom emit string for ReturnReloc that will be
-    // used instead. In this particular case, the code will be emitted as
-    //
-    //  push .after_call
-    InstX86Label *ReturnAddress = InstX86Label::create(Func, this);
-    auto *ReturnRelocOffset = RelocOffset::create(Func->getAssembler());
-    ReturnAddress->setRelocOffset(ReturnRelocOffset);
-    constexpr RelocOffsetT NoFixedOffset = 0;
-    const std::string EmitString =
-        BuildDefs::dump() ? ReturnAddress->getLabelName().toString() : "";
-    auto *ReturnReloc = ConstantRelocatable::create(
-        Func->getAssembler(), IceType_i32,
-        RelocatableTuple(NoFixedOffset, {ReturnRelocOffset},
-                         Func->getFunctionName(), EmitString));
-    /* AutoBundle scoping */ {
-      std::unique_ptr<AutoBundle> Bundler;
-      if (CallTargetR == nullptr) {
-        Bundler = makeUnique<AutoBundle>(this, InstBundleLock::Opt_PadToEnd);
-        _push(ReturnReloc);
-      } else {
-        Variable *T = makeReg(IceType_i32);
-        Variable *T64 = makeReg(IceType_i64);
-        Variable *r15 =
-            getPhysicalRegister(Traits::RegisterSet::Reg_r15, IceType_i64);
 
-        _mov(T, CallTargetR);
-        Bundler = makeUnique<AutoBundle>(this, InstBundleLock::Opt_PadToEnd);
-        _push(ReturnReloc);
-        const SizeT BundleSize =
-            1 << Func->getAssembler<>()->getBundleAlignLog2Bytes();
-        _and(T, Ctx->getConstantInt32(~(BundleSize - 1)));
-        _movzx(T64, T);
-        _add(T64, r15);
-        CallTarget = T64;
-      }
-      NewCall = Context.insert<Traits::Insts::Jmp>(CallTarget);
-    }
-    if (ReturnReg != nullptr) {
-      Context.insert<InstFakeDef>(ReturnReg);
-    }
+  if (CallTarget->getType() == IceType_i64) {
+    // x86-64 does not support 64-bit direct calls, so write the value to a
+    // register and make an indirect call for Constant call targets.
+    RegNumT TargetReg = {};
 
-    Context.insert(ReturnAddress);
-  } else {
-    if (CallTarget->getType() == IceType_i64) {
-      // x86-64 does not support 64-bit direct calls, so write the value to a
-      // register and make an indirect call for Constant call targets.
-      RegNumT TargetReg = {};
-
-      // System V: force r11 when calling a variadic function so that rax isn't
-      // used, since rax stores the number of FP args (see NumVariadicFpArgs
-      // usage below).
+    // System V: force r11 when calling a variadic function so that rax isn't
+    // used, since rax stores the number of FP args (see NumVariadicFpArgs
+    // usage below).
 #if !defined(_WIN64)
-      if (NumVariadicFpArgs > 0)
-        TargetReg = Traits::RegisterSet::Reg_r11;
+    if (NumVariadicFpArgs > 0)
+      TargetReg = Traits::RegisterSet::Reg_r11;
 #endif
 
-      if (llvm::isa<Constant>(CallTarget)) {
-        Variable *T = makeReg(IceType_i64, TargetReg);
-        _mov(T, CallTarget);
-        CallTarget = T;
-      } else if (llvm::isa<Variable>(CallTarget)) {
-        Operand *T = legalizeToReg(CallTarget, TargetReg);
-        CallTarget = T;
-      }
+    if (llvm::isa<Constant>(CallTarget)) {
+      Variable *T = makeReg(IceType_i64, TargetReg);
+      _mov(T, CallTarget);
+      CallTarget = T;
+    } else if (llvm::isa<Variable>(CallTarget)) {
+      Operand *T = legalizeToReg(CallTarget, TargetReg);
+      CallTarget = T;
     }
-
-    // System V: store number of FP args in RAX for variadic calls
-#if !defined(_WIN64)
-    if (NumVariadicFpArgs > 0) {
-      // Store number of FP args (stored in XMM registers) in RAX for variadic
-      // calls
-      auto *NumFpArgs = Ctx->getConstantInt64(NumVariadicFpArgs);
-      Variable *NumFpArgsReg =
-          legalizeToReg(NumFpArgs, Traits::RegisterSet::Reg_rax);
-      Context.insert<InstFakeUse>(NumFpArgsReg);
-    }
-#endif
-
-    NewCall = Context.insert<Traits::Insts::Call>(ReturnReg, CallTarget);
   }
+
+  // System V: store number of FP args in RAX for variadic calls
+#if !defined(_WIN64)
+  if (NumVariadicFpArgs > 0) {
+    // Store number of FP args (stored in XMM registers) in RAX for variadic
+    // calls
+    auto *NumFpArgs = Ctx->getConstantInt64(NumVariadicFpArgs);
+    Variable *NumFpArgsReg =
+        legalizeToReg(NumFpArgs, Traits::RegisterSet::Reg_rax);
+    Context.insert<InstFakeUse>(NumFpArgsReg);
+  }
+#endif
+
+  NewCall = Context.insert<Traits::Insts::Call>(ReturnReg, CallTarget);
+
   return NewCall;
 }
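
The RAX handling kept above follows the System V AMD64 ABI: before a call to
a variadic function, %al must hold an upper bound on the number of vector
registers used to pass arguments. An ordinary C++ call site where a System V
compiler emits the same bookkeeping:

    #include <cstdio>

    int main() {
      double X = 1.5;
      // One FP argument travels in %xmm0, so the compiler materializes
      // "mov $1, %eax" before "call printf" -- the same value Subzero pins
      // in rax via legalizeToReg + InstFakeUse above.
      return std::printf("%f\n", X) < 0;
    }
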
 
@@ -761,27 +286,6 @@
   }
 }
 
-void TargetX8664::emitSandboxedReturn() {
-  Variable *T_rcx = makeReg(IceType_i64, Traits::RegisterSet::Reg_rcx);
-  Variable *T_ecx = makeReg(IceType_i32, Traits::RegisterSet::Reg_ecx);
-  _pop(T_rcx);
-  _mov(T_ecx, T_rcx);
-  // lowerIndirectJump(T_ecx);
-  Variable *r15 =
-      getPhysicalRegister(Traits::RegisterSet::Reg_r15, IceType_i64);
-
-  /* AutoBundle scoping */ {
-    AutoBundle _(this);
-    const SizeT BundleSize =
-        1 << Func->getAssembler<>()->getBundleAlignLog2Bytes();
-    _and(T_ecx, Ctx->getConstantInt32(~(BundleSize - 1)));
-    Context.insert<InstFakeDef>(T_rcx, T_ecx);
-    _add(T_rcx, r15);
-
-    _jmp(T_rcx);
-  }
-}
-
 void TargetX8664::emitStackProbe(size_t StackSizeBytes) {
 #if defined(_WIN64)
   // Mirroring the behavior of MSVC here, which emits a _chkstk when locals are
diff --git a/third_party/subzero/src/IceTargetLoweringX8664.h b/third_party/subzero/src/IceTargetLoweringX8664.h
index a56c1a4..a392934 100644
--- a/third_party/subzero/src/IceTargetLoweringX8664.h
+++ b/third_party/subzero/src/IceTargetLoweringX8664.h
@@ -48,20 +48,14 @@
 protected:
   void _add_sp(Operand *Adjustment);
   void _mov_sp(Operand *NewValue);
-  Traits::X86OperandMem *_sandbox_mem_reference(X86OperandMem *Mem);
   void _sub_sp(Operand *Adjustment);
   void _link_bp();
   void _unlink_bp();
   void _push_reg(RegNumT RegNum);
   void _pop_reg(RegNumT RegNum);
 
-  void initRebasePtr();
-  void initSandbox();
-  bool legalizeOptAddrForSandbox(OptAddr *Addr);
-  void emitSandboxedReturn();
   void emitStackProbe(size_t StackSizeBytes);
   void lowerIndirectJump(Variable *JumpTarget);
-  void emitGetIP(CfgNode *Node);
   Inst *emitCallToTarget(Operand *CallTarget, Variable *ReturnReg,
                          size_t NumVariadicFpArgs = 0) override;
   Variable *moveReturnValueToRegister(Operand *Value, Type ReturnType) override;
@@ -71,15 +65,6 @@
   friend class X8664::TargetX86Base<X8664::Traits>;
 
   explicit TargetX8664(Cfg *Func) : TargetX86Base(Func) {}
-
-  void _push_rbp();
-
-  Operand *createNaClReadTPSrcOperand() {
-    Variable *TDB = makeReg(IceType_i32);
-    InstCall *Call = makeHelperCall(RuntimeHelper::H_call_read_tp, TDB, 0);
-    lowerCall(Call);
-    return TDB;
-  }
 };
 
 // The -Wundefined-var-template warning requires us to forward-declare static
@@ -103,10 +88,6 @@
 std::array<SmallBitVector,
            TargetX86Base<X8664::Traits>::Traits::RegisterSet::Reg_NUM>
     TargetX86Base<X8664::Traits>::RegisterAliases;
-
-template <> FixupKind TargetX86Base<X8664::Traits>::PcRelFixup;
-
-template <> FixupKind TargetX86Base<X8664::Traits>::AbsFixup;
 #endif
 
 } // end of namespace X8664
diff --git a/third_party/subzero/src/IceTargetLoweringX8664Traits.h b/third_party/subzero/src/IceTargetLoweringX8664Traits.h
index 0d94a40..d2bf600 100644
--- a/third_party/subzero/src/IceTargetLoweringX8664Traits.h
+++ b/third_party/subzero/src/IceTargetLoweringX8664Traits.h
@@ -547,7 +547,6 @@
 #undef X
     };
 
-    const bool NeedSandboxing = Flags.getUseSandboxing();
     for (SizeT ii = 0; ii < llvm::array_lengthof(X8664RegTable); ++ii) {
       const auto &Entry = X8664RegTable[ii];
       // Even though the register is disabled for register allocation, it might
@@ -561,11 +560,7 @@
       }
 
       (*RegisterAliases)[Entry.Val].set(Entry.Val);
-      const bool DisabledRegister =
-          NeedSandboxing && Entry.IsReservedWhenSandboxing;
-      if (DisabledRegister) {
-        continue;
-      }
+
       (IntegerRegistersI64)[Entry.Val] = Entry.Is64;
       (IntegerRegistersI32)[Entry.Val] = Entry.Is32;
       (IntegerRegistersI16)[Entry.Val] = Entry.Is16;
@@ -606,28 +601,25 @@
                                        TargetLowering::RegSetMask Exclude) {
     SmallBitVector Registers(RegisterSet::Reg_NUM);
 
-    const bool NeedSandboxing = Flags.getUseSandboxing();
 #define X(val, encode, name, base, scratch, preserved, stackptr, frameptr,     \
           sboxres, isGPR, is64, is32, is16, is8, isXmm, is64To8, is32To8,      \
           is16To8, isTrunc8Rcvr, isAhRcvr, aliases)                            \
-  if (!NeedSandboxing || !(sboxres)) {                                         \
-    if (scratch && (Include & ::Ice::TargetLowering::RegSet_CallerSave))       \
-      Registers[RegisterSet::val] = true;                                      \
-    if (preserved && (Include & ::Ice::TargetLowering::RegSet_CalleeSave))     \
-      Registers[RegisterSet::val] = true;                                      \
-    if (stackptr && (Include & ::Ice::TargetLowering::RegSet_StackPointer))    \
-      Registers[RegisterSet::val] = true;                                      \
-    if (frameptr && (Include & ::Ice::TargetLowering::RegSet_FramePointer))    \
-      Registers[RegisterSet::val] = true;                                      \
-    if (scratch && (Exclude & ::Ice::TargetLowering::RegSet_CallerSave))       \
-      Registers[RegisterSet::val] = false;                                     \
-    if (preserved && (Exclude & ::Ice::TargetLowering::RegSet_CalleeSave))     \
-      Registers[RegisterSet::val] = false;                                     \
-    if (stackptr && (Exclude & ::Ice::TargetLowering::RegSet_StackPointer))    \
-      Registers[RegisterSet::val] = false;                                     \
-    if (frameptr && (Exclude & ::Ice::TargetLowering::RegSet_FramePointer))    \
-      Registers[RegisterSet::val] = false;                                     \
-  }
+  if (scratch && (Include & ::Ice::TargetLowering::RegSet_CallerSave))         \
+    Registers[RegisterSet::val] = true;                                        \
+  if (preserved && (Include & ::Ice::TargetLowering::RegSet_CalleeSave))       \
+    Registers[RegisterSet::val] = true;                                        \
+  if (stackptr && (Include & ::Ice::TargetLowering::RegSet_StackPointer))      \
+    Registers[RegisterSet::val] = true;                                        \
+  if (frameptr && (Include & ::Ice::TargetLowering::RegSet_FramePointer))      \
+    Registers[RegisterSet::val] = true;                                        \
+  if (scratch && (Exclude & ::Ice::TargetLowering::RegSet_CallerSave))         \
+    Registers[RegisterSet::val] = false;                                       \
+  if (preserved && (Exclude & ::Ice::TargetLowering::RegSet_CalleeSave))       \
+    Registers[RegisterSet::val] = false;                                       \
+  if (stackptr && (Exclude & ::Ice::TargetLowering::RegSet_StackPointer))      \
+    Registers[RegisterSet::val] = false;                                       \
+  if (frameptr && (Exclude & ::Ice::TargetLowering::RegSet_FramePointer))      \
+    Registers[RegisterSet::val] = false;
 
     REGX8664_TABLE
 
diff --git a/third_party/subzero/src/IceTargetLoweringX86Base.h b/third_party/subzero/src/IceTargetLoweringX86Base.h
index 364a797..805465b 100644
--- a/third_party/subzero/src/IceTargetLoweringX86Base.h
+++ b/third_party/subzero/src/IceTargetLoweringX86Base.h
@@ -40,11 +40,7 @@
 
 /// TargetX86Base is a template for all X86 Targets, and it relies on the CRTP
 /// pattern for generating code, delegating to actual backends target-specific
-/// lowerings (e.g., call, ret, and intrinsics.) Backends are expected to
-/// implement the following methods (which should be accessible from
-/// TargetX86Base):
-///
-/// Operand *createNaClReadTPSrcOperand()
+/// lowerings (e.g., call, ret, and intrinsics).
 ///
 /// Note: Ideally, we should be able to
 ///
@@ -84,8 +80,6 @@
   static FixupKind getPcRelFixup() { return PcRelFixup; }
   static FixupKind getAbsFixup() { return AbsFixup; }
 
-  bool needSandboxing() const { return NeedSandboxing; }
-
   void translateOm1() override;
   void translateO2() override;
   void doLoadOpt();
@@ -193,13 +187,6 @@
     return Traits::Is64Bit ? false : Ty == IceType_i64;
   }
 
-  ConstantRelocatable *createGetIPForRegister(const Variable *Dest) {
-    assert(Dest->hasReg());
-    const std::string RegName = Traits::getRegName(Dest->getRegNum());
-    return llvm::cast<ConstantRelocatable>(Ctx->getConstantExternSym(
-        Ctx->getGlobalString(H_getIP_prefix + RegName)));
-  }
-
   SizeT getMinJumpTableSize() const override { return 4; }
 
   void emitVariable(const Variable *Var) const override;
@@ -242,26 +229,10 @@
   Operand *legalizeUndef(Operand *From, RegNumT RegNum = RegNumT());
 
 protected:
-  const bool NeedSandboxing;
-
   explicit TargetX86Base(Cfg *Func);
 
   void postLower() override;
 
-  /// Initializes the RebasePtr member variable -- if so required by
-  /// SandboxingType for the concrete Target.
-  void initRebasePtr() {
-    assert(SandboxingType != ST_None);
-    dispatchToConcrete(&Traits::ConcreteTarget::initRebasePtr);
-  }
-
-  /// Emit code that initializes the value of the RebasePtr near the start of
-  /// the function -- if so required by SandboxingType for the concrete type.
-  void initSandbox() {
-    assert(SandboxingType != ST_None);
-    dispatchToConcrete(&Traits::ConcreteTarget::initSandbox);
-  }
-
   void lowerAlloca(const InstAlloca *Instr) override;
   void lowerArguments() override;
   void lowerArithmetic(const InstArithmetic *Instr) override;
@@ -300,12 +271,7 @@
     int32_t Offset = 0;
     ConstantRelocatable *Relocatable = nullptr;
   };
-  /// Legalizes Addr w.r.t. SandboxingType. The exact type of legalization
-  /// varies for different <Target, SandboxingType> tuples.
-  bool legalizeOptAddrForSandbox(OptAddr *Addr) {
-    return dispatchToConcrete(
-        &Traits::ConcreteTarget::legalizeOptAddrForSandbox, std::move(Addr));
-  }
+
   // Builds information for a canonical address expression:
   //   <Relocatable + Offset>(Base, Index, Shift)
   X86OperandMem *computeAddressOpt(const Inst *Instr, Type MemType,
@@ -340,7 +306,6 @@
   /// Replace some calls to memset with inline instructions.
   void lowerMemset(Operand *Dest, Operand *Val, Operand *Count);
 
-  /// Lower an indirect jump adding sandboxing when needed.
   void lowerIndirectJump(Variable *JumpTarget) {
     // Without std::move below, the compiler deduces that the argument to
     // lowerIndirectJump is a Variable *&, not a Variable *.
@@ -367,21 +332,13 @@
 
   void eliminateNextVectorSextInstruction(Variable *SignExtendedResult);
 
-  void emitGetIP(CfgNode *Node) {
-    dispatchToConcrete(&Traits::ConcreteTarget::emitGetIP, std::move(Node));
-  }
-  /// Emit a sandboxed return sequence rather than a return.
-  void emitSandboxedReturn() {
-    dispatchToConcrete(&Traits::ConcreteTarget::emitSandboxedReturn);
-  }
-
   void emitStackProbe(size_t StackSizeBytes) {
     dispatchToConcrete(&Traits::ConcreteTarget::emitStackProbe,
                        std::move(StackSizeBytes));
   }
 
   /// Emit just the call instruction (without argument or return variable
-  /// processing), sandboxing if needed.
+  /// processing).
   virtual Inst *emitCallToTarget(Operand *CallTarget, Variable *ReturnReg,
                                  size_t NumVariadicFpArgs = 0) = 0;
   /// Materialize the moves needed to return a value of the specified type.
@@ -463,101 +420,43 @@
   X86OperandMem *getMemoryOperandForStackSlot(Type Ty, Variable *Slot,
                                               uint32_t Offset = 0);
 
-  /// AutoMemorySandboxer emits a bundle-lock/bundle-unlock pair if the
-  /// instruction's operand is a memory reference. This is only needed for
-  /// x86-64 NaCl sandbox.
-  template <InstBundleLock::Option BundleLockOpt = InstBundleLock::Opt_None>
-  class AutoMemorySandboxer {
-    AutoMemorySandboxer() = delete;
-    AutoMemorySandboxer(const AutoMemorySandboxer &) = delete;
-    AutoMemorySandboxer &operator=(const AutoMemorySandboxer &) = delete;
-
-  private:
-    typename Traits::TargetLowering *Target;
-
-    template <typename T, typename... Tail>
-    X86OperandMem **findMemoryReference(T **First, Tail... Others) {
-      if (llvm::isa<X86OperandMem>(*First)) {
-        return reinterpret_cast<X86OperandMem **>(First);
-      }
-      return findMemoryReference(Others...);
-    }
-
-    X86OperandMem **findMemoryReference() { return nullptr; }
-
-  public:
-    AutoBundle *Bundler = nullptr;
-    X86OperandMem **const MemOperand;
-
-    template <typename... T>
-    AutoMemorySandboxer(typename Traits::TargetLowering *Target, T... Args)
-        : Target(Target), MemOperand(Target->SandboxingType == ST_None
-                                         ? nullptr
-                                         : findMemoryReference(Args...)) {
-      if (MemOperand != nullptr) {
-        if (Traits::Is64Bit) {
-          Bundler = new (Target->Func->template allocate<AutoBundle>())
-              AutoBundle(Target, BundleLockOpt);
-        }
-        *MemOperand = Target->_sandbox_mem_reference(*MemOperand);
-      }
-    }
-
-    ~AutoMemorySandboxer() {
-      if (Bundler != nullptr) {
-        Bundler->~AutoBundle();
-      }
-    }
-  };
-
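
AutoMemorySandboxer, like the AutoBundle it created, leaned on RAII so that a
bundle-locked region could not be left open on any path out of a lowering
helper. The idiom in isolation (the directive names follow the deleted
comments; the puts-based emitter is a stand-in for Subzero's assembler):

    #include <cstdio>

    struct AutoBundleScope {
      AutoBundleScope() { std::puts(".bundle_start"); }
      ~AutoBundleScope() { std::puts(".bundle_end"); } // runs on every return
    };

    void lowerStoreSandboxed() {
      AutoBundleScope _; // bundle stays locked for this helper's lifetime
      std::puts("mov %eax, (%r15,%rbx,1)");
    }
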
   /// The following are helpers that insert lowered x86 instructions with
   /// minimal syntactic overhead, so that the lowering code can look as close to
   /// assembly as practical.
   void _adc(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Adc>(Dest, Src0);
   }
   void _adc_rmw(X86OperandMem *DestSrc0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &DestSrc0, &Src1);
     Context.insert<typename Traits::Insts::AdcRMW>(DestSrc0, Src1);
   }
   void _add(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Add>(Dest, Src0);
   }
   void _add_rmw(X86OperandMem *DestSrc0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &DestSrc0, &Src1);
     Context.insert<typename Traits::Insts::AddRMW>(DestSrc0, Src1);
   }
   void _addps(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Addps>(Dest, Src0);
   }
   void _addss(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Addss>(Dest, Src0);
   }
   void _add_sp(Operand *Adjustment) {
     dispatchToConcrete(&Traits::ConcreteTarget::_add_sp, std::move(Adjustment));
   }
   void _and(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::And>(Dest, Src0);
   }
   void _andnps(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Andnps>(Dest, Src0);
   }
   void _andps(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Andps>(Dest, Src0);
   }
   void _and_rmw(X86OperandMem *DestSrc0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &DestSrc0, &Src1);
     Context.insert<typename Traits::Insts::AndRMW>(DestSrc0, Src1);
   }
   void _blendvps(Variable *Dest, Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Blendvps>(Dest, Src0, Src1);
   }
   void _br(BrCond Condition, CfgNode *TargetTrue, CfgNode *TargetFalse) {
@@ -575,36 +474,28 @@
     Context.insert<InstX86Br>(Label, Condition, Kind);
   }
   void _bsf(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Bsf>(Dest, Src0);
   }
   void _bsr(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Bsr>(Dest, Src0);
   }
   void _bswap(Variable *SrcDest) {
-    AutoMemorySandboxer<> _(this, &SrcDest);
     Context.insert<typename Traits::Insts::Bswap>(SrcDest);
   }
   void _cbwdq(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Cbwdq>(Dest, Src0);
   }
   void _cmov(Variable *Dest, Operand *Src0, BrCond Condition) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Cmov>(Dest, Src0, Condition);
   }
   void _cmp(Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Icmp>(Src0, Src1);
   }
   void _cmpps(Variable *Dest, Operand *Src0, CmppsCond Condition) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Cmpps>(Dest, Src0, Condition);
   }
   void _cmpxchg(Operand *DestOrAddr, Variable *Eax, Variable *Desired,
                 bool Locked) {
-    AutoMemorySandboxer<> _(this, &DestOrAddr);
     Context.insert<typename Traits::Insts::Cmpxchg>(DestOrAddr, Eax, Desired,
                                                     Locked);
     // Mark eax as possibly modified by cmpxchg.
@@ -614,7 +505,6 @@
   }
   void _cmpxchg8b(X86OperandMem *Addr, Variable *Edx, Variable *Eax,
                   Variable *Ecx, Variable *Ebx, bool Locked) {
-    AutoMemorySandboxer<> _(this, &Addr);
     Context.insert<typename Traits::Insts::Cmpxchg8b>(Addr, Edx, Eax, Ecx, Ebx,
                                                       Locked);
     // Mark edx, and eax as possibly modified by cmpxchg8b.
@@ -627,28 +517,22 @@
   }
   void _cvt(Variable *Dest, Operand *Src0,
             typename Traits::Insts::Cvt::CvtVariant Variant) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Cvt>(Dest, Src0, Variant);
   }
   void _round(Variable *Dest, Operand *Src0, Operand *Imm) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Round>(Dest, Src0, Imm);
   }
   void _div(Variable *Dest, Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Div>(Dest, Src0, Src1);
   }
   void _divps(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Divps>(Dest, Src0);
   }
   void _divss(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Divss>(Dest, Src0);
   }
   template <typename T = Traits>
   typename std::enable_if<T::UsesX87, void>::type _fld(Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Src0);
     Context.insert<typename Traits::Insts::template Fld<>>(Src0);
   }
   // TODO(jpp): when implementing the X8664 calling convention, make sure x8664
@@ -659,7 +543,6 @@
   }
   template <typename T = Traits>
   typename std::enable_if<T::UsesX87, void>::type _fstp(Variable *Dest) {
-    AutoMemorySandboxer<> _(this, &Dest);
     Context.insert<typename Traits::Insts::template Fstp<>>(Dest);
   }
   // TODO(jpp): when implementing the X8664 calling convention, make sure x8664
@@ -669,24 +552,19 @@
     llvm::report_fatal_error("fstp is not available in x86-64");
   }
   void _idiv(Variable *Dest, Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Idiv>(Dest, Src0, Src1);
   }
   void _imul(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Imul>(Dest, Src0);
   }
   void _imul_imm(Variable *Dest, Operand *Src0, Constant *Imm) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::ImulImm>(Dest, Src0, Imm);
   }
   void _insertps(Variable *Dest, Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Insertps>(Dest, Src0, Src1);
   }
   void _int3() { Context.insert<typename Traits::Insts::Int3>(); }
   void _jmp(Operand *Target) {
-    AutoMemorySandboxer<> _(this, &Target);
     Context.insert<typename Traits::Insts::Jmp>(Target);
   }
   void _lea(Variable *Dest, Operand *Src0) {
@@ -713,311 +591,238 @@
                                     RegNumT RegNum = RegNumT()) {
     if (Dest == nullptr)
       Dest = makeReg(Src0->getType(), RegNum);
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     return Context.insert<typename Traits::Insts::Mov>(Dest, Src0);
   }
   void _mov_sp(Operand *NewValue) {
     dispatchToConcrete(&Traits::ConcreteTarget::_mov_sp, std::move(NewValue));
   }
   typename Traits::Insts::Movp *_movp(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     return Context.insert<typename Traits::Insts::Movp>(Dest, Src0);
   }
   void _movd(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Movd>(Dest, Src0);
   }
   void _movq(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Movq>(Dest, Src0);
   }
   void _movss(Variable *Dest, Variable *Src0) {
     Context.insert<typename Traits::Insts::MovssRegs>(Dest, Src0);
   }
   void _movsx(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Movsx>(Dest, Src0);
   }
   typename Traits::Insts::Movzx *_movzx(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     return Context.insert<typename Traits::Insts::Movzx>(Dest, Src0);
   }
   void _maxss(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Maxss>(Dest, Src0);
   }
   void _minss(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Minss>(Dest, Src0);
   }
   void _maxps(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Maxps>(Dest, Src0);
   }
   void _minps(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Minps>(Dest, Src0);
   }
   void _mul(Variable *Dest, Variable *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Mul>(Dest, Src0, Src1);
   }
   void _mulps(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Mulps>(Dest, Src0);
   }
   void _mulss(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Mulss>(Dest, Src0);
   }
   void _neg(Variable *SrcDest) {
-    AutoMemorySandboxer<> _(this, &SrcDest);
     Context.insert<typename Traits::Insts::Neg>(SrcDest);
   }
   void _nop(SizeT Variant) {
     Context.insert<typename Traits::Insts::Nop>(Variant);
   }
   void _or(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Or>(Dest, Src0);
   }
   void _orps(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Orps>(Dest, Src0);
   }
   void _or_rmw(X86OperandMem *DestSrc0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &DestSrc0, &Src1);
     Context.insert<typename Traits::Insts::OrRMW>(DestSrc0, Src1);
   }
   void _padd(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Padd>(Dest, Src0);
   }
   void _padds(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Padds>(Dest, Src0);
   }
   void _paddus(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Paddus>(Dest, Src0);
   }
   void _pand(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Pand>(Dest, Src0);
   }
   void _pandn(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Pandn>(Dest, Src0);
   }
   void _pblendvb(Variable *Dest, Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Pblendvb>(Dest, Src0, Src1);
   }
   void _pcmpeq(Variable *Dest, Operand *Src0,
                Type ArithmeticTypeOverride = IceType_void) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Pcmpeq>(Dest, Src0,
                                                    ArithmeticTypeOverride);
   }
   void _pcmpgt(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Pcmpgt>(Dest, Src0);
   }
   void _pextr(Variable *Dest, Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Pextr>(Dest, Src0, Src1);
   }
   void _pinsr(Variable *Dest, Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Pinsr>(Dest, Src0, Src1);
   }
   void _pmull(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Pmull>(Dest, Src0);
   }
   void _pmulhw(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Pmulhw>(Dest, Src0);
   }
   void _pmulhuw(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Pmulhuw>(Dest, Src0);
   }
   void _pmaddwd(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Pmaddwd>(Dest, Src0);
   }
   void _pmuludq(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Pmuludq>(Dest, Src0);
   }
   void _pop(Variable *Dest) {
     Context.insert<typename Traits::Insts::Pop>(Dest);
   }
   void _por(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Por>(Dest, Src0);
   }
   void _punpckl(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Punpckl>(Dest, Src0);
   }
   void _punpckh(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Punpckh>(Dest, Src0);
   }
   void _packss(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Packss>(Dest, Src0);
   }
   void _packus(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Packus>(Dest, Src0);
   }
   void _pshufb(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Pshufb>(Dest, Src0);
   }
   void _pshufd(Variable *Dest, Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Pshufd>(Dest, Src0, Src1);
   }
   void _psll(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Psll>(Dest, Src0);
   }
   void _psra(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Psra>(Dest, Src0);
   }
   void _psrl(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Psrl>(Dest, Src0);
   }
   void _psub(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Psub>(Dest, Src0);
   }
   void _psubs(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Psubs>(Dest, Src0);
   }
   void _psubus(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Psubus>(Dest, Src0);
   }
   void _push(Operand *Src0) {
     Context.insert<typename Traits::Insts::Push>(Src0);
   }
   void _pxor(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Pxor>(Dest, Src0);
   }
   void _ret(Variable *Src0 = nullptr) {
     Context.insert<typename Traits::Insts::Ret>(Src0);
   }
   void _rol(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Rol>(Dest, Src0);
   }
   void _round(Variable *Dest, Operand *Src, Constant *Imm) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src);
     Context.insert<typename Traits::Insts::Round>(Dest, Src, Imm);
   }
-  X86OperandMem *_sandbox_mem_reference(X86OperandMem *Mem) {
-    return dispatchToConcrete(&Traits::ConcreteTarget::_sandbox_mem_reference,
-                              std::move(Mem));
-  }
   void _sar(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Sar>(Dest, Src0);
   }
   void _sbb(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Sbb>(Dest, Src0);
   }
   void _sbb_rmw(X86OperandMem *DestSrc0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &DestSrc0, &Src1);
     Context.insert<typename Traits::Insts::SbbRMW>(DestSrc0, Src1);
   }
   void _setcc(Variable *Dest, BrCond Condition) {
     Context.insert<typename Traits::Insts::Setcc>(Dest, Condition);
   }
   void _shl(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Shl>(Dest, Src0);
   }
   void _shld(Variable *Dest, Variable *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Shld>(Dest, Src0, Src1);
   }
   void _shr(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Shr>(Dest, Src0);
   }
   void _shrd(Variable *Dest, Variable *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Shrd>(Dest, Src0, Src1);
   }
   void _shufps(Variable *Dest, Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Shufps>(Dest, Src0, Src1);
   }
   void _movmsk(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Movmsk>(Dest, Src0);
   }
   void _sqrt(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Sqrt>(Dest, Src0);
   }
   void _store(Operand *Value, X86Operand *Mem) {
-    AutoMemorySandboxer<> _(this, &Value, &Mem);
     Context.insert<typename Traits::Insts::Store>(Value, Mem);
   }
   void _storep(Variable *Value, X86OperandMem *Mem) {
-    AutoMemorySandboxer<> _(this, &Value, &Mem);
     Context.insert<typename Traits::Insts::StoreP>(Value, Mem);
   }
   void _storeq(Operand *Value, X86OperandMem *Mem) {
-    AutoMemorySandboxer<> _(this, &Value, &Mem);
     Context.insert<typename Traits::Insts::StoreQ>(Value, Mem);
   }
   void _stored(Operand *Value, X86OperandMem *Mem) {
-    AutoMemorySandboxer<> _(this, &Value, &Mem);
     Context.insert<typename Traits::Insts::StoreD>(Value, Mem);
   }
   void _sub(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Sub>(Dest, Src0);
   }
   void _sub_rmw(X86OperandMem *DestSrc0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &DestSrc0, &Src1);
     Context.insert<typename Traits::Insts::SubRMW>(DestSrc0, Src1);
   }
   void _sub_sp(Operand *Adjustment) {
     dispatchToConcrete(&Traits::ConcreteTarget::_sub_sp, std::move(Adjustment));
   }
   void _subps(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Subps>(Dest, Src0);
   }
   void _subss(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Subss>(Dest, Src0);
   }
   void _test(Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Test>(Src0, Src1);
   }
   void _ucomiss(Operand *Src0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &Src0, &Src1);
     Context.insert<typename Traits::Insts::Ucomiss>(Src0, Src1);
   }
   void _ud2() { Context.insert<typename Traits::Insts::UD2>(); }
   void _unlink_bp() { dispatchToConcrete(&Traits::ConcreteTarget::_unlink_bp); }
   void _xadd(Operand *Dest, Variable *Src, bool Locked) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src);
     Context.insert<typename Traits::Insts::Xadd>(Dest, Src, Locked);
     // The xadd exchanges Dest and Src (modifying Src). Model that update with
     // a FakeDef followed by a FakeUse.
@@ -1026,7 +831,6 @@
     Context.insert<InstFakeUse>(Src);
   }
   void _xchg(Operand *Dest, Variable *Src) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src);
     Context.insert<typename Traits::Insts::Xchg>(Dest, Src);
     // The xchg modifies Dest and Src -- model that update with a
     // FakeDef/FakeUse.
@@ -1035,15 +839,12 @@
     Context.insert<InstFakeUse>(Src);
   }
   void _xor(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Xor>(Dest, Src0);
   }
   void _xorps(Variable *Dest, Operand *Src0) {
-    AutoMemorySandboxer<> _(this, &Dest, &Src0);
     Context.insert<typename Traits::Insts::Xorps>(Dest, Src0);
   }
   void _xor_rmw(X86OperandMem *DestSrc0, Operand *Src1) {
-    AutoMemorySandboxer<> _(this, &DestSrc0, &Src1);
     Context.insert<typename Traits::Insts::XorRMW>(DestSrc0, Src1);
   }
 
@@ -1095,32 +896,30 @@
       RegisterAliases;
   SmallBitVector RegsUsed;
   std::array<VarList, IceType_NUM> PhysicalRegisters;
-  // RebasePtr is a Variable that holds the Rebasing pointer (if any) for the
-  // current sandboxing type.
-  Variable *RebasePtr = nullptr;
 
 private:
   /// dispatchToConcrete is the template voodoo that allows TargetX86Base to
   /// invoke methods in Machine (which inherits from TargetX86Base) without
-  /// having to rely on virtual method calls. There are two overloads, one for
-  /// non-void types, and one for void types. We need this becase, for non-void
-  /// types, we need to return the method result, where as for void, we don't.
-  /// While it is true that the code compiles without the void "version", there
-  /// used to be a time when compilers would reject such code.
+  /// having to rely on virtual method calls. There are two overloads, one
+  /// for non-void types, and one for void types. We need this because, for
+  /// non-void types, we need to return the method result, whereas for
+  /// void, we don't. While it is true that the code compiles without the
+  /// void "version", there used to be a time when compilers would reject
+  /// such code.
   ///
   /// This machinery is far from perfect. Note that, in particular, the
-  /// arguments provided to dispatchToConcrete() need to match the arguments for
-  /// Method **exactly** (i.e., no argument promotion is performed.)
+  /// arguments provided to dispatchToConcrete() need to match the arguments
+  /// for Method **exactly** (i.e., no argument promotion is performed.)
   template <typename Ret, typename... Args>
   typename std::enable_if<!std::is_void<Ret>::value, Ret>::type
-  dispatchToConcrete(Ret (ConcreteTarget::*Method)(Args...), Args &&... args) {
+  dispatchToConcrete(Ret (ConcreteTarget::*Method)(Args...), Args &&...args) {
     return (static_cast<ConcreteTarget *>(this)->*Method)(
         std::forward<Args>(args)...);
   }
 
   template <typename... Args>
   void dispatchToConcrete(void (ConcreteTarget::*Method)(Args...),
-                          Args &&... args) {
+                          Args &&...args) {
     (static_cast<ConcreteTarget *>(this)->*Method)(std::forward<Args>(args)...);
   }
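
For readers unfamiliar with the CRTP dispatch above, here is the pattern
reduced to a compilable sketch (the class and method names are invented for
the example): the base class is parameterized on its own derived class, so a
static_cast plus a member-pointer call replaces virtual dispatch entirely.

    #include <utility>

    template <class ConcreteTarget> struct TargetBase {
      template <typename Ret, typename... Args>
      Ret dispatchToConcrete(Ret (ConcreteTarget::*Method)(Args...),
                             Args &&...Arguments) {
        return (static_cast<ConcreteTarget *>(this)->*Method)(
            std::forward<Args>(Arguments)...);
      }
    };

    struct TargetDemo : TargetBase<TargetDemo> {
      int StackSize = 0;
      int addSp(int Adjustment) { return StackSize += Adjustment; }
    };

    int main() {
      TargetDemo T;
      // As noted above, argument types must match exactly: passing a short
      // here would fail template deduction, since no promotion is performed.
      return T.dispatchToConcrete(&TargetDemo::addSp, 8) == 8 ? 0 : 1;
    }
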
 
@@ -1204,8 +1003,8 @@
                                       int8_t Idx14, int8_t Idx15);
   /// @}
 
-  static FixupKind PcRelFixup;
-  static FixupKind AbsFixup;
+  static constexpr FixupKind PcRelFixup = Traits::FK_PcRel;
+  static constexpr FixupKind AbsFixup = Traits::FK_Abs;
 };
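
The switch to static constexpr here (and the matching deletion of the
out-of-line "template <> FixupKind ..." declarations elsewhere in this
change) works because a constexpr static data member is implicitly an inline
variable since C++17, so the in-class initializer is its definition. It also
becomes possible only now: with nonsfi gone, AbsFixup no longer needs to be
chosen at runtime from getUseNonsfi() (see the constructor change below).
The language rule in miniature, with stand-in types and values:

    struct DemoTraits {
      static constexpr int FK_PcRel = 1;
      static constexpr int FK_Abs = 4;
    };

    struct DemoTarget {
      // The initializer is the definition; no "const int DemoTarget::..."
      // line in a .cpp file is required (C++17 inline variables).
      static constexpr int PcRelFixup = DemoTraits::FK_PcRel;
      static constexpr int AbsFixup = DemoTraits::FK_Abs;
    };

    static_assert(DemoTarget::PcRelFixup == 1, "");
    static_assert(DemoTarget::AbsFixup == 4, "");
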
 
 template <typename TraitsType>
diff --git a/third_party/subzero/src/IceTargetLoweringX86BaseImpl.h b/third_party/subzero/src/IceTargetLoweringX86BaseImpl.h
index 0fe65d9..1206220 100644
--- a/third_party/subzero/src/IceTargetLoweringX86BaseImpl.h
+++ b/third_party/subzero/src/IceTargetLoweringX86BaseImpl.h
@@ -400,8 +400,7 @@
 }
 
 template <typename TraitsType>
-TargetX86Base<TraitsType>::TargetX86Base(Cfg *Func)
-    : TargetLowering(Func), NeedSandboxing(SandboxingType == ST_NaCl) {
+TargetX86Base<TraitsType>::TargetX86Base(Cfg *Func) : TargetLowering(Func) {
   static_assert(
       (Traits::InstructionSet::End - Traits::InstructionSet::Begin) ==
           (TargetInstructionSet::X86InstructionSet_End -
@@ -425,8 +424,6 @@
   filterTypeToRegisterSet(Ctx, Traits::RegisterSet::Reg_NUM,
                           TypeToRegisterSet.data(), TypeToRegisterSet.size(),
                           Traits::getRegName, getRegClassName);
-  PcRelFixup = Traits::FK_PcRel;
-  AbsFixup = getFlags().getUseNonsfi() ? Traits::FK_Gotoff : Traits::FK_Abs;
 }
 
 template <typename TraitsType>
@@ -448,10 +445,6 @@
 template <typename TraitsType> void TargetX86Base<TraitsType>::translateO2() {
   TimerMarker T(TimerStack::TT_O2, Func);
 
-  if (SandboxingType != ST_None) {
-    initRebasePtr();
-  }
-
   genTargetHelperCalls();
   Func->dump("After target helper call insertion");
 
@@ -532,9 +525,6 @@
   Func->genCode();
   if (Func->hasError())
     return;
-  if (SandboxingType != ST_None) {
-    initSandbox();
-  }
   Func->dump("After x86 codegen");
   splitBlockLocalVariables(Func);
 
@@ -579,20 +569,11 @@
   // to reduce the amount of work needed for searching for opportunities.
   Func->doBranchOpt();
   Func->dump("After branch optimization");
-
-  // Mark nodes that require sandbox alignment
-  if (NeedSandboxing) {
-    Func->markNodesForSandboxing();
-  }
 }
 
 template <typename TraitsType> void TargetX86Base<TraitsType>::translateOm1() {
   TimerMarker T(TimerStack::TT_Om1, Func);
 
-  if (SandboxingType != ST_None) {
-    initRebasePtr();
-  }
-
   genTargetHelperCalls();
 
   // Do not merge Alloca instructions, and lay out the stack.
@@ -617,9 +598,6 @@
   Func->genCode();
   if (Func->hasError())
     return;
-  if (SandboxingType != ST_None) {
-    initSandbox();
-  }
   Func->dump("After initial x86 codegen");
 
   regAlloc(RAK_InfOnly);
@@ -631,10 +609,6 @@
   if (Func->hasError())
     return;
   Func->dump("After stack frame mapping");
-
-  // Mark nodes that require sandbox alignment
-  if (NeedSandboxing)
-    Func->markNodesForSandboxing();
 }
 
 inline bool canRMW(const InstArithmetic *Arith) {
@@ -943,11 +917,7 @@
     return;
   Ostream &Str = Ctx->getStrEmit();
   if (Var->hasReg()) {
-    const bool Is64BitSandboxing = Traits::Is64Bit && NeedSandboxing;
-    const Type VarType = (Var->isRematerializable() && Is64BitSandboxing)
-                             ? IceType_i64
-                             : Var->getType();
-    Str << "%" << getRegName(Var->getRegNum(), VarType);
+    Str << "%" << getRegName(Var->getRegNum(), Var->getType());
     return;
   }
   if (Var->mustHaveReg()) {
@@ -1222,8 +1192,6 @@
   if (!IsEbpBasedFrame)
     BasicFrameOffset += SpillAreaSizeBytes;
 
-  emitGetIP(Node);
-
   const VarList &Args = Func->getArgs();
   size_t InArgsSizeBytes = 0;
   unsigned NumXmmArgs = 0;
@@ -1405,16 +1373,6 @@
     assert(RegNum == Traits::getBaseReg(RegNum));
     _pop_reg(RegNum);
   }
-
-  if (!NeedSandboxing) {
-    return;
-  }
-  emitSandboxedReturn();
-  if (RI->getSrcSize()) {
-    auto *RetValue = llvm::cast<Variable>(RI->getSrc(0));
-    Context.insert<InstFakeUse>(RetValue);
-  }
-  RI->setDeleted();
 }
 
 template <typename TraitsType> Type TargetX86Base<TraitsType>::stackSlotType() {
@@ -1551,8 +1509,7 @@
     // Non-constant sizes need to be adjusted to the next highest multiple of
     // the required alignment at runtime.
     Variable *T = nullptr;
-    if (Traits::Is64Bit && TotalSize->getType() != IceType_i64 &&
-        !NeedSandboxing) {
+    if (Traits::Is64Bit && TotalSize->getType() != IceType_i64) {
       T = makeReg(IceType_i64);
       _movzx(T, TotalSize);
     } else {
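
The comment in this hunk refers to the standard round-up-to-alignment
computation. For reference, a sketch of that adjustment, assuming the
required alignment is a power of two (as alloca alignments are):

    #include <cstdint>
    #include <cstdio>

    // Round Size up to the next multiple of Align (a power of two).
    static uint64_t alignUp(uint64_t Size, uint64_t Align) {
      return (Size + Align - 1) & ~(Align - 1);
    }

    int main() {
      std::printf("%llu %llu\n",
                  (unsigned long long)alignUp(13, 16),  // 16
                  (unsigned long long)alignUp(32, 16)); // 32
    }
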
@@ -2285,7 +2242,7 @@
       auto *Var = legalizeToReg(Src0);
       auto *Mem = Traits::X86OperandMem::create(Func, IceType_void, Var, Const);
       T = makeReg(Ty);
-      _lea(T, _sandbox_mem_reference(Mem));
+      _lea(T, Mem);
       _mov(Dest, T);
       break;
     }
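
The lea here materializes an address without touching memory: it performs
the base+offset arithmetic of a memory operand but never dereferences it.
In C++ terms (an analogy, not the emitted code):

    #include <cstdio>

    int main() {
      int Arr[8] = {0};
      int *Var = Arr;       // base register
      int Const = 3;        // constant displacement (in elements)
      int *T = Var + Const; // what `lea T, [Var + Const*4]` computes
      // No load or store happened; T is just the computed address.
      std::printf("%p %p\n", (void *)Var, (void *)T);
      return 0;
    }
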
@@ -4411,21 +4368,6 @@
     lowerMemset(Instr->getArg(0), Instr->getArg(1), Instr->getArg(2));
     return;
   }
-  case Intrinsics::NaClReadTP: {
-    if (NeedSandboxing) {
-      Operand *Src =
-          dispatchToConcrete(&ConcreteTarget::createNaClReadTPSrcOperand);
-      Variable *Dest = Instr->getDest();
-      Variable *T = nullptr;
-      _mov(T, Src);
-      _mov(Dest, T);
-    } else {
-      InstCall *Call =
-          makeHelperCall(RuntimeHelper::H_call_read_tp, Instr->getDest(), 0);
-      lowerCall(Call);
-    }
-    return;
-  }
   case Intrinsics::Setjmp: {
     InstCall *Call =
         makeHelperCall(RuntimeHelper::H_call_setjmp, Instr->getDest(), 1);
@@ -4446,18 +4388,10 @@
     return;
   }
   case Intrinsics::Stacksave: {
-    if (!Traits::Is64Bit || !NeedSandboxing) {
-      Variable *esp = Func->getTarget()->getPhysicalRegister(getStackReg(),
-                                                             Traits::WordType);
-      Variable *Dest = Instr->getDest();
-      _mov(Dest, esp);
-      return;
-    }
-    Variable *esp = Func->getTarget()->getPhysicalRegister(
-        Traits::RegisterSet::Reg_esp, IceType_i32);
+    Variable *esp =
+        Func->getTarget()->getPhysicalRegister(getStackReg(), Traits::WordType);
     Variable *Dest = Instr->getDest();
     _mov(Dest, esp);
-
     return;
   }
   case Intrinsics::Stackrestore: {
@@ -5836,34 +5770,16 @@
     bool OffsetFromIndex = false;
     bool CombinedBaseIndex = false;
   } Skip;
-  // This points to the boolean in Skip that represents the last folding
-  // performed. This is used to disable a pattern match that generated an
-  // invalid address. Without this, the algorithm would never finish.
-  bool *SkipLastFolding = nullptr;
   // NewAddrCheckpoint is used to rollback the address being formed in case an
   // invalid address is formed.
   OptAddr NewAddrCheckpoint;
   Reason = Instr;
   do {
-    if (SandboxingType != ST_None) {
-      // When sandboxing, we defer the sandboxing of NewAddr to the Concrete
-      // Target. If our optimization was overly aggressive, then we simply undo
-      // what the previous iteration did, and set the previous pattern's skip
-      // bit to true.
-      if (!legalizeOptAddrForSandbox(&NewAddr)) {
-        *SkipLastFolding = true;
-        SkipLastFolding = nullptr;
-        NewAddr = NewAddrCheckpoint;
-        Reason = nullptr;
-      }
-    }
-
     if (Reason) {
       AddrOpt.dumpAddressOpt(NewAddr.Relocatable, NewAddr.Offset, NewAddr.Base,
                              NewAddr.Index, NewAddr.Shift, Reason);
       AddressWasOptimized = true;
       Reason = nullptr;
-      SkipLastFolding = nullptr;
       memset(reinterpret_cast<void *>(&Skip), 0, sizeof(Skip));
     }
 
@@ -5873,7 +5789,6 @@
     if (!Skip.AssignBase &&
         (Reason = AddrOpt.matchAssign(&NewAddr.Base, &NewAddr.Relocatable,
                                       &NewAddr.Offset))) {
-      SkipLastFolding = &Skip.AssignBase;
       // Assignments of Base from a Relocatable or ConstantInt32 can result
       // in Base becoming nullptr.  To avoid code duplication in this loop we
       // prefer that Base be non-nullptr if possible.
@@ -5886,7 +5801,6 @@
     if (!Skip.AssignBase &&
         (Reason = AddrOpt.matchAssign(&NewAddr.Index, &NewAddr.Relocatable,
                                       &NewAddr.Offset))) {
-      SkipLastFolding = &Skip.AssignIndex;
       continue;
     }
 
@@ -5897,7 +5811,6 @@
       if (!Skip.CombinedBaseIndex &&
           (Reason = AddrOpt.matchCombinedBaseIndex(
                &NewAddr.Base, &NewAddr.Index, &NewAddr.Shift))) {
-        SkipLastFolding = &Skip.CombinedBaseIndex;
         continue;
       }
 
@@ -5929,13 +5842,11 @@
     if (!Skip.OffsetFromBase && (Reason = AddrOpt.matchOffsetIndexOrBase(
                                      &NewAddr.Base, /*Shift =*/0,
                                      &NewAddr.Relocatable, &NewAddr.Offset))) {
-      SkipLastFolding = &Skip.OffsetFromBase;
       continue;
     }
     if (!Skip.OffsetFromIndex && (Reason = AddrOpt.matchOffsetIndexOrBase(
                                       &NewAddr.Index, NewAddr.Shift,
                                       &NewAddr.Relocatable, &NewAddr.Offset))) {
-      SkipLastFolding = &Skip.OffsetFromIndex;
       continue;
     }
 
@@ -5946,17 +5857,6 @@
     return nullptr;
   }
 
-  // Undo any addition of RebasePtr.  It will be added back when the mem
-  // operand is sandboxed.
-  if (NewAddr.Base == RebasePtr) {
-    NewAddr.Base = nullptr;
-  }
-
-  if (NewAddr.Index == RebasePtr) {
-    NewAddr.Index = nullptr;
-    NewAddr.Shift = 0;
-  }
-
   Constant *OffsetOp = nullptr;
   if (NewAddr.Relocatable == nullptr) {
     OffsetOp = Ctx->getConstantInt32(NewAddr.Offset);
@@ -7042,10 +6942,6 @@
     constexpr auto Segment = X86OperandMem::SegmentRegisters::DefaultSegment;
 
     Variable *Target = nullptr;
-    if (Traits::Is64Bit && NeedSandboxing) {
-      assert(Index != nullptr && Index->getType() == IceType_i32);
-    }
-
     if (PointerType == IceType_i32) {
       _mov(Target, X86OperandMem::create(Func, PointerType, NoBase, Offset,
                                          Index, Shift, Segment));
@@ -7353,28 +7249,5 @@
-/// since loOperand() and hiOperand() don't expect Undef input.  Also, in
-/// Non-SFI mode, add a FakeUse(RebasePtr) for every pooled constant operand.
+/// since loOperand() and hiOperand() don't expect Undef input.
 template <typename TraitsType> void TargetX86Base<TraitsType>::prelowerPhis() {
-  if (getFlags().getUseNonsfi()) {
-    assert(RebasePtr);
-    CfgNode *Node = Context.getNode();
-    uint32_t RebasePtrUseCount = 0;
-    for (Inst &I : Node->getPhis()) {
-      auto *Phi = llvm::dyn_cast<InstPhi>(&I);
-      if (Phi->isDeleted())
-        continue;
-      for (SizeT I = 0; I < Phi->getSrcSize(); ++I) {
-        Operand *Src = Phi->getSrc(I);
-        // TODO(stichnot): This over-counts for +0.0, and under-counts for other
-        // kinds of pooling.
-        if (llvm::isa<ConstantRelocatable>(Src) ||
-            llvm::isa<ConstantFloat>(Src) || llvm::isa<ConstantDouble>(Src)) {
-          ++RebasePtrUseCount;
-        }
-      }
-    }
-    if (RebasePtrUseCount) {
-      Node->getInsts().push_front(InstFakeUse::create(Func, RebasePtr));
-    }
-  }
   if (Traits::Is64Bit) {
     // On x86-64 we don't need to prelower phis -- the architecture can handle
     // 64-bit integer natively.
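
On 32-bit targets, prelowering rewrites 64-bit phi operands through
loOperand() and hiOperand(). The split itself is simply the two 32-bit
halves of the value, as in this sketch:

    #include <cstdint>
    #include <cstdio>

    int main() {
      uint64_t V = 0x1122334455667788ull;
      uint32_t Lo = (uint32_t)V;         // the half loOperand() selects
      uint32_t Hi = (uint32_t)(V >> 32); // the half hiOperand() selects
      std::printf("lo=%08x hi=%08x\n", (unsigned)Lo, (unsigned)Hi);
      // prints: lo=55667788 hi=11223344
    }
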
@@ -7604,9 +7478,6 @@
       ArgTypes = {IceType_i32, IceType_i32, IceType_i32};
       ReturnType = IceType_void;
       break;
-    case Intrinsics::NaClReadTP:
-      ReturnType = IceType_i32;
-      break;
     case Intrinsics::Setjmp:
       ArgTypes = {IceType_i32};
       ReturnType = IceType_i32;
@@ -7893,7 +7764,6 @@
 template <typename TraitsType>
 Operand *TargetX86Base<TraitsType>::legalize(Operand *From, LegalMask Allowed,
                                              RegNumT RegNum) {
-  const bool UseNonsfi = getFlags().getUseNonsfi();
   const Type Ty = From->getType();
   // Assert that a physical register is allowed. To date, all calls to
   // legalize() allow a physical register. If a physical register needs to be
@@ -7979,34 +7849,24 @@
       }
     }
 
-    if (auto *CR = llvm::dyn_cast<ConstantRelocatable>(Const)) {
-      // If the operand is a ConstantRelocatable, and Legal_AddrAbs is not
-      // specified, and UseNonsfi is indicated, we need to add RebasePtr.
-      if (UseNonsfi && !(Allowed & Legal_AddrAbs)) {
-        assert(Ty == IceType_i32);
-        Variable *NewVar = makeReg(Ty, RegNum);
-        auto *Mem = Traits::X86OperandMem::create(Func, Ty, nullptr, CR);
-        // LEAs are not automatically sandboxed, thus we explicitly invoke
-        // _sandbox_mem_reference.
-        _lea(NewVar, _sandbox_mem_reference(Mem));
-        From = NewVar;
-      }
-    } else if (isScalarFloatingType(Ty)) {
-      // Convert a scalar floating point constant into an explicit memory
-      // operand.
-      if (auto *ConstFloat = llvm::dyn_cast<ConstantFloat>(Const)) {
-        if (Utils::isPositiveZero(ConstFloat->getValue()))
-          return makeZeroedRegister(Ty, RegNum);
-      } else if (auto *ConstDouble = llvm::dyn_cast<ConstantDouble>(Const)) {
-        if (Utils::isPositiveZero(ConstDouble->getValue()))
-          return makeZeroedRegister(Ty, RegNum);
-      }
+    if (!llvm::isa<ConstantRelocatable>(Const)) {
+      if (isScalarFloatingType(Ty)) {
+        // Convert a scalar floating point constant into an explicit memory
+        // operand.
+        if (auto *ConstFloat = llvm::dyn_cast<ConstantFloat>(Const)) {
+          if (Utils::isPositiveZero(ConstFloat->getValue()))
+            return makeZeroedRegister(Ty, RegNum);
+        } else if (auto *ConstDouble = llvm::dyn_cast<ConstantDouble>(Const)) {
+          if (Utils::isPositiveZero(ConstDouble->getValue()))
+            return makeZeroedRegister(Ty, RegNum);
+        }
 
-      auto *CFrom = llvm::cast<Constant>(From);
-      assert(CFrom->getShouldBePooled());
-      Constant *Offset = Ctx->getConstantSym(0, CFrom->getLabelName());
-      auto *Mem = X86OperandMem::create(Func, Ty, nullptr, Offset);
-      From = Mem;
+        auto *CFrom = llvm::cast<Constant>(From);
+        assert(CFrom->getShouldBePooled());
+        Constant *Offset = Ctx->getConstantSym(0, CFrom->getLabelName());
+        auto *Mem = X86OperandMem::create(Func, Ty, nullptr, Offset);
+        From = Mem;
+      }
     }
 
     bool NeedsReg = false;
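
The positive-zero special case above works because +0.0 is the all-zeros
bit pattern, which a register self-XOR produces for free; -0.0 has the
sign bit set and must remain a pooled memory constant. A sketch of the
check (mirroring what Utils::isPositiveZero tests; the exact
implementation is assumed):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    static bool isPositiveZeroF64(double V) {
      uint64_t Bits;
      std::memcpy(&Bits, &V, sizeof(Bits)); // inspect the raw bit pattern
      return Bits == 0; // true only for +0.0; -0.0 has the sign bit set
    }

    int main() {
      std::printf("%d %d %d\n",
                  (int)isPositiveZeroF64(0.0),   // 1
                  (int)isPositiveZeroF64(-0.0),  // 0
                  (int)isPositiveZeroF64(1.0));  // 0
    }
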
@@ -8225,8 +8085,6 @@
 void TargetX86Base<Machine>::emit(const ConstantRelocatable *C) const {
   if (!BuildDefs::dump())
     return;
-  assert(!getFlags().getUseNonsfi() ||
-         C->getName().toString() == GlobalOffsetTable);
   Ostream &Str = Ctx->getStrEmit();
   Str << "$";
   emitWithoutPrefix(C);
@@ -8238,9 +8096,7 @@
   if (!BuildDefs::dump())
     return;
   Ostream &Str = Ctx->getStrEmit();
-  const bool UseNonsfi = getFlags().getUseNonsfi();
-  const char *Prefix = UseNonsfi ? ".data.rel.ro." : ".rodata.";
-  Str << "\t.section\t" << Prefix << JumpTable->getSectionName()
+  Str << "\t.section\t.rodata." << JumpTable->getSectionName()
       << ",\"a\",@progbits\n"
          "\t.align\t"
       << typeWidthInBytes(getPointerType()) << "\n"
@@ -8317,7 +8173,7 @@
 
 template <typename TraitsType>
 void TargetDataX86<TraitsType>::lowerJumpTables() {
-  const bool IsPIC = getFlags().getUseNonsfi();
+  const bool IsPIC = false;
   switch (getFlags().getOutFileType()) {
   case FT_Elf: {
     ELFObjectWriter *Writer = Ctx->getObjectWriter();
@@ -8353,7 +8209,7 @@
 template <typename TraitsType>
 void TargetDataX86<TraitsType>::lowerGlobals(
     const VariableDeclarationList &Vars, const std::string &SectionSuffix) {
-  const bool IsPIC = getFlags().getUseNonsfi();
+  const bool IsPIC = false;
   switch (getFlags().getOutFileType()) {
   case FT_Elf: {
     ELFObjectWriter *Writer = Ctx->getObjectWriter();