Fix Subzero Optimizer run time variability

std::unordered_map<> can have greatly varying performance when using a
pointer as the key type. That's because it allocates 'buckets' for
storing the values corresponding to each key's hash value, and pointer
keys hash to values derived from their addresses, which differ between
runs. When the buckets get too full, more buckets are allocated and the
elements get rehashed, which is a costly operation. When that happens,
and how often, is unpredictable when using object pointers as keys.
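
A minimal, self-contained sketch (not taken from SwiftShader; the Node
type and the counts are illustrative) of why pointer keys can behave
differently between runs: the default std::hash for a pointer is derived
from its address, so bucket placement, and therefore collision chains
and lookup/insert cost, depend on where the allocator happens to place
the objects.

    #include <cstdio>
    #include <memory>
    #include <unordered_map>
    #include <vector>

    struct Node
    {
    };

    int main()
    {
    	std::vector<std::unique_ptr<Node>> nodes;
    	std::unordered_map<const Node *, int> byPointer;

    	for(int i = 0; i < 1000; i++)
    	{
    		nodes.push_back(std::make_unique<Node>());
    		byPointer[nodes.back().get()] = i;  // key is a heap address
    	}

    	// The bucket a key lands in follows from its hashed address, so this
    	// value (and the collision chain lengths) can differ from run to run.
    	std::printf("first node maps to bucket %zu of %zu\n",
    	            byPointer.bucket(nodes.front().get()),
    	            byPointer.bucket_count());

    	return 0;
    }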

This was the case for the Optimizer::optimizeSingleBasicBlockLoadsStores()
optimization pass of Subzero. This change replaces the use of pointers
to InstAlloca instructions, which represent stack variables, with the
(unique) index of each alloca's destination argument (i.e. the address
of the allocated variable), which is of integer type.
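
As a rough, self-contained illustration of the keying technique (the
Variable and LastStore types below are simplified stand-ins, not the
actual Ice types; see the diff for the real code):

    #include <cstdint>
    #include <unordered_map>

    // Stand-in for a variable carrying a deterministic, unique index,
    // akin to the alloca's destination argument in Subzero.
    struct Variable
    {
    	std::uint32_t index;
    };

    struct LastStore
    {
    	int store;  // placeholder for the pointer to the store instruction
    };

    int main()
    {
    	Variable v0{ 0 };
    	Variable v1{ 1 };

    	// Keyed on the deterministic index instead of &v0 / &v1, so hashing
    	// and bucket placement are identical for identical input on every run.
    	std::unordered_map<std::uint32_t, LastStore> lastStoreTo;
    	lastStoreTo[v0.index] = { 1 };
    	lastStoreTo[v1.index] = { 2 };

    	return lastStoreTo.size() == 2 ? 0 : 1;
    }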

The change eliminates dramatic differences in run time for identical
input. Specifically, it makes it feasible again to reliably determine
whether a change is an optimization or a performance regression, as part
of our CI testing with Regres.

Fixes: b/193550986
Change-Id: Idf491082e4e2ffb83b2ff3e97fff2f190b0534ff
Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/56528
Presubmit-Ready: Nicolas Capens <nicolascapens@google.com>
Kokoro-Result: kokoro <noreply+kokoro@google.com>
Tested-by: Nicolas Capens <nicolascapens@google.com>
Reviewed-by: Alexis Hétu <sugoi@google.com>
diff --git a/src/Reactor/Optimizer.cpp b/src/Reactor/Optimizer.cpp
index 3de6283..b4872be 100644
--- a/src/Reactor/Optimizer.cpp
+++ b/src/Reactor/Optimizer.cpp
@@ -396,7 +396,11 @@
 			bool allLoadsReplaced = true;
 		};
 
-		std::unordered_map<const Ice::InstAlloca *, LastStore> lastStoreTo;
+		// Use the (unique) index of the alloca's destination argument (i.e. the address
+		// of the allocated variable), which is of type SizeT, as the key. Note we do not
+		// use the pointer to the alloca instruction or its resulting address, to avoid
+		// non-deterministic unordered_map behavior.
+		std::unordered_map<Ice::SizeT, LastStore> lastStoreTo;
 
 		for(Ice::Inst &inst : block->getInsts())
 		{
@@ -415,9 +419,11 @@
 					// a pointer which could be used for indirect stores.
 					if(getUses(address)->areOnlyLoadStore())
 					{
+						Ice::SizeT addressIdx = alloca->getDest()->getIndex();
+
 						// If there was a previous store to this address, and it was propagated
 						// to all subsequent loads, it can be eliminated.
-						if(auto entry = lastStoreTo.find(alloca); entry != lastStoreTo.end())
+						if(auto entry = lastStoreTo.find(addressIdx); entry != lastStoreTo.end())
 						{
 							Ice::Inst *previousStore = entry->second.store;
 
@@ -428,7 +434,7 @@
 							}
 						}
 
-						lastStoreTo[alloca] = { &inst };
+						lastStoreTo[addressIdx] = { &inst };
 					}
 				}
 			}
@@ -436,7 +442,8 @@
 			{
 				if(Ice::InstAlloca *alloca = allocaOf(inst.getLoadAddress()))
 				{
-					auto entry = lastStoreTo.find(alloca);
+					Ice::SizeT addressIdx = alloca->getDest()->getIndex();
+					auto entry = lastStoreTo.find(addressIdx);
 					if(entry != lastStoreTo.end())
 					{
 						const Ice::Inst *store = entry->second.store;