Implement VK_KHR_vulkan_memory_model

This is largely a no-op because CPUs typically have fully coherent cache
hierarchies (including on ccNUMA systems). Thus every write is implicitly
'visible'. Atomic operations with memory-order semantics likewise have
system-wide 'scope'. And we don't support sparse images yet, so
'availability' operations don't do anything.

Note that cache coherency would be lost if we were to use non-temporal
store instructions (e.g. MOVNTPS). A 'visibility' operation would then
require inserting an SFENCE instruction.

The `VolatileTexel` image operand signifies that "This access cannot be
eliminated, duplicated, or combined with other accesses." There doesn't
appear to be dEQP-VK test coverage for it at this time, which makes it
non-trivial to optimally design how to handle it. It is therefore not
implemented at this time.

Bug: b/176819536
Tests: dEQP-VK.memory_model.*
Change-Id: Ifa97eed5a4506eef8ba77b8fb2042a0b1a624db5
Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/56209
Kokoro-Result: kokoro <noreply+kokoro@google.com>
Reviewed-by: Alexis Hétu <sugoi@google.com>
Tested-by: Nicolas Capens <nicolascapens@google.com>
Commit-Queue: Nicolas Capens <nicolascapens@google.com>
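For illustration, the semantics-to-memory-order mapping used by this change can be sketched in portable C++. This is a minimal sketch, not the SwiftShader source: the mask values are the standard SPIR-V memory-semantics bits, and `std::atomic_thread_fence` stands in for Reactor's `rr::Fence`.

```cpp
#include <atomic>
#include <cstdint>

// SPIR-V memory-semantics ordering bits (values per the SPIR-V spec).
enum : uint32_t
{
	SemanticsNone = 0x0,
	SemanticsAcquire = 0x2,
	SemanticsRelease = 0x4,
	SemanticsAcquireRelease = 0x8,
	SemanticsSequentiallyConsistent = 0x10,
};

// Map SPIR-V semantics to a C++ memory order, mirroring the switch in the diff.
std::memory_order MemoryOrder(uint32_t semantics)
{
	uint32_t control = semantics & (SemanticsAcquire |
	                                SemanticsRelease |
	                                SemanticsAcquireRelease |
	                                SemanticsSequentiallyConsistent);
	switch(control)
	{
	case SemanticsNone: return std::memory_order_relaxed;
	case SemanticsAcquire: return std::memory_order_acquire;
	case SemanticsRelease: return std::memory_order_release;
	case SemanticsAcquireRelease: return std::memory_order_acq_rel;
	// Vulkan 1.1: "SequentiallyConsistent is treated as AcquireRelease"
	case SemanticsSequentiallyConsistent: return std::memory_order_acq_rel;
	default:
		// Invalid: more than one ordering bit set. Be conservative.
		return std::memory_order_acq_rel;
	}
}

// The "full fence" emitted for OpMemoryBarrier: a no-op for None semantics,
// otherwise a thread fence with the mapped ordering.
void Fence(uint32_t semantics)
{
	if(semantics != SemanticsNone)
	{
		std::atomic_thread_fence(MemoryOrder(semantics));
	}
}
```

On a fully coherent CPU this fence is all that 'availability'/'visibility' require; only non-temporal stores would additionally need an SFENCE, as noted above.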
diff --git a/src/Pipeline/SpirvShaderMemory.cpp b/src/Pipeline/SpirvShaderMemory.cpp
index 18ed889..8439011 100644
--- a/src/Pipeline/SpirvShaderMemory.cpp
+++ b/src/Pipeline/SpirvShaderMemory.cpp
@@ -279,8 +279,8 @@
 SpirvShader::EmitResult SpirvShader::EmitMemoryBarrier(InsnIterator insn, EmitState *state) const
 {
 	auto semantics = spv::MemorySemanticsMask(GetConstScalarInt(insn.word(2)));
-	// TODO: We probably want to consider the memory scope here. For now,
-	// just always emit the full fence.
+	// TODO(b/176819536): We probably want to consider the memory scope here.
+	// For now, just always emit the full fence.
 	Fence(semantics);

 	return EmitResult::Continue;
 }
@@ -421,13 +421,21 @@
 	}
 }

+void SpirvShader::Fence(spv::MemorySemanticsMask semantics) const
+{
+	if(semantics != spv::MemorySemanticsMaskNone)
+	{
+		rr::Fence(MemoryOrder(semantics));
+	}
+}
+
 std::memory_order SpirvShader::MemoryOrder(spv::MemorySemanticsMask memorySemantics)
 {
-	auto control = static_cast<uint32_t>(memorySemantics) & static_cast<uint32_t>(
-	    spv::MemorySemanticsAcquireMask |
-	    spv::MemorySemanticsReleaseMask |
-	    spv::MemorySemanticsAcquireReleaseMask |
-	    spv::MemorySemanticsSequentiallyConsistentMask);
+	uint32_t control = static_cast<uint32_t>(memorySemantics) & static_cast<uint32_t>(
+	                       spv::MemorySemanticsAcquireMask |
+	                       spv::MemorySemanticsReleaseMask |
+	                       spv::MemorySemanticsAcquireReleaseMask |
+	                       spv::MemorySemanticsSequentiallyConsistentMask);
 	switch(control)
 	{
 	case spv::MemorySemanticsMaskNone: return std::memory_order_relaxed;
@@ -437,7 +445,7 @@
 	case spv::MemorySemanticsSequentiallyConsistentMask: return std::memory_order_acq_rel;  // Vulkan 1.1: "SequentiallyConsistent is treated as AcquireRelease"
 	default:
 		// "it is invalid for more than one of these four bits to be set:
-		// Acquire, Release, AcquireRelease, or SequentiallyConsistent."
+		//  Acquire, Release, AcquireRelease, or SequentiallyConsistent."
 		UNREACHABLE("MemorySemanticsMask: %x", int(control));
 		return std::memory_order_acq_rel;
 	}