Memory operations used operands to encode positional/size arguments, with stack variants for cases where the user wanted to supply those arguments programmatically. At scale, I don't see the non-stack variants being used much: most code has to build values such as indexes or sizes on the stack anyway, via additions and subtractions. It's therefore better to be stack-first. One counterpoint is inline optimisation of code at runtime: if a compile-time-known value is pushed and then immediately consumed by an operation, we can instead encode it directly into an operand-based instruction. That speeds up execution, since popping the value off the stack is slower than having it available as part of the instruction.
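
As a rough illustration, here is a minimal sketch of that fusion as a peephole pass over a decoded instruction stream. The instruction names (`PushConst`, `LoadStack`, `LoadImm`) are hypothetical stand-ins, not part of the actual instruction set:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Instr {
    /// Push a compile-time-known constant onto the stack.
    PushConst(u64),
    /// Stack variant: pops its size/index argument off the stack.
    LoadStack,
    /// Operand variant: the argument is encoded in the instruction itself.
    LoadImm(u64),
}

/// Rewrite `PushConst(n); LoadStack` into `LoadImm(n)`, avoiding a
/// stack round-trip at runtime.
fn fuse_constants(code: &[Instr]) -> Vec<Instr> {
    let mut out = Vec::with_capacity(code.len());
    let mut i = 0;
    while i < code.len() {
        match (code[i], code.get(i + 1)) {
            (Instr::PushConst(n), Some(Instr::LoadStack)) => {
                out.push(Instr::LoadImm(n));
                i += 2; // consume both instructions
            }
            (instr, _) => {
                out.push(instr);
                i += 1;
            }
        }
    }
    out
}

fn main() {
    let before = vec![Instr::PushConst(8), Instr::LoadStack];
    let after = fuse_constants(&before);
    assert_eq!(after, vec![Instr::LoadImm(8)]);
}
```

The nice property of this arrangement is that it keeps the instruction set stack-first for users while still letting the operand-encoded forms exist purely as an optimisation target, emitted only when the argument is known at compile time.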