Well, it has been years since I did my own performance test / comparison, but back in 2014 I determined that Restlets did indeed perform faster than just about any other script or integration method at the time (faster than scheduled scripts, SuiteTalk Web Services, CSV imports, etc.). Of course, that was before SuiteScript 2.x came out, so Map/Reduce wasn't an option and SuiteTalk REST wasn't available yet either. But over the years we have created many Restlets, and the throughput we see with them is impressive, so this doesn't surprise me.
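For reference, the kind of Restlet I'm talking about is just the standard SuiteScript 2.x Restlet entry point that accepts a batch of updates in one request. Here's a rough sketch; the payload shape and the sales order / memo field are hypothetical placeholders, not anything specific to my scripts:

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType Restlet
 */
// Rough sketch of a bulk-update Restlet (SuiteScript 2.1 assumed).
// The { updates: [{ id, memo }] } payload and the sales order / memo
// field are hypothetical placeholders.
define(['N/record'], (record) => {
    const post = (body) => {
        const results = [];
        (body.updates || []).forEach((u) => {
            // submitFields avoids a full record load/save for simple field updates
            record.submitFields({
                type: record.Type.SALES_ORDER,
                id: u.id,
                values: { memo: u.memo }
            });
            results.push({ id: u.id, updated: true });
        });
        return { count: results.length, results: results };
    };

    return { post };
});
```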
However, we also use Map/Reduce scripts a lot, and they can perform very well, but we have noticed that the getInputData stage can end up being the bottleneck for a Map/Reduce script when the data set to retrieve is very large, say over 30,000 records. For some reason, the time it takes to gather all of the data and hand it to the next stage does not scale well above a certain point, and you may find that 45,000 records take more than twice as long as 30,000 records in the getInputData stage (that's just an example; it is going to depend on a lot of factors specific to your account and your data). But I will say that if you are dealing with tens of thousands of records or more, it is worth experimenting with capping the getInputData (GID) stage at a certain number of records and seeing where the sweet spot is for that script. I have to limit most of mine to the 30-40 thousand range, as that seems to be where we get the best throughput before it degrades.
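If it helps, here's roughly what I mean by capping the GID stage. This is just a sketch assuming SuiteScript 2.1; the saved search id and the 30,000 cap are placeholders you would tune for your own script and account:

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
// Sketch of capping the getInputData (GID) stage. The saved search id
// and the 30,000 cap are placeholders; tune the cap for your own data.
define(['N/search'], (search) => {
    const GID_LIMIT = 30000;

    const getInputData = () => {
        const capped = [];
        const pagedData = search.load({ id: 'customsearch_pending_orders' })
            .runPaged({ pageSize: 1000 });

        for (const pageRange of pagedData.pageRanges) {
            if (capped.length >= GID_LIMIT) {
                break; // anything past the cap is left for the next run
            }
            pagedData.fetch({ index: pageRange.index }).data.forEach((result) => {
                if (capped.length < GID_LIMIT) {
                    capped.push({ id: result.id, type: result.recordType });
                }
            });
        }
        return capped;
    };

    const map = (context) => {
        const item = JSON.parse(context.value);
        // ... process item.id here ...
    };

    return { getInputData, map };
});
```

One thing to keep in mind with this approach: whatever falls outside the cap has to be picked up by a later run, so the script needs to be scheduled (or re-triggered) often enough to work through the backlog.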
As mentioned by others, workflows, user events, and even client scripts (yes, client scripts can run server-side!) can all fire in certain contexts but not others, which can affect one script type and not another, so that's worth a look too, to make sure you are really comparing the scripts themselves and not the other customizations being triggered by them.
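If you want to rule that out, checking the execution context inside the user event (or just temporarily logging it) is a quick way to see what is firing on top of your test. A rough sketch, assuming SuiteScript 2.1 and the N/runtime module; which contexts you skip is entirely up to you:

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType UserEventScript
 */
// Sketch of gating user event logic by execution context (SuiteScript 2.1,
// N/runtime assumed). The contexts skipped here are just examples.
define(['N/runtime'], (runtime) => {
    const beforeSubmit = (context) => {
        const ctx = runtime.executionContext;

        // Skip the heavy logic when the record is touched by a Restlet,
        // Map/Reduce, or CSV import, so a throughput test measures the
        // script under test rather than this customization on top of it.
        if (ctx === runtime.ContextType.RESTLET ||
            ctx === runtime.ContextType.MAP_REDUCE ||
            ctx === runtime.ContextType.CSV_IMPORT) {
            return;
        }

        // ... normal UI-driven logic here ...
    };

    return { beforeSubmit };
});
```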