So here are some monkey wrenches:
1) System caching - there are levels of speed a program can run at depending on where its data currently sits in the memory hierarchy. The level can change for many reasons, but usually because the program is running for the first time, or because another program took priority and evicted yours from a faster level. If the data has to be fetched from the hard drive, it takes a long time. If it sits in RAM, it's quicker, but can still seem sluggish when you're hoping for an optimized result. If it sits in the L1, L2, or L3 cache, which lives on the CPU itself, things are very quick. It's nearly impossible to say which level your data sits at, at any given time, on all computers. (There's a cache-timing sketch after this list.)
2) Compilers implement things in different ways, so code compiled with MinGW can behave and perform differently than the same code compiled under Cygwin.
2.5) Operating systems work in different ways, so code run on Windows will give different results than the same code on macOS or Linux. Many functions in the C++ standard library are ultimately routed to system calls in some way.
3) Different platforms (ARM vs. x86_64) - you can probably guess it from 2 and 2.5: different implementations take different routes to the same result. (The platform-detection sketch after this list shows what one build actually reports.)
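To make point 1 concrete, here's a minimal sketch (my own illustration, assuming C++14 and std::chrono, not a definitive benchmark) that times the same summation twice. On many machines the second pass is faster because the data is already warm in cache, but the actual numbers vary wildly from one computer to the next, which is exactly the point:

```cpp
#include <chrono>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // ~4 MB of ints; small enough that much of it can live in L2/L3.
    std::vector<int> data(1'000'000, 1);

    auto time_sum = [&] {
        auto start = std::chrono::steady_clock::now();
        volatile long long total =
            std::accumulate(data.begin(), data.end(), 0LL);
        auto stop = std::chrono::steady_clock::now();
        (void)total; // volatile keeps the compiler from discarding the work
        return std::chrono::duration_cast<std::chrono::microseconds>(
                   stop - start).count();
    };

    // Same code, same data: only the cache state differs between passes.
    std::cout << "first pass:  " << time_sum() << " us\n";
    std::cout << "second pass: " << time_sum() << " us\n";
}
```

If you bump the vector far past your L3 size, the two passes tend to converge again, because neither one fits in cache.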
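And a sketch for points 2 through 3 (again my own illustration, relying on predefined macros that GCC, Clang, MSVC, MinGW, and Cygwin are known to set): the exact same source file reports a different toolchain, OS, and architecture depending on where it's built, and even the advertised tick period of high_resolution_clock can differ, since the library routes it to different OS facilities:

```cpp
#include <chrono>
#include <iostream>

int main() {
    // Which toolchain built this? Order matters: Clang and MinGW
    // also define __GNUC__, so test the more specific macros first.
#if defined(__MINGW32__)
    std::cout << "toolchain: MinGW\n";
#elif defined(__CYGWIN__)
    std::cout << "toolchain: Cygwin\n";
#elif defined(_MSC_VER)
    std::cout << "toolchain: MSVC\n";
#elif defined(__clang__)
    std::cout << "toolchain: Clang\n";
#elif defined(__GNUC__)
    std::cout << "toolchain: GCC\n";
#endif

    // Which OS is this build targeting?
#if defined(_WIN32)
    std::cout << "OS: Windows\n";
#elif defined(__APPLE__)
    std::cout << "OS: macOS\n";
#elif defined(__linux__)
    std::cout << "OS: Linux\n";
#endif

    // Which architecture?
#if defined(__aarch64__) || defined(_M_ARM64)
    std::cout << "arch: ARM64\n";
#elif defined(__x86_64__) || defined(_M_X64)
    std::cout << "arch: x86_64\n";
#endif

    // The standard library leans on the OS: even the advertised tick
    // period of high_resolution_clock can differ per platform.
    using hrc = std::chrono::high_resolution_clock;
    std::cout << "high_resolution_clock tick: "
              << hrc::period::num << "/" << hrc::period::den << " s\n";
}
```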
In general it's fine to test the speed of one function against another when it's just for your own use, but don't treat that result as a rule; it gets messy if you expect it to be the same for everyone. Your best option is to understand Big-O, do your research, and just try to do things as efficiently as you can. Don't get sucked down the wrong optimization rabbit hole.
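If you do time things for your own use, repeated runs will at least tame the noise from the monkey wrenches above. A rough sketch (do_work() is just a hypothetical stand-in for whatever you want to measure): run it many times, then read the minimum and median rather than any single sample:

```cpp
#include <algorithm>
#include <chrono>
#include <iostream>
#include <vector>

// Hypothetical stand-in for the code under test.
static void do_work() {
    volatile long long sink = 0;
    for (int i = 0; i < 1'000'000; ++i) sink = sink + i;
}

int main() {
    constexpr int runs = 30;
    std::vector<long long> samples;
    samples.reserve(runs);

    for (int i = 0; i < runs; ++i) {
        auto start = std::chrono::steady_clock::now();
        do_work();
        auto stop = std::chrono::steady_clock::now();
        samples.push_back(std::chrono::duration_cast<std::chrono::microseconds>(
                              stop - start).count());
    }

    // Sort so we can pull out min / median / max.
    std::sort(samples.begin(), samples.end());
    std::cout << "min:    " << samples.front()   << " us\n"
              << "median: " << samples[runs / 2] << " us\n"
              << "max:    " << samples.back()    << " us\n";
}
```

The minimum is roughly the warm-cache, uninterrupted case; a big gap between minimum and maximum usually means caching or the OS scheduler got in the way on some runs.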
https://www.youtube.com/watch?v=p8u_k2LIZyo <- an example of how researching general computer hardware and standards, a bit of calculus, and the most basic tools can produce impressive results.