This graph helps validate whether the user-experience performance metrics gathered from different VM or SBC hosts are consistent. In most cases, performance should be consistent unless the SBC or VM hosts in a test are deliberately configured differently. For example, if power settings are accidentally configured differently on SBC or VM hosts, the gathered performance metrics might not match up, and this graph can help point that out.
- This graph likely won't be helpful if each test session runs on its own machine, since that can make the graph convoluted. For example, this graph is very busy:
- On the other hand, too few hosts can make the results uninteresting, such as this:
- The following is an example of what you would want to see -- multiple VM or SBC hosts showing similar results:
- This is an example of a result worth investigating if it is unexpected -- in this case, finding out why the hosts' baseline user-experience metrics reported such different response times:
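Beyond eyeballing the graph, the same consistency check can be automated. The sketch below is a minimal, hypothetical example (the host names, sample values, and 25% tolerance are illustrative, not from any real test run) that flags hosts whose mean response time deviates noticeably from the group:

```python
# Hypothetical per-host response-time samples in milliseconds.
# Host names and values are illustrative only.
samples = {
    "vm-host-1": [210, 205, 198, 215],
    "vm-host-2": [202, 208, 199, 211],
    "vm-host-3": [340, 355, 348, 338],  # e.g. a different power plan
}

def flag_outlier_hosts(samples, tolerance=0.25):
    """Return hosts whose mean response time deviates from the mean of
    all host means by more than `tolerance` (a fraction, e.g. 0.25 = 25%)."""
    means = {host: sum(vals) / len(vals) for host, vals in samples.items()}
    overall = sum(means.values()) / len(means)
    return sorted(host for host, m in means.items()
                  if abs(m - overall) / overall > tolerance)

print(flag_outlier_hosts(samples))  # flags the host that stands out
```

A host flagged this way is a candidate for the kind of investigation described above, such as checking whether its power settings match the other hosts.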