Nigel's presentation of the data seems overly complicated in my opinion, but I totally understand that it's not easy to get your head around, especially when he's trying to compare two different GoM firmware versions. He clearly loves the GoM and is trying hard to validate and trust it, and he has a fair chance of doing so living in a flat region of the UK. I dislike the GoM because it's a moving target and consumers are not privy to the formula, plus our roads are more variable.
The only readings I really trust are the odometer and the SoC. The latter still needs to be tested for the linearity we expect with energy used, or, since we can't measure that directly, distance driven will have to serve as a proxy. The graphs (which I've posted before) are from a 300 km round trip over several hours to test this: a mostly-hilly drive at an average speed of less than 80 km/h.
The first graph is distance driven (in 100 km blocks) plotted against loss of SoC (because that's what happens). The first data point at the lower left is at home and the data point at the upper right is also at home, with no charging stops in between. The slight "S"-shape in the data points is due to the hills and the higher altitude of the destination, which is at the midpoint. The reason for collecting data at numerous waypoints is to validate the linearity of the SoC, which holds down to 32% SoC. The slope of the curve (24% SoC per 100 km) is the summary of the round trip. The dithering on either side of a straight line is remarkably minor considering the variations in speed and hills up to 760 m.
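For anyone who wants to play with the numbers, here's a rough sketch of that fit. The waypoint readings below are made-up placeholders (not my logged data), just to show how the slope and the scatter around the straight line fall out:

```python
# Minimal sketch (hypothetical waypoint values, not the actual trip log):
# fit a straight line to SoC loss vs. distance and look at the residuals.
import numpy as np

distance_km = np.array([0, 50, 100, 150, 200, 250, 300])   # cumulative odometer
soc_percent = np.array([96, 85, 73, 60, 49, 38, 27])        # SoC at each waypoint

soc_loss = soc_percent[0] - soc_percent            # loss of SoC since leaving home
slope, intercept = np.polyfit(distance_km, soc_loss, 1)

# Residuals are the "dithering" either side of the fitted line
residuals = soc_loss - (slope * distance_km + intercept)

print(f"Slope: {slope * 100:.1f}% SoC per 100 km")          # ~24 on the real trip
print(f"Max deviation from the line: {np.abs(residuals).max():.1f}% SoC")
```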
The second graph is loss of GoM range vs. distance driven, both in km, characterising the GoM for that day only. The summary of the trip is that the GoM returned 72% of what it estimated, which is pretty far off, but it wouldn't have had a chance to learn my range because I hadn't done the same drive every day the week before. Again, the dithering is very slight considering the same variations in driving.
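The 72% figure is just the ratio of actual distance driven to the GoM kilometres consumed over the round trip. A minimal sketch of that calculation (the GoM readings here are hypothetical, not the ones from the trip):

```python
# Illustrative numbers only -- not the readings from the actual trip.
gom_at_start_km = 420   # GoM reading when leaving home (hypothetical)
gom_at_end_km = 20      # GoM reading back at home (hypothetical)
odometer_km = 300       # actual distance driven

gom_consumed_km = gom_at_start_km - gom_at_end_km
return_ratio = odometer_km / gom_consumed_km   # fraction of the estimate actually delivered
print(f"GoM returned {return_ratio:.0%} of what it estimated")  # 75% here; 72% on the real trip
```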
