The wind lull and gust should use a fixed period in their definitions. Currently the min/max is taken over a flexible period, which means, for example, that the year graph for wind lull is all zeroes instead of showing that wind lull is lower in spring than in autumn.
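To illustrate the problem, here is a toy sketch in Python with made-up numbers (not actual station data): taking the raw minimum over one ever-growing flexible window collapses to zero as soon as any calm or glitchy sample occurs, while fixed sub-periods preserve the seasonal signal.

```python
# Toy 3-second samples standing in for a whole year; one calm or
# glitchy moment drives the flexible-period minimum to zero.
year_samples = [4.2, 3.8, 0.0, 5.1, 4.7]
print(min(year_samples))  # 0.0 -> the year graph shows a flat zero line

# With a fixed sub-period (say, per day), seasonal differences survive:
spring_day = [2.1, 2.4, 1.9]
autumn_day = [5.2, 4.8, 5.5]
print(min(spring_day), min(autumn_day))  # 1.9 vs 4.8 stay distinguishable
```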
I have learned to ignore the minimum value because it sometimes returns zero in error. I had some Skys that I needed to test for wind speed and direction accuracy before installing them in difficult-to-access locations. I built an impromptu wind tunnel by moving furniture and mounting a fan, and left two Skys side by side in constant wind for days to test for erroneous gusts and lulls of zero. Using two Skys let me cross-check whether anything had changed with the fan. There were several moments when a Sky would read zero for no reason. After discussing with WeatherFlow whether the Skys were faulty, I learned that 'glitches' will sometimes cause a zero wind speed. A glitch would last about 3 seconds and sometimes occur more than once a day. It made me think that the calculation for minimum wind speed should ignore these glitches.
Suggestion:
There are several ways this could be implemented. Since my Sky reports 3-second values, removing the single lowest value from the calculation might actually improve accuracy, and the same could be done for the maximum. I videoed my Sky with the 3-second values displayed on my phone in frame, and in my opinion the maximum gust was significantly higher than the 3-second values on either side of it.
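A minimal sketch of what dropping the single lowest (and highest) 3-second sample could look like, assuming the samples arrive as a plain list of speeds; the function names are hypothetical and not part of any WeatherFlow API:

```python
def trimmed_lull(samples):
    """Minimum wind speed after dropping the single lowest sample,
    so a one-off zero glitch cannot set the lull on its own."""
    return sorted(samples)[1] if len(samples) > 1 else min(samples)

def trimmed_gust(samples):
    """Maximum wind speed after dropping the single highest sample."""
    return sorted(samples)[-2] if len(samples) > 1 else max(samples)

# A zero glitch and one suspect spike among steady 3-second readings:
speeds = [4.1, 4.3, 0.0, 4.2, 4.5, 9.8, 4.4]
print(trimmed_lull(speeds))  # 4.1 instead of 0.0
print(trimmed_gust(speeds))  # 4.5 instead of 9.8
```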
Cheers, Ian
Wow, that is some thorough testing you did. Impressive.
If only they used a fixed definition of the interval, it would be less of a problem. If, for example, wind lull were defined as the minimum wind speed during a 1-minute interval, and that definition were applied consistently, the wind lull for a 60-minute period would be the average of those per-minute values, not the overall minimum. A glitch like the one you experienced would then have much less effect on the result.
The same is true for gusts.
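A quick sketch of that fixed-interval idea in Python, assuming 3-second samples (so 20 per minute); the names and numbers are mine for illustration:

```python
from statistics import mean

SAMPLES_PER_MINUTE = 20  # one sample every 3 seconds

def minute_lulls(samples_3s):
    """Fixed definition: wind lull = minimum over each 1-minute interval."""
    return [min(samples_3s[i:i + SAMPLES_PER_MINUTE])
            for i in range(0, len(samples_3s), SAMPLES_PER_MINUTE)]

def hourly_lull(samples_3s):
    """Lull for a longer period = average of the per-minute lulls;
    a single 3-second glitch to zero only drags down one of the
    60 per-minute minima instead of zeroing the whole hour."""
    return mean(minute_lulls(samples_3s))
```

A gust aggregate would mirror this with max() per minute and an average (or another agreed, fixed rule) over the longer period.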