Sensor Status bits?

Hi,

I am running a new device that appears to be working well, but when I parse the device status, all of the 9 least significant bits are set to 1. The device is running FW version 173…has something major changed since 171?

Raw Message:
{'serial_number': 'ST-00127092', 'type': 'device_status', 'hub_sn': 'HB-00132986', 'timestamp': 1703619905, 'uptime': 77089, 'voltage': 2.467, 'firmware_revision': 173, 'rssi': -46, 'hub_rssi': -46, 'sensor_status': 655871, 'debug': 0}

Parsed Data (note status is masked with 0b111111111):
Device Status

Time: 2023-12-26 12:45:05

Uptime: 77089
FW: 173
Voltage: 2.467 V
RSSI: -46
Hub RSSI: -46
Status: FAIL: 111111111
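
For reference, this is roughly the check my parser runs (a minimal sketch with my own helper name; the only assumption is that the nine least significant bits of sensor_status are the per-sensor fail flags):

SENSOR_FAIL_MASK = 0b111111111  # bits 0-8: the individual sensor-fail flags

def format_sensor_status(sensor_status: int) -> str:
    # Mask off everything but the sensor-fail bits; report FAIL if any are set.
    masked = sensor_status & SENSOR_FAIL_MASK
    return "OK" if masked == 0 else f"FAIL: {masked:09b}"

print(format_sensor_status(655871))  # -> FAIL: 111111111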

I am seeing the same thing on a unit running FW version 173 and getting the exact same sensor_status number.
App doesn’t show any errors and unit seems to be functioning fine.

I’m also on device firmware v173, hub firmware v194.

I’m willing to bet $1 on someone reversing the sense of all of the bits from “fail” to “working correctly”.

I just installed a power booster and wanted to ensure it was working, so I wrote some Python code to decode the bits.

Here is the output:
"sensor_status_hex": "0xB07FF",
"sensor_status_decoded": "lightning fail, lightning noise, lightning disturber, pressure fail, temp fail, rh fail, wind fail, precip fail, light/uv fail, power booster shore power"

The dictionary I used to decode it:
map_sensor_status_bits = {
    0x00000001: 'lightning fail',
    0x00000002: 'lightning noise',
    0x00000004: 'lightning disturber',
    0x00000008: 'pressure fail',
    0x00000010: 'temp fail',
    0x00000020: 'rh fail',
    0x00000040: 'wind fail',
    0x00000080: 'precip fail',
    0x00000100: 'light/uv fail',
    0x00008000: 'power booster depleted',
    0x00010000: 'power booster shore power',
}

0xB07FF = 0b10110000011111111111
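
The decode step itself is roughly the following (a sketch; decode_sensor_status is my own name for it). It just collects the label of every mapped bit that is set:

def decode_sensor_status(sensor_status: int) -> str:
    # Collect the label for every bit in the map that is set in sensor_status.
    flags = [label for bit, label in map_sensor_status_bits.items()
             if sensor_status & bit]
    return ", ".join(flags) if flags else "all OK"

print(decode_sensor_status(0xB07FF))
# lightning fail, lightning noise, ..., light/uv fail, power booster shore power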

Hmmm - that would only hold if every system worldwide with that firmware and equally all-good sensors showed the same value…

All sensors OK should be all bits zero, per the WeatherFlow Tempest UDP Reference - v171.

Might be worth some others with firmware 173 reporting their values…

Yes there are a good handful of people who’ve reported similar issues publicly. I started this thread on the topic when it started for me in February. I still have an open customer support issue in which I’m told “they’ll get back to me soon” every couple of weeks when I ask.

It might be the combination of new hardware and the new firmware versions. All of my equipment is new, purchased in January 2024. Note that it may not be the firmware at all: the hardware could be reporting the bits in reverse, or the UDP packet generation could differ from what the hub sends to the cloud.

It is odd that none of the settings or status data coming from the cloud side reports anything unusual.

I'm receiving 511 during the day and 10751 during the evening: if I interpret the UDP API docs correctly, every sensor should be faulty (see the breakdown after the JSON below). That obviously can't be right, since the device is working fine.

{
  "serial_number": "ST-00008550",
  "type": "device_status",
  "hub_sn": "HB-00026574",
  "timestamp": 1729088063,
  "uptime": 5488007,
  "voltage": 2.658,
  "firmware_revision": 181,
  "rssi": -48,
  "hub_rssi": -60,
  "sensor_status": 511,
  "debug": 0,
  "ip": "10.1.10.116",
  "go": true
}
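
For what it's worth, here is a quick breakdown of those two values against the bit map posted earlier in the thread (same assumption about what each bit means):

for value in (511, 10751):
    print(f"{value} = {value:#x} = {value:b}")
# 511   = 0x1ff  = 111111111       -> all nine sensor-fail bits set
# 10751 = 0x29ff = 10100111111111  -> the same nine bits plus 0x800 and 0x2000,
#                                     which are not in the map above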

I think we should just all assume the sensor status reported is incorrect.

As per my thread above, I still receive this kind of reply from customer support every time I check. Considering how long it's been, it's clearly not a priority, so even if it gets fixed now, it could easily break again in the future (it's clearly not part of their unit tests) and again go many months before being fixed. It doesn't seem worth relying on this field.

They are still looking into the issue, but for now you can just assume that it isn't correct. I do apologize about that.