As the season is rather conducive to this kind of thing, here is my wish list to Santa Claus.*
First of all, I would just like to say that I have been using the WeatherFlow APIs on a daily basis for several months. What follows is only my point of view, based on the difficulties and shortcomings I have encountered.
First things first, I think the /observations/device and /observations/station endpoints should be consistent. What I mean by this is that the two endpoints should offer the same set of parameters. I understand the “station” paradigm as a smart way to represent a synthesis of the observations from all devices of a single station.
This means that if several indoor and outdoor modules are attached to a station, this endpoint should expose both indoor and outdoor data. The value of the “station” paradigm lies in smartly generating a single indoor temperature, a single outdoor temperature, and so on for the other measurement types… in short, a snapshot of the station.
Of course, if I want detailed measurements from a probe in a particular room or a specific area of my garden, I must use the “device” paradigm and the /observations/device endpoint… but it should behave consistently, with the same parameters, as the /observations/station endpoint.
So these two endpoints should allow the following parameters:
time_start / time_end (currently, only the /observations/device endpoint allows them)
scale: the time step between two measurements. For obvious reasons, not all scales can be served for all date ranges (a 1-minute scale over a 30-day range is definitely inconceivable). If a scale can’t be served, the API must return a self-descriptive error.
Note: these three parameters are self-sufficient; there is no longer any need for parameters like day_offset…
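To illustrate, here is a minimal sketch of what such a consistent request could look like. This is the proposal, not the current API: the scale parameter (seconds between measurements) and the identical parameter set across both endpoints are hypothetical, and the base URL is only assumed for the example.

```python
from urllib.parse import urlencode

# Assumed base URL for illustration only.
BASE = "https://swd.weatherflow.com/swd/rest/observations"

def build_obs_url(kind, ident, token, time_start, time_end, scale):
    """Build an observations URL with the same three proposed
    parameters for either paradigm: time_start, time_end, scale.
    (scale is hypothetical -- it is the wish, not the current API.)"""
    assert kind in ("station", "device")
    query = urlencode({
        "token": token,
        "time_start": time_start,  # epoch seconds
        "time_end": time_end,      # epoch seconds
        "scale": scale,            # proposed: seconds between measurements
    })
    return f"{BASE}/{kind}/{ident}?{query}"

# The exact same parameter set, whichever paradigm is used:
station_url = build_obs_url("station", 1234, "my-token", 1577836800, 1577923200, 300)
device_url = build_obs_url("device", 5678, "my-token", 1577836800, 1577923200, 300)
```

The point of the sketch is the symmetry: switching between the station snapshot and a single device should only change the path, never the query parameters.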
Last main point: the documentation. Unless you have an API that fully implements affordance best practices (and who does?), you should precisely document all parameters and all errors. Two examples:
to discover the scales used for different period ranges on the /observations/device endpoint, I had to run several tests with Postman…
since I don’t know all the possible errors, I don’t know how my application should react to an error it receives… And what about error 99 (UNKNOWN) that I sometimes get?
That’s it. I understand that everything cannot be done overnight, and I think the quality of your API is already very good. But these few elements could make it even better!
Have a good day!
(*) don’t worry, @dsj, I know it is too late for this year
Sorry for that, Gary. That was not my intention.
I just wanted to point out that with these three parameters, any other parameter is only a shortcut for something that could otherwise be complicated to do. Of course, this sort of “icing on the cake” is important: it lets you quickly prototype and test things, and it also shapes our understanding of the API and of what we can do with it…
Thanks, Pierre. This is excellent feedback. Both of these features (a “station” timeseries and an explicit “scale” parameter) are on our roadmap. And (perhaps an early Christmas present for you?) the “scale” feature actually already exists (though it’s undocumented)! We added it to support the graph zooming feature in our apps (described more here: Data archive buckets explained). We haven’t publicly documented it, however, since it’s fairly application-specific and is very subject to change.
But, if you feel adventurous, and with the caveat that this is an undocumented feature so you should use it at your own risk, here’s how it works: Simply add a bucket parameter to your /observations/device request to explicitly ask for a specific scale/zoom level:
bucket=a: time step = 1 minute, max range = 1 day
bucket=b: time step = 5 minutes, max range = 5 days
bucket=c: time step = 30 minutes, max range = 30 days
bucket=d: time step = 180 minutes, max range = 180 days
bucket=e: time step = 1 day, max range = none
Note: bucket can only override the default when it would return fewer data points than the default. For example, it is valid to ask for bucket=e for any time range, but if you ask for bucket=a for a time range greater than one day, you will get an error (status_code=3) with a (hopefully) helpful explanation.
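For what it’s worth, the bucket table and the “fewer data points” rule above can be captured in a small helper. This is my own sketch, not an official one: it assumes epoch-second timestamps, and the bucket definitions are simply copied from the table above.

```python
# Bucket definitions from the table above: (bucket, step_minutes, max_range_days).
# max_range None means unlimited (bucket "e").
BUCKETS = [
    ("a", 1, 1),
    ("b", 5, 5),
    ("c", 30, 30),
    ("d", 180, 180),
    ("e", 1440, None),
]

def smallest_valid_bucket(time_start, time_end):
    """Return the finest bucket whose max range covers [time_start, time_end].
    Asking the API for a finer bucket than this should yield status_code=3."""
    range_days = (time_end - time_start) / 86400
    for bucket, _step, max_days in BUCKETS:
        if max_days is None or range_days <= max_days:
            return bucket

# Example: a 3-day range is too long for bucket "a" (max 1 day),
# so "b" (max 5 days) is the finest bucket the API should accept.
finest = smallest_valid_bucket(0, 3 * 86400)
```

A client can use this to pick a safe explicit bucket before sending the request, instead of discovering the limit through a status_code=3 error.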
Again, use this feature at your own risk, if at all. If you have feedback on the current implementation and/or the definitions of what’s returned at each “bucket level”, we’d love to hear it!
Yes, we agree day_offset is a nice convenience feature. We have no plans to remove that parameter.
Documenting all possible errors is also on our roadmap!
Thank you, @dsj, for your answer… and for explaining this undocumented feature. I will try it out in different scenarios…
Nevertheless, in the official release of my application I will only use the “station timeseries” once it is ready. Do you already have an ETA?
Thank you again for your kindness towards developers
So would opening up a read-only view of the to-do list for bug fixes, new features, the software roadmap, etc., so we don’t all ask for the same things multiple times in parallel…