ArchiveSW - Display & Data Archive Storage

Regarding air quality and particulate matter, I just purchased an Enviro+ HAT and Particulate Matter Sensor from Pimoroni to have a play around with:

https://learn.pimoroni.com/tutorial/sandyj/enviro-plus-and-luftdaten-air-quality-station

Excuse the very crude (but functional I guess) enclosure on their tutorial page!


No idea how good it will be, but thought I’d mention it given the recent comments on here.

2 Likes

No… still the same, and I can ping it (them) even when it refuses the connection. I can obviously SSH in and VNC in. I get the same error when using the local browser on archivesw itself: http://localhost:8080

1 Like

Have you rebooted the RPi?

You can try

pm2 stop all
pm2 start all
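
If that doesn’t bring it back, it can help to see what pm2 actually has registered and what it is logging. The process name varies by install, so “all” is the safe target here:

pm2 list            # show every registered process and its status
pm2 logs --lines 50 # tail the most recent output from all processes
pm2 restart all     # stop and start in one step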

Yes, I have rebooted; it doesn’t seem to help. And that is the only way I can start it, as it no longer starts by itself. That stopped happening a while back.

Is there a config file I can copy? I may just do a clean build; it would be nice not to have to set everything up manually from scratch with my specific device information/data.

1 Like

All the config files are in the config folder.
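
If you want to keep a copy across a rebuild, something like this is one way to do it (the install path here is only an assumption; adjust it to wherever ArchiveSW actually lives on your Pi):

cp -a ~/archivesw/config ~/archivesw-config.bak      # save the current settings
# ... do the clean install ...
cp -a ~/archivesw-config.bak/. ~/archivesw/config/   # copy them back before the first run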

So after I do an install I can just copy that folder over the new default prior to running?

1 Like

@GaryFunk

I think I found the problem …

Unhandled rejection Error: ENOSPC: no space left on device, write

I’m assuming the MySQL database ate up my SD card … 8 GB?

Is there an easy way to purge the database and just start from scratch?

It would save me from building a whole system from scratch…
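
(For anyone hitting the same ENOSPC error: you can check whether the database really is what filled the card, and which table did it, with something like this. The archivesw schema name comes from the logs later in this thread; credentials are whatever your install uses.)

df -h                         # how full the card is overall
mysql -u root -p archivesw
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM   information_schema.tables
WHERE  table_schema = 'archivesw'
ORDER  BY size_mb DESC;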

2 Likes

When I get home I’ll get you a command to delete all the rapid wind. That will solve it.
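
(The exact command isn’t posted in this thread, but per the AlterTables output further down, the rapid wind readings live in the RapidWind table, so a purge would look something like this; take a backup first.)

mysql -u root -p archivesw
TRUNCATE TABLE RapidWind;   -- drops every rapid wind row; a DELETE ... WHERE could trim by date instead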

I was able to resolve it …

I went back to a backup image I forgot I had, restored it to a 16 GB SD card, then expanded the volume via raspi-config.
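
(For reference, the expand step can also be scripted; the nonint call below is the non-interactive equivalent of the raspi-config menu option.)

sudo raspi-config nonint do_expand_rootfs
sudo reboot
df -h /    # after the reboot, the root partition should report the full card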

I also went in and truncated each table in the archivesw DB, to start from scratch (today).

The bright side of this was I got my internal sensor up and working again!

3 Likes

I’m happy you solved the issue. I will add a command to let you purge non-weather related data.

1 Like

Ever since some things got updated a while ago I have not been able to get a completely working and stable RPi with ArchiveSW. I’m not sure what is going on; I wish I had just never updated.

One thing I experienced was the RPi getting overloaded constantly writing to the Archive.log file, to the point that I couldn’t access or control it. This was worst on an RPi 2B. Powering it off to regain control actually corrupted the SD card to the point that I can’t use it any more; no partitioning or formatting software will do anything to it.

On my RPi 3B+, my Samsung Pro Endurance SD card apparently doesn’t have any more endurance, because it quit in the same manner while I was re-partitioning it on my Surface 3 to do a clean-slate install. If anyone knows of any software that might be able to make these SD cards usable again, I’d appreciate it.

The error I’m having right now with a fresh install on a bare RPi 3B+ is this:
2019-06-28 23:17:19 Starting Archive v1.8.16.111
2019-06-28 23:17:19 TCP client active on: 127.0.0.1:33514 to 127.0.0.1:9090
2019-06-28 23:17:21 Error in insertHubStatus (4): ER_DATA_TOO_LONG: Data too long for column ‘fs’ at row 1
2019-06-28 23:17:51 Error in insertHubStatus (4): ER_DATA_TOO_LONG: Data too long for column ‘fs’ at row 1

The Archive.log file is over 22kB in less than 15 minutes. Any suggestions on what to do?
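
(For anyone debugging the same thing: you can see exactly what the insert is tripping over by checking the column definition the error names; the database and table names here come straight from the log above.)

mysql -u root -p archivesw
SHOW COLUMNS FROM HubStatus LIKE 'fs';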

1 Like

I’ll look into that and see why.

1 Like

This one is the best I have found … has worked well for me. However, there are times the card is just toast.
[screenshot of the recommended SD card repair tool was attached here]
You can also try the Windows command prompt with diskpart: run “list disk” to see the disks, “select disk #” to pick the SD card’s number, then “clean” to wipe it. (Just be careful not to select your system disk by mistake.)

2 Likes

I left it running overnight and this is what the log files are like:

[screenshots of the log files were attached here]

I found and fixed it. There is a new AlterTables. Update and run AlterTables. It may take a few minutes depending on how much data is in the tables.
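
(The exact update-and-run steps depend on how ArchiveSW was installed; the path and script name below are assumptions, so check your own install folder for the real ones.)

cd ~/archivesw          # assumption: install directory
git pull                # assumption: updates are fetched with git
node AlterTables.js     # assumption: script file name; the log reports it as AlterTables v1.8.16.030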

2 Likes

Good. I’ll update when I get home. I shut down the RPi because the log file was over 1.5 GB after less than 24 hours.

1 Like

I applied the update yesterday and let it run … I didn’t have any issues and still don’t :slight_smile:
Yes, I know, Gary, I’m not looking hard enough for a bug :clown_face:

1 Like

I updated and ran AlterTables, and that fixed one error but another showed up. I reinstalled from scratch (NOOBS), let the OS install do its update at the end, and let the ArchiveSW install script run its update too. Here is what the logs show:

Archive.log
====
2019-07-01 23:48:37 Starting Archive v1.8.16.111
2019-07-01 23:48:37 TCP client active on: 127.0.0.1:33048 to 127.0.0.1:9090
2019-07-01 23:48:43 Error in insertHubStatus (4): ER_DATA_TOO_LONG: Data too long for column 'fs' at row 1
2019-07-01 23:48:45 Error in insertHubStatus (4): ER_DATA_TOO_LONG: Data too long for column 'fs' at row 1
2019-07-01 23:48:46 appRestart :  : 
2019-07-01 23:48:47 piReboot :  : 
2019-07-01 23:48:53 Error in insertHubStatus (4): ER_DATA_TOO_LONG: Data too long for column 'fs' at row 1
2019-07-01 23:48:55 Error in insertHubStatus (4): ER_DATA_TOO_LONG: Data too long for column 'fs' at row 1
{......truncated.......}
2019-07-01 23:50:15 Error in insertHubStatus (4): ER_DATA_TOO_LONG: Data too long for column 'fs' at row 1
2019-07-01 23:50:23 Error in insertHubStatus (4): ER_TRUNCATED_WRONG_VALUE_FOR_FIELD: Incorrect integer value: '[25,0]' for column `archivesw`.`HubStatus`.`mqtt_stats` at row 1
2019-07-01 23:50:25 Error in insertHubStatus (4): ER_DATA_TOO_LONG: Data too long for column 'radio_stats' at row 1
2019-07-01 23:50:33 Error in insertHubStatus (4): ER_TRUNCATED_WRONG_VALUE_FOR_FIELD: Incorrect integer value: '[25,0]' for column `archivesw`.`HubStatus`.`mqtt_stats` at row 1
2019-07-01 23:50:35 Error in insertHubStatus (4): ER_DATA_TOO_LONG: Data too long for column 'radio_stats' at row 1

The error changed after running AlterTables right after the new install. Here is the log:

AlterTables.log
====
2019-07-01 23:50:20 Starting AlterTables v1.8.16.030
2019-07-01 23:50:21 Dropping table index: DailySky
2019-07-01 23:50:21 Dropping table index: PrecipEvent
2019-07-01 23:50:21 Dropping table index: SkyObservation
2019-07-01 23:50:21 Dropping table index: StrikeEvent
2019-07-01 23:50:21 Checking table index: AirBackfill
2019-07-01 23:50:21 Checking table index: AirObservation
2019-07-01 23:50:21 Checking table index: DailyAir
2019-07-01 23:50:21 Checking table index: DailySensor
2019-07-01 23:50:21 Checking table index: DailySky
2019-07-01 23:50:21 Checking table index: DeviceEvents
2019-07-01 23:50:21 Checking table index: DeviceStatus
2019-07-01 23:50:21 Checking table index: FWUpdate
2019-07-01 23:50:21 Checking table index: HubEvents
2019-07-01 23:50:21 Checking table index: HubStatus
2019-07-01 23:50:21 Checking table index: PrecipEvent
2019-07-01 23:50:21 Checking table index: RapidWind
2019-07-01 23:50:21 Checking table index: SkyBackfill
2019-07-01 23:50:21 Checking table index: SkyObservation
2019-07-01 23:50:21 Checking table index: StrikeEvent
2019-07-01 23:50:21 Checking table index: Xrain
2019-07-01 23:50:21 Checking table index: Z_version
2019-07-01 23:50:21 Error changeTable: DailySky
2019-07-01 23:50:21 ChangeTable: DailySky
2019-07-01 23:50:21 ChangeTable: HubEvents
2019-07-01 23:50:21 ChangeTable: HubStatus
2019-07-01 23:50:21 Process changes complete
2019-07-01 23:50:21 Update Version complete

1 Like

After you ran AlterTables do you still have the error?

The first errors are from before running AlterTables; the point where the error changes is when AlterTables was run. I inserted the {…truncated…} marker one error above where AlterTables was run.