Mirror of https://github.com/Akkudoktor-EOS/EOS.git, synced 2026-02-24 01:46:21 +00:00.
Add database support for measurements and historic prediction data. (#848)

The database supports backend selection, compression, incremental data load, automatic data saving to storage, and automatic vacuum and compaction. Make SQLite3 and LMDB database backends available. Update tests for new interface conventions regarding data sequences, data containers, and data providers; this includes the measurements provider and the prediction providers. Add database documentation. The change also includes several bug fixes that are not directly related to the database implementation but are necessary to keep EOS running properly and to test and document the changes.

* fix: config eos test setup
  Make the config_eos fixture generate a new instance of the config_eos singleton. Use correct env names to set up the data folder path.
* fix: startup with no config
  Make cache and measurements complain about a missing data path configuration but do not bail out.
* fix: soc data preparation and usage for genetic optimization
  Search for SoC measurements 48 hours around the optimization start time. Only clamp SoC to the maximum in the battery device simulation.
* fix: dashboard bailout on zero value solution display
  Do not use zero values to calculate the chart values adjustment for display.
* fix: openapi generation script
  Make the script also replace data_folder_path and data_output_path to hide real (test) environment paths.
* feat: add make repeated task function
  make_repeated_task allows wrapping a function so that it is repeated cyclically.
* chore: removed index based data sequence access
  Index based data sequence access does not make sense as the sequence can be backed by the database. The sequence is now purely time series data.
* chore: refactor eos startup to avoid module import startup
  Avoid module import initialisation, especially of the EOS configuration. Config mutation, singleton initialization, logging setup, argparse parsing, background task definitions depending on config, and environment-dependent behavior are now done at function startup.
* chore: introduce retention manager
  A single long-running background task that owns the scheduling of all periodic server-maintenance jobs (cache cleanup, DB autosave, …).
* chore: canonicalize timezone name for UTC
  Timezone names that are semantically identical to UTC are canonicalized to UTC.
* chore: extend config file migration for default value handling
  Extend the config file migration to handle values that are None or nonexistent and that invoke default value generation in the new config file. Also adapt the tests to handle this situation.
* chore: extend datetime util test cases
* chore: make version test check for untracked files
  Check for files that are not tracked by git. Version calculation will be wrong if these files are not committed.
* chore: bump pandas to 3.0.0
  Pandas 3.0 now infers the appropriate resolution (a.k.a. unit) for the output dtype, which may become datetime64[us] (before it was ns). Numeric dtype detection is also stricter now, which requires a different detection for numerics.
* chore: bump pydantic-settings to 2.12.0
  pydantic-settings 2.12.0 behaves differently under pytest. The tests were adapted and a workaround was introduced. ConfigEOS was also adapted to allow fine-grained initialization control so that certain settings, such as file settings, can be switched off during tests.
* chore: remove scikit-learn from dependencies
  scikit-learn is not strictly necessary as long as we have scipy.
* chore: add documentation mode guarding for sphinx autosummary
  Sphinx autosummary executes functions. Prevent exceptions in pure doc mode.
* chore: adapt docker-build CI workflow to stricter GitHub handling

Signed-off-by: Bobby Noelte <b0661n0e17e@gmail.com>
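The `make_repeated_task` helper described in the commit message wraps a function so that it runs cyclically. The sketch below shows one way such a wrapper can work; the name `make_repeated_task` comes from the commit, but the signature, return values, and cancellation semantics here are assumptions, not the actual EOS implementation:

```python
import threading
import time


def make_repeated_task(func, interval_sec):
    """Wrap *func* so it runs cyclically every *interval_sec* seconds.

    Returns (start, stop) callables. Hypothetical sketch; the real EOS
    helper's signature and scheduling details may differ.
    """
    stop_event = threading.Event()

    def runner():
        while not stop_event.is_set():
            func()
            # Event.wait doubles as a cancellable sleep between cycles.
            stop_event.wait(interval_sec)

    thread = threading.Thread(target=runner, daemon=True)
    return thread.start, stop_event.set


# Usage example: record a tick roughly every 50 ms, then cancel.
ticks = []
start, stop = make_repeated_task(lambda: ticks.append(time.monotonic()), 0.05)
start()
time.sleep(0.2)
stop()
assert len(ticks) >= 2  # the wrapped function ran cyclically
```

A retention manager like the one introduced in this commit can then own a collection of such repeated tasks (cache cleanup, DB autosave) under one scheduler.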
.env (2 changed lines)
@@ -11,7 +11,7 @@ DOCKER_COMPOSE_DATA_DIR=${HOME}/.local/share/net.akkudoktor.eos
 # -----------------------------------------------------------------------------
 # Image / build
 # -----------------------------------------------------------------------------
-VERSION=0.2.0.dev84352035
+VERSION=0.2.0.dev58204789
 PYTHON_VERSION=3.13.9

 # -----------------------------------------------------------------------------
.github/workflows/docker-build.yml (vendored, 4 changed lines)
@@ -60,6 +60,9 @@ jobs:
- linux/arm64
exclude: ${{ fromJSON(needs.platform-excludes.outputs.excludes) }}
steps:
- name: Checkout repository
uses: actions/checkout@v4

- name: Prepare
run: |
platform=${{ matrix.platform }}
@@ -114,6 +117,7 @@ jobs:
id: build
uses: docker/build-push-action@v6
with:
context: .
platforms: ${{ matrix.platform }}
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
CHANGELOG.md (90 changed lines)
@@ -5,7 +5,7 @@ All notable changes to the akkudoktoreos project will be documented in this file
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

-## 0.3.0 (2025-12-??)
+## 0.3.0 (2026-02-??)

 Adapters for Home Assistant and NodeRed integration are added. These adapters
 provide a simplified interface to these HEMS besides the standard REST interface.
@@ -13,94 +13,126 @@ Akkudoktor-EOS can now be run as Home Assistant add-on and standalone.
As Home Assistant add-on EOS uses ingress to fully integrate the EOSdash dashboard
in Home Assistant.

The prediction and measurement data can now be backed by a database. The database allows
historic prediction data and measurement data to be kept for a long time without keeping
it in memory. The database supports backend selection, compression, incremental data load,
automatic data saving to storage, and automatic vacuum and compaction. Two database backends
are integrated and can be configured: LMDB and SQLite3.

In addition, bugs were fixed and new features were added.

### Feat

- add database support for measurements and historic prediction data.
  The prediction and measurement data can now be backed by a database. The database allows
  historic prediction data and measurement data to be kept for a long time without keeping
  it in memory. Two database backends are integrated and can be configured: LMDB and SQLite3.
- add adapters for integrations

  Adapters for Home Assistant and NodeRED integration are added.
  Akkudoktor-EOS can now be run as Home Assistant add-on and standalone.

  As Home Assistant add-on EOS uses ingress to fully integrate the EOSdash dashboard
  in Home Assistant.

- add make repeated task function
  make_repeated_task allows wrapping a function so that it is repeated cyclically.
- allow eos to be started with root permissions and drop privileges

  Home Assistant starts all add-ons with root permissions. EOS now drops
  root permissions if an applicable user is defined by the parameter --run_as_user.
  The docker image defines the user eos to be used.

- make eos supervise and monitor EOSdash

  EOS now not only starts EOSdash but also monitors EOSdash during runtime
  and restarts EOSdash on fault. EOSdash logging is captured by EOS
  and forwarded to the EOS log to provide better visibility.

- add duration to string conversion

  Make to_duration also return the duration as a string on request.

### Fixed

- config eos test setup
  Make the config_eos fixture generate a new instance of the config_eos singleton.
  Use correct env names to set up the data folder path.
- startup with no config
  Make cache and measurements complain about a missing data path configuration but
  do not bail out.
- soc data preparation and usage for genetic optimization
  Search for SoC measurements 48 hours around the optimization start time.
  Only clamp SoC to the maximum in the battery device simulation.
- dashboard bailout on zero value solution display
  Do not use zero values to calculate the chart values adjustment for display.
- openapi generation script
  Make the script also replace data_folder_path and data_output_path to hide
  real (test) environment paths.
- development version scheme

  The development versioning scheme is adapted to fit docker and
  Home Assistant expectations. The new scheme is x.y.z and x.y.z.dev<hash>.
  The hash is digits only, as expected by Home Assistant. The development version
  is appended by .dev, as expected by docker.

- use mean value in interval on resampling for array

  When downsampling data, use the mean value of all values within the new
  sampling interval.

- default battery ev soc and appliance wh

  Make the genetic simulation return default values for the
  battery SoC, electric vehicle SoC, and appliance load if these
  assets are not used.

- import json string

  Strip outer quotes from JSON strings on import to be compliant with the
  json.loads() expectation.

- default interval definition for import data

  The default interval must be defined in lowercase human definition to
  be accepted by pendulum.

- clearoutside schema change

### Chore

- removed index based data sequence access
  Index based data sequence access does not make sense as the sequence can be backed
  by the database. The sequence is now purely time series data.
- refactor eos startup to avoid module import startup
  Avoid module import initialisation, especially of the EOS configuration.
  Config mutation, singleton initialization, logging setup, argparse parsing,
  background task definitions depending on config, and environment-dependent behavior
  are now done at function startup.
- introduce retention manager
  A single long-running background task that owns the scheduling of all periodic
  server-maintenance jobs (cache cleanup, DB autosave, …)
- canonicalize timezone name for UTC
  Timezone names that are semantically identical to UTC are canonicalized to UTC.
- extend config file migration for default value handling
- extend datetime util test cases
- make version test check for untracked files
  Check for files that are not tracked by git. Version calculation will be
  wrong if these files are not committed.
- bump pandas to 3.0.0
  Pandas 3.0 now infers the appropriate resolution (a.k.a. unit)
  for the output dtype, which may become datetime64[us] (before it was ns).
  Numeric dtype detection is also stricter now, which requires a different
  detection for numerics.
- bump pydantic-settings to 2.12.0
  pydantic-settings 2.12.0 behaves differently under pytest. The tests
  were adapted and a workaround was introduced. ConfigEOS was also adapted
  to allow fine-grained initialization control so that certain settings,
  such as file settings, can be switched off during tests.
- remove scikit-learn from dependencies
  scikit-learn is not strictly necessary as long as we have scipy.
- add documentation mode guarding for sphinx autosummary
  Sphinx autosummary executes functions. Prevent exceptions in pure doc
  mode.
- adapt docker-build CI workflow to stricter GitHub handling
- use info logging to report missing optimization parameters

  In parameter preparation for automatic optimization, an error was logged for
  missing parameters. Logging is now done at the info level.

- make EOSdash use the EOS data directory for file import/export

  EOSdash uses the EOS data directory for file import/export by default.
  This allows the configuration import/export function to also be used
  within docker images.

- improve EOSdash config tab display

  Improve display of JSON code and add more forms for config value updates.

- make docker image file system layout similar to home assistant

  Only use the /data directory for persistent data. This is handled as a
  docker volume. The /data volume is mapped to ~/.local/share/net.akkudoktor.eos
  if using docker compose.

- add home assistant add-on development environment

  Add VSCode devcontainer and task definitions for home assistant add-on
  development.

- improve documentation

## 0.2.0 (2025-11-09)
@@ -6,7 +6,7 @@
 # the root directory (no add-on folder as usual).

 name: "Akkudoktor-EOS"
-version: "0.2.0.dev84352035"
+version: "0.2.0.dev58204789"
 slug: "eos"
 description: "Akkudoktor-EOS add-on"
 url: "https://github.com/Akkudoktor-EOS/EOS"
@@ -4,6 +4,7 @@
 ../_generated/configadapter.md
 ../_generated/configcache.md
+../_generated/configdatabase.md
 ../_generated/configdevices.md
 ../_generated/configelecprice.md
 ../_generated/configems.md
@@ -10,7 +10,7 @@
 | homeassistant | `EOS_ADAPTER__HOMEASSISTANT` | `HomeAssistantAdapterCommonSettings` | `rw` | `required` | Home Assistant adapter settings. |
 | nodered | `EOS_ADAPTER__NODERED` | `NodeREDAdapterCommonSettings` | `rw` | `required` | NodeRED adapter settings. |
 | provider | `EOS_ADAPTER__PROVIDER` | `Optional[list[str]]` | `rw` | `None` | List of adapter provider id(s) of provider(s) to be used. |
-| providers | | `list[str]` | `ro` | `N/A` | Available electricity price provider ids. |
+| providers | | `list[str]` | `ro` | `N/A` | Available adapter provider ids. |
 :::
 <!-- pyml enable line-length -->
@@ -33,10 +33,7 @@
     "pv_production_emr_entity_ids": null,
     "device_measurement_entity_ids": null,
     "device_instruction_entity_ids": null,
-    "solution_entity_ids": null,
-    "homeassistant_entity_ids": [],
-    "eos_solution_entity_ids": [],
-    "eos_device_instruction_entity_ids": []
+    "solution_entity_ids": null
   },
   "nodered": {
     "host": "127.0.0.1",
@@ -7,7 +7,7 @@

 | Name | Environment Variable | Type | Read-Only | Default | Description |
 | ---- | -------------------- | ---- | --------- | ------- | ----------- |
-| cleanup_interval | `EOS_CACHE__CLEANUP_INTERVAL` | `float` | `rw` | `300` | Intervall in seconds for EOS file cache cleanup. |
+| cleanup_interval | `EOS_CACHE__CLEANUP_INTERVAL` | `float` | `rw` | `300.0` | Intervall in seconds for EOS file cache cleanup. |
 | subpath | `EOS_CACHE__SUBPATH` | `Optional[pathlib.Path]` | `rw` | `cache` | Sub-path for the EOS cache data directory. |
 :::
 <!-- pyml enable line-length -->
docs/_generated/configdatabase.md (new file, 72 lines)
@@ -0,0 +1,72 @@
## Configuration model for database settings

Attributes:
    provider: Optional provider identifier (e.g. "LMDB").
    max_records_in_memory: Maximum records kept in memory before auto-save.
    auto_save: Whether to auto-save when threshold exceeded.
    batch_size: Batch size for batch operations.

<!-- pyml disable line-length -->
:::{table} database
:widths: 10 20 10 5 5 30
:align: left

| Name | Environment Variable | Type | Read-Only | Default | Description |
| ---- | -------------------- | ---- | --------- | ------- | ----------- |
| autosave_interval_sec | `EOS_DATABASE__AUTOSAVE_INTERVAL_SEC` | `Optional[int]` | `rw` | `10` | Automatic saving interval [seconds]. Set to None to disable automatic saving. |
| batch_size | `EOS_DATABASE__BATCH_SIZE` | `int` | `rw` | `100` | Number of records to process in batch operations. |
| compaction_interval_sec | `EOS_DATABASE__COMPACTION_INTERVAL_SEC` | `Optional[int]` | `rw` | `604800` | Interval between automatic tiered compaction runs [seconds]. Compaction downsamples old records to reduce storage while retaining coverage. Set to None to disable automatic compaction. |
| compression_level | `EOS_DATABASE__COMPRESSION_LEVEL` | `int` | `rw` | `9` | Compression level for database record data. |
| initial_load_window_h | `EOS_DATABASE__INITIAL_LOAD_WINDOW_H` | `Optional[int]` | `rw` | `None` | Specifies the default duration of the initial load window when loading records from the database, in hours. If set to None, the full available range is loaded. The window is centered around the current time by default, unless a different center time is specified. Different database namespaces may define their own default windows. |
| keep_duration_h | `EOS_DATABASE__KEEP_DURATION_H` | `Optional[int]` | `rw` | `None` | Default maximum duration records shall be kept in the database [hours, None]. None indicates forever. Database namespaces may have diverging definitions. |
| provider | `EOS_DATABASE__PROVIDER` | `Optional[str]` | `rw` | `None` | Database provider id of provider to be used. |
| providers | | `List[str]` | `ro` | `N/A` | Return available database provider ids. |
:::
<!-- pyml enable line-length -->

<!-- pyml disable no-emphasis-as-heading -->
**Example Input**
<!-- pyml enable no-emphasis-as-heading -->

<!-- pyml disable line-length -->
```json
{
  "database": {
    "provider": "LMDB",
    "compression_level": 0,
    "initial_load_window_h": 48,
    "keep_duration_h": 48,
    "autosave_interval_sec": 5,
    "compaction_interval_sec": 604800,
    "batch_size": 100
  }
}
```
<!-- pyml enable line-length -->

<!-- pyml disable no-emphasis-as-heading -->
**Example Output**
<!-- pyml enable no-emphasis-as-heading -->

<!-- pyml disable line-length -->
```json
{
  "database": {
    "provider": "LMDB",
    "compression_level": 0,
    "initial_load_window_h": 48,
    "keep_duration_h": 48,
    "autosave_interval_sec": 5,
    "compaction_interval_sec": 604800,
    "batch_size": 100,
    "providers": [
      "LMDB",
      "SQLite"
    ]
  }
}
```
<!-- pyml enable line-length -->
@@ -50,19 +50,7 @@
         1.0
       ],
       "min_soc_percentage": 0,
-      "max_soc_percentage": 100,
-      "measurement_key_soc_factor": "battery1-soc-factor",
-      "measurement_key_power_l1_w": "battery1-power-l1-w",
-      "measurement_key_power_l2_w": "battery1-power-l2-w",
-      "measurement_key_power_l3_w": "battery1-power-l3-w",
-      "measurement_key_power_3_phase_sym_w": "battery1-power-3-phase-sym-w",
-      "measurement_keys": [
-        "battery1-soc-factor",
-        "battery1-power-l1-w",
-        "battery1-power-l2-w",
-        "battery1-power-l3-w",
-        "battery1-power-3-phase-sym-w"
-      ]
+      "max_soc_percentage": 100
     }
   ],
   "max_batteries": 1,
@@ -89,19 +77,7 @@
         1.0
       ],
       "min_soc_percentage": 0,
-      "max_soc_percentage": 100,
-      "measurement_key_soc_factor": "battery1-soc-factor",
-      "measurement_key_power_l1_w": "battery1-power-l1-w",
-      "measurement_key_power_l2_w": "battery1-power-l2-w",
-      "measurement_key_power_l3_w": "battery1-power-l3-w",
-      "measurement_key_power_3_phase_sym_w": "battery1-power-3-phase-sym-w",
-      "measurement_keys": [
-        "battery1-soc-factor",
-        "battery1-power-l1-w",
-        "battery1-power-l2-w",
-        "battery1-power-l3-w",
-        "battery1-power-3-phase-sym-w"
-      ]
+      "max_soc_percentage": 100
     }
   ],
   "max_electric_vehicles": 1,
@@ -7,7 +7,7 @@

 | Name | Environment Variable | Type | Read-Only | Default | Description |
 | ---- | -------------------- | ---- | --------- | ------- | ----------- |
-| interval | `EOS_EMS__INTERVAL` | `Optional[float]` | `rw` | `None` | Intervall in seconds between EOS energy management runs. |
+| interval | `EOS_EMS__INTERVAL` | `float` | `rw` | `300.0` | Intervall between EOS energy management runs [seconds]. |
 | mode | `EOS_EMS__MODE` | `Optional[akkudoktoreos.core.emsettings.EnergyManagementMode]` | `rw` | `None` | Energy management mode [OPTIMIZATION | PREDICTION]. |
 | startup_delay | `EOS_EMS__STARTUP_DELAY` | `float` | `rw` | `5` | Startup delay in seconds for EOS energy management runs. |
 :::
@@ -15,10 +15,7 @@
     "pv_production_emr_entity_ids": null,
     "device_measurement_entity_ids": null,
     "device_instruction_entity_ids": null,
-    "solution_entity_ids": null,
-    "homeassistant_entity_ids": [],
-    "eos_solution_entity_ids": [],
-    "eos_device_instruction_entity_ids": []
+    "solution_entity_ids": null
   },
   "nodered": {
     "host": "127.0.0.1",
@@ -29,6 +26,15 @@
     "subpath": "cache",
     "cleanup_interval": 300.0
   },
+  "database": {
+    "provider": "LMDB",
+    "compression_level": 0,
+    "initial_load_window_h": 48,
+    "keep_duration_h": 48,
+    "autosave_interval_sec": 5,
+    "compaction_interval_sec": 604800,
+    "batch_size": 100
+  },
   "devices": {
     "batteries": [
       {
@@ -53,19 +59,7 @@
         1.0
       ],
       "min_soc_percentage": 0,
-      "max_soc_percentage": 100,
-      "measurement_key_soc_factor": "battery1-soc-factor",
-      "measurement_key_power_l1_w": "battery1-power-l1-w",
-      "measurement_key_power_l2_w": "battery1-power-l2-w",
-      "measurement_key_power_l3_w": "battery1-power-l3-w",
-      "measurement_key_power_3_phase_sym_w": "battery1-power-3-phase-sym-w",
-      "measurement_keys": [
-        "battery1-soc-factor",
-        "battery1-power-l1-w",
-        "battery1-power-l2-w",
-        "battery1-power-l3-w",
-        "battery1-power-3-phase-sym-w"
-      ]
+      "max_soc_percentage": 100
     }
   ],
   "max_batteries": 1,
@@ -92,19 +86,7 @@
         1.0
       ],
       "min_soc_percentage": 0,
-      "max_soc_percentage": 100,
-      "measurement_key_soc_factor": "battery1-soc-factor",
-      "measurement_key_power_l1_w": "battery1-power-l1-w",
-      "measurement_key_power_l2_w": "battery1-power-l2-w",
-      "measurement_key_power_l3_w": "battery1-power-l3-w",
-      "measurement_key_power_3_phase_sym_w": "battery1-power-3-phase-sym-w",
-      "measurement_keys": [
-        "battery1-soc-factor",
-        "battery1-power-l1-w",
-        "battery1-power-l2-w",
-        "battery1-power-l3-w",
-        "battery1-power-3-phase-sym-w"
-      ]
+      "max_soc_percentage": 100
     }
   ],
   "max_electric_vehicles": 1,
@@ -138,8 +120,8 @@
     }
   },
   "general": {
-    "version": "0.2.0.dev84352035",
-    "data_folder_path": null,
+    "version": "0.2.0.dev58204789",
+    "data_folder_path": "/home/user/.local/share/net.akkudoktoreos.net",
     "data_output_subpath": "output",
     "latitude": 52.52,
     "longitude": 13.405
@@ -157,6 +139,7 @@
     "file_level": "TRACE"
   },
   "measurement": {
+    "historic_hours": 17520,
     "load_emr_keys": [
       "load0_emr"
     ],
@@ -9,14 +9,14 @@
 | ---- | -------------------- | ---- | --------- | ------- | ----------- |
 | config_file_path | | `Optional[pathlib.Path]` | `ro` | `N/A` | Path to EOS configuration file. |
 | config_folder_path | | `Optional[pathlib.Path]` | `ro` | `N/A` | Path to EOS configuration directory. |
-| data_folder_path | `EOS_GENERAL__DATA_FOLDER_PATH` | `Optional[pathlib.Path]` | `rw` | `None` | Path to EOS data directory. |
+| data_folder_path | `EOS_GENERAL__DATA_FOLDER_PATH` | `Path` | `rw` | `required` | Path to EOS data folder. |
 | data_output_path | | `Optional[pathlib.Path]` | `ro` | `N/A` | Computed data_output_path based on data_folder_path. |
-| data_output_subpath | `EOS_GENERAL__DATA_OUTPUT_SUBPATH` | `Optional[pathlib.Path]` | `rw` | `output` | Sub-path for the EOS output data directory. |
-| home_assistant_addon | | `bool` | `ro` | `N/A` | EOS is running as home assistant add-on. |
+| data_output_subpath | `EOS_GENERAL__DATA_OUTPUT_SUBPATH` | `Optional[pathlib.Path]` | `rw` | `output` | Sub-path for the EOS output data folder. |
+| home_assistant_addon | `EOS_GENERAL__HOME_ASSISTANT_ADDON` | `bool` | `rw` | `required` | EOS is running as home assistant add-on. |
 | latitude | `EOS_GENERAL__LATITUDE` | `Optional[float]` | `rw` | `52.52` | Latitude in decimal degrees between -90 and 90. North is positive (ISO 19115) (°) |
 | longitude | `EOS_GENERAL__LONGITUDE` | `Optional[float]` | `rw` | `13.405` | Longitude in decimal degrees within -180 to 180 (°) |
 | timezone | | `Optional[str]` | `ro` | `N/A` | Computed timezone based on latitude and longitude. |
-| version | `EOS_GENERAL__VERSION` | `str` | `rw` | `0.2.0.dev84352035` | Configuration file version. Used to check compatibility. |
+| version | `EOS_GENERAL__VERSION` | `str` | `rw` | `0.2.0.dev58204789` | Configuration file version. Used to check compatibility. |
 :::
 <!-- pyml enable line-length -->
@@ -28,8 +28,8 @@
 ```json
 {
   "general": {
-    "version": "0.2.0.dev84352035",
-    "data_folder_path": null,
+    "version": "0.2.0.dev58204789",
+    "data_folder_path": "/home/user/.local/share/net.akkudoktoreos.net",
     "data_output_subpath": "output",
     "latitude": 52.52,
     "longitude": 13.405
@@ -46,16 +46,15 @@
 ```json
 {
   "general": {
-    "version": "0.2.0.dev84352035",
-    "data_folder_path": null,
+    "version": "0.2.0.dev58204789",
+    "data_folder_path": "/home/user/.local/share/net.akkudoktoreos.net",
     "data_output_subpath": "output",
     "latitude": 52.52,
     "longitude": 13.405,
     "timezone": "Europe/Berlin",
-    "data_output_path": null,
+    "data_output_path": "/home/user/.local/share/net.akkudoktoreos.net/output",
     "config_folder_path": "/home/user/.config/net.akkudoktoreos.net",
-    "config_file_path": "/home/user/.config/net.akkudoktoreos.net/EOS.config.json",
-    "home_assistant_addon": false
+    "config_file_path": "/home/user/.config/net.akkudoktoreos.net/EOS.config.json"
   }
 }
 ```
@@ -9,6 +9,7 @@
 | ---- | -------------------- | ---- | --------- | ------- | ----------- |
 | grid_export_emr_keys | `EOS_MEASUREMENT__GRID_EXPORT_EMR_KEYS` | `Optional[list[str]]` | `rw` | `None` | The keys of the measurements that are energy meter readings of energy export to grid [kWh]. |
 | grid_import_emr_keys | `EOS_MEASUREMENT__GRID_IMPORT_EMR_KEYS` | `Optional[list[str]]` | `rw` | `None` | The keys of the measurements that are energy meter readings of energy import from grid [kWh]. |
+| historic_hours | `EOS_MEASUREMENT__HISTORIC_HOURS` | `Optional[int]` | `rw` | `17520` | Number of hours into the past for measurement data |
 | keys | | `list[str]` | `ro` | `N/A` | The keys of the measurements that can be stored. |
 | load_emr_keys | `EOS_MEASUREMENT__LOAD_EMR_KEYS` | `Optional[list[str]]` | `rw` | `None` | The keys of the measurements that are energy meter readings of a load [kWh]. |
 | pv_production_emr_keys | `EOS_MEASUREMENT__PV_PRODUCTION_EMR_KEYS` | `Optional[list[str]]` | `rw` | `None` | The keys of the measurements that are PV production energy meter readings [kWh]. |
@@ -23,6 +24,7 @@
 ```json
 {
   "measurement": {
+    "historic_hours": 17520,
     "load_emr_keys": [
       "load0_emr"
     ],
@@ -48,6 +50,7 @@
 ```json
 {
   "measurement": {
+    "historic_hours": 17520,
     "load_emr_keys": [
       "load0_emr"
     ],
@@ -1,6 +1,6 @@
 # Akkudoktor-EOS

-**Version**: `v0.2.0.dev84352035`
+**Version**: `v0.2.0.dev58204789`

 <!-- pyml disable line-length -->
 **Description**: This project provides a comprehensive solution for simulating and optimizing an energy system based on renewable energy sources. With a focus on photovoltaic (PV) systems, battery storage (batteries), load management (consumer requirements), heat pumps, electric vehicles, and consideration of electricity price data, this system enables forecasting and optimization of energy flow and costs over a specified period.
@@ -338,6 +338,56 @@ Returns:

---

## GET /v1/admin/database/stats

<!-- pyml disable line-length -->
**Links**: [local](http://localhost:8503/docs#/default/fastapi_admin_database_stats_get_v1_admin_database_stats_get), [eos](https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/Akkudoktor-EOS/EOS/refs/heads/main/openapi.json#/default/fastapi_admin_database_stats_get_v1_admin_database_stats_get)
<!-- pyml enable line-length -->

Fastapi Admin Database Stats Get

<!-- pyml disable line-length -->
```python
"""
Get statistics from database.

Returns:
    data (dict): The database statistics
"""
```
<!-- pyml enable line-length -->

**Responses**:

- **200**: Successful Response

---

## POST /v1/admin/database/vacuum

<!-- pyml disable line-length -->
**Links**: [local](http://localhost:8503/docs#/default/fastapi_admin_database_vacuum_post_v1_admin_database_vacuum_post), [eos](https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/Akkudoktor-EOS/EOS/refs/heads/main/openapi.json#/default/fastapi_admin_database_vacuum_post_v1_admin_database_vacuum_post)
<!-- pyml enable line-length -->

Fastapi Admin Database Vacuum Post

<!-- pyml disable line-length -->
```python
"""
Remove old records from database.

Returns:
    data (dict): The database stats after removal of old records.
"""
```
<!-- pyml enable line-length -->

**Responses**:

- **200**: Successful Response

---

## POST /v1/admin/server/restart

<!-- pyml disable line-length -->
docs/akkudoktoreos/database.md (new file, 599 lines)
@@ -0,0 +1,599 @@
% SPDX-License-Identifier: Apache-2.0
(database-page)=

# Database

## Overview

The EOS database system provides a flexible, pluggable persistence layer for time-series data
records with automatic lazy loading, dirty tracking, and multi-backend support. The architecture
separates the abstract database interface from concrete storage implementations, allowing seamless
switching between LMDB and SQLite backends.

## Architecture

### Three-Layer Design

**Abstract Interface Layer** (`DatabaseABC`)

- Defines the contract for all database operations
- Provides compression/decompression utilities
- Backend-agnostic API

**Backend Implementation Layer** (`DatabaseBackendABC`)

- Concrete implementations: `LMDBDatabase`, `SQLiteDatabase`
- Singleton pattern ensures single instance per backend
- Thread-safe operations via internal locking

**Record Protocol Layer** (`DatabaseRecordProtocolMixin`)

- Manages in-memory record lifecycle
- Implements lazy loading strategies
- Handles dirty tracking and autosave
## Configuration

### Database Settings (`DatabaseCommonSettings`)

```python
provider: Optional[str] = None                   # "LMDB" or "SQLite"
compression_level: int = 9                       # 0-9, gzip compression
initial_load_window_h: Optional[int] = None      # Hours, None = full load
keep_duration_h: Optional[int] = None            # Retention period
autosave_interval_sec: Optional[int] = None      # Auto-flush interval
compaction_interval_sec: Optional[int] = 604800  # Compaction interval
batch_size: int = 100                            # Batch operation size
```

### User Configuration Guide

This section explains what each setting does in practical terms and gives
concrete recommendations for common deployment scenarios.

#### `provider` — choosing a backend

Set `provider` to `"LMDB"` or `"SQLite"`. Leave it `None` only during
development or unit testing — with `None`, nothing is persisted to disk and
all data is lost on restart.

**Use LMDB** for a long-running home server that records data continuously. It
is significantly faster for high-frequency writes and range reads because it
uses memory-mapped files. The trade-off is that it pre-allocates a large file
on disk (default 10 GB) even when mostly empty.

**Use SQLite** when disk space is constrained, for portable single-file
deployments, or when you want to inspect or manipulate the database with
standard SQL tools. SQLite is slightly slower for bulk writes but perfectly
adequate for home energy data volumes.

**Do not** switch backends while data exists in the old backend — records are
not migrated automatically. If you need to switch, vacuum the old database
first, export your data, then reconfigure.

#### `compression_level` — storage size vs. CPU

Values range from `0` (no compression) to `9` (maximum compression). The default of `9` is
appropriate for most deployments: home energy time-series data compresses very well (often
60–80 % reduction) and the CPU overhead is negligible on modern hardware.

**Set to `0`** only if you are running on very constrained hardware (e.g. a single-core ARM
board at full load) and storage space is not a concern.

**Do not** change this setting after data has been written — the database stores each record
with the compression level active at write time and auto-detects the format on read, so mixed
levels are technically fine, but you will not reclaim space from already-written records until
they are rewritten by compaction.

#### `initial_load_window_h` — startup memory usage

Controls how much history is loaded into memory when the application first accesses a namespace.

**Set a window** (e.g. `48`) on systems with limited RAM or large databases. Only the most
recent 48 hours are loaded immediately; older data is fetched on demand if a query reaches
outside that window.

**Leave as `None`** (the default) on well-resourced systems or when you need guaranteed
access to all history from the first query. Full load is simpler and avoids the small latency
spike of incremental loads.

**Do not** set this to a very small value (e.g. `1`) if your forecasting or reporting queries
routinely look back further — every out-of-window query triggers a database read, and many
small reads are slower than one full load.

#### `keep_duration_h` — data retention

Sets the age limit (in hours) for the vacuum operation. Records older than the cutoff
`max_timestamp - keep_duration_h` are permanently deleted when vacuum runs.

**Set this** to match your actual analysis needs. If your forecast models only look back 7 days,
keeping 14 days (`336`) gives a comfortable safety margin without accumulating data indefinitely.

**Leave as `None`** only if you have a strong archival requirement and understand that the
database will grow without bound. Even with compaction reducing resolution, old data is not
deleted unless vacuum runs with a retention limit.

**Do not** set `keep_duration_h` shorter than the oldest data your forecast or reporting
queries ever request — vacuum is permanent and irreversible.

#### `autosave_interval_sec` — write durability

Controls how often dirty (modified) records are flushed to disk automatically, in seconds.

**Set to a low value** (e.g. `10`–`30`) on a system that could lose power unexpectedly,
such as a Raspberry Pi without a UPS. A power cut between autosaves loses that window of data.

**Set to a higher value** (e.g. `300`) on stable systems to reduce write amplification. Each
autosave is a full flush of all dirty records, so frequent saves on large dirty sets are
more expensive.

**Leave as `None`** only if you call `db_save_records()` manually at appropriate points in
your application code. With `None`, data written since the last manual save is lost on crash.

#### `compaction_interval_sec` — automatic tiered downsampling

Controls how often the compaction maintenance job runs, in seconds. The default is
604 800 (one week). Set to `None` to disable automatic compaction entirely.

Compaction applies a tiered downsampling policy to old records:

- Records older than **2 hours** are downsampled to **15-minute** resolution
- Records older than **14 days** are downsampled to **1-hour** resolution

This reduces storage and speeds up range queries on historical data while preserving full
resolution for recent data where it matters most. Each tier is processed incrementally —
only the window since the last compaction run is examined, so weekly runs are fast regardless
of total history length.

**Leave at the default weekly interval** for most deployments. Compaction is idempotent and
cheap when run frequently on small new windows.

**Set to a shorter interval** (e.g. `86400`, daily) if your device records at very high
frequency (sub-minute) and disk space is a concern.

**Set to `None`** only if you have a custom retention policy and manage downsampling manually,
or if you store data that must not be averaged (e.g. raw event logs where mean resampling
would be meaningless).

**Do not** set the interval shorter than `autosave_interval_sec` — compaction reads from the
backend, and a record that has not been saved yet will not be visible to it.

**Interaction with vacuum:** compaction and vacuum are complementary. Compaction reduces the
resolution of old data; vacuum deletes it entirely past `keep_duration_h`. The recommended
pipeline is: compaction runs first (weekly), then vacuum runs immediately after. This means
vacuum always operates on already-downsampled data, which is faster and produces cleaner
storage boundaries.

### Recommended Configurations by Scenario

#### Home server, typical (Raspberry Pi 4, SSD)

```python
provider = "LMDB"
compression_level = 9
initial_load_window_h = 48
keep_duration_h = 720            # 30 days
autosave_interval_sec = 30
compaction_interval_sec = 604800 # weekly
```

#### Home server, low storage (Raspberry Pi Zero, SD card)

```python
provider = "SQLite"
compression_level = 9
initial_load_window_h = 24
keep_duration_h = 168            # 7 days
autosave_interval_sec = 60
compaction_interval_sec = 86400  # daily — reclaim space faster
```

#### Development / testing

```python
provider = "SQLite"              # or None for fully in-memory
compression_level = 0            # faster without compression overhead
initial_load_window_h = None     # always load everything
keep_duration_h = None           # never vacuum automatically
autosave_interval_sec = None     # manual saves only
compaction_interval_sec = None   # disable compaction
```

#### High-frequency recording (sub-minute intervals)

```python
provider = "LMDB"
compression_level = 9
initial_load_window_h = 24
keep_duration_h = 336            # 14 days
autosave_interval_sec = 10
compaction_interval_sec = 86400  # daily — essential at high frequency
```

## Storage Backends

### LMDB Backend

**Characteristics:**

- Memory-mapped file database
- Native namespace support via DBIs (Database Instances)
- High-performance reads with MVCC
- Configurable map size (default: 10 GB)

**Configuration:**

```python
map_size: int = 10 * 1024 * 1024 * 1024  # 10 GB
writemap=True, map_async=True            # Performance optimizations
max_dbs=128                              # Maximum namespaces
```

**File Structure:**

```text
data_folder_path/
└── db/
    └── lmdbdatabase/
        ├── data.mdb
        └── lock.mdb
```

### SQLite Backend

**Characteristics:**

- Single-file relational database
- Namespace emulation via `namespace` column
- ACID transactions with autocommit mode
- Cross-platform compatibility

**Schema:**

```sql
CREATE TABLE records (
    namespace TEXT NOT NULL DEFAULT '',
    key BLOB NOT NULL,
    value BLOB NOT NULL,
    PRIMARY KEY (namespace, key)
);

CREATE TABLE metadata (
    namespace TEXT PRIMARY KEY,
    value BLOB
);
```

**File Structure:**

```text
data_folder_path/
└── db/
    └── sqlitedatabase/
        └── data.db
```

## Timestamp System

### DatabaseTimestamp

All records are indexed by UTC timestamps in sortable ISO 8601 format:

```python
DatabaseTimestamp.from_datetime(dt: DateTime) -> "20241027T123456[Z]"
```

**Properties:**

- Always stored in UTC (timezone-aware required)
- Lexicographically sortable
- Bijective conversion to/from `pendulum.DateTime`
- Second-level precision

### Unbounded Sentinels

```python
UNBOUND_START  # Smaller than any timestamp
UNBOUND_END    # Greater than any timestamp
```

Used for open-ended range queries without special-casing `None`.

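The sortable-key property can be sketched with the standard library alone. `to_sortable_utc` below is a hypothetical helper, not the EOS `DatabaseTimestamp` API; it assumes the compact basic format with a trailing `Z`:

```python
from datetime import datetime, timezone

def to_sortable_utc(dt: datetime) -> str:
    """Render a timezone-aware datetime as a compact, lexicographically
    sortable UTC key (illustrative helper, not the EOS API)."""
    if dt.tzinfo is None:
        raise ValueError("timezone-aware datetime required")
    return dt.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

a = to_sortable_utc(datetime(2024, 10, 27, 12, 34, 56, tzinfo=timezone.utc))
b = to_sortable_utc(datetime(2024, 10, 27, 14, 0, 0, tzinfo=timezone.utc))
assert a == "20241027T123456Z"
assert a < b  # lexicographic order matches chronological order
```

Because string order equals time order, the backend can serve range queries with plain key cursors.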
## Lazy Loading Strategy

### Three-Phase Loading

The system uses a progressive loading model to minimize memory footprint:

#### **Phase 0: NONE**

- No records loaded
- First query triggers either:
  - Initial window load (if `initial_load_window_h` configured)
  - Full database load (if `initial_load_window_h = None`)
  - Targeted range load (if explicit range requested)

#### **Phase 1: INITIAL**

- Partial time window loaded
- `_db_loaded_range` tracks coverage: `[start_timestamp, end_timestamp)`
- Out-of-window queries trigger incremental expansion:
  - Left expansion: load records before current window
  - Right expansion: load records after current window
- Unbounded queries escalate to FULL

#### **Phase 2: FULL**

- All database records in memory
- No further database access needed
- `_db_loaded_range` spans entire dataset

### Boundary Extension

When loading a range `[start, end)`, the system automatically extends boundaries to include:

- **First record before** `start` (for interpolation/context)
- **First record at or after** `end` (for closing boundary)

This prevents additional database lookups during nearest-neighbor searches.

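The phase transitions above can be condensed into a small state function. This is a sketch of the escalation rules as described, with hypothetical names — the real protocol mixin tracks loaded ranges rather than a bare enum:

```python
from enum import Enum

class LoadPhase(Enum):
    NONE = 0      # nothing loaded yet
    INITIAL = 1   # partial window loaded
    FULL = 2      # entire dataset in memory

def next_phase(phase: LoadPhase, query_bounded: bool, window_configured: bool) -> LoadPhase:
    """Illustrative escalation rules (not the EOS API)."""
    if phase is LoadPhase.FULL:
        return LoadPhase.FULL            # FULL is terminal
    if not query_bounded:
        return LoadPhase.FULL            # unbounded queries escalate to FULL
    if phase is LoadPhase.NONE:
        # first access: window load if configured, otherwise full load
        return LoadPhase.INITIAL if window_configured else LoadPhase.FULL
    return LoadPhase.INITIAL             # bounded query: expand window incrementally

assert next_phase(LoadPhase.NONE, query_bounded=True, window_configured=True) is LoadPhase.INITIAL
assert next_phase(LoadPhase.NONE, query_bounded=True, window_configured=False) is LoadPhase.FULL
assert next_phase(LoadPhase.INITIAL, query_bounded=False, window_configured=True) is LoadPhase.FULL
```
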
## Namespace Support

Namespaces provide logical isolation within a single database instance:

```python
# LMDB: uses native DBIs
db.save_records(records, namespace="measurement")

# SQLite: emulated via the namespace column, e.g.
#   SELECT * FROM records WHERE namespace = 'measurement';
```

**Default Namespace:**

- Can be set during `open(namespace="default")`
- Operations with `namespace=None` use the default
- Each record class typically defines its own namespace via `db_namespace()`

## Record Lifecycle

### Insertion

```python
db_insert_record(record, mark_dirty=True)
```

1. Normalize `record.date_time` to UTC `DatabaseTimestamp`
2. Ensure timestamp range is loaded (lazy load if needed)
3. Check for duplicates (raises `ValueError`)
4. Insert into sorted position in memory
5. Update index: `_db_record_index[timestamp] = record`
6. Mark dirty if `mark_dirty=True`

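Steps 3–6 can be sketched with `bisect` from the standard library. `TimestampIndex` and its attributes are illustrative names, not the EOS API:

```python
import bisect

class TimestampIndex:
    """Sketch of sorted insert with duplicate check and dirty tracking
    (illustrative, not the EOS API)."""

    def __init__(self):
        self.sorted_timestamps = []   # kept in ascending order
        self.record_index = {}        # timestamp -> record
        self.dirty = set()            # timestamps with unsaved changes

    def insert(self, timestamp, record, mark_dirty=True):
        if timestamp in self.record_index:        # step 3: duplicate check
            raise ValueError(f"duplicate record at {timestamp}")
        bisect.insort(self.sorted_timestamps, timestamp)  # step 4: sorted position
        self.record_index[timestamp] = record             # step 5: index update
        if mark_dirty:                                    # step 6: dirty tracking
            self.dirty.add(timestamp)

idx = TimestampIndex()
idx.insert("20241027T120000Z", {"soc": 55})
idx.insert("20241027T110000Z", {"soc": 60})
assert idx.sorted_timestamps == ["20241027T110000Z", "20241027T120000Z"]
assert "20241027T120000Z" in idx.dirty
```
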
### Retrieval

```python
db_get_record(target_timestamp, time_window=None)
```

**Search Strategies:**

| `time_window` | Behavior |
|---|---|
| `None` | Exact match only |
| `UNBOUND_WINDOW` | Nearest record (unlimited search) |
| `Duration` | Nearest within symmetric window |

**Memory-First:** Checks in-memory index before querying database.

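The lookup table above can be sketched as a binary search over the sorted timestamps; `nearest_within_window` is a hypothetical helper, not the EOS API (the unlimited `UNBOUND_WINDOW` case corresponds to passing an arbitrarily large window):

```python
import bisect
from datetime import datetime, timedelta, timezone

def nearest_within_window(sorted_ts, target, window=None):
    """Exact match, or nearest record within a symmetric window;
    window=None means exact match only (illustrative, not the EOS API)."""
    i = bisect.bisect_left(sorted_ts, target)
    if i < len(sorted_ts) and sorted_ts[i] == target:
        return sorted_ts[i]                     # exact match
    if window is None:
        return None                             # exact-only mode
    candidates = []
    if i > 0:
        candidates.append(sorted_ts[i - 1])     # nearest on the left
    if i < len(sorted_ts):
        candidates.append(sorted_ts[i])         # nearest on the right
    best = min(candidates, key=lambda t: abs(t - target), default=None)
    if best is not None and abs(best - target) <= window:
        return best
    return None

t0 = datetime(2024, 10, 27, 12, 0, tzinfo=timezone.utc)
ts = [t0, t0 + timedelta(minutes=20)]
assert nearest_within_window(ts, t0) == t0
assert nearest_within_window(ts, t0 + timedelta(minutes=5)) is None
assert nearest_within_window(ts, t0 + timedelta(minutes=5), timedelta(minutes=10)) == t0
```
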
### Deletion

```python
db_delete_records(start_timestamp, end_timestamp)
```

1. Ensure range is fully loaded
2. Remove from memory: `records`, `_db_sorted_timestamps`, `_db_record_index`
3. Add to `_db_deleted_timestamps` (tombstone)
4. Discard from dirty sets (cancel pending writes)
5. Physical deletion deferred until `db_save_records()`

## Dirty Tracking

The system maintains three dirty sets to optimize writes:

```python
_db_dirty_timestamps: set[DatabaseTimestamp]    # Modified records
_db_new_timestamps: set[DatabaseTimestamp]      # Newly inserted
_db_deleted_timestamps: set[DatabaseTimestamp]  # Pending deletes
```

**Write Strategy:**

1. **Saves first:** Insert/update all dirty records
2. **Deletes last:** Remove tombstoned records
3. **Clear tracking sets:** Reset dirty state

**Autosave:** Triggered periodically if `autosave_interval_sec` configured.

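The saves-first, deletes-last flush order can be sketched as follows; `flush` and `MemBackend` are illustrative stand-ins, not the EOS API:

```python
def flush(backend, records, dirty, new, deleted):
    """Sketch of the write strategy: saves first, deletes last, then clear
    the tracking sets (illustrative, not the EOS API)."""
    for ts in sorted(dirty | new):               # 1. insert/update dirty records
        backend.put(ts, records[ts])
    for ts in sorted(deleted):                   # 2. remove tombstoned records
        backend.delete(ts)
    dirty.clear()                                # 3. reset dirty state
    new.clear()
    deleted.clear()

class MemBackend(dict):
    """Minimal in-memory stand-in for a key-value backend."""
    def put(self, key, value):
        self[key] = value
    def delete(self, key):
        self.pop(key, None)

backend = MemBackend({"t0": 1})
flush(backend, {"t1": 2}, dirty={"t1"}, new=set(), deleted={"t0"})
assert dict(backend) == {"t1": 2}
```
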
## Compression

Optional gzip compression reduces storage footprint:

```python
# Serialize
data = pickle.dumps(record.model_dump())
if compression_level > 0:
    data = gzip.compress(data, compresslevel=compression_level)

# Deserialize (auto-detect)
if data[:2] == b'\x1f\x8b':  # gzip magic bytes
    data = gzip.decompress(data)
record_data = pickle.loads(data)
```

**Compression is transparent:** Application code never handles compressed data directly.

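The round trip can be exercised end-to-end with the standard library; the plain dict below stands in for `record.model_dump()`:

```python
import gzip
import pickle

record = {"date_time": "2024-10-27T12:34:56Z", "soc": 55.0}

# Write path: serialize, then compress.
raw = pickle.dumps(record)
stored = gzip.compress(raw, compresslevel=9)

# Read path: auto-detect gzip via the magic bytes 1f 8b.
data = stored
if data[:2] == b"\x1f\x8b":
    data = gzip.decompress(data)
assert pickle.loads(data) == record
```

The same read path also handles uncompressed records (written with `compression_level = 0`), which is why mixed compression levels coexist safely in one database.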
## Metadata

Each namespace can store arbitrary metadata (version, creation time, provider):

```python
_db_metadata = {
    "version": 1,
    "created": "2024-01-01T00:00:00Z",
    "provider_id": "LMDB",
    "compression": True,
    "backend": "LMDBDatabase"
}
```

Stored separately from records using the reserved key `__metadata__`.

## Compaction

Compaction reduces storage by downsampling old records to a lower time resolution. Unlike
vacuum — which deletes records outright — compaction preserves the full time span of the
data while replacing many fine-grained records with fewer coarse-grained averages.

### Tiered Downsampling Policy

The default policy has two tiers, applied coarsest-first:

| Age threshold | Target resolution | Effect |
|---|---|---|
| Older than 14 days | 1 hour | 15-min records → 1 per hour (75 % reduction) |
| Older than 2 hours | 15 minutes | 1-min records → 1 per 15 min (93 % reduction) |

Records within the most recent 2 hours are never touched.

### How Compaction Works

Each tier is processed incrementally using a stored cutoff timestamp per tier. On each run,
only the window `[last_cutoff, new_cutoff)` is examined — records already compacted in a
previous run are never re-processed. This makes weekly runs fast even on years of history.

For each writable numeric field, records in the window are mean-resampled at the target
interval using time interpolation. The original records are deleted and the downsampled
records are written back. A **sparse-data guard** skips any window where the existing record
count is already at or below the resampled bucket count, preventing compaction from
accidentally *increasing* record count for data that is already coarse or irregular.

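Mean-resampling with the sparse-data guard can be sketched in a few lines. This is a simplified bucket-mean version (no time interpolation) with hypothetical names, not the EOS implementation:

```python
from datetime import datetime, timedelta, timezone

def downsample_mean(points, interval):
    """Downsample [(aware datetime, value)] pairs to one mean per `interval`
    bucket, skipping windows that are already sparse (illustrative sketch)."""
    if not points:
        return points
    start = min(t for t, _ in points)
    buckets = {}
    for t, v in points:
        key = (t - start) // interval            # bucket index at target resolution
        buckets.setdefault(key, []).append(v)
    if len(points) <= len(buckets):              # sparse-data guard: already coarse
        return points
    return [(start + k * interval, sum(vs) / len(vs))
            for k, vs in sorted(buckets.items())]

t0 = datetime(2024, 1, 1, tzinfo=timezone.utc)
fine = [(t0 + timedelta(minutes=m), float(m)) for m in range(30)]
coarse = downsample_mean(fine, timedelta(minutes=15))
assert len(coarse) == 2            # 30 one-minute records -> 2 fifteen-minute buckets
assert coarse[0][1] == 7.0         # mean of minutes 0..14
```
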
### Customising the Policy per Namespace

Individual data providers can override `db_compact_tiers()` to use a different policy:

```python
class PriceDataProvider(DataProvider):
    def db_compact_tiers(self):
        # Price data is already at 15-min resolution from the source.
        # Skip the first tier; only compact to hourly after 2 weeks.
        return [(to_duration("14 days"), to_duration("1 hour"))]
```

Return an empty list to disable compaction for a specific namespace entirely:

```python
class EventLogProvider(DataProvider):
    def db_compact_tiers(self):
        return []  # Raw events must not be averaged
```

### Manual Invocation

```python
# Compact all providers in the container
data_container.db_compact()

# Compact a single provider
provider.db_compact()

# Use a one-off policy without changing the instance default
provider.db_compact(compact_tiers=[
    (to_duration("7 days"), to_duration("1 hour"))
])
```

### Interaction with Vacuum

Compaction and vacuum are complementary and should always run in this order:

```text
compact → vacuum
```

Compact first so that vacuum operates on already-downsampled records. This produces cleaner
retention boundaries and ensures the vacuum cutoff falls on hour-aligned timestamps rather
than arbitrary sub-minute ones. Running them in reverse order (vacuum, then compact) wastes
work: vacuum may delete records that compaction would have downsampled and kept.

The `RetentionManager` registers both jobs and ensures compaction always runs before vacuum
within the same maintenance window.

## Vacuum Operation

Remove old records to reclaim space:

```python
db_vacuum(keep_hours=48)          # Keep last 48 hours
db_vacuum(keep_timestamp=cutoff)  # Keep from cutoff onward
```

**Strategy:**

- Computes the cutoff as `max_timestamp - keep_hours`
- Deletes all records before the cutoff
- Immediately persists changes via `db_save_records()`

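Note that the cutoff is relative to the newest stored record, not to wall-clock time, so a database that stops receiving data does not slowly vacuum itself empty. A minimal sketch with a hypothetical helper:

```python
from datetime import datetime, timedelta, timezone

def vacuum_cutoff(max_timestamp, keep_hours):
    """Cutoff rule from above: records before max_timestamp - keep_hours
    are deleted (illustrative helper, not the EOS API)."""
    return max_timestamp - timedelta(hours=keep_hours)

newest = datetime(2024, 10, 27, 12, 0, tzinfo=timezone.utc)
cutoff = vacuum_cutoff(newest, keep_hours=48)
assert cutoff == datetime(2024, 10, 25, 12, 0, tzinfo=timezone.utc)

records = [newest - timedelta(hours=h) for h in (1, 24, 72)]
kept = [t for t in records if t >= cutoff]
assert len(kept) == 2  # the 72-hour-old record falls before the cutoff
```
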
## Thread Safety

- **LMDB:** Internal lock protects write transactions; reads are lock-free via MVCC
- **SQLite:** Lock guards all operations (autocommit mode eliminates transaction deadlocks)
- **Record Protocol:** No internal locking (assumes single-threaded access per instance)

## Performance Characteristics

| Operation | LMDB | SQLite |
|---|---|---|
| Sequential read | Excellent (mmap) | Good (indexed) |
| Random read | Excellent (mmap) | Good (B-tree) |
| Bulk write | Excellent (single txn) | Good (batch insert) |
| Range query | Excellent (cursor) | Good (indexed scan) |
| Disk usage | Moderate (pre-allocated) | Compact (auto-grow) |
| Concurrency | High (MVCC readers) | Low (write serialization) |

**Recommendation:** Use LMDB for high-frequency time-series workloads;
SQLite for portability and simpler deployment.

## Example Usage

```python
# Configuration
config.database.provider = "LMDB"
config.database.compression_level = 9
config.database.initial_load_window_h = 24        # Load last 24h initially
config.database.keep_duration_h = 720             # Retain 30 days
config.database.compaction_interval_sec = 604800  # Compact weekly

# Access (automatic singleton initialization)
class MeasurementData(DatabaseRecordProtocolMixin):
    records: list[MeasurementRecord] = []

    def db_namespace(self) -> str:
        return "measurement"

# Operations
measurement = MeasurementData()

# Lazy load on first access
record = measurement.db_get_record(
    DatabaseTimestamp.from_datetime(now),
    time_window=Duration(hours=1)
)

# Insert new record
measurement.db_insert_record(new_record)

# Automatic save (if autosave configured) or manual
measurement.db_save_records()

# Maintenance pipeline (normally handled by RetentionManager)
measurement.db_compact()               # downsample old records first
measurement.db_vacuum(keep_hours=720)  # then delete beyond retention
```

```diff
@@ -18,7 +18,7 @@ from akkudoktoreos.core.version import __version__
 # https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information

 project = "Akkudoktor EOS"
-copyright = "2025, Andreas Schmitz"
+copyright = "2025..2026, Andreas Schmitz"
 author = "Andreas Schmitz"
 release = __version__
```

```diff
@@ -50,6 +50,7 @@ akkudoktoreos/prediction.md
 akkudoktoreos/measurement.md
 akkudoktoreos/integration.md
 akkudoktoreos/logging.md
+akkudoktoreos/database.md
 akkudoktoreos/adapter.md
 akkudoktoreos/serverapi.md
 akkudoktoreos/api.rst
```

557 openapi.json

```diff
@@ -2,8 +2,13 @@
   "openapi": "3.1.0",
   "info": {
     "title": "Akkudoktor-EOS",
     "summary": "Comprehensive solution for simulating and optimizing an energy system based on renewable energy sources",
     "description": "This project provides a comprehensive solution for simulating and optimizing an energy system based on renewable energy sources. With a focus on photovoltaic (PV) systems, battery storage (batteries), load management (consumer requirements), heat pumps, electric vehicles, and consideration of electricity price data, this system enables forecasting and optimization of energy flow and costs over a specified period.",
-    "version": "v0.2.0.dev84352035"
+    "license": {
+      "name": "Apache 2.0",
+      "url": "https://www.apache.org/licenses/LICENSE-2.0.html"
+    },
+    "version": "v0.2.0.dev58204789"
   },
   "paths": {
     "/v1/admin/cache/clear": {
```

```diff
@@ -126,6 +131,54 @@
         }
       }
     },
+    "/v1/admin/database/stats": {
+      "get": {
+        "tags": [
+          "admin"
+        ],
+        "summary": "Fastapi Admin Database Stats Get",
+        "description": "Get statistics from database.\n\nReturns:\n data (dict): The database statistics",
+        "operationId": "fastapi_admin_database_stats_get_v1_admin_database_stats_get",
+        "responses": {
+          "200": {
+            "description": "Successful Response",
+            "content": {
+              "application/json": {
+                "schema": {
+                  "additionalProperties": true,
+                  "type": "object",
+                  "title": "Response Fastapi Admin Database Stats Get V1 Admin Database Stats Get"
+                }
+              }
+            }
+          }
+        }
+      }
+    },
+    "/v1/admin/database/vacuum": {
+      "post": {
+        "tags": [
+          "admin"
+        ],
+        "summary": "Fastapi Admin Database Vacuum Post",
+        "description": "Remove old records from database.\n\nReturns:\n data (dict): The database stats after removal of old records.",
+        "operationId": "fastapi_admin_database_vacuum_post_v1_admin_database_vacuum_post",
+        "responses": {
+          "200": {
+            "description": "Successful Response",
+            "content": {
+              "application/json": {
+                "schema": {
+                  "additionalProperties": true,
+                  "type": "object",
+                  "title": "Response Fastapi Admin Database Vacuum Post V1 Admin Database Vacuum Post"
+                }
+              }
+            }
+          }
+        }
+      }
+    },
     "/v1/admin/server/restart": {
       "post": {
         "tags": [
```

```diff
@@ -2102,7 +2155,7 @@
         },
         "type": "array",
         "title": "Providers",
-        "description": "Available electricity price provider ids.",
+        "description": "Available adapter provider ids.",
         "readOnly": true
       }
     },
```

```diff
@@ -2493,9 +2546,10 @@
       },
       "cleanup_interval": {
         "type": "number",
+        "minimum": 5.0,
         "title": "Cleanup Interval",
         "description": "Intervall in seconds for EOS file cache cleanup.",
-        "default": 300
+        "default": 300.0
       }
     },
     "type": "object",
```

```diff
@@ -2523,170 +2577,61 @@
     "ConfigEOS": {
       "properties": {
         "general": {
-          "$ref": "#/components/schemas/GeneralSettings-Output",
-          "default": {
-            "version": "0.2.0.dev84352035",
-            "data_output_subpath": "output",
-            "latitude": 52.52,
-            "longitude": 13.405,
-            "timezone": "Europe/Berlin",
-            "config_folder_path": "/home/user/.config/net.akkudoktoreos.net",
-            "config_file_path": "/home/user/.config/net.akkudoktoreos.net/EOS.config.json",
-            "home_assistant_addon": false
-          }
+          "$ref": "#/components/schemas/GeneralSettings-Output"
         },
         "cache": {
-          "$ref": "#/components/schemas/CacheCommonSettings",
-          "default": {
-            "subpath": "cache",
-            "cleanup_interval": 300.0
-          }
+          "$ref": "#/components/schemas/CacheCommonSettings"
         },
+        "database": {
+          "$ref": "#/components/schemas/DatabaseCommonSettings-Output"
+        },
         "ems": {
-          "$ref": "#/components/schemas/EnergyManagementCommonSettings",
-          "default": {
-            "startup_delay": 5.0
-          }
+          "$ref": "#/components/schemas/EnergyManagementCommonSettings"
         },
         "logging": {
-          "$ref": "#/components/schemas/LoggingCommonSettings-Output",
-          "default": {
-            "file_path": "/home/user/.local/share/net.akkudoktoreos.net/output/eos.log"
-          }
+          "$ref": "#/components/schemas/LoggingCommonSettings-Output"
         },
         "devices": {
-          "$ref": "#/components/schemas/DevicesCommonSettings-Output",
-          "default": {
-            "measurement_keys": []
-          }
+          "$ref": "#/components/schemas/DevicesCommonSettings-Output"
         },
         "measurement": {
-          "$ref": "#/components/schemas/MeasurementCommonSettings-Output",
-          "default": {
-            "keys": []
-          }
+          "$ref": "#/components/schemas/MeasurementCommonSettings-Output"
         },
         "optimization": {
-          "$ref": "#/components/schemas/OptimizationCommonSettings-Output",
-          "default": {
-            "horizon_hours": 24,
-            "interval": 3600,
-            "algorithm": "GENETIC",
-            "genetic": {
-              "generations": 400,
-              "individuals": 300
-            },
-            "keys": []
-          }
+          "$ref": "#/components/schemas/OptimizationCommonSettings-Output"
         },
         "prediction": {
-          "$ref": "#/components/schemas/PredictionCommonSettings",
-          "default": {
-            "hours": 48,
-            "historic_hours": 48
-          }
+          "$ref": "#/components/schemas/PredictionCommonSettings"
         },
         "elecprice": {
-          "$ref": "#/components/schemas/ElecPriceCommonSettings-Output",
-          "default": {
-            "vat_rate": 1.19,
-            "elecpriceimport": {},
-            "energycharts": {
-              "bidding_zone": "DE-LU"
-            },
-            "providers": [
-              "ElecPriceAkkudoktor",
-              "ElecPriceEnergyCharts",
-              "ElecPriceImport"
-            ]
-          }
+          "$ref": "#/components/schemas/ElecPriceCommonSettings-Output"
         },
         "feedintariff": {
-          "$ref": "#/components/schemas/FeedInTariffCommonSettings-Output",
-          "default": {
-            "provider_settings": {},
-            "providers": [
-              "FeedInTariffFixed",
-              "FeedInTariffImport"
-            ]
-          }
+          "$ref": "#/components/schemas/FeedInTariffCommonSettings-Output"
         },
         "load": {
-          "$ref": "#/components/schemas/LoadCommonSettings-Output",
-          "default": {
-            "provider_settings": {},
-            "providers": [
-              "LoadAkkudoktor",
-              "LoadAkkudoktorAdjusted",
-              "LoadVrm",
-              "LoadImport"
-            ]
-          }
+          "$ref": "#/components/schemas/LoadCommonSettings-Output"
         },
         "pvforecast": {
-          "$ref": "#/components/schemas/PVForecastCommonSettings-Output",
-          "default": {
-            "provider_settings": {},
-            "max_planes": 0,
-            "providers": [
-              "PVForecastAkkudoktor",
-              "PVForecastVrm",
-              "PVForecastImport"
-            ],
-            "planes_peakpower": [],
-            "planes_azimuth": [],
-            "planes_tilt": [],
-            "planes_userhorizon": [],
-            "planes_inverter_paco": []
-          }
+          "$ref": "#/components/schemas/PVForecastCommonSettings-Output"
         },
         "weather": {
-          "$ref": "#/components/schemas/WeatherCommonSettings-Output",
-          "default": {
-            "provider_settings": {},
-            "providers": [
-              "BrightSky",
-              "ClearOutside",
-              "WeatherImport"
-            ]
-          }
+          "$ref": "#/components/schemas/WeatherCommonSettings-Output"
         },
         "server": {
-          "$ref": "#/components/schemas/ServerCommonSettings",
-          "default": {
-            "host": "127.0.0.1",
-            "port": 8503,
-            "verbose": false,
-            "startup_eosdash": true
-          }
+          "$ref": "#/components/schemas/ServerCommonSettings"
         },
         "utils": {
-          "$ref": "#/components/schemas/UtilsCommonSettings",
-          "default": {}
+          "$ref": "#/components/schemas/UtilsCommonSettings"
         },
         "adapter": {
-          "$ref": "#/components/schemas/AdapterCommonSettings-Output",
-          "default": {
-            "homeassistant": {
-              "eos_device_instruction_entity_ids": [],
-              "eos_solution_entity_ids": [],
-              "homeassistant_entity_ids": []
-            },
-            "nodered": {
-              "host": "127.0.0.1",
-              "port": 1880
-            },
-            "providers": [
-              "HomeAssistant",
-              "NodeRED"
-            ]
-          }
+          "$ref": "#/components/schemas/AdapterCommonSettings-Output"
         }
       },
       "additionalProperties": false,
       "type": "object",
       "title": "ConfigEOS",
       "description": "Singleton configuration handler for the EOS application.\n\nConfigEOS extends `SettingsEOS` with support for default configuration paths and automatic\ninitialization.\n\n`ConfigEOS` ensures that only one instance of the class is created throughout the application,\nallowing consistent access to EOS configuration settings. This singleton instance loads\nconfiguration data from a predefined set of directories or creates a default configuration if\nnone is found.\n\nInitialization Process:\n - Upon instantiation, the singleton instance attempts to load a configuration file in this order:\n 1. The directory specified by the `EOS_CONFIG_DIR` environment variable\n 2. The directory specified by the `EOS_DIR` environment variable.\n 3. A platform specific default directory for EOS.\n 4. The current working directory.\n - The first available configuration file found in these directories is loaded.\n - If no configuration file is found, a default configuration file is created in the platform\n specific default directory, and default settings are loaded into it.\n\nAttributes from the loaded configuration are accessible directly as instance attributes of\n`ConfigEOS`, providing a centralized, shared configuration object for EOS.\n\nSingleton Behavior:\n - This class uses the `SingletonMixin` to ensure that all requests for `ConfigEOS` return\n the same instance, which contains the most up-to-date configuration. Modifying the configuration\n in one part of the application reflects across all references to this class.\n\nAttributes:\n config_folder_path (Optional[Path]): Path to the configuration directory.\n config_file_path (Optional[Path]): Path to the configuration file.\n\nRaises:\n FileNotFoundError: If no configuration file is found, and creating a default configuration fails.\n\nExample:\n To initialize and access configuration attributes (only one instance is created):\n .. code-block:: python\n\n config_eos = ConfigEOS() # Always returns the same instance\n print(config_eos.prediction.hours) # Access a setting from the loaded configuration"
```
|
||||
"description": "Singleton configuration handler for the EOS application.\n\nConfigEOS extends `SettingsEOS` with support for default configuration paths and automatic\ninitialization.\n\n`ConfigEOS` ensures that only one instance of the class is created throughout the application,\nallowing consistent access to EOS configuration settings. This singleton instance loads\nconfiguration data from a predefined set of directories or creates a default configuration if\nnone is found.\n\nInitialization Process:\n - Upon instantiation, the singleton instance attempts to load a configuration file in this order:\n 1. The directory specified by the `EOS_CONFIG_DIR` environment variable\n 2. The directory specified by the `EOS_DIR` environment variable.\n 3. A platform specific default directory for EOS.\n 4. The current working directory.\n - The first available configuration file found in these directories is loaded.\n - If no configuration file is found, a default configuration file is created in the platform\n specific default directory, and default settings are loaded into it.\n\nAttributes from the loaded configuration are accessible directly as instance attributes of\n`ConfigEOS`, providing a centralized, shared configuration object for EOS.\n\nSingleton Behavior:\n - This class uses the `SingletonMixin` to ensure that all requests for `ConfigEOS` return\n the same instance, which contains the most up-to-date configuration. Modifying the configuration\n in one part of the application reflects across all references to this class.\n\nRaises:\n FileNotFoundError: If no configuration file is found, and creating a default configuration fails.\n\nExample:\n To initialize and access configuration attributes (only one instance is created):\n .. code-block:: python\n\n config_eos = ConfigEOS() # Always returns the same instance\n print(config_eos.prediction.hours) # Access a setting from the loaded configuration"
|
||||
},
|
||||
"DDBCActuatorStatus": {
|
||||
"properties": {
|
||||
@@ -2809,6 +2754,240 @@
|
||||
"title": "DDBCInstruction",
|
||||
"description": "Instruction for Demand Driven Based Control (DDBC).\n\nContains information about when and how to activate a specific operation mode\nfor an actuator. Used to command resources to change their operation at a specified time."
|
||||
},
|
||||
"DatabaseCommonSettings-Input": {
|
||||
"properties": {
|
||||
"provider": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "string"
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Provider",
|
||||
"description": "Database provider id of provider to be used.",
|
||||
"examples": [
|
||||
"LMDB"
|
||||
]
|
||||
},
|
||||
"compression_level": {
|
||||
"type": "integer",
|
||||
"maximum": 9.0,
|
||||
"minimum": 0.0,
|
||||
"title": "Compression Level",
|
||||
"description": "Compression level for database record data.",
|
||||
"default": 9,
|
||||
"examples": [
|
||||
0,
|
||||
9
|
||||
]
|
||||
},
|
||||
"initial_load_window_h": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "integer",
|
||||
"minimum": 0.0
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Initial Load Window H",
|
||||
"description": "Specifies the default duration of the initial load window when loading records from the database, in hours. If set to None, the full available range is loaded. The window is centered around the current time by default, unless a different center time is specified. Different database namespaces may define their own default windows.",
|
||||
"examples": [
|
||||
"48",
|
||||
"None"
|
||||
]
|
||||
},
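The description above says the load window is centered around the current time unless another center is given, and that `None` means the full range. A minimal sketch of that windowing logic (function and parameter names are illustrative, not the EOS API):

```python
from datetime import datetime, timedelta
from typing import Optional, Tuple


def initial_load_window(
    window_h: Optional[int],
    center: Optional[datetime] = None,
) -> Optional[Tuple[datetime, datetime]]:
    """Return the (start, end) load window, or None to load the full range."""
    if window_h is None:
        # None means: no window, load everything available.
        return None
    center = center or datetime.now()
    half = timedelta(hours=window_h) / 2
    # The window extends half the duration before and after the center time.
    return (center - half, center + half)
```

With the default example of 48 hours, the window spans 24 hours before and 24 hours after the center time.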
|
||||
"keep_duration_h": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "integer",
|
||||
"minimum": 0.0
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Keep Duration H",
|
||||
"description": "Default maximum duration records shall be kept in database [hours, none].\nNone indicates forever. Database namespaces may have diverging definitions.",
|
||||
"examples": [
|
||||
48,
|
||||
"none"
|
||||
]
|
||||
},
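Retention as described here is simple time-based pruning; a hedged sketch of how records older than `keep_duration_h` might be dropped (names are illustrative, not the EOS implementation):

```python
from datetime import datetime, timedelta
from typing import Optional


def prune_expired(
    records: dict[datetime, float],
    keep_duration_h: Optional[int],
    now: Optional[datetime] = None,
) -> dict[datetime, float]:
    """Drop records older than keep_duration_h hours; None keeps everything."""
    if keep_duration_h is None:
        # None indicates forever: retain all records.
        return dict(records)
    now = now or datetime.now()
    cutoff = now - timedelta(hours=keep_duration_h)
    return {ts: value for ts, value in records.items() if ts >= cutoff}
```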
|
||||
"autosave_interval_sec": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "integer",
|
||||
"minimum": 5.0
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Autosave Interval Sec",
|
||||
"description": "Automatic saving interval [seconds].\nSet to None to disable automatic saving.",
|
||||
"default": 10,
|
||||
"examples": [
|
||||
5
|
||||
]
|
||||
},
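The commit introduces a `make_repeated_task` helper for exactly this kind of cyclic job; the signature below is a guess for illustration, not the EOS one. A thread-based sketch of running an autosave callback every `interval_sec` seconds, with `None` disabling it:

```python
import threading
from typing import Callable, Optional


def make_repeated_task(
    func: Callable[[], None], interval_sec: Optional[float]
) -> Callable[[], threading.Event]:
    """Wrap func so it runs every interval_sec seconds until the returned event is set."""

    def start() -> threading.Event:
        stop = threading.Event()
        if interval_sec is None:
            # Autosave disabled: nothing to schedule.
            stop.set()
            return stop

        def loop() -> None:
            # Event.wait returns False on timeout, so func runs once per interval.
            while not stop.wait(interval_sec):
                func()

        threading.Thread(target=loop, daemon=True).start()
        return stop

    return start
```

Calling the wrapper starts the background loop and returns the stop event used to cancel it.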
|
||||
"compaction_interval_sec": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "integer",
|
||||
"minimum": 0.0
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Compaction Interval Sec",
|
||||
"description": "Interval in between automatic tiered compaction runs [seconds].\nCompaction downsamples old records to reduce storage while retaining coverage. Set to None to disable automatic compaction.",
|
||||
"default": 604800,
|
||||
"examples": [
|
||||
604800
|
||||
]
|
||||
},
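"Downsamples old records while retaining coverage" usually means averaging fine-grained samples into coarser time buckets. A sketch of one such tier, under the assumption that values are averaged per bucket (the actual EOS tiering may differ):

```python
from datetime import datetime, timedelta


def downsample(records: dict[datetime, float], bucket_h: int) -> dict[datetime, float]:
    """Average records into bucket_h-hour buckets keyed by the bucket start time."""
    buckets: dict[datetime, list[float]] = {}
    for ts, value in records.items():
        # Align the timestamp down to the start of its bucket.
        start = ts.replace(minute=0, second=0, microsecond=0)
        start -= timedelta(hours=start.hour % bucket_h)
        buckets.setdefault(start, []).append(value)
    # One averaged record per bucket keeps coverage at reduced resolution.
    return {start: sum(vs) / len(vs) for start, vs in buckets.items()}
```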
|
||||
"batch_size": {
|
||||
"type": "integer",
|
||||
"title": "Batch Size",
|
||||
"description": "Number of records to process in batch operations.",
|
||||
"default": 100,
|
||||
"examples": [
|
||||
100
|
||||
]
|
||||
}
|
||||
},
|
||||
"type": "object",
|
||||
"title": "DatabaseCommonSettings",
|
||||
"description": "Configuration model for database settings.\n\nAttributes:\n provider: Optional provider identifier (e.g. \"LMDB\").\n max_records_in_memory: Maximum records kept in memory before auto-save.\n auto_save: Whether to auto-save when threshold exceeded.\n batch_size: Batch size for batch operations."
|
||||
},
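The 0-9 `compression_level` range matches the common zlib convention (0 stores data uncompressed, 9 compresses hardest); assuming that mapping, record payload packing might look like this sketch:

```python
import zlib


def pack_record(payload: bytes, compression_level: int) -> bytes:
    """Compress record data; level 0 effectively stores it uncompressed."""
    if not 0 <= compression_level <= 9:
        raise ValueError("compression_level must be between 0 and 9")
    return zlib.compress(payload, compression_level)


def unpack_record(blob: bytes) -> bytes:
    """Restore the original record data."""
    return zlib.decompress(blob)
```

Level 0 trades storage for CPU by skipping compression entirely; level 9 (the default above) minimizes storage for repetitive time-series payloads.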
|
||||
"DatabaseCommonSettings-Output": {
|
||||
"properties": {
|
||||
"provider": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "string"
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Provider",
|
||||
"description": "Database provider id of provider to be used.",
|
||||
"examples": [
|
||||
"LMDB"
|
||||
]
|
||||
},
|
||||
"compression_level": {
|
||||
"type": "integer",
|
||||
"maximum": 9.0,
|
||||
"minimum": 0.0,
|
||||
"title": "Compression Level",
|
||||
"description": "Compression level for database record data.",
|
||||
"default": 9,
|
||||
"examples": [
|
||||
0,
|
||||
9
|
||||
]
|
||||
},
|
||||
"initial_load_window_h": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "integer",
|
||||
"minimum": 0.0
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Initial Load Window H",
|
||||
"description": "Specifies the default duration of the initial load window when loading records from the database, in hours. If set to None, the full available range is loaded. The window is centered around the current time by default, unless a different center time is specified. Different database namespaces may define their own default windows.",
|
||||
"examples": [
|
||||
"48",
|
||||
"None"
|
||||
]
|
||||
},
|
||||
"keep_duration_h": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "integer",
|
||||
"minimum": 0.0
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Keep Duration H",
|
||||
"description": "Default maximum duration records shall be kept in database [hours, none].\nNone indicates forever. Database namespaces may have diverging definitions.",
|
||||
"examples": [
|
||||
48,
|
||||
"none"
|
||||
]
|
||||
},
|
||||
"autosave_interval_sec": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "integer",
|
||||
"minimum": 5.0
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Autosave Interval Sec",
|
||||
"description": "Automatic saving interval [seconds].\nSet to None to disable automatic saving.",
|
||||
"default": 10,
|
||||
"examples": [
|
||||
5
|
||||
]
|
||||
},
|
||||
"compaction_interval_sec": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "integer",
|
||||
"minimum": 0.0
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Compaction Interval Sec",
|
||||
"description": "Interval in between automatic tiered compaction runs [seconds].\nCompaction downsamples old records to reduce storage while retaining coverage. Set to None to disable automatic compaction.",
|
||||
"default": 604800,
|
||||
"examples": [
|
||||
604800
|
||||
]
|
||||
},
|
||||
"batch_size": {
|
||||
"type": "integer",
|
||||
"title": "Batch Size",
|
||||
"description": "Number of records to process in batch operations.",
|
||||
"default": 100,
|
||||
"examples": [
|
||||
100
|
||||
]
|
||||
},
|
||||
"providers": {
|
||||
"items": {
|
||||
"type": "string"
|
||||
},
|
||||
"type": "array",
|
||||
"title": "Providers",
|
||||
"description": "Return available database provider ids.",
|
||||
"readOnly": true
|
||||
}
|
||||
},
|
||||
"type": "object",
|
||||
"required": [
|
||||
"providers"
|
||||
],
|
||||
"title": "DatabaseCommonSettings",
|
||||
"description": "Configuration model for database settings.\n\nAttributes:\n provider: Optional provider identifier (e.g. \"LMDB\").\n max_records_in_memory: Maximum records kept in memory before auto-save.\n auto_save: Whether to auto-save when threshold exceeded.\n batch_size: Batch size for batch operations."
|
||||
},
|
||||
"DevicesCommonSettings-Input": {
|
||||
"properties": {
|
||||
"batteries": {
|
||||
@@ -3583,16 +3762,11 @@
|
||||
"default": 5
|
||||
},
|
||||
"interval": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "number"
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"type": "number",
|
||||
"minimum": 60.0,
|
||||
"title": "Interval",
|
||||
"description": "Intervall in seconds between EOS energy management runs.",
|
||||
"description": "Intervall between EOS energy management runs [seconds].",
|
||||
"default": 300.0,
|
||||
"examples": [
|
||||
"300"
|
||||
]
|
||||
@@ -4268,28 +4442,22 @@
|
||||
},
|
||||
"GeneralSettings-Input": {
|
||||
"properties": {
|
||||
"home_assistant_addon": {
|
||||
"type": "boolean",
|
||||
"title": "Home Assistant Addon",
|
||||
"description": "EOS is running as home assistant add-on."
|
||||
},
|
||||
"version": {
|
||||
"type": "string",
|
||||
"title": "Version",
|
||||
"description": "Configuration file version. Used to check compatibility.",
|
||||
"default": "0.2.0.dev84352035"
|
||||
"default": "0.2.0.dev58204789"
|
||||
},
|
||||
"data_folder_path": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "string",
|
||||
"format": "path"
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"type": "string",
|
||||
"format": "path",
|
||||
"title": "Data Folder Path",
|
||||
"description": "Path to EOS data directory.",
|
||||
"examples": [
|
||||
null,
|
||||
"/home/eos/data"
|
||||
]
|
||||
"description": "Path to EOS data folder."
|
||||
},
|
||||
"data_output_subpath": {
|
||||
"anyOf": [
|
||||
@@ -4302,7 +4470,7 @@
|
||||
}
|
||||
],
|
||||
"title": "Data Output Subpath",
|
||||
"description": "Sub-path for the EOS output data directory.",
|
||||
"description": "Sub-path for the EOS output data folder.",
|
||||
"default": "output"
|
||||
},
|
||||
"latitude": {
|
||||
@@ -4346,24 +4514,13 @@
|
||||
"type": "string",
|
||||
"title": "Version",
|
||||
"description": "Configuration file version. Used to check compatibility.",
|
||||
"default": "0.2.0.dev84352035"
|
||||
"default": "0.2.0.dev58204789"
|
||||
},
|
||||
"data_folder_path": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "string",
|
||||
"format": "path"
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"type": "string",
|
||||
"format": "path",
|
||||
"title": "Data Folder Path",
|
||||
"description": "Path to EOS data directory.",
|
||||
"examples": [
|
||||
null,
|
||||
"/home/eos/data"
|
||||
]
|
||||
"description": "Path to EOS data folder."
|
||||
},
|
||||
"data_output_subpath": {
|
||||
"anyOf": [
|
||||
@@ -4376,7 +4533,7 @@
|
||||
}
|
||||
],
|
||||
"title": "Data Output Subpath",
|
||||
"description": "Sub-path for the EOS output data directory.",
|
||||
"description": "Sub-path for the EOS output data folder.",
|
||||
"default": "output"
|
||||
},
|
||||
"latitude": {
|
||||
@@ -4463,12 +4620,6 @@
|
||||
"title": "Config File Path",
|
||||
"description": "Path to EOS configuration file.",
|
||||
"readOnly": true
|
||||
},
|
||||
"home_assistant_addon": {
|
||||
"type": "boolean",
|
||||
"title": "Home Assistant Addon",
|
||||
"description": "EOS is running as home assistant add-on.",
|
||||
"readOnly": true
|
||||
}
|
||||
},
|
||||
"type": "object",
|
||||
@@ -4476,8 +4627,7 @@
|
||||
"timezone",
|
||||
"data_output_path",
|
||||
"config_folder_path",
|
||||
"config_file_path",
|
||||
"home_assistant_addon"
|
||||
"config_file_path"
|
||||
],
|
||||
"title": "GeneralSettings",
|
||||
"description": "General settings."
|
||||
@@ -6091,6 +6241,23 @@
|
||||
},
|
||||
"MeasurementCommonSettings-Input": {
|
||||
"properties": {
|
||||
"historic_hours": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "integer",
|
||||
"minimum": 0.0
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Historic Hours",
|
||||
"description": "Number of hours into the past for measurement data",
|
||||
"default": 17520,
|
||||
"examples": [
|
||||
17520
|
||||
]
|
||||
},
|
||||
"load_emr_keys": {
|
||||
"anyOf": [
|
||||
{
|
||||
@@ -6178,6 +6345,23 @@
|
||||
},
|
||||
"MeasurementCommonSettings-Output": {
|
||||
"properties": {
|
||||
"historic_hours": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "integer",
|
||||
"minimum": 0.0
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"title": "Historic Hours",
|
||||
"description": "Number of hours into the past for measurement data",
|
||||
"default": 17520,
|
||||
"examples": [
|
||||
17520
|
||||
]
|
||||
},
|
||||
"load_emr_keys": {
|
||||
"anyOf": [
|
||||
{
|
||||
@@ -8105,6 +8289,17 @@
|
||||
],
|
||||
"description": "Cache Settings"
|
||||
},
|
||||
"database": {
|
||||
"anyOf": [
|
||||
{
|
||||
"$ref": "#/components/schemas/DatabaseCommonSettings-Input"
|
||||
},
|
||||
{
|
||||
"type": "null"
|
||||
}
|
||||
],
|
||||
"description": "Database Settings"
|
||||
},
|
||||
"ems": {
|
||||
"anyOf": [
|
||||
{
|
||||
|
||||
@@ -14,12 +14,11 @@ markdown-it-py==4.0.0
|
||||
mdit-py-plugins==0.5.0
|
||||
bokeh==3.8.2
|
||||
uvicorn==0.40.0
|
||||
scikit-learn==1.8.0
|
||||
scipy==1.17.0
|
||||
tzfpy==1.1.1
|
||||
deap==1.4.3
|
||||
requests==2.32.5
|
||||
pandas==2.3.3
|
||||
pandas==3.0.0
|
||||
pendulum==3.2.0
|
||||
platformdirs==4.9.2
|
||||
psutil==7.2.2
|
||||
@@ -27,6 +26,7 @@ pvlib==0.15.0
|
||||
pydantic==2.12.5
|
||||
pydantic_extra_types==2.11.0
|
||||
statsmodels==0.14.6
|
||||
pydantic-settings==2.11.0
|
||||
pydantic-settings==2.12.0
|
||||
linkify-it-py==2.0.3
|
||||
loguru==0.7.3
|
||||
lmdb==1.7.5
|
||||
|
||||
@@ -14,7 +14,8 @@ from loguru import logger
|
||||
from pydantic.fields import ComputedFieldInfo, FieldInfo
|
||||
from pydantic_core import PydanticUndefined
|
||||
|
||||
from akkudoktoreos.config.config import ConfigEOS, GeneralSettings, get_config
|
||||
from akkudoktoreos.config.config import ConfigEOS, default_data_folder_path
|
||||
from akkudoktoreos.core.coreabc import get_config, singletons_init
|
||||
from akkudoktoreos.core.pydantic import PydanticBaseModel
|
||||
from akkudoktoreos.utils.datetimeutil import to_datetime
|
||||
|
||||
@@ -361,12 +362,6 @@ def generate_config_md(file_path: Optional[Union[str, Path]], config_eos: Config
|
||||
Returns:
|
||||
str: The Markdown representation of the configuration spec.
|
||||
"""
|
||||
# Fix file path for general settings to not show local/test file path
|
||||
GeneralSettings._config_file_path = Path(
|
||||
"/home/user/.config/net.akkudoktoreos.net/EOS.config.json"
|
||||
)
|
||||
GeneralSettings._config_folder_path = config_eos.general.config_file_path.parent
|
||||
|
||||
markdown = ""
|
||||
|
||||
if file_path:
|
||||
@@ -446,6 +441,19 @@ def write_to_file(file_path: Optional[Union[str, Path]], config_md: str):
|
||||
'/home/user/.local/share/net.akkudoktor.eos/output/eos.log',
|
||||
config_md
|
||||
)
|
||||
# Ensure paths are set to their defaults for the documentation
|
||||
replacements = [
|
||||
("data_folder_path", "/home/user/.local/share/net.akkudoktoreos.net"),
|
||||
("data_output_path", "/home/user/.local/share/net.akkudoktoreos.net/output"),
|
||||
("config_folder_path", "/home/user/.config/net.akkudoktoreos.net"),
|
||||
("config_file_path", "/home/user/.config/net.akkudoktoreos.net/EOS.config.json"),
|
||||
]
|
||||
for key, value in replacements:
|
||||
config_md = re.sub(
|
||||
rf'("{key}":\s*)"[^"]*"',
|
||||
rf'\1"{value}"',
|
||||
config_md
|
||||
)
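The substitution loop above rewrites any JSON-style `"key": "value"` pair to its documentation default. A standalone demonstration for one key (the input value is illustrative):

```python
import re

# A config fragment leaking a local test path, as it might appear in config_md.
config_md = '{"data_folder_path": "/tmp/pytest-123/data"}'

# Same pattern shape as the loop above: capture the key prefix, replace the quoted value.
config_md = re.sub(
    r'("data_folder_path":\s*)"[^"]*"',
    r'\1"/home/user/.local/share/net.akkudoktoreos.net"',
    config_md,
)
```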
|
||||
|
||||
# Ensure the timezone name does not leak into the documentation
|
||||
tz_name = to_datetime().timezone_name
|
||||
@@ -477,16 +485,31 @@ def main():
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
config_eos = get_config()
|
||||
|
||||
# Ensure we are in documentation mode
|
||||
ConfigEOS._force_documentation_mode = True
|
||||
|
||||
# Make a minimal config so that the generation is reproducible
|
||||
config_eos = get_config(init={
|
||||
"with_init_settings": True,
|
||||
"with_env_settings": False,
|
||||
"with_dotenv_settings": False,
|
||||
"with_file_settings": False,
|
||||
"with_file_secret_settings": False,
|
||||
})
|
||||
|
||||
# Also initialize the other singletons to get the same list of e.g. providers
|
||||
singletons_init()
|
||||
|
||||
try:
|
||||
config_md = generate_config_md(args.output_file, config_eos)
|
||||
|
||||
except Exception as e:
|
||||
print(f"Error during Configuration Specification generation: {e}", file=sys.stderr)
|
||||
# Re-raise to surface potential problems (e.g. invalid examples)
raise
|
||||
|
||||
finally:
|
||||
# Ensure we are out of documentation mode
|
||||
ConfigEOS._force_documentation_mode = False
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
|
||||
@@ -19,32 +19,44 @@ import json
|
||||
import os
|
||||
import sys
|
||||
|
||||
from fastapi.openapi.utils import get_openapi
|
||||
|
||||
from akkudoktoreos.core.coreabc import get_config
|
||||
from akkudoktoreos.server.eos import app
|
||||
|
||||
|
||||
def generate_openapi() -> dict:
|
||||
"""Generate the OpenAPI specification.
|
||||
# Make a minimal config so that the generation is reproducible
|
||||
config_eos = get_config(init={
|
||||
"with_init_settings": True,
|
||||
"with_env_settings": False,
|
||||
"with_dotenv_settings": False,
|
||||
"with_file_settings": False,
|
||||
"with_file_secret_settings": False,
|
||||
})
|
||||
|
||||
Returns:
|
||||
openapi_spec (dict): OpenAPI specification.
|
||||
"""
|
||||
openapi_spec = get_openapi(
|
||||
title=app.title,
|
||||
version=app.version,
|
||||
openapi_version=app.openapi_version,
|
||||
description=app.description,
|
||||
routes=app.routes,
|
||||
openapi_spec = app.openapi()
|
||||
|
||||
config_schema = (
|
||||
openapi_spec
|
||||
.get("components", {})
|
||||
.get("schemas", {})
|
||||
.get("ConfigEOS", {})
|
||||
.get("properties", {})
|
||||
)
|
||||
|
||||
# Fix file path for general settings to not show local/test file path
|
||||
general = openapi_spec["components"]["schemas"]["ConfigEOS"]["properties"]["general"]["default"]
|
||||
general["config_file_path"] = "/home/user/.config/net.akkudoktoreos.net/EOS.config.json"
|
||||
general["config_folder_path"] = "/home/user/.config/net.akkudoktoreos.net"
|
||||
# Fix file path for logging settings to not show local/test file path
|
||||
logging = openapi_spec["components"]["schemas"]["ConfigEOS"]["properties"]["logging"]["default"]
|
||||
logging["file_path"] = "/home/user/.local/share/net.akkudoktoreos.net/output/eos.log"
|
||||
# ---- General settings ----
|
||||
general = config_schema.get("general", {}).get("default")
|
||||
if general:
|
||||
general.update({
|
||||
"config_file_path": "/home/user/.config/net.akkudoktoreos.net/EOS.config.json",
|
||||
"config_folder_path": "/home/user/.config/net.akkudoktoreos.net",
|
||||
"data_folder_path": "/home/user/.local/share/net.akkudoktoreos.net",
|
||||
"data_output_path": "/home/user/.local/share/net.akkudoktoreos.net/output",
|
||||
})
|
||||
|
||||
# ---- Logging settings ----
|
||||
logging_cfg = config_schema.get("logging", {}).get("default")
|
||||
if logging_cfg:
|
||||
logging_cfg["file_path"] = "/home/user/.local/share/net.akkudoktoreos.net/output/eos.log"
|
||||
|
||||
return openapi_spec
|
||||
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
from typing import TYPE_CHECKING, Optional, Union
|
||||
from typing import Optional, Union
|
||||
|
||||
from pydantic import Field, computed_field, field_validator
|
||||
|
||||
@@ -10,9 +10,6 @@ from akkudoktoreos.adapter.homeassistant import (
|
||||
from akkudoktoreos.adapter.nodered import NodeREDAdapter, NodeREDAdapterCommonSettings
|
||||
from akkudoktoreos.config.configabc import SettingsBaseModel
|
||||
|
||||
if TYPE_CHECKING:
|
||||
adapter_providers: list[str]
|
||||
|
||||
|
||||
class AdapterCommonSettings(SettingsBaseModel):
|
||||
"""Adapter Configuration."""
|
||||
@@ -38,8 +35,9 @@ class AdapterCommonSettings(SettingsBaseModel):
|
||||
@computed_field # type: ignore[prop-decorator]
|
||||
@property
|
||||
def providers(self) -> list[str]:
|
||||
"""Available electricity price provider ids."""
|
||||
return adapter_providers
|
||||
"""Available adapter provider ids."""
|
||||
adapter_provider_ids = [provider.provider_id() for provider in adapter_providers()]
|
||||
return adapter_provider_ids
|
||||
|
||||
# Validators
|
||||
@field_validator("provider", mode="after")
|
||||
@@ -47,48 +45,39 @@ class AdapterCommonSettings(SettingsBaseModel):
|
||||
def validate_provider(cls, value: Optional[list[str]]) -> Optional[list[str]]:
|
||||
if value is None:
|
||||
return value
|
||||
adapter_provider_ids = [provider.provider_id() for provider in adapter_providers()]
|
||||
for provider_id in value:
|
||||
if provider_id not in adapter_providers:
|
||||
if provider_id not in adapter_provider_ids:
|
||||
raise ValueError(
|
||||
f"Provider '{value}' is not a valid adapter provider: {adapter_providers}."
|
||||
f"Provider '{provider_id}' is not a valid adapter provider: {adapter_provider_ids}."
|
||||
)
|
||||
return value
|
||||
|
||||
|
||||
class Adapter(AdapterContainer):
|
||||
"""Adapter container to manage multiple adapter providers.
|
||||
|
||||
Attributes:
|
||||
providers (List[Union[PVForecastAkkudoktor, WeatherBrightSky, WeatherClearOutside]]):
|
||||
List of forecast provider instances, in the order they should be updated.
|
||||
Providers may depend on updates from others.
|
||||
"""
|
||||
|
||||
providers: list[
|
||||
Union[
|
||||
HomeAssistantAdapter,
|
||||
NodeREDAdapter,
|
||||
]
|
||||
] = Field(default_factory=list, json_schema_extra={"description": "List of adapter providers"})
|
||||
|
||||
|
||||
# Initialize adapter providers, all are singletons.
|
||||
homeassistant_adapter = HomeAssistantAdapter()
|
||||
nodered_adapter = NodeREDAdapter()
|
||||
|
||||
|
||||
def get_adapter() -> Adapter:
|
||||
"""Gets the EOS adapter data."""
|
||||
# Initialize Adapter instance with providers in the required order
|
||||
# Care for provider sequence as providers may rely on others to be updated before.
|
||||
adapter = Adapter(
|
||||
providers=[
|
||||
homeassistant_adapter,
|
||||
nodered_adapter,
|
||||
def adapter_providers() -> list[Union["HomeAssistantAdapter", "NodeREDAdapter"]]:
|
||||
"""Return list of adapter providers."""
|
||||
|
||||
return [
|
||||
homeassistant_adapter,
|
||||
nodered_adapter,
|
||||
]
|
||||
|
||||
|
||||
class Adapter(AdapterContainer):
|
||||
"""Adapter container to manage multiple adapter providers."""
|
||||
|
||||
providers: list[
|
||||
Union[
|
||||
HomeAssistantAdapter,
|
||||
NodeREDAdapter,
|
||||
]
|
||||
] = Field(
|
||||
default_factory=adapter_providers,
|
||||
json_schema_extra={"description": "List of adapter providers"},
|
||||
)
|
||||
return adapter
|
||||
|
||||
|
||||
# Valid adapter providers
|
||||
adapter_providers = [provider.provider_id() for provider in get_adapter().providers]
|
||||
|
||||
@@ -10,12 +10,12 @@ from pydantic import Field, computed_field, field_validator
|
||||
|
||||
from akkudoktoreos.adapter.adapterabc import AdapterProvider
|
||||
from akkudoktoreos.config.configabc import SettingsBaseModel
|
||||
from akkudoktoreos.core.coreabc import get_adapter
|
||||
from akkudoktoreos.core.emplan import (
|
||||
DDBCInstruction,
|
||||
FRBCInstruction,
|
||||
)
|
||||
from akkudoktoreos.core.ems import EnergyManagementStage
|
||||
from akkudoktoreos.devices.devices import get_resource_registry
|
||||
from akkudoktoreos.utils.datetimeutil import to_datetime
|
||||
|
||||
# Supervisor API endpoint and token (injected automatically in add-on container)
|
||||
@@ -29,8 +29,6 @@ HEADERS = {
|
||||
|
||||
HOMEASSISTANT_ENTITY_ID_PREFIX = "sensor.eos_"
|
||||
|
||||
resources_eos = get_resource_registry()
|
||||
|
||||
|
||||
class HomeAssistantAdapterCommonSettings(SettingsBaseModel):
|
||||
"""Common settings for the home assistant adapter."""
|
||||
@@ -146,8 +144,6 @@ class HomeAssistantAdapterCommonSettings(SettingsBaseModel):
|
||||
def homeassistant_entity_ids(self) -> list[str]:
|
||||
"""Entity IDs available at Home Assistant."""
|
||||
try:
|
||||
from akkudoktoreos.adapter.adapter import get_adapter
|
||||
|
||||
adapter_eos = get_adapter()
|
||||
result = adapter_eos.provider_by_id("HomeAssistant").get_homeassistant_entity_ids()
|
||||
except:
|
||||
@@ -159,8 +155,6 @@ class HomeAssistantAdapterCommonSettings(SettingsBaseModel):
|
||||
def eos_solution_entity_ids(self) -> list[str]:
|
||||
"""Entity IDs for optimization solution available at EOS."""
|
||||
try:
|
||||
from akkudoktoreos.adapter.adapter import get_adapter
|
||||
|
||||
adapter_eos = get_adapter()
|
||||
result = adapter_eos.provider_by_id("HomeAssistant").get_eos_solution_entity_ids()
|
||||
except:
|
||||
@@ -172,8 +166,6 @@ class HomeAssistantAdapterCommonSettings(SettingsBaseModel):
|
||||
def eos_device_instruction_entity_ids(self) -> list[str]:
|
||||
"""Entity IDs for energy management instructions available at EOS."""
|
||||
try:
|
||||
from akkudoktoreos.adapter.adapter import get_adapter
|
||||
|
||||
adapter_eos = get_adapter()
|
||||
result = adapter_eos.provider_by_id(
|
||||
"HomeAssistant"
|
||||
|
||||
@@ -11,6 +11,7 @@ Key features:
|
||||
|
||||
import json
|
||||
import os
|
||||
import sys
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
from typing import Any, ClassVar, Optional, Type, Union
|
||||
@@ -26,6 +27,7 @@ from akkudoktoreos.config.configabc import SettingsBaseModel
|
||||
from akkudoktoreos.config.configmigrate import migrate_config_data, migrate_config_file
|
||||
from akkudoktoreos.core.cachesettings import CacheCommonSettings
|
||||
from akkudoktoreos.core.coreabc import SingletonMixin
|
||||
from akkudoktoreos.core.database import DatabaseCommonSettings
|
||||
from akkudoktoreos.core.decorators import classproperty
|
||||
from akkudoktoreos.core.emsettings import (
|
||||
EnergyManagementCommonSettings,
|
||||
@@ -65,16 +67,66 @@ def get_absolute_path(
|
||||
return None
|
||||
|
||||
|
||||
def is_home_assistant_addon() -> bool:
|
||||
"""Detect Home Assistant add-on environment.
|
||||
|
||||
Home Assistant sets this environment variable automatically.
|
||||
"""
|
||||
return "HASSIO_TOKEN" in os.environ or "SUPERVISOR_TOKEN" in os.environ
|
||||
|
||||
|
||||
def default_data_folder_path() -> Path:
|
||||
"""Provide default data folder path.
|
||||
|
||||
1. From EOS_DATA_DIR env
|
||||
2. From EOS_DIR env
|
||||
3. From platform specific default path
|
||||
4. Current working directory
|
||||
|
||||
Note:
|
||||
When running as Home Assistant add-on the path is fixed to /data.
|
||||
"""
|
||||
if is_home_assistant_addon():
|
||||
return Path("/data")
|
||||
|
||||
# 1. From EOS_DATA_DIR env
|
||||
if env_dir := os.getenv(ConfigEOS.EOS_DATA_DIR):
|
||||
try:
|
||||
data_dir = Path(env_dir).resolve()
|
||||
data_dir.mkdir(parents=True, exist_ok=True)
|
||||
return data_dir
|
||||
except Exception as e:
|
||||
logger.warning(f"Could not set up data folder {data_dir}: {e}")
|
||||
|
||||
# 2. From EOS_DIR env
|
||||
if env_dir := os.getenv(ConfigEOS.EOS_DIR):
|
||||
try:
|
||||
data_dir = Path(env_dir).resolve()
|
||||
data_dir.mkdir(parents=True, exist_ok=True)
|
||||
return data_dir
|
||||
except Exception as e:
|
||||
logger.warning(f"Could not set up data folder {data_dir}: {e}")
|
||||
|
||||
# 3. From platform specific default path
|
||||
try:
|
||||
data_dir = Path(user_data_dir(ConfigEOS.APP_NAME, ConfigEOS.APP_AUTHOR))
data_dir.mkdir(parents=True, exist_ok=True)
return data_dir
|
||||
except Exception as e:
|
||||
logger.warning(f"Could not set up data folder {data_dir}: {e}")
|
||||
|
||||
# 4. Current working directory
|
||||
return Path.cwd()
|
||||
|
||||
|
||||
class GeneralSettings(SettingsBaseModel):
    """General settings."""

    _config_folder_path: ClassVar[Optional[Path]] = None
    _config_file_path: ClassVar[Optional[Path]] = None

    # Detect Home Assistant add-on environment
    # Home Assistant sets this environment variable automatically
    _home_assistant_addon: ClassVar[bool] = (
        "HASSIO_TOKEN" in os.environ or "SUPERVISOR_TOKEN" in os.environ
    home_assistant_addon: bool = Field(
        default_factory=is_home_assistant_addon,
        json_schema_extra={"description": "EOS is running as home assistant add-on."},
        exclude=True,
    )

    version: str = Field(
@@ -84,17 +136,16 @@ class GeneralSettings(SettingsBaseModel):
        },
    )

    data_folder_path: Optional[Path] = Field(
        default=None,
    data_folder_path: Path = Field(
        default_factory=default_data_folder_path,
        json_schema_extra={
            "description": "Path to EOS data directory.",
            "examples": [None, "/home/eos/data"],
            "description": "Path to EOS data folder.",
        },
    )

    data_output_subpath: Optional[Path] = Field(
        default="output",
        json_schema_extra={"description": "Sub-path for the EOS output data directory."},
        json_schema_extra={"description": "Sub-path for the EOS output data folder."},
    )

    latitude: Optional[float] = Field(
@@ -134,19 +185,13 @@ class GeneralSettings(SettingsBaseModel):
    @property
    def config_folder_path(self) -> Optional[Path]:
        """Path to EOS configuration directory."""
        return self._config_folder_path
        return self.config._config_file_path.parent

    @computed_field  # type: ignore[prop-decorator]
    @property
    def config_file_path(self) -> Optional[Path]:
        """Path to EOS configuration file."""
        return self._config_file_path

    @computed_field  # type: ignore[prop-decorator]
    @property
    def home_assistant_addon(self) -> bool:
        """EOS is running as home assistant add-on."""
        return self._home_assistant_addon
        return self.config._config_file_path

    compatible_versions: ClassVar[list[str]] = [__version__]

@@ -164,17 +209,19 @@ class GeneralSettings(SettingsBaseModel):

    @field_validator("data_folder_path", mode="after")
    @classmethod
    def validate_data_folder_path(cls, value: Optional[Union[str, Path]]) -> Optional[Path]:
    def validate_data_folder_path(cls, value: Optional[Union[str, Path]]) -> Path:
        """Ensure dir is available."""
        if cls._home_assistant_addon:
        if is_home_assistant_addon():
            # Force to home assistant add-on /data directory
            return Path("/data")
        if value is None:
            return None
            return default_data_folder_path()
        if isinstance(value, str):
            value = Path(value)
        value.resolve()
        if not value.is_dir():
            try:
                value.resolve()
                value.mkdir(parents=True, exist_ok=True)
            except Exception:
                raise ValueError(f"Data folder path '{value}' is not a directory.")
        return value

@@ -191,6 +238,9 @@ class SettingsEOS(pydantic_settings.BaseSettings, PydanticModelNestedValueMixin)
    cache: Optional[CacheCommonSettings] = Field(
        default=None, json_schema_extra={"description": "Cache Settings"}
    )
    database: Optional[DatabaseCommonSettings] = Field(
        default=None, json_schema_extra={"description": "Database Settings"}
    )
    ems: Optional[EnergyManagementCommonSettings] = Field(
        default=None, json_schema_extra={"description": "Energy Management Settings"}
    )
@@ -248,22 +298,23 @@ class SettingsEOSDefaults(SettingsEOS):
    Used by ConfigEOS instance to make all fields available.
    """

    general: GeneralSettings = GeneralSettings()
    cache: CacheCommonSettings = CacheCommonSettings()
    ems: EnergyManagementCommonSettings = EnergyManagementCommonSettings()
    logging: LoggingCommonSettings = LoggingCommonSettings()
    devices: DevicesCommonSettings = DevicesCommonSettings()
    measurement: MeasurementCommonSettings = MeasurementCommonSettings()
    optimization: OptimizationCommonSettings = OptimizationCommonSettings()
    prediction: PredictionCommonSettings = PredictionCommonSettings()
    elecprice: ElecPriceCommonSettings = ElecPriceCommonSettings()
    feedintariff: FeedInTariffCommonSettings = FeedInTariffCommonSettings()
    load: LoadCommonSettings = LoadCommonSettings()
    pvforecast: PVForecastCommonSettings = PVForecastCommonSettings()
    weather: WeatherCommonSettings = WeatherCommonSettings()
    server: ServerCommonSettings = ServerCommonSettings()
    utils: UtilsCommonSettings = UtilsCommonSettings()
    adapter: AdapterCommonSettings = AdapterCommonSettings()
    general: GeneralSettings = Field(default_factory=GeneralSettings)
    cache: CacheCommonSettings = Field(default_factory=CacheCommonSettings)
    database: DatabaseCommonSettings = Field(default_factory=DatabaseCommonSettings)
    ems: EnergyManagementCommonSettings = Field(default_factory=EnergyManagementCommonSettings)
    logging: LoggingCommonSettings = Field(default_factory=LoggingCommonSettings)
    devices: DevicesCommonSettings = Field(default_factory=DevicesCommonSettings)
    measurement: MeasurementCommonSettings = Field(default_factory=MeasurementCommonSettings)
    optimization: OptimizationCommonSettings = Field(default_factory=OptimizationCommonSettings)
    prediction: PredictionCommonSettings = Field(default_factory=PredictionCommonSettings)
    elecprice: ElecPriceCommonSettings = Field(default_factory=ElecPriceCommonSettings)
    feedintariff: FeedInTariffCommonSettings = Field(default_factory=FeedInTariffCommonSettings)
    load: LoadCommonSettings = Field(default_factory=LoadCommonSettings)
    pvforecast: PVForecastCommonSettings = Field(default_factory=PVForecastCommonSettings)
    weather: WeatherCommonSettings = Field(default_factory=WeatherCommonSettings)
    server: ServerCommonSettings = Field(default_factory=ServerCommonSettings)
    utils: UtilsCommonSettings = Field(default_factory=UtilsCommonSettings)
    adapter: AdapterCommonSettings = Field(default_factory=AdapterCommonSettings)

    def __hash__(self) -> int:
        # Just for usage in configmigrate, finally overwritten when used by ConfigEOS.
@@ -300,10 +351,6 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
    the same instance, which contains the most up-to-date configuration. Modifying the configuration
    in one part of the application reflects across all references to this class.

    Attributes:
        config_folder_path (Optional[Path]): Path to the configuration directory.
        config_file_path (Optional[Path]): Path to the configuration file.

    Raises:
        FileNotFoundError: If no configuration file is found, and creating a default configuration fails.

@@ -323,6 +370,15 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
    EOS_CONFIG_DIR: ClassVar[str] = "EOS_CONFIG_DIR"
    ENCODING: ClassVar[str] = "UTF-8"
    CONFIG_FILE_NAME: ClassVar[str] = "EOS.config.json"
    _init_config_eos: ClassVar[dict[str, bool]] = {
        "with_init_settings": True,
        "with_env_settings": True,
        "with_dotenv_settings": True,
        "with_file_settings": True,
        "with_file_secret_settings": True,
    }
    _config_file_path: ClassVar[Optional[Path]] = None
    _force_documentation_mode = False

    def __hash__(self) -> int:
        # ConfigEOS is a singleton
@@ -377,31 +433,156 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
          configuration directory cannot be created.
        - It ensures that a fallback to a default configuration file is always possible.
        """
        # Ensure we know and have the config folder path and the config file
        config_file = cls._setup_config_file()

        def lazy_config_file_settings() -> dict:
            """Config file settings.

            This function runs at **instance creation**, not class definition, so it is
            re-run whenever ConfigEOS is recreated.
            """
            config_file_path, exists = cls._get_config_file_path()
            if not exists:
                # Create minimum config file
                config_minimum_content = '{ "general": { "version": "' + __version__ + '" } }'
                if config_file_path.is_relative_to(ConfigEOS.package_root_path):
                    # Never write into package directory
                    error_msg = (
                        f"Could not create minimum config file. "
                        f"Config file path '{config_file_path}' is within package root "
                        f"'{ConfigEOS.package_root_path}'"
                    )
                    logger.error(error_msg)
                    raise RuntimeError(error_msg)
                try:
                    config_file_path.parent.mkdir(parents=True, exist_ok=True)
                    config_file_path.write_text(config_minimum_content, encoding="utf-8")
                except Exception as exc:
                    # Create minimum config in temporary config directory as last resort
                    error_msg = (
                        f"Could not create minimum config file in {config_file_path.parent}: {exc}"
                    )
                    logger.error(error_msg)
                    temp_dir = Path(tempfile.mkdtemp())
                    info_msg = f"Using temporary config directory {temp_dir}"
                    logger.info(info_msg)
                    config_file_path = temp_dir / config_file_path.name
                    config_file_path.write_text(config_minimum_content, encoding="utf-8")

            # Remember for other lazy settings and computed_field
            cls._config_file_path = config_file_path

            return {}

        def lazy_data_folder_path_settings() -> dict:
            """Data folder path settings.

            This function runs at **instance creation**, not class definition, so it is
            re-run whenever ConfigEOS is recreated.
            """
            # Updates path to the data directory.
            data_folder_settings = {
                "general": {
                    "data_folder_path": default_data_folder_path(),
                },
            }

            return data_folder_settings

        def lazy_init_settings() -> dict:
            """Init settings.

            This function runs at **instance creation**, not class definition, so it is
            re-run whenever ConfigEOS is recreated.
            """
            if not cls._init_config_eos.get("with_init_settings", True):
                logger.debug("Config initialisation with init settings is disabled.")
                return {}

            settings = init_settings()

            return settings

        def lazy_env_settings() -> dict:
            """Env settings.

            This function runs at **instance creation**, not class definition, so it is
            re-run whenever ConfigEOS is recreated.
            """
            if not cls._init_config_eos.get("with_env_settings", True):
                logger.debug("Config initialisation with env settings is disabled.")
                return {}

            return env_settings()

        def lazy_dotenv_settings() -> dict:
            """Dotenv settings.

            This function runs at **instance creation**, not class definition, so it is
            re-run whenever ConfigEOS is recreated.
            """
            if not cls._init_config_eos.get("with_dotenv_settings", True):
                logger.debug("Config initialisation with dotenv settings is disabled.")
                return {}

            return dotenv_settings()

        def lazy_file_settings() -> dict:
            """File settings.

            This function runs at **instance creation**, not class definition, so it is
            re-run whenever ConfigEOS is recreated.

            Ensures the config file exists and creates a backup if necessary.
            """
            if not cls._init_config_eos.get("with_file_settings", True):
                logger.debug("Config initialisation with file settings is disabled.")
                return {}

            config_file = cls._config_file_path  # provided by lazy_config_file_settings
            if config_file is None:
                # This should not happen
                raise RuntimeError("Config file path not set.")

            try:
                backup_file = config_file.with_suffix(f".{to_datetime(as_string='YYYYMMDDHHmmss')}")
                if migrate_config_file(config_file, backup_file):
                    # If the config file has the correct version, add it as a settings source
                    settings = pydantic_settings.JsonConfigSettingsSource(
                        settings_cls, json_file=config_file
                    )()
            except Exception as ex:
                logger.error(
                    f"Error reading config file '{config_file}' (falling back to default config): {ex}"
                )
                settings = {}

            return settings

        def lazy_file_secret_settings() -> dict:
            """File secret settings.

            This function runs at **instance creation**, not class definition, so it is
            re-run whenever ConfigEOS is recreated.
            """
            if not cls._init_config_eos.get("with_file_secret_settings", True):
                logger.debug("Config initialisation with file secret settings is disabled.")
                return {}

            return file_secret_settings()

        # All the settings sources in priority sequence
        # The settings are all lazily evaluated at instance creation time to allow for
        # runtime configuration.
        setting_sources = [
            init_settings,
            env_settings,
            dotenv_settings,
            lazy_config_file_settings,  # Prio high
            lazy_init_settings,
            lazy_env_settings,
            lazy_dotenv_settings,
            lazy_file_settings,
            lazy_data_folder_path_settings,
            lazy_file_secret_settings,  # Prio low
        ]

        # Append file settings to sources
        file_settings: Optional[pydantic_settings.JsonConfigSettingsSource] = None
        try:
            backup_file = config_file.with_suffix(f".{to_datetime(as_string='YYYYMMDDHHmmss')}")
            if migrate_config_file(config_file, backup_file):
                # If the config file has the correct version, add it as a settings source
                file_settings = pydantic_settings.JsonConfigSettingsSource(
                    settings_cls, json_file=config_file
                )
                setting_sources.append(file_settings)
        except Exception as ex:
            logger.error(
                f"Error reading config file '{config_file}' (falling back to default config): {ex}"
            )

        return tuple(setting_sources)

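The priority ordering of these lazily evaluated sources boils down to a first-wins merge over zero-argument callables: earlier sources win, later sources only fill gaps, and nothing is evaluated until the merge actually runs. A minimal sketch under that assumption (the source functions and their values here are hypothetical stand-ins, not EOS code):

```python
from typing import Callable

# Hypothetical stand-ins for lazy source callables; each returns a
# (possibly empty) dict of settings only when finally invoked.
def lazy_env_source() -> dict:
    return {"general": {"latitude": 52.5}}

def lazy_file_source() -> dict:
    return {"general": {"latitude": 48.1, "data_folder_path": "/home/eos/data"}}

def merge_sources(sources: list[Callable[[], dict]]) -> dict:
    """Merge settings dicts first-wins: earlier sources have higher priority."""
    merged: dict = {}
    for source in sources:
        for section, values in source().items():  # evaluated lazily, at merge time
            merged.setdefault(section, {})
            for key, val in values.items():
                merged[section].setdefault(key, val)  # keep earlier value if present
    return merged

settings = merge_sources([lazy_env_source, lazy_file_source])
# env source wins for "latitude"; file source only fills "data_folder_path"
```

Deferring evaluation to call time is what lets the real sources react to runtime state (env vars, a freshly created config file) instead of the state at class definition.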
    @classproperty
@@ -409,30 +590,41 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
        """Compute the package root path."""
        return Path(__file__).parent.parent.resolve()

    @classmethod
    def documentation_mode(cls) -> bool:
        """Are we running in documentation mode.

        Some checks may be relaxed to allow for proper documentation execution.
        """
        # Detect if Sphinx is importing this module
        is_sphinx = "sphinx" in sys.modules or getattr(sys, "_called_from_sphinx", False)
        return cls._force_documentation_mode or is_sphinx

    def __init__(self, *args: Any, **kwargs: Any) -> None:
        """Initializes the singleton ConfigEOS instance.

        Configuration data is loaded from a configuration file or a default one is created if none
        exists.
        """
        logger.debug("Config init with parameters {} {}", args, kwargs)
        # Check for singleton guard
        if hasattr(self, "_initialized"):
            logger.debug("Config init called again with parameters {} {}", args, kwargs)
            return
        logger.debug("Config init with parameters {} {}", args, kwargs)
        self._setup(self, *args, **kwargs)

    def _setup(self, *args: Any, **kwargs: Any) -> None:
        """Re-initialize global settings."""
        logger.debug("Config setup with parameters {} {}", args, kwargs)

        # Assure settings base knows the singleton EOS configuration
        SettingsBaseModel.config = self

        # (Re-)load settings - call base class init
        SettingsEOSDefaults.__init__(self, *args, **kwargs)
        # Init config file and data folder paths
        self._setup_config_file()
        self._update_data_folder_path()

        self._initialized = True
        logger.debug("Config setup:\n{}", self)
        logger.debug(f"Config setup:\n{self}")

    def merge_settings(self, settings: SettingsEOS) -> None:
        """Merges the provided settings into the global settings for EOS, with optional overwrite.
@@ -562,48 +754,6 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):

        return result

    def _update_data_folder_path(self) -> None:
        """Updates path to the data directory."""
        # From Settings
        if data_dir := self.general.data_folder_path:
            try:
                data_dir.mkdir(parents=True, exist_ok=True)
                self.general.data_folder_path = data_dir
                return
            except Exception as e:
                logger.warning(f"Could not setup data dir {data_dir}: {e}")
        # From EOS_DATA_DIR env
        if env_dir := os.getenv(self.EOS_DATA_DIR):
            try:
                data_dir = Path(env_dir).resolve()
                data_dir.mkdir(parents=True, exist_ok=True)
                self.general.data_folder_path = data_dir
                return
            except Exception as e:
                logger.warning(f"Could not setup data dir {data_dir}: {e}")
        # From EOS_DIR env
        if env_dir := os.getenv(self.EOS_DIR):
            try:
                data_dir = Path(env_dir).resolve()
                data_dir.mkdir(parents=True, exist_ok=True)
                self.general.data_folder_path = data_dir
                return
            except Exception as e:
                logger.warning(f"Could not setup data dir {data_dir}: {e}")
        # From platform specific default path
        try:
            data_dir = Path(user_data_dir(self.APP_NAME, self.APP_AUTHOR))
            if data_dir is not None:
                data_dir.mkdir(parents=True, exist_ok=True)
                self.general.data_folder_path = data_dir
                return
        except Exception as e:
            logger.warning(f"Could not setup data dir {data_dir}: {e}")
        # Current working directory
        data_dir = Path.cwd()
        logger.warning(f"Using data dir {data_dir}")
        self.general.data_folder_path = data_dir

    @classmethod
    def _get_config_file_path(cls) -> tuple[Path, bool]:
        """Find a valid configuration file or return the desired path for a new config file.
@@ -618,32 +768,80 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
        Returns:
            tuple[Path, bool]: The path to the configuration file and whether a config file already exists there.
        """
        if GeneralSettings._home_assistant_addon:
        if is_home_assistant_addon():
            # Only /data is persistent for home assistant add-on
            cfile = Path("/data/config") / cls.CONFIG_FILE_NAME
            logger.debug(f"Config file forced to: '{cfile}'")
            return cfile, cfile.exists()

        config_dirs = []
        env_eos_dir = os.getenv(cls.EOS_DIR)
        logger.debug(f"Environment EOS_DIR: '{env_eos_dir}'")

        env_eos_config_dir = os.getenv(cls.EOS_CONFIG_DIR)
        logger.debug(f"Environment EOS_CONFIG_DIR: '{env_eos_config_dir}'")
        env_config_dir = get_absolute_path(env_eos_dir, env_eos_config_dir)
        logger.debug(f"Resulting environment config dir: '{env_config_dir}'")
        # 1. Directory specified by EOS_CONFIG_DIR
        config_dir: Optional[Union[Path, str]] = os.getenv(cls.EOS_CONFIG_DIR)
        if config_dir:
            logger.debug(f"Environment EOS_CONFIG_DIR: '{config_dir}'")
            config_dir = Path(config_dir).resolve()
            if config_dir.exists():
                config_dirs.append(config_dir)
            else:
                logger.info(f"Environment EOS_CONFIG_DIR: '{config_dir}' does not exist.")

        if env_config_dir is not None:
            config_dirs.append(env_config_dir.resolve())
        config_dirs.append(Path(user_config_dir(cls.APP_NAME, cls.APP_AUTHOR)))
        config_dirs.append(Path.cwd())
        # 2. Directory specified by EOS_DIR / EOS_CONFIG_DIR
        eos_dir = os.getenv(cls.EOS_DIR)
        eos_config_dir = os.getenv(cls.EOS_CONFIG_DIR)
        if eos_dir and eos_config_dir:
            logger.debug(f"Environment EOS_DIR/EOS_CONFIG_DIR: '{eos_dir}/{eos_config_dir}'")
            config_dir = get_absolute_path(eos_dir, eos_config_dir)
            if config_dir:
                config_dir = Path(config_dir).resolve()
                if config_dir.exists():
                    config_dirs.append(config_dir)
                else:
                    logger.info(
                        f"Environment EOS_DIR/EOS_CONFIG_DIR: '{config_dir}' does not exist."
                    )
            else:
                logger.debug(
                    f"Environment EOS_DIR/EOS_CONFIG_DIR: '{eos_dir}/{eos_config_dir}' not a valid path"
                )

        # 3. Directory specified by EOS_DIR
        config_dir = os.getenv(cls.EOS_DIR)
        if config_dir:
            logger.debug(f"Environment EOS_DIR: '{config_dir}'")
            config_dir = Path(config_dir).resolve()
            if config_dir.exists():
                config_dirs.append(config_dir)
            else:
                logger.info(f"Environment EOS_DIR: '{config_dir}' does not exist.")

        # 4. User configuration directory
        config_dir = Path(user_config_dir(cls.APP_NAME, cls.APP_AUTHOR)).resolve()
        logger.debug(f"User config dir: '{config_dir}'")
        if config_dir.exists():
            config_dirs.append(config_dir)
        else:
            logger.info(f"User config dir: '{config_dir}' does not exist.")

        # 5. Current working directory
        config_dir = Path.cwd()
        logger.debug(f"Current working dir: '{config_dir}'")
        if config_dir.exists():
            config_dirs.append(config_dir)
        else:
            logger.info(f"Current working dir: '{config_dir}' does not exist.")

        # Search for file
        for cdir in config_dirs:
            cfile = cdir.joinpath(cls.CONFIG_FILE_NAME)
            if cfile.exists():
                logger.debug(f"Found config file: '{cfile}'")
                return cfile, True

        return config_dirs[0].joinpath(cls.CONFIG_FILE_NAME), False
        # Return highest priority directory with standard file name appended
        default_config_file = config_dirs[0].joinpath(cls.CONFIG_FILE_NAME)
        logger.debug(f"No config file found. Defaulting to: '{default_config_file}'")
        return default_config_file, False

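The final search step above reduces to a small pattern: walk candidate directories in priority order, return the first existing config file, and otherwise default to the highest-priority directory. A self-contained sketch of just that step (`find_config_file` is an illustrative stand-in, not the EOS method):

```python
import tempfile
from pathlib import Path

CONFIG_FILE_NAME = "EOS.config.json"  # mirrors ConfigEOS.CONFIG_FILE_NAME

def find_config_file(config_dirs: list[Path]) -> tuple[Path, bool]:
    """Search the candidate dirs in priority order for an existing config file.

    Falls back to the highest-priority directory when no file exists anywhere.
    """
    for cdir in config_dirs:
        cfile = cdir / CONFIG_FILE_NAME
        if cfile.exists():
            return cfile, True
    # No config file anywhere: default to the first (highest-priority) directory
    return config_dirs[0] / CONFIG_FILE_NAME, False

high = Path(tempfile.mkdtemp())  # highest-priority candidate, empty
low = Path(tempfile.mkdtemp())   # lower-priority candidate, holds a config file
(low / CONFIG_FILE_NAME).write_text("{}")

found, exists = find_config_file([high, low])
# The existing file in the lower-priority dir wins over the empty higher one.

missing, missing_exists = find_config_file([high])
# With no file anywhere, the default lands in the highest-priority dir.
```

Note the asymmetry this encodes: an *existing* file anywhere beats priority order, while a *new* file is always created at the top of the order.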
    @classmethod
    def _setup_config_file(cls) -> Path:
@@ -714,8 +912,3 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
        The first non-None value in priority order is taken.
        """
        self._setup(**self.model_dump())


def get_config() -> ConfigEOS:
    """Gets the EOS configuration data."""
    return ConfigEOS()

@@ -3,7 +3,7 @@
import json
import shutil
from pathlib import Path
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Set, Tuple, Union
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Set, Tuple, Union, cast

from loguru import logger

@@ -13,19 +13,33 @@ if TYPE_CHECKING:
    # There are circular dependencies - only import here for type checking
    from akkudoktoreos.config.config import SettingsEOSDefaults


_KEEP_DEFAULT = object()

# -----------------------------
# Global migration map constant
# -----------------------------
# key: old JSON path, value: either
#   - str (new model path)
#   - tuple[str, Callable[[Any], Any]] (new path + transform)
#   - _KEEP_DEFAULT (keep new default if old value is None or not given)
#   - None (drop)
MIGRATION_MAP: Dict[str, Union[str, Tuple[str, Callable[[Any], Any]], None]] = {
MIGRATION_MAP: Dict[
    str,
    Union[
        str,  # simple rename
        Tuple[str, Callable[[Any], Any]],  # rename + transform
        Tuple[str, object],  # rename + _KEEP_DEFAULT
        Tuple[str, object, Callable[[Any], Any]],  # rename + _KEEP_DEFAULT + transform
        None,  # drop
    ],
] = {
    # 0.2.0.dev -> 0.2.0.dev
    "adapter/homeassistant/optimization_solution_entity_ids": (
        "adapter/homeassistant/solution_entity_ids",
        lambda v: v if isinstance(v, list) else None,
    ),
    "general/data_folder_path": ("general/data_folder_path", _KEEP_DEFAULT),
    # 0.2.0 -> 0.2.0+dev
    "elecprice/provider_settings/ElecPriceImport/import_file_path": "elecprice/elecpriceimport/import_file_path",
    "elecprice/provider_settings/ElecPriceImport/import_json": "elecprice/elecpriceimport/import_json",
@@ -91,20 +105,32 @@ def migrate_config_data(config_data: Dict[str, Any]) -> "SettingsEOSDefaults":
    for old_path, mapping in MIGRATION_MAP.items():
        new_path = None
        transform = None
        keep_default = False

        if mapping is None:
            migrated_source_paths.add(old_path.strip("/"))
            logger.debug(f"🗑️ Migration map: dropping '{old_path}'")
            continue
        if isinstance(mapping, tuple):
            new_path, transform = mapping
            new_path = mapping[0]
            for m in mapping[1:]:
                if m is _KEEP_DEFAULT:
                    keep_default = True
                elif callable(m):
                    transform = cast(Callable[[Any], Any], m)
        else:
            new_path = mapping

        old_value = _get_json_nested_value(config_data, old_path)
        if old_value is None:
            migrated_source_paths.add(old_path.strip("/"))
            mapped_count += 1
            logger.debug(f"✅ Migrated mapped '{old_path}' → 'None'")
            if keep_default:
                migrated_source_paths.add(old_path.strip("/"))
                mapped_count += 1
                logger.debug(f"✅ Migrated mapped '{old_path}' → keeping new default")
            else:
                migrated_source_paths.add(old_path.strip("/"))
                mapped_count += 1
                logger.debug(f"✅ Migrated mapped '{old_path}' → 'None'")
            continue

        try:

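The tuple-unpacking convention of the migration map can be isolated into a small helper that turns any map value into a uniform `(new_path, transform, keep_default)` triple. This is a sketch of the resolution step only; `resolve_mapping` is a hypothetical name, and the sample entries mirror the map examples above:

```python
from typing import Any, Callable, Optional, cast

_KEEP_DEFAULT = object()  # sentinel: keep the new model's default value

def resolve_mapping(mapping: Any) -> tuple[Optional[str], Optional[Callable], bool]:
    """Unpack a migration-map value into (new_path, transform, keep_default)."""
    if mapping is None:
        return None, None, False  # drop the old value entirely
    if isinstance(mapping, tuple):
        new_path = mapping[0]
        transform: Optional[Callable[[Any], Any]] = None
        keep_default = False
        # Remaining tuple members may appear in any combination
        for m in mapping[1:]:
            if m is _KEEP_DEFAULT:
                keep_default = True
            elif callable(m):
                transform = cast(Callable[[Any], Any], m)
        return new_path, transform, keep_default
    return mapping, None, False  # plain string: simple rename

# Entries mirroring the map shown above:
rename = resolve_mapping("elecprice/elecpriceimport/import_json")
keep = resolve_mapping(("general/data_folder_path", _KEEP_DEFAULT))
```

Using a sentinel object rather than `None` keeps "drop this key" and "keep the new default" distinguishable in the map.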
@@ -13,6 +13,7 @@ import os
import pickle
import tempfile
import threading
from pathlib import Path
from typing import (
    IO,
    Any,
@@ -236,6 +237,24 @@ Param = ParamSpec("Param")
RetType = TypeVar("RetType")


def cache_clear(clear_all: Optional[bool] = None) -> None:
    """Cleanup expired cache files."""
    if clear_all:
        CacheFileStore().clear(clear_all=True)
    else:
        CacheFileStore().clear(before_datetime=to_datetime())


def cache_load() -> dict:
    """Load cache from cachefilestore.json."""
    return CacheFileStore().load_store()


def cache_save() -> dict:
    """Save cache to cachefilestore.json."""
    return CacheFileStore().save_store()


class CacheFileRecord(PydanticBaseModel):
    cache_file: Any = Field(
        ..., json_schema_extra={"description": "File descriptor of the cache file."}
@@ -284,9 +303,16 @@ class CacheFileStore(ConfigMixin, SingletonMixin):
            return
        self._store: Dict[str, CacheFileRecord] = {}
        self._store_lock = threading.RLock()
        self._store_file = self.config.cache.path().joinpath("cachefilestore.json")
        super().__init__(*args, **kwargs)

    def _store_file(self) -> Optional[Path]:
        """Get file to store the cache."""
        try:
            return self.config.cache.path().joinpath("cachefilestore.json")
        except Exception:
            logger.error("Path for cache files missing. Please configure!")
        return None

    def _until_datetime_by_options(
        self,
        until_date: Optional[Any] = None,
@@ -496,10 +522,18 @@ class CacheFileStore(ConfigMixin, SingletonMixin):
            # File already available
            cache_file_obj = cache_item.cache_file
        else:
            self.config.cache.path().mkdir(parents=True, exist_ok=True)
            cache_file_obj = tempfile.NamedTemporaryFile(
                mode=mode, delete=delete, suffix=suffix, dir=self.config.cache.path()
            )
            # Create cache file
            store_file = self._store_file()
            if store_file:
                store_file.parent.mkdir(parents=True, exist_ok=True)
                cache_file_obj = tempfile.NamedTemporaryFile(
                    mode=mode, delete=delete, suffix=suffix, dir=store_file.parent
                )
            else:
                # Cache storage not configured, use temporary path
                cache_file_obj = tempfile.NamedTemporaryFile(
                    mode=mode, delete=delete, suffix=suffix
                )
            self._store[cache_file_key] = CacheFileRecord(
                cache_file=cache_file_obj,
                until_datetime=until_datetime_dt,
@@ -766,10 +800,14 @@ class CacheFileStore(ConfigMixin, SingletonMixin):
        Returns:
            data (dict): cache management data that was saved.
        """
        store_file = self._store_file()
        if store_file is None:
            return {}

        with self._store_lock:
            self._store_file.parent.mkdir(parents=True, exist_ok=True)
            store_file.parent.mkdir(parents=True, exist_ok=True)
            store_to_save = self.current_store()
            with self._store_file.open("w", encoding="utf-8", newline="\n") as f:
            with store_file.open("w", encoding="utf-8", newline="\n") as f:
                try:
                    json.dump(store_to_save, f, indent=4)
                except Exception as e:
@@ -782,18 +820,22 @@ class CacheFileStore(ConfigMixin, SingletonMixin):
        Returns:
            data (dict): cache management data that was loaded.
        """
        store_file = self._store_file()
        if store_file is None:
            return {}

        with self._store_lock:
            store_loaded = {}
            if self._store_file.exists():
                with self._store_file.open("r", encoding="utf-8", newline=None) as f:
            if store_file.exists():
                with store_file.open("r", encoding="utf-8", newline=None) as f:
                    try:
                        store_to_load = json.load(f)
                    except Exception as e:
                        logger.error(
                            f"Error loading cache file store: {e}\n"
                            + f"Deleting the store file {self._store_file}."
                            + f"Deleting the store file {store_file}."
                        )
                        self._store_file.unlink()
                        store_file.unlink()
                        return {}
                    for key, record in store_to_load.items():
                        if record is None:

@@ -20,7 +20,8 @@ class CacheCommonSettings(SettingsBaseModel):
    )

    cleanup_interval: float = Field(
        default=5 * 60,
        default=5.0 * 60,
        ge=5.0,
        json_schema_extra={"description": "Interval in seconds for EOS file cache cleanup."},
    )


@@ -1,28 +1,76 @@
"""Abstract and base classes for EOS core.

This module provides foundational classes for handling configuration and prediction functionality
in EOS. It includes base classes that provide convenient access to global
configuration and prediction instances through properties.

Classes:
    - ConfigMixin: Mixin class for managing and accessing global configuration.
    - PredictionMixin: Mixin class for managing and accessing global prediction data.
    - SingletonMixin: Mixin class to create singletons.
This module provides foundational classes and functions to access global EOS resources.
"""

from __future__ import (
    annotations,  # lazily evaluate type annotations as strings; helps to prevent circular dependencies
)

import threading
from typing import Any, ClassVar, Dict, Optional, Type
from typing import TYPE_CHECKING, Any, ClassVar, Dict, Optional, Type, Union

from loguru import logger

from akkudoktoreos.core.decorators import classproperty
from akkudoktoreos.utils.datetimeutil import DateTime

adapter_eos: Any = None
config_eos: Any = None
measurement_eos: Any = None
prediction_eos: Any = None
ems_eos: Any = None
if TYPE_CHECKING:
    # Prevent circular dependencies
    from akkudoktoreos.adapter.adapter import Adapter
    from akkudoktoreos.config.config import ConfigEOS
    from akkudoktoreos.core.database import Database
    from akkudoktoreos.core.ems import EnergyManagement
    from akkudoktoreos.devices.devices import ResourceRegistry
    from akkudoktoreos.measurement.measurement import Measurement
    from akkudoktoreos.prediction.prediction import Prediction


# Module level singleton cache
_adapter_eos: Optional[Adapter] = None
_config_eos: Optional[ConfigEOS] = None
_ems_eos: Optional[EnergyManagement] = None
_database_eos: Optional[Database] = None
_measurement_eos: Optional[Measurement] = None
_prediction_eos: Optional[Prediction] = None
_resource_registry_eos: Optional[ResourceRegistry] = None


def get_adapter(init: bool = False) -> Adapter:
    """Retrieve the singleton EOS Adapter instance.

    This function provides access to the global EOS Adapter instance. The Adapter
    object is created on first access if `init` is True. If the instance is
    accessed before initialization and `init` is False, a RuntimeError is raised.

    Args:
        init (bool): If True, create the Adapter instance if it does not exist.
            Default is False.

    Returns:
        Adapter: The global EOS Adapter instance.

    Raises:
        RuntimeError: If accessed before initialization with `init=False`.

    Usage:
        .. code-block:: python

            adapter = get_adapter(init=True)  # Initialize and retrieve
            adapter.do_something()
    """
    global _adapter_eos
    if _adapter_eos is None:
        from akkudoktoreos.config.config import ConfigEOS

        if not init and not ConfigEOS.documentation_mode():
            raise RuntimeError("Adapter access before init.")

        from akkudoktoreos.adapter.adapter import Adapter

        _adapter_eos = Adapter()

    return _adapter_eos


class AdapterMixin:
|
||||
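The accessor functions introduced in this diff all follow the same lazy-singleton pattern: a module-level cache, an `init` guard that raises on access before explicit initialization, and a deferred import to avoid import-time cycles. A minimal standalone sketch of that pattern — `Service` and `get_service` are illustrative placeholders, not the real EOS classes, and the `documentation_mode` escape hatch is omitted:

```python
from typing import Optional


class Service:
    """Illustrative stand-in for an EOS singleton such as Adapter or Database."""

    def ping(self) -> str:
        return "pong"


_service: Optional[Service] = None


def get_service(init: bool = False) -> Service:
    """Lazily create and return the module-level Service singleton.

    Raises RuntimeError when accessed before explicit initialization,
    mirroring the guard used by the EOS get_* accessors.
    """
    global _service
    if _service is None:
        if not init:
            raise RuntimeError("Service access before init.")
        _service = Service()
    return _service
```

Repeated calls return the identical cached object, which is what lets the mixins below delegate to these accessors without holding their own references.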
@@ -49,20 +97,84 @@ class AdapterMixin:
     """

     @classproperty
-    def adapter(cls) -> Any:
+    def adapter(cls) -> Adapter:
         """Convenience class method/ attribute to retrieve the EOS adapters.

         Returns:
             Adapter: The adapters.
         """
-        # avoid circular dependency at import time
-        global adapter_eos
-        if adapter_eos is None:
-            from akkudoktoreos.adapter.adapter import get_adapter
-
-            adapter_eos = get_adapter()
-
-        return adapter_eos
+        return get_adapter()
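The mixins rely on a `classproperty` decorator imported from `akkudoktoreos.core.decorators`; its implementation is not part of this diff. A common minimal descriptor providing the same read-only class-level attribute behavior looks like this — a sketch under that assumption, not the EOS implementation:

```python
from typing import Any, Callable, Optional


class classproperty:
    """Descriptor that exposes a classmethod-like callable as a class attribute."""

    def __init__(self, fget: Callable[[type], Any]) -> None:
        self.fget = fget

    def __get__(self, instance: Any, owner: Optional[type] = None) -> Any:
        # Works for both Config.host and Config().host
        if owner is None:
            owner = type(instance)
        return self.fget(owner)


class Config:
    _host = "localhost"

    @classproperty
    def host(cls) -> str:
        return cls._host
```

This is why `SomeMixin.adapter` works without instantiating the mixin: attribute access on the class triggers `__get__` with `owner` set.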
+def get_config(init: Union[bool, dict[str, bool]] = False) -> ConfigEOS:
+    """Retrieve the singleton EOS configuration instance.
+
+    This function provides controlled access to the global EOS configuration
+    singleton (`ConfigEOS`). The configuration is created lazily on first
+    access and can be initialized with a configurable set of settings sources.
+
+    By default, accessing the configuration without prior initialization
+    raises a `RuntimeError`. Passing `init=True` or an initialization
+    configuration dictionary enables creation of the singleton.
+
+    Args:
+        init (Union[bool, dict[str, bool]]):
+            Controls initialization of the configuration.
+
+            - ``False`` (default): Do not initialize. Raises ``RuntimeError``
+              if the configuration does not yet exist.
+            - ``True``: Initialize the configuration using default
+              initialization behavior (all settings sources enabled).
+            - ``dict[str, bool]``: Initialize the configuration with fine-grained
+              control over which settings sources are enabled. Missing keys
+              default to ``True``.
+
+            Supported keys include:
+                - ``with_init_settings``
+                - ``with_env_settings``
+                - ``with_dotenv_settings``
+                - ``with_file_settings``
+                - ``with_file_secret_settings``
+
+    Returns:
+        ConfigEOS: The global EOS configuration singleton instance.
+
+    Raises:
+        RuntimeError:
+            If the configuration has not been initialized and ``init`` is
+            ``False``.
+
+    Usage:
+        .. code-block:: python
+
+            # Initialize with default behavior (all sources enabled)
+            config = get_config(init=True)
+
+            # Initialize with explicit source control
+            config = get_config(init={
+                "with_init_settings": True,
+                "with_env_settings": True,
+                "with_dotenv_settings": True,
+                "with_file_settings": False,
+                "with_file_secret_settings": False,
+            })
+
+            # Access existing configuration
+            host = get_config().server.host
+    """
+    global _config_eos
+    if _config_eos is None:
+        from akkudoktoreos.config.config import ConfigEOS
+
+        if not init and not ConfigEOS.documentation_mode():
+            raise RuntimeError("Config access before init.")
+
+        if isinstance(init, dict):
+            ConfigEOS._init_config_eos = init
+
+        _config_eos = ConfigEOS()
+
+    return _config_eos
+
+
 class ConfigMixin:
@@ -89,20 +201,51 @@ class ConfigMixin:
     """

     @classproperty
-    def config(cls) -> Any:
+    def config(cls) -> ConfigEOS:
         """Convenience class method/ attribute to retrieve the EOS configuration data.

         Returns:
             ConfigEOS: The configuration.
         """
-        # avoid circular dependency at import time
-        global config_eos
-        if config_eos is None:
-            from akkudoktoreos.config.config import get_config
-
-            config_eos = get_config()
-
-        return config_eos
+        return get_config()
+def get_measurement(init: bool = False) -> Measurement:
+    """Retrieve the singleton EOS Measurement instance.
+
+    This function provides access to the global EOS Measurement object. The
+    Measurement instance is created on first access if `init` is True. If the
+    instance is accessed before initialization and `init` is False, a RuntimeError
+    is raised.
+
+    Args:
+        init (bool): If True, create the Measurement instance if it does not exist.
+            Default is False.
+
+    Returns:
+        Measurement: The global EOS Measurement instance.
+
+    Raises:
+        RuntimeError: If accessed before initialization with `init=False`.
+
+    Usage:
+        .. code-block:: python
+
+            measurement = get_measurement(init=True)  # Initialize and retrieve
+            measurement.read_sensor_data()
+    """
+    global _measurement_eos
+    if _measurement_eos is None:
+        from akkudoktoreos.config.config import ConfigEOS
+
+        if not init and not ConfigEOS.documentation_mode():
+            raise RuntimeError("Measurement access before init.")
+
+        from akkudoktoreos.measurement.measurement import Measurement
+
+        _measurement_eos = Measurement()
+
+    return _measurement_eos
+
+
 class MeasurementMixin:
@@ -130,20 +273,51 @@ class MeasurementMixin:
     """

     @classproperty
-    def measurement(cls) -> Any:
+    def measurement(cls) -> Measurement:
         """Convenience class method/ attribute to retrieve the EOS measurement data.

         Returns:
             Measurement: The measurement.
         """
-        # avoid circular dependency at import time
-        global measurement_eos
-        if measurement_eos is None:
-            from akkudoktoreos.measurement.measurement import get_measurement
-
-            measurement_eos = get_measurement()
-
-        return measurement_eos
+        return get_measurement()
+def get_prediction(init: bool = False) -> Prediction:
+    """Retrieve the singleton EOS Prediction instance.
+
+    This function provides access to the global EOS Prediction object. The
+    Prediction instance is created on first access if `init` is True. If the
+    instance is accessed before initialization and `init` is False, a RuntimeError
+    is raised.
+
+    Args:
+        init (bool): If True, create the Prediction instance if it does not exist.
+            Default is False.
+
+    Returns:
+        Prediction: The global EOS Prediction instance.
+
+    Raises:
+        RuntimeError: If accessed before initialization with `init=False`.
+
+    Usage:
+        .. code-block:: python
+
+            prediction = get_prediction(init=True)  # Initialize and retrieve
+            prediction.forecast_next_hour()
+    """
+    global _prediction_eos
+    if _prediction_eos is None:
+        from akkudoktoreos.config.config import ConfigEOS
+
+        if not init and not ConfigEOS.documentation_mode():
+            raise RuntimeError("Prediction access before init.")
+
+        from akkudoktoreos.prediction.prediction import Prediction
+
+        _prediction_eos = Prediction()
+
+    return _prediction_eos
+
+
 class PredictionMixin:
@@ -171,20 +345,50 @@ class PredictionMixin:
     """

     @classproperty
-    def prediction(cls) -> Any:
+    def prediction(cls) -> Prediction:
         """Convenience class method/ attribute to retrieve the EOS prediction data.

         Returns:
             Prediction: The prediction.
         """
-        # avoid circular dependency at import time
-        global prediction_eos
-        if prediction_eos is None:
-            from akkudoktoreos.prediction.prediction import get_prediction
-
-            prediction_eos = get_prediction()
-
-        return prediction_eos
+        return get_prediction()
+def get_ems(init: bool = False) -> EnergyManagement:
+    """Retrieve the singleton EOS Energy Management System (EMS) instance.
+
+    This function provides access to the global EOS EMS instance. The instance
+    is created on first access if `init` is True. If the instance is accessed
+    before initialization and `init` is False, a RuntimeError is raised.
+
+    Args:
+        init (bool): If True, create the EMS instance if it does not exist.
+            Default is False.
+
+    Returns:
+        EnergyManagement: The global EOS EMS instance.
+
+    Raises:
+        RuntimeError: If accessed before initialization with `init=False`.
+
+    Usage:
+        .. code-block:: python
+
+            ems = get_ems(init=True)  # Initialize and retrieve
+            ems.start_energy_management_loop()
+    """
+    global _ems_eos
+    if _ems_eos is None:
+        from akkudoktoreos.config.config import ConfigEOS
+
+        if not init and not ConfigEOS.documentation_mode():
+            raise RuntimeError("EMS access before init.")
+
+        from akkudoktoreos.core.ems import EnergyManagement
+
+        _ems_eos = EnergyManagement()
+
+    return _ems_eos
+
+
 class EnergyManagementSystemMixin:
@@ -200,7 +404,7 @@ class EnergyManagementSystemMixin:
         global EnergyManagementSystem instance lazily to avoid import-time circular dependencies.

     Attributes:
-        ems (EnergyManagementSystem): Property to access the global EOS energy management system.
+        ems (EnergyManagement): Property to access the global EOS energy management system.

     Example:
         .. code-block:: python
@@ -213,20 +417,120 @@ class EnergyManagementSystemMixin:
     """

     @classproperty
-    def ems(cls) -> Any:
+    def ems(cls) -> EnergyManagement:
         """Convenience class method/ attribute to retrieve the EOS energy management system.

         Returns:
             EnergyManagementSystem: The energy management system.
         """
-        # avoid circular dependency at import time
-        global ems_eos
-        if ems_eos is None:
-            from akkudoktoreos.core.ems import get_ems
-
-            ems_eos = get_ems()
-
-        return ems_eos
+        return get_ems()
+def get_database(init: bool = False) -> Database:
+    """Retrieve the singleton EOS database instance.
+
+    This function provides access to the global EOS Database instance. The
+    instance is created on first access if `init` is True. If the instance is
+    accessed before initialization and `init` is False, a RuntimeError is raised.
+
+    Args:
+        init (bool): If True, create the Database instance if it does not exist.
+            Default is False.
+
+    Returns:
+        Database: The global EOS database instance.
+
+    Raises:
+        RuntimeError: If accessed before initialization with `init=False`.
+
+    Usage:
+        .. code-block:: python
+
+            db = get_database(init=True)  # Initialize and retrieve
+            db.insert_measurement(...)
+    """
+    global _database_eos
+    if _database_eos is None:
+        from akkudoktoreos.config.config import ConfigEOS
+
+        if not init and not ConfigEOS.documentation_mode():
+            raise RuntimeError("Database access before init.")
+
+        from akkudoktoreos.core.database import Database
+
+        _database_eos = Database()
+
+    return _database_eos
+
+
+class DatabaseMixin:
+    """Mixin class for managing EOS database access.
+
+    This class serves as a foundational component for EOS-related classes requiring access
+    to the EOS database. It provides a `database` property that dynamically retrieves
+    the database instance.
+
+    Usage:
+        Subclass this base class to gain access to the `database` attribute, which retrieves the
+        global database instance lazily to avoid import-time circular dependencies.
+
+    Attributes:
+        database (Database): Property to access the global EOS database.
+
+    Example:
+        .. code-block:: python
+
+            class MyOptimizationClass(DatabaseMixin):
+                def store_something(self):
+                    db = self.database
+    """
+
+    @classproperty
+    def database(cls) -> Database:
+        """Convenience class method/ attribute to retrieve the EOS database.
+
+        Returns:
+            Database: The database.
+        """
+        return get_database()
+def get_resource_registry(init: bool = False) -> ResourceRegistry:
+    """Retrieve the singleton EOS Resource Registry instance.
+
+    This function provides access to the global EOS ResourceRegistry instance.
+    The instance is created on first access if `init` is True. If the instance
+    is accessed before initialization and `init` is False, a RuntimeError is raised.
+
+    Args:
+        init (bool): If True, create the ResourceRegistry instance if it does not exist.
+            Default is False.
+
+    Returns:
+        ResourceRegistry: The global EOS Resource Registry instance.
+
+    Raises:
+        RuntimeError: If accessed before initialization with `init=False`.
+
+    Usage:
+        .. code-block:: python
+
+            registry = get_resource_registry(init=True)  # Initialize and retrieve
+            registry.register_device(my_device)
+    """
+    global _resource_registry_eos
+    if _resource_registry_eos is None:
+        from akkudoktoreos.config.config import ConfigEOS
+
+        if not init and not ConfigEOS.documentation_mode():
+            raise RuntimeError("ResourceRegistry access before init.")
+
+        from akkudoktoreos.devices.devices import ResourceRegistry
+
+        _resource_registry_eos = ResourceRegistry()
+
+    return _resource_registry_eos
 class StartMixin(EnergyManagementSystemMixin):
@@ -243,14 +547,7 @@ class StartMixin(EnergyManagementSystemMixin):
         Returns:
             DateTime: The starting datetime of the current or latest energy management, or None.
         """
-        # avoid circular dependency at import time
-        global ems_eos
-        if ems_eos is None:
-            from akkudoktoreos.core.ems import get_ems
-
-            ems_eos = get_ems()
-
-        return ems_eos.start_datetime
+        return get_ems().start_datetime
 class SingletonMixin:
@@ -332,3 +629,43 @@ class SingletonMixin:
         if not hasattr(self, "_initialized"):
             super().__init__(*args, **kwargs)
             self._initialized = True
+
+
+_singletons_init_running: bool = False
+
+
+def singletons_init() -> None:
+    """Initialize the singletons for adapter, config, measurement, prediction, database, resource registry."""
+    # Prevent recursive calling
+    global \
+        _singletons_init_running, \
+        _adapter_eos, \
+        _config_eos, \
+        _database_eos, \
+        _measurement_eos, \
+        _prediction_eos, \
+        _ems_eos, \
+        _resource_registry_eos
+
+    if _singletons_init_running:
+        return
+
+    _singletons_init_running = True
+
+    try:
+        if _config_eos is None:
+            get_config(init=True)
+        if _adapter_eos is None:
+            get_adapter(init=True)
+        if _database_eos is None:
+            get_database(init=True)
+        if _ems_eos is None:
+            get_ems(init=True)
+        if _measurement_eos is None:
+            get_measurement(init=True)
+        if _prediction_eos is None:
+            get_prediction(init=True)
+        if _resource_registry_eos is None:
+            get_resource_registry(init=True)
+    finally:
+        _singletons_init_running = False
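`singletons_init` protects itself with the module-level `_singletons_init_running` flag because initializing one singleton may transitively call back into `singletons_init`. The re-entrancy guard pattern in isolation — `init_all` and the `calls` list are hypothetical illustration names:

```python
_init_running = False
calls: list[str] = []


def init_all() -> None:
    """Initialize components; re-entrant calls return immediately."""
    global _init_running
    if _init_running:
        return
    _init_running = True
    try:
        calls.append("a")
        init_all()  # a component calling back in is a no-op
        calls.append("b")
    finally:
        _init_running = False
```

The `try`/`finally` mirrors the diff: the flag is always cleared, so a failing initializer does not permanently disable later initialization attempts.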
File diff suppressed because it is too large: src/akkudoktoreos/core/database.py (new file, 1178 lines)
File diff suppressed because it is too large: src/akkudoktoreos/core/databaseabc.py (new file, 2194 lines)
@@ -24,7 +24,7 @@ from akkudoktoreos.optimization.genetic.geneticparams import (
 )
 from akkudoktoreos.optimization.genetic.geneticsolution import GeneticSolution
 from akkudoktoreos.optimization.optimization import OptimizationSolution
-from akkudoktoreos.utils.datetimeutil import DateTime, compare_datetimes, to_datetime
+from akkudoktoreos.utils.datetimeutil import DateTime, to_datetime

 # The executor to execute the CPU heavy energy management run
 executor = ThreadPoolExecutor(max_workers=1)
@@ -44,6 +44,15 @@ class EnergyManagementStage(Enum):
         return self.value


+async def ems_manage_energy() -> None:
+    """Repeating task for managing energy.
+
+    This task should be executed by the server regularly
+    to ensure proper energy management.
+    """
+    await EnergyManagement().run()
+
+
 class EnergyManagement(
     SingletonMixin, ConfigMixin, PredictionMixin, AdapterMixin, PydanticBaseModel
 ):
@@ -286,6 +295,9 @@ class EnergyManagement(
             error_msg = f"Adapter update failed - phase {cls._stage}: {e}\n{trace}"
             logger.error(error_msg)

+        # Remember energy run datetime.
+        EnergyManagement._last_run_datetime = to_datetime()
+
         # energy management run finished
         cls._stage = EnergyManagementStage.IDLE

@@ -346,73 +358,3 @@ class EnergyManagement(
         )
         # Run optimization in background thread to avoid blocking event loop
         await loop.run_in_executor(executor, func)
-
-    async def manage_energy(self) -> None:
-        """Repeating task for managing energy.
-
-        This task should be executed by the server regularly (e.g., every 10 seconds)
-        to ensure proper energy management. Configuration changes to the energy management interval
-        will only take effect if this task is executed.
-
-        - Initializes and runs the energy management for the first time if it has never been run
-          before.
-        - If the energy management interval is not configured or invalid (NaN), the task will not
-          trigger any repeated energy management runs.
-        - Compares the current time with the last run time and runs the energy management if the
-          interval has elapsed.
-        - Logs any exceptions that occur during the initialization or execution of the energy
-          management.
-
-        Note: The task maintains the interval even if some intervals are missed.
-        """
-        current_datetime = to_datetime()
-        interval = self.config.ems.interval  # interval maybe changed in between
-
-        if EnergyManagement._last_run_datetime is None:
-            # Never run before
-            try:
-                # Remember energy run datetime.
-                EnergyManagement._last_run_datetime = current_datetime
-                # Try to run a first energy management. May fail due to config incomplete.
-                await self.run()
-            except Exception as e:
-                trace = "".join(traceback.TracebackException.from_exception(e).format())
-                message = f"EOS init: {e}\n{trace}"
-                logger.error(message)
-            return
-
-        if interval is None or interval == float("nan"):
-            # No Repetition
-            return
-
-        if (
-            compare_datetimes(current_datetime, EnergyManagement._last_run_datetime).time_diff
-            < interval
-        ):
-            # Wait for next run
-            return
-
-        try:
-            await self.run()
-        except Exception as e:
-            trace = "".join(traceback.TracebackException.from_exception(e).format())
-            message = f"EOS run: {e}\n{trace}"
-            logger.error(message)
-
-        # Remember the energy management run - keep on interval even if we missed some intervals
-        while (
-            compare_datetimes(current_datetime, EnergyManagement._last_run_datetime).time_diff
-            >= interval
-        ):
-            EnergyManagement._last_run_datetime = EnergyManagement._last_run_datetime.add(
-                seconds=interval
-            )
-
-
-# Initialize the Energy Management System, it is a singleton.
-ems = EnergyManagement()
-
-
-def get_ems() -> EnergyManagement:
-    """Gets the EOS Energy Management System."""
-    return ems
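The removed `manage_energy` body kept `_last_run_datetime` aligned to the configured interval even when runs were missed, by advancing it in whole-interval steps rather than resetting it to "now". The catch-up step in isolation, using plain float timestamps instead of EOS `DateTime` objects (a sketch, not the EOS code):

```python
def advance_last_run(last_run: float, now: float, interval: float) -> float:
    """Advance last_run by whole intervals until it is within one interval of now.

    This keeps runs on a fixed time grid even if several intervals were missed,
    instead of drifting the schedule to the actual execution time.
    """
    while now - last_run >= interval:
        last_run += interval
    return last_run
```

With `interval=300`, a run that fires at `t=950` after a last run at `t=0` advances the marker to `t=900`, so the next run is still due at `t=1200` on the original grid.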
@@ -29,10 +29,11 @@ class EnergyManagementCommonSettings(SettingsBaseModel):
         },
     )

-    interval: Optional[float] = Field(
-        default=None,
+    interval: float = Field(
+        default=300.0,
+        ge=60.0,
         json_schema_extra={
-            "description": "Intervall in seconds between EOS energy management runs.",
+            "description": "Interval between EOS energy management runs [seconds].",
             "examples": ["300"],
         },
     )
@@ -47,7 +47,12 @@ from pydantic import (
 )
 from pydantic.fields import ComputedFieldInfo, FieldInfo

-from akkudoktoreos.utils.datetimeutil import DateTime, to_datetime, to_duration
+from akkudoktoreos.utils.datetimeutil import (
+    DateTime,
+    to_datetime,
+    to_duration,
+    to_timezone,
+)

 # Global weakref dictionary to hold external state per model instance
 # Used as a workaround for PrivateAttr not working in e.g. Mixin Classes
@@ -683,13 +688,8 @@ class PydanticBaseModel(PydanticModelNestedValueMixin, BaseModel):
         self, *args: Any, include_computed_fields: bool = True, **kwargs: Any
     ) -> dict[str, Any]:
         """Custom dump method to serialize computed fields by default."""
-        result = super().model_dump(*args, **kwargs)
-
-        if not include_computed_fields:
-            for computed_field_name in self.__class__.model_computed_fields:
-                result.pop(computed_field_name, None)
-
-        return result
+        kwargs.setdefault("exclude_computed_fields", not include_computed_fields)
+        return super().model_dump(*args, **kwargs)

     def to_dict(self) -> dict:
         """Convert this PredictionRecord instance to a dictionary representation.
@@ -1061,8 +1061,8 @@ class PydanticDateTimeDataFrame(PydanticBaseModel):
         valid_base_dtypes = {"int64", "float64", "bool", "object", "string"}

         def is_valid_dtype(dtype: str) -> bool:
-            # Allow timezone-aware or naive datetime64
-            if dtype.startswith("datetime64[ns"):
+            # Allow timezone-aware or naive datetime64 - pandas 3.0 also has us
+            if dtype.startswith("datetime64[ns") or dtype.startswith("datetime64[us"):
                 return True
             return dtype in valid_base_dtypes

@@ -1102,7 +1102,7 @@ class PydanticDateTimeDataFrame(PydanticBaseModel):

         # Apply dtypes
         for col, dtype in self.dtypes.items():
-            if dtype.startswith("datetime64[ns"):
+            if dtype.startswith("datetime64[ns") or dtype.startswith("datetime64[us"):
                 df[col] = pd.to_datetime(df[col], utc=True)
             elif dtype in dtype_mapping.keys():
                 df[col] = df[col].astype(dtype_mapping[dtype])
@@ -1111,20 +1111,59 @@ class PydanticDateTimeDataFrame(PydanticBaseModel):

         return df
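The widened dtype check accepts both nanosecond and microsecond datetime resolutions, since pandas 3.0 can emit `datetime64[us, ...]` dtypes. The string predicate in isolation, matching the prefix test used in the diff:

```python
VALID_BASE_DTYPES = {"int64", "float64", "bool", "object", "string"}


def is_valid_dtype(dtype: str) -> bool:
    """Accept timezone-aware or naive datetime64 in ns or us resolution."""
    # The prefix test catches both "datetime64[ns]" and "datetime64[ns, UTC]"
    if dtype.startswith("datetime64[ns") or dtype.startswith("datetime64[us"):
        return True
    return dtype in VALID_BASE_DTYPES
```

Note that prefix matching (rather than equality) is what lets a single check cover both naive and timezone-aware variants of each resolution.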
+    @classmethod
+    def _detect_data_tz(cls, df: pd.DataFrame) -> Optional[str]:
+        """Detect timezone of pandas data."""
+        # Index first (strongest signal)
+        if isinstance(df.index, pd.DatetimeIndex) and df.index.tz is not None:
+            return str(df.index.tz)
+
+        # Then datetime columns
+        for col in df.columns:
+            if is_datetime64_any_dtype(df[col]):
+                tz = getattr(df[col].dt, "tz", None)
+                if tz is not None:
+                    return str(tz)
+
+        return None
+
     @classmethod
     def from_dataframe(
         cls, df: pd.DataFrame, tz: Optional[str] = None
     ) -> "PydanticDateTimeDataFrame":
         """Create a PydanticDateTimeDataFrame instance from a pandas DataFrame."""
-        index = pd.Index([to_datetime(dt, as_string=True, in_timezone=tz) for dt in df.index])
+        # resolve timezone
+        data_tz = cls._detect_data_tz(df)
+
+        if tz is not None:
+            if data_tz and data_tz != tz:
+                raise ValueError(f"Timezone mismatch: tz='{tz}' but data uses '{data_tz}'")
+            resolved_tz = tz
+        else:
+            if data_tz:
+                resolved_tz = data_tz
+            else:
+                # Use local timezone
+                resolved_tz = to_timezone(as_string=True)
+
+        # normalize index
+        index = pd.Index(
+            [to_datetime(dt, as_string=True, in_timezone=resolved_tz) for dt in df.index]
+        )
+        df.index = index
+
+        # normalize datetime columns
+        datetime_columns = [col for col in df.columns if is_datetime64_any_dtype(df[col])]
+        for col in datetime_columns:
+            if df[col].dt.tz is None:
+                df[col] = df[col].dt.tz_localize(resolved_tz)
+            else:
+                df[col] = df[col].dt.tz_convert(resolved_tz)
+
         return cls(
             data=df.to_dict(orient="index"),
             dtypes={col: str(dtype) for col, dtype in df.dtypes.items()},
-            tz=tz,
+            tz=resolved_tz,
             datetime_columns=datetime_columns,
         )
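The timezone resolution in `from_dataframe` follows a three-level precedence: an explicit `tz` argument wins (but must agree with the timezone detected in the data), otherwise the detected data timezone, otherwise the local timezone. The decision logic in isolation — `local_tz` stands in for the diff's `to_timezone(as_string=True)` call:

```python
from typing import Optional


def resolve_tz(tz: Optional[str], data_tz: Optional[str], local_tz: str) -> str:
    """Resolve the effective timezone: explicit > detected-in-data > local."""
    if tz is not None:
        if data_tz and data_tz != tz:
            raise ValueError(f"Timezone mismatch: tz='{tz}' but data uses '{data_tz}'")
        return tz
    return data_tz if data_tz else local_tz
```

Raising on a mismatch rather than silently converting surfaces caller errors early, at the cost of requiring the caller to convert explicitly when they really do want a different timezone.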
@@ -2,6 +2,7 @@

 import hashlib
 import re
+from dataclasses import dataclass
 from fnmatch import fnmatch
 from pathlib import Path
 from typing import Optional
@@ -16,14 +17,117 @@ HASH_EOS = ""
 # Number of digits to append to .dev to identify a development version
 VERSION_DEV_PRECISION = 8

-# Hashing configuration
-DIR_PACKAGE_ROOT = Path(__file__).resolve().parent.parent
-ALLOWED_SUFFIXES: set[str] = {".py", ".md", ".json"}
-EXCLUDED_DIR_PATTERNS: set[str] = {"*_autosum", "*__pycache__", "*_generated"}
-EXCLUDED_FILES: set[Path] = set()
-
-
-# ------------------------------
-# Helpers for version generation
-# ------------------------------
-
-
-def is_excluded_dir(path: Path, excluded_dir_patterns: set[str]) -> bool:
-    """Check whether a directory should be excluded based on name patterns."""
-    return any(fnmatch(path.name, pattern) for pattern in excluded_dir_patterns)
+@dataclass
+class HashConfig:
+    """Configuration for file hashing."""
+
+    paths: list[Path]
+    allowed_suffixes: set[str]
+    excluded_dir_patterns: set[str]
+    excluded_files: set[Path]
+
+    def __post_init__(self) -> None:
+        """Validate configuration."""
+        for path in self.paths:
+            if not path.exists():
+                raise ValueError(f"Path does not exist: {path}")
+
+
+def is_excluded_dir(path: Path, patterns: set[str]) -> bool:
+    """Check if directory matches any exclusion pattern.
+
+    Args:
+        path: Directory path to check
+        patterns: set of glob-like patterns (e.g., {``*__pycache__``, ``*_test``})
+
+    Returns:
+        True if directory should be excluded
+    """
+    dir_name = path.name
+    return any(fnmatch(dir_name, pattern) for pattern in patterns)
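The exclusion check matches only the final path component against glob-like patterns via `fnmatch`, so a pattern like `*__pycache__` excludes any directory of that name at any depth. The same predicate, runnable standalone:

```python
from fnmatch import fnmatch
from pathlib import Path


def is_excluded_dir(path: Path, patterns: set[str]) -> bool:
    """Check if a directory's name matches any glob-like exclusion pattern.

    Only path.name is matched, so patterns apply at every directory depth.
    """
    return any(fnmatch(path.name, pattern) for pattern in patterns)
```

Because only `path.name` is tested, callers that want to skip files *inside* an excluded directory must also walk `p.parents`, which is exactly what `collect_files` below does.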
+def collect_files(config: HashConfig) -> list[Path]:
+    """Collect all files that should be included in the hash.
+
+    This function only collects files - it doesn't hash them.
+    Makes it easy to inspect what will be hashed.
+
+    Args:
+        config: Hash configuration
+
+    Returns:
+        Sorted list of files to be hashed
+
+    Example:
+        >>> config = HashConfig(
+        ...     paths=[Path('src')],
+        ...     allowed_suffixes={'.py'},
+        ...     excluded_dir_patterns={'*__pycache__'},
+        ...     excluded_files=set()
+        ... )
+        >>> files = collect_files(config)
+        >>> print(f"Will hash {len(files)} files")
+        >>> for f in files[:5]:
+        ...     print(f"  {f}")
+    """
+    collected_files: list[Path] = []
+
+    for root in config.paths:
+        for p in sorted(root.rglob("*")):
+            # Skip excluded directories
+            if p.is_dir() and is_excluded_dir(p, config.excluded_dir_patterns):
+                continue
+
+            # Skip files inside excluded directories
+            if any(is_excluded_dir(parent, config.excluded_dir_patterns) for parent in p.parents):
+                continue
+
+            # Skip excluded files
+            if p.resolve() in config.excluded_files:
+                continue
+
+            # Collect only allowed file types
+            if p.is_file() and p.suffix.lower() in config.allowed_suffixes:
+                collected_files.append(p.resolve())
+
+    return sorted(collected_files)
+
+
+def hash_files(files: list[Path]) -> str:
+    """Calculate SHA256 hash of file contents.
+
+    Args:
+        files: list of files to hash (order matters!)
+
+    Returns:
+        SHA256 hex digest
+
+    Example:
+        >>> files = [Path('file1.py'), Path('file2.py')]
+        >>> hash_value = hash_files(files)
+    """
+    h = hashlib.sha256()
+
+    for file_path in files:
+        if not file_path.exists():
+            continue
+
+        h.update(file_path.read_bytes())
+
+    return h.hexdigest()
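`hash_files` is deterministic only because the caller passes a sorted file list and the digest covers the concatenated contents in that order. The same order-sensitivity demonstrated on in-memory byte chunks instead of files:

```python
import hashlib


def hash_chunks(chunks: list[bytes]) -> str:
    """SHA256 over concatenated chunks; order matters, so callers must sort."""
    h = hashlib.sha256()
    for chunk in chunks:
        # Incremental update is equivalent to hashing the concatenation
        h.update(chunk)
    return h.hexdigest()
```

This is why `collect_files` returns a sorted list: without a stable ordering, the same file tree could produce different project hashes from run to run.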
def hash_tree(
|
||||
@@ -31,80 +135,93 @@ def hash_tree(
     allowed_suffixes: set[str],
     excluded_dir_patterns: set[str],
     excluded_files: Optional[set[Path]] = None,
-) -> str:
-    """Return SHA256 hash for files under `paths`.
+) -> tuple[str, list[Path]]:
+    """Return SHA256 hash for files under `paths` and the list of files hashed.

     Restricted by suffix, excluding excluded directory patterns and excluded_files.

+    Args:
+        paths: list of root paths to hash
+        allowed_suffixes: set of file suffixes to include (e.g., {'.py', '.json'})
+        excluded_dir_patterns: set of directory patterns to exclude
+        excluded_files: Optional set of specific files to exclude
+
+    Returns:
+        tuple of (hash_digest, list_of_hashed_files)
+
+    Example:
+        >>> hash_digest, files = hash_tree(
+        ...     paths=[Path('src')],
+        ...     allowed_suffixes={'.py'},
+        ...     excluded_dir_patterns={'*__pycache__'},
+        ... )
+        >>> print(f"Hash: {hash_digest}")
+        >>> print(f"Based on {len(files)} files")
     """
-    h = hashlib.sha256()
-    excluded_files = excluded_files or set()
+    config = HashConfig(
+        paths=paths,
+        allowed_suffixes=allowed_suffixes,
+        excluded_dir_patterns=excluded_dir_patterns,
+        excluded_files=excluded_files or set(),
+    )

-    for root in paths:
-        if not root.exists():
-            raise ValueError(f"Root path does not exist: {root}")
-        for p in sorted(root.rglob("*")):
-            # Skip excluded directories
-            if p.is_dir() and is_excluded_dir(p, excluded_dir_patterns):
-                continue
+    files = collect_files(config)
+    digest = hash_files(files)

-            # Skip files inside excluded directories
-            if any(is_excluded_dir(parent, excluded_dir_patterns) for parent in p.parents):
-                continue
+    return digest, files

-            # Skip excluded files
-            if p.resolve() in excluded_files:
-                continue
-
-            # Hash only allowed file types
-            if p.is_file() and p.suffix.lower() in allowed_suffixes:
-                h.update(p.read_bytes())
-
-    digest = h.hexdigest()
-
-    return digest
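The refactored `hash_tree` above splits file collection from hashing. A minimal standalone sketch of that two-step pattern (the helper names `collect_files` and `hash_files` mirror the diff, but the signatures here are simplified illustrations, not the EOS API):

```python
import hashlib
from pathlib import Path


def collect_files(root: Path, allowed_suffixes: set[str]) -> list[Path]:
    # Sorted, deterministic order is essential for a reproducible digest.
    return sorted(
        p for p in root.rglob("*")
        if p.is_file() and p.suffix.lower() in allowed_suffixes
    )


def hash_files(files: list[Path]) -> str:
    # Feed file contents into one SHA256 context, in collection order.
    h = hashlib.sha256()
    for f in files:
        h.update(f.read_bytes())
    return h.hexdigest()
```

Returning the file list alongside the digest (as the new signature does) lets callers log exactly which files contributed to the hash.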
# ---------------------
# Version hash function
# ---------------------


def _version_hash() -> str:
    """Calculate project hash.

-    Only package file ins src/akkudoktoreos can be hashed to make it work also for packages.
+    Only package files in src/akkudoktoreos can be hashed to make it work also for packages.

    Returns:
        SHA256 hash of the project files
    """
    DIR_PACKAGE_ROOT = Path(__file__).resolve().parent.parent
    if not str(DIR_PACKAGE_ROOT).endswith("src/akkudoktoreos"):
        error_msg = f"DIR_PACKAGE_ROOT does not end with src/akkudoktoreos: {DIR_PACKAGE_ROOT}"
        raise ValueError(error_msg)

    # Allowed file suffixes to consider
    ALLOWED_SUFFIXES: set[str] = {".py", ".md", ".json"}

    # Directory patterns to exclude (glob-like)
    EXCLUDED_DIR_PATTERNS: set[str] = {"*_autosum", "*__pycache__", "*_generated"}

    # Files to exclude
    EXCLUDED_FILES: set[Path] = set()

-    # Directories whose changes shall be part of the project hash
+    # Configuration
    watched_paths = [DIR_PACKAGE_ROOT]

-    hash_current = hash_tree(
-        watched_paths, ALLOWED_SUFFIXES, EXCLUDED_DIR_PATTERNS, excluded_files=EXCLUDED_FILES
+    # Collect files and calculate hash
+    hash_digest, hashed_files = hash_tree(
+        watched_paths,
+        ALLOWED_SUFFIXES,
+        EXCLUDED_DIR_PATTERNS,
+        excluded_files=EXCLUDED_FILES,
    )
-    return hash_current
+
+    return hash_digest


def _version_calculate() -> str:
-    """Compute version."""
-    global HASH_EOS
-    HASH_EOS = _version_hash()
-    if VERSION_BASE.endswith("dev"):
+    """Calculate the full version string.
+
+    For release versions: "x.y.z"
+    For dev versions: "x.y.z.dev<hash>"
+
+    Returns:
+        Full version string
+    """
+    if VERSION_BASE.endswith(".dev"):
        # After dev only digits are allowed - convert hexdigest to digits
-        hash_value = int(HASH_EOS, 16)
+        hash_value = int(_version_hash(), 16)
        hash_digits = str(hash_value % (10**VERSION_DEV_PRECISION)).zfill(VERSION_DEV_PRECISION)
        return f"{VERSION_BASE}{hash_digits}"
    else:
        # Release version - use base as-is
        return VERSION_BASE


# ---------------------------
# Project version information
-# ----------------------------
+# ---------------------------

# The version
__version__ = _version_calculate()
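The dev-version suffix above is built by interpreting the SHA256 hex digest as an integer, keeping its last N decimal digits, and zero-padding. A standalone sketch of that arithmetic (`VERSION_BASE` and `VERSION_DEV_PRECISION` values here are assumed for illustration; the real constants live elsewhere in the module):

```python
import hashlib

VERSION_BASE = "0.2.0.dev"   # assumed base, mirrors the diff's ".dev" check
VERSION_DEV_PRECISION = 8    # assumed digit count


def dev_version(project_hash_hex: str) -> str:
    # After 'dev' only digits are allowed - convert the hex digest to digits.
    hash_value = int(project_hash_hex, 16)
    # Keep the last N decimal digits, zero-padded so the suffix length is stable.
    hash_digits = str(hash_value % (10**VERSION_DEV_PRECISION)).zfill(VERSION_DEV_PRECISION)
    return f"{VERSION_BASE}{hash_digits}"
```

The modulo keeps the suffix short while still changing whenever the project hash changes, which is all a dev marker needs.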
@@ -114,16 +231,13 @@ __version__ = _version_calculate()
# Version info access
# -------------------


# Regular expression to split the version string into pieces
VERSION_RE = re.compile(
    r"""
    ^(?P<base>\d+\.\d+\.\d+) # x.y.z
-    (?:[\.\+\-] # .dev<hash> starts here
-        (?:
-            (?P<dev>dev) # literal 'dev'
-            (?:(?P<hash>[A-Za-z0-9]+))? # optional <hash>
-        )
+    (?:\. # .dev<hash> starts here
+        (?P<dev>dev) # literal 'dev'
+        (?P<hash>[a-f0-9]+)? # optional <hash> (hex digits)
    )?
    $
    """,
@@ -143,7 +257,7 @@ def version() -> dict[str, Optional[str]]:
    .. code-block:: python

        {
-            "version": "0.2.0+dev.a96a65",
+            "version": "0.2.0.dev.a96a65",
            "base": "x.y.z",
            "dev": "dev" or None,
            "hash": "<hash>" or None,
@@ -153,7 +267,7 @@ def version() -> dict[str, Optional[str]]:

    match = VERSION_RE.match(__version__)
    if not match:
-        raise ValueError(f"Invalid version format: {version}")
+        raise ValueError(f"Invalid version format: {__version__}")

    info = match.groupdict()
    info["version"] = __version__
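The tightened `VERSION_RE` can be exercised standalone; this sketch reproduces the new pattern from the hunk and a minimal `version()`-style parser around it (the helper name `parse_version` is illustrative):

```python
import re
from typing import Optional

# Mirrors the tightened pattern from the diff: "x.y.z" plus optional ".dev<hex digits>".
VERSION_RE = re.compile(
    r"""
    ^(?P<base>\d+\.\d+\.\d+)   # x.y.z
    (?:\.                      # .dev<hash> starts here
        (?P<dev>dev)           # literal 'dev'
        (?P<hash>[a-f0-9]+)?   # optional <hash> (hex digits)
    )?
    $
    """,
    re.VERBOSE,
)


def parse_version(version: str) -> dict[str, Optional[str]]:
    # Split a version string into base / dev / hash pieces, as version() does.
    match = VERSION_RE.match(version)
    if not match:
        raise ValueError(f"Invalid version format: {version}")
    info = match.groupdict()
    info["version"] = version
    return info
```

Note how the old `[\.\+\-]` alternation (which accepted `+dev` and `-dev`) is gone: only the PEP 440-friendly `.dev` spelling matches now.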
@@ -431,8 +431,3 @@ class ResourceRegistry(SingletonMixin, ConfigMixin, PydanticBaseModel):
                self.history = loaded.history
        except Exception as e:
            logger.error("Can not load resource registry: {}", e)
-
-
-def get_resource_registry() -> ResourceRegistry:
-    """Gets the EOS resource registry."""
-    return ResourceRegistry()
@@ -87,7 +87,7 @@ class Battery:
    def reset(self) -> None:
        """Resets the battery state to its initial values."""
        self.soc_wh = (self.initial_soc_percentage / 100) * self.capacity_wh
-        self.soc_wh = min(max(self.soc_wh, self.min_soc_wh), self.max_soc_wh)
+        self.soc_wh = min(self.soc_wh, self.max_soc_wh)  # Only clamp to max
        self.discharge_array = np.full(self.prediction_hours, 0)
        self.charge_array = np.full(self.prediction_hours, 0)
@@ -6,6 +6,7 @@ data records for measurements.
The measurements can be added programmatically or imported from a file or JSON string.
"""

+from pathlib import Path
from typing import Any, Optional

import numpy as np
@@ -16,12 +17,26 @@ from pydantic import Field, computed_field
from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.core.coreabc import SingletonMixin
from akkudoktoreos.core.dataabc import DataImportMixin, DataRecord, DataSequence
-from akkudoktoreos.utils.datetimeutil import DateTime, Duration, to_duration
+from akkudoktoreos.utils.datetimeutil import (
+    DateTime,
+    Duration,
+    to_datetime,
+    to_duration,
+)


class MeasurementCommonSettings(SettingsBaseModel):
    """Measurement Configuration."""

+    historic_hours: Optional[int] = Field(
+        default=2 * 365 * 24,
+        ge=0,
+        json_schema_extra={
+            "description": "Number of hours into the past for measurement data",
+            "examples": [2 * 365 * 24],
+        },
+    )
+
    load_emr_keys: Optional[list[str]] = Field(
        default=None,
        json_schema_extra={
@@ -94,6 +109,16 @@ class Measurement(SingletonMixin, DataImportMixin, DataSequence):
            return
        super().__init__(*args, **kwargs)

+    def _measurement_file_path(self) -> Optional[Path]:
+        """Path to the measurements file (optionally used in addition to the database)."""
+        try:
+            return self.config.general.data_folder_path / "measurement.json"
+        except Exception:
+            logger.error(
+                "Path for measurements is missing. Please configure data folder path or database!"
+            )
+            return None
+
    def _interval_count(
        self, start_datetime: DateTime, end_datetime: DateTime, interval: Duration
    ) -> int:
@@ -143,30 +168,32 @@ class Measurement(SingletonMixin, DataImportMixin, DataSequence):
            np.ndarray: A NumPy Array of the energy [kWh] per interval values calculated from
                the meter readings.
        """
-        # Add one interval to end_datetime to assure we have a energy value interval for all
-        # datetimes from start_datetime (inclusive) to end_datetime (exclusive)
-        end_datetime += interval
        size = self._interval_count(start_datetime, end_datetime, interval)

        energy_mr_array = self.key_to_array(
-            key=key, start_datetime=start_datetime, end_datetime=end_datetime, interval=interval
+            key=key,
+            start_datetime=start_datetime,
+            end_datetime=end_datetime + interval,
+            interval=interval,
+            fill_method="time",
+            boundary="context",
        )
-        if energy_mr_array.size != size:
+        if energy_mr_array.size != size + 1:
            logging_msg = (
                f"'{key}' meter reading array size: {energy_mr_array.size}"
-                f" does not fit to expected size: {size}, {energy_mr_array}"
+                f" does not fit to expected size: {size + 1}, {energy_mr_array}"
            )
            if energy_mr_array.size != 0:
                logger.error(logging_msg)
                raise ValueError(logging_msg)
            logger.debug(logging_msg)
-            energy_array = np.zeros(size - 1)
+            energy_array = np.zeros(size)
        elif np.any(energy_mr_array == None):
            # 'key_to_array()' creates None values array if no data records are available.
            # Array contains None value -> ignore
            debug_msg = f"'{key}' meter reading None: {energy_mr_array}"
            logger.debug(debug_msg)
-            energy_array = np.zeros(size - 1)
+            energy_array = np.zeros(size)
        else:
            # Calculate load per interval
            debug_msg = f"'{key}' meter reading: {energy_mr_array}"
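The hunk above fetches `size + 1` cumulative meter readings so that `size` per-interval energy values can be derived from consecutive differences. That relationship can be sketched standalone (the helper name is illustrative, not the EOS API):

```python
import numpy as np


def energy_per_interval(meter_readings_kwh: np.ndarray) -> np.ndarray:
    # n + 1 cumulative meter readings yield n per-interval energy values;
    # this is why the diff loads one extra reading past end_datetime.
    return np.diff(meter_readings_kwh)
```

This also explains the `np.zeros(size)` fallback: with no usable readings, the result must still have one value per interval, not `size - 1` as the old code produced.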
@@ -193,6 +220,9 @@ class Measurement(SingletonMixin, DataImportMixin, DataSequence):
            np.ndarray: A NumPy Array of the total load energy [kWh] per interval values calculated from
                the load meter readings.
        """
+        if interval is None:
+            interval = to_duration("1 hour")
+
        if len(self) < 1:
            # No data available
            if start_datetime is None or end_datetime is None:
@@ -200,14 +230,14 @@ class Measurement(SingletonMixin, DataImportMixin, DataSequence):
            else:
                size = self._interval_count(start_datetime, end_datetime, interval)
            return np.zeros(size)
-        if interval is None:
-            interval = to_duration("1 hour")

        if start_datetime is None:
-            start_datetime = self[0].date_time
+            start_datetime = self.min_datetime
        if end_datetime is None:
-            end_datetime = self[-1].date_time
+            end_datetime = self.max_datetime.add(seconds=1)
        size = self._interval_count(start_datetime, end_datetime, interval)
        load_total_kwh_array = np.zeros(size)

        # Loop through all loads
        if isinstance(self.config.measurement.load_emr_keys, list):
            for key in self.config.measurement.load_emr_keys:
@@ -225,7 +255,66 @@ class Measurement(SingletonMixin, DataImportMixin, DataSequence):

        return load_total_kwh_array

+    # ----------------------- Measurement Database Protocol ---------------------
+
-def get_measurement() -> Measurement:
-    """Gets the EOS measurement data."""
-    return Measurement()
+    def db_namespace(self) -> str:
+        return "Measurement"
+
+    def db_keep_datetime(self) -> Optional[DateTime]:
+        """Earliest datetime from which database records should be retained.
+
+        Used when removing old records from database to free space.
+
+        Returns:
+            Datetime or None.
+        """
+        return to_datetime().subtract(hours=self.config.measurement.historic_hours)
+
+    def save(self) -> bool:
+        """Save the measurements to persistent storage.
+
+        Returns:
+            True in case the measurements were saved, False otherwise.
+        """
+        # Use db storage if available
+        saved_to_db = DataSequence.save(self)
+        if not saved_to_db:
+            measurement_file_path = self._measurement_file_path()
+            if measurement_file_path is None:
+                return False
+            try:
+                measurement_file_path.write_text(
+                    self.model_dump_json(indent=4),
+                    encoding="utf-8",
+                    newline="\n",
+                )
+            except Exception:
+                logger.exception("Cannot save measurements")
+                return False
+        return True
+
+    def load(self) -> bool:
+        """Load measurements from persistent storage.
+
+        Returns:
+            True in case the measurements were loaded, False otherwise.
+        """
+        # Use db storage if available
+        loaded_from_db = DataSequence.load(self)
+        if not loaded_from_db:
+            measurement_file_path = self._measurement_file_path()
+            if measurement_file_path is None:
+                return False
+            if not measurement_file_path.exists():
+                return False
+            try:
+                # Validate into a temporary instance
+                loaded = self.__class__.model_validate_json(
+                    measurement_file_path.read_text(encoding="utf-8")
+                )
+
+                # Explicitly add data records to the existing singleton
+                for record in loaded.records:
+                    self.insert_by_datetime(record)
+            except Exception:
+                logger.exception("Cannot load measurements")
+                return False
+        return True
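The `save()` above tries the database backend first and falls back to a JSON file only when no backend accepted the data. A minimal standalone sketch of that fallback pattern, assuming a stand-in store (`Store`, `_save_to_db` are illustrative names, not the EOS API):

```python
import json
from pathlib import Path


class Store:
    """Sketch of the save() fallback: database first, JSON file second."""

    def __init__(self, file_path: Path, db_available: bool = False):
        self.file_path = file_path
        self.db_available = db_available
        self.records: list[dict] = []

    def _save_to_db(self) -> bool:
        # Stand-in for DataSequence.save(); True means a backend took the data.
        return self.db_available

    def save(self) -> bool:
        if self._save_to_db():
            return True
        try:
            self.file_path.write_text(json.dumps(self.records), encoding="utf-8")
        except Exception:
            # Report failure instead of silently claiming success.
            return False
        return True
```

The key point of the pattern is that the file path is only resolved (and may legitimately be missing) when the database did not handle persistence.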
@@ -18,6 +18,7 @@ from akkudoktoreos.core.coreabc import (
    ConfigMixin,
    MeasurementMixin,
    PredictionMixin,
+    get_ems,
)
from akkudoktoreos.optimization.genetic.geneticabc import GeneticParametersBaseModel
from akkudoktoreos.optimization.genetic.geneticdevices import (
@@ -161,9 +162,6 @@ class GeneticOptimizationParameters(
        Raises:
            ValueError: If required configuration values like start time are missing.
        """
-        # Avoid circular dependency
-        from akkudoktoreos.core.ems import get_ems
-
        ems = get_ems()

        # The optimization parameters
@@ -439,6 +437,7 @@ class GeneticOptimizationParameters(
                initial_soc_factor = cls.measurement.key_to_value(
                    key=battery_config.measurement_key_soc_factor,
                    target_datetime=ems.start_datetime,
+                    time_window=to_duration("48 hours"),
                )
                if initial_soc_factor > 1.0 or initial_soc_factor < 0.0:
                    logger.error(
@@ -510,6 +509,7 @@ class GeneticOptimizationParameters(
                initial_soc_factor = cls.measurement.key_to_value(
                    key=electric_vehicle_config.measurement_key_soc_factor,
                    target_datetime=ems.start_datetime,
+                    time_window=to_duration("48 hours"),
                )
                if initial_soc_factor > 1.0 or initial_soc_factor < 0.0:
                    logger.error(
@@ -8,6 +8,8 @@ from pydantic import Field, field_validator

from akkudoktoreos.core.coreabc import (
    ConfigMixin,
+    get_ems,
+    get_prediction,
)
from akkudoktoreos.core.emplan import (
    DDBCInstruction,
@@ -22,7 +24,6 @@ from akkudoktoreos.devices.devicesabc import (
from akkudoktoreos.devices.genetic.battery import Battery
from akkudoktoreos.optimization.genetic.geneticdevices import GeneticParametersBaseModel
from akkudoktoreos.optimization.optimization import OptimizationSolution
-from akkudoktoreos.prediction.prediction import get_prediction
from akkudoktoreos.utils.datetimeutil import to_datetime, to_duration
from akkudoktoreos.utils.utils import NumpyEncoder
@@ -272,8 +273,6 @@ class GeneticSolution(ConfigMixin, GeneticParametersBaseModel):
            - GRID_SUPPORT_EXPORT: ac_charge == 0 and discharge_allowed == 1
            - GRID_SUPPORT_IMPORT: ac_charge > 0 and discharge_allowed == 0 or 1
        """
-        from akkudoktoreos.core.ems import get_ems
-
        start_datetime = get_ems().start_datetime
        start_day_hour = start_datetime.in_timezone(self.config.general.timezone).hour
        interval_hours = 1
@@ -567,8 +566,6 @@ class GeneticSolution(ConfigMixin, GeneticParametersBaseModel):

    def energy_management_plan(self) -> EnergyManagementPlan:
        """Provide the genetic solution as an energy management plan."""
-        from akkudoktoreos.core.ems import get_ems
-
        start_datetime = get_ems().start_datetime
        start_day_hour = start_datetime.in_timezone(self.config.general.timezone).hour
        plan = EnergyManagementPlan(
@@ -3,6 +3,7 @@ from typing import Optional, Union
from pydantic import Field, computed_field, model_validator

from akkudoktoreos.config.configabc import SettingsBaseModel
+from akkudoktoreos.core.coreabc import get_ems
from akkudoktoreos.core.pydantic import (
    PydanticBaseModel,
    PydanticDateTimeDataFrame,
@@ -91,10 +92,14 @@ class OptimizationCommonSettings(SettingsBaseModel):
    @property
    def keys(self) -> list[str]:
        """The keys of the solution."""
-        from akkudoktoreos.core.ems import get_ems
+        try:
+            ems_eos = get_ems()
+        except Exception:
+            # ems might not be initialized
+            return []

        key_list = []
-        optimization_solution = get_ems().optimization_solution()
+        optimization_solution = ems_eos.optimization_solution()
        if optimization_solution:
            # Prepare mapping
            df = optimization_solution.solution.to_dataframe()
@@ -3,21 +3,28 @@ from typing import Optional
from pydantic import Field, computed_field, field_validator

from akkudoktoreos.config.configabc import SettingsBaseModel
+from akkudoktoreos.core.coreabc import get_prediction
from akkudoktoreos.prediction.elecpriceabc import ElecPriceProvider
from akkudoktoreos.prediction.elecpriceenergycharts import (
    ElecPriceEnergyChartsCommonSettings,
)
from akkudoktoreos.prediction.elecpriceimport import ElecPriceImportCommonSettings
-from akkudoktoreos.prediction.prediction import get_prediction
-
-prediction_eos = get_prediction()
-
-# Valid elecprice providers
-elecprice_providers = [
-    provider.provider_id()
-    for provider in prediction_eos.providers
-    if isinstance(provider, ElecPriceProvider)
-]


+def elecprice_provider_ids() -> list[str]:
+    """Valid elecprice provider ids."""
+    try:
+        prediction_eos = get_prediction()
+    except Exception:
+        # Prediction may not be initialized
+        # Return at least the provider used in the example
+        return ["ElecPriceAkkudoktor"]
+
+    return [
+        provider.provider_id()
+        for provider in prediction_eos.providers
+        if isinstance(provider, ElecPriceProvider)
+    ]


class ElecPriceCommonSettings(SettingsBaseModel):
@@ -61,14 +68,14 @@ class ElecPriceCommonSettings(SettingsBaseModel):
    @property
    def providers(self) -> list[str]:
        """Available electricity price provider ids."""
-        return elecprice_providers
+        return elecprice_provider_ids()

    # Validators
    @field_validator("provider", mode="after")
    @classmethod
    def validate_provider(cls, value: Optional[str]) -> Optional[str]:
-        if value is None or value in elecprice_providers:
+        if value is None or value in elecprice_provider_ids():
            return value
        raise ValueError(
-            f"Provider '{value}' is not a valid electricity price provider: {elecprice_providers}."
+            f"Provider '{value}' is not a valid electricity price provider: {elecprice_provider_ids()}."
        )
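The recurring change across the elecprice, feedintariff, and load modules is the same: an import-time provider list becomes a function evaluated at call time, with a fallback when the prediction singleton is not yet initialized. A generic standalone sketch of that pattern (`make_provider_ids` and its arguments are illustrative names, not the EOS API):

```python
from typing import Callable


def make_provider_ids(
    get_registry: Callable[[], list[str]],
    fallback: list[str],
) -> Callable[[], list[str]]:
    """Wrap a registry lookup so it is deferred to call time.

    The registry singleton need not exist when the module is imported;
    until it does, the fallback list keeps validators and examples working.
    """
    def provider_ids() -> list[str]:
        try:
            return get_registry()
        except Exception:
            # Registry not initialized yet - return the known defaults.
            return fallback
    return provider_ids
```

This mirrors the PR's broader goal of avoiding module-import-time initialization: nothing touches the singleton until a caller actually asks for provider ids.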
@@ -3,19 +3,26 @@ from typing import Optional
from pydantic import Field, computed_field, field_validator

from akkudoktoreos.config.configabc import SettingsBaseModel
+from akkudoktoreos.core.coreabc import get_prediction
from akkudoktoreos.prediction.feedintariffabc import FeedInTariffProvider
from akkudoktoreos.prediction.feedintarifffixed import FeedInTariffFixedCommonSettings
from akkudoktoreos.prediction.feedintariffimport import FeedInTariffImportCommonSettings
-from akkudoktoreos.prediction.prediction import get_prediction
-
-prediction_eos = get_prediction()
-
-# Valid feedintariff providers
-feedintariff_providers = [
-    provider.provider_id()
-    for provider in prediction_eos.providers
-    if isinstance(provider, FeedInTariffProvider)
-]


+def elecprice_provider_ids() -> list[str]:
+    """Valid feedintariff provider ids."""
+    try:
+        prediction_eos = get_prediction()
+    except Exception:
+        # Prediction may not be initialized
+        # Return at least the providers used in the examples
+        return ["FeedInTariffFixed", "FeedInTariffImport"]
+
+    return [
+        provider.provider_id()
+        for provider in prediction_eos.providers
+        if isinstance(provider, FeedInTariffProvider)
+    ]


class FeedInTariffCommonProviderSettings(SettingsBaseModel):
@@ -60,14 +67,14 @@ class FeedInTariffCommonSettings(SettingsBaseModel):
    @property
    def providers(self) -> list[str]:
        """Available feed in tariff provider ids."""
-        return feedintariff_providers
+        return elecprice_provider_ids()

    # Validators
    @field_validator("provider", mode="after")
    @classmethod
    def validate_provider(cls, value: Optional[str]) -> Optional[str]:
-        if value is None or value in feedintariff_providers:
+        if value is None or value in elecprice_provider_ids():
            return value
        raise ValueError(
-            f"Provider '{value}' is not a valid feed in tariff provider: {feedintariff_providers}."
+            f"Provider '{value}' is not a valid feed in tariff provider: {elecprice_provider_ids()}."
        )
@@ -5,20 +5,27 @@ from typing import Optional
from pydantic import Field, computed_field, field_validator

from akkudoktoreos.config.configabc import SettingsBaseModel
+from akkudoktoreos.core.coreabc import get_prediction
from akkudoktoreos.prediction.loadabc import LoadProvider
from akkudoktoreos.prediction.loadakkudoktor import LoadAkkudoktorCommonSettings
from akkudoktoreos.prediction.loadimport import LoadImportCommonSettings
from akkudoktoreos.prediction.loadvrm import LoadVrmCommonSettings
-from akkudoktoreos.prediction.prediction import get_prediction
-
-prediction_eos = get_prediction()
-
-# Valid load providers
-load_providers = [
-    provider.provider_id()
-    for provider in prediction_eos.providers
-    if isinstance(provider, LoadProvider)
-]


+def load_providers() -> list[str]:
+    """Valid load provider ids."""
+    try:
+        prediction_eos = get_prediction()
+    except Exception:
+        # Prediction may not be initialized
+        # Return at least the providers used in the examples
+        return ["LoadAkkudoktor", "LoadVrm", "LoadImport"]
+
+    return [
+        provider.provider_id()
+        for provider in prediction_eos.providers
+        if isinstance(provider, LoadProvider)
+    ]


class LoadCommonProviderSettings(SettingsBaseModel):
@@ -66,12 +73,12 @@ class LoadCommonSettings(SettingsBaseModel):
    @property
    def providers(self) -> list[str]:
        """Available load provider ids."""
-        return load_providers
+        return load_providers()

    # Validators
    @field_validator("provider", mode="after")
    @classmethod
    def validate_provider(cls, value: Optional[str]) -> Optional[str]:
-        if value is None or value in load_providers:
+        if value is None or value in load_providers():
            return value
-        raise ValueError(f"Provider '{value}' is not a valid load provider: {load_providers}.")
+        raise ValueError(f"Provider '{value}' is not a valid load provider: {load_providers()}.")
@@ -132,23 +132,32 @@ class LoadAkkudoktorAdjusted(LoadAkkudoktor):
        compare_dt = compare_start
        for i in range(len(load_total_kwh_array)):
            load_total_wh = load_total_kwh_array[i] * 1000
+            hour = compare_dt.hour
+
+            # Weight calculated by distance in days to the latest measurement
+            weight = 1 / ((compare_end - compare_dt).days + 1)

            # Extract mean (index 0) and standard deviation (index 1) for the given day and hour
            # Day indexing starts at 0, -1 because of that
-            hourly_stats = data_year_energy[compare_dt.day_of_year - 1, :, compare_dt.hour]
-            weight = 1 / ((compare_end - compare_dt).days + 1)
+            day_idx = compare_dt.day_of_year - 1
+            hourly_stats = data_year_energy[day_idx, :, hour]

            # Calculate adjustments (working days and weekend)
            if compare_dt.day_of_week < 5:
-                weekday_adjust[compare_dt.hour] += (load_total_wh - hourly_stats[0]) * weight
-                weekday_adjust_weight[compare_dt.hour] += weight
+                weekday_adjust[hour] += (load_total_wh - hourly_stats[0]) * weight
+                weekday_adjust_weight[hour] += weight
            else:
-                weekend_adjust[compare_dt.hour] += (load_total_wh - hourly_stats[0]) * weight
-                weekend_adjust_weight[compare_dt.hour] += weight
+                weekend_adjust[hour] += (load_total_wh - hourly_stats[0]) * weight
+                weekend_adjust_weight[hour] += weight

            compare_dt += compare_interval

        # Calculate mean
-        for i in range(24):
-            if weekday_adjust_weight[i] > 0:
-                weekday_adjust[i] = weekday_adjust[i] / weekday_adjust_weight[i]
-            if weekend_adjust_weight[i] > 0:
-                weekend_adjust[i] = weekend_adjust[i] / weekend_adjust_weight[i]
+        for hour in range(24):
+            if weekday_adjust_weight[hour] > 0:
+                weekday_adjust[hour] = weekday_adjust[hour] / weekday_adjust_weight[hour]
+            if weekend_adjust_weight[hour] > 0:
+                weekend_adjust[hour] = weekend_adjust[hour] / weekend_adjust_weight[hour]

        return (weekday_adjust, weekend_adjust)
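The adjustment above is a per-hour weighted mean of measured-minus-predicted deviations, with weight `1 / (days_to_latest + 1)` so recent measurements dominate. The core arithmetic can be sketched standalone (inputs are flattened illustrative arrays, not the EOS data structures):

```python
import numpy as np


def hourly_weighted_adjust(deviations_wh, days_before_latest, hours):
    """Per-hour weighted mean deviation; weight = 1 / (days + 1) as in the diff."""
    adjust = np.zeros(24)
    weight_sum = np.zeros(24)
    for dev, days, hour in zip(deviations_wh, days_before_latest, hours):
        w = 1.0 / (days + 1)
        adjust[hour] += dev * w
        weight_sum[hour] += w
    # Normalize; hours without any samples keep an adjustment of zero.
    for hour in range(24):
        if weight_sum[hour] > 0:
            adjust[hour] /= weight_sum[hour]
    return adjust
```

A sample measured today (days = 0) thus carries twice the weight of one measured yesterday, four times that of one three days old, and so on.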
@@ -26,7 +26,7 @@ Attributes:
    weather_clearoutside (WeatherClearOutside): Weather forecast provider using ClearOutside.
"""

-from typing import List, Optional, Union
+from typing import Optional, Union

from pydantic import Field
@@ -69,38 +69,6 @@ class PredictionCommonSettings(SettingsBaseModel):
    )


-class Prediction(PredictionContainer):
-    """Prediction container to manage multiple prediction providers.
-
-    Attributes:
-        providers (List[Union[PVForecastAkkudoktor, WeatherBrightSky, WeatherClearOutside]]):
-            List of forecast provider instances, in the order they should be updated.
-            Providers may depend on updates from others.
-    """
-
-    providers: List[
-        Union[
-            ElecPriceAkkudoktor,
-            ElecPriceEnergyCharts,
-            ElecPriceImport,
-            FeedInTariffFixed,
-            FeedInTariffImport,
-            LoadAkkudoktor,
-            LoadAkkudoktorAdjusted,
-            LoadVrm,
-            LoadImport,
-            PVForecastAkkudoktor,
-            PVForecastVrm,
-            PVForecastImport,
-            WeatherBrightSky,
-            WeatherClearOutside,
-            WeatherImport,
-        ]
-    ] = Field(
-        default_factory=list, json_schema_extra={"description": "List of prediction providers"}
-    )
-
-
# Initialize forecast providers, all are singletons.
elecprice_akkudoktor = ElecPriceAkkudoktor()
elecprice_energy_charts = ElecPriceEnergyCharts()
@@ -119,42 +87,85 @@ weather_clearoutside = WeatherClearOutside()
weather_import = WeatherImport()


-def get_prediction() -> Prediction:
-    """Gets the EOS prediction data."""
-    # Initialize Prediction instance with providers in the required order
+def prediction_providers() -> list[
+    Union[
+        ElecPriceAkkudoktor,
+        ElecPriceEnergyCharts,
+        ElecPriceImport,
+        FeedInTariffFixed,
+        FeedInTariffImport,
+        LoadAkkudoktor,
+        LoadAkkudoktorAdjusted,
+        LoadVrm,
+        LoadImport,
+        PVForecastAkkudoktor,
+        PVForecastVrm,
+        PVForecastImport,
+        WeatherBrightSky,
+        WeatherClearOutside,
+        WeatherImport,
+    ]
+]:
+    """Return list of prediction providers."""
+    global \
+        elecprice_akkudoktor, \
+        elecprice_energy_charts, \
+        elecprice_import, \
+        feedintariff_fixed, \
+        feedintariff_import, \
+        loadforecast_akkudoktor, \
+        loadforecast_akkudoktor_adjusted, \
+        loadforecast_vrm, \
+        loadforecast_import, \
+        pvforecast_akkudoktor, \
+        pvforecast_vrm, \
+        pvforecast_import, \
+        weather_brightsky, \
+        weather_clearoutside, \
+        weather_import

    # Care for provider sequence as providers may rely on others to be updated before.
-    prediction = Prediction(
-        providers=[
-            elecprice_akkudoktor,
-            elecprice_energy_charts,
-            elecprice_import,
-            feedintariff_fixed,
-            feedintariff_import,
-            loadforecast_akkudoktor,
-            loadforecast_akkudoktor_adjusted,
-            loadforecast_vrm,
-            loadforecast_import,
-            pvforecast_akkudoktor,
-            pvforecast_vrm,
-            pvforecast_import,
-            weather_brightsky,
-            weather_clearoutside,
-            weather_import,
+    return [
+        elecprice_akkudoktor,
+        elecprice_energy_charts,
+        elecprice_import,
+        feedintariff_fixed,
+        feedintariff_import,
+        loadforecast_akkudoktor,
+        loadforecast_akkudoktor_adjusted,
+        loadforecast_vrm,
+        loadforecast_import,
+        pvforecast_akkudoktor,
+        pvforecast_vrm,
+        pvforecast_import,
+        weather_brightsky,
+        weather_clearoutside,
+        weather_import,
    ]


+class Prediction(PredictionContainer):
+    """Prediction container to manage multiple prediction providers."""
+
+    providers: list[
+        Union[
+            ElecPriceAkkudoktor,
+            ElecPriceEnergyCharts,
+            ElecPriceImport,
+            FeedInTariffFixed,
+            FeedInTariffImport,
+            LoadAkkudoktor,
+            LoadAkkudoktorAdjusted,
+            LoadVrm,
+            LoadImport,
+            PVForecastAkkudoktor,
+            PVForecastVrm,
+            PVForecastImport,
+            WeatherBrightSky,
+            WeatherClearOutside,
+            WeatherImport,
+        ]
+    ] = Field(
+        default_factory=prediction_providers,
+        json_schema_extra={"description": "List of prediction providers"},
+    )
-    return prediction


def main() -> None:
    """Main function to update and display predictions.

    This function initializes and updates the forecast providers in sequence
    according to the `Prediction` instance, then prints the updated prediction data.
    """
    prediction = get_prediction()
    prediction.update_data()
    print(f"Prediction: {prediction}")


if __name__ == "__main__":
    main()
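The hunk above replaces a hand-built `Prediction(providers=[...])` with a `default_factory` on the field, so the ordered provider list is produced lazily whenever a container is constructed. The mechanism can be sketched without pydantic, using a plain dataclass (names and provider strings here are illustrative):

```python
from dataclasses import dataclass, field


def default_providers() -> list[str]:
    # Built lazily per instance; order matters because providers may rely
    # on earlier ones having been updated first (as the diff notes).
    return ["elecprice", "feedintariff", "load", "pvforecast", "weather"]


@dataclass
class PredictionSketch:
    providers: list[str] = field(default_factory=default_providers)
```

Using a factory instead of a shared module-level list also means each instance gets its own list object, so mutating one container's providers cannot leak into another.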
@@ -15,17 +15,17 @@ from pydantic import Field, computed_field

from akkudoktoreos.core.coreabc import MeasurementMixin
from akkudoktoreos.core.dataabc import (
-    DataBase,
+    DataABC,
    DataContainer,
    DataImportProvider,
    DataProvider,
    DataRecord,
    DataSequence,
)
-from akkudoktoreos.utils.datetimeutil import DateTime, to_duration
+from akkudoktoreos.utils.datetimeutil import DateTime, Duration, to_duration


-class PredictionBase(DataBase, MeasurementMixin):
+class PredictionABC(DataABC, MeasurementMixin):
    """Base class for handling prediction data.

    Enables access to EOS configuration data (attribute `config`) and EOS measurement data
@@ -95,7 +95,7 @@ class PredictionSequence(DataSequence):
    )


-class PredictionStartEndKeepMixin(PredictionBase):
+class PredictionStartEndKeepMixin(PredictionABC):
    """A mixin to manage start, end, and historical retention datetimes for prediction data.

    The starting datetime for prediction data generation is provided by the energy management
@@ -196,6 +196,35 @@ class PredictionProvider(PredictionStartEndKeepMixin, DataProvider):
    Derived classes have to provide their own records field with correct record type set.
    """

+    def db_keep_datetime(self) -> Optional[DateTime]:
+        """Earliest datetime from which database records should be retained.
+
+        Used when removing old records from database to free space.
+
+        Subclasses may override this method to provide a domain-specific default.
+
+        Returns:
+            Datetime or None.
+        """
+        return self.keep_datetime
+
+    def db_initial_time_window(self) -> Optional[Duration]:
+        """Return the initial time window used for database loading.
+
+        This window defines the initial symmetric time span around a target datetime
+        that should be loaded from the database when no explicit search time window
+        is specified. It serves as a loading hint and may be expanded by the caller
+        if no records are found within the initial range.
+
+        Subclasses may override this method to provide a domain-specific default.
+
+        Returns:
+            The initial loading time window as a Duration, or ``None`` to indicate
+            that no initial window constraint should be applied.
+        """
+        hours = max(self.config.prediction.hours, self.config.prediction.historic_hours, 24)
+        return to_duration(hours * 3600)
+
    def update_data(
        self,
        force_enable: Optional[bool] = False,
@@ -219,9 +248,6 @@ class PredictionProvider(PredictionStartEndKeepMixin, DataProvider):
|
||||
# Call the custom update logic
|
||||
self._update_data(force_update=force_update)
|
||||
|
||||
# Assure records are sorted.
|
||||
self.sort_by_datetime()
|
||||
|
||||
|
||||
class PredictionImportProvider(PredictionProvider, DataImportProvider):
|
||||
"""Abstract base class for prediction providers that import prediction data.
|
||||
|
||||
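The `db_initial_time_window` hunk above describes a hint that "may be expanded by the caller if no records are found within the initial range". A minimal sketch of such an expansion loop, with a hypothetical `load_records(start, end)` callable standing in for the real database backend (names and the doubling strategy are assumptions, not the EOS implementation):

```python
from datetime import datetime, timedelta
from typing import Callable

def load_with_expanding_window(
    load_records: Callable[[datetime, datetime], list],
    target: datetime,
    initial_window: timedelta,
    max_window: timedelta,
) -> list:
    """Load records symmetrically around `target`, doubling the window
    until records are found or `max_window` is exceeded."""
    window = initial_window
    while window <= max_window:
        records = load_records(target - window, target + window)
        if records:
            return records
        window *= 2  # nothing found: widen the symmetric search span
    return []
```

This matches the use case from the commit message, where SoC measurements are searched 48 hours around the optimization start time.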
@@ -5,19 +5,26 @@ from typing import Any, List, Optional, Self
 from pydantic import Field, computed_field, field_validator, model_validator

 from akkudoktoreos.config.configabc import SettingsBaseModel
-from akkudoktoreos.prediction.prediction import get_prediction
+from akkudoktoreos.core.coreabc import get_prediction
 from akkudoktoreos.prediction.pvforecastabc import PVForecastProvider
 from akkudoktoreos.prediction.pvforecastimport import PVForecastImportCommonSettings
 from akkudoktoreos.prediction.pvforecastvrm import PVForecastVrmCommonSettings

-prediction_eos = get_prediction()
-
-# Valid PV forecast providers
-pvforecast_providers = [
-    provider.provider_id()
-    for provider in prediction_eos.providers
-    if isinstance(provider, PVForecastProvider)
-]
+def pvforecast_provider_ids() -> list[str]:
+    """Valid PV forecast providers."""
+    try:
+        prediction_eos = get_prediction()
+    except:
+        # Prediction may not be initialized
+        # Return at least provider used in example
+        return ["PVForecastAkkudoktor", "PVForecastImport", "PVForecastVrm"]
+
+    return [
+        provider.provider_id()
+        for provider in prediction_eos.providers
+        if isinstance(provider, PVForecastProvider)
+    ]


 class PVForecastPlaneSetting(SettingsBaseModel):
@@ -264,16 +271,16 @@ class PVForecastCommonSettings(SettingsBaseModel):
     @property
     def providers(self) -> list[str]:
         """Available PVForecast provider ids."""
-        return pvforecast_providers
+        return pvforecast_provider_ids()

     # Validators
     @field_validator("provider", mode="after")
     @classmethod
     def validate_provider(cls, value: Optional[str]) -> Optional[str]:
-        if value is None or value in pvforecast_providers:
+        if value is None or value in pvforecast_provider_ids():
             return value
         raise ValueError(
-            f"Provider '{value}' is not a valid PV forecast provider: {pvforecast_providers}."
+            f"Provider '{value}' is not a valid PV forecast provider: {pvforecast_provider_ids()}."
         )

     ## Computed fields
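The change above, from a module-level `pvforecast_providers` list to a `pvforecast_provider_ids()` function, defers the singleton lookup from import time to call time. This is the "avoid module import startup" refactor from the commit message. The difference can be shown in isolation with a toy registry (not the EOS API):

```python
registry: list[str] = []  # stands in for the prediction singleton's providers

# Evaluated once, at import time: misses anything registered later.
ids_at_import = list(registry)

def ids_at_call() -> list[str]:
    # Evaluated on every call: always reflects the current registry.
    return list(registry)

# A provider registered after module import.
registry.append("PVForecastAkkudoktor")
```

The module-level snapshot stays empty, while the function sees the late registration.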
@@ -5,18 +5,25 @@ from typing import Optional
 from pydantic import Field, computed_field, field_validator

 from akkudoktoreos.config.configabc import SettingsBaseModel
-from akkudoktoreos.prediction.prediction import get_prediction
+from akkudoktoreos.core.coreabc import get_prediction
 from akkudoktoreos.prediction.weatherabc import WeatherProvider
 from akkudoktoreos.prediction.weatherimport import WeatherImportCommonSettings

-prediction_eos = get_prediction()
-
-# Valid weather providers
-weather_providers = [
-    provider.provider_id()
-    for provider in prediction_eos.providers
-    if isinstance(provider, WeatherProvider)
-]
+def weather_provider_ids() -> list[str]:
+    """Valid weather provider ids."""
+    try:
+        prediction_eos = get_prediction()
+    except:
+        # Prediction may not be initialized
+        # Return at least provider used in example
+        return ["WeatherImport"]
+
+    return [
+        provider.provider_id()
+        for provider in prediction_eos.providers
+        if isinstance(provider, WeatherProvider)
+    ]


 class WeatherCommonProviderSettings(SettingsBaseModel):
@@ -56,14 +63,14 @@ class WeatherCommonSettings(SettingsBaseModel):
     @property
     def providers(self) -> list[str]:
         """Available weather provider ids."""
-        return weather_providers
+        return weather_provider_ids()

     # Validators
     @field_validator("provider", mode="after")
     @classmethod
     def validate_provider(cls, value: Optional[str]) -> Optional[str]:
-        if value is None or value in weather_providers:
+        if value is None or value in weather_provider_ids():
             return value
         raise ValueError(
-            f"Provider '{value}' is not a valid weather provider: {weather_providers}."
+            f"Provider '{value}' is not a valid weather provider: {weather_provider_ids()}."
         )
@@ -402,6 +402,75 @@ def AdminConfig(
     )


+def AdminDatabase(
+    eos_host: str, eos_port: Union[str, int], data: Optional[dict], config: Optional[dict[str, Any]]
+) -> tuple[str, Union[Card, list[Card]]]:
+    """Creates a database management card.
+
+    Args:
+        eos_host (str): The hostname of the EOS server.
+        eos_port (Union[str, int]): The port of the EOS server.
+        data (Optional[dict]): Incoming data containing action and category for processing.
+
+    Returns:
+        tuple[str, Union[Card, list[Card]]]: A tuple containing the database category label and the `Card` UI component.
+    """
+    server = f"http://{eos_host}:{eos_port}"
+    eos_hostname = "EOS server"
+    eosdash_hostname = "EOSdash server"
+
+    category = "database"
+
+    status_vacuum = (None,)
+    if data and data.get("category", None) == category:
+        # This data is for us
+        if data["action"] == "vacuum":
+            # Remove old records from database
+            try:
+                result = requests.post(f"{server}/v1/admin/database/vacuum", timeout=30)
+                result.raise_for_status()
+                status_vacuum = Success(
+                    f"Removed old data records from database on '{eos_hostname}'"
+                )
+            except requests.exceptions.HTTPError as e:
+                detail = result.json()["detail"]
+                status_vacuum = Error(
+                    f"Can not remove old data records from database on '{eos_hostname}': {e}, {detail}"
+                )
+            except Exception as e:
+                status_vacuum = Error(
+                    f"Can not remove old data records from database on '{eos_hostname}': {e}"
+                )
+
+    return (
+        category,
+        [
+            Card(
+                Details(
+                    Summary(
+                        Grid(
+                            DivHStacked(
+                                UkIcon(icon="play"),
+                                ConfigButton(
+                                    "Vacuum",
+                                    hx_post=request_url_for("/eosdash/admin"),
+                                    hx_target="#page-content",
+                                    hx_swap="innerHTML",
+                                    hx_vals='{"category": "database", "action": "vacuum"}',
+                                ),
+                                P(f"Remove old data records from database on '{eos_hostname}'"),
+                            ),
+                            status_vacuum,
+                        ),
+                        cls="list-none",
+                    ),
+                    P(f"Remove old data records from database on '{eos_hostname}'."),
+                ),
+            ),
+        ],
+    )


 def Admin(eos_host: str, eos_port: Union[str, int], data: Optional[dict] = None) -> Div:
     """Generates the administrative dashboard layout.

@@ -450,6 +519,7 @@ def Admin(eos_host: str, eos_port: Union[str, int], data: Optional[dict] = None)
     for category, admin in [
         AdminCache(eos_host, eos_port, data, config),
         AdminConfig(eos_host, eos_port, data, config, config_backup),
+        AdminDatabase(eos_host, eos_port, data, config),
     ]:
         if category != last_category:
             rows.append(H3(category))
@@ -7,7 +7,7 @@ from monsterui.franken import A, ButtonT, DivFullySpaced, P
 from requests.exceptions import RequestException

 import akkudoktoreos.server.dash.eosstatus as eosstatus
-from akkudoktoreos.config.config import get_config
+from akkudoktoreos.core.coreabc import get_config


 def get_alive(eos_host: str, eos_port: Union[str, int]) -> str:
@@ -206,13 +206,20 @@ def SolutionCard(solution: OptimizationSolution, config: SettingsEOS, data: Opti
         else:
             continue
     # Adjust to similar y-axis 0-point
+    values_min_max = [
+        (energy_wh_min, energy_wh_max),
+        (amt_kwh_min, amt_kwh_max),
+        (amt_min, amt_max),
+        (soc_factor_min, soc_factor_max),
+    ]
     # First get the maximum factor for the min value related the maximum value
-    min_max_factor = max(
-        (energy_wh_min * -1.0) / energy_wh_max,
-        (amt_kwh_min * -1.0) / amt_kwh_max,
-        (amt_min * -1.0) / amt_max,
-        (soc_factor_min * -1.0) / soc_factor_max,
-    )
+    min_max_factor = 0.0
+    for value_min, value_max in values_min_max:
+        if value_max > 0:
+            value_factor = (value_min * -1.0) / value_max
+            if value_factor > min_max_factor:
+                min_max_factor = value_factor

     # Adapt the min values to have the same relative min/max factor on all y-axis
     energy_wh_min = min_max_factor * energy_wh_max * -1.0
     amt_kwh_min = min_max_factor * amt_kwh_max * -1.0
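This hunk is the "dashboard bailout on zero value solution display" fix from the commit message: the old unconditional `max(... / value_max ...)` divides by zero when a series is all zeros, while the new loop skips axes whose maximum is zero. Extracted as a standalone helper for illustration (the function name is hypothetical):

```python
def common_min_max_factor(pairs: list[tuple[float, float]]) -> float:
    """Largest |min|/max ratio across all (min, max) pairs, skipping
    axes whose maximum is zero so an all-zero series cannot raise
    ZeroDivisionError."""
    factor = 0.0
    for value_min, value_max in pairs:
        if value_max > 0:
            value_factor = (value_min * -1.0) / value_max
            if value_factor > factor:
                factor = value_factor
    return factor
```

Every axis then uses the same factor so their zero lines align on the chart.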
File diff suppressed because it is too large
@@ -12,7 +12,7 @@ from monsterui.core import FastHTML, Theme
 from starlette.middleware import Middleware
 from starlette.requests import Request

-from akkudoktoreos.config.config import get_config
+from akkudoktoreos.core.coreabc import get_config
 from akkudoktoreos.core.logabc import LOGGING_LEVELS
 from akkudoktoreos.core.logging import logging_track_config
 from akkudoktoreos.core.version import __version__
@@ -39,7 +39,7 @@ from akkudoktoreos.server.server import (
 )
 from akkudoktoreos.utils.stringutil import str2bool

-config_eos = get_config()
+config_eos = get_config(init=True)


 # ------------------------------------
src/akkudoktoreos/server/rest/cli.py (new file, 149 lines)
@@ -0,0 +1,149 @@
+import argparse
+
+from loguru import logger
+
+from akkudoktoreos.core.coreabc import get_config
+from akkudoktoreos.core.logabc import LOGGING_LEVELS
+from akkudoktoreos.server.server import get_default_host
+from akkudoktoreos.utils.stringutil import str2bool
+
+
+def cli_argument_parser() -> argparse.ArgumentParser:
+    """Build argument parser for EOS cli."""
+    parser = argparse.ArgumentParser(description="Start EOS server.")
+
+    parser.add_argument(
+        "--host",
+        type=str,
+        help="Host for the EOS server (default: value from config)",
+    )
+    parser.add_argument(
+        "--port",
+        type=int,
+        help="Port for the EOS server (default: value from config)",
+    )
+    parser.add_argument(
+        "--log_level",
+        type=str,
+        default="none",
+        help='Log level for the server console. Options: "critical", "error", "warning", "info", "debug", "trace" (default: "none")',
+    )
+    parser.add_argument(
+        "--reload",
+        type=str2bool,
+        default=False,
+        help="Enable or disable auto-reload. Useful for development. Options: True or False (default: False)",
+    )
+    parser.add_argument(
+        "--startup_eosdash",
+        type=str2bool,
+        default=None,
+        help="Enable or disable automatic EOSdash startup. Options: True or False (default: value from config)",
+    )
+    parser.add_argument(
+        "--run_as_user",
+        type=str,
+        help="The unprivileged user account the EOS server shall switch to after performing root-level startup tasks.",
+    )
+    return parser
+
+
+def cli_parse_args(
+    argv: list[str] | None = None,
+) -> tuple[argparse.Namespace, list[str]]:
+    """Parse command-line arguments for the EOS CLI.
+
+    This function parses known EOS-specific command-line arguments and
+    returns any remaining unknown arguments unmodified. Unknown arguments
+    can be forwarded to other subsystems (e.g. Uvicorn).
+
+    If ``argv`` is ``None``, arguments are read from ``sys.argv[1:]``.
+    If ``argv`` is provided, it is used instead.
+
+    Args:
+        argv: Optional list of command-line arguments to parse. If omitted,
+            the arguments are taken from ``sys.argv[1:]``.
+
+    Returns:
+        A tuple containing:
+        - A namespace with parsed EOS CLI arguments.
+        - A list of unparsed (unknown) command-line arguments.
+    """
+    args, args_unknown = cli_argument_parser().parse_known_args(argv)
+    return args, args_unknown
+
+
+def cli_apply_args_to_config(args: argparse.Namespace) -> None:
+    """Apply parsed CLI arguments to the EOS configuration.
+
+    This function updates the EOS configuration with values provided via
+    the command line. For each parameter, the precedence is:
+
+        CLI argument > existing config value > default value
+
+    Currently handled arguments:
+
+    - log_level: Updates "logging/console_level" in config.
+    - host: Updates "server/host" in config.
+    - port: Updates "server/port" in config.
+    - startup_eosdash: Updates "server/startup_eosdash" in config.
+    - eosdash_host/port: Initialized if EOSdash is enabled and not already set.
+
+    Args:
+        args: Parsed command-line arguments from argparse.
+    """
+    config_eos = get_config()
+
+    # Setup parameters from args, config_eos and default
+    # Remember parameters in config
+
+    # Setup EOS logging level - first to have the other logging messages logged
+    if args.log_level is not None:
+        log_level = args.log_level.upper()
+        # Ensure log_level from command line is in config settings
+        if log_level in LOGGING_LEVELS:
+            # Setup console logging level using nested value
+            # - triggers logging configuration by logging_track_config
+            config_eos.set_nested_value("logging/console_level", log_level)
+            logger.debug(f"logging/console_level configuration set by argument to {log_level}")
+
+    # Setup EOS server host
+    if args.host:
+        host = args.host
+        logger.debug(f"server/host configuration set by argument to {host}")
+    elif config_eos.server.host:
+        host = config_eos.server.host
+    else:
+        host = get_default_host()
+    # Ensure host from command line is in config settings
+    config_eos.set_nested_value("server/host", host)
+
+    # Setup EOS server port
+    if args.port:
+        port = args.port
+        logger.debug(f"server/port configuration set by argument to {port}")
+    elif config_eos.server.port:
+        port = config_eos.server.port
+    else:
+        port = 8503
+    # Ensure port from command line is in config settings
+    config_eos.set_nested_value("server/port", port)
+
+    # Setup EOSdash startup
+    if args.startup_eosdash is not None:
+        # Ensure startup_eosdash from command line is in config settings
+        config_eos.set_nested_value("server/startup_eosdash", args.startup_eosdash)
+        logger.debug(
+            f"server/startup_eosdash configuration set by argument to {args.startup_eosdash}"
+        )
+
+    if config_eos.server.startup_eosdash:
+        # Ensure EOSdash host and port config settings are at least set to default values
+
+        # Setup EOSdash host
+        if config_eos.server.eosdash_host is None:
+            config_eos.set_nested_value("server/eosdash_host", host)
+
+        # Setup EOSdash port
+        if config_eos.server.eosdash_port is None:
+            config_eos.set_nested_value("server/eosdash_port", port + 1)
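`cli_parse_args` above relies on `argparse.ArgumentParser.parse_known_args` to split EOS-specific options from flags meant for other subsystems (e.g. Uvicorn). The mechanism in miniature, with a toy parser:

```python
import argparse

parser = argparse.ArgumentParser(description="toy EOS-style parser")
parser.add_argument("--port", type=int)

# Known flags are parsed; unknown ones come back untouched,
# ready to be forwarded to another subsystem.
args, unknown = parser.parse_known_args(["--port", "8503", "--workers", "4"])
```

Unlike `parse_args`, `parse_known_args` does not error out on `--workers`, which is what makes the forwarding pattern possible.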
@@ -8,14 +8,12 @@ from typing import Any, MutableMapping

 from loguru import logger

-from akkudoktoreos.config.config import get_config
+from akkudoktoreos.core.coreabc import get_config
 from akkudoktoreos.server.server import (
     validate_ip_or_hostname,
     wait_for_port_free,
 )

-config_eos = get_config()
-
 # Loguru to HA stdout
 logger.add(sys.stdout, format="{time} | {level} | {message}", enqueue=True)

@@ -277,14 +275,18 @@ async def forward_stream(stream: asyncio.StreamReader, prefix: str = "") -> None
         _emit_drop_warning()


-# Path to eosdash
-eosdash_path = Path(__file__).parent.resolve().joinpath("eosdash.py")
-
-
 async def run_eosdash_supervisor() -> None:
     """Starts EOSdash, pipes its logs, restarts it if it crashes.

     Runs forever.
     """
-    global eosdash_log_queue
+    global eosdash_log_queue, eosdash_path
+
+    eosdash_path = Path(__file__).parent.resolve().joinpath("eosdash.py")
+    config_eos = get_config()

     while True:
         await asyncio.sleep(5)
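The supervisor above starts EOSdash, watches it, and restarts it when it exits or crashes. Stripped of subprocess and log-piping details, that pattern reduces to the loop below; unlike the real supervisor, which runs forever, this sketch caps the restarts so it terminates (all names here are illustrative):

```python
import asyncio

async def supervise(start_child, max_restarts: int) -> int:
    """Run `start_child` repeatedly, restarting after each exit or crash,
    allowing up to `max_restarts` restarts. Returns the number of runs."""
    runs = 0
    while runs <= max_restarts:
        try:
            await start_child()  # returns (or raises) when the child exits
        except Exception:
            pass  # a crash is treated like an exit: restart
        runs += 1
    return runs

async def exiting_child() -> None:
    await asyncio.sleep(0)  # simulate a child that exits immediately

async def crashing_child() -> None:
    raise RuntimeError("child crashed")
```

Both a clean exit and a crash lead to a restart, matching the "restarts it if it crashes" behavior.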
@@ -90,3 +90,73 @@ def repeat_every(
         return wrapped

     return decorator
+
+
+def make_repeated_task(
+    func: NoArgsNoReturnAnyFuncT,
+    *,
+    seconds: float,
+    wait_first: float | None = None,
+    max_repetitions: int | None = None,
+    on_complete: NoArgsNoReturnAnyFuncT | None = None,
+    on_exception: ExcArgNoReturnAnyFuncT | None = None,
+) -> NoArgsNoReturnAsyncFuncT:
+    """Create a version of the given function that runs periodically.
+
+    This function wraps `func` with the `repeat_every` decorator at runtime,
+    allowing decorator parameters to be determined dynamically rather than at import time.
+
+    Args:
+        func (Callable[[], None] | Callable[[], Coroutine[Any, Any, None]]):
+            The function to execute periodically. Must accept no arguments.
+        seconds (float):
+            Interval in seconds between repeated calls.
+        wait_first (float | None, optional):
+            If provided, the function will wait this many seconds before the first call.
+        max_repetitions (int | None, optional):
+            Maximum number of times to repeat the function. If None, repeats indefinitely.
+        on_complete (Callable[[], None] | Callable[[], Coroutine[Any, Any, None]] | None, optional):
+            Function to call once the repetitions are complete.
+        on_exception (Callable[[Exception], None] | Callable[[Exception], Coroutine[Any, Any, None]] | None, optional):
+            Function to call if an exception is raised by `func`.
+
+    Returns:
+        Callable[[], Coroutine[Any, Any, None]]:
+            An async function that starts the periodic execution when called.
+
+    Usage:
+        .. code-block:: python
+
+            from my_task import my_task
+
+            from akkudoktoreos.core.coreabc import get_config
+            from akkudoktoreos.server.rest.tasks import make_repeated_task
+
+            config = get_config()
+
+            # Create a periodic task using configuration-dependent interval
+            repeated_task = make_repeated_task(
+                my_task,
+                seconds=config.server.poll_interval,
+                wait_first=5,
+                max_repetitions=None,
+            )
+
+            # Run the task in the event loop
+            import asyncio
+
+            asyncio.run(repeated_task())
+
+    Notes:
+        - This pattern avoids starting the loop at import time.
+        - Arguments such as `seconds` can be read from runtime sources (config, CLI args, environment variables).
+        - The returned function must be awaited to start the periodic loop.
+    """
+    # Return decorated function
+    return repeat_every(
+        seconds=seconds,
+        wait_first=wait_first,
+        max_repetitions=max_repetitions,
+        on_complete=on_complete,
+        on_exception=on_exception,
+    )(func)
src/akkudoktoreos/server/retentionmanager.py (new file, 390 lines)
@@ -0,0 +1,390 @@
+"""Retention Manager for Akkudoktor-EOS server.
+
+This module provides a single long-running background task that owns the scheduling of all periodic
+server-maintenance jobs (cache cleanup, DB autosave, config reload, …).
+
+Responsibilities:
+- Run a fast "heartbeat" loop (default 5 s) — the *compaction tick*.
+- Maintain a registry of ``ManagedJob`` entries, each with its own interval.
+- Re-read the live configuration on every tick so interval changes take effect
+  immediately without a server restart.
+- Track per-job state: last run time, last duration, last error, run count.
+- Expose that state for health-check / metrics endpoints.
+
+Example:
+    Typical usage inside your FastAPI lifespan::
+
+        from akkudoktoreos.core.coreabc import get_config
+        from akkudoktoreos.server.rest.retention_manager import RetentionManager
+        from akkudoktoreos.server.rest.tasks import make_repeated_task
+
+        manager = RetentionManager(get_config().get_nested_value)
+        manager.register("cache_cleanup", cache_cleanup_fn, interval_attr="server/cache_cleanup_interval")
+        manager.register("db_autosave", db_autosave_fn, interval_attr="server/db_autosave_interval")
+
+        @asynccontextmanager
+        async def lifespan(app: FastAPI):
+            tick_task = make_repeated_task(manager.tick, seconds=5, wait_first=2)
+            await tick_task()
+            yield
+"""
+
+from __future__ import annotations
+
+import asyncio
+import time
+from dataclasses import dataclass
+from typing import Any, Callable, Coroutine, Optional, Union
+
+from loguru import logger
+from starlette.concurrency import run_in_threadpool
+
+NoArgsNoReturnAnyFuncT = Union[Callable[[], None], Callable[[], Coroutine[Any, Any, None]]]
+ExcArgNoReturnAnyFuncT = Union[
+    Callable[[Exception], None], Callable[[Exception], Coroutine[Any, Any, None]]
+]
+ConfigGetterFuncT = Callable[[str], Any]
+
+
+# ---------------------------------------------------------------------------
+# Job state — one per registered maintenance task
+# ---------------------------------------------------------------------------
+
+
+@dataclass
+class JobState:
+    """Runtime state tracked for a single managed job.
+
+    Attributes:
+        name: Unique human-readable job name used in logs and metrics.
+        func: The maintenance callable. Must accept no arguments.
+        interval_attr: Key passed to ``config_getter`` to retrieve the interval in seconds
+            for this job.
+        fallback_interval: Interval in seconds used when the key is not found or returns zero.
+        config_getter: Callable that accepts a string key and returns the corresponding
+            configuration value. Invoked with ``interval_attr`` to obtain the interval
+            in seconds.
+        on_exception: Optional callable invoked with the raised exception whenever
+            ``func`` fails. May be sync or async.
+        last_run_at: Monotonic timestamp of the last completed run; ``0.0`` means never run.
+        last_duration: How long the last run took, in seconds.
+        last_error: String representation of the last exception, or ``None`` if the last run succeeded.
+        run_count: Total number of completed runs (successful or not).
+        is_running: ``True`` while the job coroutine is currently executing.
+    """
+
+    name: str
+    func: NoArgsNoReturnAnyFuncT
+    interval_attr: str  # key passed to config_getter to obtain the interval in seconds
+    fallback_interval: float  # used when the key is not found or returns zero
+    config_getter: ConfigGetterFuncT  # callable(key: str) -> Any; returns interval in seconds
+    on_exception: Optional[ExcArgNoReturnAnyFuncT] = None  # optional cleanup/alerting hook
+
+    # mutable state
+    last_run_at: float = 0.0  # monotonic timestamp; 0.0 means "never run"
+    last_duration: float = 0.0  # seconds the job took
+    last_error: Optional[str] = None
+    run_count: int = 0
+    is_running: bool = False
+
+    def interval(self) -> Optional[float]:
+        """Retrieve the current interval by calling ``config_getter`` with ``interval_attr``.
+
+        Returns ``None`` when the config value is ``None``, which signals that the
+        job is disabled and must never fire. Falls back to ``fallback_interval``
+        when the key is not found.
+
+        Returns:
+            The interval in seconds, or ``None`` if the job is disabled.
+        """
+        try:
+            value = self.config_getter(self.interval_attr)
+            if value is None:
+                return None
+            return float(value) if value else self.fallback_interval
+        except (KeyError, IndexError):
+            logger.warning(
+                "RetentionManager: config key '{}' not found, using fallback {}s",
+                self.interval_attr,
+                self.fallback_interval,
+            )
+            return self.fallback_interval
+
+    def is_due(self) -> bool:
+        """Check whether enough time has elapsed since the last run to execute this job again.
+
+        Returns ``False`` immediately when `interval` returns ``None``
+        (job is disabled), so a disabled job never fires regardless of when it
+        last ran.
+
+        Returns:
+            ``True`` if the job should be executed on this tick, ``False`` otherwise.
+        """
+        interval = self.interval()
+        if interval is None:
+            return False
+        return (time.monotonic() - self.last_run_at) >= interval
+
+    def summary(self) -> dict:
+        """Build a serialisable snapshot of the job's current state.
+
+        Returns:
+            A dictionary suitable for JSON serialisation, containing the job name,
+            interval key, last run timestamp, last duration, last error,
+            run count, and whether the job is currently running.
+        """
+        return {
+            "name": self.name,
+            "interval_attr": self.interval_attr,
+            "interval_s": self.interval(),
+            "last_run_at": self.last_run_at,
+            "last_duration_s": round(self.last_duration, 4),
+            "last_error": self.last_error,
+            "run_count": self.run_count,
+            "is_running": self.is_running,
+        }
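The interval/is_due contract in `JobState` above has three cases worth pinning down: an explicit `None` disables the job, a missing key falls back to `fallback_interval`, and a zero value also falls back. The contract can be exercised independently with a toy config dict standing in for the EOS `config_getter` (the class here is a simplified illustration, not the module's `JobState`):

```python
import time

class ToyJob:
    """Minimal stand-in for JobState's interval/is_due contract."""

    def __init__(self, config: dict, key: str, fallback: float):
        self.config, self.key, self.fallback = config, key, fallback
        self.last_run_at = 0.0  # monotonic timestamp; 0.0 means "never run"

    def interval(self):
        try:
            value = self.config[self.key]  # stands in for config_getter(interval_attr)
        except KeyError:
            return self.fallback  # missing key: use the fallback interval
        if value is None:
            return None  # explicit None disables the job
        return float(value) if value else self.fallback  # zero also falls back

    def is_due(self) -> bool:
        interval = self.interval()
        if interval is None:
            return False  # disabled jobs never fire
        return (time.monotonic() - self.last_run_at) >= interval
```

Because the interval is re-read on every check, editing the config dict changes scheduling immediately, mirroring the "interval changes take effect without a server restart" behavior from the module docstring.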
||||
# ---------------------------------------------------------------------------
|
||||
# Retention Manager
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class RetentionManager:
|
||||
"""Orchestrates all periodic server-maintenance jobs.
|
||||
|
||||
The manager itself is driven by an external ``make_repeated_task`` heartbeat
|
||||
(the *compaction tick*). A ``config_getter`` callable — accepting a string key
|
||||
and returning the corresponding value — is supplied at initialisation and
|
||||
stored on every registered job, keeping the manager decoupled from any
|
||||
specific config implementation.
|
||||
|
||||
Jobs are launched as independent ``asyncio.Task`` objects so they run
|
||||
concurrently without blocking the tick. Call `shutdown` during
|
||||
application teardown to wait for any in-flight tasks to complete before
|
||||
the event loop closes. A configurable shutdown_timeout prevents the
|
||||
wait from blocking indefinitely; jobs still running after the timeout are
|
||||
reported by name but not cancelled.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
config_getter: ConfigGetterFuncT,
|
||||
*,
|
||||
shutdown_timeout: float = 30.0,
|
||||
) -> None:
|
||||
"""Initialise the manager with a configuration accessor.
|
||||
|
||||
Args:
|
||||
config_getter: Callable that accepts a string key and returns the
|
||||
corresponding configuration value. Used by each registered job
|
||||
to look up its interval in seconds.
|
||||
shutdown_timeout: Maximum number of seconds to wait for in-flight
|
||||
jobs to finish during `shutdown`. If the timeout elapses
|
||||
before all tasks complete, an error is logged and the names of
|
||||
the still-running jobs are reported. The tasks are not cancelled
|
||||
so they may continue running until the event loop closes.
|
||||
Defaults to 30.0.
|
||||
|
||||
Example::
|
||||
|
||||
manager = RetentionManager(get_config().get_nested_value, shutdown_timeout=60.0)
|
||||
"""
|
||||
self._config_getter = config_getter
|
||||
self._shutdown_timeout = shutdown_timeout
|
||||
self._jobs: dict[str, JobState] = {}
|
||||
self._running_tasks: set[asyncio.Task] = set()
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Registration
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
def register(
|
||||
self,
|
||||
name: str,
|
||||
func: NoArgsNoReturnAnyFuncT,
|
||||
*,
|
||||
interval_attr: str,
|
||||
fallback_interval: float = 300.0,
|
||||
on_exception: Optional[ExcArgNoReturnAnyFuncT] = None,
|
||||
) -> None:
|
||||
"""Register a maintenance function with the manager.
|
||||
|
||||
Args:
|
||||
name: Unique human-readable job name used in logs and metrics.
|
||||
func: The maintenance callable. Must accept no arguments.
|
||||
interval_attr: Key passed to ``config_getter`` to retrieve the interval
|
||||
                in seconds for this job. When the config value is ``None`` the job
                is treated as disabled and will never fire.
            fallback_interval: Seconds to use when the config attribute is missing or zero.
                Defaults to ``300.0``.
            on_exception: Optional callable invoked with the raised exception whenever
                ``func`` fails. Useful for cleanup or alerting. May be sync or async.

        Raises:
            ValueError: If a job with the given ``name`` is already registered.
        """
        if name in self._jobs:
            raise ValueError(f"RetentionManager: job '{name}' is already registered")

        self._jobs[name] = JobState(
            name=name,
            func=func,
            interval_attr=interval_attr,
            fallback_interval=fallback_interval,
            config_getter=self._config_getter,
            on_exception=on_exception,
        )
        logger.info("RetentionManager: registered job '{}' (config: {})", name, interval_attr)

    def unregister(self, name: str) -> None:
        """Remove a previously registered job from the manager.

        If no job with the given name exists, this is a no-op.

        Args:
            name: The name of the job to remove.
        """
        self._jobs.pop(name, None)

    # ------------------------------------------------------------------
    # Tick — called by the external heartbeat loop
    # ------------------------------------------------------------------

    async def tick(self) -> None:
        """Single compaction tick: check every job and fire those that are due.

        Each job resolves its own interval via the ``config_getter`` captured at
        registration time. Jobs whose interval is ``None`` are silently skipped
        (disabled). Due jobs are launched as independent ``asyncio.Task`` objects
        so they run concurrently without blocking the tick. Each task is tracked
        in ``_running_tasks`` and removed automatically on completion, allowing
        `shutdown` to await all of them gracefully.

        Jobs that are still running from a previous tick are skipped to prevent
        overlapping executions.

        Note:
            This is the function you pass to ``make_repeated_task``.
        """
        due = [job for job in self._jobs.values() if not job.is_running and job.is_due()]

        if not due:
            return

        logger.debug("RetentionManager: {} job(s) due this tick", len(due))
        for job in due:
            task = asyncio.ensure_future(self._run_job(job))
            task.set_name(job.name)  # used by shutdown() to report timed-out jobs by name
            self._running_tasks.add(task)
            task.add_done_callback(self._running_tasks.discard)

    async def shutdown(self) -> None:
        """Wait for all currently running job tasks to complete.

        Waits up to ``shutdown_timeout`` seconds (configured at initialisation)
        for in-flight tasks to finish. If the timeout elapses before all tasks
        complete, an error is logged listing the names of the jobs that are still
        running. Those tasks are **not** cancelled — they continue until the event
        loop closes — but `shutdown` returns so that application teardown
        is not blocked indefinitely.

        Returns immediately if no tasks are running.

        Example::

            @asynccontextmanager
            async def lifespan(app: FastAPI):
                tick_task = make_repeated_task(manager.tick, seconds=5, wait_first=2)
                await tick_task()

                yield

                await manager.shutdown()
        """
        if not self._running_tasks:
            return

        logger.info(
            "RetentionManager: shutdown — waiting up to {}s for {} task(s) to finish",
            self._shutdown_timeout,
            len(self._running_tasks),
        )

        done, pending = await asyncio.wait(self._running_tasks, timeout=self._shutdown_timeout)

        if pending:
            # Task names were set to the job name when the task was created in tick().
            pending_names = [t.get_name() for t in pending]
            logger.error(
                "RetentionManager: shutdown timed out after {}s — {} job(s) still running: {}",
                self._shutdown_timeout,
                len(pending),
                pending_names,
            )
        else:
            logger.info("RetentionManager: all tasks finished, shutdown complete")

        self._running_tasks.clear()

    # ------------------------------------------------------------------
    # Internal helpers
    # ------------------------------------------------------------------

    async def _run_job(self, job: JobState) -> None:
        """Execute a single job and update its state regardless of outcome.

        Handles both async and sync callables for both the main function and the
        optional ``on_exception`` hook. Exceptions from ``func`` are caught, logged,
        stored on the job, and forwarded to ``on_exception`` if provided, so a
        failing job never disrupts other concurrent jobs or future ticks.

        Args:
            job: The `JobState` instance to execute.
        """
        job.is_running = True
        start = time.monotonic()
        logger.debug("RetentionManager: starting job '{}'", job.name)
        try:
            if asyncio.iscoroutinefunction(job.func):
                await job.func()
            else:
                await run_in_threadpool(job.func)

            job.last_error = None
            logger.debug(
                "RetentionManager: job '{}' completed in {:.3f}s",
                job.name,
                time.monotonic() - start,
            )

        except Exception as exc:  # noqa: BLE001
            job.last_error = str(exc)
            logger.exception("RetentionManager: job '{}' raised an exception: {}", job.name, exc)

            if job.on_exception is not None:
                if asyncio.iscoroutinefunction(job.on_exception):
                    await job.on_exception(exc)
                else:
                    await run_in_threadpool(job.on_exception, exc)

        finally:
            job.last_duration = time.monotonic() - start
            job.last_run_at = time.monotonic()
            job.run_count += 1
            job.is_running = False

    # ------------------------------------------------------------------
    # Observability
    # ------------------------------------------------------------------

    def status(self) -> list[dict]:
        """Return a snapshot of every job's state for health or metrics endpoints.

        Returns:
            A list of dictionaries, one per registered job, each produced by
            `JobState.summary`.
        """
        return [job.summary() for job in self._jobs.values()]

    def __repr__(self) -> str:  # pragma: no cover
        return f"<RetentionManager jobs={list(self._jobs)}>"
@@ -14,6 +14,7 @@ from loguru import logger
from pydantic import Field, field_validator

from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.core.coreabc import get_config


def get_default_host() -> str:
@@ -258,8 +259,6 @@ def fix_data_directories_permissions(run_as_user: Optional[str] = None) -> None:
        run_as_user (Optional[str]): The user who should own the data directories and files.
            Defaults to current one.
    """
    from akkudoktoreos.config.config import get_config

    config_eos = get_config()

    base_dirs = [

@@ -1868,6 +1868,28 @@ def to_duration(
    raise ValueError(error_msg)


# Timezone names that are semantically identical to UTC and should be
# canonicalized. Keys are lower-cased for case-insensitive matching.
_UTC_ALIASES: dict[str, str] = {
    "utc": "UTC",
    "gmt": "UTC",
    "z": "UTC",
    "etc/utc": "UTC",
    "etc/gmt": "UTC",
    "etc/gmt+0": "UTC",
    "etc/gmt-0": "UTC",
    "etc/gmt0": "UTC",
    "etc/greenwich": "UTC",
    "etc/universal": "UTC",
    "etc/zulu": "UTC",
}


def _canonicalize_tz_name(name: str) -> str:
    """Return 'UTC' when *name* is a known UTC alias, otherwise return unchanged."""
    return _UTC_ALIASES.get(name.lower(), name)

@overload
def to_timezone(
    utc_offset: Optional[float] = None,
@@ -1891,6 +1913,9 @@ def to_timezone(
) -> Union[Timezone, str]:
    """Determines the timezone either by UTC offset, geographic location, or local system timezone.

    Timezone names that are semantically equivalent to UTC (e.g. ``GMT``, ``Z``,
    ``Etc/GMT``) are canonicalized to ``"UTC"`` before returning.

    By default, it returns a `Timezone` object representing the timezone.
    If `as_string` is set to `True`, the function returns the timezone name as a string instead.

@@ -1925,7 +1950,15 @@ def to_timezone(
    if not -24 <= utc_offset <= 24:
        raise ValueError("UTC offset must be within the range -24 to +24 hours.")

    # Convert UTC offset to an Etc/GMT-compatible format
    # Offset of exactly 0 is plain UTC – no need for Etc/GMT+0 etc.
    if utc_offset == 0:
        if as_string:
            return "UTC"
        return pendulum.timezone("UTC")

    # Convert UTC offset to an Etc/GMT-compatible format.
    # NOTE: Etc/GMT sign convention is *inverted* relative to the common
    # expectation: Etc/GMT+5 means UTC-5. We therefore flip the sign.
    hours = int(utc_offset)
    minutes = int((abs(utc_offset) - abs(hours)) * 60)
    sign = "-" if utc_offset >= 0 else "+"
@@ -1951,6 +1984,8 @@ def to_timezone(
        except Exception as e:
            raise ValueError(f"Error determining timezone for location {location}: {e}") from e

        tz_name = _canonicalize_tz_name(tz_name)

        if as_string:
            return tz_name
        return pendulum.timezone(tz_name)
@@ -1958,7 +1993,9 @@ def to_timezone(
    # Fallback to local timezone
    local_tz = pendulum.local_timezone()
    if isinstance(local_tz, str):
        local_tz = pendulum.timezone(local_tz)
        local_tz = pendulum.timezone(_canonicalize_tz_name(local_tz))
    else:
        local_tz = pendulum.timezone(_canonicalize_tz_name(local_tz.name))
    if as_string:
        return local_tz.name
    return local_tz
@@ -11,8 +11,7 @@ import numpy as np
import pendulum
from matplotlib.backends.backend_pdf import PdfPages

from akkudoktoreos.core.coreabc import ConfigMixin
from akkudoktoreos.core.ems import get_ems
from akkudoktoreos.core.coreabc import ConfigMixin, get_ems
from akkudoktoreos.optimization.genetic.genetic import GeneticOptimizationParameters
from akkudoktoreos.utils.datetimeutil import DateTime, to_datetime


@@ -22,7 +22,8 @@ from _pytest.logging import LogCaptureFixture
from loguru import logger
from xprocess import ProcessStarter, XProcess

from akkudoktoreos.config.config import ConfigEOS, get_config
from akkudoktoreos.config.config import ConfigEOS
from akkudoktoreos.core.coreabc import get_config, get_prediction, singletons_init
from akkudoktoreos.core.version import _version_hash, version
from akkudoktoreos.server.server import get_default_host

@@ -134,8 +135,6 @@ def is_ci() -> bool:

@pytest.fixture
def prediction_eos():
    from akkudoktoreos.prediction.prediction import get_prediction

    return get_prediction()

@@ -172,6 +171,37 @@ def cfg_non_existent(request):
    )


# ------------------------------------
# Provide pytest EOS config management
# ------------------------------------


@pytest.fixture
def config_default_dirs(tmpdir):
    """Fixture that provides a list of directories to be used as config dir."""
    tmp_user_home_dir = Path(tmpdir)

    # Default config directory from platform user config directory
    config_default_dir_user = tmp_user_home_dir / "config"

    # Default config directory from current working directory
    config_default_dir_cwd = tmp_user_home_dir / "cwd"
    config_default_dir_cwd.mkdir()

    # Default config directory from default config file
    config_default_dir_default = Path(__file__).parent.parent.joinpath("src/akkudoktoreos/data")

    # Default data directory from platform user data directory
    data_default_dir_user = tmp_user_home_dir

    return (
        config_default_dir_user,
        config_default_dir_cwd,
        config_default_dir_default,
        data_default_dir_user,
    )


@pytest.fixture(autouse=True)
def user_cwd(config_default_dirs):
    """Patch cwd provided by module pathlib.Path.cwd."""
@@ -203,64 +233,102 @@ def user_data_dir(config_default_dirs):


@pytest.fixture
def config_eos(
def config_eos_factory(
    disable_debug_logging,
    user_config_dir,
    user_data_dir,
    user_cwd,
    config_default_dirs,
    monkeypatch,
) -> ConfigEOS:
    """Fixture to reset EOS config to default values."""
    monkeypatch.setenv(
        "EOS_CONFIG__DATA_CACHE_SUBPATH", str(config_default_dirs[-1] / "data/cache")
    )
    monkeypatch.setenv(
        "EOS_CONFIG__DATA_OUTPUT_SUBPATH", str(config_default_dirs[-1] / "data/output")
    )
    config_file = config_default_dirs[0] / ConfigEOS.CONFIG_FILE_NAME
    config_file_cwd = config_default_dirs[1] / ConfigEOS.CONFIG_FILE_NAME
    assert not config_file.exists()
    assert not config_file_cwd.exists()
):
    """Factory fixture for creating a fully initialized ``ConfigEOS`` instance.

    config_eos = get_config()
    config_eos.reset_settings()
    assert config_file == config_eos.general.config_file_path
    assert config_file.exists()
    assert not config_file_cwd.exists()
    Returns a callable that creates a ``ConfigEOS`` singleton with a controlled
    filesystem layout and environment variables. Allows tests to customize which
    pydantic-settings sources are enabled (init, env, dotenv, file, secrets).

    # Check user data directory pathes (config_default_dirs[-1] == data_default_dir_user)
    assert config_default_dirs[-1] / "data" == config_eos.general.data_folder_path
    assert config_default_dirs[-1] / "data/cache" == config_eos.cache.path()
    assert config_default_dirs[-1] / "data/output" == config_eos.general.data_output_path
    assert config_default_dirs[-1] / "data/output/eos.log" == config_eos.logging.file_path
    return config_eos
    The factory ensures:
    - Required directories exist
    - No pre-existing config files are present
    - Settings are reloaded to respect test-specific configuration
    - Dependent singletons are initialized

    The singleton instance is reset during fixture teardown.
    """
    def _create(init: dict[str, bool] | None = None) -> ConfigEOS:
        init = init or {
            "with_init_settings": True,
            "with_env_settings": True,
            "with_dotenv_settings": False,
            "with_file_settings": False,
            "with_file_secret_settings": False,
        }

        # reset singleton before touching env or config
        ConfigEOS.reset_instance()
        ConfigEOS._init_config_eos = {
            "with_init_settings": True,
            "with_env_settings": True,
            "with_dotenv_settings": True,
            "with_file_settings": True,
            "with_file_secret_settings": True,
        }
        ConfigEOS._config_file_path = None
        ConfigEOS._force_documentation_mode = False

        data_folder_path = config_default_dirs[-1] / "data"
        data_folder_path.mkdir(exist_ok=True)

        config_dir = config_default_dirs[0]
        config_dir.mkdir(exist_ok=True)

        cwd = config_default_dirs[1]
        cwd.mkdir(exist_ok=True)

        monkeypatch.setenv("EOS_CONFIG_DIR", str(config_dir))
        monkeypatch.setenv("EOS_GENERAL__DATA_FOLDER_PATH", str(data_folder_path))
        monkeypatch.setenv("EOS_GENERAL__DATA_CACHE_SUBPATH", "cache")
        monkeypatch.setenv("EOS_GENERAL__DATA_OUTPUT_SUBPATH", "output")

        # Ensure no config files exist
        config_file = config_dir / ConfigEOS.CONFIG_FILE_NAME
        config_file_cwd = cwd / ConfigEOS.CONFIG_FILE_NAME
        assert not config_file.exists()
        assert not config_file_cwd.exists()

        config_eos = get_config(init=init)
        # Ensure newly created configurations are respected
        # Note: Workaround for pydantic_settings and pytest
        config_eos.reset_settings()

        # Check user data directory pathes (config_default_dirs[-1] == data_default_dir_user)
        assert config_eos.general.data_folder_path == data_folder_path
        assert config_eos.general.data_output_subpath == Path("output")
        assert config_eos.cache.subpath == "cache"
        assert config_eos.cache.path() == config_default_dirs[-1] / "data/cache"
        assert config_eos.logging.file_path == config_default_dirs[-1] / "data/output/eos.log"

        # Check config file path
        assert str(config_eos.general.config_file_path) == str(config_file)
        assert config_file.exists()
        assert not config_file_cwd.exists()

        # Initialize all other singletons (if not already initialized)
        singletons_init()

        return config_eos

    yield _create

    # teardown - final safety net
    ConfigEOS.reset_instance()

@pytest.fixture
def config_default_dirs(tmpdir):
    """Fixture that provides a list of directories to be used as config dir."""
    tmp_user_home_dir = Path(tmpdir)

    # Default config directory from platform user config directory
    config_default_dir_user = tmp_user_home_dir / "config"

    # Default config directory from current working directory
    config_default_dir_cwd = tmp_user_home_dir / "cwd"
    config_default_dir_cwd.mkdir()

    # Default config directory from default config file
    config_default_dir_default = Path(__file__).parent.parent.joinpath("src/akkudoktoreos/data")

    # Default data directory from platform user data directory
    data_default_dir_user = tmp_user_home_dir

    return (
        config_default_dir_user,
        config_default_dir_cwd,
        config_default_dir_default,
        data_default_dir_user,
    )
def config_eos(config_eos_factory) -> ConfigEOS:
    """Fixture to reset EOS config to default values."""
    config_eos = config_eos_factory()
    return config_eos


# ------------------------------------
@@ -405,7 +473,11 @@ def server_base(
    Yields:
        dict[str, str]: A dictionary containing:
            - "server" (str): URL of the server.
            - "port": port
            - "eosdash_server": eosdash_server
            - "eosdash_port": eosdash_port
            - "eos_dir" (str): Path to the temporary EOS_DIR.
            - "timeout": server_timeout
    """
    host = get_default_host()
    port = 8503
@@ -427,12 +499,14 @@ def server_base(

    eos_tmp_dir = tempfile.TemporaryDirectory()
    eos_dir = str(eos_tmp_dir.name)
    eos_general_data_folder_path = str(Path(eos_dir) / "data")

    class Starter(ProcessStarter):
        # Set environment for server run
        env = os.environ.copy()
        env["EOS_DIR"] = eos_dir
        env["EOS_CONFIG_DIR"] = eos_dir
        env["EOS_GENERAL__DATA_FOLDER_PATH"] = eos_general_data_folder_path
        if extra_env:
            env.update(extra_env)


@@ -12,13 +12,11 @@ from typing import Any
import numpy as np
from loguru import logger

from akkudoktoreos.config.config import get_config
from akkudoktoreos.core.ems import get_ems
from akkudoktoreos.core.coreabc import get_config, get_ems, get_prediction
from akkudoktoreos.core.emsettings import EnergyManagementMode
from akkudoktoreos.optimization.genetic.geneticparams import (
    GeneticOptimizationParameters,
)
from akkudoktoreos.prediction.prediction import get_prediction
from akkudoktoreos.utils.datetimeutil import to_datetime

config_eos = get_config()

@@ -6,8 +6,7 @@ import pstats
import sys
import time

from akkudoktoreos.config.config import get_config
from akkudoktoreos.prediction.prediction import get_prediction
from akkudoktoreos.core.coreabc import get_config, get_prediction

config_eos = get_config()
prediction_eos = get_prediction()

@@ -12,11 +12,11 @@ import pytest
from akkudoktoreos.adapter.adapter import (
    Adapter,
    AdapterCommonSettings,
    get_adapter,
)
from akkudoktoreos.adapter.adapterabc import AdapterContainer
from akkudoktoreos.adapter.homeassistant import HomeAssistantAdapter
from akkudoktoreos.adapter.nodered import NodeREDAdapter
from akkudoktoreos.core.coreabc import get_adapter

# ---------- Typed aliases for fixtures ----------
AdapterFixture: TypeAlias = Adapter

@@ -167,12 +167,21 @@ def temp_store_file():


@pytest.fixture
def cache_file_store(temp_store_file):
def cache_file_store(temp_store_file, monkeypatch):
    """A pytest fixture that creates a new CacheFileStore instance for testing."""

    cache = CacheFileStore()
    cache._store_file = temp_store_file

    # Patch the _cache_file method to return the temp file
    monkeypatch.setattr(
        cache,
        "_store_file",
        lambda: temp_store_file,
    )

    cache.clear(clear_all=True)
    assert len(cache._store) == 0

    return cache


@@ -481,7 +490,7 @@ class TestCacheFileStore:
        cache_file_store.save_store()

        # Verify the file content
        with cache_file_store._store_file.open("r", encoding="utf-8", newline=None) as f:
        with cache_file_store._store_file().open("r", encoding="utf-8", newline=None) as f:
            store_loaded = json.load(f)
            assert "test_key" in store_loaded
            assert store_loaded["test_key"]["cache_file"] == "cache_file_path"
@@ -501,7 +510,7 @@ class TestCacheFileStore:
                "ttl_duration": None,
            }
        }
        with cache_file_store._store_file.open("w", encoding="utf-8", newline="\n") as f:
        with cache_file_store._store_file().open("w", encoding="utf-8", newline="\n") as f:
            json.dump(cache_record, f, indent=4)

        # Mock the open function to return a MagicMock for the cache file

@@ -109,17 +109,18 @@ def test_config_ipaddress(monkeypatch, config_eos):
    assert config_eos.server.host == "localhost"


def test_singleton_behavior(config_eos, config_default_dirs):
def test_singleton_behavior(config_eos, config_default_dirs, monkeypatch):
    """Test that ConfigEOS behaves as a singleton."""
    initial_cfg_file = config_eos.general.config_file_path
    with patch(
        "akkudoktoreos.config.config.user_config_dir", return_value=str(config_default_dirs[0])
    ):
        instance1 = ConfigEOS()
        instance2 = ConfigEOS()
        assert instance1 is config_eos
    config_eos.reset_instance()

    monkeypatch.setenv("EOS_CONFIG_DIR", str(config_default_dirs[0]))

    instance1 = ConfigEOS()
    instance2 = ConfigEOS()

    assert instance1 is not config_eos
    assert instance1 is instance2
    assert instance1.general.config_file_path == initial_cfg_file
    assert instance1._config_file_path == instance2._config_file_path


def test_config_file_priority(config_default_dirs):
@@ -169,17 +170,22 @@ def test_get_config_file_path(user_config_dir_patch, config_eos, config_default_
    with tempfile.TemporaryDirectory() as temp_dir:
        temp_dir_path = Path(temp_dir)
        monkeypatch.setenv("EOS_DIR", str(temp_dir_path))
        monkeypatch.delenv("EOS_CONFIG_DIR", raising=False)
        assert config_eos._get_config_file_path() == (cfg_file(temp_dir_path), False)

        monkeypatch.setenv("EOS_CONFIG_DIR", "config")
        config_dir = temp_dir_path / "config"
        config_dir.mkdir(exist_ok=True)
        assert config_eos._get_config_file_path() == (
            cfg_file(temp_dir_path / "config"),
            cfg_file(config_dir),
            False,
        )

        monkeypatch.setenv("EOS_CONFIG_DIR", str(temp_dir_path / "config2"))
        config_dir = temp_dir_path / "config2"
        config_dir.mkdir(exist_ok=True)
        assert config_eos._get_config_file_path() == (
            cfg_file(temp_dir_path / "config2"),
            cfg_file(config_dir),
            False,
        )

@@ -188,8 +194,10 @@ def test_get_config_file_path(user_config_dir_patch, config_eos, config_default_
        assert config_eos._get_config_file_path() == (cfg_file(config_default_dir_user), False)

        monkeypatch.setenv("EOS_CONFIG_DIR", str(temp_dir_path / "config3"))
        config_dir = temp_dir_path / "config3"
        config_dir.mkdir(exist_ok=True)
        assert config_eos._get_config_file_path() == (
            cfg_file(temp_dir_path / "config3"),
            cfg_file(config_dir),
            False,
        )

@@ -199,7 +207,7 @@ def test_config_copy(config_eos, monkeypatch):
    with tempfile.TemporaryDirectory() as temp_dir:
        temp_folder_path = Path(temp_dir)
        temp_config_file_path = temp_folder_path.joinpath(config_eos.CONFIG_FILE_NAME).resolve()
        monkeypatch.setenv(config_eos.EOS_DIR, str(temp_folder_path))
        monkeypatch.setenv("EOS_CONFIG_DIR", str(temp_folder_path))
        assert not temp_config_file_path.exists()
        with patch("akkudoktoreos.config.config.user_config_dir", return_value=temp_dir):
            assert config_eos._get_config_file_path() == (temp_config_file_path, False)

@@ -26,6 +26,9 @@ MIGRATION_PAIRS = [
    # (DIR_TESTDATA / "old_config_X.json", DIR_TESTDATA / "expected_config_X.json"),
]

# Any sentinel in expected data
_ANY_SENTINEL = "__ANY__"


def _dict_contains(superset: Any, subset: Any, path="") -> list[str]:
    """Recursively verify that all key-value pairs from a subset dictionary or list exist in a superset.
@@ -60,6 +63,9 @@ def _dict_contains(superset: Any, subset: Any, path="") -> list[str]:
        errors.extend(_dict_contains(superset[i], elem, f"{path}[{i}]" if path else f"[{i}]"))

    else:
        # "__ANY__" in expected means "accept whatever value the migration produces"
        if subset == _ANY_SENTINEL:
            return errors
        # Compare values (with numeric tolerance)
        if isinstance(subset, (int, float)) and isinstance(superset, (int, float)):
            if abs(float(subset) - float(superset)) > 1e-6:
@@ -162,6 +168,7 @@ class TestConfigMigration:
        assert backup_file.exists(), f"Backup file not created for {old_file.name}"

        # --- Compare migrated result with expected output ---
        old_data = json.loads(old_file.read_text(encoding="utf-8"))
        new_data = json.loads(working_file.read_text(encoding="utf-8"))
        expected_data = json.loads(expected_file.read_text(encoding="utf-8"))

@@ -202,6 +209,14 @@ class TestConfigMigration:
        # Verify the migrated value matches the expected one
        new_value = configmigrate._get_json_nested_value(new_data, new_path)
        if new_value != expected_value:
            # Check if this mapping uses _KEEP_DEFAULT and the old value was None/missing
            old_value = configmigrate._get_json_nested_value(old_data, old_path)
            keep_default = (
                isinstance(mapping, tuple)
                and configmigrate._KEEP_DEFAULT in mapping
            )
            if keep_default and old_value is None:
                continue  # acceptable: old was None, new model keeps its default
            mismatched_values.append(
                f"{old_path} → {new_path}: expected {expected_value!r}, got {new_value!r}"
            )

File diff suppressed because it is too large (Load Diff)
tests/test_dataabccompact.py: 1114 lines, new file (diff suppressed because it is too large)
tests/test_database.py: 1148 lines, new file (diff suppressed because it is too large)
tests/test_databaseabc.py: 888 lines, new file
@@ -0,0 +1,888 @@
from typing import Any, Iterator, Literal, Optional, Type, cast

import pytest
from numpydantic import NDArray, Shape
from pydantic import BaseModel, Field

from akkudoktoreos.core.databaseabc import (
    DATABASE_METADATA_KEY,
    DatabaseRecordProtocolMixin,
    DatabaseTimestamp,
    _DatabaseTimestampUnbound,
)
from akkudoktoreos.utils.datetimeutil import (
    DateTime,
    Duration,
    to_datetime,
    to_duration,
)

# ---------------------------------------------------------------------------
# Test record
# ---------------------------------------------------------------------------


class SampleRecord(BaseModel):
    date_time: Optional[DateTime] = Field(
        default=None, json_schema_extra={"description": "DateTime"}
    )
    value: Optional[float] = None

    def __getitem__(self, key: str) -> Any:
        if key == "date_time":
            return self.date_time
        if key == "value":
            return self.value
        assert key is None
        return None


# ---------------------------------------------------------------------------
# Fake database backend
# ---------------------------------------------------------------------------


class SampleDatabase:
    def __init__(self):
        self._data: dict[Optional[str], dict[bytes, bytes]] = {}
        self._metadata: Optional[bytes] = None
        self.is_open = True
        self.compression = False
        self.compression_level = 0
        self.storage_path = "/fake"

    # serialization (pass-through)

    def serialize_data(self, data: bytes) -> bytes:
        return data

    def deserialize_data(self, data: bytes) -> bytes:
        return data

    # metadata

    def set_metadata(self, metadata: Optional[bytes], *, namespace: Optional[str] = None) -> None:
        self._metadata = metadata

    def get_metadata(self, namespace: Optional[str] = None) -> Optional[bytes]:
        return self._metadata

    # write

    def save_records(
        self, records: list[tuple[bytes, bytes]], namespace: Optional[str] = None
    ) -> int:
        ns = self._data.setdefault(namespace, {})
        saved = 0
        for key, value in records:
            ns[key] = value
            saved += 1
        return saved

    def delete_records(
        self, keys: Iterator[bytes], namespace: Optional[str] = None
    ) -> int:
        ns_data = self._data.get(namespace, {})
        deleted = 0
        for key in keys:
            if key in ns_data:
                del ns_data[key]
                deleted += 1
        return deleted

    # read

    def iterate_records(
        self,
        start_key: Optional[bytes] = None,
        end_key: Optional[bytes] = None,
        namespace: Optional[str] = None,
        reverse: bool = False,
    ) -> Iterator[tuple[bytes, bytes]]:
        items = self._data.get(namespace, {})
        keys = sorted(items, reverse=reverse)
        for k in keys:
            if k == DATABASE_METADATA_KEY:
                continue
            if start_key and k < start_key:
                continue
            if end_key and k >= end_key:
                continue
            yield k, items[k]

# stats
|
||||
|
||||
def count_records(
|
||||
self,
|
||||
start_key: Optional[bytes] = None,
|
||||
end_key: Optional[bytes] = None,
|
||||
*,
|
||||
namespace: Optional[str] = None,
|
||||
) -> int:
|
||||
items = self._data.get(namespace, {})
|
||||
count = 0
|
||||
for k in items:
|
||||
if k == DATABASE_METADATA_KEY:
|
||||
continue
|
||||
if start_key and k < start_key:
|
||||
continue
|
||||
if end_key and k >= end_key:
|
||||
continue
|
||||
count += 1
|
||||
return count
|
||||
|
||||
def get_key_range(
|
||||
self, namespace: Optional[str] = None
|
||||
) -> tuple[Optional[bytes], Optional[bytes]]:
|
||||
items = self._data.get(namespace, {})
|
||||
keys = sorted(k for k in items if k != DATABASE_METADATA_KEY)
|
||||
if not keys:
|
||||
return None, None
|
||||
return keys[0], keys[-1]
|
||||
|
||||
def get_backend_stats(self, namespace: Optional[str] = None) -> dict:
|
||||
return {}
|
||||
|
||||
def flush(self, namespace: Optional[str] = None) -> None:
|
||||
pass
|
||||
|
||||
|
||||


# ---------------------------------------------------------------------------
# Concrete test sequence — minimal, no Pydantic / singleton overhead
# ---------------------------------------------------------------------------


class SampleSequence(DatabaseRecordProtocolMixin[SampleRecord]):
    """Minimal concrete implementation for unit-testing the mixin."""

    def __init__(self):
        self.records: list[SampleRecord] = []
        self._db_record_index: dict[DatabaseTimestamp, SampleRecord] = {}
        self._db_sorted_timestamps: list[DatabaseTimestamp] = []
        self._db_dirty_timestamps: set[DatabaseTimestamp] = set()
        self._db_new_timestamps: set[DatabaseTimestamp] = set()
        self._db_deleted_timestamps: set[DatabaseTimestamp] = set()
        self._db_initialized: bool = True
        self._db_storage_initialized: bool = False
        self._db_metadata: Optional[dict] = None
        self._db_loaded_range = None
        from akkudoktoreos.core.databaseabc import DatabaseRecordProtocolLoadPhase

        self._db_load_phase = DatabaseRecordProtocolLoadPhase.NONE
        self._db_version: int = 1

        self.database = SampleDatabase()
        self.config = type(
            "Cfg",
            (),
            {
                "database": type(
                    "DBCfg",
                    (),
                    {
                        "auto_save": False,
                        "compression_level": 0,
                        "autosave_interval_sec": 10,
                        "initial_load_window_h": None,
                        "keep_duration_h": None,
                    },
                )()
            },
        )()

    @classmethod
    def record_class(cls) -> Type[SampleRecord]:
        return SampleRecord

    def db_namespace(self) -> str:
        return "test"

    @property
    def record_keys_writable(self) -> list[str]:
        """Return writable field names of SampleRecord.

        Required by _db_compact_tier which iterates record_keys_writable
        to decide which fields to resample. Must match exactly what
        key_to_array accepts — only 'value' here, not 'date_time'.
        """
        return ["value"]

    # Override key_to_array for the mixin tests — the full DataSequence
    # implementation lives in dataabc.py; here we provide a minimal version
    # that resamples the single `value` field to demonstrate compaction.
    def key_to_array(
        self,
        key: str,
        start_datetime: Optional[DateTime] = None,
        end_datetime: Optional[DateTime] = None,
        interval: Optional[Duration] = None,
        fill_method: Optional[str] = None,
        dropna: Optional[bool] = True,
        boundary: Literal["strict", "context"] = "context",
        align_to_interval: bool = False,
    ) -> NDArray[Shape["*"], Any]:
        import numpy as np
        import pandas as pd

        if interval is None:
            interval = to_duration("1 hour")

        dates = []
        values = []
        for record in self.records:
            if record.date_time is None:
                continue
            ts = DatabaseTimestamp.from_datetime(record.date_time)
            if start_datetime and DatabaseTimestamp.from_datetime(start_datetime) > ts:
                continue
            if end_datetime and DatabaseTimestamp.from_datetime(end_datetime) <= ts:
                continue
            dates.append(record.date_time)
            values.append(getattr(record, key, None))

        if not dates:
            return np.array([])

        index = pd.to_datetime(dates, utc=True)
        series = pd.Series(values, index=index, dtype=float)
        freq = f"{int(interval.total_seconds())}s"
        origin = start_datetime if start_datetime else "start_day"
        resampled = series.resample(freq, origin=origin).mean().interpolate("time")

        if start_datetime is not None:
            resampled = resampled.truncate(before=start_datetime)
        if end_datetime is not None:
            resampled = resampled.truncate(after=end_datetime)

        return resampled.values
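The stub `key_to_array` above leans on pandas mean-resampling to downsample dense records. A self-contained sketch of just that mechanism, with an illustrative function name and synthetic data that are not part of the test suite:

```python
# Illustrative sketch of the mean-resampling used by the key_to_array stub:
# 60 one-minute samples collapse into four 15-minute buckets, each bucket
# holding the mean of its 15 samples.
import numpy as np
import pandas as pd


def demo_mean_resample_to_15min() -> pd.Series:
    index = pd.date_range("2024-01-01", periods=60, freq="1min", tz="UTC")
    series = pd.Series(np.arange(60, dtype=float), index=index)
    return series.resample("15min").mean()
```

The first bucket averages values 0..14, so it yields 7.0; constant input would pass through unchanged, which is what TestCompactDataIntegrity later relies on.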


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _insert_records_every_n_minutes(
    seq: SampleSequence,
    base: DateTime,
    count: int,
    interval_minutes: int,
    value_fn=None,
) -> None:
    """Insert `count` records spaced `interval_minutes` apart starting at `base`."""
    for i in range(count):
        dt = base.add(minutes=i * interval_minutes)
        value = value_fn(i) if value_fn else float(i)
        seq.db_insert_record(SampleRecord(date_time=dt, value=value))
    seq.db_save_records()


# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------


@pytest.fixture
def seq():
    return SampleSequence()


@pytest.fixture
def seq_with_15min_data():
    """Sequence with 15-min records spanning 4 weeks, so both tiers have data."""
    s = SampleSequence()
    now = to_datetime().in_timezone("UTC")
    # 4 weeks × 7 days × 24 h × 4 records/h = 2688 records
    base = now.subtract(weeks=4)
    _insert_records_every_n_minutes(s, base, count=2688, interval_minutes=15)
    return s, now


@pytest.fixture
def seq_sparse():
    """Sequence with only 3 records spread over 4 weeks — sparse, no compaction benefit."""
    s = SampleSequence()
    now = to_datetime().in_timezone("UTC")
    base = now.subtract(weeks=4)
    for offset_days in [0, 14, 27]:
        dt = base.add(days=offset_days)
        s.db_insert_record(SampleRecord(date_time=dt, value=float(offset_days)))
    s.db_save_records()
    return s, now


# ---------------------------------------------------------------------------
# Existing tests (unchanged)
# ---------------------------------------------------------------------------


class TestDatabaseRecordProtocolMixin:

    @pytest.mark.parametrize(
        "start_str, value_count, interval_seconds",
        [
            ("2024-11-10 00:00:00", 24, 3600),
            ("2024-08-10 00:00:00", 24, 3600),
            ("2024-03-31 00:00:00", 24, 3600),
            ("2024-10-27 00:00:00", 24, 3600),
        ],
    )
    def test_db_generate_timestamps_utc_spacing(
        self, seq, start_str, value_count, interval_seconds
    ):
        start_dt = to_datetime(start_str, in_timezone="Europe/Berlin")
        assert start_dt.tz.name == "Europe/Berlin"

        db_start = DatabaseTimestamp.from_datetime(start_dt)
        generated = list(seq.db_generate_timestamps(db_start, value_count))

        assert len(generated) == value_count

        for db_dt in generated:
            dt = DatabaseTimestamp.to_datetime(db_dt)
            assert dt.tz.name == "UTC"

        assert len(generated) == len(set(generated)), "Duplicate UTC datetimes found"

        for i in range(1, len(generated)):
            last_dt = DatabaseTimestamp.to_datetime(generated[i - 1])
            current_dt = DatabaseTimestamp.to_datetime(generated[i])
            delta = (current_dt - last_dt).total_seconds()
            assert delta == interval_seconds, f"Spacing mismatch at index {i}: {delta}s"

    def test_insert_and_memory_range(self, seq):
        t0 = to_datetime()
        t1 = t0.add(hours=1)

        seq.db_insert_record(SampleRecord(date_time=t0, value=1))
        seq.db_insert_record(SampleRecord(date_time=t1, value=2))

        assert seq.records[0].date_time == t0
        assert seq.records[-1].date_time == t1
        assert len(seq.records) == 2

    def test_roundtrip_reload(self):
        seq = SampleSequence()
        t0 = to_datetime()
        t1 = t0.add(hours=1)

        seq.db_insert_record(SampleRecord(date_time=t0, value=1))
        seq.db_insert_record(SampleRecord(date_time=t1, value=2))
        assert seq.db_save_records() == 2

        db = seq.database
        seq2 = SampleSequence()
        seq2.database = db
        loaded = seq2.db_load_records()

        assert loaded == 2
        assert len(seq2.records) == 2

    def test_db_count_records(self, seq):
        t0 = to_datetime()
        seq.db_insert_record(SampleRecord(date_time=t0, value=1))
        assert seq.db_count_records() == 1
        seq.db_save_records()
        assert seq.db_count_records() == 1

    def test_delete_range(self, seq):
        base = to_datetime()
        for i in range(5):
            seq.db_insert_record(SampleRecord(date_time=base.add(minutes=i), value=i))

        db_start = DatabaseTimestamp.from_datetime(base.add(minutes=1))
        db_end = DatabaseTimestamp.from_datetime(base.add(minutes=4))
        deleted = seq.db_delete_records(start_timestamp=db_start, end_timestamp=db_end)

        assert deleted == 3
        assert [r.value for r in seq.records] == [0, 4]

    def test_db_count_records_memory_only_multiple(self):
        seq = SampleSequence()
        base = to_datetime()
        for i in range(3):
            seq.db_insert_record(SampleRecord(date_time=base.add(minutes=i), value=i))
        assert seq.db_count_records() == 3

    def test_db_count_records_memory_newer_than_db(self):
        seq = SampleSequence()
        base = to_datetime()
        seq.db_insert_record(SampleRecord(date_time=base, value=1))
        seq.db_save_records()
        seq.db_insert_record(SampleRecord(date_time=base.add(hours=1), value=2))
        seq.db_insert_record(SampleRecord(date_time=base.add(hours=2), value=3))
        assert seq.db_count_records() == 3

    def test_db_count_records_memory_older_than_db(self):
        seq = SampleSequence()
        base = to_datetime()
        seq.db_insert_record(SampleRecord(date_time=base.add(hours=1), value=2))
        seq.db_save_records()
        seq.db_insert_record(SampleRecord(date_time=base, value=1))
        assert seq.db_count_records() == 2

    def test_db_count_records_empty_everywhere(self):
        seq = SampleSequence()
        assert seq.db_count_records() == 0

    def test_metadata_not_counted(self, seq):
        seq.database._data.setdefault("test", {})[DATABASE_METADATA_KEY] = b"meta"
        assert seq.db_count_records() == 0

    def test_key_range_excludes_metadata(self, seq):
        ns = seq.db_namespace()
        seq.database._data.setdefault(ns, {})[DATABASE_METADATA_KEY] = b"meta"
        assert seq.database.get_key_range(ns) == (None, None)


# ---------------------------------------------------------------------------
# Compaction tests
# ---------------------------------------------------------------------------


class TestCompactTiers:
    """Tests for db_compact_tiers() and the tier hook."""

    def test_default_tiers_returns_two_entries(self, seq):
        tiers = seq.db_compact_tiers()
        assert len(tiers) == 2

    def test_default_tiers_ordered_shortest_first(self, seq):
        tiers = seq.db_compact_tiers()
        ages = [t[0].total_seconds() for t in tiers]
        assert ages == sorted(ages), "Tiers must be ordered shortest age first"

    def test_default_tiers_first_is_2h_to_15min(self, seq):
        tiers = seq.db_compact_tiers()
        age_sec, interval_sec = (
            tiers[0][0].total_seconds(),
            tiers[0][1].total_seconds(),
        )
        assert age_sec == 2 * 3600
        assert interval_sec == 15 * 60

    def test_default_tiers_second_is_2weeks_to_1h(self, seq):
        tiers = seq.db_compact_tiers()
        age_sec, interval_sec = (
            tiers[1][0].total_seconds(),
            tiers[1][1].total_seconds(),
        )
        assert age_sec == 14 * 24 * 3600
        assert interval_sec == 3600

    def test_override_tiers(self):
        class CustomSeq(SampleSequence):
            def db_compact_tiers(self):
                return [(to_duration("7 days"), to_duration("1 hour"))]

        s = CustomSeq()
        tiers = s.db_compact_tiers()
        assert len(tiers) == 1
        assert tiers[0][1].total_seconds() == 3600

    def test_empty_tiers_disables_compaction(self):
        class NoCompactSeq(SampleSequence):
            def db_compact_tiers(self):
                return []

        s = NoCompactSeq()
        now = to_datetime().in_timezone("UTC")
        base = now.subtract(weeks=4)
        _insert_records_every_n_minutes(s, base, count=100, interval_minutes=15)

        deleted = s.db_compact()
        assert deleted == 0
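As a back-of-envelope aid (idealized: perfect bucket alignment, no interpolation or boundary effects, so real counts may differ slightly), the default tiers verified above imply a record budget for the `seq_with_15min_data` fixture's 4 weeks of 15-minute data:

```python
# Idealized record count after applying the default tiers to 4 weeks of
# 15-minute data: older than 2 weeks -> 1 record/h, between 2 hours and
# 2 weeks old -> 4 records/h (15-min), newest 2 hours -> untouched raw data.
# Boundary alignment is deliberately ignored; this is an upper-bound sketch.
def idealized_count_after_default_tiers() -> int:
    hours_per_week = 7 * 24

    oldest = 2 * hours_per_week * 1        # weeks 3-4: hourly buckets
    middle = (2 * hours_per_week - 2) * 4  # 2 h .. 2 weeks old: 15-min buckets
    newest = 2 * 4                         # newest 2 h: raw 15-min records
    return oldest + middle + newest
```

This lands well below the fixture's 2688 raw records, which is the qualitative reduction the TestDbCompact cases assert.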


class TestCompactState:
    """Tests for _db_get_compact_state / _db_set_compact_state."""

    def test_get_state_returns_none_when_no_metadata(self, seq):
        interval = to_duration("1 hour")
        assert seq._db_get_compact_state(interval) is None

    def test_set_and_get_state_roundtrip(self, seq):
        interval = to_duration("1 hour")
        now = to_datetime().in_timezone("UTC")
        ts = DatabaseTimestamp.from_datetime(now)

        seq._db_set_compact_state(interval, ts)
        retrieved = seq._db_get_compact_state(interval)

        assert retrieved == ts

    def test_state_is_per_tier(self, seq):
        """Different tier intervals must not overwrite each other."""
        interval_15min = to_duration("15 minutes")
        interval_1h = to_duration("1 hour")

        now = to_datetime().in_timezone("UTC")
        ts_15 = DatabaseTimestamp.from_datetime(now)
        ts_1h = DatabaseTimestamp.from_datetime(now.subtract(days=1))

        seq._db_set_compact_state(interval_15min, ts_15)
        seq._db_set_compact_state(interval_1h, ts_1h)

        assert seq._db_get_compact_state(interval_15min) == ts_15
        assert seq._db_get_compact_state(interval_1h) == ts_1h

    def test_state_persists_in_metadata(self, seq):
        """State must survive a metadata reload."""
        interval = to_duration("1 hour")
        now = to_datetime().in_timezone("UTC")
        ts = DatabaseTimestamp.from_datetime(now)

        seq._db_set_compact_state(interval, ts)

        # Reload metadata from fake DB
        seq2 = SampleSequence()
        seq2.database = seq.database
        seq2._db_metadata = seq2._db_load_metadata()

        assert seq2._db_get_compact_state(interval) == ts


class TestCompactSparseGuard:
    """The inflation guard must skip compaction when records are already sparse."""

    def test_sparse_data_aligns_but_does_not_reduce_cardinality(self, seq_sparse):
        """Sparse data must be aligned to the target interval for all records that were modified."""
        seq, _ = seq_sparse

        interval = to_duration("15 minutes")
        interval_sec = int(interval.total_seconds())

        # Snapshot original timestamps
        before_epochs = {int(r.date_time.timestamp()) for r in seq.records}

        seq._db_compact_tier(
            to_duration("30 minutes"),
            interval,
        )

        after_epochs = {int(r.date_time.timestamp()) for r in seq.records}

        # Cardinality must not increase
        assert len(after_epochs) <= len(before_epochs)

        # Any timestamp that changed must now be aligned
        changed_epochs = after_epochs - before_epochs
        for epoch in changed_epochs:
            assert epoch % interval_sec == 0

    def test_sparse_guard_advances_cutoff(self, seq_sparse):
        """Even when skipped, the cutoff should be stored so next run skips the same window."""
        seq, _ = seq_sparse
        interval_1h = to_duration("1 hour")
        interval_15min = to_duration("15 minutes")

        seq.db_compact()

        # Both tiers should have stored a cutoff even though nothing was deleted
        assert seq._db_get_compact_state(interval_1h) is not None
        assert seq._db_get_compact_state(interval_15min) is not None

    def test_exactly_at_boundary_remains_stable(self, seq):
        now = to_datetime().in_timezone("UTC")
        interval = to_duration("1 hour")

        raw_base = now.subtract(hours=5).set(minute=0, second=0, microsecond=0)
        base = raw_base.subtract(seconds=int(raw_base.timestamp()) % 3600)

        for i in range(4):
            seq.db_insert_record(
                SampleRecord(
                    date_time=base.add(hours=i),
                    value=float(i),
                )
            )

        seq.db_insert_record(
            SampleRecord(date_time=now.subtract(seconds=1), value=0.0)
        )
        seq.db_save_records()

        before = [(int(r.date_time.timestamp()), r.value) for r in seq.records]

        seq._db_compact_tier(
            to_duration("30 minutes"),
            interval,
        )

        after = [(int(r.date_time.timestamp()), r.value) for r in seq.records]

        assert before == after


class TestCompactTierWorker:
    """Unit tests for _db_compact_tier directly."""

    def test_empty_sequence_returns_zero(self, seq):
        age = to_duration("2 hours")
        interval = to_duration("15 minutes")
        assert seq._db_compact_tier(age, interval) == 0

    def test_all_records_too_recent_skipped(self):
        """Records within the age threshold must not be touched."""
        seq = SampleSequence()
        now = to_datetime().in_timezone("UTC")
        # Insert 10 records from 30 minutes ago — all within 2h threshold
        base = now.subtract(minutes=30)
        _insert_records_every_n_minutes(seq, base, count=10, interval_minutes=1)

        before = seq.db_count_records()
        deleted = seq._db_compact_tier(to_duration("2 hours"), to_duration("15 minutes"))

        assert deleted == 0
        assert seq.db_count_records() == before

    def test_compaction_reduces_record_count(self):
        """Dense 1-min records older than 2h should be downsampled to 15-min."""
        seq = SampleSequence()
        now = to_datetime().in_timezone("UTC")
        # Insert 1-min records for 6 hours ending 3 hours ago
        base = now.subtract(hours=9)
        _insert_records_every_n_minutes(seq, base, count=6 * 60, interval_minutes=1)

        before = seq.db_count_records()
        deleted = seq._db_compact_tier(to_duration("2 hours"), to_duration("15 minutes"))

        after = seq.db_count_records()
        assert deleted > 0
        assert after < before

    def test_records_within_threshold_preserved(self):
        """Records newer than age_threshold must remain untouched after compaction."""
        seq = SampleSequence()
        now = to_datetime().in_timezone("UTC")

        # Old dense records (will be compacted)
        old_base = now.subtract(hours=6)
        _insert_records_every_n_minutes(seq, old_base, count=4 * 60, interval_minutes=1)

        # Recent records (must not be touched) — insert 5 records in the last hour
        recent_base = now.subtract(minutes=50)
        _insert_records_every_n_minutes(seq, recent_base, count=5, interval_minutes=10)

        recent_before = [
            r for r in seq.records if r.date_time and r.date_time >= recent_base
        ]

        seq._db_compact_tier(to_duration("2 hours"), to_duration("15 minutes"))

        recent_after = [
            r for r in seq.records if r.date_time and r.date_time >= recent_base
        ]
        assert len(recent_after) == len(recent_before)

    def test_incremental_cutoff_prevents_recompaction(self):
        """Running compaction twice must not re-compact already-compacted data."""
        seq = SampleSequence()
        now = to_datetime().in_timezone("UTC")
        base = now.subtract(hours=8)
        _insert_records_every_n_minutes(seq, base, count=5 * 60, interval_minutes=1)

        age = to_duration("2 hours")
        interval = to_duration("15 minutes")

        deleted_first = seq._db_compact_tier(age, interval)
        count_after_first = seq.db_count_records()

        deleted_second = seq._db_compact_tier(age, interval)
        count_after_second = seq.db_count_records()

        assert deleted_first > 0
        assert deleted_second == 0, "Second run must be a no-op"
        assert count_after_first == count_after_second

    def test_cutoff_stored_after_compaction(self):
        """Cutoff timestamp must be persisted after a successful compaction run."""
        seq = SampleSequence()
        now = to_datetime().in_timezone("UTC")
        base = now.subtract(hours=8)
        _insert_records_every_n_minutes(seq, base, count=5 * 60, interval_minutes=1)

        interval = to_duration("15 minutes")
        seq._db_compact_tier(to_duration("2 hours"), interval)

        assert seq._db_get_compact_state(interval) is not None


class TestDbCompact:
    """Integration tests for the public db_compact() entry point."""

    def test_compact_dense_data_both_tiers(self, seq_with_15min_data):
        """4 weeks of 15-min data should be reduced by both tiers."""
        seq, _ = seq_with_15min_data
        before = seq.db_count_records()

        total_deleted = seq.db_compact()

        after = seq.db_count_records()
        assert total_deleted > 0
        assert after < before

    def test_compact_coarsest_tier_runs_first(self, seq_with_15min_data):
        """The 1-hour tier (coarsest) must run before the 15-min tier.

        If coarsest ran last it would re-compact records the 15-min tier
        had already downsampled — verified by checking that the 1-hour
        cutoff is not later than the 15-min cutoff.
        """
        seq, _ = seq_with_15min_data
        seq.db_compact()

        cutoff_1h = seq._db_get_compact_state(to_duration("1 hour"))
        cutoff_15min = seq._db_get_compact_state(to_duration("15 minutes"))

        assert cutoff_1h is not None
        assert cutoff_15min is not None
        # The 1h tier covers older data → its cutoff must be earlier than 15min tier
        assert cutoff_1h <= cutoff_15min

    def test_compact_idempotent(self, seq_with_15min_data):
        """Running db_compact twice must not change record count."""
        seq, _ = seq_with_15min_data
        seq.db_compact()
        after_first = seq.db_count_records()

        seq.db_compact()
        after_second = seq.db_count_records()

        assert after_first == after_second

    def test_compact_empty_sequence_returns_zero(self, seq):
        assert seq.db_compact() == 0

    def test_compact_with_override_tiers(self):
        """Passing compact_tiers directly must override db_compact_tiers()."""
        seq = SampleSequence()
        now = to_datetime().in_timezone("UTC")
        base = now.subtract(weeks=3)
        _insert_records_every_n_minutes(seq, base, count=3 * 7 * 24 * 4, interval_minutes=15)

        before = seq.db_count_records()
        deleted = seq.db_compact(
            compact_tiers=[(to_duration("1 day"), to_duration("1 hour"))]
        )

        assert deleted > 0
        assert seq.db_count_records() < before

    def test_compact_only_processes_new_window_on_second_call(self):
        """Second call processes only the new window, not the full history."""
        seq = SampleSequence()
        now = to_datetime().in_timezone("UTC")
        base = now.subtract(weeks=3)
        # Dense 1-min data for 3 weeks
        _insert_records_every_n_minutes(seq, base, count=3 * 7 * 24 * 60, interval_minutes=1)

        seq.db_compact()
        count_after_first = seq.db_count_records()

        # Add one more day of dense data in the past (simulate new old data arriving)
        extra_base = now.subtract(weeks=3).subtract(days=1)
        _insert_records_every_n_minutes(seq, extra_base, count=24 * 60, interval_minutes=1)

        seq.db_compact()
        count_after_second = seq.db_count_records()

        # Second compact should have processed the newly added old data
        # Record count may change but should not exceed first compacted count by much
        assert count_after_second >= 0  # basic sanity


class TestCompactDataIntegrity:
    """Verify value integrity is preserved after compaction."""

    def test_constant_value_preserved(self):
        """Constant value field must survive mean-resampling unchanged."""
        seq = SampleSequence()
        now = to_datetime().in_timezone("UTC")
        base = now.subtract(hours=6)

        # All values = 42.0
        _insert_records_every_n_minutes(
            seq, base, count=6 * 60, interval_minutes=1, value_fn=lambda _: 42.0
        )

        seq._db_compact_tier(to_duration("2 hours"), to_duration("15 minutes"))

        for record in seq.records:
            if record.date_time and record.date_time < now.subtract(hours=2):
                assert record.value == pytest.approx(42.0, abs=1e-6)

    def test_recent_records_not_modified(self):
        """Records newer than the age threshold must have unchanged values."""
        seq = SampleSequence()
        now = to_datetime().in_timezone("UTC")

        old_base = now.subtract(hours=6)
        _insert_records_every_n_minutes(seq, old_base, count=3 * 60, interval_minutes=1)

        # Known recent values
        recent_base = now.subtract(minutes=30)
        expected = {i * 10: float(100 + i) for i in range(3)}
        for offset, val in expected.items():
            dt = recent_base.add(minutes=offset)
            seq.db_insert_record(SampleRecord(date_time=dt, value=val))
        seq.db_save_records()

        seq._db_compact_tier(to_duration("2 hours"), to_duration("15 minutes"))

        for record in seq.records:
            if record.date_time and record.date_time >= recent_base:
                offset = int((record.date_time - recent_base).total_seconds() / 60)
                if offset in expected:
                    assert record.value == pytest.approx(expected[offset], abs=1e-6)

    def test_compacted_timestamps_spacing(self):
        """Resampled records must be fewer than original and span the compaction window.

        Exact per-bucket spacing depends on the full DataSequence.key_to_array
        implementation (pandas resampling). The stub key_to_array in SampleSequence
        only guarantees a reduction in count — uniform spacing is verified in
        test_dataabc_compact.py against the real implementation.
        """
        seq = SampleSequence()
        now = to_datetime().in_timezone("UTC")
        base = now.subtract(hours=6)
        _insert_records_every_n_minutes(seq, base, count=5 * 60, interval_minutes=1)

        before = seq.db_count_records()
        seq._db_compact_tier(to_duration("2 hours"), to_duration("15 minutes"))

        cutoff = now.subtract(hours=2)
        compacted = sorted(
            [r for r in seq.records if r.date_time and r.date_time < cutoff],
            key=lambda r: cast(DateTime, r.date_time),
        )

        # Must have produced fewer records than the original 1-min data
        assert len(compacted) > 0, "Expected at least one compacted record"
        assert len(compacted) < before, "Compaction must reduce record count"

        # Window start is floored to interval boundary
        interval_sec = 15 * 60
        expected_window_start = DateTime.fromtimestamp(
            (int(base.timestamp()) // interval_sec) * interval_sec,
            tz="UTC",
        )
        assert compacted[0].date_time >= expected_window_start

        # Last compacted record must be before the cutoff
        assert compacted[-1].date_time < cutoff
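The window-start assertion above relies on integer flooring of epoch seconds to an interval boundary. In isolation (the helper name is illustrative, not from the codebase):

```python
# Floor an epoch timestamp (in seconds) to the previous interval boundary,
# mirroring the `(ts // interval) * interval` expression used in the test.
def floor_to_interval(epoch_s: int, interval_s: int = 15 * 60) -> int:
    return (epoch_s // interval_s) * interval_s
```

Timestamps already on a boundary map to themselves, which is why `test_exactly_at_boundary_remains_stable` expects aligned records to survive compaction unchanged.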

@@ -1460,6 +1460,17 @@ class TestTimeWindowSequence:
         # - without local timezone as UTC
+        (
+            "TC014",
+            "UTC",
+            "2024-01-03",
+            None,
+            "UTC",
+            None,
+            False,
+            pendulum.datetime(2024, 1, 3, 0, 0, 0, tz="UTC"),
+            False,
+        ),
         (
             "TC015",
             "Atlantic/Canary",
             "02/02/24",
             None,
@@ -1470,7 +1481,7 @@ class TestTimeWindowSequence:
             False,
         ),
         (
-            "TC015",
+            "TC016",
             "Atlantic/Canary",
             "2024-03-03T10:20:30.000Z",  # No daylight saving time at this date
             None,
@@ -1484,7 +1495,7 @@ class TestTimeWindowSequence:
         # from pendulum.datetime to pendulum.datetime object
         # ---------------------------------------
         (
-            "TC016",
+            "TC017",
             "Atlantic/Canary",
             pendulum.datetime(2024, 4, 4, 0, 0, 0),
             None,
@@ -1495,7 +1506,7 @@ class TestTimeWindowSequence:
             False,
         ),
         (
-            "TC017",
+            "TC018",
             "Atlantic/Canary",
             pendulum.datetime(2024, 4, 4, 1, 0, 0),
             None,
@@ -1506,7 +1517,7 @@ class TestTimeWindowSequence:
             False,
         ),
         (
-            "TC018",
+            "TC019",
             "Atlantic/Canary",
             pendulum.datetime(2024, 4, 4, 1, 0, 0, tz="Etc/UTC"),
             None,
@@ -1517,7 +1528,7 @@ class TestTimeWindowSequence:
             False,
         ),
         (
-            "TC019",
+            "TC020",
             "Atlantic/Canary",
             pendulum.datetime(2024, 4, 4, 2, 0, 0, tz="Europe/Berlin"),
             None,
@@ -1533,7 +1544,7 @@ class TestTimeWindowSequence:
         # - no timezone
         # local timezone UTC
         (
-            "TC020",
+            "TC021",
             "Etc/UTC",
             "2023-11-06T00:00:00",
             "UTC",
@@ -1545,7 +1556,7 @@ class TestTimeWindowSequence:
         ),
         # local timezone "Europe/Berlin"
         (
-            "TC021",
+            "TC022",
             "Europe/Berlin",
             "2023-11-06T00:00:00",
             "UTC",
@@ -1557,7 +1568,7 @@ class TestTimeWindowSequence:
         ),
         # - no microseconds
         (
-            "TC022",
+            "TC023",
             "Atlantic/Canary",
             "2024-10-30T00:00:00+01:00",
             "UTC",
@@ -1568,7 +1579,7 @@ class TestTimeWindowSequence:
             False,
         ),
         (
-            "TC023",
+            "TC024",
             "Atlantic/Canary",
             "2024-10-30T01:00:00+01:00",
             "utc",
@@ -1580,7 +1591,7 @@ class TestTimeWindowSequence:
         ),
         # - with microseconds
         (
-            "TC024",
+            "TC025",
             "Atlantic/Canary",
             "2024-10-07T10:20:30.000+02:00",
             "UTC",
@@ -1596,7 +1607,7 @@ class TestTimeWindowSequence:
         # - no timezone
         # local timezone
         (
-            "TC025",
+            "TC026",
             None,
             None,
             None,

@@ -14,8 +14,10 @@ DIR_DOCS_GENERATED = DIR_PROJECT_ROOT / "docs" / "_generated"
 DIR_TEST_GENERATED = DIR_TESTDATA / "docs" / "_generated"
 
 
-def test_openapi_spec_current(config_eos):
+def test_openapi_spec_current(config_eos, set_other_timezone):
     """Verify the openapi spec hasn´t changed."""
+    set_other_timezone("UTC")  # CI runs on UTC
+
     expected_spec_path = DIR_PROJECT_ROOT / "openapi.json"
     new_spec_path = DIR_TESTDATA / "openapi-new.json"
 
@@ -23,7 +25,7 @@ def test_openapi_spec_current(config_eos):
         expected_spec = json.load(f_expected)
 
     # Patch get_config and import within guard to patch global variables within the eos module.
-    with patch("akkudoktoreos.config.config.get_config", return_value=config_eos):
+    with patch("akkudoktoreos.core.coreabc.get_config", return_value=config_eos):
         # Ensure the script works correctly as part of a package
         root_dir = Path(__file__).resolve().parent.parent
         sys.path.insert(0, str(root_dir))
@@ -39,7 +41,7 @@ def test_openapi_spec_current(config_eos):
     expected_spec_str = json.dumps(expected_spec, indent=4, sort_keys=True)
 
     try:
-        assert spec_str == expected_spec_str
+        assert json.loads(spec_str) == json.loads(expected_spec_str)
     except AssertionError as e:
         pytest.fail(
             f"Expected {new_spec_path} to equal {expected_spec_path}.\n"
@@ -47,8 +49,10 @@ def test_openapi_spec_current(config_eos):
     )
 
 
-def test_openapi_md_current(config_eos):
+def test_openapi_md_current(config_eos, set_other_timezone):
     """Verify the generated openapi markdown hasn´t changed."""
+    set_other_timezone("UTC")  # CI runs on UTC
+
     expected_spec_md_path = DIR_PROJECT_ROOT / "docs" / "_generated" / "openapi.md"
     new_spec_md_path = DIR_TESTDATA / "openapi-new.md"
 
@@ -56,7 +60,7 @@ def test_openapi_md_current(config_eos):
         expected_spec_md = f_expected.read()
 
     # Patch get_config and import within guard to patch global variables within the eos module.
-    with patch("akkudoktoreos.config.config.get_config", return_value=config_eos):
+    with patch("akkudoktoreos.core.coreabc.get_config", return_value=config_eos):
         # Ensure the script works correctly as part of a package
         root_dir = Path(__file__).resolve().parent.parent
         sys.path.insert(0, str(root_dir))
@@ -76,8 +80,10 @@ def test_openapi_md_current(config_eos):
     )
 
 
-def test_config_md_current(config_eos):
+def test_config_md_current(config_eos, set_other_timezone):
     """Verify the generated configuration markdown hasn´t changed."""
+    set_other_timezone("UTC")  # CI runs on UTC
+
     assert DIR_DOCS_GENERATED.exists()
 
     # Remove any leftover files from last run
@@ -88,7 +94,7 @@ def test_config_md_current(config_eos):
     DIR_TEST_GENERATED.mkdir(parents=True, exist_ok=True)
 
     # Patch get_config and import within guard to patch global variables within the eos module.
-    with patch("akkudoktoreos.config.config.get_config", return_value=config_eos):
+    with patch("akkudoktoreos.core.coreabc.get_config", return_value=config_eos):
         # Ensure the script works correctly as part of a package
         root_dir = Path(__file__).resolve().parent.parent
         sys.path.insert(0, str(root_dir))
@@ -106,7 +112,11 @@ def test_config_md_current(config_eos):
         tested.append(DIR_TEST_GENERATED / file_name)
 
         # Create test files
-        config_md = generate_config_md.generate_config_md(tested[0], config_eos)
+        try:
+            config_eos._force_documentation_mode = True
+            config_md = generate_config_md.generate_config_md(tested[0], config_eos)
+        finally:
+            config_eos._force_documentation_mode = False
 
         # Check test files are the same as the expected files
         for i, expected_path in enumerate(expected):
@@ -9,6 +9,8 @@ from typing import Optional

 import pytest

+from akkudoktoreos.core.coreabc import singletons_init
+
 DIR_PROJECT_ROOT = Path(__file__).absolute().parent.parent
 DIR_BUILD = DIR_PROJECT_ROOT / "build"
 DIR_BUILD_DOCS = DIR_PROJECT_ROOT / "build" / "docs"

@@ -80,6 +82,7 @@ class TestSphinxDocumentation:

     def test_sphinx_build(self, sphinx_changed: Optional[str], is_finalize: bool):
         """Build Sphinx documentation and ensure no major warnings appear in the build output."""
+
         # Ensure docs folder exists
         if not DIR_DOCS.exists():
             pytest.skip(f"Skipping Sphinx build test - docs folder not present: {DIR_DOCS}")

@@ -88,7 +91,7 @@ class TestSphinxDocumentation:
             pytest.skip(f"Skipping Sphinx build — no relevant file changes detected: {HASH_FILE}")

         if not is_finalize:
-            pytest.skip("Skipping Sphinx test — not full run")
+            pytest.skip("Skipping Sphinx test — not finalize")

         # Clean directories
         self._cleanup_autosum_dirs()

@@ -123,7 +126,11 @@ class TestSphinxDocumentation:
         # Remove temporary EOS_DIR
         eos_tmp_dir.cleanup()

-        assert returncode == 0
+        if returncode != 0:
+            pytest.fail(
+                f"Sphinx build failed with exit code {returncode}.\n"
+                f"{output}\n"
+            )

         # Possible markers: ERROR: WARNING: TRACEBACK:
         major_markers = ("ERROR:", "TRACEBACK:")
@@ -8,7 +8,7 @@ import requests
 from loguru import logger

 from akkudoktoreos.core.cache import CacheFileStore
-from akkudoktoreos.core.ems import get_ems
+from akkudoktoreos.core.coreabc import get_ems
 from akkudoktoreos.prediction.elecpriceakkudoktor import (
     AkkudoktorElecPrice,
     AkkudoktorElecPriceValue,

@@ -8,7 +8,7 @@ import requests
 from loguru import logger

 from akkudoktoreos.core.cache import CacheFileStore
-from akkudoktoreos.core.ems import get_ems
+from akkudoktoreos.core.coreabc import get_ems
 from akkudoktoreos.prediction.elecpriceakkudoktor import (
     AkkudoktorElecPrice,
     AkkudoktorElecPriceValue,
@@ -1,11 +1,12 @@
 import json
 from pathlib import Path

+import numpy.testing as npt
 import pytest

-from akkudoktoreos.core.ems import get_ems
+from akkudoktoreos.core.coreabc import get_ems
 from akkudoktoreos.prediction.elecpriceimport import ElecPriceImport
-from akkudoktoreos.utils.datetimeutil import compare_datetimes, to_datetime
+from akkudoktoreos.utils.datetimeutil import compare_datetimes, to_datetime, to_duration

 DIR_TESTDATA = Path(__file__).absolute().parent.joinpath("testdata")

@@ -83,6 +84,7 @@ def test_invalid_provider(provider, config_eos):
 )
 def test_import(provider, sample_import_1_json, start_datetime, from_file, config_eos):
     """Test fetching forecast from Import."""
+    key = "elecprice_marketprice_wh"
     ems_eos = get_ems()
     ems_eos.set_start_datetime(to_datetime(start_datetime, in_timezone="Europe/Berlin"))
     if from_file:

@@ -91,7 +93,7 @@ def test_import(provider, sample_import_1_json, start_datetime, from_file, confi
     else:
         config_eos.elecprice.elecpriceimport.import_file_path = None
         assert config_eos.elecprice.elecpriceimport.import_file_path is None
-    provider.clear()
+    provider.delete_by_datetime(start_datetime=None, end_datetime=None)

     # Call the method
     provider.update_data()

@@ -100,16 +102,13 @@ def test_import(provider, sample_import_1_json, start_datetime, from_file, confi
     assert provider.ems_start_datetime is not None
     assert provider.total_hours is not None
     assert compare_datetimes(provider.ems_start_datetime, ems_eos.start_datetime).equal
-    values = sample_import_1_json["elecprice_marketprice_wh"]
-    value_datetime_mapping = provider.import_datetimes(ems_eos.start_datetime, len(values))
-    for i, mapping in enumerate(value_datetime_mapping):
-        assert i < len(provider.records)
-        expected_datetime, expected_value_index = mapping
-        expected_value = values[expected_value_index]
-        result_datetime = provider.records[i].date_time
-        result_value = provider.records[i]["elecprice_marketprice_wh"]
-
-        # print(f"{i}: Expected: {expected_datetime}:{expected_value}")
-        # print(f"{i}: Result: {result_datetime}:{result_value}")
-        assert compare_datetimes(result_datetime, expected_datetime).equal
-        assert result_value == expected_value
+    expected_values = sample_import_1_json[key]
+    result_values = provider.key_to_array(
+        key=key,
+        start_datetime=provider.ems_start_datetime,
+        end_datetime=provider.ems_start_datetime + to_duration(f"{len(expected_values)} hours"),
+        interval=to_duration("1 hour"),
+    )
+    # Allow for some difference due to value calculation on DST change
+    npt.assert_allclose(result_values, expected_values, rtol=0.001)
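The new assertions above compare the whole imported series via `key_to_array` instead of walking records by index. As a rough, hypothetical sketch of that access pattern (names and behavior are assumptions for illustration, not the EOS provider code), a key-to-array helper samples one value per interval between a start and an end time:

```python
from datetime import datetime, timedelta

def key_to_array(records, key, start, end, interval):
    """Sample one value per interval from timestamped records.

    `records` maps datetime -> dict of key/value pairs. This mirrors how
    the test consumes the result, not the real provider implementation.
    """
    out = []
    t = start
    while t < end:
        rec = records.get(t)
        # Missing timestamps yield None; the real provider interpolates
        out.append(rec.get(key) if rec is not None else None)
        t += interval
    return out

start = datetime(2024, 10, 26, 0, 0)
records = {
    start + timedelta(hours=i): {"elecprice_marketprice_wh": 100 * i}
    for i in range(4)
}
values = key_to_array(
    records, "elecprice_marketprice_wh",
    start, start + timedelta(hours=4), timedelta(hours=1),
)
```

The point of the migration is that a single array comparison (`npt.assert_allclose`) tolerates small resampling differences around DST changes, which the old per-record equality checks could not.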
@@ -3,7 +3,7 @@ from pathlib import Path

 import pytest

-from akkudoktoreos.core.ems import get_ems
+from akkudoktoreos.core.coreabc import get_ems
 from akkudoktoreos.prediction.feedintarifffixed import FeedInTariffFixed
 from akkudoktoreos.utils.datetimeutil import compare_datetimes, to_datetime


@@ -7,7 +7,7 @@ import pytest

 from akkudoktoreos.config.config import ConfigEOS
 from akkudoktoreos.core.cache import CacheEnergyManagementStore
-from akkudoktoreos.core.ems import get_ems
+from akkudoktoreos.core.coreabc import get_ems
 from akkudoktoreos.optimization.genetic.genetic import GeneticOptimization
 from akkudoktoreos.optimization.genetic.geneticparams import (
     GeneticOptimizationParameters,

@@ -18,7 +18,7 @@ from akkudoktoreos.utils.visualize import (
     prepare_visualize,  # Import the new prepare_visualize
 )

-ems_eos = get_ems()
+ems_eos = get_ems(init=True)  # init once

 DIR_TESTDATA = Path(__file__).parent / "testdata"
@@ -4,8 +4,8 @@ import numpy as np
 import pendulum
 import pytest

-from akkudoktoreos.core.ems import get_ems
-from akkudoktoreos.measurement.measurement import MeasurementDataRecord, get_measurement
+from akkudoktoreos.core.coreabc import get_ems, get_measurement
+from akkudoktoreos.measurement.measurement import MeasurementDataRecord
 from akkudoktoreos.prediction.loadakkudoktor import (
     LoadAkkudoktor,
     LoadAkkudoktorAdjusted,

@@ -63,7 +63,7 @@ def measurement_eos():
     dt = to_datetime("2024-01-01T00:00:00")
     interval = to_duration("1 hour")
     for i in range(25):
-        measurement.records.append(
+        measurement.insert_by_datetime(
             MeasurementDataRecord(
                 date_time=dt,
                 load0_mr=load0_mr,

@@ -138,7 +138,7 @@ def test_update_data(mock_load_data, loadakkudoktor):
     ems_eos.set_start_datetime(pendulum.datetime(2024, 1, 1))

     # Assure there are no prediction records
-    loadakkudoktor.clear()
+    loadakkudoktor.delete_by_datetime(start_datetime=None, end_datetime=None)
     assert len(loadakkudoktor) == 0

     # Execute the method

@@ -152,6 +152,24 @@ def test_calculate_adjustment(loadakkudoktoradjusted, measurement_eos):
     """Test `_calculate_adjustment` for various scenarios."""
     data_year_energy = np.random.rand(365, 2, 24)

+    # Check the test setup
+    assert loadakkudoktoradjusted.measurement is measurement_eos
+    assert measurement_eos.min_datetime == to_datetime("2024-01-01T00:00:00")
+    assert measurement_eos.max_datetime == to_datetime("2024-01-02T00:00:00")
+    # Use same calculation as in _calculate_adjustment
+    compare_start = measurement_eos.max_datetime - to_duration("7 days")
+    if compare_datetimes(compare_start, measurement_eos.min_datetime).lt:
+        # Not enough measurements for 7 days - use what is available
+        compare_start = measurement_eos.min_datetime
+    compare_end = measurement_eos.max_datetime
+    compare_interval = to_duration("1 hour")
+    load_total_kwh_array = measurement_eos.load_total_kwh(
+        start_datetime=compare_start,
+        end_datetime=compare_end,
+        interval=compare_interval,
+    )
+    np.testing.assert_allclose(load_total_kwh_array, [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
+
     # Call the method and validate results
     weekday_adjust, weekend_adjust = loadakkudoktoradjusted._calculate_adjustment(data_year_energy)
     assert weekday_adjust.shape == (24,)
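The setup check added above recomputes the comparison window the same way `_calculate_adjustment` does: look back seven days from the newest measurement, clamped to the oldest available one. A standalone sketch of that clamp (function name and signature are illustrative, not the EOS API):

```python
from datetime import datetime, timedelta

def comparison_window(min_datetime, max_datetime, lookback=timedelta(days=7)):
    """Window over which predictions are compared against measurements.

    Uses the last `lookback` span of data, falling back to whatever is
    available when the measurements cover less than that.
    """
    compare_start = max_datetime - lookback
    if compare_start < min_datetime:
        # Not enough measurements for the full lookback - use what is available
        compare_start = min_datetime
    return compare_start, max_datetime

# The fixture above holds 25 hourly readings, i.e. one day of data
start, end = comparison_window(datetime(2024, 1, 1), datetime(2024, 1, 2))
```

With only one day of measurements, the seven-day lookback would reach before the first record, so the window collapses to exactly the measured span, which is why the test can assert a fixed 24-element array.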
@@ -3,10 +3,17 @@ import pytest
 from pendulum import datetime, duration

 from akkudoktoreos.config.config import SettingsEOS
+from akkudoktoreos.core.coreabc import get_measurement
 from akkudoktoreos.measurement.measurement import (
     MeasurementCommonSettings,
     MeasurementDataRecord,
-    get_measurement,
 )
+from akkudoktoreos.utils.datetimeutil import (
+    DateTime,
+    Duration,
+    compare_datetimes,
+    to_datetime,
+    to_duration,
+)
@@ -41,8 +48,9 @@ class TestMeasurementDataRecord:

     def test_getitem_existing_field(self, record):
         """Test that __getitem__ returns correct value for existing native field."""
-        record.date_time = "2024-01-01T00:00:00+00:00"
-        assert record["date_time"] is not None
+        date_time = "2024-01-01T00:00:00+00:00"
+        record.date_time = date_time
+        assert compare_datetimes(record["date_time"], to_datetime(date_time)).equal

     def test_getitem_existing_measurement(self, record):
         """Test that __getitem__ retrieves existing measurement values."""
@@ -220,6 +228,7 @@ class TestMeasurement:
         # Load meter readings are in kWh
         config_eos.measurement.load_emr_keys = ["load0_mr", "load1_mr", "load2_mr", "load3_mr"]
         measurement = get_measurement()
+        measurement.delete_by_datetime(None, None)
         record0 = MeasurementDataRecord(
             date_time=datetime(2023, 1, 1, hour=0),
             load0_mr=100,

@@ -227,52 +236,54 @@ class TestMeasurement:
         )
         assert record0.load0_mr == 100
         assert record0.load1_mr == 200
-        measurement.records = [
+        records = [
             MeasurementDataRecord(
-                date_time=datetime(2023, 1, 1, hour=0),
+                date_time=to_datetime("2023-01-01T00:00:00"),
                 load0_mr=100,
                 load1_mr=200,
             ),
             MeasurementDataRecord(
-                date_time=datetime(2023, 1, 1, hour=1),
+                date_time=to_datetime("2023-01-01T01:00:00"),
                 load0_mr=150,
                 load1_mr=250,
             ),
             MeasurementDataRecord(
-                date_time=datetime(2023, 1, 1, hour=2),
+                date_time=to_datetime("2023-01-01T02:00:00"),
                 load0_mr=200,
                 load1_mr=300,
             ),
             MeasurementDataRecord(
-                date_time=datetime(2023, 1, 1, hour=3),
+                date_time=to_datetime("2023-01-01T03:00:00"),
                 load0_mr=250,
                 load1_mr=350,
             ),
             MeasurementDataRecord(
-                date_time=datetime(2023, 1, 1, hour=4),
+                date_time=to_datetime("2023-01-01T04:00:00"),
                 load0_mr=300,
                 load1_mr=400,
             ),
             MeasurementDataRecord(
-                date_time=datetime(2023, 1, 1, hour=5),
+                date_time=to_datetime("2023-01-01T05:00:00"),
                 load0_mr=350,
                 load1_mr=450,
             ),
         ]
+        for record in records:
+            measurement.insert_by_datetime(record)
         return measurement

     def test_interval_count(self, measurement_eos):
         """Test interval count calculation."""
-        start = datetime(2023, 1, 1, 0)
-        end = datetime(2023, 1, 1, 3)
+        start = to_datetime("2023-01-01T00:00:00")
+        end = to_datetime("2023-01-01T03:00:00")
         interval = duration(hours=1)

         assert measurement_eos._interval_count(start, end, interval) == 3

     def test_interval_count_invalid_end_before_start(self, measurement_eos):
         """Test interval count raises ValueError when end_datetime is before start_datetime."""
-        start = datetime(2023, 1, 1, 3)
-        end = datetime(2023, 1, 1, 0)
+        start = to_datetime("2023-01-01T03:00:00")
+        end = to_datetime("2023-01-01T00:00:00")
         interval = duration(hours=1)

         with pytest.raises(ValueError, match="end_datetime must be after start_datetime"):
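Because the data sequence is now purely time-series data that may be backed by the database, the fixtures insert records through `insert_by_datetime` instead of assigning `measurement.records` directly. A minimal in-memory sketch of the ordering convention this implies (a stand-in for illustration, not the EOS implementation):

```python
import bisect
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Record:
    date_time: datetime
    value: float

class Sequence:
    """Keeps records ordered by timestamp regardless of insert order."""

    def __init__(self):
        self.records = []

    def insert_by_datetime(self, record):
        # Bisect on the timestamps keeps the list sorted on every insert,
        # so index-based access order never depends on insertion order.
        keys = [r.date_time for r in self.records]
        self.records.insert(bisect.bisect_right(keys, record.date_time), record)

seq = Sequence()
for hour in (3, 0, 2, 1):
    seq.insert_by_datetime(Record(datetime(2023, 1, 1, hour), hour * 50.0))
order = [r.date_time.hour for r in seq.records]
```

This is also why the commit removes index-based sequence access: with a database backend, position in the sequence is derived from the timestamp, not from when a record happened to be appended.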
@@ -280,8 +291,8 @@ class TestMeasurement:

     def test_interval_count_invalid_non_positive_interval(self, measurement_eos):
         """Test interval count raises ValueError when interval is non-positive."""
-        start = datetime(2023, 1, 1, 0)
-        end = datetime(2023, 1, 1, 3)
+        start = to_datetime("2023-01-01T00:00:00")
+        end = to_datetime("2023-01-01T03:00:00")

         with pytest.raises(ValueError, match="interval must be positive"):
             measurement_eos._interval_count(start, end, duration(hours=0))

@@ -289,8 +300,8 @@ class TestMeasurement:
     def test_energy_from_meter_readings_valid_input(self, measurement_eos):
         """Test _energy_from_meter_readings with valid inputs and proper alignment of load data."""
         key = "load0_mr"
-        start_datetime = datetime(2023, 1, 1, 0)
-        end_datetime = datetime(2023, 1, 1, 5)
+        start_datetime = to_datetime("2023-01-01T00:00:00")
+        end_datetime = to_datetime("2023-01-01T05:00:00")
         interval = duration(hours=1)

         load_array = measurement_eos._energy_from_meter_readings(

@@ -303,12 +314,12 @@ class TestMeasurement:
     def test_energy_from_meter_readings_empty_array(self, measurement_eos):
         """Test _energy_from_meter_readings with no data (empty array)."""
         key = "load0_mr"
-        start_datetime = datetime(2023, 1, 1, 0)
-        end_datetime = datetime(2023, 1, 1, 5)
+        start_datetime = to_datetime("2023-01-01T00:00:00")
+        end_datetime = to_datetime("2023-01-01T05:00:00")
         interval = duration(hours=1)

         # Use empyt records array
-        measurement_eos.records = []
+        measurement_eos.delete_by_datetime(start_datetime, end_datetime)

         load_array = measurement_eos._energy_from_meter_readings(
             key, start_datetime, end_datetime, interval
@@ -324,25 +335,46 @@ class TestMeasurement:
     def test_energy_from_meter_readings_misaligned_array(self, measurement_eos):
         """Test _energy_from_meter_readings with misaligned array size."""
         key = "load1_mr"
-        start_datetime = measurement_eos.min_datetime
-        end_datetime = measurement_eos.max_datetime
+        start_datetime = to_datetime("2023-01-01T00:00:00")
+        end_datetime = to_datetime("2023-01-01T05:00:00")
         interval = duration(hours=1)

         # Use misaligned array, latest interval set to 2 hours (instead of 1 hour)
-        measurement_eos.records[-1].date_time = datetime(2023, 1, 1, 6)
+        latest_record_datetime = to_datetime("2023-01-01T05:00:00")
+        new_record_datetime = to_datetime("2023-01-01T06:00:00")
+        record = measurement_eos.get_by_datetime(latest_record_datetime)
+        assert record is not None
+        measurement_eos.delete_by_datetime(start_datetime = latest_record_datetime,
+                                           end_datetime = new_record_datetime)
+        record.date_time = new_record_datetime
+        measurement_eos.insert_by_datetime(record)
+
+        # Check test setup
+        dates, values = measurement_eos.key_to_lists(key, start_datetime, None)
+        assert dates == [
+            to_datetime("2023-01-01T00:00:00"),
+            to_datetime("2023-01-01T01:00:00"),
+            to_datetime("2023-01-01T02:00:00"),
+            to_datetime("2023-01-01T03:00:00"),
+            to_datetime("2023-01-01T04:00:00"),
+            to_datetime("2023-01-01T06:00:00"),
+        ]
+        assert values == [200, 250, 300, 350, 400, 450]
+        array = measurement_eos.key_to_array(key, start_datetime, end_datetime + interval, interval=interval)
+        np.testing.assert_array_equal(array, [200, 250, 300, 350, 400, 425])

         load_array = measurement_eos._energy_from_meter_readings(
             key, start_datetime, end_datetime, interval
         )

-        expected_load_array = np.array([50, 50, 50, 50, 25])  # Differences between consecutive readings
+        expected_load_array = np.array([50., 50., 50., 50., 25.])  # Differences between consecutive readings
         np.testing.assert_array_equal(load_array, expected_load_array)

     def test_energy_from_meter_readings_partial_data(self, measurement_eos, caplog):
         """Test _energy_from_meter_readings with partial data (misaligned but empty array)."""
         key = "load2_mr"
-        start_datetime = datetime(2023, 1, 1, 0)
-        end_datetime = datetime(2023, 1, 1, 5)
+        start_datetime = to_datetime("2023-01-01T00:00:00")
+        end_datetime = to_datetime("2023-01-01T05:00:00")
         interval = duration(hours=1)

         with caplog.at_level("DEBUG"):
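The expected arrays in the misaligned test follow from simple arithmetic: the last reading is moved to two hours after the previous one, so resampling at 05:00 falls between the 04:00 reading (400) and the 06:00 reading (450), and differencing the resampled readings yields the per-hour energy. A sketch of that arithmetic (linear interpolation is an assumption inferred from the asserted values, not taken from the EOS code):

```python
import numpy as np

# Meter readings keyed by hour; the last reading moved from hour 5 to hour 6
hours = np.array([0, 1, 2, 3, 4, 6])
readings = np.array([200, 250, 300, 350, 400, 450])

# Resample to a 1-hour grid over hours 0..5; hour 5 has no reading, so it
# is interpolated halfway between 400 (hour 4) and 450 (hour 6)
grid = np.arange(0, 6)
resampled = np.interp(grid, hours, readings)

# Energy per interval is the difference between consecutive resampled readings
energy = np.diff(resampled)
```

This reproduces the asserted `[200, 250, 300, 350, 400, 425]` resampled readings and the `[50., 50., 50., 50., 25.]` per-interval energy: the final 50-unit increment is spread over two hours, leaving 25 in the last one-hour interval.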
@@ -359,8 +391,8 @@ class TestMeasurement:
     def test_energy_from_meter_readings_negative_interval(self, measurement_eos):
         """Test _energy_from_meter_readings with a negative interval."""
         key = "load3_mr"
-        start_datetime = datetime(2023, 1, 1, 0)
-        end_datetime = datetime(2023, 1, 1, 5)
+        start_datetime = to_datetime("2023-01-01T00:00:00")
+        end_datetime = to_datetime("2023-01-01T05:00:00")
         interval = duration(hours=-1)

         with pytest.raises(ValueError, match="interval must be positive"):

@@ -368,11 +400,11 @@ class TestMeasurement:

     def test_load_total_kwh(self, measurement_eos):
         """Test total load calculation."""
-        start = datetime(2023, 1, 1, 0)
-        end = datetime(2023, 1, 1, 2)
+        start_datetime = to_datetime("2023-01-01T03:00:00")
+        end_datetime = to_datetime("2023-01-01T05:00:00")
         interval = duration(hours=1)

-        result = measurement_eos.load_total_kwh(start_datetime=start, end_datetime=end, interval=interval)
+        result = measurement_eos.load_total_kwh(start_datetime=start_datetime, end_datetime=end_datetime, interval=interval)

         # Expected total load per interval
         expected = np.array([100, 100])  # Differences between consecutive meter readings

@@ -381,20 +413,20 @@ class TestMeasurement:
     def test_load_total_kwh_no_data(self, measurement_eos):
         """Test total load calculation with no data."""
         measurement_eos.records = []
-        start = datetime(2023, 1, 1, 0)
-        end = datetime(2023, 1, 1, 3)
+        start_datetime = to_datetime("2023-01-01T00:00:00")
+        end_datetime = to_datetime("2023-01-01T03:00:00")
         interval = duration(hours=1)

-        result = measurement_eos.load_total_kwh(start_datetime=start, end_datetime=end, interval=interval)
+        result = measurement_eos.load_total_kwh(start_datetime=start_datetime, end_datetime=end_datetime, interval=interval)
         expected = np.zeros(3)  # No data, so all intervals are zero
         np.testing.assert_array_equal(result, expected)

     def test_load_total_kwh_partial_intervals(self, measurement_eos):
         """Test total load calculation with partial intervals."""
-        start = datetime(2023, 1, 1, 0, 30)  # Start in the middle of an interval
-        end = datetime(2023, 1, 1, 1, 30)  # End in the middle of another interval
+        start_datetime = to_datetime("2023-01-01T00:30:00")  # Start in the middle of an interval
+        end_datetime = to_datetime("2023-01-01T01:30:00")  # End in the middle of another interval
         interval = duration(hours=1)

-        result = measurement_eos.load_total_kwh(start_datetime=start, end_datetime=end, interval=interval)
+        result = measurement_eos.load_total_kwh(start_datetime=start_datetime, end_datetime=end_datetime, interval=interval)
         expected = np.array([100])  # Only one complete interval covered
         np.testing.assert_array_equal(result, expected)
@@ -1,6 +1,7 @@
 import pytest
 from pydantic import ValidationError

+from akkudoktoreos.core.coreabc import get_prediction
 from akkudoktoreos.prediction.elecpriceakkudoktor import ElecPriceAkkudoktor
 from akkudoktoreos.prediction.elecpriceenergycharts import ElecPriceEnergyCharts
 from akkudoktoreos.prediction.elecpriceimport import ElecPriceImport

@@ -15,7 +16,6 @@ from akkudoktoreos.prediction.loadvrm import LoadVrm
 from akkudoktoreos.prediction.prediction import (
     Prediction,
     PredictionCommonSettings,
-    get_prediction,
 )
 from akkudoktoreos.prediction.pvforecastakkudoktor import PVForecastAkkudoktor
 from akkudoktoreos.prediction.pvforecastimport import PVForecastImport
@@ -7,10 +7,10 @@ import pendulum
 import pytest
 from pydantic import Field

-from akkudoktoreos.core.ems import get_ems
+from akkudoktoreos.core.coreabc import get_ems
 from akkudoktoreos.prediction.prediction import PredictionCommonSettings
 from akkudoktoreos.prediction.predictionabc import (
-    PredictionBase,
+    PredictionABC,
     PredictionContainer,
     PredictionProvider,
     PredictionRecord,

@@ -28,7 +28,7 @@ class DerivedConfig(PredictionCommonSettings):
     class_constant: Optional[int] = Field(default=None, description="Test config by class constant")


-class DerivedBase(PredictionBase):
+class DerivedBase(PredictionABC):
     instance_field: Optional[str] = Field(default=None, description="Field Value")
     class_constant: ClassVar[int] = 30

@@ -84,7 +84,7 @@ class DerivedPredictionContainer(PredictionContainer):
 # ----------


-class TestPredictionBase:
+class TestPredictionABC:
     @pytest.fixture
     def base(self, monkeypatch):
         # Provide default values for configuration
@@ -216,17 +216,19 @@ class TestPredictionProvider:
     def test_delete_by_datetime(self, provider, sample_start_datetime):
         """Test `delete_by_datetime` method for removing records by datetime range."""
         # Add records to the provider for deletion testing
-        provider.records = [
+        records = [
             self.create_test_record(sample_start_datetime - to_duration("3 hours"), 1),
             self.create_test_record(sample_start_datetime - to_duration("1 hour"), 2),
             self.create_test_record(sample_start_datetime + to_duration("1 hour"), 3),
         ]
+        for record in records:
+            provider.insert_by_datetime(record)

         provider.delete_by_datetime(
             start_datetime=sample_start_datetime - to_duration("2 hours"),
             end_datetime=sample_start_datetime + to_duration("2 hours"),
         )
-        assert len(provider.records) == 1, (
+        assert len(provider) == 1, (
             "Only one record should remain after deletion by datetime."
         )
         assert provider.records[0].date_time == sample_start_datetime - to_duration("3 hours"), (

@@ -243,15 +245,17 @@ class TestPredictionContainer:

     @pytest.fixture
     def container_with_providers(self):
-        record1 = self.create_test_record(datetime(2023, 11, 5), 1)
-        record2 = self.create_test_record(datetime(2023, 11, 6), 2)
-        record3 = self.create_test_record(datetime(2023, 11, 7), 3)
+        records = [
+            # Test records - include 'prediction_value' key
+            self.create_test_record(datetime(2023, 11, 5), 1),
+            self.create_test_record(datetime(2023, 11, 6), 2),
+            self.create_test_record(datetime(2023, 11, 7), 3),
+        ]
         provider = DerivedPredictionProvider()
-        provider.clear()
+        provider.delete_by_datetime(start_datetime=None, end_datetime=None)
         assert len(provider) == 0
-        provider.append(record1)
-        provider.append(record2)
-        provider.append(record3)
+        for record in records:
+            provider.insert_by_datetime(record)
         assert len(provider) == 3
         container = DerivedPredictionContainer()
         container.providers.clear()

@@ -378,7 +382,9 @@ class TestPredictionContainer:
         assert len(container_with_providers.providers) == 1
         # check all keys are available (don't care for position)
         for key in ["prediction_value", "date_time"]:
-            assert key in list(container_with_providers.keys())
+            assert key in container_with_providers.record_keys
+        for key in ["prediction_value", "date_time"]:
+            assert key in container_with_providers.keys()
         series = container_with_providers["prediction_value"]
         assert isinstance(series, pd.Series)
         assert series.name == "prediction_value"
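The `test_delete_by_datetime` records at -3 h, -1 h, and +1 h around the start time, deletes the range from -2 h to +2 h, and expects exactly the -3 h record to survive. Those counts are consistent with a half-open `[start, end)` range, though the exact boundary semantics of the EOS method are an assumption here. A minimal sketch of such range deletion on a timestamped store (hypothetical API, not the EOS implementation):

```python
from datetime import datetime, timedelta

class Provider:
    """Sketch of range deletion on a timestamped record store."""

    def __init__(self):
        self.records = {}  # datetime -> value

    def insert_by_datetime(self, dt, value):
        self.records[dt] = value

    def delete_by_datetime(self, start_datetime=None, end_datetime=None):
        # A None bound is open: everything on that side is deleted,
        # so delete_by_datetime(None, None) empties the store.
        for dt in list(self.records):
            if start_datetime is not None and dt < start_datetime:
                continue
            if end_datetime is not None and dt >= end_datetime:
                continue
            del self.records[dt]

base = datetime(2023, 11, 6)
p = Provider()
for offset in (-3, -1, 1):
    p.insert_by_datetime(base + timedelta(hours=offset), offset)
p.delete_by_datetime(base - timedelta(hours=2), base + timedelta(hours=2))
remaining = sorted(p.records.values())
```

The open-bound behavior is why `delete_by_datetime(start_datetime=None, end_datetime=None)` can replace the old `clear()` calls throughout these tests.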
@@ -5,8 +5,7 @@ from unittest.mock import Mock, patch
 import pytest
 from loguru import logger

-from akkudoktoreos.core.ems import get_ems
-from akkudoktoreos.prediction.prediction import get_prediction
+from akkudoktoreos.core.coreabc import get_ems, get_prediction
 from akkudoktoreos.prediction.pvforecastakkudoktor import (
     AkkudoktorForecastHorizon,
     AkkudoktorForecastMeta,

@@ -137,7 +136,7 @@ def provider():
 def provider_empty_instance():
     """Fixture that returns an empty instance of PVForecast."""
     empty_instance = PVForecastAkkudoktor()
-    empty_instance.clear()
+    empty_instance.delete_by_datetime(start_datetime=None, end_datetime=None)
     assert len(empty_instance) == 0
     return empty_instance

@@ -277,7 +276,7 @@ def test_pvforecast_akkudoktor_update_with_sample_forecast(
     ems_eos.set_start_datetime(sample_forecast_start)
     provider.update_data(force_enable=True, force_update=True)
     assert compare_datetimes(provider.ems_start_datetime, sample_forecast_start).equal
-    assert compare_datetimes(provider[0].date_time, to_datetime(sample_forecast_start)).equal
+    assert compare_datetimes(provider.records[0].date_time, to_datetime(sample_forecast_start)).equal


 # Report Generation Test

@@ -290,7 +289,7 @@ def test_report_ac_power_and_measurement(provider, config_eos):
         pvforecast_dc_power=450.0,
         pvforecast_ac_power=400.0,
     )
-    provider.append(record)
+    provider.insert_by_datetime(record)

     report = provider.report_ac_power_and_measurement()
     assert "DC: 450.0" in report

@@ -323,19 +322,19 @@ def test_timezone_behaviour(
     expected_datetime = to_datetime("2024-10-06T00:00:00+0200", in_timezone=other_timezone)
     assert compare_datetimes(other_start_datetime, expected_datetime).equal

-    provider.clear()
+    provider.delete_by_datetime(start_datetime=None, end_datetime=None)
     assert len(provider) == 0
     ems_eos = get_ems()
     ems_eos.set_start_datetime(other_start_datetime)
     provider.update_data(force_update=True)
     assert compare_datetimes(provider.ems_start_datetime, other_start_datetime).equal
     # Check wether first record starts at requested sample start time
-    assert compare_datetimes(provider[0].date_time, sample_forecast_start).equal
+    assert compare_datetimes(provider.records[0].date_time, sample_forecast_start).equal

     # Test updating AC power measurement for a specific date.
     provider.update_value(sample_forecast_start, "pvforecastakkudoktor_ac_power_measured", 1000)
     # Check wether first record was filled with ac power measurement
-    assert provider[0].pvforecastakkudoktor_ac_power_measured == 1000
+    assert provider.records[0].pvforecastakkudoktor_ac_power_measured == 1000

     # Test fetching temperature forecast for a specific date.
     other_end_datetime = other_start_datetime + to_duration("24 hours")
@@ -1,11 +1,12 @@
|
||||
import json
|
||||
from pathlib import Path
|
||||
|
||||
import numpy.testing as npt
|
||||
import pytest
|
||||
|
||||
from akkudoktoreos.core.ems import get_ems
|
||||
from akkudoktoreos.core.coreabc import get_ems
|
||||
from akkudoktoreos.prediction.pvforecastimport import PVForecastImport
|
||||
from akkudoktoreos.utils.datetimeutil import compare_datetimes, to_datetime
|
||||
from akkudoktoreos.utils.datetimeutil import compare_datetimes, to_datetime, to_duration
|
||||
|
||||
DIR_TESTDATA = Path(__file__).absolute().parent.joinpath("testdata")
|
||||
|
||||
@@ -87,6 +88,7 @@ def test_invalid_provider(provider, config_eos):
|
||||
)
|
||||
def test_import(provider, sample_import_1_json, start_datetime, from_file, config_eos):
|
||||
"""Test fetching forecast from import."""
|
||||
key = "pvforecast_ac_power"
|
||||
ems_eos = get_ems()
|
||||
ems_eos.set_start_datetime(to_datetime(start_datetime, in_timezone="Europe/Berlin"))
|
||||
if from_file:
|
||||
@@ -95,7 +97,7 @@ def test_import(provider, sample_import_1_json, start_datetime, from_file, confi
|
||||
else:
|
||||
config_eos.pvforecast.provider_settings.PVForecastImport.import_file_path = None
|
||||
assert config_eos.pvforecast.provider_settings.PVForecastImport.import_file_path is None
|
||||
provider.clear()
|
||||
provider.delete_by_datetime(start_datetime=None, end_datetime=None)
|
||||
|
||||
# Call the method
|
||||
provider.update_data()
|
||||
@@ -104,16 +106,13 @@ def test_import(provider, sample_import_1_json, start_datetime, from_file, confi
|
||||
assert provider.ems_start_datetime is not None
|
||||
assert provider.total_hours is not None
|
||||
assert compare_datetimes(provider.ems_start_datetime, ems_eos.start_datetime).equal
|
||||
values = sample_import_1_json["pvforecast_ac_power"]
|
||||
value_datetime_mapping = provider.import_datetimes(ems_eos.start_datetime, len(values))
|
||||
for i, mapping in enumerate(value_datetime_mapping):
|
||||
assert i < len(provider.records)
|
||||
expected_datetime, expected_value_index = mapping
|
||||
expected_value = values[expected_value_index]
|
||||
result_datetime = provider.records[i].date_time
|
||||
result_value = provider.records[i]["pvforecast_ac_power"]
|
||||
|
||||
# print(f"{i}: Expected: {expected_datetime}:{expected_value}")
|
||||
# print(f"{i}: Result: {result_datetime}:{result_value}")
|
||||
assert compare_datetimes(result_datetime, expected_datetime).equal
|
||||
assert result_value == expected_value
|
||||
expected_values = sample_import_1_json[key]
|
||||
result_values = provider.key_to_array(
|
||||
key=key,
|
||||
start_datetime=provider.ems_start_datetime,
|
||||
end_datetime=provider.ems_start_datetime + to_duration(f"{len(expected_values)} hours"),
|
||||
interval=to_duration("1 hour"),
|
||||
)
|
||||
# Allow for some difference due to value calculation on DST change
|
||||
npt.assert_allclose(result_values, expected_values, rtol=0.001)
|
||||
|
||||
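The replacement assertion above compares the imported series after resampling it onto a fixed hourly grid via the provider's `key_to_array`. As a reviewer's note, the resampling idea can be sketched standalone; `key_to_array_sketch` below is a hypothetical stand-in, not the real provider method, and the step-hold fallback is an assumption:

```python
from datetime import datetime, timedelta


def key_to_array_sketch(records: dict[datetime, float],
                        start: datetime, end: datetime,
                        interval: timedelta) -> list[float]:
    """Sample a {datetime: value} mapping onto a fixed grid.

    Hypothetical stand-in for the provider's key_to_array(); grid points
    without an exact record fall back to the nearest earlier record.
    """
    keys = sorted(records)
    out: list[float] = []
    t = start
    while t < end:  # end is exclusive, matching len(values) hourly steps
        prior = [k for k in keys if k <= t]  # last record at or before t
        out.append(records[prior[-1]] if prior else float("nan"))
        t += interval
    return out
```

With four hourly records starting at `start`, sampling one hour over four hours returns the original values, while a 30-minute interval repeats each value twice.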
701 tests/test_retentionmanager.py Normal file
@@ -0,0 +1,701 @@
"""Tests for RetentionManager and JobState."""

from __future__ import annotations

import asyncio
import time
from typing import Any
from unittest.mock import AsyncMock, MagicMock, call, patch

import pytest
from loguru import logger

import akkudoktoreos.server.retentionmanager
from akkudoktoreos.server.retentionmanager import JobState, RetentionManager

# ---------------------------------------------------------------------------
# Shared helpers
# ---------------------------------------------------------------------------

INTERVAL = 10.0
DUE_INTERVAL = 0.001  # non-zero so interval() does not fall back to fallback_interval
FALLBACK = 300.0


def make_config_getter(interval: float = INTERVAL) -> Any:
    """Return a simple config getter that always yields ``interval`` for any key."""
    return lambda key: interval


def make_config_getter_none() -> Any:
    """Return a config getter that always yields ``None`` (job disabled)."""
    return lambda key: None


def make_manager(interval: float = INTERVAL, shutdown_timeout: float = 5.0) -> RetentionManager:
    """Return a ``RetentionManager`` backed by a fixed-interval config getter."""
    return RetentionManager(make_config_getter(interval), shutdown_timeout=shutdown_timeout)


def make_manager_none(shutdown_timeout: float = 5.0) -> RetentionManager:
    """Return a ``RetentionManager`` whose config getter always returns None (all jobs disabled)."""
    return RetentionManager(make_config_getter_none(), shutdown_timeout=shutdown_timeout)


# ---------------------------------------------------------------------------
# Tests
# ---------------------------------------------------------------------------

class TestRetentionManager:
    """Tests for :class:`RetentionManager` and :class:`JobState`."""

    # ------------------------------------------------------------------
    # Initialisation
    # ------------------------------------------------------------------

    def test_init_stores_config_getter(self) -> None:
        """The config getter passed to __init__ is stored and forwarded to jobs."""
        getter = make_config_getter()
        manager = RetentionManager(getter)
        assert manager._config_getter is getter

    def test_init_empty_job_registry(self) -> None:
        """A newly created manager has no registered jobs."""
        manager = make_manager()
        assert manager._jobs == {}

    # ------------------------------------------------------------------
    # register / unregister
    # ------------------------------------------------------------------

    def test_register_adds_job(self) -> None:
        """Registering a function adds a JobState entry."""
        manager = make_manager()
        func = MagicMock()
        manager.register("job1", func, interval_attr="some/key")
        assert "job1" in manager._jobs

    def test_register_job_state_fields(self) -> None:
        """Registered JobState carries the correct initial field values."""
        manager = make_manager()
        func = MagicMock()
        manager.register("job1", func, interval_attr="some/key", fallback_interval=60.0)
        job = manager._jobs["job1"]
        assert job.name == "job1"
        assert job.func is func
        assert job.interval_attr == "some/key"
        assert job.fallback_interval == 60.0
        assert job.config_getter is manager._config_getter
        assert job.on_exception is None
        assert job.last_run_at == 0.0
        assert job.run_count == 0
        assert job.is_running is False

    def test_register_stores_on_exception(self) -> None:
        """The on_exception callback is stored on the JobState."""
        manager = make_manager()
        handler = MagicMock()
        manager.register("job1", MagicMock(), interval_attr="k", on_exception=handler)
        assert manager._jobs["job1"].on_exception is handler

    def test_register_duplicate_raises(self) -> None:
        """Registering the same name twice raises ValueError."""
        manager = make_manager()
        manager.register("job1", MagicMock(), interval_attr="k")
        with pytest.raises(ValueError, match="job1"):
            manager.register("job1", MagicMock(), interval_attr="k")

    def test_unregister_removes_job(self) -> None:
        """Unregistering a job removes it from the registry."""
        manager = make_manager()
        manager.register("job1", MagicMock(), interval_attr="k")
        manager.unregister("job1")
        assert "job1" not in manager._jobs

    def test_unregister_missing_job_is_noop(self) -> None:
        """Unregistering a non-existent job does not raise."""
        manager = make_manager()
        manager.unregister("nonexistent")  # must not raise

    # ------------------------------------------------------------------
    # JobState.interval()
    # ------------------------------------------------------------------

    def test_job_interval_from_config_getter(self) -> None:
        """JobState.interval() returns the value provided by config_getter."""
        manager = make_manager(interval=42.0)
        manager.register("job1", MagicMock(), interval_attr="k")
        assert manager._jobs["job1"].interval() == 42.0

    def test_job_interval_none_when_config_returns_none(self) -> None:
        """JobState.interval() returns None when config_getter returns None (job disabled)."""
        manager = make_manager_none()
        manager.register("job1", MagicMock(), interval_attr="k", fallback_interval=FALLBACK)
        assert manager._jobs["job1"].interval() is None

    def test_job_interval_none_does_not_fall_back(self) -> None:
        """A None config value must NOT fall back to fallback_interval -- None means disabled."""
        manager = make_manager_none()
        manager.register("job1", MagicMock(), interval_attr="k", fallback_interval=99.0)
        # If None incorrectly fell back, this would return 99.0 instead of None
        assert manager._jobs["job1"].interval() is None

    def test_job_interval_fallback_on_key_error(self) -> None:
        """JobState.interval() uses fallback_interval when config_getter raises KeyError."""
        manager = RetentionManager(lambda key: (_ for _ in ()).throw(KeyError(key)))
        manager.register("job1", MagicMock(), interval_attr="k", fallback_interval=99.0)
        assert manager._jobs["job1"].interval() == 99.0

    def test_job_interval_fallback_on_index_error(self) -> None:
        """JobState.interval() uses fallback_interval when config_getter raises IndexError."""
        manager = RetentionManager(lambda key: (_ for _ in ()).throw(IndexError()))
        manager.register("job1", MagicMock(), interval_attr="k", fallback_interval=77.0)
        assert manager._jobs["job1"].interval() == 77.0

    def test_job_interval_fallback_on_zero_value(self) -> None:
        """JobState.interval() uses fallback_interval when config_getter returns zero."""
        manager = RetentionManager(lambda key: 0)
        manager.register("job1", MagicMock(), interval_attr="k", fallback_interval=55.0)
        assert manager._jobs["job1"].interval() == 55.0

    # ------------------------------------------------------------------
    # JobState.is_due()
    # ------------------------------------------------------------------

    def test_job_is_due_when_never_run(self) -> None:
        """A job is always due when it has never been run (last_run_at == 0.0)."""
        manager = make_manager(interval=INTERVAL)
        manager.register("job1", MagicMock(), interval_attr="k")
        assert manager._jobs["job1"].is_due() is True

    def test_job_is_not_due_immediately_after_run(self) -> None:
        """A job is not due immediately after last_run_at is set to now."""
        manager = make_manager(interval=INTERVAL)
        manager.register("job1", MagicMock(), interval_attr="k")
        manager._jobs["job1"].last_run_at = time.monotonic()
        assert manager._jobs["job1"].is_due() is False

    def test_job_is_due_after_interval_elapsed(self) -> None:
        """A job becomes due once the interval has passed since last_run_at."""
        manager = make_manager(interval=1.0)
        manager.register("job1", MagicMock(), interval_attr="k")
        manager._jobs["job1"].last_run_at = time.monotonic() - 2.0  # 2 s ago > 1 s interval
        assert manager._jobs["job1"].is_due() is True

    def test_job_is_never_due_when_interval_is_none(self) -> None:
        """is_due() returns False when interval() is None, even if last_run_at is 0."""
        manager = make_manager_none()
        manager.register("job1", MagicMock(), interval_attr="k")
        job = manager._jobs["job1"]
        # last_run_at == 0.0 would make any enabled job due immediately
        assert job.last_run_at == 0.0
        assert job.is_due() is False

    def test_job_is_never_due_when_disabled_regardless_of_last_run(self) -> None:
        """is_due() stays False for a disabled job even long after its last run."""
        manager = make_manager_none()
        manager.register("job1", MagicMock(), interval_attr="k")
        job = manager._jobs["job1"]
        job.last_run_at = time.monotonic() - 365 * 24 * 3600  # "ran" a year ago
        assert job.is_due() is False

    # ------------------------------------------------------------------
    # JobState.summary()
    # ------------------------------------------------------------------

    def test_summary_keys(self) -> None:
        """summary() returns all expected keys including interval_s."""
        manager = make_manager()
        manager.register("job1", MagicMock(), interval_attr="k")
        summary = manager._jobs["job1"].summary()
        assert set(summary.keys()) == {
            "name", "interval_attr", "interval_s", "last_run_at",
            "last_duration_s", "last_error", "run_count", "is_running",
        }

    def test_summary_interval_s_reflects_config(self) -> None:
        """summary()['interval_s'] matches the value returned by interval()."""
        manager = make_manager(interval=42.0)
        manager.register("job1", MagicMock(), interval_attr="k")
        assert manager._jobs["job1"].summary()["interval_s"] == 42.0

    def test_summary_interval_s_is_none_when_disabled(self) -> None:
        """summary()['interval_s'] is None when the job is disabled via config."""
        manager = make_manager_none()
        manager.register("job1", MagicMock(), interval_attr="k")
        assert manager._jobs["job1"].summary()["interval_s"] is None

    def test_summary_values(self) -> None:
        """summary() reflects the current JobState values."""
        manager = make_manager()
        manager.register("job1", MagicMock(), interval_attr="my/key")
        job = manager._jobs["job1"]
        job.last_run_at = 1234.5
        job.last_duration = 0.12345
        job.last_error = "oops"
        job.run_count = 3
        job.is_running = True
        s = job.summary()
        assert s["name"] == "job1"
        assert s["interval_attr"] == "my/key"
        assert s["last_run_at"] == 1234.5
        assert s["last_duration_s"] == 0.1235  # rounded to 4 dp
        assert s["last_error"] == "oops"
        assert s["run_count"] == 3
        assert s["is_running"] is True

    # ------------------------------------------------------------------
    # status()
    # ------------------------------------------------------------------

    def test_status_empty(self) -> None:
        """status() returns an empty list when no jobs are registered."""
        assert make_manager().status() == []

    def test_status_contains_all_jobs(self) -> None:
        """status() returns one entry per registered job."""
        manager = make_manager()
        manager.register("a", MagicMock(), interval_attr="k1")
        manager.register("b", MagicMock(), interval_attr="k2")
        names = {s["name"] for s in manager.status()}
        assert names == {"a", "b"}

    def test_status_shows_disabled_job(self) -> None:
        """status() includes disabled jobs with interval_s == None."""
        manager = make_manager_none()
        manager.register("disabled", MagicMock(), interval_attr="k")
        entries = manager.status()
        assert len(entries) == 1
        assert entries[0]["interval_s"] is None

    # ------------------------------------------------------------------
    # tick() -- job dispatch
    # ------------------------------------------------------------------

    @pytest.mark.asyncio
    async def test_tick_runs_due_sync_job(self) -> None:
        """tick() executes a sync job that is due."""
        manager = make_manager(interval=DUE_INTERVAL)
        func = MagicMock()
        manager.register("job1", func, interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)  # yield so ensure_future tasks are scheduled
        await asyncio.sleep(0)  # second yield ensures tasks have started
        await manager.shutdown()
        func.assert_called_once()

    @pytest.mark.asyncio
    async def test_tick_runs_due_async_job(self) -> None:
        """tick() executes an async job that is due."""
        manager = make_manager(interval=DUE_INTERVAL)
        func = AsyncMock()
        manager.register("job1", func, interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)  # yield so ensure_future tasks are scheduled
        await asyncio.sleep(0)  # second yield ensures tasks have started
        await manager.shutdown()
        func.assert_called_once()

    @pytest.mark.asyncio
    async def test_tick_skips_not_due_job(self) -> None:
        """tick() does not execute a job whose interval has not yet elapsed."""
        manager = make_manager(interval=9999.0)
        func = MagicMock()
        manager.register("job1", func, interval_attr="k")
        manager._jobs["job1"].last_run_at = time.monotonic()  # just ran
        await manager.tick()
        await asyncio.sleep(0)
        await manager.shutdown()
        func.assert_not_called()

    @pytest.mark.asyncio
    async def test_tick_skips_disabled_job(self) -> None:
        """tick() never executes a job whose interval is None, even if never run before."""
        manager = make_manager_none()
        func = MagicMock()
        manager.register("disabled", func, interval_attr="k")
        job = manager._jobs["disabled"]
        # last_run_at == 0.0 would fire any enabled job immediately
        assert job.last_run_at == 0.0
        await manager.tick()
        await asyncio.sleep(0)
        await manager.shutdown()
        func.assert_not_called()

    @pytest.mark.asyncio
    async def test_tick_skips_disabled_job_adds_no_task(self) -> None:
        """tick() adds no task to _running_tasks for a disabled job."""
        manager = make_manager_none()
        manager.register("disabled", AsyncMock(), interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)
        assert len(manager._running_tasks) == 0

    @pytest.mark.asyncio
    async def test_tick_enabled_and_disabled_jobs_mixed(self) -> None:
        """tick() fires enabled jobs and silently skips disabled ones in the same manager."""
        results: list[str] = []

        async def enabled_job() -> None:
            results.append("ran")

        manager = RetentionManager(
            lambda key: DUE_INTERVAL if key == "enabled/interval" else None,
            shutdown_timeout=5.0,
        )
        manager.register("enabled", enabled_job, interval_attr="enabled/interval")
        manager.register("disabled", AsyncMock(), interval_attr="disabled/interval")

        await manager.tick()
        await asyncio.sleep(0)
        await asyncio.sleep(0)
        await manager.shutdown()

        assert results == ["ran"], "Only the enabled job must have run"

    @pytest.mark.asyncio
    async def test_tick_skips_already_running_job(self) -> None:
        """tick() does not start a job that is still marked as running."""
        manager = make_manager(interval=DUE_INTERVAL)
        func = MagicMock()
        manager.register("job1", func, interval_attr="k")
        manager._jobs["job1"].is_running = True
        await manager.tick()
        await asyncio.sleep(0)
        await manager.shutdown()
        func.assert_not_called()

    @pytest.mark.asyncio
    async def test_tick_runs_multiple_jobs_concurrently(self) -> None:
        """tick() fires all due jobs as independent tasks."""
        manager = make_manager(interval=DUE_INTERVAL)
        results: list[str] = []

        async def job_a() -> None:
            results.append("a")

        async def job_b() -> None:
            results.append("b")

        manager.register("a", job_a, interval_attr="k")
        manager.register("b", job_b, interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)  # yield so ensure_future tasks are scheduled
        await asyncio.sleep(0)  # second yield ensures tasks have started
        await manager.shutdown()
        assert sorted(results) == ["a", "b"]

    @pytest.mark.asyncio
    async def test_tick_adds_tasks_to_running_set(self) -> None:
        """tick() adds a task to _running_tasks for each due job."""
        barrier = asyncio.Event()
        manager = make_manager(interval=DUE_INTERVAL)

        async def blocking_job() -> None:
            await barrier.wait()

        manager.register("job1", blocking_job, interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)  # yield so ensure_future tasks are scheduled
        await asyncio.sleep(0)  # second yield ensures tasks have started
        # Task is still running (barrier not set), so it must be in the set.
        assert len(manager._running_tasks) == 1
        barrier.set()
        await manager.shutdown()

    @pytest.mark.asyncio
    async def test_tick_removes_task_from_running_set_on_completion(self) -> None:
        """Completed tasks are removed from _running_tasks automatically."""
        manager = make_manager(interval=DUE_INTERVAL)
        manager.register("job1", AsyncMock(), interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)  # yield so ensure_future tasks are scheduled
        await manager.shutdown()
        assert len(manager._running_tasks) == 0

    # ------------------------------------------------------------------
    # shutdown()
    # ------------------------------------------------------------------

    @pytest.mark.asyncio
    async def test_shutdown_returns_immediately_when_no_tasks(self) -> None:
        """shutdown() completes without blocking when no tasks are running."""
        manager = make_manager()
        await manager.shutdown()  # must return promptly without raising

    @pytest.mark.asyncio
    async def test_shutdown_waits_for_in_flight_task(self) -> None:
        """shutdown() blocks until a long-running job task finishes."""
        barrier = asyncio.Event()
        finished: list[bool] = []
        manager = make_manager(interval=DUE_INTERVAL)

        async def slow_job() -> None:
            await barrier.wait()
            finished.append(True)

        manager.register("job1", slow_job, interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)  # yield so ensure_future tasks are scheduled
        await asyncio.sleep(0)  # second yield ensures tasks have started
        assert finished == []  # job still blocked
        barrier.set()
        await manager.shutdown()
        assert finished == [True]  # job completed before shutdown returned

    @pytest.mark.asyncio
    async def test_shutdown_waits_for_multiple_in_flight_tasks(self) -> None:
        """shutdown() waits for all concurrently running job tasks."""
        barrier = asyncio.Event()
        finished: list[str] = []
        manager = make_manager(interval=DUE_INTERVAL)

        async def slow_a() -> None:
            await barrier.wait()
            finished.append("a")

        async def slow_b() -> None:
            await barrier.wait()
            finished.append("b")

        manager.register("a", slow_a, interval_attr="k")
        manager.register("b", slow_b, interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)  # yield so ensure_future tasks are scheduled
        await asyncio.sleep(0)  # second yield ensures tasks have started
        assert finished == []
        barrier.set()
        await manager.shutdown()
        assert sorted(finished) == ["a", "b"]

    @pytest.mark.asyncio
    async def test_shutdown_does_not_raise_when_job_failed(self) -> None:
        """shutdown() completes without raising even if a job task raised an exception."""
        manager = make_manager(interval=DUE_INTERVAL)

        def failing_func() -> None:
            raise RuntimeError("job error")

        manager.register("job1", failing_func, interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)  # yield so ensure_future tasks are scheduled
        await manager.shutdown()  # must not raise

    @pytest.mark.asyncio
    async def test_shutdown_clears_running_tasks_set(self) -> None:
        """_running_tasks is empty after shutdown() completes."""
        manager = make_manager(interval=DUE_INTERVAL)
        manager.register("job1", AsyncMock(), interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)  # yield so ensure_future tasks are scheduled
        await manager.shutdown()
        assert manager._running_tasks == set()

    @pytest.mark.asyncio
    async def test_shutdown_timeout_returns_without_blocking(self) -> None:
        """shutdown() returns once the timeout elapses even if a job is still running."""
        stuck = asyncio.Event()  # never set -- job blocks forever
        manager = RetentionManager(make_config_getter(DUE_INTERVAL), shutdown_timeout=0.05)

        async def forever() -> None:
            await stuck.wait()

        manager.register("stuck", forever, interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)
        await asyncio.sleep(0)
        # Must return within the timeout, not block forever.
        await manager.shutdown()

    @pytest.mark.asyncio
    async def test_shutdown_timeout_logs_error_for_pending_jobs(self) -> None:
        """An error is logged listing jobs still running after the timeout."""
        stuck = asyncio.Event()
        manager = RetentionManager(make_config_getter(DUE_INTERVAL), shutdown_timeout=0.05)

        async def forever() -> None:
            await stuck.wait()

        manager.register("stuck_job", forever, interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)
        await asyncio.sleep(0)

        with patch.object(logger, "error") as mock_error:
            await manager.shutdown()
        assert mock_error.called, "Expected logger.error to be called on timeout"
        # All positional args joined: the stuck job name must appear.
        logged = str(mock_error.call_args_list)
        assert "stuck_job" in logged

    @pytest.mark.asyncio
    async def test_shutdown_timeout_clears_running_tasks_set(self) -> None:
        """_running_tasks is cleared even when the timeout elapses."""
        stuck = asyncio.Event()
        manager = RetentionManager(make_config_getter(DUE_INTERVAL), shutdown_timeout=0.05)

        async def forever() -> None:
            await stuck.wait()

        manager.register("stuck", forever, interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)
        await asyncio.sleep(0)
        await manager.shutdown()
        assert manager._running_tasks == set()

    @pytest.mark.asyncio
    async def test_shutdown_no_error_logged_when_all_finish_in_time(self) -> None:
        """No error is logged when all tasks complete within the timeout."""
        manager = RetentionManager(make_config_getter(DUE_INTERVAL), shutdown_timeout=5.0)
        manager.register("job1", AsyncMock(), interval_attr="k")
        await manager.tick()
        await asyncio.sleep(0)

        with patch.object(logger, "error") as mock_error:
            await manager.shutdown()
        mock_error.assert_not_called()

    def test_init_stores_shutdown_timeout(self) -> None:
        """The shutdown_timeout passed to __init__ is stored on the instance."""
        manager = RetentionManager(make_config_getter(), shutdown_timeout=99.0)
        assert manager._shutdown_timeout == 99.0

    def test_init_default_shutdown_timeout(self) -> None:
        """The default shutdown_timeout is 30 seconds."""
        manager = RetentionManager(make_config_getter())
        assert manager._shutdown_timeout == 30.0

    # ------------------------------------------------------------------
    # _run_job() -- state updates
    # ------------------------------------------------------------------

    @pytest.mark.asyncio
    async def test_run_job_increments_run_count(self) -> None:
        """_run_job() increments run_count after each execution."""
        manager = make_manager()
        manager.register("job1", MagicMock(), interval_attr="k")
        job = manager._jobs["job1"]
        await manager._run_job(job)
        await manager._run_job(job)
        assert job.run_count == 2

    @pytest.mark.asyncio
    async def test_run_job_updates_last_run_at(self) -> None:
        """_run_job() sets last_run_at to a recent monotonic timestamp."""
        manager = make_manager()
        manager.register("job1", MagicMock(), interval_attr="k")
        job = manager._jobs["job1"]
        before = time.monotonic()
        await manager._run_job(job)
        assert job.last_run_at >= before

    @pytest.mark.asyncio
    async def test_run_job_updates_last_duration(self) -> None:
        """_run_job() records a non-negative last_duration."""
        manager = make_manager()
        manager.register("job1", MagicMock(), interval_attr="k")
        job = manager._jobs["job1"]
        await manager._run_job(job)
        assert job.last_duration >= 0.0

    @pytest.mark.asyncio
    async def test_run_job_clears_is_running_on_success(self) -> None:
        """is_running is False after a successful job execution."""
        manager = make_manager()
        manager.register("job1", MagicMock(), interval_attr="k")
        job = manager._jobs["job1"]
        await manager._run_job(job)
        assert job.is_running is False

    @pytest.mark.asyncio
    async def test_run_job_clears_last_error_on_success(self) -> None:
        """last_error is set to None after a successful execution."""
        manager = make_manager()
        manager.register("job1", MagicMock(), interval_attr="k")
        job = manager._jobs["job1"]
        job.last_error = "stale error"
        await manager._run_job(job)
        assert job.last_error is None

    # ------------------------------------------------------------------
    # _run_job() -- exception handling
    # ------------------------------------------------------------------

    @pytest.mark.asyncio
    async def test_run_job_stores_exception_message(self) -> None:
        """last_error is set to the exception message when the job raises."""
        manager = make_manager()

        def failing_func() -> None:
            raise RuntimeError("boom")

        manager.register("job1", failing_func, interval_attr="k")
        job = manager._jobs["job1"]
        await manager._run_job(job)
        assert job.last_error == "boom"

    @pytest.mark.asyncio
    async def test_run_job_still_updates_state_after_exception(self) -> None:
        """run_count and last_run_at are updated even when the job raises."""
        manager = make_manager()

        def failing_func() -> None:
            raise RuntimeError("boom")

        manager.register("job1", failing_func, interval_attr="k")
        job = manager._jobs["job1"]
        before = time.monotonic()
        await manager._run_job(job)
        assert job.run_count == 1
        assert job.last_run_at >= before
        assert job.is_running is False

    @pytest.mark.asyncio
    async def test_run_job_calls_sync_on_exception_handler(self) -> None:
        """A sync on_exception handler is called with the raised exception."""
        manager = make_manager()
        handler = MagicMock()
        exc = RuntimeError("oops")

        def failing_func() -> None:
            raise exc

        manager.register("job1", failing_func, interval_attr="k", on_exception=handler)
        await manager._run_job(manager._jobs["job1"])
        handler.assert_called_once_with(exc)

    @pytest.mark.asyncio
    async def test_run_job_calls_async_on_exception_handler(self) -> None:
        """An async on_exception handler is awaited with the raised exception."""
        manager = make_manager()
        handler = AsyncMock()
        exc = RuntimeError("oops")

        def failing_func() -> None:
            raise exc

        manager.register("job1", failing_func, interval_attr="k", on_exception=handler)
        await manager._run_job(manager._jobs["job1"])
        handler.assert_called_once_with(exc)

    @pytest.mark.asyncio
    async def test_run_job_no_on_exception_handler_does_not_raise(self) -> None:
        """A failing job without on_exception does not propagate the exception."""
        manager = make_manager()

        def failing_func() -> None:
            raise RuntimeError("silent failure")

        manager.register("job1", failing_func, interval_attr="k")
        await manager._run_job(manager._jobs["job1"])  # must not raise

    @pytest.mark.asyncio
    async def test_run_job_on_exception_not_called_on_success(self) -> None:
        """on_exception is not called when the job succeeds."""
        manager = make_manager()
        handler = MagicMock()
        manager.register("job1", MagicMock(), interval_attr="k", on_exception=handler)
        await manager._run_job(manager._jobs["job1"])
        handler.assert_not_called()
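The behaviours the tests above pin down (a `None` interval means disabled and never falls back, `last_run_at == 0.0` makes a job immediately due, `run()` bumps the bookkeeping fields) amount to a small piece of scheduling logic. As a reviewer's note, here is a minimal self-contained sketch of that pattern; `JobSketch` is hypothetical and not the real `JobState`/`RetentionManager` implementation:

```python
import time
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class JobSketch:
    """Simplified stand-in for JobState: interval() returning None means disabled."""
    func: Callable[[], None]
    interval: Callable[[], Optional[float]]
    last_run_at: float = 0.0
    run_count: int = 0

    def is_due(self, now: Optional[float] = None) -> bool:
        ivl = self.interval()
        if ivl is None:  # disabled jobs are never due, no fallback
            return False
        now = time.monotonic() if now is None else now
        # last_run_at == 0.0 makes a never-run job due immediately
        return (now - self.last_run_at) >= ivl

    def run(self) -> None:
        self.last_run_at = time.monotonic()
        self.run_count += 1
        self.func()
```

A never-run job with a finite interval is due at once, a job that just ran is not, and a job whose interval getter yields `None` is never due regardless of `last_run_at`.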
@@ -2,11 +2,22 @@
 import subprocess
 import sys
 from pathlib import Path
 from typing import Optional, Union

 import pytest
+import yaml

-from akkudoktoreos.core.version import _version_calculate, _version_hash
+from akkudoktoreos.core.version import (
+    ALLOWED_SUFFIXES,
+    DIR_PACKAGE_ROOT,
+    EXCLUDED_DIR_PATTERNS,
+    EXCLUDED_FILES,
+    HashConfig,
+    _version_calculate,
+    _version_hash,
+    collect_files,
+    hash_files,
+)

 DIR_PROJECT_ROOT = Path(__file__).parent.parent
 GET_VERSION_SCRIPT = DIR_PROJECT_ROOT / "scripts" / "get_version.py"
@@ -14,11 +25,166 @@ BUMP_DEV_SCRIPT = DIR_PROJECT_ROOT / "scripts" / "bump_dev_version.py"
 UPDATE_SCRIPT = DIR_PROJECT_ROOT / "scripts" / "update_version.py"
+
+
+# --- Git helpers ---
+
+def get_git_tracked_files(repo_path: Path) -> Optional[set[Path]]:
+    """Get the set of all files tracked by git in the repository.
+
+    Returns None if not a git repository or the git command fails.
+    """
+    try:
+        result = subprocess.run(
+            ["git", "ls-files"],
+            cwd=repo_path,
+            capture_output=True,
+            text=True,
+            check=True,
+        )
+        # Convert relative paths to absolute paths
+        tracked_files = {
+            (repo_path / line.strip()).resolve()
+            for line in result.stdout.splitlines()
+            if line.strip()
+        }
+        return tracked_files
+    except (subprocess.CalledProcessError, FileNotFoundError):
+        return None
+
+
+def is_git_repository(path: Path) -> bool:
+    """Check if path is inside a git repository."""
+    try:
+        subprocess.run(
+            ["git", "rev-parse", "--git-dir"],
+            cwd=path,
+            capture_output=True,
+            check=True,
+        )
+        return True
+    except (subprocess.CalledProcessError, FileNotFoundError):
+        return False
+
+
+def get_git_root(path: Path) -> Optional[Path]:
+    """Get the root directory of the git repository containing path."""
+    try:
+        result = subprocess.run(
+            ["git", "rev-parse", "--show-toplevel"],
+            cwd=path,
+            capture_output=True,
+            text=True,
+            check=True,
+        )
+        return Path(result.stdout.strip())
+    except (subprocess.CalledProcessError, FileNotFoundError):
+        return None
+
+
+def check_files_in_git(
+    files: list[Path],
+    base_path: Optional[Path] = None,
+) -> tuple[list[Path], list[Path]]:
+    """Check which files are tracked by git.
+
+    Args:
+        files: List of files to check.
+        base_path: Base path to check for a git repository (uses the first file's parent if None).
+
+    Returns:
+        Tuple of (tracked_files, untracked_files).
+
+    Example:
+        >>> files = collect_files(config)
+        >>> tracked, untracked = check_files_in_git(files)
+        >>> if untracked:
+        ...     print(f"Warning: {len(untracked)} files not in git")
+    """
+    if not files:
+        return [], []
+
+    check_path = base_path or files[0].parent
+
+    assert is_git_repository(check_path)
+
+    git_root = get_git_root(check_path)
+    if not git_root:
+        return [], files
+
+    git_tracked = get_git_tracked_files(git_root)
+    if git_tracked is None:
+        return [], files
+
+    tracked = [f for f in files if f in git_tracked]
+    untracked = [f for f in files if f not in git_tracked]
+
+    return tracked, untracked
+
+
+# --- Helper to create test files ---
+def write_file(path: Path, content: str) -> Path:
+    path.write_text(content, encoding="utf-8")
+    return path
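As an aside, the order-preserving tracked/untracked split performed by `check_files_in_git` can be illustrated with plain lists and a set; the paths below are invented for the sketch, not taken from the repository:

```python
from pathlib import Path

# Hypothetical inputs; in the test suite `files` comes from collect_files()
# and `git_tracked` from parsing `git ls-files` output.
files = [Path("/repo/a.py"), Path("/repo/b.py"), Path("/repo/c.py")]
git_tracked = {Path("/repo/a.py"), Path("/repo/c.py")}

# The same comprehension pair as in check_files_in_git: membership tests
# against a set, preserving the original file order in both partitions.
tracked = [f for f in files if f in git_tracked]
untracked = [f for f in files if f not in git_tracked]

print(len(tracked), len(untracked))  # → 2 1
```

Using a set for the membership test keeps the split O(n) even for large file lists.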
+
+
+# --- Test version calculation ---
+
+def test_version_hash() -> None:
+    """Test which files are used for version hash calculation."""
+
+    watched_paths = [DIR_PACKAGE_ROOT]
+
+    # Collect files
+    config = HashConfig(
+        paths=watched_paths,
+        allowed_suffixes=ALLOWED_SUFFIXES,
+        excluded_dir_patterns=EXCLUDED_DIR_PATTERNS,
+        excluded_files=EXCLUDED_FILES,
+    )
+
+    files = collect_files(config)
+    hash_digest = hash_files(files)
+
+    # Check git
+    tracked_files, untracked_files = check_files_in_git(files, DIR_PACKAGE_ROOT)
+
+    if untracked_files:
+        error_msg = f"\n{'=' * 60}\n"
+        error_msg += "Version Hash Inspection\n"
+        error_msg += f"{'=' * 60}\n"
+        error_msg += f"Hash: {hash_digest}\n"
+        error_msg += f"Based on {len(files)} files:\n"
+
+        error_msg += f"OK: {len(tracked_files)} files tracked by git:\n"
+        for i, file_path in enumerate(files, 1):
+            try:
+                rel_path = file_path.relative_to(DIR_PACKAGE_ROOT)
+                status = ""
+                if file_path in untracked_files:
+                    continue
+                elif file_path in tracked_files:
+                    status = " [tracked]"
+                error_msg += f"  {i:3d}. {rel_path}{status}\n"
+            except ValueError:
+                error_msg += f"  {i:3d}. {file_path}\n"
+
+        error_msg += f"Warning: {len(untracked_files)} files not tracked by git:\n"
+        for i, file_path in enumerate(files, 1):
+            try:
+                rel_path = file_path.relative_to(DIR_PACKAGE_ROOT)
+                status = ""
+                if file_path in untracked_files:
+                    status = " [NOT IN GIT]"
+                elif file_path in tracked_files:
+                    continue
+                error_msg += f"  {i:3d}. {rel_path}{status}\n"
+            except ValueError:
+                error_msg += f"  {i:3d}. {file_path}\n"
+
+        error_msg += f"\n{'=' * 60}\n"
+
+        pytest.fail(error_msg)
+
+
+# --- Test version helpers ---
+
 def test_version_non_dev(monkeypatch):
@@ -38,7 +204,7 @@ def test_version_dev_precision_8(monkeypatch):

     result = _version_calculate()

-    # compute expected suffix
+    # Compute expected suffix using the same logic as _version_calculate
     hash_value = int(fake_hash, 16)
     expected_digits = str(hash_value % (10 ** 8)).zfill(8)

@@ -60,12 +226,17 @@ def test_version_dev_precision_8_different_hash(monkeypatch):

     result = _version_calculate()

+    # Compute expected suffix using the same logic as _version_calculate
+    hash_value = int(fake_hash, 16)
+    expected_digits = str(hash_value % (10 ** 8)).zfill(8)

     expected = f"0.2.0.dev{expected_digits}"

-    assert result == expected
+    assert len(expected_digits) == 8
+    assert result.startswith("0.2.0.dev")
+    assert result == expected
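The suffix arithmetic exercised by these tests can be run standalone; `dev_suffix` below is an illustrative helper for this sketch, not a function from the EOS codebase:

```python
import hashlib


def dev_suffix(hash_hex: str, precision: int = 8) -> str:
    """Reduce a hex digest to a fixed-width numeric dev-version suffix.

    Same arithmetic as in the tests above: interpret the digest as an
    integer, take it modulo 10**precision, and zero-pad the result.
    """
    hash_value = int(hash_hex, 16)
    return str(hash_value % (10 ** precision)).zfill(precision)


# A deterministic digest always yields the same fixed-width suffix.
digest = hashlib.sha256(b"example").hexdigest()
print(f"0.2.0.dev{dev_suffix(digest)}")  # always 8 digits after "dev"
```

The zero-padding matters: without `zfill`, hashes whose modulo happens to be small would produce shorter version strings.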


# --- 1️⃣ Test get_version.py ---

@@ -6,7 +6,7 @@ import pandas as pd
 import pytest

 from akkudoktoreos.core.cache import CacheFileStore
-from akkudoktoreos.core.ems import get_ems
+from akkudoktoreos.core.coreabc import get_ems
 from akkudoktoreos.prediction.weatherbrightsky import WeatherBrightSky
 from akkudoktoreos.utils.datetimeutil import to_datetime


@@ -10,7 +10,7 @@ import pytest
 from bs4 import BeautifulSoup

 from akkudoktoreos.core.cache import CacheFileStore
-from akkudoktoreos.core.ems import get_ems
+from akkudoktoreos.core.coreabc import get_ems
 from akkudoktoreos.prediction.weatherclearoutside import WeatherClearOutside
 from akkudoktoreos.utils.datetimeutil import compare_datetimes, to_datetime

@@ -1,11 +1,12 @@
 import json
 from pathlib import Path

+import numpy.testing as npt
 import pytest

-from akkudoktoreos.core.ems import get_ems
+from akkudoktoreos.core.coreabc import get_ems
 from akkudoktoreos.prediction.weatherimport import WeatherImport
-from akkudoktoreos.utils.datetimeutil import compare_datetimes, to_datetime
+from akkudoktoreos.utils.datetimeutil import compare_datetimes, to_datetime, to_duration

 DIR_TESTDATA = Path(__file__).absolute().parent.joinpath("testdata")

@@ -87,6 +88,7 @@ def test_invalid_provider(provider, config_eos, monkeypatch):
 )
 def test_import(provider, sample_import_1_json, start_datetime, from_file, config_eos):
     """Test fetching forecast from Import."""
+    key = "weather_temp_air"
     ems_eos = get_ems()
     ems_eos.set_start_datetime(to_datetime(start_datetime, in_timezone="Europe/Berlin"))
     if from_file:
@@ -95,7 +97,7 @@ def test_import(provider, sample_import_1_json, start_datetime, from_file, confi
     else:
         config_eos.weather.provider_settings.WeatherImport.import_file_path = None
         assert config_eos.weather.provider_settings.WeatherImport.import_file_path is None
-    provider.clear()
+    provider.delete_by_datetime(start_datetime=None, end_datetime=None)

     # Call the method
     provider.update_data()
@@ -104,16 +106,13 @@
     assert provider.ems_start_datetime is not None
     assert provider.total_hours is not None
     assert compare_datetimes(provider.ems_start_datetime, ems_eos.start_datetime).equal
-    values = sample_import_1_json["weather_temp_air"]
-    value_datetime_mapping = provider.import_datetimes(ems_eos.start_datetime, len(values))
-    for i, mapping in enumerate(value_datetime_mapping):
-        assert i < len(provider.records)
-        expected_datetime, expected_value_index = mapping
-        expected_value = values[expected_value_index]
-        result_datetime = provider.records[i].date_time
-        result_value = provider.records[i]["weather_temp_air"]
-
-        # print(f"{i}: Expected: {expected_datetime}:{expected_value}")
-        # print(f"{i}: Result: {result_datetime}:{result_value}")
-        assert compare_datetimes(result_datetime, expected_datetime).equal
-        assert result_value == expected_value
+    expected_values = sample_import_1_json[key]
+    result_values = provider.key_to_array(
+        key=key,
+        start_datetime=provider.ems_start_datetime,
+        end_datetime=provider.ems_start_datetime + to_duration(f"{len(expected_values)} hours"),
+        interval=to_duration("1 hour"),
+    )
+    # Allow for some difference due to value calculation on DST change
+    npt.assert_allclose(result_values, expected_values, rtol=0.001)
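For reference, the relative-tolerance criterion that `npt.assert_allclose(..., rtol=0.001)` applies can be approximated in pure Python (this sketch ignores `assert_allclose`'s small default absolute tolerance):

```python
def within_rtol(result, expected, rtol=0.001):
    """Element-wise check: |result - expected| <= rtol * |expected|."""
    return all(abs(r - e) <= rtol * abs(e) for r, e in zip(result, expected))


# A 0.05% deviation passes at rtol=0.001; a 1% deviation does not.
print(within_rtol([10.005, 20.01], [10.0, 20.0]))  # → True
print(within_rtol([10.1], [10.0]))                 # → False
```

This is why the DST-related calculation differences mentioned in the comment above do not fail the test: they stay well inside 0.1% of the expected values.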

tests/testdata/eos_config_andreas_now.json
@@ -1,6 +1,6 @@
 {
   "general": {
-    "data_folder_path": null,
+    "data_folder_path": "__ANY__",
     "data_output_subpath": "output",
     "latitude": 52.5,
     "longitude": 13.4