Add database support for measurements and historic prediction data. (#848)

The database supports backend selection, compression, incremental data load,
automatic data saving to storage, and automatic vacuum and compaction.

Make SQLite3 and LMDB database backends available.

Update tests for new interface conventions regarding data sequences,
data containers, and data providers. This includes the measurements provider and
the prediction providers.

Add database documentation.

The change includes several bug fixes that are not directly related to the database
implementation but are necessary to keep EOS running properly and to test and
document the changes.

* fix: config eos test setup

  Make the config_eos fixture generate a new instance of the config_eos singleton.
  Use correct env names to set up the data folder path.

* fix: startup with no config

  Make cache and measurements complain about a missing data path configuration
  without bailing out.

* fix: soc data preparation and usage for genetic optimization.

  Search for SoC measurements within 48 hours around the optimization start time.
  Only clamp the SoC to its maximum in the battery device simulation.
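
The clamping rule can be sketched as follows (function name and bounds are illustrative, not the actual EOS battery API):

```python
def clamp_soc(soc_percent: float, soc_max_percent: float = 100.0) -> float:
    # Clamp only at the upper bound; values below any minimum are left
    # untouched so the optimizer can still observe under-runs.
    return min(soc_percent, soc_max_percent)
```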

* fix: dashboard bailout on zero value solution display

  Do not use zero values to calculate the chart values adjustment for display.
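
The filtering idea, in a minimal sketch (the function name and the exact scaling rule are illustrative, not the dashboard's actual code):

```python
def chart_adjustment(values: list[float]) -> float:
    # Ignore zeros: a zero value would drive the scale factor to zero
    # (or cause a division by zero) and make the display bail out.
    nonzero = [abs(v) for v in values if v != 0.0]
    if not nonzero:
        return 1.0  # nothing to scale, use identity adjustment
    return 1.0 / max(nonzero)
```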

* fix: openapi generation script

  Make the script also replace data_folder_path and data_output_path to hide
  real (test) environment paths.

* feat: add make repeated task function

  make_repeated_task wraps a function so that it is repeated cyclically.
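
A minimal sketch of such a wrapper, assuming a stop event controls the loop (the signature is illustrative, not the actual EOS helper):

```python
import threading
from typing import Callable

def make_repeated_task(
    func: Callable[[], None], interval_s: float
) -> Callable[[threading.Event], None]:
    # Wrap func so it runs every interval_s seconds until the stop event is set.
    def task(stop: threading.Event) -> None:
        # Event.wait returns False on timeout, True once the event is set,
        # so the loop doubles as both the delay and the shutdown check.
        while not stop.wait(interval_s):
            func()
    return task
```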

* chore: remove index based data sequence access

  Index-based data sequence access does not make sense because the sequence can be
  backed by the database. The sequence is now purely time series data.
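
The shift from index-based to time-keyed access can be illustrated with a toy model (the class and its methods are hypothetical, not the EOS data sequence API):

```python
from bisect import bisect_left
from datetime import datetime

class TimeSeriesSequence:
    # Toy model of a time-keyed record sequence; records are addressed by
    # timestamp only, never by positional index.
    def __init__(self) -> None:
        self._times: list[datetime] = []
        self._values: list[float] = []

    def insert(self, when: datetime, value: float) -> None:
        # Keep the sequence sorted by timestamp regardless of insertion order.
        i = bisect_left(self._times, when)
        self._times.insert(i, when)
        self._values.insert(i, value)

    def at(self, when: datetime) -> float:
        i = bisect_left(self._times, when)
        if i == len(self._times) or self._times[i] != when:
            raise KeyError(when)
        return self._values[i]
```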

* chore: refactor eos startup to avoid module import startup

  Avoid module import initialisation, especially of the EOS configuration.
  Config mutation, singleton initialization, logging setup, argparse parsing,
  background task definitions depending on config, and environment-dependent
  behavior are now done at function startup.

* chore: introduce retention manager

  A single long-running background task that owns the scheduling of all periodic
  server-maintenance jobs (cache cleanup, DB autosave, …)
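
A sketch of such a manager, assuming a polling design (class, method names, and intervals are illustrative, not the EOS implementation):

```python
import time
from typing import Callable

class RetentionManager:
    # One loop owns all periodic maintenance jobs; the single background task
    # calls tick() repeatedly instead of spawning one thread per job.
    def __init__(self) -> None:
        self._jobs: list[list] = []  # entries: [interval_s, job, next_due]

    def add_job(self, interval_s: float, job: Callable[[], None]) -> None:
        # Due immediately on the first tick, then every interval_s seconds.
        self._jobs.append([interval_s, job, time.monotonic()])

    def tick(self) -> None:
        # Run every job whose deadline has passed and reschedule it.
        now = time.monotonic()
        for entry in self._jobs:
            interval_s, job, next_due = entry
            if now >= next_due:
                job()
                entry[2] = now + interval_s
```

A server would register jobs such as cache cleanup and DB autosave (with whatever intervals the configuration provides) and drive tick() from one long-running background task.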

* chore: canonicalize timezone name for UTC

  Timezone names that are semantically identical to UTC are canonicalized to UTC.
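
One way to express this is an alias table (the alias set below is an illustrative subset, not the exact list EOS uses):

```python
# Names that denote the same zone as UTC map to the canonical name "UTC".
_UTC_ALIASES = {"UTC", "Etc/UTC", "Etc/Universal", "Universal", "Zulu", "GMT", "Etc/GMT"}

def canonicalize_timezone(name: str) -> str:
    # Leave every non-UTC-equivalent name unchanged.
    return "UTC" if name in _UTC_ALIASES else name
```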

* chore: extend config file migration for default value handling

  Extend the config file migration to handle values that are None or missing,
  which will invoke default value generation in the new config file. Also
  adapt the test to handle this situation.

* chore: extend datetime util test cases

* chore: make version test check for untracked files

  Check for files that are not tracked by git. The version calculation will be
  wrong if these files are not committed.
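
Such a check could parse the output of `git status --porcelain --untracked-files=all`, where untracked entries carry the "??" status code (the helper name is illustrative, not the actual test code):

```python
def parse_untracked(porcelain: str) -> list[str]:
    # `git status --porcelain` marks untracked files with the "??" code,
    # followed by a space and the path.
    return [line[3:] for line in porcelain.splitlines() if line.startswith("??")]
```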

* chore: bump pandas to 3.0.0

  Pandas 3.0 now infers the appropriate resolution (a.k.a. unit) for the output
  dtype, which may become datetime64[us] (it was previously ns). Numeric dtype
  detection is also stricter now, which requires a different detection for
  numerics.

* chore: bump pydantic-settings to 2.12.0

  pydantic-settings 2.12.0 behaves differently under pytest. The tests were
  adapted and a workaround was introduced. ConfigEOS was also adapted to allow
  fine-grained initialization control, so that certain settings such as file
  settings can be switched off during tests.

* chore: remove scikit-learn from dependencies

  scikit-learn is not strictly necessary as long as we have scipy.

* chore: add documentation mode guarding for sphinx autosummary

  Sphinx autosummary executes functions. Prevent exceptions when running in pure
  documentation mode.
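
The detection mirrors the documentation_mode() check in the diff below; the guarded() helper around it is a hypothetical illustration of how a computed value can fall back safely:

```python
import sys

def documentation_mode() -> bool:
    # Sphinx imports modules while building docs; autosummary then executes
    # functions that may depend on a runtime environment that is not there.
    return "sphinx" in sys.modules or getattr(sys, "_called_from_sphinx", False)

def guarded(value_factory, fallback):
    # Hypothetical helper: return the fallback instead of raising in doc mode,
    # and also swallow exceptions so doc builds never abort.
    if documentation_mode():
        return fallback
    try:
        return value_factory()
    except Exception:
        return fallback
```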

* chore: adapt docker-build CI workflow to stricter GitHub handling

Signed-off-by: Bobby Noelte <b0661n0e17e@gmail.com>
Commit 6498c7dc32 (parent 5f66591d21), authored by Bobby Noelte, committed via GitHub on 2026-02-22 14:12:42 +01:00.
92 changed files with 12710 additions and 2173 deletions.


@@ -1,4 +1,4 @@
from typing import TYPE_CHECKING, Optional, Union
from typing import Optional, Union
from pydantic import Field, computed_field, field_validator
@@ -10,9 +10,6 @@ from akkudoktoreos.adapter.homeassistant import (
from akkudoktoreos.adapter.nodered import NodeREDAdapter, NodeREDAdapterCommonSettings
from akkudoktoreos.config.configabc import SettingsBaseModel
if TYPE_CHECKING:
adapter_providers: list[str]
class AdapterCommonSettings(SettingsBaseModel):
"""Adapter Configuration."""
@@ -38,8 +35,9 @@ class AdapterCommonSettings(SettingsBaseModel):
@computed_field # type: ignore[prop-decorator]
@property
def providers(self) -> list[str]:
"""Available electricity price provider ids."""
return adapter_providers
"""Available adapter provider ids."""
adapter_provider_ids = [provider.provider_id() for provider in adapter_providers()]
return adapter_provider_ids
# Validators
@field_validator("provider", mode="after")
@@ -47,48 +45,39 @@ class AdapterCommonSettings(SettingsBaseModel):
def validate_provider(cls, value: Optional[list[str]]) -> Optional[list[str]]:
if value is None:
return value
adapter_provider_ids = [provider.provider_id() for provider in adapter_providers()]
for provider_id in value:
if provider_id not in adapter_providers:
if provider_id not in adapter_provider_ids:
raise ValueError(
f"Provider '{value}' is not a valid adapter provider: {adapter_providers}."
f"Provider '{value}' is not a valid adapter provider: {adapter_provider_ids}."
)
return value
class Adapter(AdapterContainer):
"""Adapter container to manage multiple adapter providers.
Attributes:
providers (List[Union[PVForecastAkkudoktor, WeatherBrightSky, WeatherClearOutside]]):
List of forecast provider instances, in the order they should be updated.
Providers may depend on updates from others.
"""
providers: list[
Union[
HomeAssistantAdapter,
NodeREDAdapter,
]
] = Field(default_factory=list, json_schema_extra={"description": "List of adapter providers"})
# Initialize adapter providers, all are singletons.
homeassistant_adapter = HomeAssistantAdapter()
nodered_adapter = NodeREDAdapter()
def get_adapter() -> Adapter:
"""Gets the EOS adapter data."""
# Initialize Adapter instance with providers in the required order
# Care for provider sequence as providers may rely on others to be updated before.
adapter = Adapter(
providers=[
homeassistant_adapter,
nodered_adapter,
def adapter_providers() -> list[Union["HomeAssistantAdapter", "NodeREDAdapter"]]:
"""Return list of adapter providers."""
global homeassistant_adapter, nodered_adapter
return [
homeassistant_adapter,
nodered_adapter,
]
class Adapter(AdapterContainer):
"""Adapter container to manage multiple adapter providers."""
providers: list[
Union[
HomeAssistantAdapter,
NodeREDAdapter,
]
] = Field(
default_factory=adapter_providers,
json_schema_extra={"description": "List of adapter providers"},
)
return adapter
# Valid adapter providers
adapter_providers = [provider.provider_id() for provider in get_adapter().providers]


@@ -10,12 +10,12 @@ from pydantic import Field, computed_field, field_validator
from akkudoktoreos.adapter.adapterabc import AdapterProvider
from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.core.coreabc import get_adapter
from akkudoktoreos.core.emplan import (
DDBCInstruction,
FRBCInstruction,
)
from akkudoktoreos.core.ems import EnergyManagementStage
from akkudoktoreos.devices.devices import get_resource_registry
from akkudoktoreos.utils.datetimeutil import to_datetime
# Supervisor API endpoint and token (injected automatically in add-on container)
@@ -29,8 +29,6 @@ HEADERS = {
HOMEASSISTANT_ENTITY_ID_PREFIX = "sensor.eos_"
resources_eos = get_resource_registry()
class HomeAssistantAdapterCommonSettings(SettingsBaseModel):
"""Common settings for the home assistant adapter."""
@@ -146,8 +144,6 @@ class HomeAssistantAdapterCommonSettings(SettingsBaseModel):
def homeassistant_entity_ids(self) -> list[str]:
"""Entity IDs available at Home Assistant."""
try:
from akkudoktoreos.adapter.adapter import get_adapter
adapter_eos = get_adapter()
result = adapter_eos.provider_by_id("HomeAssistant").get_homeassistant_entity_ids()
except:
@@ -159,8 +155,6 @@ class HomeAssistantAdapterCommonSettings(SettingsBaseModel):
def eos_solution_entity_ids(self) -> list[str]:
"""Entity IDs for optimization solution available at EOS."""
try:
from akkudoktoreos.adapter.adapter import get_adapter
adapter_eos = get_adapter()
result = adapter_eos.provider_by_id("HomeAssistant").get_eos_solution_entity_ids()
except:
@@ -172,8 +166,6 @@ class HomeAssistantAdapterCommonSettings(SettingsBaseModel):
def eos_device_instruction_entity_ids(self) -> list[str]:
"""Entity IDs for energy management instructions available at EOS."""
try:
from akkudoktoreos.adapter.adapter import get_adapter
adapter_eos = get_adapter()
result = adapter_eos.provider_by_id(
"HomeAssistant"


@@ -11,6 +11,7 @@ Key features:
import json
import os
import sys
import tempfile
from pathlib import Path
from typing import Any, ClassVar, Optional, Type, Union
@@ -26,6 +27,7 @@ from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.config.configmigrate import migrate_config_data, migrate_config_file
from akkudoktoreos.core.cachesettings import CacheCommonSettings
from akkudoktoreos.core.coreabc import SingletonMixin
from akkudoktoreos.core.database import DatabaseCommonSettings
from akkudoktoreos.core.decorators import classproperty
from akkudoktoreos.core.emsettings import (
EnergyManagementCommonSettings,
@@ -65,16 +67,66 @@ def get_absolute_path(
return None
def is_home_assistant_addon() -> bool:
"""Detect Home Assistant add-on environment.
Home Assistant sets this environment variable automatically.
"""
return "HASSIO_TOKEN" in os.environ or "SUPERVISOR_TOKEN" in os.environ
def default_data_folder_path() -> Path:
"""Provide default data folder path.
1. From EOS_DATA_DIR env
2. From EOS_DIR env
3. From platform specific default path
4. Current working directory
Note:
When running as Home Assistant add-on the path is fixed to /data.
"""
if is_home_assistant_addon():
return Path("/data")
# 1. From EOS_DATA_DIR env
if env_dir := os.getenv(ConfigEOS.EOS_DATA_DIR):
try:
data_dir = Path(env_dir).resolve()
data_dir.mkdir(parents=True, exist_ok=True)
return data_dir
except Exception as e:
logger.warning(f"Could not setup data folder {data_dir}: {e}")
# 2. From EOS_DIR env
if env_dir := os.getenv(ConfigEOS.EOS_DIR):
try:
data_dir = Path(env_dir).resolve()
data_dir.mkdir(parents=True, exist_ok=True)
return data_dir
except Exception as e:
logger.warning(f"Could not setup data folder {data_dir}: {e}")
# 3. From platform specific default path
try:
data_dir = Path(user_data_dir(ConfigEOS.APP_NAME, ConfigEOS.APP_AUTHOR))
if data_dir is not None:
data_dir.mkdir(parents=True, exist_ok=True)
return data_dir
except Exception as e:
logger.warning(f"Could not setup data folder {data_dir}: {e}")
# 4. Current working directory
return Path.cwd()
class GeneralSettings(SettingsBaseModel):
"""General settings."""
_config_folder_path: ClassVar[Optional[Path]] = None
_config_file_path: ClassVar[Optional[Path]] = None
# Detect Home Assistant add-on environment
# Home Assistant sets this environment variable automatically
_home_assistant_addon: ClassVar[bool] = (
"HASSIO_TOKEN" in os.environ or "SUPERVISOR_TOKEN" in os.environ
home_assistant_addon: bool = Field(
default_factory=is_home_assistant_addon,
json_schema_extra={"description": "EOS is running as home assistant add-on."},
exclude=True,
)
version: str = Field(
@@ -84,17 +136,16 @@ class GeneralSettings(SettingsBaseModel):
},
)
data_folder_path: Optional[Path] = Field(
default=None,
data_folder_path: Path = Field(
default_factory=default_data_folder_path,
json_schema_extra={
"description": "Path to EOS data directory.",
"examples": [None, "/home/eos/data"],
"description": "Path to EOS data folder.",
},
)
data_output_subpath: Optional[Path] = Field(
default="output",
json_schema_extra={"description": "Sub-path for the EOS output data directory."},
json_schema_extra={"description": "Sub-path for the EOS output data folder."},
)
latitude: Optional[float] = Field(
@@ -134,19 +185,13 @@ class GeneralSettings(SettingsBaseModel):
@property
def config_folder_path(self) -> Optional[Path]:
"""Path to EOS configuration directory."""
return self._config_folder_path
return self.config._config_file_path.parent
@computed_field # type: ignore[prop-decorator]
@property
def config_file_path(self) -> Optional[Path]:
"""Path to EOS configuration file."""
return self._config_file_path
@computed_field # type: ignore[prop-decorator]
@property
def home_assistant_addon(self) -> bool:
"""EOS is running as home assistant add-on."""
return self._home_assistant_addon
return self.config._config_file_path
compatible_versions: ClassVar[list[str]] = [__version__]
@@ -164,17 +209,19 @@ class GeneralSettings(SettingsBaseModel):
@field_validator("data_folder_path", mode="after")
@classmethod
def validate_data_folder_path(cls, value: Optional[Union[str, Path]]) -> Optional[Path]:
def validate_data_folder_path(cls, value: Optional[Union[str, Path]]) -> Path:
"""Ensure dir is available."""
if cls._home_assistant_addon:
if is_home_assistant_addon():
# Force to home assistant add-on /data directory
return Path("/data")
if value is None:
return None
return default_data_folder_path()
if isinstance(value, str):
value = Path(value)
value.resolve()
if not value.is_dir():
try:
value.resolve()
value.mkdir(parents=True, exist_ok=True)
except Exception:
raise ValueError(f"Data folder path '{value}' is not a directory.")
return value
@@ -191,6 +238,9 @@ class SettingsEOS(pydantic_settings.BaseSettings, PydanticModelNestedValueMixin)
cache: Optional[CacheCommonSettings] = Field(
default=None, json_schema_extra={"description": "Cache Settings"}
)
database: Optional[DatabaseCommonSettings] = Field(
default=None, json_schema_extra={"description": "Database Settings"}
)
ems: Optional[EnergyManagementCommonSettings] = Field(
default=None, json_schema_extra={"description": "Energy Management Settings"}
)
@@ -248,22 +298,23 @@ class SettingsEOSDefaults(SettingsEOS):
Used by ConfigEOS instance to make all fields available.
"""
general: GeneralSettings = GeneralSettings()
cache: CacheCommonSettings = CacheCommonSettings()
ems: EnergyManagementCommonSettings = EnergyManagementCommonSettings()
logging: LoggingCommonSettings = LoggingCommonSettings()
devices: DevicesCommonSettings = DevicesCommonSettings()
measurement: MeasurementCommonSettings = MeasurementCommonSettings()
optimization: OptimizationCommonSettings = OptimizationCommonSettings()
prediction: PredictionCommonSettings = PredictionCommonSettings()
elecprice: ElecPriceCommonSettings = ElecPriceCommonSettings()
feedintariff: FeedInTariffCommonSettings = FeedInTariffCommonSettings()
load: LoadCommonSettings = LoadCommonSettings()
pvforecast: PVForecastCommonSettings = PVForecastCommonSettings()
weather: WeatherCommonSettings = WeatherCommonSettings()
server: ServerCommonSettings = ServerCommonSettings()
utils: UtilsCommonSettings = UtilsCommonSettings()
adapter: AdapterCommonSettings = AdapterCommonSettings()
general: GeneralSettings = Field(default_factory=GeneralSettings)
cache: CacheCommonSettings = Field(default_factory=CacheCommonSettings)
database: DatabaseCommonSettings = Field(default_factory=DatabaseCommonSettings)
ems: EnergyManagementCommonSettings = Field(default_factory=EnergyManagementCommonSettings)
logging: LoggingCommonSettings = Field(default_factory=LoggingCommonSettings)
devices: DevicesCommonSettings = Field(default_factory=DevicesCommonSettings)
measurement: MeasurementCommonSettings = Field(default_factory=MeasurementCommonSettings)
optimization: OptimizationCommonSettings = Field(default_factory=OptimizationCommonSettings)
prediction: PredictionCommonSettings = Field(default_factory=PredictionCommonSettings)
elecprice: ElecPriceCommonSettings = Field(default_factory=ElecPriceCommonSettings)
feedintariff: FeedInTariffCommonSettings = Field(default_factory=FeedInTariffCommonSettings)
load: LoadCommonSettings = Field(default_factory=LoadCommonSettings)
pvforecast: PVForecastCommonSettings = Field(default_factory=PVForecastCommonSettings)
weather: WeatherCommonSettings = Field(default_factory=WeatherCommonSettings)
server: ServerCommonSettings = Field(default_factory=ServerCommonSettings)
utils: UtilsCommonSettings = Field(default_factory=UtilsCommonSettings)
adapter: AdapterCommonSettings = Field(default_factory=AdapterCommonSettings)
def __hash__(self) -> int:
# Just for usage in configmigrate, finally overwritten when used by ConfigEOS.
@@ -300,10 +351,6 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
the same instance, which contains the most up-to-date configuration. Modifying the configuration
in one part of the application reflects across all references to this class.
Attributes:
config_folder_path (Optional[Path]): Path to the configuration directory.
config_file_path (Optional[Path]): Path to the configuration file.
Raises:
FileNotFoundError: If no configuration file is found, and creating a default configuration fails.
@@ -323,6 +370,15 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
EOS_CONFIG_DIR: ClassVar[str] = "EOS_CONFIG_DIR"
ENCODING: ClassVar[str] = "UTF-8"
CONFIG_FILE_NAME: ClassVar[str] = "EOS.config.json"
_init_config_eos: ClassVar[dict[str, bool]] = {
"with_init_settings": True,
"with_env_settings": True,
"with_dotenv_settings": True,
"with_file_settings": True,
"with_file_secret_settings": True,
}
_config_file_path: ClassVar[Optional[Path]] = None
_force_documentation_mode = False
def __hash__(self) -> int:
# ConfigEOS is a singleton
@@ -377,31 +433,156 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
configuration directory cannot be created.
- It ensures that a fallback to a default configuration file is always possible.
"""
# Ensure we know and have the config folder path and the config file
config_file = cls._setup_config_file()
def lazy_config_file_settings() -> dict:
"""Config file settings.
This function runs at **instance creation**, not class definition. Ensures if ConfigEOS
is recreated this function is run.
"""
config_file_path, exists = cls._get_config_file_path()
if not exists:
# Create minimum config file
config_minimum_content = '{ "general": { "version": "' + __version__ + '" } }'
if config_file_path.is_relative_to(ConfigEOS.package_root_path):
# Never write into package directory
error_msg = (
f"Could not create minimum config file. "
f"Config file path '{config_file_path}' is within package root "
f"'{ConfigEOS.package_root_path}'"
)
logger.error(error_msg)
raise RuntimeError(error_msg)
try:
config_file_path.parent.mkdir(parents=True, exist_ok=True)
config_file_path.write_text(config_minimum_content, encoding="utf-8")
except Exception as exc:
# Create minimum config in temporary config directory as last resort
error_msg = (
f"Could not create minimum config file in {config_file_path.parent}: {exc}"
)
logger.error(error_msg)
temp_dir = Path(tempfile.mkdtemp())
info_msg = f"Using temporary config directory {temp_dir}"
logger.info(info_msg)
config_file_path = temp_dir / config_file_path.name
config_file_path.write_text(config_minimum_content, encoding="utf-8")
# Remember for other lazy settings and computed_field
cls._config_file_path = config_file_path
return {}
def lazy_data_folder_path_settings() -> dict:
"""Data folder path settings.
This function runs at **instance creation**, not class definition. Ensures if ConfigEOS
is recreated this function is run.
"""
# Updates path to the data directory.
data_folder_settings = {
"general": {
"data_folder_path": default_data_folder_path(),
},
}
return data_folder_settings
def lazy_init_settings() -> dict:
"""Init settings.
This function runs at **instance creation**, not class definition. Ensures if ConfigEOS
is recreated this function is run.
"""
if not cls._init_config_eos.get("with_init_settings", True):
logger.debug("Config initialisation with init settings is disabled.")
return {}
settings = init_settings()
return settings
def lazy_env_settings() -> dict:
"""Env settings.
This function runs at **instance creation**, not class definition. Ensures if ConfigEOS
is recreated this function is run.
"""
if not cls._init_config_eos.get("with_env_settings", True):
logger.debug("Config initialisation with env settings is disabled.")
return {}
return env_settings()
def lazy_dotenv_settings() -> dict:
"""Dotenv settings.
This function runs at **instance creation**, not class definition. Ensures if ConfigEOS
is recreated this function is run.
"""
if not cls._init_config_eos.get("with_dotenv_settings", True):
logger.debug("Config initialisation with dotenv settings is disabled.")
return {}
return dotenv_settings()
def lazy_file_settings() -> dict:
"""File settings.
This function runs at **instance creation**, not class definition. Ensures if ConfigEOS
is recreated this function is run.
Ensures the config file exists and creates a backup if necessary.
"""
if not cls._init_config_eos.get("with_file_settings", True):
logger.debug("Config initialisation with file settings is disabled.")
return {}
config_file = cls._config_file_path # provided by lazy_config_file_settings
if config_file is None:
# This should not happen
raise RuntimeError("Config file path not set.")
try:
backup_file = config_file.with_suffix(f".{to_datetime(as_string='YYYYMMDDHHmmss')}")
if migrate_config_file(config_file, backup_file):
# If the config file does have the correct version add it as settings source
settings = pydantic_settings.JsonConfigSettingsSource(
settings_cls, json_file=config_file
)()
except Exception as ex:
logger.error(
f"Error reading config file '{config_file}' (falling back to default config): {ex}"
)
settings = {}
return settings
def lazy_file_secret_settings() -> dict:
"""File secret settings.
This function runs at **instance creation**, not class definition. Ensures if ConfigEOS
is recreated this function is run.
"""
if not cls._init_config_eos.get("with_file_secret_settings", True):
logger.debug("Config initialisation with file secret settings is disabled.")
return {}
return file_secret_settings()
# All the settings sources in priority sequence
# The settings are all lazyly evaluated at instance creation time to allow for
# runtime configuration.
setting_sources = [
init_settings,
env_settings,
dotenv_settings,
lazy_config_file_settings, # Prio high
lazy_init_settings,
lazy_env_settings,
lazy_dotenv_settings,
lazy_file_settings,
lazy_data_folder_path_settings,
lazy_file_secret_settings, # Prio low
]
# Append file settings to sources
file_settings: Optional[pydantic_settings.JsonConfigSettingsSource] = None
try:
backup_file = config_file.with_suffix(f".{to_datetime(as_string='YYYYMMDDHHmmss')}")
if migrate_config_file(config_file, backup_file):
# If the config file does have the correct version add it as settings source
file_settings = pydantic_settings.JsonConfigSettingsSource(
settings_cls, json_file=config_file
)
setting_sources.append(file_settings)
except Exception as ex:
logger.error(
f"Error reading config file '{config_file}' (falling back to default config): {ex}"
)
return tuple(setting_sources)
@classproperty
@@ -409,30 +590,41 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
"""Compute the package root path."""
return Path(__file__).parent.parent.resolve()
@classmethod
def documentation_mode(cls) -> bool:
"""Are we running in documentation mode.
Some checks may be relaxed to allow for proper documentation execution.
"""
# Detect if Sphinx is importing this module
is_sphinx = "sphinx" in sys.modules or getattr(sys, "_called_from_sphinx", False)
return cls._force_documentation_mode or is_sphinx
def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Initializes the singleton ConfigEOS instance.
Configuration data is loaded from a configuration file or a default one is created if none
exists.
"""
logger.debug("Config init with parameters {} {}", args, kwargs)
# Check for singleton guard
if hasattr(self, "_initialized"):
logger.debug("Config init called again with parameters {} {}", args, kwargs)
return
logger.debug("Config init with parameters {} {}", args, kwargs)
self._setup(self, *args, **kwargs)
def _setup(self, *args: Any, **kwargs: Any) -> None:
"""Re-initialize global settings."""
logger.debug("Config setup with parameters {} {}", args, kwargs)
# Assure settings base knows the singleton EOS configuration
SettingsBaseModel.config = self
# (Re-)load settings - call base class init
SettingsEOSDefaults.__init__(self, *args, **kwargs)
# Init config file and data folder pathes
self._setup_config_file()
self._update_data_folder_path()
self._initialized = True
logger.debug("Config setup:\n{}", self)
logger.debug(f"Config setup:\n{self}")
def merge_settings(self, settings: SettingsEOS) -> None:
"""Merges the provided settings into the global settings for EOS, with optional overwrite.
@@ -562,48 +754,6 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
return result
def _update_data_folder_path(self) -> None:
"""Updates path to the data directory."""
# From Settings
if data_dir := self.general.data_folder_path:
try:
data_dir.mkdir(parents=True, exist_ok=True)
self.general.data_folder_path = data_dir
return
except Exception as e:
logger.warning(f"Could not setup data dir {data_dir}: {e}")
# From EOS_DATA_DIR env
if env_dir := os.getenv(self.EOS_DATA_DIR):
try:
data_dir = Path(env_dir).resolve()
data_dir.mkdir(parents=True, exist_ok=True)
self.general.data_folder_path = data_dir
return
except Exception as e:
logger.warning(f"Could not setup data dir {data_dir}: {e}")
# From EOS_DIR env
if env_dir := os.getenv(self.EOS_DIR):
try:
data_dir = Path(env_dir).resolve()
data_dir.mkdir(parents=True, exist_ok=True)
self.general.data_folder_path = data_dir
return
except Exception as e:
logger.warning(f"Could not setup data dir {data_dir}: {e}")
# From platform specific default path
try:
data_dir = Path(user_data_dir(self.APP_NAME, self.APP_AUTHOR))
if data_dir is not None:
data_dir.mkdir(parents=True, exist_ok=True)
self.general.data_folder_path = data_dir
return
except Exception as e:
logger.warning(f"Could not setup data dir {data_dir}: {e}")
# Current working directory
data_dir = Path.cwd()
logger.warning(f"Using data dir {data_dir}")
self.general.data_folder_path = data_dir
@classmethod
def _get_config_file_path(cls) -> tuple[Path, bool]:
"""Find a valid configuration file or return the desired path for a new config file.
@@ -618,32 +768,80 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
Returns:
tuple[Path, bool]: The path to the configuration file and if there is already a config file there
"""
if GeneralSettings._home_assistant_addon:
if is_home_assistant_addon():
# Only /data is persistent for home assistant add-on
cfile = Path("/data/config") / cls.CONFIG_FILE_NAME
logger.debug(f"Config file forced to: '{cfile}'")
return cfile, cfile.exists()
config_dirs = []
env_eos_dir = os.getenv(cls.EOS_DIR)
logger.debug(f"Environment EOS_DIR: '{env_eos_dir}'")
env_eos_config_dir = os.getenv(cls.EOS_CONFIG_DIR)
logger.debug(f"Environment EOS_CONFIG_DIR: '{env_eos_config_dir}'")
env_config_dir = get_absolute_path(env_eos_dir, env_eos_config_dir)
logger.debug(f"Resulting environment config dir: '{env_config_dir}'")
# 1. Directory specified by EOS_CONFIG_DIR
config_dir: Optional[Union[Path, str]] = os.getenv(cls.EOS_CONFIG_DIR)
if config_dir:
logger.debug(f"Environment EOS_CONFIG_DIR: '{config_dir}'")
config_dir = Path(config_dir).resolve()
if config_dir.exists():
config_dirs.append(config_dir)
else:
logger.info(f"Environment EOS_CONFIG_DIR: '{config_dir}' does not exist.")
if env_config_dir is not None:
config_dirs.append(env_config_dir.resolve())
config_dirs.append(Path(user_config_dir(cls.APP_NAME, cls.APP_AUTHOR)))
config_dirs.append(Path.cwd())
# 2. Directory specified by EOS_DIR / EOS_CONFIG_DIR
eos_dir = os.getenv(cls.EOS_DIR)
eos_config_dir = os.getenv(cls.EOS_CONFIG_DIR)
if eos_dir and eos_config_dir:
logger.debug(f"Environment EOS_DIR/EOS_CONFIG_DIR: '{eos_dir}/{eos_config_dir}'")
config_dir = get_absolute_path(eos_dir, eos_config_dir)
if config_dir:
config_dir = Path(config_dir).resolve()
if config_dir.exists():
config_dirs.append(config_dir)
else:
logger.info(
f"Environment EOS_DIR/EOS_CONFIG_DIR: '{config_dir}' does not exist."
)
else:
logger.debug(
f"Environment EOS_DIR/EOS_CONFIG_DIR: '{eos_dir}/{eos_config_dir}' not a valid path"
)
# 3. Directory specified by EOS_DIR
config_dir = os.getenv(cls.EOS_DIR)
if config_dir:
logger.debug(f"Environment EOS_DIR: '{config_dir}'")
config_dir = Path(config_dir).resolve()
if config_dir.exists():
config_dirs.append(config_dir)
else:
logger.info(f"Environment EOS_DIR: '{config_dir}' does not exist.")
# 4. User configuration directory
config_dir = Path(user_config_dir(cls.APP_NAME, cls.APP_AUTHOR)).resolve()
logger.debug(f"User config dir: '{config_dir}'")
if config_dir.exists():
config_dirs.append(config_dir)
else:
logger.info(f"User config dir: '{config_dir}' does not exist.")
# 5. Current working directory
config_dir = Path.cwd()
logger.debug(f"Current working dir: '{config_dir}'")
if config_dir.exists():
config_dirs.append(config_dir)
else:
logger.info(f"Current working dir: '{config_dir}' does not exist.")
# Search for file
for cdir in config_dirs:
cfile = cdir.joinpath(cls.CONFIG_FILE_NAME)
if cfile.exists():
logger.debug(f"Found config file: '{cfile}'")
return cfile, True
return config_dirs[0].joinpath(cls.CONFIG_FILE_NAME), False
# Return highest priority directory with standard file name appended
default_config_file = config_dirs[0].joinpath(cls.CONFIG_FILE_NAME)
logger.debug(f"No config file found. Defaulting to: '{default_config_file}'")
return default_config_file, False
@classmethod
def _setup_config_file(cls) -> Path:
@@ -714,8 +912,3 @@ class ConfigEOS(SingletonMixin, SettingsEOSDefaults):
The first non None value in priority order is taken.
"""
self._setup(**self.model_dump())
def get_config() -> ConfigEOS:
"""Gets the EOS configuration data."""
return ConfigEOS()


@@ -3,7 +3,7 @@
import json
import shutil
from pathlib import Path
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Set, Tuple, Union
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Set, Tuple, Union, cast
from loguru import logger
@@ -13,19 +13,33 @@ if TYPE_CHECKING:
# There are circular dependencies - only import here for type checking
from akkudoktoreos.config.config import SettingsEOSDefaults
_KEEP_DEFAULT = object()
# -----------------------------
# Global migration map constant
# -----------------------------
# key: old JSON path, value: either
# - str (new model path)
# - tuple[str, Callable[[Any], Any]] (new path + transform)
# - _KEEP_DEFAULT (keep new default if old value is none or not given)
# - None (drop)
MIGRATION_MAP: Dict[str, Union[str, Tuple[str, Callable[[Any], Any]], None]] = {
MIGRATION_MAP: Dict[
str,
Union[
str, # simple rename
Tuple[str, Callable[[Any], Any]], # rename + transform
Tuple[str, object], # rename + _KEEP_DEFAULT
Tuple[str, object, Callable[[Any], Any]], # rename + _KEEP_DEFAULT + transform
None, # drop
],
] = {
# 0.2.0.dev -> 0.2.0.dev
"adapter/homeassistant/optimization_solution_entity_ids": (
"adapter/homeassistant/solution_entity_ids",
lambda v: v if isinstance(v, list) else None,
),
"general/data_folder_path": ("general/data_folder_path", _KEEP_DEFAULT),
# 0.2.0 -> 0.2.0+dev
"elecprice/provider_settings/ElecPriceImport/import_file_path": "elecprice/elecpriceimport/import_file_path",
"elecprice/provider_settings/ElecPriceImport/import_json": "elecprice/elecpriceimport/import_json",
@@ -91,20 +105,32 @@ def migrate_config_data(config_data: Dict[str, Any]) -> "SettingsEOSDefaults":
for old_path, mapping in MIGRATION_MAP.items():
new_path = None
transform = None
keep_default = False
if mapping is None:
migrated_source_paths.add(old_path.strip("/"))
logger.debug(f"🗑️ Migration map: dropping '{old_path}'")
continue
if isinstance(mapping, tuple):
new_path, transform = mapping
new_path = mapping[0]
for m in mapping[1:]:
if m is _KEEP_DEFAULT:
keep_default = True
elif callable(m):
transform = cast(Callable[[Any], Any], m)
else:
new_path = mapping
old_value = _get_json_nested_value(config_data, old_path)
if old_value is None:
migrated_source_paths.add(old_path.strip("/"))
mapped_count += 1
logger.debug(f"✅ Migrated mapped '{old_path}' → 'None'")
if keep_default:
migrated_source_paths.add(old_path.strip("/"))
mapped_count += 1
logger.debug(f"✅ Migrated mapped '{old_path}' → keeping new default")
else:
migrated_source_paths.add(old_path.strip("/"))
mapped_count += 1
logger.debug(f"✅ Migrated mapped '{old_path}' → 'None'")
continue
try:


@@ -13,6 +13,7 @@ import os
import pickle
import tempfile
import threading
from pathlib import Path
from typing import (
IO,
Any,
@@ -236,6 +237,24 @@ Param = ParamSpec("Param")
RetType = TypeVar("RetType")
def cache_clear(clear_all: Optional[bool] = None) -> None:
"""Clear the cache: all files when clear_all is set, otherwise only expired ones."""
if clear_all:
CacheFileStore().clear(clear_all=True)
else:
CacheFileStore().clear(before_datetime=to_datetime())
def cache_load() -> dict:
"""Load cache from cachefilestore.json."""
return CacheFileStore().load_store()
def cache_save() -> dict:
"""Save cache to cachefilestore.json."""
return CacheFileStore().save_store()
class CacheFileRecord(PydanticBaseModel):
cache_file: Any = Field(
..., json_schema_extra={"description": "File descriptor of the cache file."}
@@ -284,9 +303,16 @@ class CacheFileStore(ConfigMixin, SingletonMixin):
return
self._store: Dict[str, CacheFileRecord] = {}
self._store_lock = threading.RLock()
self._store_file = self.config.cache.path().joinpath("cachefilestore.json")
super().__init__(*args, **kwargs)
def _store_file(self) -> Optional[Path]:
"""Get file to store the cache."""
try:
return self.config.cache.path().joinpath("cachefilestore.json")
except Exception:
logger.error("Path for cache files missing. Please configure!")
return None
def _until_datetime_by_options(
self,
until_date: Optional[Any] = None,
@@ -496,10 +522,18 @@ class CacheFileStore(ConfigMixin, SingletonMixin):
# File already available
cache_file_obj = cache_item.cache_file
else:
self.config.cache.path().mkdir(parents=True, exist_ok=True)
cache_file_obj = tempfile.NamedTemporaryFile(
mode=mode, delete=delete, suffix=suffix, dir=self.config.cache.path()
)
# Create cache file
store_file = self._store_file()
if store_file:
store_file.parent.mkdir(parents=True, exist_ok=True)
cache_file_obj = tempfile.NamedTemporaryFile(
mode=mode, delete=delete, suffix=suffix, dir=store_file.parent
)
else:
# Cache storage not configured, use temporary path
cache_file_obj = tempfile.NamedTemporaryFile(
mode=mode, delete=delete, suffix=suffix
)
self._store[cache_file_key] = CacheFileRecord(
cache_file=cache_file_obj,
until_datetime=until_datetime_dt,
@@ -766,10 +800,14 @@ class CacheFileStore(ConfigMixin, SingletonMixin):
Returns:
data (dict): cache management data that was saved.
"""
store_file = self._store_file()
if store_file is None:
return {}
with self._store_lock:
self._store_file.parent.mkdir(parents=True, exist_ok=True)
store_file.parent.mkdir(parents=True, exist_ok=True)
store_to_save = self.current_store()
with self._store_file.open("w", encoding="utf-8", newline="\n") as f:
with store_file.open("w", encoding="utf-8", newline="\n") as f:
try:
json.dump(store_to_save, f, indent=4)
except Exception as e:
@@ -782,18 +820,22 @@ class CacheFileStore(ConfigMixin, SingletonMixin):
Returns:
data (dict): cache management data that was loaded.
"""
store_file = self._store_file()
if store_file is None:
return {}
with self._store_lock:
store_loaded = {}
if self._store_file.exists():
with self._store_file.open("r", encoding="utf-8", newline=None) as f:
if store_file.exists():
with store_file.open("r", encoding="utf-8", newline=None) as f:
try:
store_to_load = json.load(f)
except Exception as e:
logger.error(
f"Error loading cache file store: {e}\n"
+ f"Deleting the store file {self._store_file}."
+ f"Deleting the store file {store_file}."
)
self._store_file.unlink()
store_file.unlink()
return {}
for key, record in store_to_load.items():
if record is None:
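The `_store_file()` change above turns a hard failure at startup into a graceful fallback: when no cache path is configured, the store methods log an error and return early, and cache files land in the system temp directory instead. A minimal sketch of that fallback pattern (`open_cache_file` is a hypothetical name, not the EOS API):

```python
import logging
import tempfile
from pathlib import Path
from typing import IO, Optional

logger = logging.getLogger(__name__)

def open_cache_file(store_dir: Optional[Path], suffix: str = ".cache") -> IO:
    """Open a cache file in the configured directory, or fall back to the
    system temp directory when no cache path is configured."""
    if store_dir is not None:
        # Configured path: make sure the directory exists, then place the file there.
        store_dir.mkdir(parents=True, exist_ok=True)
        return tempfile.NamedTemporaryFile(
            mode="w+", suffix=suffix, dir=store_dir, delete=False
        )
    # Cache storage not configured: complain, but keep running.
    logger.error("Path for cache files missing. Please configure!")
    return tempfile.NamedTemporaryFile(mode="w+", suffix=suffix, delete=False)
```

The point of the pattern is that a missing configuration degrades the cache to non-persistent behavior rather than aborting EOS startup.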


@@ -20,7 +20,8 @@ class CacheCommonSettings(SettingsBaseModel):
)
cleanup_interval: float = Field(
default=5 * 60,
default=5.0 * 60,
ge=5.0,
json_schema_extra={"description": "Interval in seconds for EOS file cache cleanup."},
)


@@ -1,28 +1,76 @@
"""Abstract and base classes for EOS core.
This module provides foundational classes for handling configuration and prediction functionality
in EOS. It includes base classes that provide convenient access to global
configuration and prediction instances through properties.
Classes:
- ConfigMixin: Mixin class for managing and accessing global configuration.
- PredictionMixin: Mixin class for managing and accessing global prediction data.
- SingletonMixin: Mixin class to create singletons.
This module provides foundational classes and functions to access global EOS resources.
"""
from __future__ import (
annotations, # use types lazy as strings, helps to prevent circular dependencies
)
import threading
from typing import Any, ClassVar, Dict, Optional, Type
from typing import TYPE_CHECKING, Any, ClassVar, Dict, Optional, Type, Union
from loguru import logger
from akkudoktoreos.core.decorators import classproperty
from akkudoktoreos.utils.datetimeutil import DateTime
adapter_eos: Any = None
config_eos: Any = None
measurement_eos: Any = None
prediction_eos: Any = None
ems_eos: Any = None
if TYPE_CHECKING:
# Prevent circular dependencies
from akkudoktoreos.adapter.adapter import Adapter
from akkudoktoreos.config.config import ConfigEOS
from akkudoktoreos.core.database import Database
from akkudoktoreos.core.ems import EnergyManagement
from akkudoktoreos.devices.devices import ResourceRegistry
from akkudoktoreos.measurement.measurement import Measurement
from akkudoktoreos.prediction.prediction import Prediction
# Module level singleton cache
_adapter_eos: Optional[Adapter] = None
_config_eos: Optional[ConfigEOS] = None
_ems_eos: Optional[EnergyManagement] = None
_database_eos: Optional[Database] = None
_measurement_eos: Optional[Measurement] = None
_prediction_eos: Optional[Prediction] = None
_resource_registry_eos: Optional[ResourceRegistry] = None
def get_adapter(init: bool = False) -> Adapter:
"""Retrieve the singleton EOS Adapter instance.
This function provides access to the global EOS Adapter instance. The Adapter
object is created on first access if `init` is True. If the instance is
accessed before initialization and `init` is False, a RuntimeError is raised.
Args:
init (bool): If True, create the Adapter instance if it does not exist.
Default is False.
Returns:
Adapter: The global EOS Adapter instance.
Raises:
RuntimeError: If accessed before initialization with `init=False`.
Usage:
.. code-block:: python
adapter = get_adapter(init=True) # Initialize and retrieve
adapter.do_something()
"""
global _adapter_eos
if _adapter_eos is None:
from akkudoktoreos.config.config import ConfigEOS
if not init and not ConfigEOS.documentation_mode():
raise RuntimeError("Adapter access before init.")
from akkudoktoreos.adapter.adapter import Adapter
_adapter_eos = Adapter()
return _adapter_eos
class AdapterMixin:
@@ -49,20 +97,84 @@ class AdapterMixin:
"""
@classproperty
def adapter(cls) -> Any:
def adapter(cls) -> Adapter:
"""Convenience class method/attribute to retrieve the EOS adapters.
Returns:
Adapter: The adapters.
"""
# avoid circular dependency at import time
global adapter_eos
if adapter_eos is None:
from akkudoktoreos.adapter.adapter import get_adapter
return get_adapter()
adapter_eos = get_adapter()
return adapter_eos
def get_config(init: Union[bool, dict[str, bool]] = False) -> ConfigEOS:
"""Retrieve the singleton EOS configuration instance.
This function provides controlled access to the global EOS configuration
singleton (`ConfigEOS`). The configuration is created lazily on first
access and can be initialized with a configurable set of settings sources.
By default, accessing the configuration without prior initialization
raises a `RuntimeError`. Passing `init=True` or an initialization
configuration dictionary enables creation of the singleton.
Args:
init (Union[bool, dict[str, bool]]):
Controls initialization of the configuration.
- ``False`` (default): Do not initialize. Raises ``RuntimeError``
if the configuration does not yet exist.
- ``True``: Initialize the configuration using default
initialization behavior (all settings sources enabled).
- ``dict[str, bool]``: Initialize the configuration with fine-grained
control over which settings sources are enabled. Missing keys
default to ``True``.
Supported keys include:
- ``with_init_settings``
- ``with_env_settings``
- ``with_dotenv_settings``
- ``with_file_settings``
- ``with_file_secret_settings``
Returns:
ConfigEOS: The global EOS configuration singleton instance.
Raises:
RuntimeError:
If the configuration has not been initialized and ``init`` is
``False``.
Usage:
.. code-block:: python
# Initialize with default behavior (all sources enabled)
config = get_config(init=True)
# Initialize with explicit source control
config = get_config(init={
"with_init_settings": True,
"with_env_settings": True,
"with_dotenv_settings": True,
"with_file_settings": False,
"with_file_secret_settings": False,
})
# Access existing configuration
host = get_config().server.host
"""
global _config_eos
if _config_eos is None:
from akkudoktoreos.config.config import ConfigEOS
if not init and not ConfigEOS.documentation_mode():
raise RuntimeError("Config access before init.")
if isinstance(init, dict):
ConfigEOS._init_config_eos = init
_config_eos = ConfigEOS()
return _config_eos
class ConfigMixin:
@@ -89,20 +201,51 @@ class ConfigMixin:
"""
@classproperty
def config(cls) -> Any:
def config(cls) -> ConfigEOS:
"""Convenience class method/attribute to retrieve the EOS configuration data.
Returns:
ConfigEOS: The configuration.
"""
# avoid circular dependency at import time
global config_eos
if config_eos is None:
from akkudoktoreos.config.config import get_config
return get_config()
config_eos = get_config()
return config_eos
def get_measurement(init: bool = False) -> Measurement:
"""Retrieve the singleton EOS Measurement instance.
This function provides access to the global EOS Measurement object. The
Measurement instance is created on first access if `init` is True. If the
instance is accessed before initialization and `init` is False, a RuntimeError
is raised.
Args:
init (bool): If True, create the Measurement instance if it does not exist.
Default is False.
Returns:
Measurement: The global EOS Measurement instance.
Raises:
RuntimeError: If accessed before initialization with `init=False`.
Usage:
.. code-block:: python
measurement = get_measurement(init=True) # Initialize and retrieve
measurement.read_sensor_data()
"""
global _measurement_eos
if _measurement_eos is None:
from akkudoktoreos.config.config import ConfigEOS
if not init and not ConfigEOS.documentation_mode():
raise RuntimeError("Measurement access before init.")
from akkudoktoreos.measurement.measurement import Measurement
_measurement_eos = Measurement()
return _measurement_eos
class MeasurementMixin:
@@ -130,20 +273,51 @@ class MeasurementMixin:
"""
@classproperty
def measurement(cls) -> Any:
def measurement(cls) -> Measurement:
"""Convenience class method/attribute to retrieve the EOS measurement data.
Returns:
Measurement: The measurement.
"""
# avoid circular dependency at import time
global measurement_eos
if measurement_eos is None:
from akkudoktoreos.measurement.measurement import get_measurement
return get_measurement()
measurement_eos = get_measurement()
return measurement_eos
def get_prediction(init: bool = False) -> Prediction:
"""Retrieve the singleton EOS Prediction instance.
This function provides access to the global EOS Prediction object. The
Prediction instance is created on first access if `init` is True. If the
instance is accessed before initialization and `init` is False, a RuntimeError
is raised.
Args:
init (bool): If True, create the Prediction instance if it does not exist.
Default is False.
Returns:
Prediction: The global EOS Prediction instance.
Raises:
RuntimeError: If accessed before initialization with `init=False`.
Usage:
.. code-block:: python
prediction = get_prediction(init=True) # Initialize and retrieve
prediction.forecast_next_hour()
"""
global _prediction_eos
if _prediction_eos is None:
from akkudoktoreos.config.config import ConfigEOS
if not init and not ConfigEOS.documentation_mode():
raise RuntimeError("Prediction access before init.")
from akkudoktoreos.prediction.prediction import Prediction
_prediction_eos = Prediction()
return _prediction_eos
class PredictionMixin:
@@ -171,20 +345,50 @@ class PredictionMixin:
"""
@classproperty
def prediction(cls) -> Any:
def prediction(cls) -> Prediction:
"""Convenience class method/attribute to retrieve the EOS prediction data.
Returns:
Prediction: The prediction.
"""
# avoid circular dependency at import time
global prediction_eos
if prediction_eos is None:
from akkudoktoreos.prediction.prediction import get_prediction
return get_prediction()
prediction_eos = get_prediction()
return prediction_eos
def get_ems(init: bool = False) -> EnergyManagement:
"""Retrieve the singleton EOS Energy Management System (EMS) instance.
This function provides access to the global EOS EMS instance. The instance
is created on first access if `init` is True. If the instance is accessed
before initialization and `init` is False, a RuntimeError is raised.
Args:
init (bool): If True, create the EMS instance if it does not exist.
Default is False.
Returns:
EnergyManagement: The global EOS EMS instance.
Raises:
RuntimeError: If accessed before initialization with `init=False`.
Usage:
.. code-block:: python
ems = get_ems(init=True) # Initialize and retrieve
ems.start_energy_management_loop()
"""
global _ems_eos
if _ems_eos is None:
from akkudoktoreos.config.config import ConfigEOS
if not init and not ConfigEOS.documentation_mode():
raise RuntimeError("EMS access before init.")
from akkudoktoreos.core.ems import EnergyManagement
_ems_eos = EnergyManagement()
return _ems_eos
class EnergyManagementSystemMixin:
@@ -200,7 +404,7 @@ class EnergyManagementSystemMixin:
global EnergyManagementSystem instance lazily to avoid import-time circular dependencies.
Attributes:
ems (EnergyManagementSystem): Property to access the global EOS energy management system.
ems (EnergyManagement): Property to access the global EOS energy management system.
Example:
.. code-block:: python
@@ -213,20 +417,120 @@ class EnergyManagementSystemMixin:
"""
@classproperty
def ems(cls) -> Any:
def ems(cls) -> EnergyManagement:
"""Convenience class method/attribute to retrieve the EOS energy management system.
Returns:
EnergyManagement: The energy management system.
"""
# avoid circular dependency at import time
global ems_eos
if ems_eos is None:
from akkudoktoreos.core.ems import get_ems
return get_ems()
ems_eos = get_ems()
return ems_eos
def get_database(init: bool = False) -> Database:
"""Retrieve the singleton EOS database instance.
This function provides access to the global EOS Database instance. The
instance is created on first access if `init` is True. If the instance is
accessed before initialization and `init` is False, a RuntimeError is raised.
Args:
init (bool): If True, create the Database instance if it does not exist.
Default is False.
Returns:
Database: The global EOS database instance.
Raises:
RuntimeError: If accessed before initialization with `init=False`.
Usage:
.. code-block:: python
db = get_database(init=True) # Initialize and retrieve
db.insert_measurement(...)
"""
global _database_eos
if _database_eos is None:
from akkudoktoreos.config.config import ConfigEOS
if not init and not ConfigEOS.documentation_mode():
raise RuntimeError("Database access before init.")
from akkudoktoreos.core.database import Database
_database_eos = Database()
return _database_eos
class DatabaseMixin:
"""Mixin class for managing EOS database access.
This class serves as a foundational component for EOS-related classes requiring access
to the EOS database. It provides a `database` property that dynamically retrieves
the database instance.
Usage:
Subclass this base class to gain access to the `database` attribute, which retrieves the
global database instance lazily to avoid import-time circular dependencies.
Attributes:
database (Database): Property to access the global EOS database.
Example:
.. code-block:: python
class MyOptimizationClass(DatabaseMixin):
def store_something(self):
db = self.database
"""
@classproperty
def database(cls) -> Database:
"""Convenience class method/attribute to retrieve the EOS database.
Returns:
Database: The database.
"""
return get_database()
def get_resource_registry(init: bool = False) -> ResourceRegistry:
"""Retrieve the singleton EOS Resource Registry instance.
This function provides access to the global EOS ResourceRegistry instance.
The instance is created on first access if `init` is True. If the instance
is accessed before initialization and `init` is False, a RuntimeError is raised.
Args:
init (bool): If True, create the ResourceRegistry instance if it does not exist.
Default is False.
Returns:
ResourceRegistry: The global EOS Resource Registry instance.
Raises:
RuntimeError: If accessed before initialization with `init=False`.
Usage:
.. code-block:: python
registry = get_resource_registry(init=True) # Initialize and retrieve
registry.register_device(my_device)
"""
global _resource_registry_eos
if _resource_registry_eos is None:
from akkudoktoreos.config.config import ConfigEOS
if not init and not ConfigEOS.documentation_mode():
raise RuntimeError("ResourceRegistry access before init.")
from akkudoktoreos.devices.devices import ResourceRegistry
_resource_registry_eos = ResourceRegistry()
return _resource_registry_eos
class StartMixin(EnergyManagementSystemMixin):
@@ -243,14 +547,7 @@ class StartMixin(EnergyManagementSystemMixin):
Returns:
DateTime: The starting datetime of the current or latest energy management, or None.
"""
# avoid circular dependency at import time
global ems_eos
if ems_eos is None:
from akkudoktoreos.core.ems import get_ems
ems_eos = get_ems()
return ems_eos.start_datetime
return get_ems().start_datetime
class SingletonMixin:
@@ -332,3 +629,43 @@ class SingletonMixin:
if not hasattr(self, "_initialized"):
super().__init__(*args, **kwargs)
self._initialized = True
_singletons_init_running: bool = False
def singletons_init() -> None:
"""Initialize the singletons for adapter, config, database, EMS, measurement, prediction and resource registry."""
# Prevent recursive calling
global \
_singletons_init_running, \
_adapter_eos, \
_config_eos, \
_database_eos, \
_measurement_eos, \
_prediction_eos, \
_ems_eos, \
_resource_registry_eos
if _singletons_init_running:
return
_singletons_init_running = True
try:
if _config_eos is None:
get_config(init=True)
if _adapter_eos is None:
get_adapter(init=True)
if _database_eos is None:
get_database(init=True)
if _ems_eos is None:
get_ems(init=True)
if _measurement_eos is None:
get_measurement(init=True)
if _prediction_eos is None:
get_prediction(init=True)
if _resource_registry_eos is None:
get_resource_registry(init=True)
finally:
_singletons_init_running = False
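The accessor functions above all share one shape: a module-level cache, a `RuntimeError` on access before initialization, and lazy creation guarded by `init=True`. That shape can be sketched generically (a hypothetical `LazySingleton` helper, not part of EOS, which also omits the documentation-mode escape hatch):

```python
from typing import Any, Callable, Optional

class LazySingleton:
    """Lazy accessor: create the instance on first access only when
    init=True, otherwise raise to surface use-before-init bugs early."""

    def __init__(self, factory: Callable[[], Any], name: str) -> None:
        self._factory = factory   # deferred import/constructor goes here
        self._name = name
        self._instance: Optional[Any] = None

    def get(self, init: bool = False) -> Any:
        if self._instance is None:
            if not init:
                raise RuntimeError(f"{self._name} access before init.")
            self._instance = self._factory()
        return self._instance
```

Deferring the `factory` call until first access is what avoids the import-time initialization the refactor removes.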

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -24,7 +24,7 @@ from akkudoktoreos.optimization.genetic.geneticparams import (
)
from akkudoktoreos.optimization.genetic.geneticsolution import GeneticSolution
from akkudoktoreos.optimization.optimization import OptimizationSolution
from akkudoktoreos.utils.datetimeutil import DateTime, compare_datetimes, to_datetime
from akkudoktoreos.utils.datetimeutil import DateTime, to_datetime
# The executor to execute the CPU heavy energy management run
executor = ThreadPoolExecutor(max_workers=1)
@@ -44,6 +44,15 @@ class EnergyManagementStage(Enum):
return self.value
async def ems_manage_energy() -> None:
"""Repeating task for managing energy.
This task should be executed by the server regularly
to ensure proper energy management.
"""
await EnergyManagement().run()
class EnergyManagement(
SingletonMixin, ConfigMixin, PredictionMixin, AdapterMixin, PydanticBaseModel
):
@@ -286,6 +295,9 @@ class EnergyManagement(
error_msg = f"Adapter update failed - phase {cls._stage}: {e}\n{trace}"
logger.error(error_msg)
# Remember energy run datetime.
EnergyManagement._last_run_datetime = to_datetime()
# energy management run finished
cls._stage = EnergyManagementStage.IDLE
@@ -346,73 +358,3 @@ class EnergyManagement(
)
# Run optimization in background thread to avoid blocking event loop
await loop.run_in_executor(executor, func)
async def manage_energy(self) -> None:
"""Repeating task for managing energy.
This task should be executed by the server regularly (e.g., every 10 seconds)
to ensure proper energy management. Configuration changes to the energy management interval
will only take effect if this task is executed.
- Initializes and runs the energy management for the first time if it has never been run
before.
- If the energy management interval is not configured or invalid (NaN), the task will not
trigger any repeated energy management runs.
- Compares the current time with the last run time and runs the energy management if the
interval has elapsed.
- Logs any exceptions that occur during the initialization or execution of the energy
management.
Note: The task maintains the interval even if some intervals are missed.
"""
current_datetime = to_datetime()
interval = self.config.ems.interval # interval maybe changed in between
if EnergyManagement._last_run_datetime is None:
# Never run before
try:
# Remember energy run datetime.
EnergyManagement._last_run_datetime = current_datetime
# Try to run a first energy management. May fail due to config incomplete.
await self.run()
except Exception as e:
trace = "".join(traceback.TracebackException.from_exception(e).format())
message = f"EOS init: {e}\n{trace}"
logger.error(message)
return
if interval is None or interval == float("nan"):
# No Repetition
return
if (
compare_datetimes(current_datetime, EnergyManagement._last_run_datetime).time_diff
< interval
):
# Wait for next run
return
try:
await self.run()
except Exception as e:
trace = "".join(traceback.TracebackException.from_exception(e).format())
message = f"EOS run: {e}\n{trace}"
logger.error(message)
# Remember the energy management run - keep on interval even if we missed some intervals
while (
compare_datetimes(current_datetime, EnergyManagement._last_run_datetime).time_diff
>= interval
):
EnergyManagement._last_run_datetime = EnergyManagement._last_run_datetime.add(
seconds=interval
)
# Initialize the Energy Management System, it is a singleton.
ems = EnergyManagement()
def get_ems() -> EnergyManagement:
"""Gets the EOS Energy Management System."""
return ems
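The removed `manage_energy` loop advanced the last-run timestamp by whole intervals ("keep on interval even if we missed some intervals"), keeping the schedule aligned to the original grid. That catch-up step can be sketched in isolation (a hypothetical helper, with timestamps as plain seconds):

```python
def advance_last_run(last_run: float, now: float, interval: float) -> float:
    """Advance the last-run timestamp by whole intervals so the schedule
    stays on the original grid even when some runs were missed."""
    while now - last_run >= interval:
        last_run += interval
    return last_run
```

With a 300 s interval and a last run at t=0, catching up at t=950 lands on t=900 rather than t=950, so the next run is due at t=1200 as originally scheduled.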


@@ -29,10 +29,11 @@ class EnergyManagementCommonSettings(SettingsBaseModel):
},
)
interval: Optional[float] = Field(
default=None,
interval: float = Field(
default=300.0,
ge=60.0,
json_schema_extra={
"description": "Intervall in seconds between EOS energy management runs.",
"description": "Interval between EOS energy management runs [seconds].",
"examples": ["300"],
},
)


@@ -47,7 +47,12 @@ from pydantic import (
)
from pydantic.fields import ComputedFieldInfo, FieldInfo
from akkudoktoreos.utils.datetimeutil import DateTime, to_datetime, to_duration
from akkudoktoreos.utils.datetimeutil import (
DateTime,
to_datetime,
to_duration,
to_timezone,
)
# Global weakref dictionary to hold external state per model instance
# Used as a workaround for PrivateAttr not working in e.g. Mixin Classes
@@ -683,13 +688,8 @@ class PydanticBaseModel(PydanticModelNestedValueMixin, BaseModel):
self, *args: Any, include_computed_fields: bool = True, **kwargs: Any
) -> dict[str, Any]:
"""Custom dump method to serialize computed fields by default."""
result = super().model_dump(*args, **kwargs)
if not include_computed_fields:
for computed_field_name in self.__class__.model_computed_fields:
result.pop(computed_field_name, None)
return result
kwargs.setdefault("exclude_computed_fields", not include_computed_fields)
return super().model_dump(*args, **kwargs)
def to_dict(self) -> dict:
"""Convert this PredictionRecord instance to a dictionary representation.
@@ -1061,8 +1061,8 @@ class PydanticDateTimeDataFrame(PydanticBaseModel):
valid_base_dtypes = {"int64", "float64", "bool", "object", "string"}
def is_valid_dtype(dtype: str) -> bool:
# Allow timezone-aware or naive datetime64
if dtype.startswith("datetime64[ns"):
# Allow timezone-aware or naive datetime64 - pandas 3.0 also has us
if dtype.startswith("datetime64[ns") or dtype.startswith("datetime64[us"):
return True
return dtype in valid_base_dtypes
@@ -1102,7 +1102,7 @@ class PydanticDateTimeDataFrame(PydanticBaseModel):
# Apply dtypes
for col, dtype in self.dtypes.items():
if dtype.startswith("datetime64[ns"):
if dtype.startswith("datetime64[ns") or dtype.startswith("datetime64[us"):
df[col] = pd.to_datetime(df[col], utc=True)
elif dtype in dtype_mapping.keys():
df[col] = df[col].astype(dtype_mapping[dtype])
@@ -1111,20 +1111,59 @@ class PydanticDateTimeDataFrame(PydanticBaseModel):
return df
@classmethod
def _detect_data_tz(cls, df: pd.DataFrame) -> Optional[str]:
"""Detect timezone of pandas data."""
# Index first (strongest signal)
if isinstance(df.index, pd.DatetimeIndex) and df.index.tz is not None:
return str(df.index.tz)
# Then datetime columns
for col in df.columns:
if is_datetime64_any_dtype(df[col]):
tz = getattr(df[col].dt, "tz", None)
if tz is not None:
return str(tz)
return None
@classmethod
def from_dataframe(
cls, df: pd.DataFrame, tz: Optional[str] = None
) -> "PydanticDateTimeDataFrame":
"""Create a PydanticDateTimeDataFrame instance from a pandas DataFrame."""
index = pd.Index([to_datetime(dt, as_string=True, in_timezone=tz) for dt in df.index])
# resolve timezone
data_tz = cls._detect_data_tz(df)
if tz is not None:
if data_tz and data_tz != tz:
raise ValueError(f"Timezone mismatch: tz='{tz}' but data uses '{data_tz}'")
resolved_tz = tz
else:
if data_tz:
resolved_tz = data_tz
else:
# Use local timezone
resolved_tz = to_timezone(as_string=True)
# normalize index
index = pd.Index(
[to_datetime(dt, as_string=True, in_timezone=resolved_tz) for dt in df.index]
)
df.index = index
# normalize datetime columns
datetime_columns = [col for col in df.columns if is_datetime64_any_dtype(df[col])]
for col in datetime_columns:
if df[col].dt.tz is None:
df[col] = df[col].dt.tz_localize(resolved_tz)
else:
df[col] = df[col].dt.tz_convert(resolved_tz)
return cls(
data=df.to_dict(orient="index"),
dtypes={col: str(dtype) for col, dtype in df.dtypes.items()},
tz=tz,
tz=resolved_tz,
datetime_columns=datetime_columns,
)
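`from_dataframe` now resolves the timezone with a fixed precedence: an explicit `tz` argument wins but must agree with the timezone detected in the data, otherwise the data's own timezone is used, and the local timezone is the last resort. The precedence alone can be sketched without pandas (`resolve_tz` is a hypothetical standalone helper):

```python
from typing import Optional

def resolve_tz(explicit_tz: Optional[str], data_tz: Optional[str],
               local_tz: str = "UTC") -> str:
    """Resolve the timezone for serialized datetime data.

    An explicitly requested tz wins but must not contradict the tz already
    carried by the data; otherwise the data's tz is used; the (assumed)
    local timezone is the fallback.
    """
    if explicit_tz is not None:
        if data_tz is not None and data_tz != explicit_tz:
            raise ValueError(
                f"Timezone mismatch: tz='{explicit_tz}' but data uses '{data_tz}'"
            )
        return explicit_tz
    return data_tz if data_tz is not None else local_tz
```

Raising on a mismatch instead of silently converting makes an accidental double timezone specification visible at the call site.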


@@ -2,6 +2,7 @@
import hashlib
import re
from dataclasses import dataclass
from fnmatch import fnmatch
from pathlib import Path
from typing import Optional
@@ -16,14 +17,117 @@ HASH_EOS = ""
# Number of digits to append to .dev to identify a development version
VERSION_DEV_PRECISION = 8
# Hashing configuration
DIR_PACKAGE_ROOT = Path(__file__).resolve().parent.parent
ALLOWED_SUFFIXES: set[str] = {".py", ".md", ".json"}
EXCLUDED_DIR_PATTERNS: set[str] = {"*_autosum", "*__pycache__", "*_generated"}
EXCLUDED_FILES: set[Path] = set()
# ------------------------------
# Helpers for version generation
# ------------------------------
def is_excluded_dir(path: Path, excluded_dir_patterns: set[str]) -> bool:
"""Check whether a directory should be excluded based on name patterns."""
return any(fnmatch(path.name, pattern) for pattern in excluded_dir_patterns)
@dataclass
class HashConfig:
"""Configuration for file hashing."""
paths: list[Path]
allowed_suffixes: set[str]
excluded_dir_patterns: set[str]
excluded_files: set[Path]
def __post_init__(self) -> None:
"""Validate configuration."""
for path in self.paths:
if not path.exists():
raise ValueError(f"Path does not exist: {path}")
def is_excluded_dir(path: Path, patterns: set[str]) -> bool:
"""Check if directory matches any exclusion pattern.
Args:
path: Directory path to check
patterns: set of glob-like patterns (e.g., {``*__pycache__``, ``*_test``})
Returns:
True if directory should be excluded
"""
dir_name = path.name
return any(fnmatch(dir_name, pattern) for pattern in patterns)
def collect_files(config: HashConfig) -> list[Path]:
"""Collect all files that should be included in the hash.
This function only collects files - it doesn't hash them.
Makes it easy to inspect what will be hashed.
Args:
config: Hash configuration
Returns:
Sorted list of files to be hashed
Example:
>>> config = HashConfig(
... paths=[Path('src')],
... allowed_suffixes={'.py'},
... excluded_dir_patterns={'*__pycache__'},
... excluded_files=set()
... )
>>> files = collect_files(config)
>>> print(f"Will hash {len(files)} files")
>>> for f in files[:5]:
... print(f" {f}")
"""
collected_files: list[Path] = []
for root in config.paths:
for p in sorted(root.rglob("*")):
# Skip excluded directories
if p.is_dir() and is_excluded_dir(p, config.excluded_dir_patterns):
continue
# Skip files inside excluded directories
if any(is_excluded_dir(parent, config.excluded_dir_patterns) for parent in p.parents):
continue
# Skip excluded files
if p.resolve() in config.excluded_files:
continue
# Collect only allowed file types
if p.is_file() and p.suffix.lower() in config.allowed_suffixes:
collected_files.append(p.resolve())
return sorted(collected_files)
def hash_files(files: list[Path]) -> str:
"""Calculate SHA256 hash of file contents.
Args:
files: list of files to hash (order matters!)
Returns:
SHA256 hex digest
Example:
>>> files = [Path('file1.py'), Path('file2.py')]
>>> hash_value = hash_files(files)
"""
h = hashlib.sha256()
for file_path in files:
if not file_path.exists():
continue
h.update(file_path.read_bytes())
return h.hexdigest()
def hash_tree(
@@ -31,80 +135,93 @@ def hash_tree(
allowed_suffixes: set[str],
excluded_dir_patterns: set[str],
excluded_files: Optional[set[Path]] = None,
) -> str:
"""Return SHA256 hash for files under `paths`.
) -> tuple[str, list[Path]]:
"""Return SHA256 hash for files under `paths` and the list of files hashed.
Restricted by suffix, excluding excluded directory patterns and excluded_files.
Args:
paths: list of root paths to hash
allowed_suffixes: set of file suffixes to include (e.g., {'.py', '.json'})
excluded_dir_patterns: set of directory patterns to exclude
excluded_files: Optional set of specific files to exclude
Returns:
tuple of (hash_digest, list_of_hashed_files)
Example:
>>> hash_digest, files = hash_tree(
... paths=[Path('src')],
... allowed_suffixes={'.py'},
... excluded_dir_patterns={'*__pycache__'},
... )
>>> print(f"Hash: {hash_digest}")
>>> print(f"Based on {len(files)} files")
"""
config = HashConfig(
paths=paths,
allowed_suffixes=allowed_suffixes,
excluded_dir_patterns=excluded_dir_patterns,
excluded_files=excluded_files or set(),
)
files = collect_files(config)
digest = hash_files(files)
return digest, files
# ---------------------
# Version hash function
# ---------------------
def _version_hash() -> str:
"""Calculate project hash.
Only package files in src/akkudoktoreos can be hashed to make it work also for packages.
Returns:
SHA256 hash of the project files
"""
DIR_PACKAGE_ROOT = Path(__file__).resolve().parent.parent
if not str(DIR_PACKAGE_ROOT).endswith("src/akkudoktoreos"):
error_msg = f"DIR_PACKAGE_ROOT does not end with src/akkudoktoreos: {DIR_PACKAGE_ROOT}"
raise ValueError(error_msg)
# Allowed file suffixes to consider
ALLOWED_SUFFIXES: set[str] = {".py", ".md", ".json"}
# Directory patterns to exclude (glob-like)
EXCLUDED_DIR_PATTERNS: set[str] = {"*_autosum", "*__pycache__", "*_generated"}
# Files to exclude
EXCLUDED_FILES: set[Path] = set()
# Directories whose changes shall be part of the project hash
watched_paths = [DIR_PACKAGE_ROOT]
# Collect files and calculate hash
hash_digest, hashed_files = hash_tree(
watched_paths,
ALLOWED_SUFFIXES,
EXCLUDED_DIR_PATTERNS,
excluded_files=EXCLUDED_FILES,
)
return hash_digest
def _version_calculate() -> str:
"""Compute version."""
global HASH_EOS
HASH_EOS = _version_hash()
if VERSION_BASE.endswith("dev"):
"""Calculate the full version string.
For release versions: "x.y.z"
For dev versions: "x.y.z.dev<hash>"
Returns:
Full version string
"""
if VERSION_BASE.endswith(".dev"):
# After dev only digits are allowed - convert hexdigest to digits
hash_value = int(HASH_EOS, 16)
hash_value = int(_version_hash(), 16)
hash_digits = str(hash_value % (10**VERSION_DEV_PRECISION)).zfill(VERSION_DEV_PRECISION)
return f"{VERSION_BASE}{hash_digits}"
else:
# Release version - use base as-is
return VERSION_BASE
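For dev builds the hex digest is folded into a fixed number of decimal digits, since only digits may follow the `dev` segment. A small illustration of the conversion (digest input and `VERSION_DEV_PRECISION` value are arbitrary here):

```python
import hashlib

VERSION_DEV_PRECISION = 6  # assumed digit count

# Any SHA256 hex digest works; take it modulo 10**precision and zero-pad
digest = hashlib.sha256(b"project files").hexdigest()
hash_value = int(digest, 16)
hash_digits = str(hash_value % (10**VERSION_DEV_PRECISION)).zfill(VERSION_DEV_PRECISION)

version = f"0.2.0.dev{hash_digits}"
```

Because the digest is deterministic over the collected files, the dev version changes exactly when a watched file changes.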
# ---------------------------
# Project version information
# ---------------------------
# The version
__version__ = _version_calculate()
@@ -114,16 +231,13 @@ __version__ = _version_calculate()
# Version info access
# -------------------
# Regular expression to split the version string into pieces
VERSION_RE = re.compile(
r"""
^(?P<base>\d+\.\d+\.\d+) # x.y.z
(?:\. # .dev<hash> starts here
(?P<dev>dev) # literal 'dev'
(?P<hash>[a-f0-9]+)? # optional <hash> (hex digits)
)?
$
""",
@@ -143,7 +257,7 @@ def version() -> dict[str, Optional[str]]:
.. code-block:: python
{
"version": "0.2.0+dev.a96a65",
"version": "0.2.0.dev.a96a65",
"base": "x.y.z",
"dev": "dev" or None,
"hash": "<hash>" or None,
@@ -153,7 +267,7 @@ def version() -> dict[str, Optional[str]]:
match = VERSION_RE.match(__version__)
if not match:
raise ValueError(f"Invalid version format: {version}")
raise ValueError(f"Invalid version format: {__version__}") # Fixed: was 'version'
info = match.groupdict()
info["version"] = __version__

View File

@@ -431,8 +431,3 @@ class ResourceRegistry(SingletonMixin, ConfigMixin, PydanticBaseModel):
self.history = loaded.history
except Exception as e:
logger.error("Can not load resource registry: {}", e)
def get_resource_registry() -> ResourceRegistry:
"""Gets the EOS resource registry."""
return ResourceRegistry()

View File

@@ -87,7 +87,7 @@ class Battery:
def reset(self) -> None:
"""Resets the battery state to its initial values."""
self.soc_wh = (self.initial_soc_percentage / 100) * self.capacity_wh
self.soc_wh = min(self.soc_wh, self.max_soc_wh)  # Only clamp to maximum
self.discharge_array = np.full(self.prediction_hours, 0)
self.charge_array = np.full(self.prediction_hours, 0)
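The changed reset keeps a measured state of charge that lies below the configured minimum instead of silently raising it to `min_soc_wh`; only the maximum is enforced. A small numeric illustration (all values hypothetical):

```python
capacity_wh = 10_000.0
min_soc_wh, max_soc_wh = 2_000.0, 9_000.0
initial_soc_percentage = 10  # measured SoC, below the configured minimum

soc_wh = (initial_soc_percentage / 100) * capacity_wh  # 1000.0 Wh

# Old behavior: min(max(soc_wh, min_soc_wh), max_soc_wh) would force 2000.0 Wh,
# overwriting the measurement. New behavior: clamp to the maximum only.
soc_wh = min(soc_wh, max_soc_wh)
```

Keeping the measured value matters for the genetic optimization, which otherwise starts the simulation from a state the battery is not actually in.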

View File

@@ -6,6 +6,7 @@ data records for measurements.
The measurements can be added programmatically or imported from a file or JSON string.
"""
from pathlib import Path
from typing import Any, Optional
import numpy as np
@@ -16,12 +17,26 @@ from pydantic import Field, computed_field
from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.core.coreabc import SingletonMixin
from akkudoktoreos.core.dataabc import DataImportMixin, DataRecord, DataSequence
from akkudoktoreos.utils.datetimeutil import (
DateTime,
Duration,
to_datetime,
to_duration,
)
class MeasurementCommonSettings(SettingsBaseModel):
"""Measurement Configuration."""
historic_hours: Optional[int] = Field(
default=2 * 365 * 24,
ge=0,
json_schema_extra={
"description": "Number of hours into the past for measurement data",
"examples": [2 * 365 * 24],
},
)
load_emr_keys: Optional[list[str]] = Field(
default=None,
json_schema_extra={
@@ -94,6 +109,16 @@ class Measurement(SingletonMixin, DataImportMixin, DataSequence):
return
super().__init__(*args, **kwargs)
def _measurement_file_path(self) -> Optional[Path]:
"""Path to measurements file (may be used optional to database)."""
try:
return self.config.general.data_folder_path / "measurement.json"
except Exception:
logger.error(
"Path for measurements is missing. Please configure data folder path or database!"
)
return None
def _interval_count(
self, start_datetime: DateTime, end_datetime: DateTime, interval: Duration
) -> int:
@@ -143,30 +168,32 @@ class Measurement(SingletonMixin, DataImportMixin, DataSequence):
np.ndarray: A NumPy Array of the energy [kWh] per interval values calculated from
the meter readings.
"""
size = self._interval_count(start_datetime, end_datetime, interval)
# Add one interval to end_datetime to assure we have an energy value interval for all
# datetimes from start_datetime (inclusive) to end_datetime (exclusive)
energy_mr_array = self.key_to_array(
key=key,
start_datetime=start_datetime,
end_datetime=end_datetime + interval,
interval=interval,
fill_method="time",
boundary="context",
)
if energy_mr_array.size != size + 1:
logging_msg = (
f"'{key}' meter reading array size: {energy_mr_array.size}"
f" does not fit to expected size: {size + 1}, {energy_mr_array}"
)
)
if energy_mr_array.size != 0:
logger.error(logging_msg)
raise ValueError(logging_msg)
logger.debug(logging_msg)
energy_array = np.zeros(size)
elif np.any(energy_mr_array == None):
# 'key_to_array()' creates None values array if no data records are available.
# Array contains None value -> ignore
debug_msg = f"'{key}' meter reading None: {energy_mr_array}"
logger.debug(debug_msg)
energy_array = np.zeros(size)
else:
# Calculate load per interval
debug_msg = f"'{key}' meter reading: {energy_mr_array}"
@@ -193,6 +220,9 @@ class Measurement(SingletonMixin, DataImportMixin, DataSequence):
np.ndarray: A NumPy Array of the total load energy [kWh] per interval values calculated from
the load meter readings.
"""
if interval is None:
interval = to_duration("1 hour")
if len(self) < 1:
# No data available
if start_datetime is None or end_datetime is None:
@@ -200,14 +230,14 @@ class Measurement(SingletonMixin, DataImportMixin, DataSequence):
else:
size = self._interval_count(start_datetime, end_datetime, interval)
return np.zeros(size)
if start_datetime is None:
start_datetime = self.min_datetime
if end_datetime is None:
end_datetime = self.max_datetime.add(seconds=1)
size = self._interval_count(start_datetime, end_datetime, interval)
load_total_kwh_array = np.zeros(size)
# Loop through all loads
if isinstance(self.config.measurement.load_emr_keys, list):
for key in self.config.measurement.load_emr_keys:
@@ -225,7 +255,66 @@ class Measurement(SingletonMixin, DataImportMixin, DataSequence):
return load_total_kwh_array
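With the `+ interval` end extension, `key_to_array` returns `size + 1` meter readings, and differencing adjacent readings yields exactly `size` per-interval energies. A sketch of that relationship with made-up readings:

```python
import numpy as np

size = 3
# size + 1 cumulative meter readings (kWh), one per interval boundary
readings = np.array([100.0, 100.4, 101.1, 101.9])
assert readings.size == size + 1

# Energy consumed within each interval is the difference of adjacent readings
energy_kwh = np.diff(readings)
```

This is why the size check above compares against `size + 1` and why the zero-filled fallback arrays are now `np.zeros(size)` rather than `np.zeros(size - 1)`.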
# ----------------------- Measurement Database Protocol ---------------------
def get_measurement() -> Measurement:
"""Gets the EOS measurement data."""
return Measurement()
def db_namespace(self) -> str:
"""Database namespace for measurement records."""
return "Measurement"
def db_keep_datetime(self) -> Optional[DateTime]:
"""Earliest datetime from which database records should be retained.
Used when removing old records from database to free space.
Returns:
Datetime or None.
"""
return to_datetime().subtract(hours=self.config.measurement.historic_hours)
def save(self) -> bool:
"""Save the measurements to persistent storage.
Returns:
True in case the measurements were saved, False otherwise.
"""
# Use db storage if available
saved_to_db = DataSequence.save(self)
if not saved_to_db:
measurement_file_path = self._measurement_file_path()
if measurement_file_path is None:
return False
try:
measurement_file_path.write_text(
self.model_dump_json(indent=4),
encoding="utf-8",
newline="\n",
)
except Exception:
logger.exception("Cannot save measurements")
return False
return True
def load(self) -> bool:
"""Load measurements from persistent storage.
Returns:
True in case the measurements were loaded, False otherwise.
"""
# Use db storage if available
loaded_from_db = DataSequence.load(self)
if not loaded_from_db:
measurement_file_path = self._measurement_file_path()
if measurement_file_path is None:
return False
if not measurement_file_path.exists():
return False
try:
# Validate into a temporary instance
loaded = self.__class__.model_validate_json(
measurement_file_path.read_text(encoding="utf-8")
)
# Explicitly add data records to the existing singleton
for record in loaded.records:
self.insert_by_datetime(record)
except Exception:
logger.exception("Cannot load measurements")
return False
return True

View File

@@ -18,6 +18,7 @@ from akkudoktoreos.core.coreabc import (
ConfigMixin,
MeasurementMixin,
PredictionMixin,
get_ems,
)
from akkudoktoreos.optimization.genetic.geneticabc import GeneticParametersBaseModel
from akkudoktoreos.optimization.genetic.geneticdevices import (
@@ -161,9 +162,6 @@ class GeneticOptimizationParameters(
Raises:
ValueError: If required configuration values like start time are missing.
"""
ems = get_ems()
# The optimization parameters
@@ -439,6 +437,7 @@ class GeneticOptimizationParameters(
initial_soc_factor = cls.measurement.key_to_value(
key=battery_config.measurement_key_soc_factor,
target_datetime=ems.start_datetime,
time_window=to_duration(to_duration("48 hours")),
)
if initial_soc_factor > 1.0 or initial_soc_factor < 0.0:
logger.error(
@@ -510,6 +509,7 @@ class GeneticOptimizationParameters(
initial_soc_factor = cls.measurement.key_to_value(
key=electric_vehicle_config.measurement_key_soc_factor,
target_datetime=ems.start_datetime,
time_window=to_duration(to_duration("48 hours")),
)
if initial_soc_factor > 1.0 or initial_soc_factor < 0.0:
logger.error(

View File

@@ -8,6 +8,8 @@ from pydantic import Field, field_validator
from akkudoktoreos.core.coreabc import (
ConfigMixin,
get_ems,
get_prediction,
)
from akkudoktoreos.core.emplan import (
DDBCInstruction,
@@ -22,7 +24,6 @@ from akkudoktoreos.devices.devicesabc import (
from akkudoktoreos.devices.genetic.battery import Battery
from akkudoktoreos.optimization.genetic.geneticdevices import GeneticParametersBaseModel
from akkudoktoreos.optimization.optimization import OptimizationSolution
from akkudoktoreos.prediction.prediction import get_prediction
from akkudoktoreos.utils.datetimeutil import to_datetime, to_duration
from akkudoktoreos.utils.utils import NumpyEncoder
@@ -272,8 +273,6 @@ class GeneticSolution(ConfigMixin, GeneticParametersBaseModel):
- GRID_SUPPORT_EXPORT: ac_charge == 0 and discharge_allowed == 1
- GRID_SUPPORT_IMPORT: ac_charge > 0 and discharge_allowed == 0 or 1
"""
start_datetime = get_ems().start_datetime
start_day_hour = start_datetime.in_timezone(self.config.general.timezone).hour
interval_hours = 1
@@ -567,8 +566,6 @@ class GeneticSolution(ConfigMixin, GeneticParametersBaseModel):
def energy_management_plan(self) -> EnergyManagementPlan:
"""Provide the genetic solution as an energy management plan."""
start_datetime = get_ems().start_datetime
start_day_hour = start_datetime.in_timezone(self.config.general.timezone).hour
plan = EnergyManagementPlan(

View File

@@ -3,6 +3,7 @@ from typing import Optional, Union
from pydantic import Field, computed_field, model_validator
from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.core.coreabc import get_ems
from akkudoktoreos.core.pydantic import (
PydanticBaseModel,
PydanticDateTimeDataFrame,
@@ -91,10 +92,14 @@ class OptimizationCommonSettings(SettingsBaseModel):
@property
def keys(self) -> list[str]:
"""The keys of the solution."""
try:
ems_eos = get_ems()
except Exception:
# ems might not be initialized
return []
key_list = []
optimization_solution = ems_eos.optimization_solution()
if optimization_solution:
# Prepare mapping
df = optimization_solution.solution.to_dataframe()

View File

@@ -3,21 +3,28 @@ from typing import Optional
from pydantic import Field, computed_field, field_validator
from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.core.coreabc import get_prediction
from akkudoktoreos.prediction.elecpriceabc import ElecPriceProvider
from akkudoktoreos.prediction.elecpriceenergycharts import (
ElecPriceEnergyChartsCommonSettings,
)
from akkudoktoreos.prediction.elecpriceimport import ElecPriceImportCommonSettings
def elecprice_provider_ids() -> list[str]:
"""Valid elecprice provider ids."""
try:
prediction_eos = get_prediction()
except Exception:
# Prediction may not be initialized
# Return at least provider used in example
return ["ElecPriceAkkudoktor"]
return [
provider.provider_id()
for provider in prediction_eos.providers
if isinstance(provider, ElecPriceProvider)
]
class ElecPriceCommonSettings(SettingsBaseModel):
@@ -61,14 +68,14 @@ class ElecPriceCommonSettings(SettingsBaseModel):
@property
def providers(self) -> list[str]:
"""Available electricity price provider ids."""
return elecprice_provider_ids()
# Validators
@field_validator("provider", mode="after")
@classmethod
def validate_provider(cls, value: Optional[str]) -> Optional[str]:
if value is None or value in elecprice_provider_ids():
return value
raise ValueError(
f"Provider '{value}' is not a valid electricity price provider: {elecprice_provider_ids()}."
)

View File

@@ -3,19 +3,26 @@ from typing import Optional
from pydantic import Field, computed_field, field_validator
from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.core.coreabc import get_prediction
from akkudoktoreos.prediction.feedintariffabc import FeedInTariffProvider
from akkudoktoreos.prediction.feedintarifffixed import FeedInTariffFixedCommonSettings
from akkudoktoreos.prediction.feedintariffimport import FeedInTariffImportCommonSettings
def elecprice_provider_ids() -> list[str]:
"""Valid feedintariff provider ids."""
try:
prediction_eos = get_prediction()
except Exception:
# Prediction may not be initialized
# Return at least provider used in example
return ["FeedInTariffFixed", "FeedInTariffImport"]
return [
provider.provider_id()
for provider in prediction_eos.providers
if isinstance(provider, FeedInTariffProvider)
]
class FeedInTariffCommonProviderSettings(SettingsBaseModel):
@@ -60,14 +67,14 @@ class FeedInTariffCommonSettings(SettingsBaseModel):
@property
def providers(self) -> list[str]:
"""Available feed in tariff provider ids."""
return elecprice_provider_ids()
# Validators
@field_validator("provider", mode="after")
@classmethod
def validate_provider(cls, value: Optional[str]) -> Optional[str]:
if value is None or value in elecprice_provider_ids():
return value
raise ValueError(
f"Provider '{value}' is not a valid feed in tariff provider: {elecprice_provider_ids()}."
)

View File

@@ -5,20 +5,27 @@ from typing import Optional
from pydantic import Field, computed_field, field_validator
from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.core.coreabc import get_prediction
from akkudoktoreos.prediction.loadabc import LoadProvider
from akkudoktoreos.prediction.loadakkudoktor import LoadAkkudoktorCommonSettings
from akkudoktoreos.prediction.loadimport import LoadImportCommonSettings
from akkudoktoreos.prediction.loadvrm import LoadVrmCommonSettings
def load_providers() -> list[str]:
"""Valid load provider ids."""
try:
prediction_eos = get_prediction()
except Exception:
# Prediction may not be initialized
# Return at least provider used in example
return ["LoadAkkudoktor", "LoadVrm", "LoadImport"]
return [
provider.provider_id()
for provider in prediction_eos.providers
if isinstance(provider, LoadProvider)
]
class LoadCommonProviderSettings(SettingsBaseModel):
@@ -66,12 +73,12 @@ class LoadCommonSettings(SettingsBaseModel):
@property
def providers(self) -> list[str]:
"""Available load provider ids."""
return load_providers()
# Validators
@field_validator("provider", mode="after")
@classmethod
def validate_provider(cls, value: Optional[str]) -> Optional[str]:
if value is None or value in load_providers():
return value
raise ValueError(f"Provider '{value}' is not a valid load provider: {load_providers()}.")

View File

@@ -132,23 +132,32 @@ class LoadAkkudoktorAdjusted(LoadAkkudoktor):
compare_dt = compare_start
for i in range(len(load_total_kwh_array)):
load_total_wh = load_total_kwh_array[i] * 1000
hour = compare_dt.hour
# Weight calculated by distance in days to the latest measurement
weight = 1 / ((compare_end - compare_dt).days + 1)
# Extract mean (index 0) and standard deviation (index 1) for the given day and hour
# Day indexing starts at 0, -1 because of that
day_idx = compare_dt.day_of_year - 1
hourly_stats = data_year_energy[day_idx, :, hour]
# Calculate adjustments (working days and weekend)
if compare_dt.day_of_week < 5:
weekday_adjust[hour] += (load_total_wh - hourly_stats[0]) * weight
weekday_adjust_weight[hour] += weight
else:
weekend_adjust[hour] += (load_total_wh - hourly_stats[0]) * weight
weekend_adjust_weight[hour] += weight
compare_dt += compare_interval
# Calculate mean
for hour in range(24):
if weekday_adjust_weight[hour] > 0:
weekday_adjust[hour] = weekday_adjust[hour] / weekday_adjust_weight[hour]
if weekend_adjust_weight[hour] > 0:
weekend_adjust[hour] = weekend_adjust[hour] / weekend_adjust_weight[hour]
return (weekday_adjust, weekend_adjust)
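The adjustment computed above is a weighted mean of the per-hour residuals (measured load minus the standard profile), where the weight `1 / (days_ago + 1)` favors recent days. A compact sketch for a single hour, with hypothetical numbers:

```python
# (residual_wh, weight) pairs for one hour over three days, newest first:
# residual = measured load - profile mean; weight = 1 / (days_ago + 1)
samples = [(200.0, 1 / 1), (120.0, 1 / 2), (90.0, 1 / 3)]

adjust = sum(residual * weight for residual, weight in samples)
weight_sum = sum(weight for _, weight in samples)
mean_adjust_wh = adjust / weight_sum  # added to the standard profile for that hour
```

Normalizing by the summed weights (as the loop over 24 hours does) keeps hours with few measurements from being over- or under-adjusted.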

View File

@@ -26,7 +26,7 @@ Attributes:
weather_clearoutside (WeatherClearOutside): Weather forecast provider using ClearOutside.
"""
from typing import Optional, Union
from pydantic import Field
@@ -69,38 +69,6 @@ class PredictionCommonSettings(SettingsBaseModel):
)
# Initialize forecast providers, all are singletons.
elecprice_akkudoktor = ElecPriceAkkudoktor()
elecprice_energy_charts = ElecPriceEnergyCharts()
@@ -119,42 +87,85 @@ weather_clearoutside = WeatherClearOutside()
weather_import = WeatherImport()
def prediction_providers() -> list[
Union[
ElecPriceAkkudoktor,
ElecPriceEnergyCharts,
ElecPriceImport,
FeedInTariffFixed,
FeedInTariffImport,
LoadAkkudoktor,
LoadAkkudoktorAdjusted,
LoadVrm,
LoadImport,
PVForecastAkkudoktor,
PVForecastVrm,
PVForecastImport,
WeatherBrightSky,
WeatherClearOutside,
WeatherImport,
]
]:
"""Return list of prediction providers."""
global \
elecprice_akkudoktor, \
elecprice_energy_charts, \
elecprice_import, \
feedintariff_fixed, \
feedintariff_import, \
loadforecast_akkudoktor, \
loadforecast_akkudoktor_adjusted, \
loadforecast_vrm, \
loadforecast_import, \
pvforecast_akkudoktor, \
pvforecast_vrm, \
pvforecast_import, \
weather_brightsky, \
weather_clearoutside, \
weather_import
# Care for provider sequence as providers may rely on others to be updated before.
return [
elecprice_akkudoktor,
elecprice_energy_charts,
elecprice_import,
feedintariff_fixed,
feedintariff_import,
loadforecast_akkudoktor,
loadforecast_akkudoktor_adjusted,
loadforecast_vrm,
loadforecast_import,
pvforecast_akkudoktor,
pvforecast_vrm,
pvforecast_import,
weather_brightsky,
weather_clearoutside,
weather_import,
]
class Prediction(PredictionContainer):
"""Prediction container to manage multiple prediction providers."""
providers: list[
Union[
ElecPriceAkkudoktor,
ElecPriceEnergyCharts,
ElecPriceImport,
FeedInTariffFixed,
FeedInTariffImport,
LoadAkkudoktor,
LoadAkkudoktorAdjusted,
LoadVrm,
LoadImport,
PVForecastAkkudoktor,
PVForecastVrm,
PVForecastImport,
WeatherBrightSky,
WeatherClearOutside,
WeatherImport,
]
] = Field(
default_factory=prediction_providers,
json_schema_extra={"description": "List of prediction providers"},
)
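Binding the provider list through `default_factory` defers singleton construction from module import time to model instantiation, which is the point of the startup refactoring. The pattern, sketched here with a stdlib dataclass standing in for the Pydantic `Field` (names hypothetical):

```python
from dataclasses import dataclass, field


def make_providers() -> list[str]:
    # Stand-in for prediction_providers(); runs lazily, only when a
    # container is actually instantiated, not at module import
    return ["ElecPriceAkkudoktor", "WeatherBrightSky"]


@dataclass
class Container:
    providers: list[str] = field(default_factory=make_providers)
```

Nothing in the factory executes until `Container()` is called, so importing the module stays free of configuration and environment side effects.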
def main() -> None:
"""Main function to update and display predictions.
This function initializes and updates the forecast providers in sequence
according to the `Prediction` instance, then prints the updated prediction data.
"""
prediction = get_prediction()
prediction.update_data()
print(f"Prediction: {prediction}")
if __name__ == "__main__":
main()

View File

@@ -15,17 +15,17 @@ from pydantic import Field, computed_field
from akkudoktoreos.core.coreabc import MeasurementMixin
from akkudoktoreos.core.dataabc import (
DataABC,
DataContainer,
DataImportProvider,
DataProvider,
DataRecord,
DataSequence,
)
from akkudoktoreos.utils.datetimeutil import DateTime, Duration, to_duration
class PredictionABC(DataABC, MeasurementMixin):
"""Base class for handling prediction data.
Enables access to EOS configuration data (attribute `config`) and EOS measurement data
@@ -95,7 +95,7 @@ class PredictionSequence(DataSequence):
)
class PredictionStartEndKeepMixin(PredictionABC):
"""A mixin to manage start, end, and historical retention datetimes for prediction data.
The starting datetime for prediction data generation is provided by the energy management
@@ -196,6 +196,35 @@ class PredictionProvider(PredictionStartEndKeepMixin, DataProvider):
Derived classes have to provide their own records field with correct record type set.
"""
def db_keep_datetime(self) -> Optional[DateTime]:
"""Earliest datetime from which database records should be retained.
Used when removing old records from database to free space.
Subclasses may override this method to provide a domain-specific default.
Returns:
Datetime or None.
"""
return self.keep_datetime
def db_initial_time_window(self) -> Optional[Duration]:
"""Return the initial time window used for database loading.
This window defines the initial symmetric time span around a target datetime
that should be loaded from the database when no explicit search time window
is specified. It serves as a loading hint and may be expanded by the caller
if no records are found within the initial range.
Subclasses may override this method to provide a domain-specific default.
Returns:
The initial loading time window as a Duration, or ``None`` to indicate
that no initial window constraint should be applied.
"""
hours = max(self.config.prediction.hours, self.config.prediction.historic_hours, 24)
return to_duration(hours * 3600)
def update_data(
self,
force_enable: Optional[bool] = False,
@@ -219,9 +248,6 @@ class PredictionProvider(PredictionStartEndKeepMixin, DataProvider):
# Call the custom update logic
self._update_data(force_update=force_update)
class PredictionImportProvider(PredictionProvider, DataImportProvider):
"""Abstract base class for prediction providers that import prediction data.

View File

@@ -5,19 +5,26 @@ from typing import Any, List, Optional, Self
from pydantic import Field, computed_field, field_validator, model_validator
from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.prediction.prediction import get_prediction
from akkudoktoreos.core.coreabc import get_prediction
from akkudoktoreos.prediction.pvforecastabc import PVForecastProvider
from akkudoktoreos.prediction.pvforecastimport import PVForecastImportCommonSettings
from akkudoktoreos.prediction.pvforecastvrm import PVForecastVrmCommonSettings
def pvforecast_provider_ids() -> list[str]:
"""Valid PV forecast provider ids."""
try:
prediction_eos = get_prediction()
except Exception:
# Prediction may not be initialized
# Return at least provider used in example
return ["PVForecastAkkudoktor", "PVForecastImport", "PVForecastVrm"]
return [
provider.provider_id()
for provider in prediction_eos.providers
if isinstance(provider, PVForecastProvider)
]
class PVForecastPlaneSetting(SettingsBaseModel):
@@ -264,16 +271,16 @@ class PVForecastCommonSettings(SettingsBaseModel):
@property
def providers(self) -> list[str]:
"""Available PVForecast provider ids."""
return pvforecast_provider_ids()
# Validators
@field_validator("provider", mode="after")
@classmethod
def validate_provider(cls, value: Optional[str]) -> Optional[str]:
if value is None or value in pvforecast_provider_ids():
return value
raise ValueError(
f"Provider '{value}' is not a valid PV forecast provider: {pvforecast_provider_ids()}."
)
## Computed fields

View File

@@ -5,18 +5,25 @@ from typing import Optional
from pydantic import Field, computed_field, field_validator
from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.core.coreabc import get_prediction
from akkudoktoreos.prediction.weatherabc import WeatherProvider
from akkudoktoreos.prediction.weatherimport import WeatherImportCommonSettings
def weather_provider_ids() -> list[str]:
"""Valid weather provider ids."""
try:
prediction_eos = get_prediction()
except Exception:
# Prediction may not be initialized
# Return at least provider used in example
return ["WeatherImport"]
return [
provider.provider_id()
for provider in prediction_eos.providers
if isinstance(provider, WeatherProvider)
]
class WeatherCommonProviderSettings(SettingsBaseModel):
@@ -56,14 +63,14 @@ class WeatherCommonSettings(SettingsBaseModel):
@property
def providers(self) -> list[str]:
"""Available weather provider ids."""
return weather_providers
return weather_provider_ids()
# Validators
@field_validator("provider", mode="after")
@classmethod
def validate_provider(cls, value: Optional[str]) -> Optional[str]:
if value is None or value in weather_provider_ids():
return value
raise ValueError(
f"Provider '{value}' is not a valid weather provider: {weather_provider_ids()}."
)
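The same guard appears in every provider-id helper above: ask the (possibly uninitialized) prediction singleton, and fall back to a static example list when it is not available, e.g. during OpenAPI schema generation. Generalized, the pattern looks like this (helper name and signature are hypothetical, not part of EOS):

```python
from typing import Any, Callable


def provider_ids_or_default(
    get_container: Callable[[], Any], default: list[str]
) -> list[str]:
    """Return live provider ids; fall back to a static default while the
    prediction singleton is not yet initialized."""
    try:
        container = get_container()
    except Exception:
        # Singleton not ready yet - keep schema generation and validation working
        return default
    return [provider.provider_id() for provider in container.providers]
```

Making the lookup a function call instead of a module-level list is what removes the import-time dependency on an initialized prediction container.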

View File

@@ -402,6 +402,75 @@ def AdminConfig(
)
def AdminDatabase(
eos_host: str, eos_port: Union[str, int], data: Optional[dict], config: Optional[dict[str, Any]]
) -> tuple[str, Union[Card, list[Card]]]:
"""Creates a cache management card.
Args:
eos_host (str): The hostname of the EOS server.
eos_port (Union[str, int]): The port of the EOS server.
data (Optional[dict]): Incoming data containing action and category for processing.
Returns:
tuple[str, Union[Card, list[Card]]]: A tuple containing the database category label and the `Card` UI component.
"""
server = f"http://{eos_host}:{eos_port}"
eos_hostname = "EOS server"
eosdash_hostname = "EOSdash server"
category = "database"
status_vacuum = None
if data and data.get("category", None) == category:
# This data is for us
if data["action"] == "vacuum":
# Remove old records from database
try:
result = requests.post(f"{server}/v1/admin/database/vacuum", timeout=30)
result.raise_for_status()
status_vacuum = Success(
f"Removed old data records from database on '{eos_hostname}'"
)
except requests.exceptions.HTTPError as e:
detail = result.json()["detail"]
status_vacuum = Error(
f"Can not remove old data records from database on '{eos_hostname}': {e}, {detail}"
)
except Exception as e:
status_vacuum = Error(
f"Can not remove old data records from database on '{eos_hostname}': {e}"
)
return (
category,
[
Card(
Details(
Summary(
Grid(
DivHStacked(
UkIcon(icon="play"),
ConfigButton(
"Vacuum",
hx_post=request_url_for("/eosdash/admin"),
hx_target="#page-content",
hx_swap="innerHTML",
hx_vals='{"category": "database", "action": "vacuum"}',
),
P(f"Remove old data records from database on '{eos_hostname}'"),
),
status_vacuum,
),
cls="list-none",
),
P(f"Remove old data records from database on '{eos_hostname}'."),
),
),
],
)
def Admin(eos_host: str, eos_port: Union[str, int], data: Optional[dict] = None) -> Div:
"""Generates the administrative dashboard layout.
@@ -450,6 +519,7 @@ def Admin(eos_host: str, eos_port: Union[str, int], data: Optional[dict] = None)
for category, admin in [
AdminCache(eos_host, eos_port, data, config),
AdminConfig(eos_host, eos_port, data, config, config_backup),
AdminDatabase(eos_host, eos_port, data, config),
]:
if category != last_category:
rows.append(H3(category))


@@ -7,7 +7,7 @@ from monsterui.franken import A, ButtonT, DivFullySpaced, P
from requests.exceptions import RequestException
import akkudoktoreos.server.dash.eosstatus as eosstatus
from akkudoktoreos.config.config import get_config
from akkudoktoreos.core.coreabc import get_config
def get_alive(eos_host: str, eos_port: Union[str, int]) -> str:


@@ -206,13 +206,20 @@ def SolutionCard(solution: OptimizationSolution, config: SettingsEOS, data: Opti
else:
continue
# Adjust to similar y-axis 0-point
values_min_max = [
(energy_wh_min, energy_wh_max),
(amt_kwh_min, amt_kwh_max),
(amt_min, amt_max),
(soc_factor_min, soc_factor_max),
]
# First get the maximum factor of the min value relative to the maximum value
min_max_factor = max(
(energy_wh_min * -1.0) / energy_wh_max,
(amt_kwh_min * -1.0) / amt_kwh_max,
(amt_min * -1.0) / amt_max,
(soc_factor_min * -1.0) / soc_factor_max,
)
min_max_factor = 0.0
for value_min, value_max in values_min_max:
if value_max > 0:
value_factor = (value_min * -1.0) / value_max
if value_factor > min_max_factor:
min_max_factor = value_factor
# Adapt the min values to have the same relative min/max factor on all y-axis
energy_wh_min = min_max_factor * energy_wh_max * -1.0
amt_kwh_min = min_max_factor * amt_kwh_max * -1.0
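The zero-guarded loop above can be checked with a small standalone sketch; the sample min/max pairs are illustrative, not real solution data:

```python
# Align several y-axes so each axis keeps the same relative distance
# between its zero line and its minimum. Axes whose max is zero are
# skipped so an all-zero series cannot cause a division by zero.
values_min_max = [
    (-500.0, 2000.0),  # energy in Wh
    (-0.2, 1.0),       # amount in kWh
    (0.0, 0.0),        # all-zero series must not affect the factor
    (-0.1, 0.8),       # SoC factor
]

min_max_factor = 0.0
for value_min, value_max in values_min_max:
    if value_max > 0:
        value_factor = (value_min * -1.0) / value_max
        if value_factor > min_max_factor:
            min_max_factor = value_factor

# The largest min/max ratio wins: 500 / 2000 = 0.25
print(min_max_factor)  # 0.25

# Every axis then gets its minimum recomputed from the shared factor.
energy_wh_min = min_max_factor * 2000.0 * -1.0  # -500.0
amt_kwh_min = min_max_factor * 1.0 * -1.0       # -0.25
```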

File diff suppressed because it is too large.


@@ -12,7 +12,7 @@ from monsterui.core import FastHTML, Theme
from starlette.middleware import Middleware
from starlette.requests import Request
from akkudoktoreos.config.config import get_config
from akkudoktoreos.core.coreabc import get_config
from akkudoktoreos.core.logabc import LOGGING_LEVELS
from akkudoktoreos.core.logging import logging_track_config
from akkudoktoreos.core.version import __version__
@@ -39,7 +39,7 @@ from akkudoktoreos.server.server import (
)
from akkudoktoreos.utils.stringutil import str2bool
config_eos = get_config()
config_eos = get_config(init=True)
# ------------------------------------


@@ -0,0 +1,149 @@
import argparse
from loguru import logger
from akkudoktoreos.core.coreabc import get_config
from akkudoktoreos.core.logabc import LOGGING_LEVELS
from akkudoktoreos.server.server import get_default_host
from akkudoktoreos.utils.stringutil import str2bool
def cli_argument_parser() -> argparse.ArgumentParser:
"""Build argument parser for EOS cli."""
parser = argparse.ArgumentParser(description="Start EOS server.")
parser.add_argument(
"--host",
type=str,
help="Host for the EOS server (default: value from config)",
)
parser.add_argument(
"--port",
type=int,
help="Port for the EOS server (default: value from config)",
)
parser.add_argument(
"--log_level",
type=str,
default="none",
help='Log level for the server console. Options: "critical", "error", "warning", "info", "debug", "trace" (default: "none")',
)
parser.add_argument(
"--reload",
type=str2bool,
default=False,
help="Enable or disable auto-reload. Useful for development. Options: True or False (default: False)",
)
parser.add_argument(
"--startup_eosdash",
type=str2bool,
default=None,
help="Enable or disable automatic EOSdash startup. Options: True or False (default: value from config)",
)
parser.add_argument(
"--run_as_user",
type=str,
help="The unprivileged user account the EOS server shall switch to after performing root-level startup tasks.",
)
return parser
def cli_parse_args(
argv: list[str] | None = None,
) -> tuple[argparse.Namespace, list[str]]:
"""Parse command-line arguments for the EOS CLI.
This function parses known EOS-specific command-line arguments and
returns any remaining unknown arguments unmodified. Unknown arguments
can be forwarded to other subsystems (e.g. Uvicorn).
If ``argv`` is ``None``, arguments are read from ``sys.argv[1:]``.
If ``argv`` is provided, it is used instead.
Args:
argv: Optional list of command-line arguments to parse. If omitted,
the arguments are taken from ``sys.argv[1:]``.
Returns:
A tuple containing:
- A namespace with parsed EOS CLI arguments.
- A list of unparsed (unknown) command-line arguments.
"""
args, args_unknown = cli_argument_parser().parse_known_args(argv)
return args, args_unknown
def cli_apply_args_to_config(args: argparse.Namespace) -> None:
"""Apply parsed CLI arguments to the EOS configuration.
This function updates the EOS configuration with values provided via
the command line. For each parameter, the precedence is:
CLI argument > existing config value > default value
Currently handled arguments:
- log_level: Updates "logging/console_level" in config.
- host: Updates "server/host" in config.
- port: Updates "server/port" in config.
- startup_eosdash: Updates "server/startup_eosdash" in config.
- eosdash_host/port: Initialized if EOSdash is enabled and not already set.
Args:
args: Parsed command-line arguments from argparse.
"""
config_eos = get_config()
# Setup parameters from args, config_eos and default
# Remember parameters in config
# Setup EOS logging level - first to have the other logging messages logged
if args.log_level is not None:
log_level = args.log_level.upper()
# Ensure log_level from command line is in config settings
if log_level in LOGGING_LEVELS:
# Setup console logging level using nested value
# - triggers logging configuration by logging_track_config
config_eos.set_nested_value("logging/console_level", log_level)
logger.debug(f"logging/console_level configuration set by argument to {log_level}")
# Setup EOS server host
if args.host:
host = args.host
logger.debug(f"server/host configuration set by argument to {host}")
elif config_eos.server.host:
host = config_eos.server.host
else:
host = get_default_host()
# Ensure host from command line is in config settings
config_eos.set_nested_value("server/host", host)
# Setup EOS server port
if args.port:
port = args.port
logger.debug(f"server/port configuration set by argument to {port}")
elif config_eos.server.port:
port = config_eos.server.port
else:
port = 8503
# Ensure port from command line is in config settings
config_eos.set_nested_value("server/port", port)
# Setup EOSdash startup
if args.startup_eosdash is not None:
# Ensure startup_eosdash from command line is in config settings
config_eos.set_nested_value("server/startup_eosdash", args.startup_eosdash)
logger.debug(
f"server/startup_eosdash configuration set by argument to {args.startup_eosdash}"
)
if config_eos.server.startup_eosdash:
# Ensure EOSdash host and port config settings are at least set to default values
# Setup EOSdash server host
if config_eos.server.eosdash_host is None:
config_eos.set_nested_value("server/eosdash_host", host)
# Setup EOSdash server port
if config_eos.server.eosdash_port is None:
config_eos.set_nested_value("server/eosdash_port", port + 1)


@@ -8,14 +8,12 @@ from typing import Any, MutableMapping
from loguru import logger
from akkudoktoreos.config.config import get_config
from akkudoktoreos.core.coreabc import get_config
from akkudoktoreos.server.server import (
validate_ip_or_hostname,
wait_for_port_free,
)
config_eos = get_config()
# Loguru to HA stdout
logger.add(sys.stdout, format="{time} | {level} | {message}", enqueue=True)
@@ -277,14 +275,18 @@ async def forward_stream(stream: asyncio.StreamReader, prefix: str = "") -> None
_emit_drop_warning()
# Path to eosdash
eosdash_path = Path(__file__).parent.resolve().joinpath("eosdash.py")
async def run_eosdash_supervisor() -> None:
"""Starts EOSdash, pipes its logs, restarts it if it crashes.
Runs forever.
"""
global eosdash_log_queue
global eosdash_log_queue, eosdash_path
eosdash_path = Path(__file__).parent.resolve().joinpath("eosdash.py")
config_eos = get_config()
while True:
await asyncio.sleep(5)


@@ -90,3 +90,73 @@ def repeat_every(
return wrapped
return decorator
def make_repeated_task(
func: NoArgsNoReturnAnyFuncT,
*,
seconds: float,
wait_first: float | None = None,
max_repetitions: int | None = None,
on_complete: NoArgsNoReturnAnyFuncT | None = None,
on_exception: ExcArgNoReturnAnyFuncT | None = None,
) -> NoArgsNoReturnAsyncFuncT:
"""Create a version of the given function that runs periodically.
This function wraps `func` with the `repeat_every` decorator at runtime,
allowing decorator parameters to be determined dynamically rather than at import time.
Args:
func (Callable[[], None] | Callable[[], Coroutine[Any, Any, None]]):
The function to execute periodically. Must accept no arguments.
seconds (float):
Interval in seconds between repeated calls.
wait_first (float | None, optional):
If provided, the function will wait this many seconds before the first call.
max_repetitions (int | None, optional):
Maximum number of times to repeat the function. If None, repeats indefinitely.
on_complete (Callable[[], None] | Callable[[], Coroutine[Any, Any, None]] | None, optional):
Function to call once the repetitions are complete.
on_exception (Callable[[Exception], None] | Callable[[Exception], Coroutine[Any, Any, None]] | None, optional):
Function to call if an exception is raised by `func`.
Returns:
Callable[[], Coroutine[Any, Any, None]]:
An async function that starts the periodic execution when called.
Usage:
.. code-block:: python
from my_task import my_task
from akkudoktoreos.core.coreabc import get_config
from akkudoktoreos.server.rest.tasks import make_repeated_task
config = get_config()
# Create a periodic task using configuration-dependent interval
repeated_task = make_repeated_task(
my_task,
seconds=config.server.poll_interval,
wait_first=5,
max_repetitions=None
)
# Run the task in the event loop
import asyncio
asyncio.run(repeated_task())
Notes:
- This pattern avoids starting the loop at import time.
- Arguments such as `seconds` can be read from runtime sources (config, CLI args, environment variables).
- The returned function must be awaited to start the periodic loop.
"""
# Return decorated function
return repeat_every(
seconds=seconds,
wait_first=wait_first,
max_repetitions=max_repetitions,
on_complete=on_complete,
on_exception=on_exception,
)(func)
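The runtime-wrapping idea behind `make_repeated_task` — applying a decorator factory to a function only after its parameters are known — can be sketched standalone. This simplified `repeat` is illustrative and not the project's `repeat_every`:

```python
import asyncio
from typing import Awaitable, Callable


def repeat(*, seconds: float, max_repetitions: int) -> Callable[[Callable[[], None]], Callable[[], Awaitable[None]]]:
    """Simplified decorator factory: call func every `seconds`, at most `max_repetitions` times."""
    def decorator(func: Callable[[], None]) -> Callable[[], Awaitable[None]]:
        async def loop() -> None:
            for _ in range(max_repetitions):
                func()
                await asyncio.sleep(seconds)
        return loop
    return decorator


calls = []


def my_task() -> None:
    calls.append(1)


# Parameters come from runtime sources (config, CLI), not import time.
interval = 0.01
repeated_task = repeat(seconds=interval, max_repetitions=3)(my_task)

# The returned async function starts the loop only when awaited.
asyncio.run(repeated_task())
print(len(calls))  # 3
```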


@@ -0,0 +1,390 @@
"""Retention Manager for Akkudoktor-EOS server.
This module provides a single long-running background task that owns the scheduling of all periodic
server-maintenance jobs (cache cleanup, DB autosave, config reload, …).
Responsibilities:
- Run a fast "heartbeat" loop (default 5 s) — the *compaction tick*.
- Maintain a registry of ``ManagedJob`` entries, each with its own interval.
- Re-read the live configuration on every tick so interval changes take effect
immediately without a server restart.
- Track per-job state: last run time, last duration, last error, run count.
- Expose that state for health-check / metrics endpoints.
Example:
Typical usage inside your FastAPI lifespan::
from akkudoktoreos.core.coreabc import get_config
from akkudoktoreos.server.rest.retention_manager import RetentionManager
from akkudoktoreos.server.rest.tasks import make_repeated_task
manager = RetentionManager(get_config().get_nested_value)
manager.register("cache_cleanup", cache_cleanup_fn, interval_attr="server/cache_cleanup_interval")
manager.register("db_autosave", db_autosave_fn, interval_attr="server/db_autosave_interval")
@asynccontextmanager
async def lifespan(app: FastAPI):
tick_task = make_repeated_task(manager.tick, seconds=5, wait_first=2)
await tick_task()
yield
"""
from __future__ import annotations
import asyncio
import time
from dataclasses import dataclass
from typing import Any, Callable, Coroutine, Optional, Union
from loguru import logger
from starlette.concurrency import run_in_threadpool
NoArgsNoReturnAnyFuncT = Union[Callable[[], None], Callable[[], Coroutine[Any, Any, None]]]
ExcArgNoReturnAnyFuncT = Union[
Callable[[Exception], None], Callable[[Exception], Coroutine[Any, Any, None]]
]
ConfigGetterFuncT = Callable[[str], Any]
# ---------------------------------------------------------------------------
# Job state — one per registered maintenance task
# ---------------------------------------------------------------------------
@dataclass
class JobState:
"""Runtime state tracked for a single managed job.
Attributes:
name: Unique human-readable job name used in logs and metrics.
func: The maintenance callable. Must accept no arguments.
interval_attr: Key passed to ``config_getter`` to retrieve the interval in seconds
for this job.
fallback_interval: Interval in seconds used when the key is not found or returns zero.
config_getter: Callable that accepts a string key and returns the corresponding
configuration value. Invoked with ``interval_attr`` to obtain the interval
in seconds.
on_exception: Optional callable invoked with the raised exception whenever
``func`` fails. May be sync or async.
last_run_at: Monotonic timestamp of the last completed run; ``0.0`` means never run.
last_duration: How long the last run took, in seconds.
last_error: String representation of the last exception, or ``None`` if the last run succeeded.
run_count: Total number of completed runs (successful or not).
is_running: ``True`` while the job coroutine is currently executing.
"""
name: str
func: NoArgsNoReturnAnyFuncT
interval_attr: str # key passed to config_getter to obtain the interval in seconds
fallback_interval: float # used when the key is not found or returns zero
config_getter: ConfigGetterFuncT # callable(key: str) -> Any; returns interval in seconds
on_exception: Optional[ExcArgNoReturnAnyFuncT] = None # optional cleanup/alerting hook
# mutable state
last_run_at: float = 0.0 # monotonic timestamp; 0.0 means "never run"
last_duration: float = 0.0 # seconds the job took
last_error: Optional[str] = None
run_count: int = 0
is_running: bool = False
def interval(self) -> Optional[float]:
"""Retrieve the current interval by calling ``config_getter`` with ``interval_attr``.
Returns ``None`` when the config value is ``None``, which signals that the
job is disabled and must never fire. Falls back to ``fallback_interval``
when the key is not found.
Returns:
The interval in seconds, or ``None`` if the job is disabled.
"""
try:
value = self.config_getter(self.interval_attr)
if value is None:
return None
return float(value) if value else self.fallback_interval
except (KeyError, IndexError):
logger.warning(
"RetentionManager: config key '{}' not found, using fallback {}s",
self.interval_attr,
self.fallback_interval,
)
return self.fallback_interval
def is_due(self) -> bool:
"""Check whether enough time has elapsed since the last run to execute this job again.
Returns ``False`` immediately when `interval` returns ``None``
(job is disabled), so a disabled job never fires regardless of when it
last ran.
Returns:
``True`` if the job should be executed on this tick, ``False`` otherwise.
"""
interval = self.interval()
if interval is None:
return False
return (time.monotonic() - self.last_run_at) >= interval
def summary(self) -> dict:
"""Build a serialisable snapshot of the job's current state.
Returns:
A dictionary suitable for JSON serialisation, containing the job name,
interval key, last run timestamp, last duration, last error,
run count, and whether the job is currently running.
"""
return {
"name": self.name,
"interval_attr": self.interval_attr,
"interval_s": self.interval(),
"last_run_at": self.last_run_at,
"last_duration_s": round(self.last_duration, 4),
"last_error": self.last_error,
"run_count": self.run_count,
"is_running": self.is_running,
}
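A stripped-down version of the interval / due-check logic shows how a `None` config value disables a job while a missing key falls back; the `MiniJob` class here is an illustrative reduction of `JobState`, not the real implementation:

```python
import time
from dataclasses import dataclass
from typing import Any, Callable, Optional


@dataclass
class MiniJob:
    interval_attr: str
    fallback_interval: float
    config_getter: Callable[[str], Any]
    last_run_at: float = 0.0

    def interval(self) -> Optional[float]:
        try:
            value = self.config_getter(self.interval_attr)
            if value is None:
                return None  # job explicitly disabled
            return float(value) if value else self.fallback_interval
        except KeyError:
            return self.fallback_interval  # key missing -> fallback

    def is_due(self) -> bool:
        interval = self.interval()
        if interval is None:
            return False  # disabled jobs never fire
        return (time.monotonic() - self.last_run_at) >= interval


# Hypothetical config keys for illustration.
config = {"server/cache_cleanup_interval": 60, "server/db_autosave_interval": None}

enabled = MiniJob("server/cache_cleanup_interval", 300.0, config.__getitem__)
disabled = MiniJob("server/db_autosave_interval", 300.0, config.__getitem__)
missing = MiniJob("server/unknown_interval", 300.0, config.__getitem__)

print(enabled.interval())   # 60.0
print(disabled.is_due())    # False
print(missing.interval())   # 300.0
```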
# ---------------------------------------------------------------------------
# Retention Manager
# ---------------------------------------------------------------------------
class RetentionManager:
"""Orchestrates all periodic server-maintenance jobs.
The manager itself is driven by an external ``make_repeated_task`` heartbeat
(the *compaction tick*). A ``config_getter`` callable — accepting a string key
and returning the corresponding value — is supplied at initialisation and
stored on every registered job, keeping the manager decoupled from any
specific config implementation.
Jobs are launched as independent ``asyncio.Task`` objects so they run
concurrently without blocking the tick. Call `shutdown` during
application teardown to wait for any in-flight tasks to complete before
the event loop closes. A configurable shutdown_timeout prevents the
wait from blocking indefinitely; jobs still running after the timeout are
reported by name but not cancelled.
"""
def __init__(
self,
config_getter: ConfigGetterFuncT,
*,
shutdown_timeout: float = 30.0,
) -> None:
"""Initialise the manager with a configuration accessor.
Args:
config_getter: Callable that accepts a string key and returns the
corresponding configuration value. Used by each registered job
to look up its interval in seconds.
shutdown_timeout: Maximum number of seconds to wait for in-flight
jobs to finish during `shutdown`. If the timeout elapses
before all tasks complete, an error is logged and the names of
the still-running jobs are reported. The tasks are not cancelled
so they may continue running until the event loop closes.
Defaults to 30.0.
Example::
manager = RetentionManager(get_config().get_nested_value, shutdown_timeout=60.0)
"""
self._config_getter = config_getter
self._shutdown_timeout = shutdown_timeout
self._jobs: dict[str, JobState] = {}
self._running_tasks: set[asyncio.Task] = set()
# ------------------------------------------------------------------
# Registration
# ------------------------------------------------------------------
def register(
self,
name: str,
func: NoArgsNoReturnAnyFuncT,
*,
interval_attr: str,
fallback_interval: float = 300.0,
on_exception: Optional[ExcArgNoReturnAnyFuncT] = None,
) -> None:
"""Register a maintenance function with the manager.
Args:
name: Unique human-readable job name used in logs and metrics.
func: The maintenance callable. Must accept no arguments.
interval_attr: Key passed to ``config_getter`` to retrieve the interval
in seconds for this job. When the config value is ``None`` the job
is treated as disabled and will never fire.
fallback_interval: Seconds to use when the config attribute is missing or zero.
Defaults to ``300.0``.
on_exception: Optional callable invoked with the raised exception whenever
``func`` fails. Useful for cleanup or alerting. May be sync or async.
Raises:
ValueError: If a job with the given ``name`` is already registered.
"""
if name in self._jobs:
raise ValueError(f"RetentionManager: job '{name}' is already registered")
self._jobs[name] = JobState(
name=name,
func=func,
interval_attr=interval_attr,
fallback_interval=fallback_interval,
config_getter=self._config_getter,
on_exception=on_exception,
)
logger.info("RetentionManager: registered job '{}' (config: {})", name, interval_attr)
def unregister(self, name: str) -> None:
"""Remove a previously registered job from the manager.
If no job with the given name exists, this is a no-op.
Args:
name: The name of the job to remove.
"""
self._jobs.pop(name, None)
# ------------------------------------------------------------------
# Tick — called by the external heartbeat loop
# ------------------------------------------------------------------
async def tick(self) -> None:
"""Single compaction tick: check every job and fire those that are due.
Each job resolves its own interval via the ``config_getter`` captured at
registration time. Jobs whose interval is ``None`` are silently skipped
(disabled). Due jobs are launched as independent ``asyncio.Task`` objects
so they run concurrently without blocking the tick. Each task is tracked
in ``_running_tasks`` and removed automatically on completion, allowing
`shutdown` to await all of them gracefully.
Jobs that are still running from a previous tick are skipped to prevent
overlapping executions.
Note:
This is the function you pass to ``make_repeated_task``.
"""
due = [job for job in self._jobs.values() if not job.is_running and job.is_due()]
if not due:
return
logger.debug("RetentionManager: {} job(s) due this tick", len(due))
for job in due:
task = asyncio.ensure_future(self._run_job(job))
task.set_name(job.name) # used by shutdown() to report timed-out jobs by name
self._running_tasks.add(task)
task.add_done_callback(self._running_tasks.discard)
async def shutdown(self) -> None:
"""Wait for all currently running job tasks to complete.
Waits up to shutdown_timeout seconds (configured at initialisation)
for in-flight tasks to finish. If the timeout elapses before all tasks
complete, an error is logged listing the names of the jobs that are still
running. Those tasks are **not** cancelled — they continue until the event
loop closes — but `shutdown` returns so that application teardown
is not blocked indefinitely.
Returns immediately if no tasks are running.
Example::
@asynccontextmanager
async def lifespan(app: FastAPI):
tick_task = make_repeated_task(manager.tick, seconds=5, wait_first=2)
await tick_task()
yield
await manager.shutdown()
"""
if not self._running_tasks:
return
logger.info(
"RetentionManager: shutdown — waiting up to {}s for {} task(s) to finish",
self._shutdown_timeout,
len(self._running_tasks),
)
done, pending = await asyncio.wait(self._running_tasks, timeout=self._shutdown_timeout)
if pending:
# Task names were set to the job name when the task was created in tick().
pending_names = [t.get_name() for t in pending]
logger.error(
"RetentionManager: shutdown timed out after {}s — {} job(s) still running: {}",
self._shutdown_timeout,
len(pending),
pending_names,
)
else:
logger.info("RetentionManager: all tasks finished, shutdown complete")
self._running_tasks.clear()
# ------------------------------------------------------------------
# Internal helpers
# ------------------------------------------------------------------
async def _run_job(self, job: JobState) -> None:
"""Execute a single job and update its state regardless of outcome.
Handles both async and sync callables for both the main function and the
optional ``on_exception`` hook. Exceptions from ``func`` are caught, logged,
stored on the job, and forwarded to ``on_exception`` if provided, so a
failing job never disrupts other concurrent jobs or future ticks.
Args:
job: The `JobState` instance to execute.
"""
job.is_running = True
start = time.monotonic()
logger.debug("RetentionManager: starting job '{}'", job.name)
try:
if asyncio.iscoroutinefunction(job.func):
await job.func()
else:
await run_in_threadpool(job.func)
job.last_error = None
logger.debug(
"RetentionManager: job '{}' completed in {:.3f}s",
job.name,
time.monotonic() - start,
)
except Exception as exc: # noqa: BLE001
job.last_error = str(exc)
logger.exception("RetentionManager: job '{}' raised an exception: {}", job.name, exc)
if job.on_exception is not None:
if asyncio.iscoroutinefunction(job.on_exception):
await job.on_exception(exc)
else:
await run_in_threadpool(job.on_exception, exc)
finally:
job.last_duration = time.monotonic() - start
job.last_run_at = time.monotonic()
job.run_count += 1
job.is_running = False
# ------------------------------------------------------------------
# Observability
# ------------------------------------------------------------------
def status(self) -> list[dict]:
"""Return a snapshot of every job's state for health or metrics endpoints.
Returns:
A list of dictionaries, one per registered job, each produced by
`JobState.summary`.
"""
return [job.summary() for job in self._jobs.values()]
def __repr__(self) -> str: # pragma: no cover
return f"<RetentionManager jobs={list(self._jobs)}>"
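The timeout-bounded shutdown relies on named `asyncio.Task` objects and `asyncio.wait`; a minimal sketch of that pattern, with illustrative job names and durations (unlike the real manager, the sketch cancels the straggler to exit cleanly):

```python
import asyncio


async def job(duration: float) -> None:
    await asyncio.sleep(duration)


async def main() -> list[str]:
    # Name each task so a timed-out shutdown can report stragglers by name.
    fast = asyncio.ensure_future(job(0.01))
    fast.set_name("cache_cleanup")
    slow = asyncio.ensure_future(job(10.0))
    slow.set_name("db_autosave")

    # Wait at most 0.1 s; tasks still pending are reported, not cancelled.
    done, pending = await asyncio.wait({fast, slow}, timeout=0.1)
    pending_names = [t.get_name() for t in pending]

    slow.cancel()  # clean up for the sketch; the real manager lets it run
    return pending_names


print(asyncio.run(main()))  # ['db_autosave']
```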


@@ -14,6 +14,7 @@ from loguru import logger
from pydantic import Field, field_validator
from akkudoktoreos.config.configabc import SettingsBaseModel
from akkudoktoreos.core.coreabc import get_config
def get_default_host() -> str:
@@ -258,8 +259,6 @@ def fix_data_directories_permissions(run_as_user: Optional[str] = None) -> None:
run_as_user (Optional[str]): The user who should own the data directories and files.
Defaults to current one.
"""
from akkudoktoreos.config.config import get_config
config_eos = get_config()
base_dirs = [


@@ -1868,6 +1868,28 @@ def to_duration(
raise ValueError(error_msg)
# Timezone names that are semantically identical to UTC and should be
# canonicalized. Keys are lower-cased for case-insensitive matching.
_UTC_ALIASES: dict[str, str] = {
"utc": "UTC",
"gmt": "UTC",
"z": "UTC",
"etc/utc": "UTC",
"etc/gmt": "UTC",
"etc/gmt+0": "UTC",
"etc/gmt-0": "UTC",
"etc/gmt0": "UTC",
"etc/greenwich": "UTC",
"etc/universal": "UTC",
"etc/zulu": "UTC",
}
def _canonicalize_tz_name(name: str) -> str:
"""Return 'UTC' when *name* is a known UTC alias, otherwise return unchanged."""
return _UTC_ALIASES.get(name.lower(), name)
@overload
def to_timezone(
utc_offset: Optional[float] = None,
@@ -1891,6 +1913,9 @@ def to_timezone(
) -> Union[Timezone, str]:
"""Determines the timezone either by UTC offset, geographic location, or local system timezone.
Timezone names that are semantically equivalent to UTC (e.g. ``GMT``, ``Z``,
``Etc/GMT``) are canonicalized to ``"UTC"`` before returning.
By default, it returns a `Timezone` object representing the timezone.
If `as_string` is set to `True`, the function returns the timezone name as a string instead.
@@ -1925,7 +1950,15 @@ def to_timezone(
if not -24 <= utc_offset <= 24:
raise ValueError("UTC offset must be within the range -24 to +24 hours.")
# Convert UTC offset to an Etc/GMT-compatible format
# Offset of exactly 0 is plain UTC; no need for Etc/GMT+0 etc.
if utc_offset == 0:
if as_string:
return "UTC"
return pendulum.timezone("UTC")
# Convert UTC offset to an Etc/GMT-compatible format.
# NOTE: Etc/GMT sign convention is *inverted* relative to the common
# expectation: Etc/GMT+5 means UTC-5. We therefore flip the sign.
hours = int(utc_offset)
minutes = int((abs(utc_offset) - abs(hours)) * 60)
sign = "-" if utc_offset >= 0 else "+"
@@ -1951,6 +1984,8 @@ def to_timezone(
except Exception as e:
raise ValueError(f"Error determining timezone for location {location}: {e}") from e
tz_name = _canonicalize_tz_name(tz_name)
if as_string:
return tz_name
return pendulum.timezone(tz_name)
@@ -1958,7 +1993,9 @@ def to_timezone(
# Fallback to local timezone
local_tz = pendulum.local_timezone()
if isinstance(local_tz, str):
local_tz = pendulum.timezone(local_tz)
local_tz = pendulum.timezone(_canonicalize_tz_name(local_tz))
else:
local_tz = pendulum.timezone(_canonicalize_tz_name(local_tz.name))
if as_string:
return local_tz.name
return local_tz
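Both the alias canonicalization and the inverted Etc/GMT sign convention can be exercised standalone. This is a minimal re-implementation for illustration (abridged alias table, whole-hour offsets only; the real code also handles minutes):

```python
# UTC aliases, lower-cased for case-insensitive matching (abridged).
_UTC_ALIASES = {"utc": "UTC", "gmt": "UTC", "z": "UTC", "etc/utc": "UTC", "etc/gmt": "UTC"}


def canonicalize_tz_name(name: str) -> str:
    """Return 'UTC' when name is a known UTC alias, otherwise return unchanged."""
    return _UTC_ALIASES.get(name.lower(), name)


def offset_to_etc_gmt(utc_offset: float) -> str:
    """Map a UTC offset in hours to an Etc/GMT zone name.

    Etc/GMT's sign is inverted relative to the common expectation:
    Etc/GMT+5 means UTC-5, so the sign is flipped.
    An offset of exactly 0 is plain UTC.
    """
    if utc_offset == 0:
        return "UTC"
    hours = int(utc_offset)
    sign = "-" if utc_offset >= 0 else "+"
    return f"Etc/GMT{sign}{abs(hours)}"


print(canonicalize_tz_name("Etc/GMT"))  # UTC
print(offset_to_etc_gmt(5))             # Etc/GMT-5
print(offset_to_etc_gmt(-3))            # Etc/GMT+3
print(offset_to_etc_gmt(0))             # UTC
```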


@@ -11,8 +11,7 @@ import numpy as np
import pendulum
from matplotlib.backends.backend_pdf import PdfPages
from akkudoktoreos.core.coreabc import ConfigMixin
from akkudoktoreos.core.ems import get_ems
from akkudoktoreos.core.coreabc import ConfigMixin, get_ems
from akkudoktoreos.optimization.genetic.genetic import GeneticOptimizationParameters
from akkudoktoreos.utils.datetimeutil import DateTime, to_datetime