feat: add Home Assistant and NodeRED adapters (#764)

Adapters for Home Assistant and NodeRED integration are added.
Akkudoktor-EOS can now be run as a Home Assistant add-on or standalone.

As a Home Assistant add-on, EOS uses ingress to fully integrate the EOSdash
dashboard into Home Assistant.

The change also includes several bug fixes that are not directly related to the
adapter implementation but are necessary to keep EOS running properly and to
test and document the changes.

* fix: development version scheme

  The development versioning scheme is adapted to fit Docker and
  Home Assistant expectations. The new scheme is x.y.z for releases and
  x.y.z.dev<hash> for development versions. The hash consists of digits only,
  as expected by Home Assistant. The development suffix starts with .dev,
  as expected by Docker.
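  The conversion can be sketched as follows. This is a minimal illustration of the scheme described above; the sha256 input is just an example, and the constant names mirror the version code further down in this commit:

```python
import hashlib

VERSION_BASE = "0.2.0.dev"  # development base; a release would be "0.2.0"
VERSION_DEV_PRECISION = 8   # number of digits appended after ".dev"

def dev_version(source_hash_hex: str) -> str:
    """Build an x.y.z.dev<digits> version string.

    Home Assistant only accepts digits after ".dev", so the hex digest
    is reduced to a fixed number of decimal digits.
    """
    if not VERSION_BASE.endswith("dev"):
        return VERSION_BASE
    hash_value = int(source_hash_hex, 16)
    digits = str(hash_value % (10**VERSION_DEV_PRECISION)).zfill(VERSION_DEV_PRECISION)
    return f"{VERSION_BASE}{digits}"

print(dev_version(hashlib.sha256(b"example").hexdigest()))
```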

* fix: use mean value in interval on resampling for array

  When downsampling data, use the mean value of all values within the new
  sampling interval.
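  With pandas, the difference between the old and new behavior looks roughly like this (hypothetical 15-minute data downsampled to one hour):

```python
import pandas as pd

# Hypothetical 15-minute series downsampled to a 1-hour interval.
idx = pd.date_range("2025-01-01 00:00", periods=8, freq="15min")
series = pd.Series([1.0, 2.0, 3.0, 4.0, 10.0, 10.0, 10.0, 10.0], index=idx)

# Before: .first() kept only the first sample per hour -> [1.0, 10.0]
# After: .mean() averages all samples within the new interval.
resampled = series.resample("1h").mean()
print(resampled.tolist())  # [2.5, 10.0]
```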

* fix: default battery ev soc and appliance wh

  Make the genetic simulation return default values for the
  battery SoC, electric vehicle SoC and appliance load if these
  assets are not used.
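  A sketch of the idea, with assumed key and attribute names (the real genetic simulation uses its own result schema):

```python
def simulation_defaults(battery=None, ev=None, appliance=None) -> dict:
    """Return per-asset results, falling back to defaults for unused assets."""
    return {
        "battery_soc_pct": battery.soc if battery else 0.0,
        "ev_soc_pct": ev.soc if ev else 0.0,
        "appliance_wh": appliance.load_wh if appliance else 0.0,
    }

# No assets configured -> defaults instead of missing keys or None values.
print(simulation_defaults())
```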

* fix: import json string

  Strip outer quotes from JSON strings on import to comply with the
  expectations of json.loads().
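  A minimal standalone sketch of the stripping logic (note the trade-off: a bare JSON string literal such as '"hello"' would lose its quotes and fail to parse, so this leniency targets quoted objects and arrays):

```python
import json

def loads_lenient(json_str: str):
    """Strip one pair of outer quotes, then parse with json.loads()."""
    json_str = json_str.strip()
    if (json_str.startswith("'") and json_str.endswith("'")) or (
        json_str.startswith('"') and json_str.endswith('"')
    ):
        json_str = json_str[1:-1].strip()  # strip outer quotes and inner whitespace
    return json.loads(json_str)

print(loads_lenient('\'{"a": 1}\''))  # {'a': 1}
```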

* fix: default interval definition for import data

  The default interval must be given in a lowercase human-readable
  definition to be accepted by pendulum.

* fix: clearoutside schema change

* feat: add adapters for integrations

  Adapters for Home Assistant and NodeRED integration are added.
  Akkudoktor-EOS can now be run as a Home Assistant add-on or standalone.

  As a Home Assistant add-on, EOS uses ingress to fully integrate the EOSdash
  dashboard into Home Assistant.

* feat: allow eos to be started with root permissions and drop privileges

  Home Assistant starts all add-ons with root permissions. EOS now drops
  root permissions if an applicable user is defined by the parameter
  --run_as_user. The docker image defines the user eos to be used.
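  The usual POSIX pattern for dropping privileges looks roughly like this; a hedged sketch, since the actual --run_as_user handling in EOS may differ:

```python
import os
import pwd

def drop_privileges(username: str) -> None:
    """Drop root privileges to the given user, if currently running as root."""
    if os.getuid() != 0:
        return  # not root, nothing to drop
    user = pwd.getpwnam(username)
    os.setgid(user.pw_gid)  # drop group first, while we still may
    os.setuid(user.pw_uid)  # then drop the user id (irreversible)
    os.environ["HOME"] = user.pw_dir
```

  The order matters: setgid must come before setuid, because after setuid the process no longer has the permission to change its group.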

* feat: make eos supervise and monitor EOSdash

  EOS now not only starts EOSdash but also monitors it during runtime
  and restarts it on fault. EOSdash logging is captured by EOS and
  forwarded to the EOS log to provide better visibility.
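  The supervisor idea can be sketched as a loop that runs the child, forwards its output, and restarts it on exit. This is a minimal illustration only; the actual EOS implementation differs (async, loguru forwarding, shutdown handling). The `restarts` parameter is introduced here just to make the loop bounded:

```python
import subprocess
import sys
import time

def supervise(cmd: list[str], restarts: int = -1, restart_delay: float = 0.5) -> int:
    """Run a child process, forward its output, restart it on exit.

    restarts=-1 restarts forever; restarts=0 runs the child exactly once.
    """
    while True:
        proc = subprocess.Popen(
            cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
        )
        assert proc.stdout is not None
        for line in proc.stdout:  # capture the child's logging ...
            print(f"[EOSdash] {line.rstrip()}")  # ... and forward to our log
        proc.wait()
        if restarts == 0:
            return proc.returncode
        restarts -= 1
        time.sleep(restart_delay)

rc = supervise([sys.executable, "-c", "print('dash ready')"], restarts=0)
```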

* feat: add duration to string conversion

  Make to_duration also return the duration as a string on request.
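  Illustrative only: a toy to_duration with the new string option. The real EOS helper accepts many input formats; here only timedelta/seconds input and the "pandas" style (as used by the resampling code in this commit) are sketched:

```python
from datetime import timedelta
from typing import Optional, Union

def to_duration(value: Union[timedelta, float], as_string: Optional[str] = None):
    """Convert to a duration, or to its string form when requested."""
    duration = value if isinstance(value, timedelta) else timedelta(seconds=float(value))
    if as_string == "pandas":
        # pandas accepts e.g. "3600s" as a resample frequency
        return f"{int(duration.total_seconds())}s"
    return duration

print(to_duration(timedelta(hours=1), as_string="pandas"))  # 3600s
```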

* chore: Use info logging to report missing optimization parameters

  In parameter preparation for automatic optimization, an error was logged for
  missing parameters. Logging is now done at the info level.

* chore: make EOSdash use the EOS data directory for file import/ export

  EOSdash uses the EOS data directory for file import/export by default.
  This also allows the configuration import/export function to be used
  within docker images.

* chore: improve EOSdash config tab display

  Improve display of JSON code and add more forms for config value update.

* chore: make docker image file system layout similar to home assistant

  Only the /data directory is used for persistent data. It is handled as a
  docker volume. The /data volume is mapped to ~/.local/share/net.akkudoktor.eos
  when using docker compose.

* chore: add home assistant add-on development environment

  Add a VSCode devcontainer and task definitions for Home Assistant add-on
  development.

* chore: improve documentation

This commit is contained in:
Bobby Noelte
2025-12-30 22:08:21 +01:00
committed by GitHub
parent 02c794460f
commit 58d70e417b
111 changed files with 6815 additions and 1199 deletions


@@ -27,7 +27,10 @@ class CacheCommonSettings(SettingsBaseModel):
# Do not make this a pydantic computed field. The pydantic model must be fully initialized
# to have access to config.general, which may not be the case if it is a computed field.
def path(self) -> Optional[Path]:
"""Compute cache path based on general.data_folder_path."""
"""Computed cache path based on general.data_folder_path."""
if self.config.general.home_assistant_addon:
# Only /data is persistent for home assistant add-on
return Path("/data/cache")
data_cache_path = self.config.general.data_folder_path
if data_cache_path is None or self.subpath is None:
return None


@@ -18,12 +18,53 @@ from loguru import logger
from akkudoktoreos.core.decorators import classproperty
from akkudoktoreos.utils.datetimeutil import DateTime
adapter_eos: Any = None
config_eos: Any = None
measurement_eos: Any = None
prediction_eos: Any = None
ems_eos: Any = None
class AdapterMixin:
"""Mixin class for managing EOS adapter.
This class serves as a foundational component for EOS-related classes requiring access
to the global EOS adapters. It provides an `adapter` property that dynamically retrieves
the adapter instance.
Usage:
Subclass this base class to gain access to the `adapter` attribute, which retrieves the
global adapter instance lazily to avoid import-time circular dependencies.
Attributes:
adapter (Adapter): Property to access the global EOS adapter.
Example:
.. code-block:: python
class MyEOSClass(AdapterMixin):
def my_method(self):
self.adapter.update_date()
"""
@classproperty
def adapter(cls) -> Any:
"""Convenience class method/ attribute to retrieve the EOS adapters.
Returns:
Adapter: The adapters.
"""
# avoid circular dependency at import time
global adapter_eos
if adapter_eos is None:
from akkudoktoreos.adapter.adapter import get_adapter
adapter_eos = get_adapter()
return adapter_eos
class ConfigMixin:
"""Mixin class for managing EOS configuration data.


@@ -1018,7 +1018,7 @@ class DataSequence(DataBase, MutableSequence):
end_datetime: Optional[DateTime] = None,
interval: Optional[Duration] = None,
fill_method: Optional[str] = None,
dropna: Optional[bool] = True,
) -> NDArray[Shape["*"], Any]:
"""Extract an array indexed by fixed time intervals from data records within an optional date range.
@@ -1032,17 +1032,19 @@ class DataSequence(DataBase, MutableSequence):
- 'ffill': Forward fill missing values.
- 'bfill': Backward fill missing values.
- 'none': Defaults to 'linear' for numeric values, otherwise 'ffill'.
dropna: (bool, optional): Whether to drop NAN/ None values before processing.
Defaults to True.
Returns:
np.ndarray: A NumPy Array of the values at the chosen frequency extracted from the
specified key.
Raises:
KeyError: If the specified key is not found in any of the DataRecords.
"""
self._validate_key(key)
# Validate fill method
if fill_method not in ("ffill", "bfill", "linear", "none", None):
raise ValueError(f"Unsupported fill method: {fill_method}")
@@ -1050,13 +1052,17 @@ class DataSequence(DataBase, MutableSequence):
start_datetime = to_datetime(start_datetime, to_maxtime=False) if start_datetime else None
end_datetime = to_datetime(end_datetime, to_maxtime=False) if end_datetime else None
resampled = None
if interval is None:
interval = to_duration("1 hour")
resample_freq = "1h"
else:
resample_freq = to_duration(interval, as_string="pandas")
# Load raw lists (already sorted & filtered)
dates, values = self.key_to_lists(key=key, dropna=dropna)
values_len = len(values)
# Bring lists into shape
if values_len < 1:
# No values, assume at least one value set to None
if start_datetime is not None:
@@ -1092,40 +1098,40 @@ class DataSequence(DataBase, MutableSequence):
dates.append(end_datetime)
values.append(values[-1])
# Construct series
series = pd.Series(values, index=pd.DatetimeIndex(dates), name=key)
if series.index.inferred_type != "datetime64":
raise TypeError(
f"Expected DatetimeIndex, but got {type(series.index)} "
f"infered to {series.index.inferred_type}: {series}"
)
# Determine default fill method depending on dtype
if fill_method is None:
    if pd.api.types.is_numeric_dtype(series):
        fill_method = "linear"
    else:
        fill_method = "ffill"
# Perform the resampling
if pd.api.types.is_numeric_dtype(series):
    # numeric -> use the mean of all values within the new interval
    resampled = series.resample(resample_freq, origin=resample_origin).mean()
else:
    # non-numeric -> fall back to the first value within the interval
    resampled = series.resample(resample_freq, origin=resample_origin).first()
# Handle missing values after resampling
if fill_method == "linear" and pd.api.types.is_numeric_dtype(series):
    resampled = resampled.interpolate("linear")
elif fill_method == "ffill":
    resampled = resampled.ffill()
elif fill_method == "bfill":
    resampled = resampled.bfill()
elif fill_method == "none":
    pass
else:
    raise ValueError(f"Unsupported fill method: {fill_method}")
logger.debug(
"Resampled for '{}' with length {}: {}...{}",
@@ -1141,6 +1147,16 @@ class DataSequence(DataBase, MutableSequence):
if end_datetime is not None and len(resampled) > 0:
resampled = resampled.truncate(after=end_datetime.subtract(seconds=1))
array = resampled.values
# Convert NaN to None if there are actually NaNs
if (
isinstance(array, np.ndarray)
and np.issubdtype(array.dtype.type, np.floating)
and pd.isna(array).any()
):
array = array.astype(object)
array[pd.isna(array)] = None
logger.debug(
"Array for '{}' with length {}: {}...{}", key, len(array), array[:10], array[-10:]
)
@@ -1691,6 +1707,14 @@ class DataImportMixin:
}
"""
# Strip quotes if provided - does not affect unquoted strings
json_str = json_str.strip() # strip white space at start and end
if (json_str.startswith("'") and json_str.endswith("'")) or (
json_str.startswith('"') and json_str.endswith('"')
):
json_str = json_str[1:-1] # strip outer quotes
json_str = json_str.strip() # strip remaining white space at start and end
# Try pandas dataframe with orient="split"
try:
import_data = PydanticDateTimeDataFrame.model_validate_json(json_str)
@@ -1720,10 +1744,15 @@ class DataImportMixin:
logger.debug(f"PydanticDateTimeData import: {error_msg}")
# Use simple dict format
try:
import_data = json.loads(json_str)
self.import_from_dict(
import_data, key_prefix=key_prefix, start_datetime=start_datetime, interval=interval
)
except Exception as e:
error_msg = f"Invalid JSON string '{json_str}': {e}"
logger.debug(error_msg)
raise ValueError(error_msg) from e
def import_from_file(
self,


@@ -25,11 +25,11 @@ class classproperty:
Methods:
__get__: Retrieves the value of the class property by calling the
decorated method on the class.
Parameters:
fget (Callable[[Any], Any]): A method that takes the class as an
argument and returns a value.
Raises:
RuntimeError: If `fget` is not defined when `__get__` is called.


@@ -10,9 +10,11 @@ Demand Driven Based Control.
import uuid
from abc import ABC, abstractmethod
from collections import defaultdict
from enum import Enum
from typing import Annotated, Literal, Optional, Union
from loguru import logger
from pydantic import Field, computed_field, model_validator
from akkudoktoreos.core.pydantic import PydanticBaseModel
@@ -2257,20 +2259,60 @@ class EnergyManagementPlan(PydanticBaseModel):
self.valid_from = to_datetime()
self.valid_until = None
def get_resources(self) -> list[str]:
"""Retrieves the resource_ids for the resources the plan currently holds instructions for.
Returns a list of resource ids.
"""
resource_ids = []
for instr in self.instructions:
resource_id = instr.resource_id
if resource_id not in resource_ids:
resource_ids.append(resource_id)
return resource_ids
def get_active_instructions(
self, now: Optional[DateTime] = None
) -> list["EnergyManagementInstruction"]:
"""Retrieves the currently active instruction for each resource at the specified time.
Semantics:
- For each resource, consider only instructions with execution_time <= now.
- Choose the instruction with the latest execution_time (the most recent).
- If that instruction has a duration (timedelta), it's active only if now < execution_time + duration.
- If that instruction has no duration (None), treat it as open-ended (active until superseded).
Returns a list with at most one instruction per resource (the active one).
"""
now = now or to_datetime()
# Group instructions by resource_id
by_resource: dict[str, list["EnergyManagementInstruction"]] = defaultdict(list)
for instr in self.instructions:
    # skip instructions scheduled in the future
    if instr.execution_time <= now:
        by_resource[instr.resource_id].append(instr)
active: list["EnergyManagementInstruction"] = []
for resource_id, instrs in by_resource.items():
    if len(instrs) == 0:
        # No instructions, there shall be at least one
        error_msg = f"No instructions for {resource_id}"
        logger.error(error_msg)
        raise ValueError(error_msg)
    # pick latest instruction by execution_time
    latest = max(instrs, key=lambda i: i.execution_time)
    instr_duration = latest.duration()  # expected: Duration | None
    if instr_duration is None:
        # open-ended (active until replaced) -> active because we selected latest <= now
        active.append(latest)
    else:
        # active only if now is strictly before execution_time + duration
        if latest.execution_time + instr_duration > now:
            active.append(latest)
return active
def get_next_instruction(


@@ -1,6 +1,7 @@
import traceback
from asyncio import Lock, get_running_loop
from concurrent.futures import ThreadPoolExecutor
from enum import Enum
from functools import partial
from typing import ClassVar, Optional
@@ -8,7 +9,12 @@ from loguru import logger
from pydantic import computed_field
from akkudoktoreos.core.cache import CacheEnergyManagementStore
from akkudoktoreos.core.coreabc import ConfigMixin, PredictionMixin, SingletonMixin
from akkudoktoreos.core.coreabc import (
AdapterMixin,
ConfigMixin,
PredictionMixin,
SingletonMixin,
)
from akkudoktoreos.core.emplan import EnergyManagementPlan
from akkudoktoreos.core.emsettings import EnergyManagementMode
from akkudoktoreos.core.pydantic import PydanticBaseModel
@@ -24,7 +30,23 @@ from akkudoktoreos.utils.datetimeutil import DateTime, compare_datetimes, to_dat
executor = ThreadPoolExecutor(max_workers=1)
class EnergyManagementStage(Enum):
"""Enumeration of the main stages in the energy management lifecycle."""
IDLE = "IDLE"
DATA_ACQUISITION = "DATA_ACQUISITION"
FORECAST_RETRIEVAL = "FORECAST_RETRIEVAL"
OPTIMIZATION = "OPTIMIZATION"
CONTROL_DISPATCH = "CONTROL_DISPATCH"
def __str__(self) -> str:
"""Return the string representation of the stage."""
return self.value
class EnergyManagement(
SingletonMixin, ConfigMixin, PredictionMixin, AdapterMixin, PydanticBaseModel
):
"""Energy management."""
# Start datetime.
@@ -33,6 +55,9 @@ class EnergyManagement(SingletonMixin, ConfigMixin, PredictionMixin, PydanticBas
# last run datetime. Used by energy management task
_last_run_datetime: ClassVar[Optional[DateTime]] = None
# Current energy management stage
_stage: ClassVar[EnergyManagementStage] = EnergyManagementStage.IDLE
# energy management plan of latest energy management run with optimization
_plan: ClassVar[Optional[EnergyManagementPlan]] = None
@@ -81,6 +106,15 @@ class EnergyManagement(SingletonMixin, ConfigMixin, PredictionMixin, PydanticBas
cls._start_datetime = start_datetime.set(minute=0, second=0, microsecond=0)
return cls._start_datetime
@classmethod
def stage(cls) -> EnergyManagementStage:
"""Get the the stage of the energy management.
Returns:
EnergyManagementStage: The current stage of energy management.
"""
return cls._stage
@classmethod
def plan(cls) -> Optional[EnergyManagementPlan]:
"""Get the latest energy management plan.
@@ -122,6 +156,7 @@ class EnergyManagement(SingletonMixin, ConfigMixin, PredictionMixin, PydanticBas
"""Run the energy management.
This method initializes the energy management run by setting its
start datetime, updating predictions, and optionally starting
optimization depending on the selected mode or configuration.
@@ -157,6 +192,8 @@ class EnergyManagement(SingletonMixin, ConfigMixin, PredictionMixin, PydanticBas
logger.info("Starting energy management run.")
cls._stage = EnergyManagementStage.DATA_ACQUISITION
# Remember/ set the start datetime of this energy management run.
# None leads
cls.set_start_datetime(start_datetime)
@@ -164,12 +201,23 @@ class EnergyManagement(SingletonMixin, ConfigMixin, PredictionMixin, PydanticBas
# Throw away any memory cached results of the last energy management run.
CacheEnergyManagementStore().clear()
# Do data acquisition by adapters
try:
cls.adapter.update_data(force_enable)
except Exception as e:
trace = "".join(traceback.TracebackException.from_exception(e).format())
error_msg = f"Adapter update failed - phase {cls._stage}: {e}\n{trace}"
logger.error(error_msg)
cls._stage = EnergyManagementStage.FORECAST_RETRIEVAL
if mode is None:
mode = cls.config.ems.mode
if mode is None or mode == "PREDICTION":
# Update the predictions
cls.prediction.update_data(force_enable=force_enable, force_update=force_update)
logger.info("Energy management run done (predictions updated)")
cls._stage = EnergyManagementStage.IDLE
return
# Prepare optimization parameters
@@ -184,8 +232,12 @@ class EnergyManagement(SingletonMixin, ConfigMixin, PredictionMixin, PydanticBas
logger.error(
"Energy management run canceled. Could not prepare optimisation parameters."
)
cls._stage = EnergyManagementStage.IDLE
return
cls._stage = EnergyManagementStage.OPTIMIZATION
logger.info("Starting energy management optimization.")
# Take values from config if not given
if genetic_individuals is None:
genetic_individuals = cls.config.optimization.genetic.individuals
@@ -195,7 +247,6 @@ class EnergyManagement(SingletonMixin, ConfigMixin, PredictionMixin, PydanticBas
if cls._start_datetime is None: # Make mypy happy - already set by us
raise RuntimeError("Start datetime not set.")
logger.info("Starting energy management optimization.")
try:
optimization = GeneticOptimization(
verbose=bool(cls.config.server.verbose),
@@ -208,8 +259,11 @@ class EnergyManagement(SingletonMixin, ConfigMixin, PredictionMixin, PydanticBas
)
except:
logger.exception("Energy management optimization failed.")
cls._stage = EnergyManagementStage.IDLE
return
cls._stage = EnergyManagementStage.CONTROL_DISPATCH
# Make genetic solution public
cls._genetic_solution = solution
@@ -224,6 +278,17 @@ class EnergyManagement(SingletonMixin, ConfigMixin, PredictionMixin, PydanticBas
logger.debug("Energy management plan:\n{}", cls._plan)
logger.info("Energy management run done (optimization updated)")
# Do control dispatch by adapters
try:
cls.adapter.update_data(force_enable)
except Exception as e:
trace = "".join(traceback.TracebackException.from_exception(e).format())
error_msg = f"Adapter update failed - phase {cls._stage}: {e}\n{trace}"
logger.error(error_msg)
# energy management run finished
cls._stage = EnergyManagementStage.IDLE
async def run(
self,
start_datetime: Optional[DateTime] = None,


@@ -65,7 +65,7 @@ console_handler_id = None
file_handler_id = None
def logging_track_config(config_eos: Any, path: str, old_value: Any, value: Any) -> None:
"""Track logging config changes."""
global console_handler_id, file_handler_id


@@ -400,7 +400,21 @@ class PydanticModelNestedValueMixin:
# Get next value
next_value = None
if isinstance(model, RootModel):
# If this is the final key, set the value
if is_final_key:
try:
model.validate_and_set(key, value)
except Exception as e:
raise ValueError(f"Error updating model: {e}") from e
return
next_value = model.root
elif isinstance(model, BaseModel):
logger.debug(
f"Detected base model {model.__class__.__name__} of type {type(model)}"
)
# Track parent and key for possible assignment later
parent = model
parent_key = [
@@ -432,6 +446,7 @@ class PydanticModelNestedValueMixin:
next_value = getattr(model, key, None)
elif isinstance(model, list):
logger.debug(f"Detected list of type {type(model)}")
# Handle lists (ensure index exists and modify safely)
try:
idx = int(key)
@@ -468,6 +483,7 @@ class PydanticModelNestedValueMixin:
return
elif isinstance(model, dict):
logger.debug(f"Detected dict of type {type(model)}")
# Handle dictionaries (auto-create missing keys)
# Get next type from parent key type information
@@ -795,29 +811,61 @@ class PydanticBaseModel(PydanticModelNestedValueMixin, BaseModel):
@classmethod
def field_description(cls, field_name: str) -> Optional[str]:
"""Return the description metadata of a model field, if available.
"""Return a human-readable description for a model field.
Looks up descriptions for both regular and computed fields.
Resolution order:
Normal fields:
1) json_schema_extra["description"]
2) field.description
Computed fields:
1) ComputedFieldInfo.description
2) function docstring (func.__doc__)
3) json_schema_extra["description"]
If a field exists but no description is found, returns "-".
If the field does not exist, returns None.
Args:
field_name: Field name.
Returns:
Description string, "-" if missing, or None if not a field.
"""
# 1) Regular declared fields
field: FieldInfo | None = cls.model_fields.get(field_name)
if field is not None:
extra = cls._field_extra_dict(field)
if "description" in extra:
return str(extra["description"])
# some FieldInfo may also have .description directly
if getattr(field, "description", None):
return str(field.description)
return None
# 2) Computed fields live in a separate mapping
cfield: ComputedFieldInfo | None = cls.model_computed_fields.get(field_name)
if cfield is None:
return None
# 2a) ComputedFieldInfo may have a description attribute
if getattr(cfield, "description", None):
return str(cfield.description)
# 2b) fallback to wrapped property's docstring
func = getattr(cfield, "func", None)
if func and func.__doc__:
return func.__doc__.strip()
# 2c) last resort: json_schema_extra if you use it for computed fields
extra = cls._field_extra_dict(cfield)
if "description" in extra:
return str(extra["description"])
return "-"
@classmethod
def field_deprecated(cls, field_name: str) -> Optional[str]:
@@ -887,7 +935,7 @@ class PydanticDateTimeData(RootModel):
{
"start_datetime": "2024-01-01 00:00:00", # optional
"interval": "1 Hour", # optional
"interval": "1 hour", # optional
"loadforecast_power_w": [20.5, 21.0, 22.1],
"load_min": [18.5, 19.0, 20.1]
}


@@ -6,13 +6,15 @@ from fnmatch import fnmatch
from pathlib import Path
from typing import Optional
# For development add `.dev` to previous release
# For release omit `.dev`.
VERSION_BASE = "0.2.0.dev"
# Project hash of relevant files
HASH_EOS = ""
# Number of digits to append to .dev to identify a development version
VERSION_DEV_PRECISION = 8
# ------------------------------
# Helpers for version generation
@@ -91,8 +93,11 @@ def _version_calculate() -> str:
"""Compute version."""
global HASH_EOS
HASH_EOS = _version_hash()
if VERSION_BASE.endswith("+dev"):
return f"{VERSION_BASE}.{HASH_EOS[:6]}"
if VERSION_BASE.endswith("dev"):
# After dev only digits are allowed - convert hexdigest to digits
hash_value = int(HASH_EOS, 16)
hash_digits = str(hash_value % (10**VERSION_DEV_PRECISION)).zfill(VERSION_DEV_PRECISION)
return f"{VERSION_BASE}{hash_digits}"
else:
return VERSION_BASE
@@ -114,10 +119,10 @@ __version__ = _version_calculate()
VERSION_RE = re.compile(
r"""
^(?P<base>\d+\.\d+\.\d+) # x.y.z
(?:[\.\+\-] # .dev<hash> starts here
(?:
(?P<dev>dev) # literal 'dev'
(?:(?P<hash>[A-Za-z0-9]+))? # optional <hash>
)
)?
$
@@ -131,8 +136,8 @@ def version() -> dict[str, Optional[str]]:
The version string shall be of the form:
x.y.z
x.y.z.dev
x.y.z.dev<HASH>
Returns:
.. code-block:: python