Fix2 config and predictions revamp. (#281)

measurement:

- Add new measurement class to hold real-world measurements.
- Handles load meter readings, grid import and export meter readings.
- Aggregates load meter readings (aka measurements) to a total load.
- Can import measurements from files, pandas datetime series,
    pandas datetime dataframes, simple datetime arrays, and
    programmatically.
- May be expanded to other measurement values.
- Should be used to adapt load predictions to real-world
    measurements.
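
The aggregation of several load meter readings into a total load can be sketched roughly as follows. This is an illustrative helper only; the function name and shape are hypothetical, not the actual Measurement class API:

```python
from typing import Dict, List


def aggregate_total_load(load_readings: Dict[str, List[float]]) -> List[float]:
    """Sum per-meter load series element-wise into one total load series.

    Hypothetical sketch of the aggregation now done by the measurement
    module; all per-meter series must have the same length.
    """
    if not load_readings:
        return []
    lengths = {len(series) for series in load_readings.values()}
    if len(lengths) != 1:
        raise ValueError(f"Inconsistent series lengths: {sorted(lengths)}")
    # zip(*...) pairs up the i-th reading of every meter
    return [sum(values) for values in zip(*load_readings.values())]
```

Calling it with two meters, e.g. `{"m1": [10.0, 20.0], "m2": [5.0, 15.0]}`, yields the element-wise sums `[15.0, 35.0]`.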

core/coreabc:

- Add mixin class to access measurements

core/pydantic:

- Add pydantic models for pandas datetime series and dataframes.
- Add pydantic models for simple datetime arrays.
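
The validation such a model enforces can be sketched without pulling in pydantic itself. The function below is a hypothetical stand-in that mirrors the core checks a pydantic model for a datetime series would perform (equal lengths, parseable timestamps); it is not the actual model:

```python
from datetime import datetime
from typing import Any, Dict


def validate_datetime_series(data: Dict[str, Any]) -> Dict[str, Any]:
    """Validate a simple datetime-keyed series (illustrative sketch).

    Mirrors what a pydantic model for a pandas datetime series enforces:
    'index' and 'values' must be lists of equal length, and every index
    entry must parse as an ISO 8601 datetime.
    """
    index = data.get("index")
    values = data.get("values")
    if not isinstance(index, list) or not isinstance(values, list):
        raise ValueError("'index' and 'values' must be lists")
    if len(index) != len(values):
        raise ValueError("'index' and 'values' must have equal length")
    parsed = [datetime.fromisoformat(ts) for ts in index]
    return {"index": parsed, "values": values}
```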

core/dataabc:

- Provide DataImport mixin class for generic import handling.
    Imports from JSON strings and files, from pandas datetime dataframes,
    and from simple datetime arrays. The import method signature changed to
    allow import datetimes to be given programmatically or by the data content.
- Use pydantic models for datetime series, dataframes, arrays
- Validate generic imports by pydantic models
- Provide new attributes min_datetime and max_datetime for DataSequence.
- Add parameter dropna to drop NaN/None values when creating lists, pandas
    series, or numpy arrays from a DataSequence.
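
The dropna behavior can be sketched as a plain filter. The function name is hypothetical; DataSequence exposes this via its list/series/array conversion methods:

```python
import math
from typing import List, Optional


def sequence_to_list(values: List[Optional[float]], dropna: bool = True) -> List[float]:
    """Illustrative sketch of the new dropna parameter.

    With dropna=True (the assumed default here), None and NaN entries are
    dropped when converting a sequence to a plain list; with dropna=False
    the values are passed through unchanged.
    """
    if not dropna:
        return list(values)
    return [
        v
        for v in values
        if v is not None and not (isinstance(v, float) and math.isnan(v))
    ]
```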

config/config:

- Add common settings for the measurement module.

predictions/elecpriceakkudoktor:

- Use mean values of the last 7 days to fill prediction values not provided by
    akkudoktor.net (which only provides 24 values).
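
The fill idea reduces to averaging the same hour across recent days. This is a simplified sketch with hypothetical names; the actual provider uses a weighted mean over an 8-day window rather than a plain mean:

```python
from typing import List


def fill_hour_from_history(history_days: List[List[float]], hour: int) -> float:
    """Fill a missing prediction value for a given hour of day.

    Takes per-day hourly price lists (24 values each) for the last days and
    returns the plain mean of that hour across the days. Simplified sketch
    of the gap-filling done when akkudoktor.net provides fewer values than
    the prediction horizon needs.
    """
    samples = [day[hour] for day in history_days]
    return sum(samples) / len(samples)
```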

prediction/loadabc:

- Extend the generic prediction keys with 'load_total_adjusted' for load predictions
    that adjust the predicted total load by measured load values.

prediction/loadakkudoktor:

- Extend the Akkudoktor load prediction by load adjustment using measured load
    values.
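
One simple way to adjust a prediction by measurements is to shift it by the average observed prediction error. This is a hypothetical simplification, not the actual LoadAkkudoktor algorithm:

```python
from typing import List, Optional


def adjust_prediction(
    predicted: List[float], measured: List[Optional[float]]
) -> List[float]:
    """Sketch of a measurement-based load adjustment.

    Computes the mean error between measurements (where available) and the
    corresponding predictions, then shifts the whole prediction by that
    bias. Entries with no measurement (None) are ignored when estimating
    the bias. Illustrative only; the real adjustment differs.
    """
    errors = [m - p for p, m in zip(predicted, measured) if m is not None]
    bias = sum(errors) / len(errors) if errors else 0.0
    return [p + bias for p in predicted]
```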

prediction/load_aggregator:

- Module removed. Load aggregation is now handled by the measurement module.

prediction/load_corrector:

- Module removed. Load correction (i.e. adjustment of the load prediction by
    measured load energy) is handled by the LoadAkkudoktor prediction and
    the generic 'load_mean_adjusted' prediction key.

prediction/load_forecast:

- Module removed. Functionality now completely handled by the LoadAkkudoktor
    prediction.

utils/cacheutil:

- Use pydantic.
- Fix potential bug in ttl (time to live) duration handling.
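
The intended TTL semantics can be stated in a few lines. This sketch shows only the expiry check the fix targets (names are illustrative, not the CacheFileStore API):

```python
from datetime import datetime, timedelta, timezone


def is_expired(created_at: datetime, ttl: timedelta, now: datetime) -> bool:
    """Return True once a cache entry's time-to-live has elapsed.

    An entry created at `created_at` with duration `ttl` expires when
    `now` reaches or passes `created_at + ttl`. The fixed bug was in how
    the duration was derived from the with_ttl option; the check itself
    is this simple comparison.
    """
    return now >= created_at + ttl
```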

utils/datetimeutil:

- Add missing handling of pendulum.DateTime and pendulum.Duration instances
    as input. These were previously handled as plain datetime.datetime and
    datetime.timedelta.

utils/visualize:

- Move main to generate_example_report() for better testing support.

server/server:

- Added new configuration option server_fastapi_startup_server_fasthtml
  to make startup of FastHTML server by FastAPI server conditional.

server/fastapi_server:

- Add APIs for measurements
- Improve APIs to provide or take pandas datetime series and
    datetime dataframes controlled by pydantic models.
- Improve APIs to provide or take simple datetime data arrays
    controlled by pydantic models.
- Move the FastAPI server API to v1 for new APIs.
- Update pre v1 endpoints to use new prediction and measurement capabilities.
- Only start FastHTML server if 'server_fastapi_startup_server_fasthtml'
    config option is set.
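
The conditional startup boils down to gating the FastHTML launch on the config flag. The flag name is taken from this commit; the helper and `start_fn` parameter are hypothetical stand-ins for the real server startup code:

```python
from typing import Callable, Dict


def maybe_start_fasthtml(config: Dict[str, bool], start_fn: Callable[[], None]) -> bool:
    """Start the FastHTML server only if the config option is set.

    Returns True if the server was started, False otherwise. `start_fn`
    stands in for the actual FastHTML launcher; with the flag absent or
    False, nothing is started.
    """
    if config.get("server_fastapi_startup_server_fasthtml", False):
        start_fn()
        return True
    return False
```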

tests:

- Adapt import tests to changed import method signature
- Adapt server test to use the v1 API
- Extend the dataabc test to test for array generation from data
    with several data interval scenarios.
- Extend the datetimeutil test to also test for correct handling
    of to_datetime() providing now().
- Adapt LoadAkkudoktor test for new adjustment calculation.
- Adapt visualization test to use example report function instead of visualize.py
    run as process.
- Removed test_load_aggregator. Functionality is now tested in test_measurement.
- Added tests for measurement module

docs:

- Remove sphinxcontrib-openapi as it prevents the documentation build:
    "site-packages/sphinxcontrib/openapi/openapi31.py", line 305, in _get_type_from_schema
    for t in schema["anyOf"]: KeyError: 'anyOf'

Signed-off-by: Bobby Noelte <b0661n0e17e@gmail.com>
Author: Bobby Noelte
Date: 2024-12-29 18:42:49 +01:00 (committed by GitHub)
Parent: 2a8e11d7dc
Commit: 830af85fca
38 changed files with 3671 additions and 948 deletions


@@ -2,13 +2,13 @@
import io
import pickle
from datetime import date, datetime, time, timedelta
from datetime import date, datetime, timedelta
from time import sleep
import pytest
from akkudoktoreos.utils.cacheutil import CacheFileStore, cache_in_file
from akkudoktoreos.utils.datetimeutil import to_datetime
from akkudoktoreos.utils.datetimeutil import compare_datetimes, to_datetime, to_duration
# -----------------------------
# CacheFileStore
@@ -18,24 +18,63 @@ from akkudoktoreos.utils.datetimeutil import to_datetime
@pytest.fixture
def cache_store():
"""A pytest fixture that creates a new CacheFileStore instance for testing."""
return CacheFileStore()
cache = CacheFileStore()
cache.clear(clear_all=True)
assert len(cache._store) == 0
return cache
def test_generate_cache_file_key(cache_store):
"""Test cache file key generation based on URL and date."""
key = "http://example.com"
until_dt = to_datetime("2024-10-01").date()
cache_file_key, cache_file_until_dt = cache_store._generate_cache_file_key(key, until_dt)
# Provide until date - assure until_dt is used.
until_dt = to_datetime("2024-10-01")
cache_file_key, cache_file_until_dt, ttl_duration = cache_store._generate_cache_file_key(
key=key, until_datetime=until_dt
)
assert cache_file_key is not None
assert cache_file_until_dt == until_dt
assert compare_datetimes(cache_file_until_dt, until_dt).equal
# Provide until date again - assure same key is generated.
cache_file_key1, cache_file_until_dt1, ttl_duration1 = cache_store._generate_cache_file_key(
key=key, until_datetime=until_dt
)
assert cache_file_key1 == cache_file_key
assert compare_datetimes(cache_file_until_dt1, until_dt).equal
# Provide no until date - assure today EOD is used.
until_dt = datetime.combine(date.today(), time.max)
cache_file_key, cache_file_until_dt = cache_store._generate_cache_file_key(key, None)
assert cache_file_until_dt == until_dt
cache_file_key1, cache_file_until_dt1 = cache_store._generate_cache_file_key(key, until_dt)
assert cache_file_key == cache_file_key1
assert cache_file_until_dt == until_dt
no_until_dt = to_datetime().end_of("day")
cache_file_key, cache_file_until_dt, ttl_duration = cache_store._generate_cache_file_key(key)
assert cache_file_key is not None
assert compare_datetimes(cache_file_until_dt, no_until_dt).equal
# Provide with_ttl - assure until_dt is used.
until_dt = to_datetime().add(hours=1)
cache_file_key, cache_file_until_dt, ttl_duration = cache_store._generate_cache_file_key(
key, with_ttl="1 hour"
)
assert cache_file_key is not None
assert compare_datetimes(cache_file_until_dt, until_dt).approximately_equal
assert ttl_duration == to_duration("1 hour")
# Provide with_ttl again - assure same key is generated.
until_dt = to_datetime().add(hours=1)
cache_file_key1, cache_file_until_dt1, ttl_duration1 = cache_store._generate_cache_file_key(
key=key, with_ttl="1 hour"
)
assert cache_file_key1 == cache_file_key
assert compare_datetimes(cache_file_until_dt1, until_dt).approximately_equal
assert ttl_duration1 == to_duration("1 hour")
# Provide different with_ttl - assure different key is generated.
until_dt = to_datetime().add(hours=1, minutes=1)
cache_file_key2, cache_file_until_dt2, ttl_duration2 = cache_store._generate_cache_file_key(
key=key, with_ttl="1 hour 1 minute"
)
assert cache_file_key2 != cache_file_key
assert compare_datetimes(cache_file_until_dt2, until_dt).approximately_equal
assert ttl_duration2 == to_duration("1 hour 1 minute")
def test_get_file_path(cache_store):
@@ -46,6 +85,77 @@ def test_get_file_path(cache_store):
assert file_path is not None
def test_until_datetime_by_options(cache_store):
"""Test until datetime calculation based on options."""
now = to_datetime()
# Test with until_datetime
result, ttl_duration = cache_store._until_datetime_by_options(until_datetime=now)
assert result == now
assert ttl_duration is None
# -- From now on we expect a until_datetime in one hour
ttl_duration_expected = to_duration("1 hour")
# Test with with_ttl as timedelta
until_datetime_expected = to_datetime().add(hours=1)
ttl = timedelta(hours=1)
result, ttl_duration = cache_store._until_datetime_by_options(with_ttl=ttl)
assert compare_datetimes(result, until_datetime_expected).approximately_equal
assert ttl_duration == ttl_duration_expected
# Test with with_ttl as int (seconds)
until_datetime_expected = to_datetime().add(hours=1)
ttl_seconds = 3600
result, ttl_duration = cache_store._until_datetime_by_options(with_ttl=ttl_seconds)
assert compare_datetimes(result, until_datetime_expected).approximately_equal
assert ttl_duration == ttl_duration_expected
# Test with with_ttl as string ("1 hour")
until_datetime_expected = to_datetime().add(hours=1)
ttl_string = "1 hour"
result, ttl_duration = cache_store._until_datetime_by_options(with_ttl=ttl_string)
assert compare_datetimes(result, until_datetime_expected).approximately_equal
assert ttl_duration == ttl_duration_expected
# -- From now on we expect a until_datetime today at end of day
until_datetime_expected = to_datetime().end_of("day")
ttl_duration_expected = None
# Test default case (end of today)
result, ttl_duration = cache_store._until_datetime_by_options()
assert compare_datetimes(result, until_datetime_expected).equal
assert ttl_duration == ttl_duration_expected
# -- From now on we expect a until_datetime in one day at end of day
until_datetime_expected = to_datetime().add(days=1).end_of("day")
assert ttl_duration == ttl_duration_expected
# Test with until_date as date
until_date = date.today() + timedelta(days=1)
result, ttl_duration = cache_store._until_datetime_by_options(until_date=until_date)
assert compare_datetimes(result, until_datetime_expected).equal
assert ttl_duration == ttl_duration_expected
# -- Test with multiple options (until_datetime takes precedence)
specific_datetime = to_datetime().add(days=2)
result, ttl_duration = cache_store._until_datetime_by_options(
until_date=to_datetime().add(days=1).date(),
until_datetime=specific_datetime,
with_ttl=ttl,
)
assert compare_datetimes(result, specific_datetime).equal
assert ttl_duration is None
# Test with invalid inputs
with pytest.raises(ValueError):
cache_store._until_datetime_by_options(until_date="invalid-date")
with pytest.raises(ValueError):
cache_store._until_datetime_by_options(with_ttl="invalid-ttl")
with pytest.raises(ValueError):
cache_store._until_datetime_by_options(until_datetime="invalid-datetime")
def test_create_cache_file(cache_store):
"""Test the creation of a cache file and ensure it is stored correctly."""
# Create a cache file for today's date
@@ -145,7 +255,7 @@ def test_clear_cache_files_by_date(cache_store):
assert cache_store.get("file2") is cache_file2
# Clear cache files that are older than today
cache_store.clear(before_datetime=datetime.combine(date.today(), time.min))
cache_store.clear(before_datetime=to_datetime().start_of("day"))
# Ensure the files are in the store
assert cache_store.get("file1") is cache_file1
@@ -228,7 +338,7 @@ def test_cache_in_file_decorator_caches_function_result(cache_store):
# Check if the result was written to the cache file
key = next(iter(cache_store._store))
cache_file = cache_store._store[key][0]
cache_file = cache_store._store[key].cache_file
assert cache_file is not None
# Assert correct content was written to the file
@@ -248,12 +358,12 @@ def test_cache_in_file_decorator_uses_cache(cache_store):
return "New result"
# Call the decorated function (should store result in cache)
result = my_function(until_date=datetime.now() + timedelta(days=1))
result = my_function(until_date=to_datetime().add(days=1))
assert result == "New result"
# Assert result was written to cache file
key = next(iter(cache_store._store))
cache_file = cache_store._store[key][0]
cache_file = cache_store._store[key].cache_file
assert cache_file is not None
cache_file.seek(0) # Move to the start of the file
assert cache_file.read() == result
@@ -264,7 +374,7 @@ def test_cache_in_file_decorator_uses_cache(cache_store):
cache_file.write(result2)
# Call the decorated function again (should get result from cache)
result = my_function(until_date=datetime.now() + timedelta(days=1))
result = my_function(until_date=to_datetime().add(days=1))
assert result == result2
@@ -279,7 +389,7 @@ def test_cache_in_file_decorator_forces_update_data(cache_store):
def my_function(until_date=None):
return "New result"
until_date = datetime.now() + timedelta(days=1)
until_date = to_datetime().add(days=1).date()
# Call the decorated function (should store result in cache)
result1 = "New result"
@@ -288,7 +398,7 @@ def test_cache_in_file_decorator_forces_update_data(cache_store):
# Assert result was written to cache file
key = next(iter(cache_store._store))
cache_file = cache_store._store[key][0]
cache_file = cache_store._store[key].cache_file
assert cache_file is not None
cache_file.seek(0) # Move to the start of the file
assert cache_file.read() == result
@@ -297,6 +407,8 @@ def test_cache_in_file_decorator_forces_update_data(cache_store):
result2 = "Cached result"
cache_file.seek(0)
cache_file.write(result2)
cache_file.seek(0) # Move to the start of the file
assert cache_file.read() == result2
# Call the decorated function again with force update (should get result from function)
result = my_function(until_date=until_date, force_update=True) # type: ignore[call-arg]
@@ -309,9 +421,6 @@ def test_cache_in_file_decorator_forces_update_data(cache_store):
def test_cache_in_file_handles_ttl(cache_store):
"""Test that the cache_infile decorator handles the with_ttl parameter."""
# Clear store to assure it is empty
cache_store.clear(clear_all=True)
assert len(cache_store._store) == 0
# Define a simple function to decorate
@cache_in_file(mode="w+")
@@ -319,26 +428,37 @@ def test_cache_in_file_handles_ttl(cache_store):
return "New result"
# Call the decorated function
result = my_function(with_ttl="1 second") # type: ignore[call-arg]
result1 = my_function(with_ttl="1 second") # type: ignore[call-arg]
assert result1 == "New result"
assert len(cache_store._store) == 1
key = list(cache_store._store.keys())[0]
# Overwrite cache file
# Assert result was written to cache file
key = next(iter(cache_store._store))
cache_file = cache_store._store[key][0]
cache_file = cache_store._store[key].cache_file
assert cache_file is not None
cache_file.seek(0) # Move to the start of the file
cache_file.write("Modified result")
cache_file.seek(0) # Move to the start of the file
assert cache_file.read() == "Modified result"
assert cache_file.read() == result1
# Modify cache file
result2 = "Cached result"
cache_file.seek(0)
cache_file.write(result2)
cache_file.seek(0) # Move to the start of the file
assert cache_file.read() == result2
# Call the decorated function again
result = my_function(with_ttl="1 second") # type: ignore[call-arg]
assert result == "Modified result"
cache_file.seek(0) # Move to the start of the file
assert cache_file.read() == result2
assert result == result2
# Wait one second to let the cache time out
sleep(1)
sleep(2)
# Call again - cache should be timed out
result = my_function(with_ttl="1 second") # type: ignore[call-arg]
assert result == "New result"
assert result == result1
def test_cache_in_file_handles_bytes_return(cache_store):
@@ -357,7 +477,7 @@ def test_cache_in_file_handles_bytes_return(cache_store):
# Check if the binary data was written to the cache file
key = next(iter(cache_store._store))
cache_file = cache_store._store[key][0]
cache_file = cache_store._store[key].cache_file
assert len(cache_store._store) == 1
assert cache_file is not None
cache_file.seek(0)
@@ -367,5 +487,5 @@ def test_cache_in_file_handles_bytes_return(cache_store):
# Access cache
result = my_function(until_date=datetime.now() + timedelta(days=1))
assert len(cache_store._store) == 1
assert cache_store._store[key][0] is not None
assert cache_store._store[key].cache_file is not None
assert result1 == result


@@ -346,6 +346,127 @@ class TestDataSequence:
assert array[1] == 7
assert array[2] == last_datetime.day
def test_key_to_array_linear_interpolation(self, sequence):
"""Test key_to_array with linear interpolation for numeric data."""
interval = to_duration("1 hour")
record1 = self.create_test_record(pendulum.datetime(2023, 11, 6, 0), 0.8)
record2 = self.create_test_record(pendulum.datetime(2023, 11, 6, 2), 1.0) # Gap of 2 hours
sequence.insert_by_datetime(record1)
sequence.insert_by_datetime(record2)
array = sequence.key_to_array(
key="data_value",
start_datetime=pendulum.datetime(2023, 11, 6),
end_datetime=pendulum.datetime(2023, 11, 6, 3),
interval=interval,
fill_method="linear",
)
assert len(array) == 3
assert array[0] == 0.8
assert array[1] == 0.9 # Interpolated value
assert array[2] == 1.0
def test_key_to_array_ffill(self, sequence):
"""Test key_to_array with forward filling for missing values."""
interval = to_duration("1 hour")
record1 = self.create_test_record(pendulum.datetime(2023, 11, 6, 0), 0.8)
record2 = self.create_test_record(pendulum.datetime(2023, 11, 6, 2), 1.0)
sequence.insert_by_datetime(record1)
sequence.insert_by_datetime(record2)
array = sequence.key_to_array(
key="data_value",
start_datetime=pendulum.datetime(2023, 11, 6),
end_datetime=pendulum.datetime(2023, 11, 6, 3),
interval=interval,
fill_method="ffill",
)
assert len(array) == 3
assert array[0] == 0.8
assert array[1] == 0.8 # Forward-filled value
assert array[2] == 1.0
def test_key_to_array_bfill(self, sequence):
"""Test key_to_array with backward filling for missing values."""
interval = to_duration("1 hour")
record1 = self.create_test_record(pendulum.datetime(2023, 11, 6, 0), 0.8)
record2 = self.create_test_record(pendulum.datetime(2023, 11, 6, 2), 1.0)
sequence.insert_by_datetime(record1)
sequence.insert_by_datetime(record2)
array = sequence.key_to_array(
key="data_value",
start_datetime=pendulum.datetime(2023, 11, 6),
end_datetime=pendulum.datetime(2023, 11, 6, 3),
interval=interval,
fill_method="bfill",
)
assert len(array) == 3
assert array[0] == 0.8
assert array[1] == 1.0 # Backward-filled value
assert array[2] == 1.0
def test_key_to_array_with_truncation(self, sequence):
"""Test truncation behavior in key_to_array."""
interval = to_duration("1 hour")
record1 = self.create_test_record(pendulum.datetime(2023, 11, 5, 23), 0.8)
record2 = self.create_test_record(pendulum.datetime(2023, 11, 6, 1), 1.0)
sequence.insert_by_datetime(record1)
sequence.insert_by_datetime(record2)
array = sequence.key_to_array(
key="data_value",
start_datetime=pendulum.datetime(2023, 11, 6),
end_datetime=pendulum.datetime(2023, 11, 6, 2),
interval=interval,
)
assert len(array) == 2
assert array[0] == 0.9 # Interpolated from previous day
assert array[1] == 1.0
def test_key_to_array_with_none(self, sequence):
"""Test handling of empty series in key_to_array."""
interval = to_duration("1 hour")
array = sequence.key_to_array(
key="data_value",
start_datetime=pendulum.datetime(2023, 11, 6),
end_datetime=pendulum.datetime(2023, 11, 6, 3),
interval=interval,
)
assert isinstance(array, np.ndarray)
assert np.all(array == None)
def test_key_to_array_with_one(self, sequence):
"""Test handling of one element series in key_to_array."""
interval = to_duration("1 hour")
record1 = self.create_test_record(pendulum.datetime(2023, 11, 5, 23), 0.8)
sequence.insert_by_datetime(record1)
array = sequence.key_to_array(
key="data_value",
start_datetime=pendulum.datetime(2023, 11, 6),
end_datetime=pendulum.datetime(2023, 11, 6, 2),
interval=interval,
)
assert len(array) == 2
assert array[0] == 0.8 # Interpolated from previous day
assert array[1] == 0.8
def test_key_to_array_invalid_fill_method(self, sequence):
"""Test invalid fill_method raises an error."""
interval = to_duration("1 hour")
record1 = self.create_test_record(pendulum.datetime(2023, 11, 6, 0), 0.8)
sequence.insert_by_datetime(record1)
with pytest.raises(ValueError, match="Unsupported fill method: invalid"):
sequence.key_to_array(
key="data_value",
start_datetime=pendulum.datetime(2023, 11, 6),
end_datetime=pendulum.datetime(2023, 11, 6, 1),
interval=interval,
fill_method="invalid",
)
def test_to_datetimeindex(self, sequence2):
record1 = self.create_test_record(datetime(2023, 11, 5), 0.8)
record2 = self.create_test_record(datetime(2023, 11, 6), 0.9)
@@ -531,10 +652,9 @@ class TestDataImportProvider:
],
)
def test_import_datetimes(self, provider, start_datetime, value_count, expected_mapping_count):
ems_eos = get_ems()
ems_eos.set_start_datetime(to_datetime(start_datetime, in_timezone="Europe/Berlin"))
start_datetime = to_datetime(start_datetime, in_timezone="Europe/Berlin")
value_datetime_mapping = provider.import_datetimes(value_count)
value_datetime_mapping = provider.import_datetimes(start_datetime, value_count)
assert len(value_datetime_mapping) == expected_mapping_count
@@ -551,11 +671,10 @@ class TestDataImportProvider:
self, set_other_timezone, provider, start_datetime, value_count, expected_mapping_count
):
original_tz = set_other_timezone("Etc/UTC")
ems_eos = get_ems()
ems_eos.set_start_datetime(to_datetime(start_datetime, in_timezone="Europe/Berlin"))
assert ems_eos.start_datetime.timezone.name == "Europe/Berlin"
start_datetime = to_datetime(start_datetime, in_timezone="Europe/Berlin")
assert start_datetime.timezone.name == "Europe/Berlin"
value_datetime_mapping = provider.import_datetimes(value_count)
value_datetime_mapping = provider.import_datetimes(start_datetime, value_count)
assert len(value_datetime_mapping) == expected_mapping_count
@@ -636,7 +755,7 @@ class TestDataContainer:
del container_with_providers["data_value"]
series = container_with_providers["data_value"]
assert series.name == "data_value"
assert series.tolist() == [None, None, None]
assert series.tolist() == []
def test_delitem_non_existing_key(self, container_with_providers):
with pytest.raises(KeyError, match="Key 'non_existent_key' not found"):


@@ -19,7 +19,7 @@ from akkudoktoreos.utils.datetimeutil import (
# Test cases for valid pendulum.duration inputs
@pytest.mark.parametrize(
"test_case, local_timezone, date_input, as_string, in_timezone, to_naiv, to_maxtime, expected_output",
"test_case, local_timezone, date_input, as_string, in_timezone, to_naiv, to_maxtime, expected_output, expected_approximately",
[
# ---------------------------------------
# from string to pendulum.datetime object
@@ -34,6 +34,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 1, 1, 0, 0, 0, tz="Etc/UTC"),
False,
),
(
"TC002",
@@ -44,6 +45,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 1, 1, 0, 0, 0, tz="Europe/Berlin"),
False,
),
(
"TC003",
@@ -54,6 +56,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2023, 12, 31, 23, 0, 0, tz="Etc/UTC"),
False,
),
(
"TC004",
@@ -64,6 +67,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 1, 1, 0, 0, 0, tz="Europe/Paris"),
False,
),
(
"TC005",
@@ -74,6 +78,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 1, 1, 1, 0, 0, tz="Europe/Berlin"),
False,
),
(
"TC006",
@@ -84,6 +89,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2023, 12, 31, 23, 0, 0, tz="Etc/UTC"),
False,
),
(
"TC007",
@@ -102,6 +108,7 @@ from akkudoktoreos.utils.datetimeutil import (
0,
tz="Atlantic/Canary",
),
False,
),
(
"TC008",
@@ -112,6 +119,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 1, 1, 13, 0, 0, tz="Europe/Berlin"),
False,
),
(
"TC009",
@@ -122,6 +130,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 1, 1, 11, 0, 0, tz="Etc/UTC"),
False,
),
# - with timezone
(
@@ -133,6 +142,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 2, 2, 0, 0, 0, tz="Europe/Berlin"),
False,
),
(
"TC011",
@@ -143,6 +153,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
None,
pendulum.datetime(2024, 3, 3, 10, 20, 30, 0, tz="Europe/Berlin"),
False,
),
(
"TC012",
@@ -153,6 +164,7 @@ from akkudoktoreos.utils.datetimeutil import (
False,
None,
pendulum.datetime(2024, 4, 4, 10, 20, 30, 0, tz="Europe/Berlin"),
False,
),
(
"TC013",
@@ -163,6 +175,7 @@ from akkudoktoreos.utils.datetimeutil import (
True,
None,
pendulum.naive(2024, 5, 5, 10, 20, 30, 0),
False,
),
# - without local timezone as UTC
(
@@ -174,6 +187,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 2, 2, 0, 0, 0, tz="UTC"),
False,
),
(
"TC015",
@@ -184,6 +198,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
None,
pendulum.datetime(2024, 3, 3, 10, 20, 30, 0, tz="UTC"),
False,
),
# ---------------------------------------
# from pendulum.datetime to pendulum.datetime object
@@ -197,6 +212,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 4, 4, 0, 0, 0, tz="Etc/UTC"),
False,
),
(
"TC017",
@@ -207,6 +223,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 4, 4, 3, 0, 0, tz="Europe/Berlin"),
False,
),
(
"TC018",
@@ -217,6 +234,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 4, 4, 3, 0, 0, tz="Europe/Berlin"),
False,
),
(
"TC019",
@@ -227,6 +245,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
False,
pendulum.datetime(2024, 4, 4, 0, 0, 0, tz="Etc/UTC"),
False,
),
# ---------------------------------------
# from string to UTC string
@@ -242,6 +261,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
None,
"2023-11-06T00:00:00Z",
False,
),
# local timezone "Europe/Berlin"
(
@@ -253,6 +273,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
None,
"2023-11-05T23:00:00Z",
False,
),
# - no microseconds
(
@@ -264,6 +285,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
None,
"2024-10-29T23:00:00Z",
False,
),
(
"TC023",
@@ -274,6 +296,7 @@ from akkudoktoreos.utils.datetimeutil import (
None,
None,
"2024-10-30T00:00:00Z",
False,
),
# - with microseconds
(
@@ -285,6 +308,23 @@ from akkudoktoreos.utils.datetimeutil import (
None,
None,
"2024-10-07T08:20:30Z",
False,
),
# ---------------------------------------
# from None to pendulum.datetime object
# ---------------------------------------
# - no timezone
# local timezone
(
"TC025",
None,
None,
None,
None,
None,
None,
pendulum.now(),
True,
),
],
)
@@ -298,6 +338,7 @@ def test_to_datetime(
to_naiv,
to_maxtime,
expected_output,
expected_approximately,
):
"""Test pendulum.datetime conversion with valid inputs."""
set_other_timezone(local_timezone)
@@ -326,7 +367,10 @@ def test_to_datetime(
# print(f"Expected: {expected_output} tz={expected_output.timezone}")
# print(f"Result: {result} tz={result.timezone}")
# print(f"Compare: {compare}")
assert compare.equal == True
if expected_approximately:
assert compare.time_diff < 200
else:
assert compare.equal == True
# -----------------------------


@@ -2,7 +2,9 @@ import json
from pathlib import Path
from unittest.mock import Mock, patch
import numpy as np
import pytest
import requests
from akkudoktoreos.core.ems import get_ems
from akkudoktoreos.prediction.elecpriceakkudoktor import (
@@ -66,6 +68,35 @@ def test_invalid_provider(elecprice_provider, monkeypatch):
# ------------------------------------------------
@patch("akkudoktoreos.prediction.elecpriceakkudoktor.logger.error")
def test_validate_data_invalid_format(mock_logger, elecprice_provider):
"""Test validation for invalid Akkudoktor data."""
invalid_data = '{"invalid": "data"}'
with pytest.raises(ValueError):
elecprice_provider._validate_data(invalid_data)
mock_logger.assert_called_once_with(mock_logger.call_args[0][0])
def test_calculate_weighted_mean(elecprice_provider):
"""Test calculation of weighted mean for electricity prices."""
elecprice_provider.elecprice_8days = np.random.rand(24, 8) * 100
price_mean = elecprice_provider._calculate_weighted_mean(day_of_week=2, hour=10)
assert isinstance(price_mean, float)
assert not np.isnan(price_mean)
expected = np.array(
[
[1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625, 1.0],
[0.25, 1.0, 0.5, 0.125, 0.0625, 0.03125, 0.015625, 1.0],
[0.125, 0.5, 1.0, 0.25, 0.0625, 0.03125, 0.015625, 1.0],
[0.0625, 0.125, 0.5, 1.0, 0.25, 0.03125, 0.015625, 1.0],
[0.0625, 0.125, 0.25, 0.5, 1.0, 0.03125, 0.015625, 1.0],
[0.015625, 0.03125, 0.0625, 0.125, 0.5, 1.0, 0.25, 1.0],
[0.015625, 0.03125, 0.0625, 0.125, 0.25, 0.5, 1.0, 1.0],
]
)
np.testing.assert_array_equal(elecprice_provider.elecprice_8days_weights_day_of_week, expected)
@patch("requests.get")
def test_request_forecast(mock_get, elecprice_provider, sample_akkudoktor_1_json):
"""Test requesting forecast from Akkudoktor."""
@@ -110,7 +141,7 @@ def test_update_data(mock_get, elecprice_provider, sample_akkudoktor_1_json, cac
# Assert: Verify the result is as expected
mock_get.assert_called_once()
assert len(elecprice_provider) == 25
assert len(elecprice_provider) == 49 # prediction hours + 1
# Assert we get prediction_hours prioce values by resampling
np_price_array = elecprice_provider.key_to_array(
@@ -124,6 +155,63 @@ def test_update_data(mock_get, elecprice_provider, sample_akkudoktor_1_json, cac
# f_out.write(elecprice_provider.to_json())
@patch("requests.get")
def test_update_data_with_incomplete_forecast(mock_get, elecprice_provider):
"""Test `_update_data` with incomplete or missing forecast data."""
incomplete_data: dict = {"meta": {}, "values": []}
mock_response = Mock()
mock_response.status_code = 200
mock_response.content = json.dumps(incomplete_data)
mock_get.return_value = mock_response
with pytest.raises(ValueError):
elecprice_provider._update_data(force_update=True)
@pytest.mark.parametrize(
"status_code, exception",
[(400, requests.exceptions.HTTPError), (500, requests.exceptions.HTTPError), (200, None)],
)
@patch("requests.get")
def test_request_forecast_status_codes(
mock_get, elecprice_provider, sample_akkudoktor_1_json, status_code, exception
):
"""Test handling of various API status codes."""
mock_response = Mock()
mock_response.status_code = status_code
mock_response.content = json.dumps(sample_akkudoktor_1_json)
mock_response.raise_for_status.side_effect = (
requests.exceptions.HTTPError if exception else None
)
mock_get.return_value = mock_response
if exception:
with pytest.raises(exception):
elecprice_provider._request_forecast()
else:
elecprice_provider._request_forecast()
@patch("akkudoktoreos.utils.cacheutil.CacheFileStore")
def test_cache_integration(mock_cache, elecprice_provider):
"""Test caching of 8-day electricity price data."""
mock_cache_instance = mock_cache.return_value
mock_cache_instance.get.return_value = None # Simulate no cache
elecprice_provider._update_data(force_update=True)
mock_cache_instance.create.assert_called_once()
mock_cache_instance.get.assert_called_once()
def test_key_to_array_resampling(elecprice_provider):
"""Test resampling of forecast data to NumPy array."""
elecprice_provider.update_data(force_update=True)
array = elecprice_provider.key_to_array(
key="elecprice_marketprice",
start_datetime=elecprice_provider.start_datetime,
end_datetime=elecprice_provider.end_datetime,
)
assert isinstance(array, np.ndarray)
assert len(array) == elecprice_provider.total_hours
# ------------------------------------------------
# Development Akkudoktor
# ------------------------------------------------


@@ -96,7 +96,9 @@ def test_import(elecprice_provider, sample_import_1_json, start_datetime, from_f
assert elecprice_provider.total_hours is not None
assert compare_datetimes(elecprice_provider.start_datetime, ems_eos.start_datetime).equal
values = sample_import_1_json["elecprice_marketprice"]
-value_datetime_mapping = elecprice_provider.import_datetimes(len(values))
+value_datetime_mapping = elecprice_provider.import_datetimes(
+    ems_eos.start_datetime, len(values)
+)
for i, mapping in enumerate(value_datetime_mapping):
assert i < len(elecprice_provider.records)
expected_datetime, expected_value_index = mapping


@@ -1,39 +0,0 @@
import pytest
from akkudoktoreos.prediction.load_aggregator import LoadAggregator
def test_initialization():
aggregator = LoadAggregator()
assert aggregator.prediction_hours == 24
assert aggregator.loads == {}
def test_add_load_valid():
aggregator = LoadAggregator(prediction_hours=3)
aggregator.add_load("Source1", [10.0, 20.0, 30.0])
assert aggregator.loads["Source1"] == [10.0, 20.0, 30.0]
def test_add_load_invalid_length():
aggregator = LoadAggregator(prediction_hours=3)
with pytest.raises(ValueError, match="Total load inconsistent lengths in arrays: Source1 2"):
aggregator.add_load("Source1", [10.0, 20.0])
def test_calculate_total_load_empty():
aggregator = LoadAggregator()
assert aggregator.calculate_total_load() == []
def test_calculate_total_load():
aggregator = LoadAggregator(prediction_hours=3)
aggregator.add_load("Source1", [10.0, 20.0, 30.0])
aggregator.add_load("Source2", [5.0, 15.0, 25.0])
assert aggregator.calculate_total_load() == [15.0, 35.0, 55.0]
def test_calculate_total_load_single_source():
aggregator = LoadAggregator(prediction_hours=3)
aggregator.add_load("Source1", [10.0, 20.0, 30.0])
assert aggregator.calculate_total_load() == [10.0, 20.0, 30.0]


@@ -6,18 +6,20 @@ import pytest
from akkudoktoreos.config.config import get_config
from akkudoktoreos.core.ems import get_ems
from akkudoktoreos.measurement.measurement import MeasurementDataRecord, get_measurement
from akkudoktoreos.prediction.loadakkudoktor import (
LoadAkkudoktor,
LoadAkkudoktorCommonSettings,
)
from akkudoktoreos.utils.datetimeutil import compare_datetimes, to_datetime, to_duration
config_eos = get_config()
ems_eos = get_ems()
@pytest.fixture
-def load_provider(monkeypatch):
-    """Fixture to create a LoadAkkudoktor instance."""
+def load_provider():
+    """Fixture to initialise the LoadAkkudoktor instance."""
settings = {
"load_provider": "LoadAkkudoktor",
"load_name": "Akkudoktor Profile",
@@ -27,6 +29,30 @@ def load_provider(monkeypatch):
return LoadAkkudoktor()
@pytest.fixture
def measurement_eos():
"""Fixture to initialise the Measurement instance."""
measurement = get_measurement()
load0_mr = 500
load1_mr = 500
dt = to_datetime("2024-01-01T00:00:00")
interval = to_duration("1 hour")
for i in range(25):
measurement.records.append(
MeasurementDataRecord(
date_time=dt,
measurement_load0_mr=load0_mr,
measurement_load1_mr=load1_mr,
)
)
dt += interval
load0_mr += 50
load1_mr += 50
assert compare_datetimes(measurement.min_datetime, to_datetime("2024-01-01T00:00:00")).equal
assert compare_datetimes(measurement.max_datetime, to_datetime("2024-01-02T00:00:00")).equal
return measurement
@pytest.fixture
def mock_load_profiles_file(tmp_path):
"""Fixture to create a mock load profiles file."""
@@ -97,3 +123,90 @@ def test_update_data(mock_load_data, load_provider):
# Validate that update_value is called
assert len(load_provider) > 0
def test_calculate_adjustment(load_provider, measurement_eos):
"""Test `_calculate_adjustment` for various scenarios."""
data_year_energy = np.random.rand(365, 2, 24)
# Call the method and validate results
weekday_adjust, weekend_adjust = load_provider._calculate_adjustment(data_year_energy)
assert weekday_adjust.shape == (24,)
assert weekend_adjust.shape == (24,)
data_year_energy = np.zeros((365, 2, 24))
weekday_adjust, weekend_adjust = load_provider._calculate_adjustment(data_year_energy)
assert weekday_adjust.shape == (24,)
expected = np.full(24, 100.0)
np.testing.assert_array_equal(weekday_adjust, expected)
assert weekend_adjust.shape == (24,)
expected = np.zeros(24)
np.testing.assert_array_equal(weekend_adjust, expected)
def test_load_provider_adjustments_with_mock_data(load_provider):
"""Test full integration of adjustments with mock data."""
with patch(
"akkudoktoreos.prediction.loadakkudoktor.LoadAkkudoktor._calculate_adjustment"
) as mock_adjust:
mock_adjust.return_value = (np.zeros(24), np.zeros(24))
# Test execution
load_provider._update_data()
assert mock_adjust.called

tests/test_measurement.py Normal file

@@ -0,0 +1,218 @@
import numpy as np
import pytest
from pendulum import datetime, duration
from akkudoktoreos.config.config import SettingsEOS
from akkudoktoreos.measurement.measurement import MeasurementDataRecord, get_measurement
@pytest.fixture
def measurement_eos():
"""Fixture to create a Measurement instance."""
measurement = get_measurement()
measurement.records = [
MeasurementDataRecord(
date_time=datetime(2023, 1, 1, hour=0),
measurement_load0_mr=100,
measurement_load1_mr=200,
),
MeasurementDataRecord(
date_time=datetime(2023, 1, 1, hour=1),
measurement_load0_mr=150,
measurement_load1_mr=250,
),
MeasurementDataRecord(
date_time=datetime(2023, 1, 1, hour=2),
measurement_load0_mr=200,
measurement_load1_mr=300,
),
MeasurementDataRecord(
date_time=datetime(2023, 1, 1, hour=3),
measurement_load0_mr=250,
measurement_load1_mr=350,
),
MeasurementDataRecord(
date_time=datetime(2023, 1, 1, hour=4),
measurement_load0_mr=300,
measurement_load1_mr=400,
),
MeasurementDataRecord(
date_time=datetime(2023, 1, 1, hour=5),
measurement_load0_mr=350,
measurement_load1_mr=450,
),
]
return measurement
def test_interval_count(measurement_eos):
"""Test interval count calculation."""
start = datetime(2023, 1, 1, 0)
end = datetime(2023, 1, 1, 3)
interval = duration(hours=1)
assert measurement_eos._interval_count(start, end, interval) == 3
def test_interval_count_invalid_end_before_start(measurement_eos):
"""Test interval count raises ValueError when end_datetime is before start_datetime."""
start = datetime(2023, 1, 1, 3)
end = datetime(2023, 1, 1, 0)
interval = duration(hours=1)
with pytest.raises(ValueError, match="end_datetime must be after start_datetime"):
measurement_eos._interval_count(start, end, interval)
def test_interval_count_invalid_non_positive_interval(measurement_eos):
"""Test interval count raises ValueError when interval is non-positive."""
start = datetime(2023, 1, 1, 0)
end = datetime(2023, 1, 1, 3)
with pytest.raises(ValueError, match="interval must be positive"):
measurement_eos._interval_count(start, end, duration(hours=0))
def test_energy_from_meter_readings_valid_input(measurement_eos):
"""Test _energy_from_meter_readings with valid inputs and proper alignment of load data."""
key = "measurement_load0_mr"
start_datetime = datetime(2023, 1, 1, 0)
end_datetime = datetime(2023, 1, 1, 5)
interval = duration(hours=1)
load_array = measurement_eos._energy_from_meter_readings(
key, start_datetime, end_datetime, interval
)
expected_load_array = np.array([50, 50, 50, 50, 50]) # Differences between consecutive readings
np.testing.assert_array_equal(load_array, expected_load_array)
def test_energy_from_meter_readings_empty_array(measurement_eos):
"""Test _energy_from_meter_readings with no data (empty array)."""
key = "measurement_load0_mr"
start_datetime = datetime(2023, 1, 1, 0)
end_datetime = datetime(2023, 1, 1, 5)
interval = duration(hours=1)
# Use empty records array
measurement_eos.records = []
load_array = measurement_eos._energy_from_meter_readings(
key, start_datetime, end_datetime, interval
)
# Expected: an array of zeros with one less than the number of intervals
expected_size = (
measurement_eos._interval_count(start_datetime, end_datetime + interval, interval) - 1
)
expected_load_array = np.zeros(expected_size)
np.testing.assert_array_equal(load_array, expected_load_array)
def test_energy_from_meter_readings_misaligned_array(measurement_eos):
"""Test _energy_from_meter_readings with misaligned array size."""
key = "measurement_load1_mr"
start_datetime = measurement_eos.min_datetime
end_datetime = measurement_eos.max_datetime
interval = duration(hours=1)
# Use misaligned array, latest interval set to 2 hours (instead of 1 hour)
measurement_eos.records[-1].date_time = datetime(2023, 1, 1, 6)
load_array = measurement_eos._energy_from_meter_readings(
key, start_datetime, end_datetime, interval
)
expected_load_array = np.array([50, 50, 50, 50, 25]) # Differences between consecutive readings
np.testing.assert_array_equal(load_array, expected_load_array)
def test_energy_from_meter_readings_partial_data(measurement_eos, caplog):
"""Test _energy_from_meter_readings with partial data (misaligned but empty array)."""
key = "measurement_load2_mr"
start_datetime = datetime(2023, 1, 1, 0)
end_datetime = datetime(2023, 1, 1, 5)
interval = duration(hours=1)
with caplog.at_level("DEBUG"):
load_array = measurement_eos._energy_from_meter_readings(
key, start_datetime, end_datetime, interval
)
expected_size = (
measurement_eos._interval_count(start_datetime, end_datetime + interval, interval) - 1
)
expected_load_array = np.zeros(expected_size)
np.testing.assert_array_equal(load_array, expected_load_array)
def test_energy_from_meter_readings_negative_interval(measurement_eos):
"""Test _energy_from_meter_readings with a negative interval."""
key = "measurement_load3_mr"
start_datetime = datetime(2023, 1, 1, 0)
end_datetime = datetime(2023, 1, 1, 5)
interval = duration(hours=-1)
with pytest.raises(ValueError, match="interval must be positive"):
measurement_eos._energy_from_meter_readings(key, start_datetime, end_datetime, interval)
def test_load_total(measurement_eos):
"""Test total load calculation."""
start = datetime(2023, 1, 1, 0)
end = datetime(2023, 1, 1, 2)
interval = duration(hours=1)
result = measurement_eos.load_total(start_datetime=start, end_datetime=end, interval=interval)
# Expected total load per interval
expected = np.array([100, 100]) # Differences between consecutive meter readings
np.testing.assert_array_equal(result, expected)
def test_load_total_no_data(measurement_eos):
"""Test total load calculation with no data."""
measurement_eos.records = []
start = datetime(2023, 1, 1, 0)
end = datetime(2023, 1, 1, 3)
interval = duration(hours=1)
result = measurement_eos.load_total(start_datetime=start, end_datetime=end, interval=interval)
expected = np.zeros(3) # No data, so all intervals are zero
np.testing.assert_array_equal(result, expected)
def test_name_to_key(measurement_eos):
"""Test name_to_key functionality."""
settings = SettingsEOS(
measurement_load0_name="Household",
measurement_load1_name="Heat Pump",
)
measurement_eos.config.merge_settings(settings)
assert measurement_eos.name_to_key("Household", "measurement_load") == "measurement_load0_mr"
assert measurement_eos.name_to_key("Heat Pump", "measurement_load") == "measurement_load1_mr"
assert measurement_eos.name_to_key("Unknown", "measurement_load") is None
def test_name_to_key_invalid_topic(measurement_eos):
"""Test name_to_key with an invalid topic."""
settings = SettingsEOS(
measurement_load0_name="Household",
measurement_load1_name="Heat Pump",
)
measurement_eos.config.merge_settings(settings)
assert measurement_eos.name_to_key("Household", "invalid_topic") is None
def test_load_total_partial_intervals(measurement_eos):
"""Test total load calculation with partial intervals."""
start = datetime(2023, 1, 1, 0, 30) # Start in the middle of an interval
end = datetime(2023, 1, 1, 1, 30) # End in the middle of another interval
interval = duration(hours=1)
result = measurement_eos.load_total(start_datetime=start, end_datetime=end, interval=interval)
expected = np.array([100]) # Only one complete interval covered
np.testing.assert_array_equal(result, expected)
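The meter-reading tests above all rest on one idea: a meter reading is a cumulative counter, so the energy used per interval is the difference of consecutive readings. A minimal sketch of that core step (a hypothetical helper, not the project's `_energy_from_meter_readings`, which additionally handles alignment and missing data):

```python
import numpy as np

def energy_per_interval(readings: list[float]) -> np.ndarray:
    """Energy consumed per interval from cumulative meter readings.

    Meter readings increase monotonically, so differencing consecutive
    readings yields the energy used within each interval.
    """
    return np.diff(np.asarray(readings, dtype=float))
```

For example, readings of 100, 150, 200 yield 50 units of energy in each of the two intervals, matching the expected arrays in the tests above.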


@@ -406,7 +406,7 @@ class TestPredictionContainer:
del container_with_providers["prediction_value"]
series = container_with_providers["prediction_value"]
assert series.name == "prediction_value"
-assert series.tolist() == [None, None, None]
+assert series.tolist() == []
def test_delitem_non_existing_key(self, container_with_providers):
with pytest.raises(KeyError, match="Key 'non_existent_key' not found"):


@@ -303,5 +303,5 @@ def test_timezone_behaviour(
forecast_measured = provider.key_to_series(
"pvforecastakkudoktor_ac_power_measured", other_start_datetime, other_end_datetime
)
-assert len(forecast_measured) == 48
+assert len(forecast_measured) == 1
assert forecast_measured.iloc[0] == 1000.0 # changed before


@@ -96,7 +96,9 @@ def test_import(pvforecast_provider, sample_import_1_json, start_datetime, from_
assert pvforecast_provider.total_hours is not None
assert compare_datetimes(pvforecast_provider.start_datetime, ems_eos.start_datetime).equal
values = sample_import_1_json["pvforecast_ac_power"]
-value_datetime_mapping = pvforecast_provider.import_datetimes(len(values))
+value_datetime_mapping = pvforecast_provider.import_datetimes(
+    ems_eos.start_datetime, len(values)
+)
for i, mapping in enumerate(value_datetime_mapping):
assert i < len(pvforecast_provider.records)
expected_datetime, expected_value_index = mapping

tests/test_pydantic.py Normal file

@@ -0,0 +1,116 @@
from typing import Optional
import pandas as pd
import pendulum
import pytest
from pydantic import Field, ValidationError
from akkudoktoreos.core.pydantic import (
PydanticBaseModel,
PydanticDateTimeData,
PydanticDateTimeDataFrame,
PydanticDateTimeSeries,
)
from akkudoktoreos.utils.datetimeutil import compare_datetimes, to_datetime
class PydanticTestModel(PydanticBaseModel):
datetime_field: pendulum.DateTime = Field(
..., description="A datetime field with pendulum support."
)
optional_field: Optional[str] = Field(default=None, description="An optional field.")
class TestPydanticBaseModel:
def test_valid_pendulum_datetime(self):
dt = pendulum.now()
model = PydanticTestModel(datetime_field=dt)
assert model.datetime_field == dt
def test_invalid_datetime_string(self):
with pytest.raises(ValidationError, match="Input should be an instance of DateTime"):
PydanticTestModel(datetime_field="invalid_datetime")
def test_iso8601_serialization(self):
dt = pendulum.datetime(2024, 12, 21, 15, 0, 0)
model = PydanticTestModel(datetime_field=dt)
serialized = model.to_dict()
expected_dt = to_datetime(dt)
result_dt = to_datetime(serialized["datetime_field"])
assert compare_datetimes(result_dt, expected_dt)
def test_reset_to_defaults(self):
dt = pendulum.now()
model = PydanticTestModel(datetime_field=dt, optional_field="some value")
model.reset_to_defaults()
assert model.datetime_field == dt
assert model.optional_field is None
def test_from_dict_and_to_dict(self):
dt = pendulum.now()
model = PydanticTestModel(datetime_field=dt)
data = model.to_dict()
restored_model = PydanticTestModel.from_dict(data)
assert restored_model.datetime_field == dt
def test_to_json_and_from_json(self):
dt = pendulum.now()
model = PydanticTestModel(datetime_field=dt)
json_data = model.to_json()
restored_model = PydanticTestModel.from_json(json_data)
assert restored_model.datetime_field == dt
class TestPydanticDateTimeData:
def test_valid_list_lengths(self):
data = {
"timestamps": ["2024-12-21T15:00:00+00:00"],
"values": [100],
}
model = PydanticDateTimeData(root=data)
assert pendulum.parse(model.root["timestamps"][0]) == pendulum.parse(
"2024-12-21T15:00:00+00:00"
)
def test_invalid_list_lengths(self):
data = {
"timestamps": ["2024-12-21T15:00:00+00:00"],
"values": [100, 200],
}
with pytest.raises(
ValidationError, match="All lists in the dictionary must have the same length"
):
PydanticDateTimeData(root=data)
class TestPydanticDateTimeDataFrame:
def test_valid_dataframe(self):
df = pd.DataFrame(
{
"value": [100, 200],
},
index=pd.to_datetime(["2024-12-21", "2024-12-22"]),
)
model = PydanticDateTimeDataFrame.from_dataframe(df)
result = model.to_dataframe()
# Check index
assert len(result.index) == len(df.index)
for i, dt in enumerate(df.index):
expected_dt = to_datetime(dt)
result_dt = to_datetime(result.index[i])
assert compare_datetimes(result_dt, expected_dt).equal
class TestPydanticDateTimeSeries:
def test_valid_series(self):
series = pd.Series([100, 200], index=pd.to_datetime(["2024-12-21", "2024-12-22"]))
model = PydanticDateTimeSeries.from_series(series)
result = model.to_series()
# Check index
assert len(result.index) == len(series.index)
for i, dt in enumerate(series.index):
expected_dt = to_datetime(dt)
result_dt = to_datetime(result.index[i])
assert compare_datetimes(result_dt, expected_dt).equal
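The round-trip pattern these pydantic tests verify — a datetime-indexed pandas Series serialized to a JSON-friendly form and rebuilt with its index intact — can be sketched with plain pandas. This is an illustration of the idea only; the function names are hypothetical and the project's models add validation on top:

```python
import pandas as pd

def series_to_payload(series: pd.Series) -> dict:
    """Serialize a datetime-indexed series to a JSON-friendly dict."""
    return {
        "index": [ts.isoformat() for ts in series.index],  # ISO 8601 strings
        "data": series.tolist(),
    }

def payload_to_series(payload: dict) -> pd.Series:
    """Rebuild the series, restoring the datetime index."""
    return pd.Series(payload["data"], index=pd.to_datetime(payload["index"]))
```

ISO 8601 strings survive JSON transport unambiguously, which is why both the sketch and the tests above compare the restored index entry by entry.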


@@ -13,5 +13,5 @@ def test_server(server):
assert config_eos.data_folder_path is not None
assert config_eos.data_folder_path.is_dir()
-result = requests.get(f"{server}/config?")
+result = requests.get(f"{server}/v1/config?")
assert result.status_code == HTTPStatus.OK


@@ -1,10 +1,10 @@
import os
import subprocess
from pathlib import Path
from matplotlib.testing.compare import compare_images
from akkudoktoreos.config.config import get_config
from akkudoktoreos.utils.visualize import generate_example_report
filename = "example_report.pdf"
@@ -17,14 +17,13 @@ DIR_TESTDATA = Path(__file__).parent / "testdata"
reference_file = DIR_TESTDATA / "test_example_report.pdf"
-def test_generate_pdf_main():
+def test_generate_pdf_example():
"""Test generation of example visualization report."""
# Delete the old generated file if it exists
if os.path.isfile(output_file):
os.remove(output_file)
-# Execute the __main__ block of visualize.py by running it as a script
-script_path = Path(__file__).parent.parent / "src" / "akkudoktoreos" / "utils" / "visualize.py"
-subprocess.run(["python", str(script_path)], check=True)
+generate_example_report(filename)
# Check if the file exists
assert os.path.isfile(output_file)


@@ -96,7 +96,7 @@ def test_import(weather_provider, sample_import_1_json, start_datetime, from_fil
assert weather_provider.total_hours is not None
assert compare_datetimes(weather_provider.start_datetime, ems_eos.start_datetime).equal
values = sample_import_1_json["weather_temp_air"]
-value_datetime_mapping = weather_provider.import_datetimes(len(values))
+value_datetime_mapping = weather_provider.import_datetimes(ems_eos.start_datetime, len(values))
for i, mapping in enumerate(value_datetime_mapping):
assert i < len(weather_provider.records)
expected_datetime, expected_value_index = mapping