Mirror of https://github.com/Akkudoktor-EOS/EOS.git, synced 2025-10-11 11:56:17 +00:00
Improve caching. (#431)
* Move the caching module to core. Add an in-memory cache for caching function and method results during an energy management run (optimization run). Two decorators are provided, one for methods and one for functions.

* Improve the file cache store with load and save functions. Make EOS load the cache file store on startup and save it on shutdown. Add a cyclic task that cleans outdated cache files from the cache file store.

* Improve startup of EOSdash by EOS. Make EOS start EOSdash according to the path configuration given in EOS. The whole environment from EOS is now passed to EOSdash. This should also prevent test errors due to unwanted or wrong config file creation. Both servers now provide a health endpoint that can be used to detect whether the server is running; this is also used for testing now.

* Improve startup of EOS. EOS now has an energy management task that runs shortly after startup. On the first run it tries to execute energy management with predictions newly fetched or initialized from cached data.

* Improve shutdown of EOS. EOS now has a shutdown task that shuts EOS down gracefully, with some time delay so that REST API requests for shutdown or restart are fully serviced.

* Improve EMS. Add an energy management task for repeated energy management, controlled by startup delay and interval configuration parameters. Translate EnergieManagementSystem to the English EnergyManagement.

* Add administration endpoints:
  - endpoints to control caching from the REST API
  - endpoints to control server restart (will not work on Windows) and shutdown from the REST API

* Improve doc generation. Use the "\n" line-end convention also on Windows when generating doc files. Replace the Windows-specific 127.0.0.1 address by the standard 0.0.0.0.
* Improve test support (to be able to test caching):
  - Add a system test option to pytest for running tests with "real" resources
  - Add new test fixtures to start a server per test class and per test function
  - Make the kill signal adapt to Windows/Linux
  - Consistently use "\n" line ends when writing text files in the doc test
  - Fix test_logging under Windows
  - Fix the conftest config_default_dirs test fixture under Windows

* Improve Windows support (from @Lasall):
  - Use 127.0.0.1 as the default config host (model defaults) and additionally redirect 0.0.0.0 to localhost on Windows (because the default config file still has 0.0.0.0)
  - Update install/startup instructions, as package installation is required at the moment

Signed-off-by: Bobby Noelte <b0661n0e17e@gmail.com>
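A health endpoint like the one mentioned above is typically probed as in this hedged sketch; the URL and the `/health` path here are placeholders, not the documented EOS API:

```python
import urllib.error
import urllib.request


def is_server_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the given health endpoint answers with HTTP 200.

    Sketch of how a test fixture could wait for a server to come up; the real
    EOS/EOSdash endpoint paths may differ.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

A test fixture can poll such a function in a retry loop after spawning the server process, instead of sleeping for a fixed time.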
@@ -953,6 +953,44 @@ class DataSequence(DataBase, MutableSequence):
         array = resampled.values
         return array
 
+    def to_dataframe(
+        self,
+        start_datetime: Optional[DateTime] = None,
+        end_datetime: Optional[DateTime] = None,
+    ) -> pd.DataFrame:
+        """Converts the sequence of DataRecord instances into a Pandas DataFrame.
+
+        Args:
+            start_datetime (Optional[datetime]): The lower bound for filtering (inclusive).
+                Defaults to the earliest possible datetime if None.
+            end_datetime (Optional[datetime]): The upper bound for filtering (exclusive).
+                Defaults to the latest possible datetime if None.
+
+        Returns:
+            pd.DataFrame: A DataFrame containing the filtered data from all records.
+        """
+        if not self.records:
+            return pd.DataFrame()  # Return empty DataFrame if no records exist
+
+        # Use filter_by_datetime to get filtered records
+        filtered_records = self.filter_by_datetime(start_datetime, end_datetime)
+
+        # Convert filtered records to a dictionary list
+        data = [record.model_dump() for record in filtered_records]
+
+        # Convert to DataFrame
+        df = pd.DataFrame(data)
+        if df.empty:
+            return df
+
+        # Ensure `date_time` column exists and use it for the index
+        if not "date_time" in df.columns:
+            error_msg = f"Cannot create dataframe: no `date_time` column in `{df}`."
+            logger.error(error_msg)
+            raise TypeError(error_msg)
+        df.index = pd.DatetimeIndex(df["date_time"])
+        return df
+
     def sort_by_datetime(self, reverse: bool = False) -> None:
         """Sort the DataRecords in the sequence by their date_time attribute.
@@ -1465,7 +1503,7 @@ class DataImportMixin:
                     error_msg += f"Field: {field}\nError: {message}\nType: {error_type}\n"
                 logger.debug(f"PydanticDateTimeDataFrame import: {error_msg}")
 
-        # Try dictionary with special keys start_datetime and intervall
+        # Try dictionary with special keys start_datetime and interval
         try:
             import_data = PydanticDateTimeData.model_validate_json(json_str)
             self.import_from_dict(import_data.to_dict())
@@ -1525,7 +1563,7 @@ class DataImportMixin:
            and `key_prefix = "load"`, only the "load_mean" key will be processed even though
            both keys are in the record.
        """
-        with import_file_path.open("r") as import_file:
+        with import_file_path.open("r", encoding="utf-8", newline=None) as import_file:
            import_str = import_file.read()
            self.import_from_json(
                import_str, key_prefix=key_prefix, start_datetime=start_datetime, interval=interval
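The `to_dataframe` method added in the first hunk essentially dumps each record to a dict, builds a DataFrame, and indexes it by the mandatory `date_time` column. A standalone sketch of that core step with plain pandas (the sample records here are made up for illustration):

```python
import pandas as pd

# Stand-ins for record.model_dump() output from a DataSequence.
records = [
    {"date_time": "2025-01-01T00:00:00", "load": 1.2},
    {"date_time": "2025-01-01T01:00:00", "load": 0.8},
]

df = pd.DataFrame(records)
if "date_time" not in df.columns:
    # Mirrors the TypeError raised by to_dataframe when the column is missing.
    raise TypeError("Cannot create dataframe: no `date_time` column.")
# Index by timestamp, as to_dataframe does via pd.DatetimeIndex.
df.index = pd.DatetimeIndex(df["date_time"])
```

Indexing by `DatetimeIndex` enables time-based slicing (e.g. `df.loc["2025-01-01"]`) on the result, which is presumably why the method sets it up before returning.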