Fix Various Warnings (#460)
* fix deprecated datetime.datetime.utcnow call

* fix deprecated datetime.datetime.utcnow call

* fix deprecated datetime.datetime.utcnow call

* fix deprecated datetime.datetime.utcnow call

* fix deprecated numpy.in1d call

* catch and filter RankWarning and OptimizeWarning

* only include df for concat if not empty, avoids pandas FutureWarning

* import snuck in there

* just format here

* this should work around the unavailability of RankWarning on 3.9

* also catch out the OptimizeWarning - REVERT if not desired

* imports here

* context manager to ignore the NaturalNameWarning - RESET if we wish to keep them

* a tiny bit of formatting

* FIX Series.__getitem__ warning by using recommended .iloc

* Introduce CI extras just for our GA workflows, to speed up without polluting developer requirements

* install with CI extra and set auto workers determination

* some type checking

* catch and ignore "UserWarning: The figure layout has changed to tight" because we do it on purpose

* verbose flag to get some more output

* import order

* FIX UserWarning: set_ticklabels() etc by setting both the ticks and labels at the same time, as recommended

* FIX UserWarning: No artists with labels found to put in legend by catching it; this can happen due to leading underscores in labels

* remove unused imports

* cleaner

* try using only 2 to spare the tests triggering multiprocessing

* unused imports

* changelog entries

* patch version

* typo

* let's forget about this for now

* have to remove this too

* and this part too

* bump to 0.16.1 and target a merge after #458

* make linting happy

* remove this too

* ask coverage workflow to display the coverage stats

* add citation instruction to doc index

* do not bare catch

* let's let linters complain

* expand for josch

* better as list comprehension

* also raise for the user

* filter out warnings and log them

* filter out warnings and log them

* do not intercept this

* filter out warnings and log them

* unused imports

* more cleanup

* specify new behaviour in changelog

* oops typo

* remove this comment

* should be today

* dotdict
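Several bullets above describe the same pattern: intercept a warning (such as `RankWarning` or `OptimizeWarning`) emitted during a fit and re-emit it as a log message. A minimal, stdlib-only sketch of that pattern; the function names here are hypothetical, not omc3's actual API:

```python
import logging
import warnings

LOG = logging.getLogger(__name__)


def fit_with_logged_warnings(fit_func, *args, **kwargs):
    """Run a fit, intercepting any warnings it emits and
    re-emitting them as log messages instead."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")  # record everything, swallow nothing silently
        result = fit_func(*args, **kwargs)
    for warning in caught:
        LOG.warning("Fit warning: %s", warning.message)
    return result


def shaky_fit(x):
    # Stand-in for e.g. a polyfit or curve_fit call that warns
    warnings.warn("covariance of the parameters could not be estimated")
    return 2 * x


result = fit_with_logged_warnings(shaky_fit, 21)
print(result)  # prints 42
```

The caller gets the fit result plus a log entry instead of a raw warning; the "REVERT if not desired" remarks above refer to undoing exactly this kind of filtering.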
fsoubelet committed Sep 20, 2024
1 parent d8c8923 commit 95024bc
Showing 19 changed files with 306 additions and 169 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/coverage.yml
@@ -12,5 +12,5 @@ jobs:
uses: pylhc/.github/.github/workflows/coverage.yml@master
with:
src-dir: omc3
- pytest-options: -m "not cern_network"
+ pytest-options: -m "not cern_network" --cov-report term-missing
secrets: inherit
37 changes: 26 additions & 11 deletions CHANGELOG.md
@@ -1,5 +1,21 @@
# OMC3 Changelog

#### 2024-09-20 - v0.16.1 - _fsoubelet_

- Fixed:
- Fixed `DeprecationWarning`s related to datetime operations.
- Fixed `DeprecationWarning` occurring due to the use of old `numpy` functions.
- Fixed `FutureWarning` happening during edge-cases of dataframe concatenation by performing checks ahead of time.
- Fixed `FutureWarning`s occurring due to deprecated `pandas.Series` accesses.
- Fixed `UserWarning` occurring when wrongly setting ticks and labels for correction plots.

- Changed:
- Masked `NaturalNameWarning`s happening during HDF5 tables operations, as the use of names such as `kq4.r8b2` is not avoidable and `pandas` properly handles access operations for us.
- Masked `UserWarning`s happening during plotting for operations that are explicitly demanded.
- Intercepted `RankWarning`s which can happen during a `polyfit` of data and re-emitted them as log messages.
- Intercepted `OptimizeWarning`s happening when the covariance parameters could not be estimated in `kmod` analysis and re-emitted them as log messages.
- Intercepted `OptimizeWarning`s happening when the covariance parameters could not be estimated in `rdt` analysis and re-emitted them as log messages.

#### 2024-09-18 - v0.16.0 - _jdilly_

- Added:
@@ -94,7 +110,7 @@
#### 2023-11-29 - v0.12.0 - _jdilly_

- Added to harmonic analysis:
- `suffix` input parameter: adds suffix to output files, which e.g. allows running the same file
with different parameters without overwriting it.
- `bunch_ids` input parameter: in case of multibunch-files only analyse these bunches.
If not given, all bunches will be analysed, as before.
@@ -117,15 +133,15 @@
- Plot Optics: making normalized dispersion plot a special case.

- Added:
- Plot Optics: optional input "--labels" to manually set the legend-labels.

#### 2023-06-16 - v0.11.1 - _jdilly_

- Fixed:
- OptionalString: 'None' as input is converted to None.
- Missing Kerberos config added to MANIFEST for packaging.
- Plot Optics plots now correct error-column, e.g. for beta-beating.
- Added warnings/errors for too few bpms in N-BPM/3-BPM methods.
- Added navbar to sphinx documentation.

- Tests:
@@ -170,12 +186,11 @@
#### 2023-01-20 - v0.7.1 - _jdilly_

- Added:
- - Amplitude Detuning plots: Switch to plot only with/without BBQ correction
+ - Amplitude Detuning plots: Switch to plot only with/without BBQ correction.

- Fixed:
- Second Order Amplitude Detuning fit now working
- - Correct print/calculation of second order direct terms for forced
-   kicks in plot-labels.
+ - Correct print/calculation of second order direct terms for forced kicks in plot-labels.

#### 2022-11-08 - v0.7.0 - _jdilly_

@@ -187,14 +202,14 @@

#### 2022-11-01 - v0.6.6

- - Bugfixes
+ - Bugfixes:
- correction: fullresponse is converted to Path.
- fake measurement from model: don't randomize errors and values by default.

#### 2022-10-15 - v0.6.5

- Added to `knob_extractor`:
- proper state extraction.
- IP2 and IP8 separation/crossing variables.

#### 2022-10-12 - v0.6.4
@@ -264,7 +279,7 @@
#### 2022-05-19 - v0.3.0 - _jdilly_

- Added:
- Linfile cleaning script.

#### 2022-04-25 - v0.2.7 - _awegshe_

Expand Down Expand Up @@ -359,7 +374,7 @@
- Spectrum Plotting
- Turn-by-Turn Converter

- `setup.py` and packaging functionality
- Automated CI
- Multiple versions of python
- Accuracy tests
15 changes: 15 additions & 0 deletions doc/index.rst
@@ -24,6 +24,21 @@ Package Reference

modules/*

Citing
======

If ``omc3`` has been significant in your work, and you would like to acknowledge the package in your academic publication, please consider citing the following:

.. code-block:: bibtex

   @software{omc3,
     author = {OMC-Team and Malina, L and Dilly, J and Hofer, M and Soubelet, F and Wegscheider, A and Coello De Portugal, J and Le Garrec, M and Persson, T and Keintzel, J and Garcia Morales, H and Tomás, R},
     doi = {10.5281/zenodo.5705625},
     publisher = {Zenodo},
     title = {OMC3},
     url = {https://doi.org/10.5281/zenodo.5705625},
     year = 2022
   }

Indices and tables
==================
2 changes: 1 addition & 1 deletion omc3/__init__.py
@@ -11,7 +11,7 @@
__title__ = "omc3"
__description__ = "An accelerator physics tools package for the OMC team at CERN."
__url__ = "https://github.com/pylhc/omc3"
- __version__ = "0.16.0"
+ __version__ = "0.16.1"
__author__ = "pylhc"
__author_email__ = "pylhc@github.com"
__license__ = "MIT"
110 changes: 70 additions & 40 deletions omc3/correction/response_io.py
@@ -4,104 +4,134 @@
Input and output functions for response matrices.
"""

+ from __future__ import annotations
+
+ import logging
+ import warnings
from collections import defaultdict
+ from collections.abc import Sequence
+ from contextlib import contextmanager
from pathlib import Path
- from typing import Dict, Sequence, Set

import pandas as pd
- import logging

+ from tables import NaturalNameWarning

LOG = logging.getLogger(__name__)

- COMPLIB = 'blosc' # zlib is the standard compression
- COMPLEVEL = 9 # goes from 0-9, 9 is highest compression, None deactivates if COMPLIB is None
+ COMPLIB: str = "blosc"  # zlib is the standard compression
+ COMPLEVEL: int = 9  # goes from 0-9, 9 is highest compression, None deactivates if COMPLIB is None


# Fullresponse -----------------------------------------------------------------


- def read_fullresponse(path: Path, optics_parameters: Sequence[str] = None) -> Dict[str, pd.DataFrame]:
+ def read_fullresponse(path: Path, optics_parameters: Sequence[str] = None) -> dict[str, pd.DataFrame]:
"""Load the response matrices from disk.
Beware: As empty DataFrames are skipped on write,
default for not found entries are empty DataFrames.
"""
if not path.exists():
raise IOError(f"Fullresponse file {str(path)} does not exist.")

LOG.info(f"Loading response matrices from file '{str(path)}'")
- with pd.HDFStore(path, mode='r') as store:
-     _check_keys(store, optics_parameters, 'fullresponse')
-
-     fullresponse = defaultdict(pd.DataFrame)
-     if optics_parameters is None:
-         optics_parameters = _main_store_groups(store)
-     for param in optics_parameters:
-         fullresponse[param] = store[param]
+ # If encountering issues, remove the context manager and debug
+ with ignore_natural_name_warning():
+     with pd.HDFStore(path, mode="r") as store:
+         _check_keys(store, optics_parameters, "fullresponse")
+
+         fullresponse = defaultdict(pd.DataFrame)
+         if optics_parameters is None:
+             optics_parameters = _main_store_groups(store)
+         for param in optics_parameters:
+             fullresponse[param] = store[param]
return fullresponse


- def write_fullresponse(path: Path, fullresponse: Dict[str, pd.DataFrame]):
+ def write_fullresponse(path: Path, fullresponse: dict[str, pd.DataFrame]):
"""Write the full response matrices to disk.
Beware: Empty Dataframes are skipped! (HDF creates gigantic files otherwise)
"""
LOG.info(f"Saving response matrices into file '{str(path)}'")
if path.exists():
LOG.warning(f"Fullresponse file {str(path)} already exist and will be overwritten.")
- with pd.HDFStore(path, mode='w', complib=COMPLIB, complevel=COMPLEVEL) as store:
-     for param, response_df in fullresponse.items():
-         store.put(value=response_df, key=param, format="table")
+ # If encountering issues, remove the context manager and debug
+ with ignore_natural_name_warning():
+     with pd.HDFStore(path, mode="w", complib=COMPLIB, complevel=COMPLEVEL) as store:
+         for param, response_df in fullresponse.items():
+             store.put(value=response_df, key=param, format="table")


# Varmap -----------------------------------------------------------------------


- def read_varmap(path: Path, k_values: Sequence[str] = None) -> Dict[str, Dict[str, pd.Series]]:
+ def read_varmap(path: Path, k_values: Sequence[str] = None) -> dict[str, dict[str, pd.Series]]:
"""Load the variable mapping file from disk.
Beware: As empty DataFrames are skipped on write,
default for not found entries are empty Series.
"""
if not path.exists():
raise IOError(f"Varmap file {str(path)} does not exist.")

LOG.info(f"Loading varmap from file '{str(path)}'")
- with pd.HDFStore(path, mode='r') as store:
-     _check_keys(store, k_values, 'varmap')
-
-     varmap = defaultdict(lambda: defaultdict(pd.Series))
-     for key in store.keys():
-         _, param, subparam = key.split('/')
-         if k_values is not None and param not in k_values:
-             continue
-         varmap[param][subparam] = store[key]
+ # If encountering issues, remove the context manager and debug
+ with ignore_natural_name_warning():
+     with pd.HDFStore(path, mode="r") as store:
+         _check_keys(store, k_values, "varmap")
+
+         varmap = defaultdict(lambda: defaultdict(pd.Series))
+         for key in store.keys():
+             _, param, subparam = key.split("/")
+             if k_values is not None and param not in k_values:
+                 continue
+             varmap[param][subparam] = store[key]
return varmap


- def write_varmap(path: Path, varmap: Dict[str, Dict[str, pd.Series]]):
+ def write_varmap(path: Path, varmap: dict[str, dict[str, pd.Series]]):
"""Write the variable mapping file to disk.
Beware: Empty Dataframes are skipped! (HDF creates gigantic files otherwise)
"""
LOG.info(f"Saving varmap into file '{str(path)}'")
- with pd.HDFStore(path, mode='w', complib=COMPLIB, complevel=COMPLEVEL) as store:
-     for param, sub in varmap.items():
-         for subparam, varmap_series in sub.items():
-             store.put(value=varmap_series, key=f"{param}/{subparam}", format="table")
+ # If encountering issues, remove the context manager and debug
+ with ignore_natural_name_warning():
+     with pd.HDFStore(path, mode="w", complib=COMPLIB, complevel=COMPLEVEL) as store:
+         for param, sub in varmap.items():
+             for subparam, varmap_series in sub.items():
+                 store.put(value=varmap_series, key=f"{param}/{subparam}", format="table")


# Helper -----------------------------------------------------------------------


- def _check_keys(store: pd.HDFStore, keys: Sequence[str], id:str):
+ def _check_keys(store: pd.HDFStore, keys: Sequence[str], id: str):
if keys is None:
return

groups = _main_store_groups(store)
not_found = [k for k in keys if k not in groups]
if len(not_found):
-     raise ValueError(f"The following keys could not be found in {id} file:"
-                      f" {', '.join(not_found)}")
+     raise ValueError(f"The following keys could not be found in {id} file:" f" {', '.join(not_found)}")


- def _main_store_groups(store: pd.HDFStore) -> Set[str]:
+ def _main_store_groups(store: pd.HDFStore) -> set[str]:
      """Returns sequence of unique main store groups."""
-     return {k.split('/')[1] for k in store.keys()}
+     return {k.split("/")[1] for k in store.keys()}


@contextmanager
def ignore_natural_name_warning():
"""
This context manager catches and ignores the 'NaturalNameWarning'
emitted within, which in our case comes from pytables. It warns about
table entries such as 'kq4.r8b2' which we can't access with syntax such
as some_table.kq4.r8b2 but we don't care about this. We let pandas handle
the access with getattr (which works).
If encountering issues, comment out the context manager and debug.
"""
with warnings.catch_warnings():
warnings.simplefilter("ignore", NaturalNameWarning)
yield
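The `ignore_natural_name_warning` helper above is the standard `warnings.catch_warnings` suppression pattern. A self-contained sketch of the same idea, using a stand-in warning class so it runs without PyTables installed:

```python
import warnings
from contextlib import contextmanager


class NaturalNameWarning(UserWarning):
    """Stand-in for tables.NaturalNameWarning, so this sketch needs no PyTables."""


@contextmanager
def ignore_natural_name_warning():
    # Suppress only this one warning class; everything else still propagates.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", NaturalNameWarning)
        yield


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    with ignore_natural_name_warning():
        # pytables would warn like this for table keys such as 'kq4.r8b2'
        warnings.warn("not a valid Python identifier", NaturalNameWarning)

print(len(caught))  # prints 0: the warning was filtered out
```

Because `catch_warnings` restores the filter state on exit, the suppression is scoped strictly to the `with` block, which is why the docstring can suggest simply commenting the manager out for debugging.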
4 changes: 2 additions & 2 deletions omc3/definitions/formats.py
@@ -5,7 +5,7 @@
Recurring formats are defined here.
"""
from pathlib import Path
- from datetime import datetime
+ from datetime import datetime, timezone

TIME = "%Y_%m_%d@%H_%M_%S_%f" # CERN default
CONFIG_FILENAME = "{script:s}_{time:s}.ini"
@@ -16,6 +16,6 @@ def get_config_filename(script):
"""Default Filename for config-files. Call from script with ``__file__``."""
return CONFIG_FILENAME.format(
script=Path(script).name.split('.')[0],
-         time=datetime.utcnow().strftime(TIME)
+         time=datetime.now(timezone.utc).strftime(TIME)
)
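The one-line change above is the core of the `utcnow` fix: `datetime.utcnow()` is deprecated since Python 3.12 because it returns a naive datetime, and the recommended replacement is the timezone-aware `datetime.now(timezone.utc)`. A quick sketch using the module's `TIME` format:

```python
from datetime import datetime, timezone

TIME = "%Y_%m_%d@%H_%M_%S_%f"  # CERN default timestamp format

# Deprecated since Python 3.12 (returns a naive datetime, no tzinfo):
# stamp = datetime.utcnow().strftime(TIME)

# Recommended replacement: an aware datetime pinned to UTC.
stamp = datetime.now(timezone.utc).strftime(TIME)
print(stamp)
```

Both spellings format to the same UTC string, so the swap preserves the timestamp behavior while silencing the `DeprecationWarning`.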

