gunz_ml.integrations package
Submodules
gunz_ml.integrations.hydra module
Provides helper functions for working with Hydra configurations.
This module includes utilities for initializing and validating Hydra configurations and for resolving OmegaConf objects into standard Python dicts.
- gunz_ml.integrations.hydra.init_hydra_and_check_config(cfg: DictConfig, script_name: str | None = None, check_script_name: bool = True, check_paths: bool = True, allow_unresolved_keys: bool = False) → HydraConfig[source]
Initializes and validates the Hydra configuration for an experiment.
- Parameters:
cfg (DictConfig) – The configuration object provided by Hydra.
script_name (str | None, optional) – The expected name of the main script. If None, it’s inferred from the Hydra config. Defaults to None.
check_script_name (bool, optional) – If True, validates that the experiment config matches the script name. Defaults to True.
check_paths (bool, optional) – If True, verifies that all paths in cfg.path exist. If False, creates any missing paths instead. Defaults to True.
allow_unresolved_keys (bool, optional) – If False, raises an error if any keys in the config are missing (i.e., have a value of ‘???’). Defaults to False.
- Returns:
The active Hydra configuration object.
- Return type:
HydraConfig
- Raises:
ValueError – If the experiment configuration is invalid for the current script.
RuntimeError – If allow_unresolved_keys is False and missing keys are found.
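The allow_unresolved_keys check can be illustrated with a plain-dict sketch. find_unresolved_keys is a hypothetical helper, not part of the package; the real function operates on a DictConfig, where OmegaConf marks missing values with the literal string '???':

```python
def find_unresolved_keys(cfg: dict, prefix: str = "") -> list[str]:
    """Collect dotted paths of keys whose value is the OmegaConf
    missing-value marker '???'. Plain-dict sketch for illustration."""
    missing = []
    for key, value in cfg.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            missing.extend(find_unresolved_keys(value, prefix=path + "."))
        elif value == "???":
            missing.append(path)
    return missing

cfg = {"model": {"lr": "???"}, "path": {"data": "/datasets"}}
print(find_unresolved_keys(cfg))  # with these inputs: ['model.lr']
```

With allow_unresolved_keys=False, a non-empty result like this would trigger the RuntimeError described above.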
- gunz_ml.integrations.hydra.resolve_cfg(cfg: DictConfig | None, default_to_empty_dict: bool = False) → dict | None[source]
Resolves an OmegaConf DictConfig object into a standard Python dictionary.
- Parameters:
cfg (DictConfig | None) – The configuration object to resolve.
default_to_empty_dict (bool, optional) – If True, returns an empty dict if cfg is None. If False, returns None. Defaults to False.
- Returns:
The resolved configuration as a dictionary, or None.
- Return type:
dict | None
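The None-handling contract can be sketched as follows. resolve_cfg_sketch is illustrative only; the real function would resolve a DictConfig (e.g. via OmegaConf.to_container with resolve=True), for which a plain dict stands in here:

```python
def resolve_cfg_sketch(cfg, default_to_empty_dict: bool = False):
    """Illustrative stand-in for resolve_cfg using plain dicts."""
    # None handling mirrors the documented behaviour.
    if cfg is None:
        return {} if default_to_empty_dict else None
    # Stand-in for resolving the DictConfig into a plain dict.
    return dict(cfg)

print(resolve_cfg_sketch(None))                              # None
print(resolve_cfg_sketch(None, default_to_empty_dict=True))  # {}
print(resolve_cfg_sketch({"lr": 0.1}))                       # {'lr': 0.1}
```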
gunz_ml.integrations.lightning module
Provides helper functions and constants for PyTorch Lightning integration.
This module includes utilities for managing common PyTorch Lightning warnings and defines standardized status enums for logging purposes.
- class gunz_ml.integrations.lightning.FinalizeStatus(value)[source]
Bases: StrEnum
Enumeration for the final status of a run or trial.
- FAILED = 'failed'
- FINISHED = 'finished'
- SUCCESS = 'success'
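Because FinalizeStatus is a string enum, its members compare equal to their plain-string values, which is convenient when writing statuses to loggers. A minimal stdlib sketch (using the str + Enum mixin, which behaves like enum.StrEnum for comparison purposes and also works before Python 3.11):

```python
from enum import Enum

class FinalizeStatus(str, Enum):
    """Sketch of a string-valued status enum for run finalization."""
    FAILED = "failed"
    FINISHED = "finished"
    SUCCESS = "success"

# Members behave like plain strings in comparisons,
# and the enum can be constructed back from a raw value.
assert FinalizeStatus.SUCCESS == "success"
assert FinalizeStatus("failed") is FinalizeStatus.FAILED
```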
- gunz_ml.integrations.lightning.ignore_pl_warnings(dataloader_num_workers: bool = True, slurm_srun: bool = True, mixed_precision: bool = True)[source]
Suppresses common, often noisy, warnings from PyTorch Lightning globally.
- Parameters:
dataloader_num_workers (bool, optional) – If True, suppresses the warning about using a small number of workers in the DataLoader. Defaults to True.
slurm_srun (bool, optional) – If True, suppresses the warning about the srun command being available on the system but not used. Defaults to True.
mixed_precision (bool, optional) – If True, suppresses the warning about legacy usage of 16-bit mixed precision. Defaults to True.
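A global suppression like this can be built on the stdlib warnings module; the sketch below uses a hypothetical message pattern, not Lightning's exact warning text:

```python
import warnings

def ignore_dataloader_warning_sketch() -> None:
    """Install a process-wide 'ignore' filter for a warning message.

    The ".*num_workers.*" pattern is an assumption for illustration,
    not Lightning's real message.
    """
    # filterwarnings prepends the filter, so it takes precedence over
    # existing filters and stays active for the rest of the process.
    warnings.filterwarnings("ignore", message=".*num_workers.*")

ignore_dataloader_warning_sketch()
warnings.warn("Consider increasing num_workers", UserWarning)  # suppressed
```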
- gunz_ml.integrations.lightning.suppress_pl_warnings(dataloader_num_workers: bool = True, slurm_srun: bool = True, mixed_precision: bool = True)[source]
A context manager to temporarily suppress common PyTorch Lightning warnings.
- Parameters:
dataloader_num_workers (bool, optional) – If True, suppresses the warning about using a small number of workers in the DataLoader. Defaults to True.
slurm_srun (bool, optional) – If True, suppresses the warning about the srun command being available on the system but not used. Defaults to True.
mixed_precision (bool, optional) – If True, suppresses the warning about legacy usage of 16-bit mixed precision. Defaults to True.
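A temporary, scoped suppression of this kind can be sketched with contextlib and warnings.catch_warnings, which restores the previous filter state on exit; the message patterns here are assumptions, not Lightning's real warning text:

```python
import warnings
from contextlib import contextmanager

@contextmanager
def suppress_warnings_sketch(patterns=(".*num_workers.*", ".*srun.*")):
    """Temporarily ignore warnings matching the given message patterns."""
    # catch_warnings saves the current filter state and restores it on
    # exit, so the 'ignore' filters below are scoped to the with-block.
    with warnings.catch_warnings():
        for pattern in patterns:
            warnings.filterwarnings("ignore", message=pattern)
        yield

with suppress_warnings_sketch():
    warnings.warn("Consider increasing num_workers", UserWarning)  # suppressed
warnings.warn("outside the context")  # emitted normally again
```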