bnelearn.util.logging module

This module contains utilities for logging experiments.

class bnelearn.util.logging.CustomSummaryWriter(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='')[source]

Bases: SummaryWriter

Extends SummaryWriter with two methods:

  • a method to add multiple scalars in the way that we intend. The original SummaryWriter can either add a single scalar at a time or multiple scalars, but in the latter case multiple runs are created without the option to control them.

  • an overwritten add_hparams method that writes hparams without creating another tensorboard run file.

add_hparams(hparam_dict=None, metric_dict=None, global_step=None)[source]

Overrides the parent method to prevent the creation of unwanted additional subruns while logging hyperparams, as done by the original PyTorch method.

add_metrics_dict(metrics_dict: dict, run_suffices: List[str], global_step=None, walltime=None, group_prefix: Optional[str] = None, metric_tag_mapping: Optional[dict] = None)[source]
Args:

metrics_dict (dict): A dict of metrics. Keys are tag names, values are values. Values can be float, List[float], or Tensor. For a List or (nonscalar) Tensor, the length must match n_models.

run_suffices (List[str]): Does not need to be supplied if each value in metrics_dict is scalar. When metrics_dict contains lists/iterables, they must all have the same length, which must equal the length of run_suffices.

global_step (int, optional): The step/iteration at which the metrics are being logged.

walltime (optional)

group_prefix (str, optional): If given, each metric name will be prepended with this prefix (and a '/'), which groups tags in tensorboard into categories.

metric_tag_mapping (dict, optional): A dictionary that maps metrics (keys of metrics_dict) to the desired tag names in tensorboard. If given, each metric name is converted to the corresponding tag name. NOTE: bnelearn.util.metrics.MAPPING_METRICS_TAGS contains a standard mapping for common metrics. These already include (metric-specific) prefixes.
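The scalar-vs-list handling described above can be sketched as follows. This is a simplified stand-in for the actual method (which writes to tensorboard rather than returning a dict); the exact way suffices are joined to tag names is an assumption:

```python
from typing import Dict, List, Optional

def expand_metrics(metrics_dict: Dict[str, object],
                   run_suffices: Optional[List[str]] = None,
                   group_prefix: Optional[str] = None) -> Dict[str, float]:
    """Flatten a metrics dict into individual (tag, scalar) pairs.

    Scalar values map to a single tag; list-valued metrics are split
    into one tag per model via the matching run suffix.
    """
    flat = {}
    for name, value in metrics_dict.items():
        tag = f"{group_prefix}/{name}" if group_prefix else name
        if isinstance(value, (int, float)):
            flat[tag] = float(value)
        else:
            # list-like value: one entry per model / run suffix
            assert run_suffices is not None and len(value) == len(run_suffices)
            for suffix, v in zip(run_suffices, value):
                flat[f"{tag}_{suffix}"] = float(v)
    return flat
```

With two models, a per-model metric like a utility list produces one tagged scalar per run suffix, while scalar metrics stay single tags.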

bnelearn.util.logging.export_stepwise_linear_bid(experiment_dir, bidders: List[Bidder], step=0.01)[source]

Exports grid valuations and corresponding bids for use by the verifier.

Args:

experiment_dir: str, directory where the export will be saved

bidders: List[Bidder], the bidders to be evaluated

step: float, step length

Returns:

to disk: List[csv]
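A minimal sketch of such a grid export, assuming a per-bidder bid function of a scalar valuation and a hypothetical file layout of one CSV per bidder:

```python
import csv
import os

def export_grid_bids(experiment_dir, bid_fns, step=0.01, v_max=1.0):
    """Evaluate each bid function on an equally spaced valuation grid
    and write one CSV of (valuation, bid) rows per bidder."""
    os.makedirs(experiment_dir, exist_ok=True)
    paths = []
    n_steps = int(round(v_max / step))
    for i, bid_fn in enumerate(bid_fns):
        path = os.path.join(experiment_dir, f"bidder_{i}.csv")
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["valuation", "bid"])
            for k in range(n_steps + 1):
                v = k * step
                writer.writerow([v, bid_fn(v)])
        paths.append(path)
    return paths
```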

bnelearn.util.logging.log_git_commit_hash(experiment_dir)[source]

Saves the hash of the current git commit into experiment_dir.
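The behavior can be sketched as below; the hash getter is injected so the file I/O can be exercised without a git checkout, and the file name git_commit_hash.txt is an assumption:

```python
import os
import subprocess

def current_commit_hash() -> str:
    """Return the hash of the current git HEAD (requires a git checkout)."""
    return subprocess.check_output(
        ["git", "rev-parse", "HEAD"]).decode().strip()

def save_commit_hash(experiment_dir, hash_getter=current_commit_hash):
    """Write the commit hash into <experiment_dir>/git_commit_hash.txt."""
    os.makedirs(experiment_dir, exist_ok=True)
    path = os.path.join(experiment_dir, "git_commit_hash.txt")
    with open(path, "w") as f:
        f.write(hash_getter() + "\n")
    return path
```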

bnelearn.util.logging.print_aggregate_tensorboard_logs(experiment_dir)[source]

Prints in tabular form the aggregate log from all runs in the current experiment; reads data from the csv file in the experiment directory.

bnelearn.util.logging.print_full_tensorboard_logs(experiment_dir, first_row: int = 0, last_row=None)[source]

Prints in tabular form the full log from all runs in the current experiment; reads data from a pkl file in the experiment directory.

Args:

first_row: the first row to be printed if the full log is used

last_row: the last row to be printed if the full log is used
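Both printing helpers boil down to reading a tabular log and aligning columns. A self-contained sketch of that step, returning the lines instead of printing so the output can be checked:

```python
import csv

def tabulate_csv(csv_path, first_row=0, last_row=None):
    """Return rows [first_row:last_row] of a csv log as aligned text lines."""
    with open(csv_path) as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:][first_row:last_row]
    table = [header] + body
    widths = [max(len(r[i]) for r in table) for i in range(len(header))]
    return ["  ".join(c.ljust(w) for c, w in zip(r, widths)) for r in table]
```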

bnelearn.util.logging.process_figure(fig, epoch=None, figure_name='plot', tb_group='eval', tb_writer=None, display=False, output_dir=None, save_png=False, save_svg=False)[source]

Displays, logs, and/or saves a figure.

bnelearn.util.logging.read_bne_utility_database(exp: Experiment)[source]

Check if this setting’s BNE has been saved to disk before.

Args:

exp: Experiment

Returns:
db_batch_size: int

sample size of a DB entry if found, else -1

db_bne_utility: List[n_players]

list of the saved BNE utilities
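The lookup can be sketched against a hypothetical JSON database (the real on-disk format is not specified here); note the -1 / empty-list fallbacks from the description above:

```python
import json
import os

def read_bne_db(db_path, setting_key):
    """Look up cached BNE utilities for a setting.

    Returns (db_batch_size, db_bne_utility): the sample size of the
    entry if found (else -1) and the list of saved per-player BNE
    utilities (else an empty list).
    """
    if not os.path.exists(db_path):
        return -1, []
    with open(db_path) as f:
        db = json.load(f)
    entry = db.get(setting_key)
    if entry is None:
        return -1, []
    return entry["batch_size"], entry["bne_utilities"]
```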

bnelearn.util.logging.save_experiment_config(experiment_log_dir, experiment_configuration: ExperimentConfig)[source]

Serializes ExperimentConfiguration into a readable JSON file

Parameters:
  • experiment_log_dir – full path except for the file name

  • experiment_configuration – experiment configuration as given by ConfigurationManager
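A sketch of the serialization with a toy dataclass in place of the real ExperimentConfig; the file name experiment_configurations.json is an assumption:

```python
import dataclasses
import json
import os

@dataclasses.dataclass
class ToyConfig:
    """Stand-in for ExperimentConfig (the real class has many more fields)."""
    n_players: int = 2
    payment_rule: str = "first_price"

def save_config(experiment_log_dir, config) -> str:
    """Serialize a (dataclass) configuration into a readable JSON file."""
    os.makedirs(experiment_log_dir, exist_ok=True)
    path = os.path.join(experiment_log_dir, "experiment_configurations.json")
    with open(path, "w") as f:
        json.dump(dataclasses.asdict(config), f, indent=4)
    return path
```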

bnelearn.util.logging.tabulate_tensorboard_logs(experiment_dir, write_aggregate=True, write_detailed=False, write_binary=False)[source]

This function reads all tensorboard event log files in subdirectories and converts their content into a single csv file containing info of all runs.
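Parsing tensorboard event files requires the tensorboard package; the aggregation step itself can be sketched with plain dicts standing in for the parsed scalar series:

```python
import csv

def runs_to_csv(run_scalars, out_path):
    """Merge per-run scalar series into a single long-format csv.

    `run_scalars` maps run name -> {tag: [(step, value), ...]}, a
    simplified stand-in for what is parsed out of tensorboard event
    files. Writes one row per (run, tag, step)."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["run", "tag", "step", "value"])
        for run, tags in sorted(run_scalars.items()):
            for tag, series in sorted(tags.items()):
                for step, value in series:
                    writer.writerow([run, tag, step, value])
    return out_path
```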

bnelearn.util.logging.write_bne_utility_database(exp: Experiment, bne_utilities_sampled: list)[source]

Write the sampled BNE utilities to disk.

Args:

exp: Experiment

bne_utilities_sampled: list, the BNE utilities that are to be written to disk
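The write side can be sketched assuming a hypothetical JSON layout with one entry per setting key (the real on-disk database format is not specified here):

```python
import json
import os

def write_bne_db(db_path, setting_key, batch_size, bne_utilities):
    """Insert or overwrite the sampled BNE utilities for one setting."""
    db = {}
    if os.path.exists(db_path):
        with open(db_path) as f:
            db = json.load(f)
    db[setting_key] = {"batch_size": batch_size,
                       "bne_utilities": list(bne_utilities)}
    with open(db_path, "w") as f:
        json.dump(db, f, indent=4)
    return db_path
```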