bnelearn.experiment.configurations module¶
This module provides dataclasses that are used to hold experiment configurations. Values that default to None are either optional, specific to certain kinds of experiments, or set later based on other values.
- class bnelearn.experiment.configurations.EnhancedJSONEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]¶
Bases: JSONEncoder
- default(o)[source]¶
Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError). For example, to support arbitrary iterators, you could implement default like this:

def default(self, o):
    try:
        iterable = iter(o)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return JSONEncoder.default(self, o)
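Since this module's configs are dataclasses, EnhancedJSONEncoder presumably overrides default to handle dataclass instances. The pattern can be sketched with the standard library alone (the class and field names below are illustrative stand-ins, not bnelearn's actual code):

```python
import dataclasses
import json
from dataclasses import dataclass
from json import JSONEncoder

class DataclassJSONEncoder(JSONEncoder):
    """Sketch of a dataclass-aware encoder in the spirit of
    EnhancedJSONEncoder: convert dataclasses to dicts, defer
    everything else to the base implementation."""
    def default(self, o):
        if dataclasses.is_dataclass(o):
            return dataclasses.asdict(o)
        return JSONEncoder.default(self, o)

@dataclass
class DemoConfig:  # hypothetical example dataclass
    n_runs: int
    seeds: list

result = json.dumps(DemoConfig(n_runs=2, seeds=[1, 2]),
                    cls=DataclassJSONEncoder)
print(result)  # → {"n_runs": 2, "seeds": [1, 2]}
```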
- class bnelearn.experiment.configurations.ExperimentConfig(experiment_class: str, running: bnelearn.experiment.configurations.RunningConfig, setting: bnelearn.experiment.configurations.SettingConfig, learning: bnelearn.experiment.configurations.LearningConfig, logging: bnelearn.experiment.configurations.LoggingConfig, hardware: bnelearn.experiment.configurations.HardwareConfig)[source]¶
Bases: object
- experiment_class: str¶
- hardware: HardwareConfig¶
- learning: LearningConfig¶
- logging: LoggingConfig¶
- running: RunningConfig¶
- setting: SettingConfig¶
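ExperimentConfig composes the other config dataclasses into one object. A minimal sketch of that nesting, using abbreviated local stand-ins (the field sets are truncated and the experiment_class value is hypothetical; the real classes live in bnelearn.experiment.configurations):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RunningConfig:  # abbreviated stand-in
    n_runs: int
    n_epochs: int

@dataclass
class HardwareConfig:  # abbreviated stand-in
    cuda: bool
    specific_gpu: int
    fallback: bool
    max_cpu_threads: int
    device: Optional[str] = None

@dataclass
class ExperimentConfig:  # abbreviated stand-in: only two sub-configs shown
    experiment_class: str
    running: RunningConfig
    hardware: HardwareConfig

config = ExperimentConfig(
    experiment_class='single_item_uniform_symmetric',  # hypothetical value
    running=RunningConfig(n_runs=1, n_epochs=500),
    hardware=HardwareConfig(cuda=True, specific_gpu=0,
                            fallback=True, max_cpu_threads=1),
)
print(config.running.n_epochs)  # nested access → 500
```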
- class bnelearn.experiment.configurations.HardwareConfig(cuda: bool, specific_gpu: int, fallback: bool, max_cpu_threads: int, device: str = None)[source]¶
Bases: object
- cuda: bool¶
- device: str = None¶
- fallback: bool¶
- max_cpu_threads: int¶
- specific_gpu: int¶
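The device field defaults to None, suggesting it is derived later from cuda, specific_gpu, and fallback. One plausible resolution rule, sketched without torch (the helper name and exact logic are assumptions, not bnelearn's implementation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HardwareConfig:  # abbreviated stand-in for the real dataclass
    cuda: bool
    specific_gpu: int
    fallback: bool
    max_cpu_threads: int
    device: Optional[str] = None

def resolve_device(cfg: HardwareConfig, cuda_available: bool) -> str:
    """Hypothetical helper: pick the device string once CUDA
    availability is known (the real logic lives in bnelearn)."""
    if cfg.cuda and cuda_available:
        return f'cuda:{cfg.specific_gpu}'
    if cfg.cuda and not cfg.fallback:
        raise RuntimeError('CUDA requested but unavailable, fallback disabled')
    return 'cpu'

cfg = HardwareConfig(cuda=True, specific_gpu=1, fallback=True, max_cpu_threads=2)
cfg.device = resolve_device(cfg, cuda_available=False)
print(cfg.device)  # CUDA unavailable, fallback enabled → 'cpu'
```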
- class bnelearn.experiment.configurations.LearningConfig(model_sharing: bool, learner_type: str, learner_hyperparams: dict, optimizer_type: str, optimizer_hyperparams: dict, scheduler_type: str, scheduler_hyperparams: dict, hidden_nodes: List[int], pretrain_iters: int, pretrain_to_bne: int, batch_size: int, smoothing_temperature: float, redraw_every_iteration: bool, mixed_strategy: str, bias: bool, hidden_activations: List[torch.nn.modules.module.Module] = None, value_contest: bool = True)[source]¶
Bases: object
- batch_size: int¶
- bias: bool¶
- learner_hyperparams: dict¶
- learner_type: str¶
- mixed_strategy: str¶
- model_sharing: bool¶
- optimizer_hyperparams: dict¶
- optimizer_type: str¶
- pretrain_iters: int¶
- pretrain_to_bne: int¶
- redraw_every_iteration: bool¶
- scheduler_hyperparams: dict¶
- scheduler_type: str¶
- smoothing_temperature: float¶
- value_contest: bool = True¶
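Several fields here are string-plus-dict pairs (learner_type/learner_hyperparams, optimizer_type/optimizer_hyperparams, scheduler_type/scheduler_hyperparams), suggesting the strings are resolved to classes at setup time. A common pattern for that is a registry lookup; the sketch below uses stand-in classes rather than torch.optim, and bnelearn's actual resolution mechanism may differ:

```python
class SGD:   # stand-in for an optimizer class such as torch.optim.SGD
    def __init__(self, **hyperparams):
        self.hyperparams = hyperparams

class Adam:  # stand-in for an optimizer class such as torch.optim.Adam
    def __init__(self, **hyperparams):
        self.hyperparams = hyperparams

# Hypothetical registry mapping optimizer_type strings to classes.
OPTIMIZERS = {'sgd': SGD, 'adam': Adam}

def make_optimizer(optimizer_type: str, optimizer_hyperparams: dict):
    """Resolve the string-valued config field to a class, then
    instantiate it with the companion hyperparameter dict."""
    try:
        cls = OPTIMIZERS[optimizer_type.lower()]
    except KeyError:
        raise ValueError(f'Unknown optimizer_type: {optimizer_type!r}')
    return cls(**optimizer_hyperparams)

opt = make_optimizer('adam', {'lr': 1e-3})
print(type(opt).__name__, opt.hyperparams)  # → Adam {'lr': 0.001}
```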
- class bnelearn.experiment.configurations.LoggingConfig(enable_logging: bool, log_root_dir: str, util_loss_batch_size: int, util_loss_opponent_batch_size: int, util_loss_grid_size: int, eval_frequency: int, eval_batch_size: int, cache_eval_actions: bool, plot_frequency: int, plot_points: int, plot_show_inline: bool, log_metrics: dict, best_response: bool, save_tb_events_to_csv_aggregate: bool, save_tb_events_to_csv_detailed: bool, save_tb_events_to_binary_detailed: bool, save_models: bool, log_componentwise_norm: bool, save_figure_to_disk_png: bool, save_figure_to_disk_svg: bool, save_figure_data_to_disk: bool, experiment_dir: Optional[str] = None, experiment_name: Optional[str] = None)[source]¶
Bases: object
Controls logging and evaluation aspects of an experiment suite.
If logging is enabled, the experiment runs will be logged to the following directories:
log_root_dir / [setting-specific dir hierarchy determined by Experiment subclasses] / experiment_timestamp + experiment_name / run_timestamp + run_seed
- best_response: bool¶
- cache_eval_actions: bool¶
- enable_logging: bool¶
- eval_batch_size: int¶
- eval_frequency: int¶
- experiment_dir: str = None¶
- experiment_name: str = None¶
- export_step_wise_linear_bid_function_size = None¶
- log_componentwise_norm: bool¶
- log_metrics: dict¶
- log_root_dir: str¶
- plot_frequency: int¶
- plot_points: int¶
- plot_show_inline: bool¶
- save_figure_data_to_disk: bool¶
- save_figure_to_disk_png: bool¶
- save_figure_to_disk_svg: bool¶
- save_models: bool¶
- save_tb_events_to_binary_detailed: bool¶
- save_tb_events_to_csv_aggregate: bool¶
- save_tb_events_to_csv_detailed: bool¶
- util_loss_batch_size: int¶
- util_loss_grid_size: int¶
- util_loss_opponent_batch_size: int¶
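The directory scheme described above (log_root_dir / setting-specific hierarchy / experiment_timestamp + experiment_name / run_timestamp + run_seed) can be sketched as follows; the timestamp format and joining characters are assumptions, as is the helper name:

```python
import os
import time

def build_run_dir(log_root_dir: str, setting_hierarchy: str,
                  experiment_name: str, run_seed: int) -> str:
    """Hypothetical helper illustrating the documented directory
    hierarchy; the exact formatting in bnelearn may differ."""
    experiment_timestamp = time.strftime('%Y-%m-%d %H.%M')
    run_timestamp = time.strftime('%Y-%m-%d %H.%M')
    return os.path.join(
        log_root_dir,
        setting_hierarchy,                            # set by Experiment subclasses
        f'{experiment_timestamp} {experiment_name}',  # one dir per experiment
        f'{run_timestamp} {run_seed}',                # one dir per run
    )

path = build_run_dir('/tmp/bnelearn-logs', 'single_item/first_price',
                     'my_experiment', run_seed=42)
print(path)
```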
- class bnelearn.experiment.configurations.RunningConfig(n_runs: int, n_epochs: int, seeds: Iterable[int] = None)[source]¶
Bases: object
- n_epochs: int¶
- n_runs: int¶
- seeds: Iterable[int] = None¶
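Since seeds defaults to None while n_runs is required, a per-run seed is presumably generated when none is supplied. One natural fallback is a deterministic seed per run; this sketch is an assumption, not bnelearn's actual default:

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional

@dataclass
class RunningConfig:  # stand-in mirroring the real field set
    n_runs: int
    n_epochs: int
    seeds: Optional[Iterable[int]] = None

def effective_seeds(cfg: RunningConfig) -> List[int]:
    """Hypothetical helper: if no explicit seeds are given, fall back
    to one deterministic seed per run (the real default may differ)."""
    if cfg.seeds is None:
        return list(range(cfg.n_runs))
    return list(cfg.seeds)

print(effective_seeds(RunningConfig(n_runs=3, n_epochs=100)))                # → [0, 1, 2]
print(effective_seeds(RunningConfig(n_runs=2, n_epochs=100, seeds=[7, 9])))  # → [7, 9]
```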
- class bnelearn.experiment.configurations.SettingConfig(n_players: int, n_items: int, payment_rule: str, risk: float, common_prior: torch.distributions.distribution.Distribution = None, valuation_mean: float = None, valuation_std: float = None, u_lo: list = None, u_hi: list = None, gamma: float = None, correlation_types: str = None, correlation_groups: List[List[int]] = None, correlation_coefficients: List[float] = None, pretrain_transform: callable = None, constant_marginal_values: bool = False, item_interest_limit: int = None, efficiency_parameter: float = None, core_solver: str = None, tullock_impact_factor: float = None, impact_function: str = None, crowdsourcing_values: List = None)[source]¶
Bases: object
- common_prior: Distribution = None¶
- constant_marginal_values: bool = False¶
- core_solver: str = None¶
- correlation_coefficients: List[float] = None¶
- correlation_groups: List[List[int]] = None¶
- correlation_types: str = None¶
- crowdsourcing_values: List = None¶
- efficiency_parameter: float = None¶
- gamma: float = None¶
- impact_function: str = None¶
- item_interest_limit: int = None¶
- n_items: int¶
- n_players: int¶
- payment_rule: str¶
- pretrain_transform: callable = None¶
- risk: float¶
- tullock_impact_factor: float = None¶
- u_hi: list = None¶
- u_lo: list = None¶
- valuation_mean: float = None¶
- valuation_std: float = None¶
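The correlation fields come in parallel lists: correlation_groups should cover the player indices and correlation_coefficients should supply one value per group. A hedged validation sketch of that invariant (the helper is illustrative; bnelearn may enforce this differently or not at all):

```python
from typing import List

def validate_correlation(n_players: int,
                         correlation_groups: List[List[int]],
                         correlation_coefficients: List[float]) -> None:
    """Hypothetical sanity check, assuming groups partition
    range(n_players) and coefficients are given per group."""
    flat = sorted(i for group in correlation_groups for i in group)
    if flat != list(range(n_players)):
        raise ValueError('correlation_groups must partition the player indices')
    if len(correlation_coefficients) != len(correlation_groups):
        raise ValueError('need one correlation coefficient per group')

# Two groups among four players, one coefficient each:
validate_correlation(4, [[0, 1], [2, 3]], [0.5, 0.0])
print('ok')
```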