bnelearn.util.integration module

Utilities that leverage parallel computation for the types of integrals that arise in BNEs.

bnelearn.util.integration.cumulatively_integrate(f: callable, upper_bounds: tensor, lower_bound: float = 0.0, n_evaluations: int = 64)

Integrate the function f on the intervals [lower_bound, upper_bounds[0]], [lower_bound, upper_bounds[1]], … that share a common lower bound.

This function sorts the upper bounds, decomposes the integrals into partial integrals between adjacent points of (lower_bound, *upper_bounds), calculates each partial integral using PyTorch's trapezoid rule with n_evaluations sampling points per interval, then stitches the resulting masses together to achieve the desired output. Note that this way, we can use torch.trapz in parallel over all domains and integrate directly on CUDA, if desired.
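For illustration, a minimal sketch of this decomposition, assuming f is vectorized over tensors. The helper name and implementation details are hypothetical, not the library's actual code::

    import torch

    def cumulative_trapz_sketch(f, upper_bounds, lower_bound=0.0, n_evaluations=64):
        # Sort the upper bounds so that adjacent bounds delimit the sub-intervals.
        flat = upper_bounds.flatten()
        sorted_ub, order = flat.sort()
        # Interval boundaries: lower_bound followed by the sorted upper bounds.
        points = torch.cat(
            [torch.tensor([lower_bound], dtype=flat.dtype, device=flat.device),
             sorted_ub])
        # Build n_evaluations sample points per sub-interval in parallel:
        # grid has shape (batch_size, n_evaluations).
        weights = torch.linspace(0.0, 1.0, n_evaluations,
                                 dtype=flat.dtype, device=flat.device)
        lows, highs = points[:-1].unsqueeze(-1), points[1:].unsqueeze(-1)
        grid = lows + (highs - lows) * weights
        # A single torch.trapz call evaluates all partial integrals at once.
        partial = torch.trapz(f(grid), grid, dim=-1)  # shape (batch_size,)
        # Stitch: the integral up to the k-th sorted bound is the sum
        # of the first k partial masses.
        cumulative = partial.cumsum(0)
        # Undo the sort so outputs align with the caller's original ordering.
        out = torch.empty_like(cumulative)
        out[order] = cumulative
        return out.unsqueeze(-1)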

Arguments:

f: callable, function to be integrated.

upper_bounds: torch.tensor of shape (batch_size, 1) that specifies the upper integration bounds.

lower_bound: float that specifies the lower bound of all domains.

n_evaluations: int that specifies the number of function evaluations per individual interval.

Returns:

integrals: torch.tensor of shape (batch_size, 1).
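A quick usage sketch, again assuming f is vectorized over tensors; the exact values here are u**3 / 3, which the trapezoid rule approximates::

    import torch
    from bnelearn.util.integration import cumulatively_integrate

    # Integrate f(x) = x**2 from 0 to each upper bound u; exact result is u**3 / 3.
    upper_bounds = torch.tensor([[0.5], [1.0], [2.0]])
    integrals = cumulatively_integrate(lambda x: x ** 2, upper_bounds)
    # integrals is approximately [[0.0417], [0.3333], [2.6667]]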