# If using `pblum_mode='dataset-scaled'` while optimizing, it is generally a good idea to disable this (set to 'component-coupled' or 'dataset-coupled') and sample over `pblum` to account for any correlations between the luminosities and your other sampled parameters.  For more details, see the [pblum tutorial](./pblum.ipynb).
#
# If your observational uncertainties are not reliable, you may also want to sample over `sigmas_lnf` (the [emcee fitting a line tutorial](https://emcee.readthedocs.io/en/stable/tutorials/line/) has a nice overview of the mathematics, and the [inverse paper](http://phoebe-project.org/publications/2020Conroy+) describes the implementation within PHOEBE).
#
# See the [distributions tutorial](./distributions.ipynb) for more details on adding distributions.

# In[12]:

b.add_distribution({'sma@binary': phoebe.gaussian_around(0.1),
                    'incl@binary': phoebe.gaussian_around(5),
                    't0_supconj': phoebe.gaussian_around(0.001),
                    'requiv@primary': phoebe.gaussian_around(0.2),
                    'pblum@primary': phoebe.gaussian_around(0.2),
                    'sigmas_lnf@lc01': phoebe.uniform(-1e9, -1e4),
                    }, distribution='ball_around_guess')

# It is useful to make sure that the model-parameter space represented by this initializing distribution covers the observations themselves.

# In[13]:

b.run_compute(compute='fastcompute', sample_from='ball_around_guess',
              sample_num=20, model='init_from_model')

# In[14]:
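# One way to check this visually is to overplot the sampled models on the observations.  As a minimal sketch (assuming the light curve dataset is tagged 'lc01' and has observations attached):

# overplot the 20 sampled models on the 'lc01' observations
_ = b.plot(dataset='lc01', model='init_from_model', show=True)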
# b.add_distribution({'sma@binary': pb.gaussian_around(0.1),
#                     'incl@binary': pb.gaussian_around(5),
#                     't0_supconj': pb.gaussian_around(0.001),
#                     'requiv@primary': pb.gaussian_around(0.2),
#                     'pblum@primary': pb.gaussian_around(0.2),
#                     'sigmas_lnf@lc01': pb.uniform(-1e9, -1e4),
#                     }, distribution='ball_around_guess')

b.add_distribution({'t0_supconj': pb.gaussian_around(0.001),
                    'incl@binary': pb.gaussian_around(5),
                    'q': pb.gaussian_around(1),
                    'ecc': pb.gaussian_around(0.005),
                    'per0': pb.gaussian_around(0.2),
                    'sigmas_lnf@lc01': pb.uniform(-1e9, -1e4),
                    }, distribution='ball_around_guess')

b.run_compute(model='EMCEE_Fit', compute='fastcompute',
              sample_from='ball_around_guess', sample_num=20)

b.set_value('init_from', 'ball_around_guess')
b.set_value('nwalkers', solver='emcee_solver', value=12)   # number of walkers: must be at least twice the number of sampled parameters (here 2 x 6)
b.set_value('niters', solver='emcee_solver', value=250)    # number of iterations
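# With the sampler configured, a minimal sketch of the remaining steps (the solver tag 'emcee_solver' comes from the calls above; the solution tag 'emcee_solution' is an arbitrary label chosen here):

b.run_solver('emcee_solver', solution='emcee_solution')    # run the MCMC sampler
print(b.adopt_solution('emcee_solution', trial_run=True))  # preview the proposed face values without changing the bundle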
# * [rename_distribution](../api/phoebe.frontend.bundle.Bundle.rename_distribution.md)
# * [remove_distribution](../api/phoebe.frontend.bundle.Bundle.remove_distribution.md)

# In[7]:

print(b.get_distribution(distribution='mydist'))

# Now let's add another distribution, with the same `distribution` tag, to the inclination of the binary.

# In[8]:

b.add_distribution(qualifier='incl', component='binary', value=phoebe.uniform(80, 90), distribution='mydist')

# In[9]:

print(b.get_distribution(distribution='mydist'))

# Accessing & Plotting Distributions
# --------------------
#
# The parameters we've created and attached are [DistributionParameters](../api/phoebe.parameters.DistributionParameter.md) and live in `context='distribution'`, with all other tags matching the parameter they're referencing.  For example, let's filter and look at the distributions we've added.

# In[10]:
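# As a minimal sketch, filtering on the distribution context lists all attached distribution parameters:

print(b.filter(context='distribution'))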
# In[4]:

b = phoebe.default_binary()

# In[5]:

b.set_value('latex_repr', component='binary', value='orb')
b.set_value('latex_repr', component='primary', value='1')
b.set_value('latex_repr', component='secondary', value='2')

# In[6]:

b.add_distribution({'sma@binary': phoebe.uniform(5, 8),
                    'incl@binary': phoebe.gaussian(75, 10)},
                   distribution='mydist')

# # Plotting Distributions

# In[7]:

dist = b.get_parameter('sma', component='binary', context='component').get_distribution('mydist')

plt.clf()
figure = plt.figure(figsize=(4, 4))
_ = dist.plot(plot_uncertainties=False)
_ = plt.tight_layout()
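# The individual distributions in 'mydist' can also be plotted together through the bundle.  A minimal sketch (assuming `show=True` is accepted here, as with other PHOEBE plotting calls):

_ = b.plot_distribution_collection(distribution='mydist', show=True)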
# * [b.calculate_residuals](../api/phoebe.parameters.ParameterSet.calculate_residuals.md)
# * [b.calculate_chi2](../api/phoebe.parameters.ParameterSet.calculate_chi2.md)
# * [b.calculate_lnlikelihood](../api/phoebe.parameters.ParameterSet.calculate_lnlikelihood.md)
# * [b.calculate_lnp](../api/phoebe.frontend.bundle.Bundle.calculate_lnp.md)
#
# The log-probability used as the merit function within optimizers and samplers is defined as [calculate_lnp](../api/phoebe.frontend.bundle.Bundle.calculate_lnp.md)`(priors, combine=priors_combine)` + [calculate_lnlikelihood](../api/phoebe.parameters.ParameterSet.calculate_lnlikelihood.md).
#
# To see the effect of `priors_combine`, we can pass the `solver` tag directly to [b.get_distribution_collection](../api/phoebe.frontend.bundle.Bundle.get_distribution_collection.md), [b.plot_distribution_collection](../api/phoebe.frontend.bundle.Bundle.plot_distribution_collection.md), or [b.calculate_lnp](../api/phoebe.frontend.bundle.Bundle.calculate_lnp.md).

# In[14]:

b.add_distribution('teff@primary', phoebe.gaussian(6000, 100), distribution='mydist01')
b.add_distribution('teff@secondary', phoebe.gaussian(5500, 600), distribution='mydist01')
b.add_distribution('teff@primary', phoebe.uniform(5800, 6200), distribution='mydist02')

# In[15]:

b.add_solver('sampler.emcee', priors=['mydist01', 'mydist02'], solver='myemceesolver')

# In[16]:

print(b.filter(qualifier='prior*'))

# Now we'll look at the effect of `priors_combine` on the resulting prior distributions that would be sent to the merit function.
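# As a minimal sketch (assuming the solver exposes a `priors_combine` parameter and accepts the solver tag as a keyword, per the description above), we can inspect how the two overlapping `teff@primary` priors will be combined and evaluate the resulting log-prior:

print(b.get_value('priors_combine', solver='myemceesolver'))  # how overlapping priors on the same parameter are combined
print(b.calculate_lnp(solver='myemceesolver'))                # log-prior under the solver's combined priors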