Example #1
# At this point we would call `b.run_solver(solver='emcee_solver', solution='resample_solution')` to start a new emcee run initialized from the distribution created from this top branch of the previous run.
#
# Now let's imagine that we also want to include an additional parameter in our next emcee run.  So far our initializing distribution contains the following:

# In[10]:

_ = b.plot_distribution_collection('resample_branch', show=True)

# If we wanted to add additional parameters, we need to define their initial distributions via [b.add_distribution](../api/phoebe.frontend.bundle.Bundle.add_distribution.md).
#
# By passing `distribution='resample_branch'`, they would be appended to the existing distribution collection, but we could also name them separately and combine them later.  For example:

# In[11]:

b.add_distribution('requiv@secondary',
                   phoebe.gaussian_around(200),
                   distribution='init_from_new')

# Now we need to set `init_from` to **both** of these distributions.

# In[12]:

b.set_value('init_from',
            solver='emcee_solver',
            value=['resample_branch', 'init_from_new'])

# Whenever `init_from` contains more than one distribution, they will be combined according to `init_from_combine` (and similarly for `priors` and `priors_combine`).

# In[13]:

print(b.get_parameter('init_from_combine', solver='emcee_solver'))
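
# If a specific combination behavior is wanted, this parameter can be set explicitly.  A minimal sketch (assuming 'first', i.e. use only the first listed distribution for any parameter that appears in several, is among the allowed choices in your PHOEBE version):

b.set_value('init_from_combine', solver='emcee_solver', value='first')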
Example #2
# ### Initializing Distributions
# 
# Instead of the `fit_parameters` list that was used for [optimizers](./nelder_mead.ipynb), emcee requires initializing distributions.  Each of our `nwalkers` walkers will then draw from this distribution set to determine the starting position in parameter space.
# 
# For this case, we'll sample over just a few parameters (the ones that we offset somewhat from the known true solution).  In practice, you want to sample over as many of the most-sensitive parameters as possible to account for any correlations between the parameters... but each additional parameter adds a dimension to the parameter space that needs to be sampled and so increases the computational cost.
# 
# If using `pblum_mode='dataset-scaled'` while optimizing, it is generally a good idea to disable this (set to 'component-coupled' or 'dataset-coupled', as in the later examples below) and sample over `pblum` to account for any correlations between the luminosities and your other sampled parameters.  For more details, see the [pblum tutorial](./pblum.ipynb).
# 
# If your observational uncertainties are not reliable, you may also want to sample over `sigmas_lnf` (the [emcee fitting a line tutorial](https://emcee.readthedocs.io/en/stable/tutorials/line/) has a nice overview on the mathematics, and the [inverse paper](http://phoebe-project.org/publications/2020Conroy+) describes the implementation within PHOEBE).
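# 
# Roughly (a sketch of the standard treatment from the emcee line tutorial; see the links above for PHOEBE's exact implementation), `sigmas_lnf` inflates each per-point variance before evaluating the log-likelihood:
# 
# $$ s_n^2 = \sigma_n^2 + e^{2\,\mathrm{lnf}}\, m_n^2, \qquad \ln \mathcal{L} = -\frac{1}{2}\sum_n \left[ \frac{(y_n - m_n)^2}{s_n^2} + \ln\left(2\pi s_n^2\right) \right], $$
# 
# where $m_n$ is the model flux and $\sigma_n$ is the quoted uncertainty of the $n$-th observation.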
# 
# See the [distributions tutorial](./distributions.ipynb) for more details on adding distributions.

# In[12]:


b.add_distribution({'sma@binary': phoebe.gaussian_around(0.1),
                    'incl@binary': phoebe.gaussian_around(5),
                    't0_supconj': phoebe.gaussian_around(0.001),
                    'requiv@primary': phoebe.gaussian_around(0.2),
                    'pblum@primary': phoebe.gaussian_around(0.2),
                    'sigmas_lnf@lc01': phoebe.uniform(-1e9, -1e4),
                   }, distribution='ball_around_guess')


# It is useful to make sure that the model-parameter space represented by this initializing distribution covers the observations themselves.

# In[13]:


b.run_compute(compute='fastcompute', sample_from='ball_around_guess', 
              sample_num=20, model='init_from_model')
print(b.compute_pblums(compute='fastcompute', dataset='lc01', pbflux=True))
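
# We can then overplot these 20 sampled models on the observations to confirm that they bracket the data (a minimal sketch; `model='init_from_model'` refers to the `run_compute` call above):

_ = b.plot(model='init_from_model', show=True)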

# And although it doesn't really matter, let's sample over 'sma' (rather than 'asini') together with 'incl', which requires flipping the constraint so that 'sma' is adjustable.

# In[45]:

b.flip_constraint('sma@binary', solve_for='asini')

# We'll now create our initializing distribution, including gaussian "balls" around all of the optimized values and a uniform boxcar on `pblum@primary`.

# In[46]:

b.add_distribution(
    {
        'teffratio': phoebe.gaussian_around(0.1),
        'requivsumfrac': phoebe.gaussian_around(0.1),
        'incl@binary': phoebe.gaussian_around(3),
        'sma@binary': phoebe.gaussian_around(2),
        'q': phoebe.gaussian_around(0.1),
        'ecc': phoebe.gaussian_around(0.05),
        'per0': phoebe.gaussian_around(5),
        'pblum': phoebe.uniform_around(0.5)
    },
    distribution='ball_around_optimized_solution')

# We can look at this combined set of distributions, which will be used to sample the initial values of our walkers in [emcee](../api/phoebe.parameters.solver.sampler.emcee.md).

# In[47]:

_ = b.plot_distribution_collection('ball_around_optimized_solution', show=True)
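
# To use this distribution to initialize the walkers, it would then be attached to the emcee solver's `init_from` (a minimal sketch, assuming an emcee solver labeled 'emcee_solver' has been added as in the other examples):

b.set_value('init_from', solver='emcee_solver', value='ball_around_optimized_solution')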
Example #4
    b.set_value('pblum_mode',
                'component-coupled')  #Could also set to 'dataset-coupled'

    #Default distribution from example:
    # b.add_distribution({'sma@binary': pb.gaussian_around(0.1),
    #                     'incl@binary': pb.gaussian_around(5),
    #                     't0_supconj': pb.gaussian_around(0.001),
    #                     'requiv@primary': pb.gaussian_around(0.2),
    #                     'pblum@primary': pb.gaussian_around(0.2),
    #                     'sigmas_lnf@lc01': pb.uniform(-1e9, -1e4),
    #                    }, distribution='ball_around_guess')

    b.add_distribution(
        {
            't0_supconj': pb.gaussian_around(0.001),
            'incl@binary': pb.gaussian_around(5),
            'q': pb.gaussian_around(1),
            'ecc': pb.gaussian_around(0.005),
            'per0': pb.gaussian_around(0.2),
            'sigmas_lnf@lc01': pb.uniform(-1e9, -1e4),
        },
        distribution='ball_around_guess')

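    # Draw 20 samples from the initializing distribution and compute the resulting
    # models (useful as a sanity check that the distribution covers the observations).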
    b.run_compute(model='EMCEE_Fit',
                  compute='fastcompute',
                  sample_from='ball_around_guess',
                  sample_num=20)

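    # Use this distribution set to initialize the emcee walkers.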
    b.set_value('init_from', 'ball_around_guess')
Example #5
# In[2]:

import phoebe
from phoebe import u  # units
import numpy as np

logger = phoebe.logger()

# The latex representations of parameters are mostly used while plotting [distributions](./distributions.ipynb)... so let's just create a few dummy distributions so that we can see how they're labeled when plotting.

# In[4]:

b = phoebe.default_binary()
b.add_distribution({
    'teff@primary': phoebe.gaussian_around(100),
    'teff@secondary': phoebe.gaussian_around(150),
    'requiv@primary': phoebe.uniform_around(0.2)
})

# ## Default Representation

# By default, whenever parameters themselves are referenced in plotting (like when calling [b.plot_distribution_collection](../api/phoebe.frontend.bundle.Bundle.plot_distribution_collection.md)), a latex representation of the parameter name, along with the component or dataset when applicable, is used.

# In[5]:

_ = b.plot_distribution_collection(show=True)

# ## Overriding Component Labels
#
# By default, the component labels themselves are used within this latex representation.  These labels can be changed internally with [b.rename_component](../api/phoebe.frontend.bundle.Bundle.rename_component.md).  However, sometimes it is convenient to use a different naming convention for the latex representation.
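
# A minimal sketch (assuming the `latex_repr` parameters exposed on components in recent PHOEBE releases; the labels 'A' and 'B' are arbitrary choices):

b.set_value(qualifier='latex_repr', component='primary', value='A')
b.set_value(qualifier='latex_repr', component='secondary', value='B')
_ = b.plot_distribution_collection(show=True)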
Example #6
       legend=True,
       s=0.01,
       save='lightcurves/SolverFitted/WithOptimizer.pdf')
print('Optimizer Fit Plotted\n')

#Sampler
b.add_solver('sampler.emcee', solver='emcee_solver')
b.set_value('compute', value='fastcompute', solver='emcee_solver')

b.set_value('pblum_mode',
            'component-coupled')  #Could also set to 'dataset-coupled'

#Default distribution from example:
b.add_distribution(
    {
        'sma@binary': pb.gaussian_around(0.1),
        'incl@binary': pb.gaussian_around(5),
        't0_supconj': pb.gaussian_around(0.001),
        'requiv@primary': pb.gaussian_around(0.2),
        'pblum@primary': pb.gaussian_around(0.2),
        'sigmas_lnf@lc01': pb.uniform(-1e9, -1e4),
    },
    distribution='ball_around_guess')

# b.add_distribution({'teffratio': pb.gaussian_around(0.1),
#                     'requivsumfrac': pb.gaussian_around(5),
#                     't0_supconj': pb.gaussian_around(0.001),
#                     'ecc': pb.gaussian_around(0.2),
#                     'per0': pb.gaussian_around(0.2),
#                     'sigmas_lnf@lc01': pb.uniform(-1e9, -1e4),
#                    }, distribution='ball_around_guess')