Search for the transits of long-period exoplanets and binary star systems in archival Kepler light curves.
This project is (in theory) reproducible given enough compute time. Here are the steps:
To get started, install miniconda3 and then create the peerless environment:

    conda env create -f environment.yml
    source activate peerless

where `environment.yml` is in the root of this repository.
There are two packages that you'll need to install following the specific installation instructions in their documentation: (a) the 1.0-dev branch of `george`, and (b) `transit`.
Once this environment is enabled, set the environment variable

    export PEERLESS_DATA_DIR="/path/to/scratch/"

to point to the directory where you want peerless to save all of its output. You'll need something like a TB of disk space to run the full pipeline.
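As a quick sanity check before launching the pipeline, you can create the scratch directory and confirm how much space is free on it. This is just a sketch; the path below is a placeholder, so substitute your actual scratch location:

```shell
# Placeholder scratch location -- substitute your actual path.
export PEERLESS_DATA_DIR="/tmp/peerless-scratch"

# Create the directory if it doesn't exist, then report free space.
mkdir -p "$PEERLESS_DATA_DIR"
df -h "$PEERLESS_DATA_DIR"
```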
Then, you'll need to build the peerless extensions:

    python setup.py build_ext --inplace
Next up, run the target selection and download all the relevant datasets:

    scripts/peerless-targets
    scripts/peerless-datasets -p {ncpu}
    scripts/peerless-download -p {ncpu}

where `{ncpu}` is the number of CPUs that you want to run in parallel using multiprocessing (they must be on the same node).
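For example, you can pick the CPU count automatically with something like the following sketch. The `echo` makes this a dry run that only prints the commands; drop it to actually execute the scripts:

```shell
# Use as many workers as there are cores; fall back to 4 if nproc is
# unavailable (on macOS, use `sysctl -n hw.ncpu` instead).
NCPU=$(nproc 2>/dev/null || echo 4)

# Dry run: print the three commands in order. Remove the echoes to run.
echo scripts/peerless-targets
echo scripts/peerless-datasets -p "$NCPU"
echo scripts/peerless-download -p "$NCPU"
```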
To search these targets for transits, run:

    scripts/peerless-search -p {ncpu} -q --no-plots -o {searchdir}

where `{ncpu}` is the same as above and `{searchdir}` is the root directory for the output.
Then, to run a single pass of injection tests (one per target), run:

    scripts/peerless-search -p {ncpu} -q --no-plots --inject -o {injdir}/{someinteger}

Since you'll want to run many rounds of this script, the output directory should be something like `/path/to/injections/{someinteger}`, where `{someinteger}` is an integer identifying the run.
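A simple wrapper loop over run IDs can look like this sketch. The CPU count, output root, and number of rounds are all placeholders, and the `echo` makes it a dry run that only prints each command:

```shell
NCPU=8                          # placeholder CPU count
INJDIR=/path/to/injections      # placeholder {injdir}

# One independent injection pass per integer ID; remove the echo to
# actually launch the searches.
for i in $(seq 0 9); do
    echo scripts/peerless-search -p "$NCPU" -q --no-plots --inject -o "$INJDIR/$i"
done
```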
To collect the results of the search and injection tests, run:

    scripts/peerless-collect {searchdir} {injdir} -o {resultsdir}

where `{searchdir}` and `{injdir}` are from above and `{resultsdir}` is the location where the results should be saved. Some of the figure scripts expect `{resultsdir}` to be `results` in this directory, so the figures might fail if you choose a different location.
To predict the masses of the injected planets, run:

    scripts/peerless-collect {searchdir} {injdir} -o {resultsdir}
To set up the MCMC fits for the candidates, run:

    scripts/peerless-init {resultsdir}/candidates.csv -p -o {mcmcdir}

where `{mcmcdir}` is the directory where the MCMC results should be saved. Then, to run the MCMC analysis, run:
    export NP={number_of_processes}
    mpiexec -np $NP scripts/peerless-fit {mcmcdir}/{kicid}/init.pkl --nwalkers $((NP*2))

for each `{kicid}`. You'll probably want to rerun this script a few times to get more samples.
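To loop over all candidates, something like the following sketch works. The KIC IDs and paths here are placeholders (in practice you'd iterate over the subdirectories of `{mcmcdir}`), and the `echo` makes it a dry run that only prints each command:

```shell
NP=16                       # placeholder number of MPI processes
MCMCDIR=/path/to/mcmc       # placeholder {mcmcdir}

# Placeholder KIC IDs; in practice: for d in "$MCMCDIR"/*/; do ...
for kicid in 1234567 7654321; do
    echo mpiexec -np "$NP" scripts/peerless-fit \
        "$MCMCDIR/$kicid/init.pkl" --nwalkers $((NP * 2))
done
```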
To collect the MCMC fit results, run:

    scripts/peerless-collect-fits {resultsdir}/candidates.csv {mcmcdir} -o {resultsdir}

This script saves a table of posterior quantiles to `{resultsdir}/fits.csv`, figures to the `document/figures` directory for use in the manuscript, and HDF5 archives of thinned MCMC chains to `{resultsdir}/chains`.
Next, run the predictions notebook. Its dependencies are `exosyspop`, which in turn depends on `isochrones` and `vespa`.
Finally, to generate the LaTeX tables and macros for the paper, run:

    scripts/peerless-write-tex {resultsdir}/candidates.csv {resultsdir}/fits.csv {resultsdir}/injections-with-mass.h5 {resultsdir}/fpp.csv

This will save several `.tex` files to the `document` directory.
Copyright 2015-2016 Daniel Foreman-Mackey
Licensed under the terms of the MIT License (see LICENSE).