BEAST

The Bayesian Extinction and Stellar Tool (BEAST) fits the ultraviolet to near-infrared photometric SEDs of stars to extract stellar and dust extinction parameters. The stellar parameters are age (t), mass (M), metallicity (Z), and distance (d). The dust extinction parameters are dust column (Av), average grain size (Rv), and mixing between type A and B extinction curves (fA).

The full details of the BEAST are provided in Gordon et al. (2016, ApJ, 826, 104). <http://adsabs.harvard.edu/abs/2016ApJ...826..104G>

User Documentation

Setting Up the BEAST

Basics

  1. Define project and grid input parameters in datamodel.py
  2. Execute a BEAST run using python run_beast.py with the appropriate task flags
    • Default full stack run: python run_beast.py -p -o -t -f

BEAST Data Model

Before running the BEAST, you will need to modify datamodel.py to specify the required parameters for generating models and fitting data. These parameters (and example values) are described below.

Project Details
  • project: pathname of the working subdirectory.
  • filters: names of photometric filter passbands (matching library names).
  • basefilters: short versions of passband names.
  • obsfile: filename for input flux data.
  • obs_colnames: column names in obsfile for observed fluxes. The input data MUST be in fluxes, NOT in magnitudes, and the fluxes MUST be in normalized Vega units.
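
A minimal sketch of these entries in datamodel.py (all values are illustrative; adapt them to your own data):

project = 'beast_example_phat'
filters = ['HST_WFC3_F275W', 'HST_WFC3_F336W', 'HST_ACS_WFC_F475W',
           'HST_ACS_WFC_F814W', 'HST_WFC3_F110W', 'HST_WFC3_F160W']
basefilters = ['F275W', 'F336W', 'F475W', 'F814W', 'F110W', 'F160W']
obsfile = 'data/my_catalog.fits'  # hypothetical photometry catalog
obs_colnames = [f.lower() + '_rate' for f in basefilters]
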
Artificial Star Test (AST) File Parameters

The BEAST generates artificial star test (AST) input files based on additional input parameters from datamodel.py.

  • ast_models_selected_per_age: number of models to pick per age (default = 70).
  • ast_bands_above_maglimit: number of filters that must be above the magnitude limit for an AST to be included in the list (default = 3).
  • ast_realization_per_model: number of realizations of each included AST model to be put into the list (default = 20).
  • ast_maglimit: two options: (1) number of magnitudes fainter than the 90th percentile faintest star in the photometry catalog to be used for the mag cut (default = 1); (2) custom faint end limits (space-separated list of numbers, one for each band).
  • ast_with_positions: (optional; bool) if True, the AST list is produced with X,Y positions. If False, the AST list is produced with only magnitudes.
  • ast_pixel_distribution: (optional; float) minimum pixel separation between AST position and catalog star used to determine the AST spatial distribution. Used if ast_with_positions is True.
  • ast_reference_image: (optional; string) name of the reference image used by DOLPHOT when running the measured photometry. Required if ast_with_positions is True and no X,Y information is present in the photometry catalog.
  • astfile: pathname to the AST files (single camera ASTs).
  • noisefile : pathname to the output noise model file.
Grid Definition Parameters

The BEAST generates a grid of stellar models based on additional input parameters from datamodel.py.

  • distances: distance grid range parameters. [min, max, step], or [fixed number].
  • distance_unit: specify magnitude (units.mag) or a length unit.
  • logt: age grid range parameters (min, max, step).
  • z: metallicity grid points.
  • oiso: isochrone model grid. Current choices: Padova or MIST. Default: PARSEC+COLIBRI: oiso = isochrone.PadovaWeb(modeltype='parsec12s', filterPMS=True)
  • osl: stellar library definition. Options include Kurucz, Tlusty, BTSettl, Munari, Elodie and BaSel. You can also generate an object from the union of multiple individual libraries: osl = stellib.Tlusty() + stellib.Kurucz()
  • extLaw: extinction law definition.
  • avs: dust column in magnitudes (A_V) grid range parameters (min, max, step).
  • rvs: average dust grain size grid (R_V) range parameters (min, max, step).
  • fAs: mixture factor between “MW” and “SMCBar” extinction curves (f_A) grid range parameters (min, max, step).
  • *_prior_model: prior model definitions for dust parameters (A_V, R_V, f_A). Default: flat prior.
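
A hedged example of the grid definition block in datamodel.py (values illustrative; the import locations follow the Reference API section below):

import astropy.units as units
from beast.physicsmodel.stars import isochrone, stellib
from beast.physicsmodel.dust import extinction

distances = [24.47]                  # single fixed distance modulus
distance_unit = units.mag
logt = [6.0, 10.13, 1.0]             # log10(age): min, max, step
z = [0.03, 0.019, 0.008, 0.004]      # metallicity grid points
oiso = isochrone.PadovaWeb(modeltype='parsec12s', filterPMS=True)
osl = stellib.Tlusty() + stellib.Kurucz()
extLaw = extinction.Gordon16_RvFALaw()
avs = [0.0, 10.055, 1.0]             # A_V: min, max, step
rvs = [2.0, 6.0, 1.0]                # R_V: min, max, step
fAs = [0.0, 1.0, 0.25]               # f_A: min, max, step
av_prior_model = {'name': 'flat'}    # default flat priors
rv_prior_model = {'name': 'flat'}
fA_prior_model = {'name': 'flat'}
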
Optional Features
Add additional filters to grid

Define list of filternames as additional_filters and alter add_spectral_properties call:

add_spectral_properties_kwargs = dict(filternames=filters + additional_filters)
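
For example, to also compute model fluxes in the GALEX bands (a hedged illustration; the band names are taken from the filter library listed below):

additional_filters = ['GALEX_FUV', 'GALEX_NUV']
add_spectral_properties_kwargs = dict(filternames=filters + additional_filters)
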

Skip verify_params exit

Add noexit=True keyword to verify_input_format() call in run_beast.py:

verify_params.verify_input_format(datamodel, noexit=True)

Remove constant SFH prior

Add prior_kwargs to datamodel.py:

prior_kwargs = dict(constantSFR=False)

Add kwargs defining code block before add_stellar_priors() call in run_beast.py:

if hasattr(datamodel, 'prior_kwargs'):
  prior_kwargs = datamodel.prior_kwargs
else:
  prior_kwargs = {}
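
Then pass the kwargs through when the priors are added; a minimal sketch (the exact add_stellar_priors call site in run_beast.py may differ):

spec_grid = add_stellar_priors(datamodel.project, spec_grid,
                               **prior_kwargs)  # hypothetical call site
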
Enable Exponential Av Prior

Set av_prior_model in datamodel.py:

av_prior_model = {'name': 'exponential', 'a': 2.0, 'N': 4.0}

BEAST Filters

The filters are defined in beast/libs/filters.hd5. The file contains two groups:

  • content: fields are TABLENAME (string), OBSERVATORY (string), INSTRUMENT (string), NORM (float), CWAVE (float), PWAVE (float), COMMENT (string)
  • filters has a group for each filter, with the same names as TABLENAME. The groups contain a dataset with the fields WAVELENGTH (float array, in Angstroms) and THROUGHPUT (float array).
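
A short PyTables sketch for inspecting the library (the node layout is inferred from the description above; the dataset name inside each filter group is an assumption):

import tables

with tables.open_file('beast/libs/filters.hd5', 'r') as hdf:
    # summary table: one row per filter
    for row in hdf.root.content:
        print(row['TABLENAME'], row['CWAVE'])
    # each filter group contains a dataset with WAVELENGTH and THROUGHPUT
    grp = hdf.root.filters.HST_ACS_WFC_F475W
    data = grp._f_list_nodes()[0].read()  # assumes one dataset per group
    print(data['WAVELENGTH'][:3], data['THROUGHPUT'][:3])
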

The filters currently included in the BEAST filter library are as follows.

GROUND_JOHNSON_U
GROUND_JOHNSON_B
GROUND_JOHNSON_V
GROUND_COUSINS_R
GROUND_COUSINS_I
GROUND_BESSELL_J
GROUND_BESSELL_H
GROUND_BESSELL_K
HST_NIC2_F110W
HST_NIC2_F160W
HST_NIC2_F205W
HST_WFPC2_F218W
HST_ACS_HRC_F220W
HST_ACS_HRC_F250W
HST_WFPC2_F255W
HST_WFPC2_F300W
HST_ACS_HRC_F330W
HST_WFPC2_F336W
HST_ACS_HRC_F344N
HST_ACS_HRC_F435W
HST_ACS_WFC_F435W
HST_WFPC2_F439W
HST_WFPC2_F450W
HST_ACS_HRC_F475W
HST_ACS_WFC_F475W
HST_ACS_HRC_F502N
HST_ACS_WFC_F502N
HST_ACS_HRC_F550M
HST_ACS_WFC_F550M
HST_ACS_HRC_F555W
HST_ACS_WFC_F555W
HST_WFPC2_F555W
HST_ACS_HRC_F606W
HST_ACS_WFC_F606W
HST_WFPC2_F606W
HST_WFPC2_F622W
HST_ACS_HRC_F625W
HST_ACS_WFC_F625W
HST_ACS_HRC_F658N
HST_ACS_WFC_F658N
HST_ACS_HRC_F660N
HST_ACS_WFC_F660N
HST_WFPC2_F675W
HST_ACS_HRC_F775W
HST_ACS_WFC_F775W
HST_WFPC2_F791W
HST_ACS_HRC_F814W
HST_ACS_WFC_F814W
HST_WFPC2_F814W
HST_ACS_HRC_F850LP
HST_ACS_WFC_F850LP
HST_WFPC2_F850LP
HST_ACS_HRC_F892N
HST_ACS_WFC_F892N
CFHT_CFH12K_CFH7406
CFHT_CFH12K_CFH7504
CFHT_MEGAPRIME_CFH7605
CFHT_MEGAPRIME_CFH7701
CFHT_MEGAPRIME_CFH7803
CFHT_WIRCAM_CFH8002
CFHT_WIRCAM_CFH8101
CFHT_WIRCAM_CFH8102
CFHT_WIRCAM_CFH8103
CFHT_WIRCAM_CFH8104
CFHT_WIRCAM_CFH8201
CFHT_WIRCAM_CFH8202
CFHT_WIRCAM_CFH8203
CFHT_WIRCAM_CFH8204
CFHT_WIRCAM_CFH8301
CFHT_WIRCAM_CFH8302
CFHT_WIRCAM_CFH8303
CFHT_WIRCAM_CFH8304
CFHT_WIRCAM_CFH8305
CFHT_MEGAPRIME_CFH9301
CFHT_MEGAPRIME_CFH9401
CFHT_MEGAPRIME_CFH9601
CFHT_MEGAPRIME_CFH9701
CFHT_MEGAPRIME_CFH9801
HST_WFC3_F098M
HST_WFC3_F105W
HST_WFC3_F110W
HST_WFC3_F125W
HST_WFC3_F126N
HST_WFC3_F127M
HST_WFC3_F128N
HST_WFC3_F130N
HST_WFC3_F132N
HST_WFC3_F139M
HST_WFC3_F140W
HST_WFC3_F153M
HST_WFC3_F160W
HST_WFC3_F164N
HST_WFC3_F167N
HST_WFC3_F200LP
HST_WFC3_F218W
HST_WFC3_F225W
HST_WFC3_F275W
HST_WFC3_F280N
HST_WFC3_F300X
HST_WFC3_F336W
HST_WFC3_F343N
HST_WFC3_F350LP
HST_WFC3_F373N
HST_WFC3_F390M
HST_WFC3_F390W
HST_WFC3_F395N
HST_WFC3_F410M
HST_WFC3_F438W
HST_WFC3_F467M
HST_WFC3_F469N
HST_WFC3_F475W
HST_WFC3_F475X
HST_WFC3_F487N
HST_WFC3_F502N
HST_WFC3_F547M
HST_WFC3_F555W
HST_WFC3_F600LP
HST_WFC3_F606W
HST_WFC3_F621M
HST_WFC3_F625W
HST_WFC3_F631N
HST_WFC3_F645N
HST_WFC3_F656N
HST_WFC3_F657N
HST_WFC3_F658N
HST_WFC3_F665N
HST_WFC3_F673N
HST_WFC3_F680N
HST_WFC3_F689M
HST_WFC3_F763M
HST_WFC3_F775W
HST_WFC3_F814W
HST_WFC3_F845M
HST_WFC3_F850LP
HST_WFC3_F953N
HST_WFC3_FQ232N
HST_WFC3_FQ243N
HST_WFC3_FQ378N
HST_WFC3_FQ387N
HST_WFC3_FQ422M
HST_WFC3_FQ436N
HST_WFC3_FQ437N
HST_WFC3_FQ492N
HST_WFC3_FQ508N
HST_WFC3_FQ575N
HST_WFC3_FQ619N
HST_WFC3_FQ634N
HST_WFC3_FQ672N
HST_WFC3_FQ674N
HST_WFC3_FQ727N
HST_WFC3_FQ750N
HST_WFC3_FQ889N
HST_WFC3_FQ906N
HST_WFC3_FQ924N
HST_WFC3_FQ937N
HST_NIC3_F108N
HST_NIC3_F110W
HST_NIC3_F113N
HST_NIC3_F150W
HST_NIC3_F160W
HST_NIC3_F164N
HST_NIC3_F166N
HST_NIC3_F175W
HST_NIC3_F187N
HST_NIC3_F190N
HST_NIC3_F196N
HST_NIC3_F200N
HST_NIC3_F205M
HST_NIC3_F212N
HST_NIC3_F215N
HST_NIC3_F222M
HST_NIC3_F240M
CFHT_MEGAPRIME_CFH9702
HST_WFPC2_F170W
GALEX_FUV
GALEX_NUV
GROUND_2MASS_J
GROUND_2MASS_H
GROUND_2MASS_Ks
SPITZER_IRAC_36
SPITZER_IRAC_45
SPITZER_IRAC_58
SPITZER_IRAC_80
WISE_RSR_W1
WISE_RSR_W2
WISE_RSR_W3
WISE_RSR_W4
GROUND_SDSS_U
GROUND_SDSS_G
GROUND_SDSS_R
GROUND_SDSS_I
GROUND_SDSS_Z

Standard Workflow

The workflow is set up to run the fitting on many sources efficiently by splitting the full catalog into a number of smaller files. This allows distributing the fitting across cores. There are manual steps to allow for refitting, fixing issues, etc., without rerunning everything. This workflow has been tested on large (e.g., PHAT) and small (e.g., METAL) datasets.

Setup

Working location

Set up a working location, usually a subdirectory. For reference, a template is the ‘metal_production’ subdirectory in beast/examples.

In this location, at a minimum you will need the following files:

  • datamodel.py
  • run_beast_production.py: a “production” version of run_beast.py
    • Provides commandline options for sub region files
  • symbolic link to the beast directory in the beast repository
$ ln -s /location/beast/beast/ beast
Datamodel.py

Before running the BEAST, you will need to modify this file to specify the required parameters for generating models and fitting data. These parameters are described in the beast setup documentation.

Data

The data need to have source density information added as it is common for the observation model (scatter and bias) to be strongly dependent on source density due to crowding/confusion noise.

Adding source density to observations

Create a new version of the observations that includes a column with the source density. The new observation file includes only sources that have measurements in all bands (columns that match ‘X_RATE’). In theory, sources without measurements in all bands are the result of non-overlapping observations. The BEAST is based on fitting sources with the same selection function, in this case measurements in all bands.

A number of source density images are also created. These include images that map the source density of objects with zero fluxes in different bands (or any band).

Command to create the observed catalog with a source density column, using a pixel scale of 5 arcsec and the ‘obscat.fits’ catalog:

$ ./beast/tools/create_source_density_map.py --pixsize 5. obscat.fits
Split up observations by source density

The observed catalog should be split into separate files for each source density. In addition, each source density catalog is split into a set of sub files to have at most ‘n_per_file’ sources. The sources are sorted by the ‘sort_col’ flux before splitting to put sources with similar brightness together. This splitting into sub files sorted by flux allows the BEAST physics+observation model to be trimmed, removing models that are too bright or too faint to fit any of the sources in the file. In addition, this allows for running the BEAST fitting in parallel with each sub file on a different core.

Command to create the source density split files:

$ ./beast/tools/subdivide_obscat_by_source_density.py --n_per_file 6250 \
         --sort_col F475W_RATE obscat_with_sourceden.fits

Model

Physics model

Generate the full physics model grid. Needed for the fitting and generation of the artificial star test (AST) inputs. The ‘0 0’ arguments are dummy values.

$ ./run_beast_production.py -p 0 0
Observation model

The observation model is based on artificial star tests (ASTs). ASTs are artificial sources inserted into the observations and extracted with the same software that was used for the observed photometry catalog. This ensures that the observation model has the same selection function as the data.

Create the AST input list

To be added.

Compute the ASTs

Done separately with the same code that was used to extract the source photometry.

Split up the ASTs by source density

To be added.

Currently the workflow assumes a single AST file for all the source densities.

Create the observation models for each source density

To be added.

Create a single observation model

This assumes that the ASTs do not have a strong dependence on source density. This could be a good approximation if the source density does not change much over the observation area or is low everywhere. The ‘0 0’ arguments are dummy values.

$ ./run_beast_production.py -o 0 0

Trimming for speed

Trim the full model grid for each source density split file

The physics+observation model can be trimmed of sources that are so bright or so faint (compared to min/max flux in the observation file) that they will by definition produce effectively zero likelihood fits. Such trimming will speed up the fitting.

The source density split sub files are organized such that the range of fluxes is minimized in each sub file. This allows for trimming and faster fitting.

The trimming can take significant time to run. In addition, reading in the full physics+observation model can be slow, and such reading can be minimized by producing multiple trimmed models with a single read. A specific tool is provided to set up batch files for this trimming and to do the actual trimming.

This code sets up batch files for submission to the ‘at’ queue on linux (or similar) systems. The projectname (e.g., ‘PHAT’) provides a portion of the batch file names. The datafile and astfile are the observed photometry file (not the sub files) and the file with the ASTs in it. A subdirectory in the project directory is created with a joblist file for submission to the batch queue and smaller files used by the trimming code.

The joblist file can be split into smaller files if submission to multiple cores is desired. Use the ‘split’ commandline tool.

$ ./beast/tools/setup_batch_beast_trim.py projectname datafile astfile \
  --num_subtrim 5

Once the batch files are created, then the joblist can be submitted to the queue. The beast/tools/trim_many_via_obsdata.py code is called and trimmed versions of the physics and observation models are created in the project directory.

$ at -f project/trim_batch_jobs/XX_joblist now

Fitting

The fitting is done for each sub file separately. Code in the tools directory can be used to create the needed set of batch files for submission to a queue. In addition, this code will check and see if the fitting has already been done or was interrupted for the sub files. Only sub files that have not been fit or where the fitting was interrupted will be added to the batch files. The number of sub files to be run on each core is a command line argument (the runs are serial on each core).

$ ./beast/tools/setup_batch_beast_fit.py projectname datafile \
  --num_percore 2

The jobs can be submitted to the batch queue via:

$ at -f projectname/fit_batch_jobs/beast_batch_fit_X.joblist now

Post-processing

Create the merged stats file

The stats (catalog of fit parameters) files can then be merged into a single file for the region. This only merges the stats output files, but not the pdf1d or lnp files (see the next section).

$ beast/tools/merge_stats_file.py filebase

where filebase is the first portion of the output stats filenames (e.g., filebase_sdx-x_subx_stats.fits).

Reorganize the results into spatial region files

The output files from the BEAST with this workflow are organized by source density and brightness. This is not ideal for finding sources of interest or performing ensemble processing. A more useful organization is by spatial region. The large amount of BEAST output information makes it best to have individual files for each spatial region. Code to do this spatial reordering is provided in two parts. The 1st spatially reorders the results for each source density/brightness BEAST run into files for each spatial region. The 2nd condenses the multiple individual files for each spatial region into the minimal set (stats, pdf1d, and lnp).

Divide each source density/brightness file into files of spatial regions with 10”x10” pixels.

$ beast/tools/reorder_beast_results_spatial.py
   --stats_filename filebase_stats.fits
   --region_filebase filebase_
   --output_filebase spatial/filebase
   --reg_size 10.0

Condense the multiple files for each spatial region into the minimal set. Each spatial region will have files containing the stats, pdf1d, and lnp results for the stars in that region.

$ beast/tools/condense_beast_results_spatial.py
   --filedir spatial

Guide to working with subgrids

Concept

The idea of this approach is that we split up the model grid into non-overlapping subgrids, and are then able to run each step of a full BEAST run for each grid individually. The calculations for the individual subgrids can then be run in parallel, for a speed boost without memory overhead, or sequentially, to use less memory. Of course, a combination of the two is also possible, splitting a grid into many subgrids, and running a couple of subgrid calculations at the same time.

At the end of the calculation, we will possess partial PDFs and statistics of each subgrid, which can be merged into a single 1dpdf file and a single stats file. By taking into account the weights of each subgrid correctly, the resulting file should be equivalent to the result of a BEAST run on the full grid.

Workflow

To make use of this functionality, no extra data or changes to the datamodel file are needed, but you will need a custom run script that makes use of a set of newly implemented functions, and new options for existing functions. We will now give a summary of what such a run script has to pay attention to, for each of the steps in a BEAST run. (An example script might be provided later).

Most of the new functions can be found in beast.tools.subgridding_tools.

Please refer to the regular example code for the single grid implementation of these steps (beast/examples/phat_small).

Physics model

First the spectral (stellar) grid is created, using make_iso_table, make_spectral_grid and add_stellar_priors. Then, the extinction parameters are applied to this grid, and an extinguished SED grid is obtained, using make_extinguished_sed_grid.

The splitting of the grids has to happen somewhere in this function. Technically, split_grid can be either after obtaining the spectral grid with prior weights, or after obtaining the complete SED grid. The former makes more sense however, because then make_extinguished_sed_grid can be run for individual spectral subgrids, which avoids the memory impact of creating the complete SED grid. This choice also allows the user to run the construction of the grid in parallel.

Tip

The split_grid function returns the file names of the newly created subgrids. It is very useful to save these to a text file, so that they can be used in the other steps.
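
A minimal sketch of this step (only the return value of split_grid is documented above; the argument list shown is an assumption):

from beast.tools import subgridding_tools

# split the prior-weighted spectral grid into 4 subgrids
sub_fnames = subgridding_tools.split_grid(
    'project/project_spec_w_priors.grid.hd5', 4)

# save the subgrid file names for the later steps
with open('subgrid_fnames.txt', 'w') as fh:
    fh.write('\n'.join(sub_fnames) + '\n')
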

AST input list

This is the only step where the complete SED grid is needed. The subgrids can be merged into a single file using merge_grids. Just provide an output name, and a list of file names pointing to all the subgrids. The rest of the AST input list generation needs no changes once the full grid file is available.
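
A hedged sketch of the merge step:

# read back the subgrid names saved earlier and merge into one SED grid
with open('subgrid_fnames.txt') as fh:
    sub_fnames = fh.read().split()
subgridding_tools.merge_grids('project/project_seds.grid.hd5', sub_fnames)
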

Observations/Noise models

Here we will create separate noise model files, one for each subgrid. Nothing special happens here: just call make_toothpick_noise_model for each subgrid using the same AST results file, providing adequate output names for the resulting noise models. It is safe to run this in parallel.
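
A sketch of the per-subgrid loop (the grid loading and argument order follow the single-grid example scripts and should be treated as assumptions):

from beast.physicsmodel.grid import FileSEDGrid
from beast.observationmodel.noisemodel.generic_noisemodel import \
    make_toothpick_noise_model

for i, sub_fname in enumerate(sub_fnames):
    sedgrid = FileSEDGrid(sub_fname)  # load one SED subgrid
    make_toothpick_noise_model('project/noisemodel_sub%i.hd5' % i,
                               datamodel.astfile, sedgrid)
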

Trimming of the physics and noise models

The same as the above applies here. Just make sure that the subgrid/subnoisemodel files are paired correctly.

Fitting & merging the results
Compatibility

To make sure that the results of the fitting routine for the individual subgrids are compatible, several subtleties come into play here. Firstly, the 1dpdfs need to be compatible: their number of bins and the values for the bin centers need to be exactly the same. To ensure this, we need to fix three values for each quantity:

  1. the minimum value
  2. the maximum value
  3. the number of unique values

This is why a new optional argument is provided in the main fitting function, summary_table_memory, which allows the user to override the min, max and number of unique values for all of the quantities.

The option is called grid_info_dict, and needs to be a nested dictionary of a certain format. subgridding_tools contains a function called reduce_grid_info which will generate this dictionary for you. Just provide the filenames to all the (trimmed) subgrids and their (trimmed) noisemodels.

This dictionary has an entry like this for each quantity (Rv in this example):

grid_info_dict['Rv'] = {'min': 0, 'max': 10, 'num_unique': 20}
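
A hedged sketch of generating the full dictionary:

from beast.tools import subgridding_tools

# lists of trimmed subgrid and trimmed noise model file names
grid_info_dict = subgridding_tools.reduce_grid_info(trimmed_sub_fnames,
                                                    trimmed_noise_fnames)
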
Fit

When the info described above has been collected, you can start calling summary_table_memory for each of the subgrids, each time providing a trimmed subgrid/trimmed subnoisemodel pair, and adequate filenames for the output. The rest of the arguments can be identical for the fit on each subgrid. However, be sure to set do_not_normalize to True; see the note below.

Merge

When all the subgrid fits have been successfully completed, the merge step can be started. To do this, just gather all the filenames for the pdf1d and stats files, and pass them to merge_pdf1d_stats.
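
A hedged sketch (the output file naming is an assumption):

# gather the per-subgrid output file names and merge them
pdf1d_fnames = ['project/project_sub%i_pdf1d.fits' % i for i in range(4)]
stats_fnames = ['project/project_sub%i_stats.fits' % i for i in range(4)]
subgridding_tools.merge_pdf1d_stats(pdf1d_fnames, stats_fnames)
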

Note

The main fitting function needed to be modified so that the Pmax values that it stores (which are the maximum log likelihood, needed to calculate the Best values) are compatible between subgrids. This meant getting rid of some forms of normalization (specifically, the prior weight normalization needed to be disabled). Setting do_not_normalize should actually have no effect on the result, so we might remove this option altogether and make it the default behavior.

Note

To calculate the expectation values, another modification to the same function has been done. It now stores a measure for the total weight of the subgrid, total_log_norm. This value is equal to log(sum(exp(lnp))), and is calculated by taking the log of the normalization factor used in the code (because sum(exp(lnp)) / normalization = 1). By comparing this value between subgrids, we are able to calculate a weighted average for each expectation value, which should be close to the one that would be obtained by fitting over the whole grid at once.

Artificial Star Input Lists

The BEAST requires artificial star tests (ASTs) to produce a noise model. The AST input list software generates lists of magnitudes and (if desired) positions for ASTs that can be injected into the observed imaging and then re-photometered to assess the photometric bias, uncertainty, and completeness as a function of the model grid. The output from this software must be run through the same photometry routine (typically DOLPHOT) as used for the photometry measurements themselves.

Once the input lists have been run through the user’s photometry program and each input magnitude has an associated output magnitude (or non-detection value), those results can be used as input ASTs for building the BEAST noise model.

Generating BEAST-friendly lists of artificial star tests

  1. Run “run_beast.py -p” to produce the physics model grid file “project_name_seds.grid.hd5”.
  2. If you wish to repeat ASTs over multiple bins of stellar density, run “tools/create_source_density_map.py” on your observed star catalog. If you wish to repeat ASTs over multiple bins of background brightness, run “tools/create_background_density_map.py” on your observed star catalog and reference image.
  3. Run “run_beast.py -a”. This will use the datamodel to find everything it needs to make ASTs (filters, limits, SED grid, etc.). It will produce a list of fake stars in all bands, using the datamodel photometry catalog to trim the inputs at the proper magnitudes. Currently, this script generates fake stars uniformly sampling log(age) space and randomly drawing from the metallicities in the model grid.

Functions

mag_limits: Determines the magnitude limits for the models in each filter in the photometry file.

pick_models: Samples the model grid and outputs models that fit within the mag limits.

pick_positions: Uses the observed stellar catalog to distribute the artificial stars in a similar spatial pattern to the observed catalog.

pick_models_per_background: Uses a background map generated by the user to put a set of model SEDs at locations with similar background intensity. This way, it is ensured that different regimes of background emission are evenly sampled. The set of models generated by pick_models is reused for each background intensity bin. This function is not yet used in the standard examples.

Parameters

ast_models_selected_per_age : integer Number of models to pick per age (Default = 70).

ast_bands_above_maglimit : integer Number of filters that must be above the magnitude limit for an AST to be included in the list (Default = 3)

ast_realization_per_model : integer Number of Realizations of each included AST model to be put into the list. (Default = 20)

ast_maglimit : float (single value or array with one value per filter)

  1. option 1: [number] to change the number of mags fainter than the 90th percentile faintest star in the photometry catalog to be used for the mag cut. (Default = 1)
  2. option 2: [space-separated list of numbers] to set custom faint end limits (one value for each band).

ast_with_positions : (bool,optional) If True, the ast list is produced with X,Y positions. If False, the ast list is produced with only magnitudes.

ast_multiple_source_density : string (optional) If defined, the named file will be used to determine the spatial distribution of the ASTs, and all ASTs will be repeated within each source density region.

ast_multiple_background_brightness : string (optional) If defined, the named file will be used to determine the spatial distribution of the ASTs, and all ASTs will be repeated within each background brightness region.

ast_pixel_distribution : float (optional) (Used if ast_with_positions is True), minimum pixel separation between AST position and catalog star used to determine the AST spatial distribution.

ast_reference_image : string (optional, but required if ast_with_positions is True and no X and Y information is present in the photometry catalog) Name of the reference image used by DOLPHOT when running the measured photometry.

Returns

Table of fake star magnitudes for all bands in the datamodel photometry file. The file will be in ascii format in the project directory, and it will have the name: [project]/[project]_inputAST.txt

The table will have <number of ages> * ast_models_selected_per_age * ast_realization_per_model lines. If ast_with_positions is True then each line will start with 0 1 X Y, which are the first four columns required by DOLPHOT to define the input star position.

In case the new method is used, which samples by background density, this number will be multiplied by the number of background density bins chosen.

File Formats

physicsmodel grid file

Three datasets are present:

  • grid: parameters of the seds (see below)
    • table N parameters x M models
  • lamb: wavelengths of bands
    • vector X bands
  • seds: fluxes in the requested bands [ergs/cm^2/s/A]
    • table X bands x M models
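
A short PyTables sketch for reading these datasets (the file name is illustrative):

import tables

with tables.open_file('project/project_seds.grid.hd5', 'r') as hdf:
    lamb = hdf.root.lamb.read()  # X band wavelengths
    seds = hdf.root.seds.read()  # M models x X bands of fluxes
    grid = hdf.root.grid.read()  # parameter table, one row per model
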
Grid Parameters
stellar parameters
Direct Grid Parameters:
  • M_ini: initial mass [M_sun]
  • logA: stellar age in [log10(years)]
  • Z: metallicity [what units/convention?]
Ancillary Parameters:
  • logL: integrated luminosity of stars [log(???units???)]
  • logT: stellar atmosphere T_eff [log(K)]
  • logg: stellar atmosphere log(g) [log(cm^2/s)???]
  • radius: stellar radius [R_sun????]
  • M_act: actual mass at current age [M_sun]
  • mbolmag: M(bol) (??more info??) [mag]
  • osl: ????
dust extinction parameters
Direct Grid Parameters:
  • A(V): extinction in V band [mag]
  • R(V): total-to-selective extinction = A(V)/E(B-V)
  • f_A: mixture fraction between “MW” and “SMC” extinction curves

Ancillary Parameters:

weights
  • weight: combined grid and prior weights used directly in summation marginalization
  • grid_weight: weighting in summation marginalization for flat priors
  • prior_weight: weighting in the summation marginalization for input priors
model fluxes

The model fluxes are stored in log10 form with and without dust extinction

Examples:
  • logHST_ACS_WFC_F475W_wd: flux in ACS/F475W band [log(ergs/cm^2/s/A)]
  • logHST_ACS_WFC_F475W_nd: intrinsic flux in ACS/F475W band [log(ergs/cm^2/s/A)]
traceback indices

These parameters are useful in mapping the SED model back to the full grid or spectral grid. For example, the SED model may be trimmed of points that will never fit the data due to survey sensitivity limits.

  • fullgrid_idx: index of model in full SED grid
  • specgrid_indx: index of model in the spectral grid
misc
  • keep: True if the model is inside the stellar atmosphere grid defined in T_eff and log(g) space
  • stage: ???

Installation

Installation

Requirements

Running the BEAST requires:

  • Python >=3.4 (recommended) or 2.7 (still possible)
  • Astropy >=1.3

In turn, Astropy depends on other packages for optional features. From these you will need:

  • hdf5 to read/write Table objects from/to HDF5 files.

You will also need:

  • PyTables to manage large amounts of data.

One easy way to obtain the above is through the AstroConda Python stack:

  • First install Miniconda which contains the conda package manager. Once Miniconda is installed, you can use the conda command to install any other packages and create environments, etc.

  • Setup the AstroConda Channel:

    $ conda config --add channels http://ssb.stsci.edu/astroconda
    
  • Install AstroConda with Python 3 (recommended):

    $ conda create -n astroconda stsci
    
  • Install AstroConda with Python 2.7 (still possible):

    $ conda create -n iraf27 python=2.7 stsci pyraf iraf
    
  • Make sure that the PyTables and hdf5 packages are installed:

    $ conda install -n astroconda (or iraf27) pytables
    $ conda install -n astroconda (or iraf27) hdf5

Installing the BEAST

In addition to installing the code, library files also need to be installed. See BEAST Library Files.

Using pip

beast can also be installed using pip:

# from PyPI
$ pip install beast

# if you already have an older version installed
$ pip install --upgrade beast

# from the master trunk on the repository, considered developmental code
$ pip install git+https://github.com/BEAST-Fitting/beast.git
From source

beast can be installed from the source code in the normal python fashion after downloading it from the git repo:

$ python setup.py install
For developers

This option is suitable if you plan to make code contributions to the BEAST. See the BEAST Development for details.

BEAST Library Files

For the BEAST to work properly, you need to place a set of files in a directory. These files contain information related to filters, stellar atmospheres, and, in the future, stellar evolution models.

Location

There are 3 possible locations for these files (in the order the code will search for them):

  1. in a directory designated by the BEAST_LIBS environment variable
  2. in the ‘.beast’ directory in the home directory of the current user
  3. in the source code in ‘beast/beast/libs’

Whichever of the options is used, the directory needs to be created manually.

Script download

After installing the beast, run the following script and the library files will be downloaded into the location specified in Location:

$ python -m beast.tools.get_libfiles

Running Example

You can find examples of BEAST runs in the <https://github.com/BEAST-Fitting/beast-examples> repository.

Inside each example, there is a run_beast*.py script.

phat_small example

This example is based on a very small amount of old PHAT data.

If the beast has not been installed (only downloaded from github), then in the ‘phat_small’ directory place a soft link named ‘beast’ to where the beast code is located. Specifically:

$ cd beast-examples/phat_small

$ ln -s beast_code_loc/beast/beast beast

If you installed Python through AstroConda, first activate the correct AstroConda environment:

$ source activate astroconda

Verify that the current default Python is version 3:

$ python --version

Now try a sample BEAST run:

$ ./run_beast.py

or:

$ python run_beast.py

Optionally, you can run BEAST with one, or a combination, of these arguments

-h, --help             show this help message and exit
-p, --physicsmodel     Generate the model grid
-o, --observationmodel Calculate the noise model
-t, --trim             Trim the model and noise grids
-f, --fit              Fit the observed data
-r, --resume           Resume a run

For example: $ ./run_beast.py -h or $ ./run_beast.py -potf

If the BEAST is running correctly the second command should run without errors and should have written the output files into ‘beast_example_phat/’. The result can be plotted using:

$ python beast/plotting/plot_indiv_fit.py beast_example_phat/beast_example_phat

The argument for this script is the prefix of the output files. The output should look like this:

_images/beast_example_phat_ifit_starnum_0.png

Developer Documentation

BEAST Development

You are encouraged to help maintain and improve the BEAST. Before doing so, please familiarize yourself with basic version control and Git workflow concepts using one of the many available online guides.

Here is the recommended work-flow for contributing to the BEAST project. Details follow.

  • Create your own ‘fork’ of the official BEAST release
  • Create purpose-specific ‘branches’ off your ‘fork’
  • Make changes or additions within the branches
  • Contribute your modified codes to the BEAST project or share them with your collaborators via ‘pull requests’
  • Keep your fork updated to benefit from continued development of the official version and to minimize version conflicts
  • Resolve version conflicts as much as possible before sending pull requests

BEAST on Slack

There is a BEAST space on slack. Email kgordon@stsci.edu for an invite.

Fork the BEAST distro

  • The main BEAST repository lives at <https://github.com/BEAST-Fitting/beast.git>. The master branch of this repository is the version that is distributed.

  • Log in to your github account, and on the top right corner of the BEAST repository page click on the ‘Fork’ button. This will create a copy of the repository in your github account.

  • Clone a copy of your fork to your local computer. If you have a copy of the official BEAST distro, you may need to rename it; cloning will automatically name the folder ‘beast’.

  • Example of cloning your fork into ‘beast-YourName’ while keeping the official distribution in ‘beast’:

    $ mv beast beast-official
    $ git clone https://github.com/YourName/beast.git
    $ mv beast beast-YourName
    $ mv beast-official beast
    
  • Set the value of the fork’s ‘upstream’ to the official distribution so you can incorporate changes made by others to your development fork. In the clone of your fork, run the following:

    $ git remote add upstream https://github.com/BEAST-Fitting/beast.git
    

Adding Branches

  • Make sure you are in the directory for your fork of the beast. You will be on branch ‘master’ by default.

  • Create and switch to a branch (here named ‘beast-dev1’; generally it’s good practice to give branches names related to their purpose)

    $ git checkout -b beast-dev1
    
  • Instead, if you want to create first a branch and then switch to it:

    $ git branch beast-dev1
    $ git checkout beast-dev1
    
  • To see a list of all branches of the fork, with ‘*’ indicating which branch you are currently working on:

    $ git branch
    
  • To ‘upload’ this branch to your fork:

    $ git push origin beast-dev1
    
  • To revert back to your fork’s master branch:

    $ git checkout master
    

Making Changes

It is recommended that branches have a single purpose; for example, if you are working on adding a test suite, on improving the fitting algorithm, and on speeding up some task, those should be in separate branches, e.g., ‘add-test-suite’, ‘improve-fitting-algorithm’ and ‘beast-dev1’.

  • Anywhere below ‘beast-YourName’, switch to the branch you wish to work off of:

    $ git checkout beast-dev1
    
  • Make changes to the existing files as you wish and/or create new files.

  • To see what changes have been made at any time:

    $ git status
    
  • To stage any new or edited file (e.g., ‘newfile.py’) in preparation for committing:

    $ git add newfile.py
    
  • To add all edited files (not recommended unless you are sure of all your changes):

    $ git add -A
    
  • To ‘commit’ all changes after adding desired files:

    $ git commit -m 'brief comments describing changes'
    
  • Commit messages should be short but descriptive.

  • To see the status of or commit changes of a single file:

    $ git status PathToFile/filename
    $ git commit PathToFile/filename
    
  • To undo all changes made to a file since last commit:

    $ git checkout PathToFile/filename
    
  • To sync changes made to the branch locally with your GitHub repo:

    $ git push origin beast-dev1
    

Test Changes

It is a good idea to test that your changes have not caused problems. The following commands may be run in the base beast directory to do this.

Run existing tests, including a regression test against a full BEAST model run. Once the command below has finished, the coverage of the tests can be viewed in a web browser by pointing to files in the htmlcov subdirectory.

$ python setup.py test --remote-data --coverage

Make sure the documentation can be created. The resulting files can be viewed in a web browser by pointing to files in the docs/docs/_build/html subdirectory.

$ python setup.py build_docs

Collaborating and Contributing

Once you have changes that you’d like to contribute back to the project or share with collaborators, you can open a pull request. It is a good idea to check with the project or your collaborators to which branch of their BEAST repo you should send the pull request.

Note: Generally in git-lingo, ‘Pull’ is to ‘download’ what ‘Push’ is to ‘upload’. When you are making a ‘pull request’, you are requesting that your contributions are ‘pulled’ from the other side. So you are not pushing it, but the other party is pulling it :-)

  • Use ‘git add’, ‘git commit’ and ‘git push’ as summarized earlier to sync your local edits with your github repo
  • From the github page of your fork of BEAST, e.g., https://github.com/YourName/beast/branches click on ‘Branches’. Next to the name of the branch on which you committed/pushed the changes, click on ‘New pull request’. Verify that the names of the target repo (‘base fork’) and branch (‘master’) to which you want to send the pull request, and those of your repo (‘head fork’) and your branch (‘compare’) from which you are sending it, match what you intend to do.
  • In the comments section briefly describe the changes/additions you made and submit the pull request.
  • It is at the other party’s (project, collaborator etc.) discretion to accept the changes and merge them with their repo.

Staying up-to-date

The BEAST project’s official repository will be updated from time to time to accommodate bug fixes, improvements and new features. You may keep your fork’s master branch up to date with the following steps.

It is highly recommended that you do this if you intend to contribute changes back to the project. Creating new branches off of an up-to-date fork-master minimizes the chances of conflicting contributions, duplicative efforts and other complications.

  • Switch to your fork’s master branch:

    $ git checkout master
    
  • Fetch the project’s up-to-date distribution:

    $ git fetch upstream
    
  • Merge the project-master (upstream) with your fork’s master (master):

    $ git merge upstream/master
    
  • Sync this change with your GitHub repo:

    $ git push origin master
    
  • Any branch created off of the fork’s master now will start from the correct BEAST distro and not contain any changes made to any prior branch, unless those changes have been incorporated into the official distro via an accepted pull request and merge

Managing Conflicts

Let’s consider a situation where a fork’s master has been updated. A local branch (e.g., beast-dev1) was created before the update and it has changes that hadn’t been contributed back to the project. As a result, there may be conflicting versions of some files. The following steps can resolve this.

  • Merge your fork’s master with upstream/master, and push the master

    $ git checkout master
    $ git fetch upstream
    $ git merge upstream/master
    $ git push origin master
    
  • Create a new branch from the updated fork-master, and push the new branch

    $ git checkout -b beast-dev2
    $ git push origin beast-dev2
    
  • Switch to the branch where you made changes, make a backup, and push it

    $ git checkout beast-dev1
    $ git branch beast-dev1-backup beast-dev1
    $ git push origin beast-dev1-backup
    
  • Check the differences between the two branches and merge the two branches. (Edit files on the newer branch to resolve differences manually if needed.)

    $ git diff beast-dev1 beast-dev2
    $ git checkout beast-dev2
    $ git merge beast-dev1
    
  • Finally, push the updated new branch into your gitHub repo (Note: an error free push confirms that all conflicts have been resolved both locally and on the gitHub repo.)

    $ git push origin beast-dev2
    
  • If later you wish to restore the backup:

    $ git reset --hard beast-dev1-backup
    
  • Once all conflicts have been resolved and the merge goes through, you can delete the backup branch:

    $ git branch -D beast-dev1-backup
    

Managing Conflicts via Re-basing

In some unusual situations, conflicts may seem unresolvable or version conflicts between branches/master/upstream may get messy. One last-ditch solution can be re-basing, but this is not recommended and is certainly not the preferred way to resolve conflicts. Here are the general steps to do this.

  • Merge your fork’s master with upstream/master, and push the master

  • Switch to and backup the branch with conflicts, and push the backup

  • Re-base the branch on upstream/master, and push it

  • Example:

    • Do the preparatory steps

      $ git checkout master
      $ git fetch upstream
      $ git merge upstream/master
      $ git push origin master
      $ git checkout beast-dev1
      $ git branch beast-dev1-backup beast-dev1
      $ git push origin beast-dev1-backup
      
    • Now re-base the branch:

      $ git rebase upstream/master
      
    • Once all conflicts have been resolved and the re-base goes through without any error message, push the changes to your gitHub repo:

      $ git push origin beast-dev1
      
    • If something goes wrong during re-base, you can start over:

      $ git rebase --abort
      
    • If you wish to restore the backup:

      $ git reset --hard beast-dev1-backup
      

Visualizing Repository Commits

The commits to the beast repository can be visualized using gource. This creates a movie showing the time evolution of the code and who made the changes.

Version created 22 Jan 2018: <http://stsci.edu/~kgordon/beast/beast_repo.mp4>

Command to create it:

$ gource -s .06 -1280x720 --auto-skip-seconds .1 --multi-sampling  --stop-at-end --key --highlight-users --hide mouse,progress --file-idle-time 0 --max-files 0  --background-colour 000000 --font-size 22 --title "This is beast" --output-ppm-stream - --output-framerate 30 | avconv -y -r 30 -f image2pipe -vcodec ppm -i - -b 65536K beast_repo.mp4

Reporting Issues

If you have found a bug in beast, please report it by creating a new issue on the beast GitHub issue tracker.

Please include an example that demonstrates the issue sufficiently so that the developers can reproduce and fix the problem. You may also be asked to provide information about your operating system and a full Python stack trace. The developers will walk you through obtaining a stack trace if it is necessary.

Contributing

Like the Astropy project, beast is made both by and for its users. We accept contributions at all levels, spanning the gamut from fixing a typo in the documentation to developing a major new feature. We welcome contributors who will abide by the Python Software Foundation Code of Conduct.

beast follows the same workflow and coding guidelines as Astropy; the Astropy contributing guides will help you get started with contributing fixes, code, or documentation (no git or GitHub experience necessary).

For the complete list of contributors please see the beast contributors page on Github.

Reference API

Physics Model

Stars

beast.physicsmodel.stars.stellib Module

Stellib class

The intent is to implement a generic module to manage stellar libraries from various sources.

The interpolation is implemented from the Pegase.2 Fortran algorithm, converted to Python (this may not be pythonic, though).

Classes
Stellib(*args, **kargs) Basic stellar library class
CompositeStellib(osllist, *args, **kwargs) Generates an object from the union of multiple individual libraries
Kurucz([filename]) The stellar atmosphere models by Castelli and Kurucz 2004 or ATLAS9
Tlusty([filename]) Tlusty O and B stellar atmospheres
BTSettl([medres]) BT-Settl Library
Munari(*args, **kwargs) ATLAS9 stellar atmospheres providing higher res than Kurucz medium resolution (1 Ang/pix) in optical (2500-10500 Ang)
Elodie(*args, **kwargs) Elodie 3.1 stellar library derived class
BaSeL(*args, **kwargs) BaSeL 2.2 (This library is used in Pegase.2)
Class Inheritance Diagram

Inheritance diagram of beast.physicsmodel.stars.stellib.Stellib, beast.physicsmodel.stars.stellib.CompositeStellib, beast.physicsmodel.stars.stellib.Kurucz, beast.physicsmodel.stars.stellib.Tlusty, beast.physicsmodel.stars.stellib.BTSettl, beast.physicsmodel.stars.stellib.Munari, beast.physicsmodel.stars.stellib.Elodie, beast.physicsmodel.stars.stellib.BaSeL

beast.physicsmodel.stars.isochrone Module

Isochrone class

Intent to implement a generic module to manage isochrone mining from various sources.

Classes
Isochrone([name])
padova2010()
pegase()
ezIsoch(source[, interp]) Trying to make something that is easy to manipulate. This class is basically a proxy to a table (whatever format works best) and tries to keep things coherent.
PadovaWeb([Zref, modeltype, filterPMS, …])
MISTWeb([Zref, rotation])
Class Inheritance Diagram

Inheritance diagram of beast.physicsmodel.stars.isochrone.Isochrone, beast.physicsmodel.stars.isochrone.padova2010, beast.physicsmodel.stars.isochrone.pegase, beast.physicsmodel.stars.isochrone.ezIsoch, beast.physicsmodel.stars.isochrone.PadovaWeb, beast.physicsmodel.stars.isochrone.MISTWeb

Dust

beast.physicsmodel.dust.extinction Module

Extinction Curves

Classes
ExtinctionLaw() Extinction Law Template Class
Cardelli89() Cardelli89 Milky Way R(V) dependent Extinction Law
Fitzpatrick99() Fitzpatrick99 Milky Way R(V) dependent Extinction Law
Gordon03_SMCBar() Gordon03 SMCBar extinction curve
Gordon16_RvFALaw() Gordon16 RvFA extinction law
Class Inheritance Diagram

Inheritance diagram of beast.physicsmodel.dust.extinction.ExtinctionLaw, beast.physicsmodel.dust.extinction.Cardelli89, beast.physicsmodel.dust.extinction.Fitzpatrick99, beast.physicsmodel.dust.extinction.Gordon03_SMCBar, beast.physicsmodel.dust.extinction.Gordon16_RvFALaw

beast.physicsmodel.dust.attenuation Module

Attenuation Curves

Classes
Calzetti00() Calzetti et al.
Class Inheritance Diagram

Inheritance diagram of beast.physicsmodel.dust.attenuation.Calzetti00

Grid

beast.physicsmodel.grid Module

Manage various SED/spectral grids in a generic way

Major changes from the previous version of core.grid:
  • removed the general write method
  • added backend functions and direct access to properties
  • MemoryGrid migrates to a function that creates a ModelGrid with a MemoryBackend
  • FileSEDGrid and FileSpectralGrid migrated to functions as well

Currently no major variation is expected as long as memory or cache backend types are used

More optimization can be done, especially in SpectralGrid.getSEDs

TODO: Check where any beast code uses eztable.Table’s specific methods and
implement equivalent in the backends for transparency in the case of HDFBackend
  • aliases
  • eval expression
  • selectWhere
  • readCoordinates (although should work already)
Functions
MemoryGrid(lamb[, seds, grid, header, aliases]) Replace the MemoryGrid class for backwards compatibility
FileSEDGrid(fname[, header, aliases, backend]) Replace the FileSEDGrid class for backwards compatibility
Classes
ModelGrid(*args, **kwargs) Generic class for a minimum update of future codes
SpectralGrid(*args, **kwargs) Generate a grid that contains spectra.
StellibGrid(osl, filters[, header, aliases]) Generate a grid from a spectral library
Class Inheritance Diagram

Inheritance diagram of beast.physicsmodel.grid.ModelGrid, beast.physicsmodel.grid.SpectralGrid, beast.physicsmodel.grid.StellibGrid

beast.physicsmodel.creategrid Module

Create the extinguished grid in a more segmented way, to deal with large grids within available memory

All functions are now transformed into generators. As a result, any function allows computation of a grid in an arbitrary number of chunks. This offers the possibility to generate grids that cannot fit in memory.

Note

  • dependencies have also been updated accordingly.
  • likelihood computations need to be updated to allow computations even if the full grid does not fit in memory
Functions
gen_spectral_grid_from_stellib_given_points(…) Generator that reinterpolates a given stellar spectral library on to the given points
gen_spectral_grid_from_stellib(*args, **kwargs) Reinterpolate a given stellar spectral library on to an Isochrone grid
make_extinguished_grid(*args, **kwargs) Extinguish spectra and extract an SEDGrid through given series of filters (all wavelengths in stellar SEDs and filter response functions are assumed to be in Angstroms)
add_spectral_properties(specgrid[, …]) Addon spectral calculations to spectral grids to extract in the fitting routines
calc_absflux_cov_matrices(specgrid, sedgrid, …) Calculate the absflux covariance matrices for each model Must be done on the full spectrum of each model to account for the changing combined spectral response due to the model SED and the filter response curve.

Observation Model

Basics

beast.observationmodel.observations Module

Defines a generic interface to observation catalogs. This enables handling non-detections (upper limits one day?) and flux and magnitude conversions, to avoid painful preparation of the dataset

Data model v2 with limited quantity units handling

Classes
Observations(inputFile[, desc]) A generic class that interfaces observation catalog in a standardized way
FakeObs(inputFile[, desc]) Generate a data interface object
PhotCharact(fname, filters)
Class Inheritance Diagram

Inheritance diagram of beast.observationmodel.observations.Observations, beast.observationmodel.observations.FakeObs, beast.observationmodel.observations.PhotCharact

beast.observationmodel.phot Module
Photometric package

Defines a Filter class and associated functions to extract photometry.

This also include functions to keep libraries up to date

Note

integrations are done using trapz(). Why not Simpson’s? Simpson’s principle is to take a sequence of 3 points to make a quadratic interpolation. When filters have sharp edges, the error due to this “interpolation” is extremely large in comparison to the uncertainties induced by trapezoidal integration.
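
A small numerical illustration of this point (not from the BEAST code; requires numpy and scipy):

import numpy as np
from scipy.integrate import simpson

# a coarsely sampled top-hat "filter" with sharp edges
x = np.linspace(0.0, 1.0, 11)
y = ((x >= 0.3) & (x <= 0.7)).astype(float)

print(np.trapz(y, x))    # trapezoidal estimate: 0.5
print(simpson(y, x=x))   # quadratic interpolation overshoots at the edges
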

Functions
load_all_filters([interp, lamb, filterLib]) load all filters from the library
load_filters(names[, interp, lamb, filterLib]) load a limited set of filters
load_Integrationfilters(flist[, interp, lamb]) load a limited set of filters
extractPhotometry(lamb, spec, flist[, absFlux]) Extract seds from one single spectrum
extractSEDs(g0, flist[, absFlux]) Extract seds from a grid
STmag_to_flux(v) Convert an ST magnitude to erg/s/cm2/AA (Flambda)
STmag_from_flux(v) Convert to ST magnitude from erg/s/cm2/AA (Flambda)
fluxToMag(flux) Return the magnitudes from flux values
fluxErrTomag(flux, fluxerr) Return the magnitudes and associated errors from fluxes and flux error values
magToFlux(mag) Return the flux from magnitude values
magErrToFlux(mag, err) Return the flux and associated errors from magnitude and mag error values
append_filter(lamb, flux, tablename, …[, …]) Edit the filter catalog and append a new one given by its transfer function
appendVegaFilter(filtInst[, VegaLib]) Add filter properties to the Vega library
Classes
Filter(wavelength, transmit[, name]) Class filter Define a filter by its name, wavelength and transmission
IntegrationFilter(wavelength, transmit[, name]) Class filter
Class Inheritance Diagram

Inheritance diagram of beast.observationmodel.phot.Filter, beast.observationmodel.phot.IntegrationFilter

beast.observationmodel.vega Module

Handle vega spec/mags/fluxes manipulations

Functions
from_Vegamag_to_Flux(lamb, vega_mag) function decorator that transforms vega magnitudes to fluxes (without vega reference)
Classes
Vega([source]) Class that handles vega spectrum and references.
Class Inheritance Diagram

Inheritance diagram of beast.observationmodel.vega.Vega

Noise Model

beast.observationmodel.noisemodel.noisemodel Module
Classes
NoiseModel(astfile, *args, **kwargs) Initial class of noise models
Class Inheritance Diagram

Inheritance diagram of beast.observationmodel.noisemodel.noisemodel.NoiseModel

beast.observationmodel.noisemodel.toothpick Module

Toothpick noise model assumes that every photometric band is independent from the others.

The following package implements two classes that correspond to two variants of input AST data.

The first, MultiFilterASTs, assumes that all AST information is compiled into one single table, in which one entry corresponds to one artificial star and its recovered values.

The second, perCameraASTs, assumes that the information is split into multiple tables, and implements the equivalent of multiple instances of MultiFilterASTs in parallel to calculate the model.

Method

The noise model is computed in equally spaced bins in log flux space to avoid injecting noise when the ASTs grossly oversample the model space. This is the case for single-band ASTs, which is always the case for the BEAST toothpick noise model.

TODO:
+++ perCameraASTs has not been updated - delete? Does not work with PHAT single camera ASTs - column names duplicated
Classes
MultiFilterASTs(astfile, filters[, vega_fname]) Implement a noise model for which input information of ASTs are provided as one single table
Class Inheritance Diagram

Inheritance diagram of beast.observationmodel.noisemodel.toothpick.MultiFilterASTs

beast.observationmodel.noisemodel.trunchen Module

Trunchen version of the noise model. The goal is to compute the full 6-band covariance matrix for each model

Classes
MultiFilterASTs(astfile, filters, *args, …) Implement a noise model where the ASTs are provided as a single table
Class Inheritance Diagram

Inheritance diagram of beast.observationmodel.noisemodel.trunchen.MultiFilterASTs

beast.observationmodel.noisemodel.generic_noisemodel Module

Generates a generic noise model from artificial star test (AST) results using the toothpick method. Using ASTs results in a noise model that includes contributions from measurement (photon) noise and crowding noise.

Toothpick assumes that all bands are independent - no covariance. This is a conservative assumption. If there is true covariance more accurate results with smaller uncertainties on fit parameters can be achieved using the trunchen method. The trunchen method requires significantly more complicated ASTs and many more of them.

Functions
make_toothpick_noise_model(outname, astfile, …) toothpick noise model; assumes that every filter is independent of every other
get_noisemodelcat(filename) returns the noise model
Classes
Generic_ToothPick_Noisemodel(astfile, filters)
Class Inheritance Diagram

Inheritance diagram of beast.observationmodel.noisemodel.generic_noisemodel.Generic_ToothPick_Noisemodel

Fitting Module

beast.fitting.pdf1d Module

Classes
pdf1d(gridvals, nbins[, logspacing, minval, …]) Create an object which can be used to efficiently generate a 1D pdf for an observed object
Class Inheritance Diagram

Inheritance diagram of beast.fitting.pdf1d.pdf1d

beast.fitting.trim_grid Module

Trim the grid of models

For a given set of observations, there will be models that are so bright or faint that they will always have ~0 probability of fitting the data. This program trims those models out of the SED grid so that time is not spent calculating model points that are always zero probability.

Functions
trim_models(sedgrid, sedgrid_noisemodel, …)