special.chi

Functions defining the goodness of fit.

special.chi.gof_scal(params, lbda_obs, spec_obs, err_obs, lbda_tmp, spec_mod, dlbda_obs, instru_corr, instru_fwhm, instru_idx, filter_reader, ext_range)[source]

Wrapper for the goodness of fit routine, used to search a template library for the best-fitting spectrum. The only difference with goodness_of_fit is the “params” argument.

Parameters
  • params (tuple) – Tuple of 1 or 2 elements: scaling factor and (optionally) differential optical extinction $\Delta A_V$ ($\Delta A_V$ can be negative if template spectra are not dereddened).

  • lbda_obs (numpy 1d ndarray or list) – Wavelength of observed spectrum. If several instruments, should be ordered per instrument, not necessarily as monotonically increasing wavelength. Hereafter, n_ch = len(lbda_obs).

  • spec_obs (numpy 1d ndarray or list) – Observed spectrum for each value of lbda_obs.

  • err_obs (numpy 1d/2d ndarray or list) – Uncertainties on the observed spectrum. If a 2d array, it should have shape [2,n_ch], where the first row corresponds to the lower uncertainty and the second row to the upper uncertainty, and n_ch is the length of lbda_obs and spec_obs.

  • lbda_tmp (numpy 1d ndarray or list) – Wavelength of tested template or model. Should have a wider wavelength extent than the observed spectrum.

  • spec_mod (numpy 1d ndarray) – Model spectrum. It does not require the same wavelength sampling as the observed spectrum. If its spectral resolution is higher, it will be convolved with the instrumental spectral PSF (if instru_fwhm is provided) and then binned to the same sampling. If its spectral resolution is lower, a linear interpolation is performed to infer the value at the observed wavelength sampling.

  • dlbda_obs (numpy 1d ndarray or list, optional) – Spectral channel width for the observed spectrum. It should be provided IF one wants to weigh each point based on the spectral resolution of the respective instruments (as in Olofsson et al. 2016).

  • instru_corr (numpy 2d ndarray or list, optional) – Spectral correlation throughout post-processed images in which the spectrum is measured. It is specific to the combination of instrument, algorithm and radial separation of the companion from the central star. Can be computed using distances.spectral_correlation(). In case of a spectrum obtained with different instruments, build it with distances.combine_corrs(). If not provided, the uncertainties in the different spectral channels are considered independent. See Greco & Brandt (2017) for details.

  • instru_fwhm (float or list, optional) – The instrumental spectral fwhm provided in nm. This is used to convolve the model spectrum. If several instruments are used, provide a list of instru_fwhm values, one for each instrument whose spectral resolution is coarser than the model - including broad band filter FWHM if relevant.

  • instru_idx (numpy 1d array, optional) – 1d array containing an index representing each instrument used to obtain the spectrum, labelled from 0 to n_instru. Zero for points that don’t correspond to any instru_fwhm provided above, and i in [1,n_instru] for points associated to instru_fwhm[i-1]. This parameter must be provided if the spectrum consists of points obtained with different instruments.

  • filter_reader (python routine, optional) – External routine that reads a filter file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains transmission values. Important: if not provided, but strings are detected in instru_fwhm, the default format assumed for the files is: first row containing a header; starting from the 2nd row, 1st column: wavelength in mu, 2nd column: transmission. Note: files should all have the same format and wavelength units.

  • ext_range (tuple or None, opt) – If None, differential extinction is not considered as a free parameter. If a tuple of 3 floats is provided, differential extinction will be considered, with the floats setting the lower limit, upper limit and step of the grid search. Note: in the case of a simplex search, the range is still used to set the goodness of fit to np.inf outside of it.

Returns

chi_sq – Goodness of fit indicator.

Return type

float
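Because gof_scal exposes all observational inputs as explicit arguments, it can serve directly as the cost function of a minimizer. The following is a minimal, hypothetical sketch (synthetic placeholder spectra, all optional inputs set to None, no differential extinction); it is not taken from the package documentation, and passing None for the optional positional arguments is an assumption:

    # Sketch only: use gof_scal as the cost function of a simplex search over
    # the flux scaling factor (synthetic, placeholder spectra).
    import numpy as np
    from scipy.optimize import minimize
    from special.chi import gof_scal

    lbda_obs = np.linspace(1.0, 2.5, 30)                # observed wavelengths [mu]
    spec_obs = 1e-15 * np.exp(-(lbda_obs - 1.6)**2)     # observed fluxes
    err_obs = 0.1 * spec_obs                            # 1d uncertainties
    lbda_tmp = np.linspace(0.8, 2.8, 300)               # template wavelengths
    spec_tmp = 2e-15 * np.exp(-(lbda_tmp - 1.6)**2)     # template fluxes

    # Positional order follows the signature above; None is assumed acceptable
    # for dlbda_obs, instru_corr, instru_fwhm, instru_idx, filter_reader and
    # ext_range (i.e. no differential extinction).
    args = (lbda_obs, spec_obs, err_obs, lbda_tmp, spec_tmp,
            None, None, None, None, None, None)

    res = minimize(gof_scal, x0=[1.0], args=args, method='Nelder-Mead')
    print("Best-fit scaling factor:", res.x[0])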

special.chi.goodness_of_fit(lbda_obs, spec_obs, err_obs, lbda_mod, spec_mod, dlbda_obs=None, instru_corr=None, instru_fwhm=None, instru_idx=None, filter_reader=None, plot=False, outfile=None)[source]

Function to estimate the goodness of fit indicator defined as in Olofsson et al. 2016 (Eq. 8). In addition, if a spectral correlation matrix is provided, it is used to take into account the correlated noise between spectral channels (see Greco & Brandt 2016). The goodness of fit indicator is identical to a chi square when all points are obtained from the same instrument (no additional weighting).

Parameters
  • lbda_obs (numpy 1d ndarray or list) – Wavelength of observed spectrum. If several instruments, should be ordered per instrument, not necessarily as monotonically increasing wavelength. Hereafter, n_ch = len(lbda_obs).

  • spec_obs (numpy 1d ndarray or list) – Observed spectrum for each value of lbda_obs.

  • err_obs (numpy 1d/2d ndarray or list) – Uncertainties on the observed spectrum. If a 2d array, it should have shape [2,n_ch], where the first row corresponds to the lower uncertainty and the second row to the upper uncertainty, and n_ch is the length of lbda_obs and spec_obs.

  • lbda_mod (numpy 1d ndarray or list) – Wavelength of tested model. Should have a wider wavelength extent than the observed spectrum.

  • spec_mod (numpy 1d ndarray) – Model spectrum. It does not require the same wavelength sampling as the observed spectrum. If its spectral resolution is higher, it will be convolved with the instrumental spectral PSF (if instru_fwhm is provided) and then binned to the same sampling. If its spectral resolution is lower, a linear interpolation is performed to infer the value at the observed wavelength sampling.

  • dlbda_obs (numpy 1d ndarray or list, optional) – Spectral channel width for the observed spectrum. It should be provided IF one wants to weigh each point based on the spectral resolution of the respective instruments (as in Olofsson et al. 2016).

  • instru_corr (numpy 2d ndarray or list, optional) – Spectral correlation throughout post-processed images in which the spectrum is measured. It is specific to the combination of instrument, algorithm and radial separation of the companion from the central star. Can be computed using distances.spectral_correlation(). In case of a spectrum obtained with different instruments, build it with distances.combine_corrs(). If not provided, the uncertainties in the different spectral channels are considered independent. See Greco & Brandt (2017) for details.

  • instru_fwhm (float or list, optional) – The instrumental spectral fwhm provided in nm. This is used to convolve the model spectrum. If several instruments are used, provide a list of instru_fwhm values, one for each instrument whose spectral resolution is coarser than the model - including broad band filter FWHM if relevant.

  • instru_idx (numpy 1d array, optional) – 1d array containing an index representing each instrument used to obtain the spectrum, labelled from 0 to n_instru. Zero for points that don’t correspond to any instru_fwhm provided above, and i in [1,n_instru] for points associated to instru_fwhm[i-1]. This parameter must be provided if the spectrum consists of points obtained with different instruments.

  • filter_reader (python routine, optional) – External routine that reads a filter file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains transmission values. Important: if not provided, but strings are detected in instru_fwhm, the default format assumed for the files is: first row containing a header; starting from the 2nd row, 1st column: wavelength in mu, 2nd column: transmission. Note: files should all have the same format and wavelength units.

  • plot (bool, optional) – Whether to plot the observed spectrum against the resampled model spectrum.

  • outfile (string, optional) – Path+filename for the plot to be saved if provided.

Returns

chi_sq – Goodness of fit indicator.

Return type

float
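A minimal usage sketch with synthetic arrays (all optional arguments left at their defaults; the flux values are placeholders):

    # Compare a synthetic observed spectrum to a model covering a wider
    # wavelength range; a lower chi_sq means a better fit.
    import numpy as np
    from special.chi import goodness_of_fit

    lbda_obs = np.linspace(1.0, 2.5, 30)                # observed wavelengths [mu]
    spec_obs = 1e-15 * np.exp(-(lbda_obs - 1.6)**2)     # observed fluxes
    err_obs = 0.1 * spec_obs                            # 1d uncertainties

    lbda_mod = np.linspace(0.8, 2.8, 500)               # model wavelengths (wider range)
    spec_mod = 1.1e-15 * np.exp(-(lbda_mod - 1.6)**2)   # model fluxes

    chi_sq = goodness_of_fit(lbda_obs, spec_obs, err_obs, lbda_mod, spec_mod)
    print("Goodness of fit:", chi_sq)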

special.config

Module with configuration parameters, timing functions and multiprocessing utilities (inspired from VIP).

special.config.time_fin(start_time)[source]

Return the execution time of a script.

It requires the initialization with the function time_ini().

special.config.time_ini(verbose=True)[source]

Set and print the time at which the script started.

Returns

start_time – Starting time.

Return type

string

special.config.timing(start_time)[source]

Print the execution time of a script.

It requires the initialization with the function time_ini().
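A minimal sketch of how these timing helpers chain together (the intermediate computation is a placeholder):

    # time_ini() marks the start; timing() prints the elapsed time;
    # time_fin() returns it.
    from special.config import time_ini, timing, time_fin

    start = time_ini()                       # set and print the starting time
    _ = sum(i * i for i in range(10**6))     # placeholder workload
    timing(start)                            # print the execution time so far
    elapsed = time_fin(start)                # return the execution time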

special.fits

Module with various fits handling functions (same as in VIP)

special.fits.info_fits(fitsfilename, **kwargs)[source]

Print the information about a fits file.

Parameters
  • fitsfilename (str) – Path to the fits file.

  • **kwargs (optional) – Optional arguments to the astropy.io.fits.open() function. E.g. “output_verify” can be set to ignore, in case of non-standard header.

special.fits.open_fits(fitsfilename, n=0, header=False, ignore_missing_end=False, precision=<class 'numpy.float32'>, return_memmap=False, verbose=True, **kwargs)[source]

Load a fits file into memory as a numpy array.

Parameters
  • fitsfilename (string or pathlib.Path) – Name of the fits file or pathlib.Path object

  • n (int, optional) – It chooses which HDU to open. Default is the first one.

  • header (bool, optional) – Whether to return the header along with the data or not.

  • precision (numpy dtype, optional) – Float precision, by default np.float32 or single precision float.

  • ignore_missing_end (bool optional) – Allows opening fits files whose header is missing the END card.

  • return_memmap (bool, optional) – If True, the function returns the handle to the FITS file opened with mmap. With the hdulist, the array data of each HDU can then be accessed with mmap rather than being read into memory all at once. This is particularly useful for working with very large arrays that cannot fit entirely into physical memory.

  • verbose (bool, optional) – If True, prints a message upon completion.

  • **kwargs (optional) – Optional arguments to the astropy.io.fits.open() function. E.g. “output_verify” can be set to ignore, in case of non-standard header.

Returns

  • hdulist (hdulist) – [memmap=True] FITS file n hdulist.

  • data (numpy ndarray) – [memmap=False] Array containing the frames of the fits-cube.

  • header (dict) – [memmap=False, header=True] Dictionary containing the fits header.

special.fits.write_fits(fitsfilename, array, header=None, output_verify='exception', precision=<class 'numpy.float32'>, verbose=True)[source]

Write an array and header into a FITS file.

If there is a previous file with the same filename then it’s replaced.

Parameters
  • fitsfilename (string) – Full path of the fits file to be written.

  • array (numpy ndarray) – Array to be written into a fits file.

  • header (numpy ndarray, optional) – Array with header.

  • output_verify (str, optional) – {“fix”, “silentfix”, “ignore”, “warn”, “exception”} Verification options: https://docs.astropy.org/en/stable/io/fits/api/verification.html

  • precision (numpy dtype, optional) – Float precision, by default np.float32 or single precision float.

  • verbose (bool, optional) – If True, prints a message.
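A minimal sketch chaining the fits helpers above (the file name is arbitrary):

    # Write a float32 cube to disk, print the HDU information, then load it
    # back together with its header (returned as a dictionary).
    import numpy as np
    from special.fits import write_fits, open_fits, info_fits

    cube = np.random.normal(size=(10, 64, 64)).astype(np.float32)
    write_fits('example_cube.fits', cube)        # replaces the file if it exists

    info_fits('example_cube.fits')               # print information on the fits file
    data, header = open_fits('example_cube.fits', header=True)
    print(data.shape)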

special.mcmc_sampling

Module with the MCMC (emcee) sampling for model spectra parameter estimation.

special.mcmc_sampling.chain_zero_truncated(chain, ln_proba=None, ar=None)[source]

Return the Markov chain with dimensions walkers x steps* x parameters, where steps* is the last step before the chain is filled with zeros (i.e. before the not-yet-sampled part of the chain).

Parameters
  • chain (numpy.array) – The MCMC chain.

  • ln_proba (numpy.array, opt) – Corresponding ln-probabilities.

Returns

  • out (numpy.array) – The truncated MCMC chain, that is to say, the chain which only contains relevant information.

  • out_ln_proba (numpy.array) – If ln_proba is provided as input, out_ln_proba contains the zero-truncated ln-proba (i.e. matching shape with non-zero samples)

special.mcmc_sampling.confidence(isamples, labels, cfd=68.27, bins=100, gaussian_fit=False, weights=None, verbose=True, save=False, output_dir='', bounds=None, priors=None, **kwargs)[source]

Determine the most probable value for each model parameter, as well as the 1-sigma confidence interval.

Parameters
  • isamples (numpy.array) – The independent samples for each model parameter.

  • labels (Tuple of strings) – Tuple of labels in the same order as initial_state, that is: - first all parameters related to loaded models (e.g. ‘Teff’, ‘logg’) - next the planet photometric radius ‘R’, - (optionally) the optical extinction ‘Av’. - (optionally) ‘Tbb1’, ‘Rbb1’, ‘Tbb2’, ‘Rbb2’, etc. for each extra bb contribution

  • cfd (float, optional) – The confidence level given in percentage.

  • bins (int, optional) – The number of bins used to sample the posterior distributions.

  • gaussian_fit (boolean, optional) – If True, a gaussian fit is performed in order to determine (mu,sigma)

  • weights ((n, ) numpy ndarray or None, optional) – An array of weights for each sample.

  • verbose (boolean, optional) – Display information in the shell.

  • save (boolean, optional) – If True, a txt file with the results is saved in the output directory.

  • bounds (dictionary, opt) – Only used if a text file is saved summarizing results+bounds+priors. Should be the same bounds as provided to the MCMC.

  • priors (dictionary, opt) – Only used if a text file is saved summarizing results+bounds+priors. Should be the same priors as provided to the MCMC.

  • kwargs (optional) – Additional attributes are passed to the matplotlib hist() method.

Returns

out – A 2-element tuple with the most probable solution and the confidence interval.

Return type

tuple
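A minimal sketch with synthetic posterior samples. Interpreting isamples as an (n_samples, n_params) array is an assumption, and the labels are placeholders:

    # Most probable values and 68.27% confidence intervals from flattened
    # (synthetic) posterior samples.
    import numpy as np
    from special.mcmc_sampling import confidence

    labels = ('Teff', 'logg', 'R')
    isamples = np.column_stack([np.random.normal(1600, 50, 5000),
                                np.random.normal(4.0, 0.2, 5000),
                                np.random.normal(1.5, 0.1, 5000)])

    vals, ci = confidence(isamples, labels, cfd=68.27, bins=100,
                          gaussian_fit=False, verbose=True, save=False)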

special.mcmc_sampling.lnlike(params, labels, grid_param_list, lbda_obs, spec_obs, err_obs, dist, model_grid=None, model_reader=None, em_lines={}, em_grid={}, dlbda_obs=None, instru_corr=None, instru_fwhm=None, instru_idx=None, filter_reader=None, AV_bef_bb=False, units_obs='si', units_mod='si', interp_order=1)[source]

Define the log-likelihood function.

Parameters
  • params (tuple) – Set of model parameters for which the model grid has to be interpolated.

  • labels (Tuple of strings) – Tuple of labels in the same order as initial_state, that is: - first all parameters related to loaded models (e.g. ‘Teff’, ‘logg’) - then the planet photometric radius ‘R’, in Jupiter radius - (optionally) the flux of emission lines (labels should match those in the em_lines dictionary), in units of the model spectrum (times mu) - (optionally) the optical extinction ‘Av’, in mag - (optionally) the ratio of total to selective optical extinction ‘Rv’ - (optionally) ‘Tbb1’, ‘Rbb1’, ‘Tbb2’, ‘Rbb2’, etc. for each extra bb contribution.

  • grid_param_list (list of 1d numpy arrays/lists OR None) –

    • If list, should contain list/numpy 1d arrays with available grid of model parameters.

    • Set to None for a pure n-blackbody fit, n=1,2,…

    • Note1: model grids should not contain grids on radius and Av, but these should still be passed in initial_state (Av optional).

    • Note2: for a combined grid model + black body, just provide the grid parameter list here, and provide values for ‘Tbbn’ and ‘Rbbn’ in initial_state, labels and bounds.

  • lbda_obs (numpy 1d ndarray or list) – Wavelength of observed spectrum. If several instruments, should be ordered per instrument, not necessarily as monotonically increasing wavelength. Hereafter, n_ch = len(lbda_obs).

  • spec_obs (numpy 1d ndarray or list) – Observed spectrum for each value of lbda_obs.

  • err_obs (numpy 1d/2d ndarray or list) – Uncertainties on the observed spectrum. If a 2d array, it should have shape [2,n_ch], where the first row corresponds to the lower uncertainty and the second row to the upper uncertainty, and n_ch is the length of lbda_obs and spec_obs.

  • dist (float) – Distance in parsec, used for flux scaling of the models.

  • model_grid (numpy N-d array, optional) – If provided, should contain the grid of model spectra for each free parameter of the given grid. I.e. for a grid of n_T values of Teff and n_g values of Logg, the numpy array should be n_T x n_g x n_ch x 2, where n_ch is the number of wavelengths for the observed spectrum, and the last 2 dims are for wavelength and fluxes respectively. If provided, takes precedence over model_name/model_reader.

  • model_reader (python routine, opt) – External routine that reads a model file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains model values. See example routine in model_interpolation() description.

  • em_lines (dictionary, opt) –

    Dictionary of emission lines to be added on top of the model spectrum. Each dict entry should be the name of the line, assigned to a tuple of 4 values: 1) the wavelength (in mu); 2) a string indicating whether line intensity is expressed in flux (‘F’), luminosity (‘L’) or log(L/LSun) (“LogL”); 3) the FWHM of the gaussian (or None if to be set automatically); 4) whether the FWHM is expressed in ‘nm’, ‘mu’ or ‘km/s’.

    The third and fourth can also be set to None. In that case, the FWHM of the gaussian will automatically be set to the equivalent width of the line, calculated from the flux to be injected and the continuum level (measured in the grid model to which the line is injected). Examples: - em_lines = {‘BrG’:(2.1667,’F’,None, None)}; - em_lines = {‘BrG’:(2.1667,’LogL’, 100, ‘km/s’)}

  • em_grid (dictionary pointing to lists, opt) – Dictionary where each entry corresponds to an emission line and points to a list of values to inject for emission line fluxes. For computation efficiency, interpolation will be performed between the points of this grid during the MCMC sampling. Dict entries should match labels and em_lines.

  • dlbda_obs (numpy 1d ndarray or list, optional) – Spectral channel width for the observed spectrum. It should be provided IF one wants to weigh each point based on the spectral resolution of the respective instruments (as in Olofsson et al. 2016).

  • instru_corr (numpy 2d ndarray or list, optional) – Spectral correlation throughout post-processed images in which the spectrum is measured. It is specific to the combination of instrument, algorithm and radial separation of the companion from the central star. Can be computed using distances.spectral_correlation(). In case of a spectrum obtained with different instruments, build it with distances.combine_corrs(). If not provided, the uncertainties in the different spectral channels are considered independent. See Greco & Brandt (2017) for details.

  • instru_fwhm (float or list, optional) – The instrumental spectral fwhm provided in nm. This is used to convolve the model spectrum. If several instruments are used, provide a list of instru_fwhm values, one for each instrument whose spectral resolution is coarser than the model - including broad band filter FWHM if relevant.

  • instru_idx (numpy 1d array, optional) – 1d array containing an index representing each instrument used to obtain the spectrum, labelled from 0 to n_instru. Zero for points that don’t correspond to any instru_fwhm provided above, and i in [1,n_instru] for points associated to instru_fwhm[i-1]. This parameter must be provided if the spectrum consists of points obtained with different instruments.

  • filter_reader (python routine, optional) – External routine that reads a filter file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains transmission values. Important: if not provided, but strings are detected in instru_fwhm, the default format assumed for the files is: first row containing a header; starting from the 2nd row, 1st column: wavelength in mu, 2nd column: transmission. Note: files should all have the same format and wavelength units.

  • AV_bef_bb (bool, optional) – If both extinction and an extra bb component are free parameters, whether to apply extinction before adding the BB component (e.g. extinction mostly from circumplanetary dust) or after the BB component (e.g. mostly interstellar extinction).

  • units_obs (str, opt {'si','cgs','jy'}) – Units of observed spectrum. ‘si’ for W/m^2/mu; ‘cgs’ for ergs/s/cm^2/mu or ‘jy’ for janskys.

  • units_mod (str, opt {'si','cgs','jy'}) – Units of the model. ‘si’ for W/m^2/mu; ‘cgs’ for ergs/s/cm^2/mu or ‘jy’ for janskys. If different to units_obs, the spectrum units will be converted.

  • interp_order (int, opt, {-1,0,1}) – Interpolation mode for model interpolation. -1: log interpolation (i.e. linear interpolation on log(Flux)); 0: nearest neighbour model; 1: order 1 spline interpolation.

Returns

out – The log of the likelihood.

Return type

float

special.mcmc_sampling.lnprob(params, labels, bounds, grid_param_list, lbda_obs, spec_obs, err_obs, dist, model_grid=None, model_reader=None, em_lines={}, em_grid={}, dlbda_obs=None, instru_corr=None, instru_fwhm=None, instru_idx=None, filter_reader=None, AV_bef_bb=False, units_obs='si', units_mod='si', interp_order=1, priors=None, physical=True)[source]

Define the log-probability function as the sum of the log-prior and log-likelihood functions.

Parameters
  • params (tuple) – The model parameters.

  • labels (Tuple of strings) – Tuple of labels in the same order as initial_state, that is: - first all parameters related to loaded models (e.g. ‘Teff’, ‘logg’) - then the planet photometric radius ‘R’, in Jupiter radius - (optionally) the flux of emission lines (labels should match those in the em_lines dictionary), in units of the model spectrum (times mu) - (optionally) the optical extinction ‘Av’, in mag - (optionally) the ratio of total to selective optical extinction ‘Rv’ - (optionally) ‘Tbb1’, ‘Rbb1’, ‘Tbb2’, ‘Rbb2’, etc. for each extra bb contribution.

  • bounds (dictionary) – Each entry should be associated with a tuple corresponding to lower and upper bounds respectively. Bounds should be provided for ALL model parameters, including ‘R’ (planet photometric radius). ‘Av’ (optical extinction) is optional. If provided here, Av will also be fitted. All keywords that are neither ‘R’, ‘Av’ nor ‘M’ will be considered model grid parameters. Example for BT-SETTL: bounds = {‘Teff’:(1000,2000), ‘logg’:(3.0,4.5), ‘R’:(0.1,5), ‘Av’:(0.,2.5)} ‘M’ can be used for a prior on the mass of the planet. In that case the corresponding prior log probability is computed from the values for parameters ‘logg’ and ‘R’.

  • grid_param_list (list of 1d numpy arrays/lists OR None) –

    • If list, should contain list/numpy 1d arrays with available grid of model parameters.

    • Set to None for a pure n-blackbody fit, n=1,2,…

    • Note1: model grids should not contain grids on radius and Av, but these should still be passed in initial_state (Av optional).

    • Note2: for a combined grid model + black body, just provide the grid parameter list here, and provide values for ‘Tbbn’ and ‘Rbbn’ in initial_state, labels and bounds.

  • lbda_obs (numpy 1d ndarray or list) – Wavelength of observed spectrum. If several instruments, should be ordered per instrument, not necessarily as monotonically increasing wavelength. Hereafter, n_ch = len(lbda_obs).

  • spec_obs (numpy 1d ndarray or list) – Observed spectrum for each value of lbda_obs.

  • err_obs (numpy 1d/2d ndarray or list) – Uncertainties on the observed spectrum. If a 2d array, it should have shape [2,n_ch], where the first row corresponds to the lower uncertainty and the second row to the upper uncertainty, and n_ch is the length of lbda_obs and spec_obs.

  • dist (float) – Distance in parsec, used for flux scaling of the models.

  • model_grid (numpy N-d array, optional) – If provided, should contain the grid of model spectra for each free parameter of the given grid. I.e. for a grid of n_T values of Teff and n_g values of Logg, the numpy array should be n_T x n_g x n_ch x 2, where n_ch is the number of wavelengths for the observed spectrum, and the last 2 dims are for wavelength and fluxes respectively. If provided, takes precedence over model_name/model_reader.

  • model_reader (python routine) – External routine that reads a model file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains model values. See example routine in model_interpolation() description.

  • em_lines (dictionary, opt) – Dictionary of emission lines to be added on top of the model spectrum. Each dict entry should be the name of the line, assigned to a tuple of 4 values: 1) the wavelength (in mu); 2) a string indicating whether line intensity is expressed in flux (‘F’), luminosity (‘L’) or log(L/LSun) (“LogL”); 3) the FWHM of the gaussian (or None if to be set automatically); 4) whether the FWHM is expressed in ‘nm’, ‘mu’ or ‘km/s’. The third and fourth can also be set to None. In that case, the FWHM of the gaussian will automatically be set to the equivalent width of the line, calculated from the flux to be injected and the continuum level (measured in the grid model to which the line is injected). Examples: em_lines = {‘BrG’:(2.1667,’F’,None, None)}; em_lines = {‘BrG’:(2.1667,’LogL’, 100, ‘km/s’)}

  • em_grid (dictionary pointing to lists, opt) – Dictionary where each entry corresponds to an emission line and points to a list of values to inject for emission line fluxes. For computation efficiency, interpolation will be performed between the points of this grid during the MCMC sampling. Dict entries should match labels and em_lines.

  • dlbda_obs (numpy 1d ndarray or list, optional) – Spectral channel width for the observed spectrum. It should be provided IF one wants to weigh each point based on the spectral resolution of the respective instruments (as in Olofsson et al. 2016).

  • instru_corr (numpy 2d ndarray or list, optional) – Spectral correlation throughout post-processed images in which the spectrum is measured. It is specific to the combination of instrument, algorithm and radial separation of the companion from the central star. Can be computed using distances.spectral_correlation(). In case of a spectrum obtained with different instruments, build it with distances.combine_corrs(). If not provided, the uncertainties in the different spectral channels are considered independent. See Greco & Brandt (2017) for details.

  • instru_fwhm (float or list, optional) – The instrumental spectral fwhm provided in nm. This is used to convolve the model spectrum. If several instruments are used, provide a list of instru_fwhm values, one for each instrument whose spectral resolution is coarser than the model - including broad band filter FWHM if relevant.

  • instru_idx (numpy 1d array, optional) – 1d array containing an index representing each instrument used to obtain the spectrum, labelled from 0 to n_instru. Zero for points that don’t correspond to any instru_fwhm provided above, and i in [1,n_instru] for points associated to instru_fwhm[i-1]. This parameter must be provided if the spectrum consists of points obtained with different instruments.

  • filter_reader (python routine, optional) – External routine that reads a filter file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains transmission values. Important: if not provided, but strings are detected in instru_fwhm, the default format assumed for the files is: first row containing a header; starting from the 2nd row, 1st column: wavelength in mu, 2nd column: transmission. Note: files should all have the same format and wavelength units.

  • AV_bef_bb (bool, optional) – If both extinction and an extra bb component are free parameters, whether to apply extinction before adding the BB component (e.g. extinction mostly from circumplanetary dust) or after the BB component (e.g. mostly interstellar extinction).

  • units_obs (str, opt {'si','cgs','jy'}) – Units of observed spectrum. ‘si’ for W/m^2/mu; ‘cgs’ for ergs/s/cm^2/mu or ‘jy’ for janskys.

  • units_mod (str, opt {'si','cgs','jy'}) – Units of the model. ‘si’ for W/m^2/mu; ‘cgs’ for ergs/s/cm^2/mu or ‘jy’ for janskys. If different to units_obs, the spectrum units will be converted.

  • interp_order (int, opt, {-1,0,1}) – Interpolation mode for model interpolation. -1: log interpolation (i.e. linear interpolation on log(Flux)); 0: nearest neighbour model; 1: order 1 spline interpolation.

  • priors (dictionary, opt) –

    If not None, sets prior estimates for each parameter of the model. Each entry should be set to either None (no prior) or a tuple of 2 elements containing the prior estimate and the uncertainty on the estimate. Missing entries (i.e. provided in the bounds dictionary but not here) will be assigned no prior. e.g. priors = {‘Teff’:(1600,100), ‘logg’:(3.5,0.5), ‘R’:(1.6,0.1), ‘Av’:(1.8,0.2), ‘M’:(10,3)}

    Important: dictionary entry names should match exactly those of bounds.

  • physical (bool, opt) – In case of extra black body component(s) to a photosphere, whether to force lower temperature than the photosphere effective temperature.

Returns

out – The log-probability value.

Return type

float

special.mcmc_sampling.mcmc_spec_sampling(lbda_obs, spec_obs, err_obs, dist, grid_param_list, initial_state, labels, bounds, resamp_before=True, model_grid=None, model_reader=None, em_lines={}, em_grid={}, dlbda_obs=None, instru_corr=None, instru_fwhm=None, instru_idx=None, filter_reader=None, AV_bef_bb=False, units_obs='si', units_mod='si', interp_order=1, priors=None, physical=True, interp_nonexist=True, ini_ball=0.1, a=2.0, nwalkers=1000, niteration_min=10, niteration_limit=1000, niteration_supp=0, check_maxgap=20, conv_test='ac', ac_c=50, ac_count_thr=3, burnin=0.3, rhat_threshold=1.01, rhat_count_threshold=1, grid_name='resamp_grid.fits', output_dir='special/', output_file=None, nproc=1, display=False, verbosity=0, save=False)[source]

Runs an affine invariant MCMC sampling algorithm in order to determine the most likely parameters for a given spectral model and observed spectrum. Allowed features:

  • Spectral models can either be read from a grid (e.g. BT-SETTL) or be purely parametric (e.g. a blackbody model).

  • Extinction (A_V) and total-to-selective optical extinction ratio (R_V) can be sampled. Default: A_V=0. If non-zero, default R_V=3.1 (ISM).

  • A dictionary of emission lines can be provided and their flux can be sampled too.

  • Gaussian priors can be provided for each parameter, including the mass of the object. The latter will be used if ‘logg’ is a parameter.

  • Spectral correlation between measurements will be taken into account if provided in ‘instru_corr’.

  • Convolution of the model spectra with instrumental FWHM or photometric filter can be performed using ‘instru_fwhm’ and/or ‘filter_reader’ (done before resampling to observed).

  • The weight of each observed point will be directly proportional to Delta lbda_i/lbda_i, where Delta lbda_i is either the FWHM of the photometric filter (imager) or the width of the spectral channel (IFS).

  • The MCMC convergence criterion can either be based on auto-correlation time (default) or the Gelman-Rubin test.

The result of this procedure is a chain with the samples from the posterior distributions of each of the free parameters in the model. More details in Christiaens et al. (2021).

Parameters
  • lbda_obs (numpy 1d ndarray or list) – Wavelength of observed spectrum. If several instruments, should be ordered per instrument, not necessarily as monotonically increasing wavelength. Hereafter, n_ch = len(lbda_obs).

  • spec_obs (numpy 1d ndarray or list) – Observed spectrum for each value of lbda_obs.

  • err_obs (numpy 1d/2d ndarray or list) – Uncertainties on the observed spectrum. If a 2d array, it should have shape [2,n_ch], where the first row corresponds to the lower uncertainty and the second row to the upper uncertainty, and n_ch is the length of lbda_obs and spec_obs.

  • dist (float) – Distance in parsec, used for flux scaling of the models.

  • grid_param_list (list of 1d numpy arrays/lists OR None) –

    • If list, should contain list/numpy 1d arrays with available grid of model parameters.

    • Set to None for a pure n-blackbody fit, n=1,2,…

    • Note1: model grids should not contain grids on radius and Av, but these should still be passed in initial_state (Av optional).

    • Note2: for a combined grid model + black body, just provide the grid parameter list here, and provide values for ‘Tbbn’ and ‘Rbbn’ in initial_state, labels and bounds.

  • initial_state (tuple of floats) – Initial guess on the best fit parameters of the spectral fit. Length of the tuple should match the total number of free parameters. Walkers will all be initialised in a small ball of parameter space around that first guess. - first all parameters related to loaded models (e.g. ‘Teff’, ‘logg’) - then the planet photometric radius ‘R’, in Jupiter radius - (optionally) the intensity of emission lines (labels must match those in the em_lines dict), in units of the model spectrum (x mu) - (optionally) the optical extinction ‘Av’, in mag - (optionally) the ratio of total to selective optical extinction ‘Rv’ - (optionally) ‘Tbb1’, ‘Rbb1’, ‘Tbb2’, ‘Rbb2’, etc. for each extra bb contribution

  • labels (Tuple of strings) – Tuple of labels in the same order as initial_state, that is: - first all parameters related to loaded models (e.g. ‘Teff’, ‘logg’) - then the planet photometric radius ‘R’, in Jupiter radius - (optionally) the flux of emission lines (labels should match those in the em_lines dictionary), in units of the model spectrum (times mu) - (optionally) the optical extinction ‘Av’, in mag - (optionally) the ratio of total to selective optical extinction ‘Rv’ - (optionally) ‘Tbb1’, ‘Rbb1’, ‘Tbb2’, ‘Rbb2’, etc. for each extra bb contribution.

  • bounds (dictionary) – Each entry should be associated with a tuple corresponding to lower and upper bounds respectively. Bounds should be provided for ALL model parameters, including ‘R’ (planet photometric radius). ‘Av’ (optical extinction) is optional. If provided here, Av will also be fitted. Example for BT-SETTL: bounds = {‘Teff’:(1000,2000), ‘logg’:(3.0,4.5), ‘R’:(0.1,5), ‘Av’:(0.,2.5)} ‘M’ can be used for a prior on the mass of the planet. In that case the corresponding prior log probability is computed from the values for parameters ‘logg’ and ‘R’ (if both exist).

  • resamp_before (bool, optional) – Whether to prepare the whole grid of resampled models before entering the MCMC, i.e. to avoid doing it at every MCMC step. Recommended. Only reason not to: model grid is too large and individual models require being opened and resampled at each step.

  • model_grid (numpy N-d array, optional) – If provided, should contain the grid of model spectra for each free parameter of the given grid. I.e. for a grid of n_T values of Teff and n_g values of Logg, the numpy array should be n_T x n_g x n_ch x 2, where n_ch is the number of wavelengths for the observed spectrum, and the last 2 dims are for wavelength and fluxes respectively. If provided, takes precedence over filename/file_reader.

  • model_reader (python routine, optional) – External routine that reads a model file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains model values. See example routine in model_interpolation() description.

  • em_lines (dictionary, opt) – Dictionary of emission lines to be added on top of the model spectrum. Each dict entry should be the name of the line, assigned to a tuple of 4 values: 1) the wavelength (in mu); 2) a string indicating whether line intensity is expressed in flux (‘F’), luminosity (‘L’) or log(L/LSun) (“LogL”); 3) the FWHM of the gaussian (or None if to be set automatically); 4) whether the FWHM is expressed in ‘nm’, ‘mu’ or ‘km/s’. The third and fourth can also be set to None. In that case, the FWHM of the gaussian will automatically be set to the equivalent width of the line, calculated from the flux to be injected and the continuum level (measured in the grid model to which the line is injected). Examples: em_lines = {‘BrG’:(2.1667,’F’, None, None)}; em_lines = {‘BrG’:(2.1667,’LogL’, 100, ‘km/s’)}

  • em_grid (dictionary pointing to lists, opt) – Dictionary where each entry corresponds to an emission line and points to a list of values to inject for emission line fluxes. For computation efficiency, interpolation will be performed between the points of this grid during the MCMC sampling. Dict entries should match labels and em_lines.

  • dlbda_obs (numpy 1d ndarray or list, optional) – Spectral channel width for the observed spectrum. It should be provided IF one wants to weigh each point based on the spectral resolution of the respective instruments (as in Olofsson et al. 2016).

  • instru_corr (numpy 2d ndarray or list, optional) – Spectral correlation throughout post-processed images in which the spectrum is measured. It is specific to the combination of instrument, algorithm and radial separation of the companion from the central star. Can be computed using spec_corr.spectral_correlation(). In case of a spectrum obtained with different instruments, build it with spec_corr.combine_corrs(). If not provided, the uncertainties in the different spectral channels are considered independent. See Greco & Brandt (2017) for details.

  • instru_fwhm (float OR list of either floats or strings, optional) – The instrumental spectral fwhm provided in nm. This is used to convolve the model spectrum. If several instruments are used, provide a list of instru_fwhm values, one for each instrument whose spectral resolution is coarser than the model - including broad band filter FWHM if relevant. If strings are provided, they should correspond to filenames (including full paths) of text files containing the filter information for each observed wavelength, following the strict format described under filter_reader below.

  • instru_idx (numpy 1d array, optional) – 1d array containing an index representing each instrument used to obtain the spectrum, labelled from 0 to n_instru. Zero for points that don’t correspond to any instru_fwhm provided above, and i in [1,n_instru] for points associated to instru_fwhm[i-1]. This parameter must be provided if the spectrum consists of points obtained with different instruments.

  • filter_reader (python routine, optional) –

    External routine that reads a filter file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains transmission values. Important: if not provided, but strings are detected in instru_fwhm, the default file reader will be used. It assumes the following format for the files: the first row contains a header; starting from the 2nd row, the 1st column is the wavelength and the 2nd column is the transmission; the unit of wavelength can be provided in parentheses in the first header key name, e.g. “WL(AA)” for angstrom, “wavelength(mu)” for micrometer or “lambda(nm)” for nanometer (only what is in parentheses matters).

    Important: filter files should all have the same format and WL units.

  • AV_bef_bb (bool, optional) – If both extinction and an extra bb component are free parameters, whether to apply extinction before adding the BB component (e.g. extinction mostly from circumplanetary dust) or after the BB component (e.g. mostly interstellar extinction).

  • units_obs (str, opt {'si','cgs','jy'}) – Units of observed spectrum. ‘si’ for W/m^2/mu; ‘cgs’ for ergs/s/cm^2/mu or ‘jy’ for janskys.

  • units_mod (str, opt {'si','cgs','jy'}) – Units of the model. ‘si’ for W/m^2/mu; ‘cgs’ for ergs/s/cm^2/mu or ‘jy’ for janskys. If different to units_obs, the spectrum units will be converted.

  • interp_order (int, opt, {-1,0,1}) – Interpolation mode for model interpolation. -1: log interpolation (i.e. linear interpolation on log(Flux)); 0: nearest neighbour model; 1: order 1 spline interpolation.

  • priors (dictionary, opt) – If not None, sets prior estimates for each parameter of the model. Each entry should be set to either None (no prior) or a tuple of 2 elements containing the prior estimate and the uncertainty on the estimate. Missing entries (i.e. provided in the bounds dictionary but not here) will be assigned no prior. e.g. priors = {‘Teff’:(1600,100), ‘logg’:(3.5,0.5), ‘R’:(1.6,0.1), ‘Av’:(1.8,0.2), ‘M’:(10,3)} Important: dictionary entry names should match exactly those of bounds.

  • physical (bool, opt) – In case of extra black body component(s) to a photosphere, whether to force lower temperature than the photosphere effective temperature.

  • interp_nonexist (bool, opt) – Whether to interpolate non-existing models in the grid. Only used if resamp_before is set to True.

  • ini_ball (float or string, default=1e-1) – Size of the initial ball in parameter space from which walkers start their chain. If “uniform” is provided, a uniform ini_ball spanning the bounds interval will be used to initialise walkers.

  • a (float, default=2.0) – The proposal scale parameter. See notes.

  • nwalkers (int, default: 1000) – Number of walkers

  • niteration_min (int, optional) – Steps per walker lower bound. The simulation will run at least this number of steps per walker.

  • niteration_limit (int, optional) – Steps per walker upper bound. If the simulation runs up to ‘niteration_limit’ steps without having reached the convergence criterion, the run is stopped.

  • niteration_supp (int, optional) – Number of iterations to run after having “reached the convergence”.

  • burnin (float, default=0.3) – The fraction of each walker’s chain that is discarded as burn-in.

  • rhat_threshold (float, default=1.01) – The Gelman-Rubin threshold used for the convergence test.

  • rhat_count_threshold (int, optional) – The Gelman-Rubin test must be satisfied ‘rhat_count_threshold’ times in a row before claiming that the chain has converged.

  • check_maxgap (int, optional) – Maximum number of steps per walker between two convergence tests.

  • conv_test (str, optional {'gb','autocorr'}) – Method to check for convergence: - ‘gb’ for gelman-rubin test (http://digitalassets.lib.berkeley.edu/sdtr/ucb/text/305.pdf) - ‘autocorr’ for autocorrelation analysis (https://emcee.readthedocs.io/en/stable/tutorials/autocorr/)

  • nproc (int, optional) – The number of processes to use for parallelization.

  • grid_name (str, optional) – Name of the fits file containing the model grid (numpy array) after convolution and resampling to match the observed spectrum given as input. If provided, the file will be read if it exists (and resamp_before is set to True), or created and written to disk if it doesn’t.

  • output_dir (str, optional) – The name of the output directory which contains the output files in the case save is True.

  • output_file (str, optional) – The name of the output file which contains the MCMC results in the case save is True.

  • display (bool, optional) – If True, the walk plot is displayed at each evaluation of the Gelman-Rubin test.

  • verbosity (0, 1 or 2, optional) – Verbosity level. 0 for no output and 2 for full information.

  • save (bool, optional) – If True, the MCMC results are pickled.

Returns

  • out (numpy.array) – The MCMC samples after truncation of zeros.

  • lnprobability (emcee sample object) – The corresponding probabilities for each sample

Notes

The parameter a must be > 1. For more theoretical information concerning this parameter, see Goodman & Weare, 2010, Comm. App. Math. Comp. Sci., 5, 65, Eq. [9] p70.

The parameter ‘rhat_threshold’ can be a numpy.array with individual threshold value for each model parameter.
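A hypothetical end-to-end sketch of a run. The grid values, file-name template and model_reader are placeholders (they do not ship with the package), and the observed arrays are synthetic:

    # Sample Teff, logg and R for an observed spectrum, reading models from a
    # (hypothetical) grid of fits files via a custom model_reader.
    import numpy as np
    from special.fits import open_fits
    from special.mcmc_sampling import mcmc_spec_sampling

    def model_reader(params):
        # Placeholder: return a 2D array (wavelengths, fluxes) for the
        # requested (Teff, logg) grid point.
        return open_fits('models/model_{:.0f}_{:.1f}.fits'.format(*params))

    lbda_obs = np.linspace(1.0, 2.5, 30)                # observed wavelengths [mu]
    spec_obs = 1e-15 * np.exp(-(lbda_obs - 1.6)**2)     # observed fluxes
    err_obs = 0.1 * spec_obs                            # uncertainties

    grid_param_list = [np.arange(1200, 2100, 100),      # Teff grid [K]
                       np.arange(3.0, 5.0, 0.5)]        # logg grid
    labels = ('Teff', 'logg', 'R')
    initial_state = (1600., 4.0, 1.5)                   # Teff [K], logg, R [R_Jup]
    bounds = {'Teff': (1200, 2000), 'logg': (3.0, 4.5), 'R': (0.1, 5)}

    final_chain, ln_proba = mcmc_spec_sampling(lbda_obs, spec_obs, err_obs,
                                               dist=40.,  # distance [pc]
                                               grid_param_list=grid_param_list,
                                               initial_state=initial_state,
                                               labels=labels, bounds=bounds,
                                               model_reader=model_reader,
                                               nwalkers=100,
                                               niteration_limit=500,
                                               nproc=4, verbosity=1, save=False)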

special.mcmc_sampling.show_corner_plot(chain, burnin=0.5, save=False, output_dir='', mcmc_res=None, units=None, ndig=None, labels_plot=None, plot_name='corner_plot.pdf', **kwargs)[source]

Display/save a figure showing the corner plot (pdfs + correlation plots).

Parameters
  • chain (numpy.array) – The Markov chain. The shape of chain must be nwalkers x length x dim. If a part of the chain is filled with zero values, the method will discard these steps.

  • burnin (float, default: 0.5) – The fraction of each walker’s chain that is discarded as burn-in.

  • save (boolean, default: False) – If True, a pdf file is created.

  • output_dir (str, optional) – The name of the output directory which contains the output files in the case save is True.

  • mcmc_res (numpy array OR tuple of 2 dictionaries/np.array, opt) – Values to be printed on top of each 1d posterior distribution * if numpy array: - npar x 3 dimensions (where npar is the number of parameters), containing the most likely value of each parameter and the lower and upper uncertainties at the desired quantiles, resp. - npar x 2 dimensions: same as above but with a single value of uncertainty. E.g. output of spec_confidence() for a gaussian fit * if tuple of 2 dictionaries: output of spec_confidence without gaussian fit * if tuple of 2 np.array: output of spec_confidence() with gaussian fit

  • units (tuple, opt) – Tuple of strings containing units for each parameter. If provided, mcmc_res will be printed on top of each 1d posterior distribution along with these units.

  • ndig (tuple, opt) – Number of digits precision for each printed parameter

  • labels_plot (tuple, opt) – Labels corresponding to parameter names, used for the plot. If None, will use “labels” passed in kwargs.

  • kwargs – Additional attributes passed to the corner.corner() method. (e.g. ‘labels’, ‘labels_tit’, ‘labels_tit_unit’, ‘title_kwargs’)

Returns

Display the figure or create a pdf file (plot_name, ‘corner_plot.pdf’ by default) in the output directory.

Raises

ImportError

special.mcmc_sampling.show_walk_plot(chain, labels, save=False, output_dir='', ntrunc=100, **kwargs)[source]

Display/save a figure showing the path of each walker during the MCMC run.

Parameters
  • chain (numpy.array) – The Markov chain. The shape of chain must be nwalkers x length x dim. If a part of the chain is filled with zero values, the method will discard these steps.

  • labels (Tuple of strings) – Tuple of labels in the same order as initial_state, that is: - first all parameters related to loaded models (e.g. ‘Teff’, ‘logg’) - next the planet photometric radius ‘R’, - (optionally) the optical extinction ‘Av’. - (optionally) ‘Tbb1’, ‘Rbb1’, ‘Tbb2’, ‘Rbb2’, etc. for each extra bb contribution

  • save (boolean, default: False) – If True, a pdf file is created.

  • output_dir (str, optional) – The name of the output directory which contains the output files in the case save is True.

  • ntrunc (int, opt) – Maximum number of walkers plotted. If there are too many walkers, the plot becomes too voluminous and crowded, so only the first ntrunc walkers are plotted.

  • kwargs – Additional attributes are passed to the matplotlib plot method.

Returns

Display the figure or create a pdf file named walk_plot.pdf in the working directory.
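A short sketch of inspecting a chain (here a synthetic array shaped nwalkers x length x dim; in practice this would be the chain returned by mcmc_spec_sampling):

    # Walk plot and corner plot for a (synthetic) chain of 3 parameters.
    import numpy as np
    from special.mcmc_sampling import show_walk_plot, show_corner_plot

    labels = ('Teff', 'logg', 'R')
    chain = np.random.normal(loc=(1600., 4.0, 1.5), scale=(50., 0.2, 0.1),
                             size=(100, 500, 3))        # nwalkers x length x dim

    show_walk_plot(chain, labels)                        # walker paths
    show_corner_plot(chain, burnin=0.3, labels=labels)   # posteriors + correlations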

special.model_resampling

Functions useful for spectral fitting of companions, and model interpolation.

special.model_resampling.interpolate_model(params, grid_param_list, params_em={}, em_grid={}, em_lines={}, labels=None, model_grid=None, model_reader=None, interp_order=1, max_dlbda=0.0002, verbose=False)[source]
Parameters
  • params (tuple) – Set of model parameters for which the model grid has to be interpolated.

  • grid_param_list (list of 1d numpy arrays/lists) –

    • If list, should contain list/numpy 1d arrays with available grid of model parameters.

    • Note1: model grids should not contain grids on radius and Av, but these should still be passed in initial_state (Av optional).

  • params_em (dictionary, opt) – Set of emission line parameters (typically fluxes) for which the model grid has to be interpolated.

  • em_grid (dictionary pointing to lists, opt) – Dictionary where each entry corresponds to an emission line and points to a list of values to inject for emission line fluxes. For computation efficiency, interpolation will be performed between the points of this grid during the MCMC sampling. Dict entries should match labels and em_lines. Note: the length of this dictionary can be different from that of em_lines; i.e. if a line is in em_lines but not in em_grid, it will not be considered an MCMC parameter.

  • em_lines (dictionary, opt) – Dictionary of emission lines to be added on top of the model spectrum. Each dict entry should be the name of the line, assigned to a tuple of 3 values: 1) the wavelength (in mu), 2) a string indicating whether line intensity is expressed in flux (‘F’), luminosity (‘L’) or log(L/LSun) (“LogL”), and 3) the latter quantity. The intensity of the emission lines can be sampled by MCMC, in that case the last element of the tuple can be set to None. If not to be sampled, a value for the intensity should be provided (in the same system of units as the model spectra, multiplied by mu). Example: em_lines = {‘BrG’:(2.1667,’F’,263)}

  • labels (Tuple of strings) – Tuple of labels in the same order as initial_state, that is: * first all parameters related to loaded models (e.g. ‘Teff’, ‘logg’) * next the planet photometric radius ‘R’, in Jupiter radius * (optionally) the flux of emission lines (labels should match those in the em_lines dictionary), in units of the model spectrum (times mu) * (optionally) the optical extinction ‘Av’, in mag * (optionally) the ratio of total to selective optical extinction ‘Rv’ * (optionally) ‘Tbb1’, ‘Rbb1’, ‘Tbb2’, ‘Rbb2’, etc. for each extra bb contribution. Note: only necessary if an emission list dictionary is provided.

  • model_grid (numpy N-d array) – If provided, should contain the grid of model spectra for each free parameter of the given grid. I.e. for a grid of n_T values of Teff and n_g values of Logg, the numpy array should be n_T x n_g x n_ch x 2, where n_ch is the number of wavelengths for the observed spectrum. If provided, takes precedence over filename/file_reader which would open and read models at each step of the MCMC. Note: if provided, it should already probe any potential emission line.

  • model_reader (python routine) – External routine that reads a model file, converts it to required units and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains model values. Example below.

  • interp_order (int, opt, {0,1}) – 0: nearest neighbour model. 1: Order 1 spline interpolation.

  • max_dlbda (float, opt) – Maximum delta lbda in mu allowed if binning of lbda_model is necessary. This is necessary for grids of models (e.g. BT-SETTL) where the wavelength sampling is not the same depending on parameters (e.g. between 4000K and 4100K models for BT-SETTL): resampling preserving original resolution is too prohibitive computationally.

  • verbose (bool, optional) – Whether to print more information during resampling.

Returns

model – Interpolated model for input parameters. First column corresponds to wavelengths, and the second contains model values.

Return type

2d numpy array

Example file_reader:

    def _example_file_reader(params):
        '''This is a minimal example for the file_reader routine to be
        provided as argument to model_interpolation. The routine should only
        take grid parameters as inputs, and return both the wavelengths and
        model values as a 2D numpy array. This example assumes the model is
        stored in a fits file that is already a 2D numpy array, where the
        first column is the wavelength and the 2nd column contains the
        corresponding model values.'''
        model = open_fits(filename.format(params[0], params[1]))
        return model
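A short usage sketch on top of a reader of that kind (grid values and file template are placeholders; per the Returns description above, the first column of the output holds the wavelengths):

    # Interpolate a model at an off-grid point (Teff=1550 K, logg=4.2).
    import numpy as np
    from special.fits import open_fits
    from special.model_resampling import interpolate_model

    def model_reader(params):
        # Placeholder reader: 2D array with wavelengths and model values
        return open_fits('models/model_{:.0f}_{:.1f}.fits'.format(*params))

    grid_param_list = [np.arange(1200, 2100, 100),    # Teff grid [K]
                       np.arange(3.0, 5.0, 0.5)]      # logg grid
    model = interpolate_model((1550., 4.2), grid_param_list,
                              model_reader=model_reader, interp_order=1)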

special.model_resampling.make_model_from_params(params, labels, grid_param_list, dist, lbda_obs=None, model_grid=None, model_reader=None, em_lines={}, em_grid={}, dlbda_obs=None, instru_fwhm=None, instru_idx=None, filter_reader=None, AV_bef_bb=False, units_obs='si', units_mod='si', interp_order=1)[source]

Routine to make the model from input parameters.

Parameters
  • params (tuple) – Set of model parameters for which the model grid has to be interpolated.

  • labels (Tuple of strings) – Tuple of labels in the same order as initial_state, that is: * first all parameters related to loaded models (e.g. ‘Teff’, ‘logg’) * then the planet photometric radius ‘R’, in Jupiter radius * (optionally) the flux of emission lines (labels should match those in the em_lines dictionary), in units of the model spectrum (times mu) * (optionally) the optical extinction ‘Av’, in mag * (optionally) the ratio of total to selective optical extinction ‘Rv’ * (optionally) ‘Tbb1’, ‘Rbb1’, ‘Tbb2’, ‘Rbb2’, etc. for each extra bb contribution.

  • grid_param_list (list of 1d numpy arrays/lists OR None) –

    • If list, should contain list/numpy 1d arrays with available grid of model parameters.

    • Set to None for a pure n-blackbody fit, n=1,2,…

    • Note1: model grids should not contain grids on radius and Av, but these should still be passed in initial_state (Av optional).

    • Note2: for a combined grid model + black body, just provide the grid parameter list here, and provide values for ‘Tbbn’ and ‘Rbbn’ in initial_state, labels and bounds.

  • dist (float) – Distance in parsec, used for flux scaling of the models.

  • lbda_obs (numpy 1d ndarray or list, opt) – Wavelength of observed spectrum. If provided, the model spectrum will be resampled to match lbda_obs. If several instruments, should be ordered per instrument, not necessarily as monotonically increasing wavelength. Hereafter, n_ch = len(lbda_obs).

  • model_grid (numpy N-d array, optional) – If provided, should contain the grid of model spectra for each free parameter of the given grid. I.e. for a grid of n_T values of Teff and n_g values of Logg, the numpy array should be n_T x n_g x n_ch x 2, where n_ch is the number of wavelengths for the observed spectrum, and the last 2 dims are for wavelength and fluxes respectively. If provided, takes precedence over model_name/model_reader.

  • model_reader (python routine, opt) – External routine that reads a model file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains model values. See example routine in interpolate_model() description.

  • em_lines (dictionary, opt) – Dictionary of emission lines to be added on top of the model spectrum. Each dict entry should be the name of the line, assigned to a tuple of 3 values: 1) the wavelength (in mu), 2) a string indicating whether line intensity is expressed in flux (‘F’), luminosity (‘L’) or log(L/LSun) (“LogL”), and 3) the latter quantity. The intensity of the emission lines can be sampled by MCMC, in that case the last element of the tuple can be set to None. If not to be sampled, a value for the intensity should be provided (in the same system of units as the model spectra, multiplied by mu). Examples: em_lines = {‘BrG’:(2.1667,’F’,263)}; em_lines = {‘BrG’:(2.1667,’LogL’,-5.1)}

  • em_grid (dictionary pointing to lists, opt) – Dictionary where each entry corresponds to an emission line and points to a list of values to inject for emission line fluxes. For computation efficiency, interpolation will be performed between the points of this grid during the MCMC sampling. Dict entries should match labels and em_lines.

  • dlbda_obs (numpy 1d ndarray or list, optional) – Spectral channel width for the observed spectrum. It should be provided IF one wants to weigh each point based on the spectral resolution of the respective instruments (as in Olofsson et al. 2016).

  • instru_fwhm (float or list, optional) – The instrumental spectral fwhm provided in nm. This is used to convolve the model spectrum. If several instruments are used, provide a list of instru_fwhm values, one for each instrument whose spectral resolution is coarser than the model - including broad band filter FWHM if relevant.

  • instru_idx (numpy 1d array, optional) – 1d array containing an index representing each instrument used to obtain the spectrum, labelled from 0 to n_instru. Zero for points that don’t correspond to any instru_fwhm provided above, and i in [1,n_instru] for points associated to instru_fwhm[i-1]. This parameter must be provided if the spectrum consists of points obtained with different instruments.

  • filter_reader (python routine, optional) – External routine that reads a filter file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains transmission values. Important: if not provided, but strings are detected in instru_fwhm, the default format assumed for the files is: first row containing a header; starting from the 2nd row, 1st column: wavelength in mu, 2nd column: transmission. Note: files should all have the same format and wavelength units.

  • AV_bef_bb (bool, optional) – If both extinction and an extra bb component are free parameters, whether to apply extinction before adding the BB component (e.g. extinction mostly from circumplanetary dust) or after the BB component (e.g. mostly interstellar extinction).

  • units_obs (str, opt {'si','cgs','jy'}) – Units of observed spectrum. ‘si’ for W/m^2/mu; ‘cgs’ for ergs/s/cm^2/mu or ‘jy’ for janskys.

  • units_mod (str, opt {'si','cgs','jy'}) – Units of the model. ‘si’ for W/m^2/mu; ‘cgs’ for ergs/s/cm^2/mu or ‘jy’ for janskys. If different to units_obs, the spectrum units will be converted.

  • interp_order (int, opt, {-1,0,1}) – Interpolation mode for model interpolation. -1: log interpolation (i.e. linear interpolation on log(Flux)); 0: nearest neighbour model; 1: Order 1 spline interpolation.

Returns

out – The model wavelength and spectrum

Return type

numpy array

special.model_resampling.make_resampled_models(lbda_obs, grid_param_list, model_grid=None, model_reader=None, em_lines={}, em_grid=None, dlbda_obs=None, instru_fwhm=None, instru_idx=None, filter_reader=None, interp_nonexist=True)[source]

Returns a cube of models after convolution and resampling as in the observations.

Parameters
  • lbda_obs (numpy 1d ndarray or list) – Wavelength of observed spectrum. If several instruments, should be ordered per instrument, not necessarily as monotonically increasing wavelength. Hereafter, n_ch = len(lbda_obs).

  • grid_param_list (list of 1d numpy arrays/lists) – Should contain list/numpy 1d arrays with available grid of model parameters. Note: model grids shouldn’t contain grids on radius and Av.

  • model_grid (numpy N-d array, optional) – If provided, should contain the grid of model spectra for each free parameter of the given grid. I.e. for a grid of n_T values of Teff and n_g values of Logg, the numpy array should be n_T x n_g x n_ch x 2, where n_ch is the number of wavelengths for the observed spectrum, and the last 2 dims are for wavelength and fluxes respectively. If provided, takes precedence over model_reader.

  • model_reader (python routine) – External routine that reads a model file, converts it to required units and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains model values. Example below.

  • em_lines (dictionary, opt) – Dictionary of emission lines to be added on top of the model spectrum. Each dict entry should be the name of the line, assigned to a tuple of 3 values: 1) the wavelength (in mu), 2) a string indicating whether line intensity is expressed in flux (‘F’), luminosity (‘L’) or log(L/LSun) (“LogL”), and 3) the latter quantity. The intensity of the emission lines can be sampled by MCMC, in that case the last element of the tuple can be set to None. If not to be sampled, a value for the intensity should be provided (in the same system of units as the model spectra, multiplied by mu). Example: em_lines = {‘BrG’:(2.1667,’F’,263)}

  • em_grid (dictionary pointing to lists, opt) – Dictionary where each entry corresponds to an emission line and points to a list of values to inject for emission line fluxes. For computation efficiency, interpolation will be performed between the points of this grid during the MCMC sampling. Dict entries should match labels and em_lines. Note: the length of this dictionary can differ from that of em_lines; i.e. if a line is in em_lines but not in em_grid, it will not be considered an MCMC parameter.

  • lbda_mod (numpy 1d ndarray or list) – Wavelength of tested model. Should have a wider wavelength extent than the observed spectrum.

  • spec_mod (numpy 1d ndarray) – Model spectrum. It does not require the same wavelength sampling as the observed spectrum. If higher spectral resolution, it will be convolved with the instrumental spectral psf (if instru_fwhm is provided) and then binned to the same sampling. If lower spectral resolution, a linear interpolation is performed to infer the value at the observed spectrum wavelength sampling.

  • dlbda_obs (numpy 1d ndarray or list, optional) – Spectral channel width for the observed spectrum. It should be provided IF one wants to weigh each point based on the spectral resolution of the respective instruments (as in Olofsson et al. 2016).

  • instru_fwhm (float or list, optional) – The instrumental spectral fwhm provided in nm. This is used to convolve the model spectrum. If several instruments are used, provide a list of instru_fwhm values, one for each instrument whose spectral resolution is coarser than the model - including broad band filter FWHM if relevant.

  • instru_idx (numpy 1d array, optional) – 1d array containing an index representing each instrument used to obtain the spectrum, label them from 0 to n_instru. Zero for points that don’t correspond to any instru_fwhm provided above, and i in [1,n_instru] for points associated to instru_fwhm[i-1]. This parameter must be provided if the spectrum consists of points obtained with different instruments.

  • filter_reader (python routine, optional) – External routine that reads a filter file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains transmission values. Important: if not provided, but strings are detected in instru_fwhm, the default file reader will be used; it assumes the following format for the files: first row contains a header; from the 2nd row onwards, 1st column: WL in mu, 2nd column: transmission. Note: files should all have the same format and wavelength units.

  • interp_nonexist (bool, opt) – Whether to interpolate if models do not exist, based on closest model(s)

Returns

resamp_mod – Grid of model spectra resampled at wavelengths matching the observed spectrum.

Return type

1d numpy array
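
Example (illustrative sketch): a possible call for a hypothetical grid with two free parameters (Teff, logg), assuming lbda_obs, dlbda_obs, instru_fwhm and instru_idx are defined as described above, and mod_grid is a pre-loaded numpy array of model spectra with the layout described for model_grid.

>>> import numpy as np
>>> from special.model_resampling import make_resampled_models
>>> teff_list = np.array([1200., 1400., 1600.])  # hypothetical Teff grid values
>>> logg_list = np.array([3.5, 4.0, 4.5])        # hypothetical logg grid values
>>> resamp_grid = make_resampled_models(lbda_obs, [teff_list, logg_list],
...                                     model_grid=mod_grid, dlbda_obs=dlbda_obs,
...                                     instru_fwhm=instru_fwhm, instru_idx=instru_idx)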

special.model_resampling.resample_model(lbda_obs, lbda_mod, spec_mod, dlbda_obs=None, instru_fwhm=None, instru_idx=None, filter_reader=None, no_constraint=False, verbose=False)[source]

Convolve, interpolate and resample a model spectrum to match observed spectrum.

Parameters
  • lbda_obs (numpy 1d ndarray or list) – Wavelength of observed spectrum. If several instruments, should be ordered per instrument, not necessarily as monotonically increasing wavelength. Hereafter, n_ch = len(lbda_obs).

  • lbda_mod (numpy 1d ndarray or list) – Wavelength of tested model. Should have a wider wavelength extent than the observed spectrum.

  • spec_mod (numpy 1d ndarray) – Model spectrum. It does not require the same wavelength sampling as the observed spectrum. If higher spectral resolution, it will be convolved with the instrumental spectral psf (if instru_fwhm is provided) and then binned to the same sampling. If lower spectral resolution, a linear interpolation is performed to infer the value at the observed spectrum wavelength sampling.

  • dlbda_obs (numpy 1d ndarray or list, optional) – Spectral channel width for the observed spectrum. It is used to infer which part(s) of a combined spectro+photometric spectrum should involve convolution+subsampling (model resolution higher than measurements) or interpolation (the opposite). If not provided, will be inferred from half-difference between consecutive lbda_obs points (i.e. inaccurate for a combined spectrum).

  • instru_fwhm (float or list, optional) – The instrumental spectral fwhm provided in nm. This is used to convolve the model spectrum. If several instruments are used, provide a list of instru_fwhm values, one for each instrument whose spectral resolution is coarser than the model - including broad band filter FWHM if relevant.

  • instru_idx (numpy 1d array, optional) – 1d array containing an index representing each instrument used to obtain the spectrum, label them from 0 to n_instru. Zero for points that don’t correspond to any instru_fwhm provided above, and i in [1,n_instru] for points associated to instru_fwhm[i-1]. This parameter must be provided if the spectrum consists of points obtained with different instruments.

  • filter_reader (python routine, optional) – External routine that reads a filter file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains transmission values. Important: if not provided, but strings are detected in instru_fwhm, the default file reader will be used; it assumes the following format for the files: first row contains a header; from the 2nd row onwards, 1st column: WL in mu, 2nd column: transmission. Note: files should all have the same format and wavelength units.

  • no_constraint (bool, optional) – If set to True, will not use ‘floor’ and ‘ceil’ constraints when cropping the model wavelength ranges, i.e. faces the risk of extrapolation. May be useful, if the bounds of the wavelength ranges are known to match exactly.

  • verbose (bool, optional) – Whether to print more information during resampling.

Returns

lbda_obs, spec_mod_res – Observed lambdas, and resampled model spectrum (at those lambdas)

Return type

2x 1d numpy array
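
Example (illustrative sketch): lbda_obs, dlbda_obs, lbda_mod and spec_mod are assumed to be defined as described above; the fwhm value is a hypothetical spectral resolution element in nm.

>>> from special.model_resampling import resample_model
>>> lbda_out, spec_mod_res = resample_model(lbda_obs, lbda_mod, spec_mod,
...                                         dlbda_obs=dlbda_obs, instru_fwhm=200.)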

special.nested_sampling

Module with functions for posterior sampling of the model spectra parameters using nested sampling (nestle).

special.nested_sampling.nested_sampling_results(ns_object, labels, burnin=0.4, bins=None, cfd=68.27, units=None, ndig=None, labels_plot=None, save=False, output_dir='/', plot=False, **kwargs)[source]

Shows the results of the nested sampling: summary, parameters with errors, walk and corner plots. Returns best-fit values and uncertainties.

Parameters
  • ns_object (numpy.array) – The nestle object returned from nested_spec_sampling.

  • labels (Tuple of strings) – Tuple of labels in the same order as initial_state, that is: - first all parameters related to loaded models (e.g. ‘Teff’, ‘logg’) - then the planet photometric radius ‘R’, in Jupiter radius - (optionally) the flux of emission lines (labels should match those in the em_lines dictionary), in units of the model spectrum (times mu) - (optionally) the optical extinction ‘Av’, in mag - (optionally) the ratio of total to selective optical extinction ‘Rv’ - (optionally) ‘Tbb1’, ‘Rbb1’, ‘Tbb2’, ‘Rbb2’, etc. for each extra bb contribution.

  • burnin (float, default: 0.4) – The fraction of a walker we want to discard.

  • bins (int, optional) – The number of bins used to sample the posterior distributions.

  • cfd (float, optional) – The confidence level given in percentage.

  • units (tuple, opt) – Tuple of strings containing units for each parameter. If provided, mcmc_res will be printed on top of each 1d posterior distribution along with these units.

  • ndig (tuple, opt) – Number of digits precision for each printed parameter

  • labels_plot (tuple, opt) – Labels corresponding to parameter names, used for the plot. If None, will use “labels” passed in kwargs.

  • save (boolean, default: False) – If True, a pdf file is created.

  • output_dir (str, optional) – The name of the output directory which contains the output files in the case save is True.

  • plot (bool, optional) – Whether to show the plots (instead of saving them).

  • kwargs – Additional optional arguments passed to confidence (matplotlib optional arguments for histograms).

Returns

final_res – Best-fit parameters and uncertainties (corresponding to 68% confidence interval). Dimensions: nparams x 2.

Return type

numpy ndarray

special.nested_sampling.nested_spec_sampling(init, lbda_obs, spec_obs, err_obs, dist, grid_param_list, labels, bounds, resamp_before=True, model_grid=None, model_reader=None, em_lines={}, em_grid={}, dlbda_obs=None, instru_corr=None, instru_fwhm=None, instru_idx=None, filter_reader=None, AV_bef_bb=False, units_obs='si', units_mod='si', interp_order=1, priors=None, physical=True, interp_nonexist=True, output_dir='special/', grid_name='resamp_grid.fits', method='single', npoints=100, dlogz=0.1, decline_factor=None, rstate=None, verbose=True, **kwargs)[source]

Runs a nested sampling algorithm (nestle) in order to determine the best-fit parameters of the spectral model, given an observed spectrum. The result of this procedure is a nestle object containing the samples from the posterior distributions of each of the free parameters. It provides pretty good results (value plus error bars) compared to a more CPU-intensive Monte Carlo approach with the affine invariant sampler (emcee).

Parameters
  • init (numpy ndarray or tuple) – Initial guess on the best fit parameters of the spectral fit. Length of the tuple should match the total number of free parameters. - first all parameters related to loaded models (e.g. ‘Teff’, ‘logg’) - then the planet photometric radius ‘R’, in Jupiter radius - (optionally) the intensity of emission lines (labels must match those in the em_lines dict), in units of the model spectrum (x mu) - (optionally) the optical extinction ‘Av’, in mag - (optionally) the ratio of total to selective optical extinction ‘Rv’ - (optionally) ‘Tbb1’, ‘Rbb1’, ‘Tbb2’, ‘Rbb2’, etc. for each extra bb contribution

  • lbda_obs (numpy 1d ndarray or list) – Wavelength of observed spectrum. If several instruments, should be ordered per instrument, not necessarily as monotonically increasing wavelength. Hereafter, n_ch = len(lbda_obs).

  • spec_obs (numpy 1d ndarray or list) – Observed spectrum for each value of lbda_obs.

  • err_obs (numpy 1d/2d ndarray or list) – Uncertainties on the observed spectrum. If 2d array, should be [2,n_ch] where the first (resp. second) column corresponds to lower (upper) uncertainty, and n_ch is the length of lbda_obs and spec_obs.

  • dist (float) – Distance in parsec, used for flux scaling of the models.

  • grid_param_list (list of 1d numpy arrays/lists OR None) –

    • If list, should contain list/numpy 1d arrays with available grid of model parameters.

    • Set to None for a pure n-blackbody fit, n=1,2,…

    • Note1: model grids should not contain grids on radius and Av, but these should still be passed in initial_state (Av optional).

    • Note2: for a combined grid model + black body, just provide the grid parameter list here, and provide values for ‘Tbbn’ and ‘Rbbn’ in initial_state, labels and bounds.

  • labels (Tuple of strings) – Tuple of labels in the same order as initial_state, that is: - first all parameters related to loaded models (e.g. ‘Teff’, ‘logg’) - then the planet photometric radius ‘R’, in Jupiter radius - (optionally) the flux of emission lines (labels should match those in the em_lines dictionary), in units of the model spectrum (times mu) - (optionally) the optical extinction ‘Av’, in mag - (optionally) the ratio of total to selective optical extinction ‘Rv’ - (optionally) ‘Tbb1’, ‘Rbb1’, ‘Tbb2’, ‘Rbb2’, etc. for each extra bb contribution.

  • bounds (dictionary) – Each entry should be associated with a tuple corresponding to lower and upper bounds respectively. Bounds should be provided for ALL model parameters, including ‘R’ (planet photometric radius). ‘Av’ (optical extinction) is optional. If provided here, Av will also be fitted. Example for BT-SETTL: bounds = {‘Teff’:(1000,2000), ‘logg’:(3.0,4.5), ‘R’:(0.1,5), ‘Av’:(0.,2.5)} ‘M’ can be used for a prior on the mass of the planet. In that case the corresponding prior log probability is computed from the values for parameters ‘logg’ and ‘R’ (if both exist).

  • resamp_before (bool, optional) – Whether to prepare the whole grid of resampled models before entering the MCMC, i.e. to avoid doing it at every MCMC step. Recommended. Only reason not to: model grid is too large and individual models require being opened and resampled at each step.

  • model_grid (numpy N-d array, optional) – If provided, should contain the grid of model spectra for each free parameter of the given grid. I.e. for a grid of n_T values of Teff and n_g values of Logg, the numpy array should be n_T x n_g x n_ch x 2, where n_ch is the number of wavelengths for the observed spectrum, and the last 2 dims are for wavelength and fluxes respectively. If provided, takes precedence over model_reader.

  • model_reader (python routine, optional) – External routine that reads a model file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains model values. See example routine in model_interpolation() description.

  • em_lines (dictionary, opt) – Dictionary of emission lines to be added on top of the model spectrum. Each dict entry should be the name of the line, assigned to a tuple of 4 values: 1) the wavelength (in mu); 2) a string indicating whether line intensity is expressed in flux (‘F’), luminosity (‘L’) or log(L/LSun) (“LogL”); 3) the FWHM of the gaussian (or None if to be set automatically); 4) whether the FWHM is expressed in ‘nm’, ‘mu’ or ‘km/s’. The third and fourth can also be set to None. In that case, the FWHM of the gaussian will automatically be set to the equivalent width of the line, calculated from the flux to be injected and the continuum level (measured in the grid model to which the line is injected). Examples: em_lines = {‘BrG’:(2.1667,’F’, None, None)}; em_lines = {‘BrG’:(2.1667,’LogL’, 100, ‘km/s’)}

  • em_grid (dictionary pointing to lists, opt) – Dictionary where each entry corresponds to an emission line and points to a list of values to inject for emission line fluxes. For computation efficiency, interpolation will be performed between the points of this grid during the MCMC sampling. Dict entries should match labels and em_lines.

  • dlbda_obs (numpy 1d ndarray or list, optional) – Spectral channel width for the observed spectrum. It should be provided IF one wants to weigh each point based on the spectral resolution of the respective instruments (as in Olofsson et al. 2016).

  • instru_corr (numpy 2d ndarray or list, optional) – Spectral correlation throughout post-processed images in which the spectrum is measured. It is specific to the combination of instrument, algorithm and radial separation of the companion from the central star. Can be computed using distances.spectral_correlation(). In case of a spectrum obtained with different instruments, build it with distances.combine_corrs(). If not provided, it will consider the uncertainties in each spectral channels are independent. See Greco & Brandt (2017) for details.

  • instru_fwhm (float OR list of either floats or strings, optional) – The instrumental spectral fwhm provided in nm. This is used to convolve the model spectrum. If several instruments are used, provide a list of instru_fwhm values, one for each instrument whose spectral resolution is coarser than the model - including broad band filter FWHM if relevant. If strings are provided, they should correspond to filenames (including full paths) of text files containing the filter information for each observed wavelength, in the strict format expected by filter_reader (see below).

  • instru_idx (numpy 1d array, optional) – 1d array containing an index representing each instrument used to obtain the spectrum, label them from 0 to n_instru. Zero for points that don’t correspond to any instru_fwhm provided above, and i in [1,n_instru] for points associated to instru_fwhm[i-1]. This parameter must be provided if the spectrum consists of points obtained with different instruments.

  • filter_reader (python routine, optional) – External routine that reads a filter file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains transmission values. Important: if not provided, but strings are detected in instru_fwhm, the default file reader will be used. It assumes the following format for the files: - first row containing header - starting from 2nd row: 1st column: wavelength, 2nd col.: transmission - Unit of wavelength can be provided in parentheses of first header key name: e.g. “WL(AA)” for angstrom, “wavelength(mu)” for micrometer or “lambda(nm)” for nanometer. Note: Only what is in parentheses matters. Important: filter files should all have the same format and WL units.

  • AV_bef_bb (bool, optional) – If both extinction and an extra bb component are free parameters, whether to apply extinction before adding the BB component (e.g. extinction mostly from circumplanetary dust) or after the BB component (e.g. mostly interstellar extinction).

  • units_obs (str, opt {'si','cgs','jy'}) – Units of observed spectrum. ‘si’ for W/m^2/mu; ‘cgs’ for ergs/s/cm^2/mu or ‘jy’ for janskys.

  • units_mod (str, opt {'si','cgs','jy'}) – Units of the model. ‘si’ for W/m^2/mu; ‘cgs’ for ergs/s/cm^2/mu or ‘jy’ for janskys. If different to units_obs, the spectrum units will be converted.

  • interp_order (int, opt, {-1,0,1}) – Interpolation mode for model interpolation. -1: log interpolation (i.e. linear interpolation on log(Flux)); 0: nearest neighbour model; 1: Order 1 spline interpolation.

  • priors (dictionary, opt) – If not None, sets prior estimates for each parameter of the model. Each entry should be set to either None (no prior) or a tuple of 2 elements containing prior estimate and uncertainty on the estimate. Missing entries (i.e. provided in the bounds dictionary but not here) will not be associated with any prior. e.g. priors = {‘Teff’:(1600,100), ‘logg’:(3.5,0.5), ‘R’:(1.6,0.1), ‘Av’:(1.8,0.2), ‘M’:(10,3)} Important: dictionary entry names should match exactly those of bounds.

  • physical (bool, opt) – In case of extra black body component(s) to a photosphere, whether to force lower temperature than the photosphere effective temperature.

  • interp_nonexist (bool, opt) – Whether to interpolate non-existing models in the grid. Only used if resamp_before is set to True.

  • w (float or tuple) – The relative size of the bounds (around the initial state init) for each parameter. If a float, the same relative size is considered for each parameter. E.g. if 0.1, bounds will be set to: (0.9*params[0], 1.1*params[0]), … (0.9*params[N-1], 1.1*params[N-1]).

  • output_dir (str, optional) – The name of the output directory which contains the output files in the case save is True.

  • grid_name (str, optional) – Name of the fits file containing the model grid (numpy array) AFTER convolution+resampling as the observed spectrum given as input. If provided, will read it if it exists (and resamp_before is set to True), or make it and write it if it does not.

  • method ({"single", "multi", "classic"}, str optional) – Flavor of nested sampling.

  • npoints (int optional) – Number of active points. At least ndim+1 (4 will produce bad results). For problems with just a few parameters (<=5), good results are obtained with 100 points (default).

  • dlogz (Estimated remaining evidence) – Iterations will stop when the estimated contribution of the remaining prior volume to the total evidence falls below this threshold. Explicitly, the stopping criterion is log(z + z_est) - log(z) < dlogz where z is the current evidence from all saved samples, and z_est is the estimated contribution from the remaining volume. This option and decline_factor are mutually exclusive. If neither is specified, the default is dlogz=0.5.

  • decline_factor (float, optional) – If supplied, iteration will stop when the weight (likelihood times prior volume) of newly saved samples has been declining for decline_factor * nsamples consecutive samples. A value of 1.0 seems to work pretty well.

  • rstate (random instance, optional) – RandomState instance. If not given, the global random state of the numpy.random module will be used.

  • kwargs (optional) – Additional optional arguments to the nestle.sample function.

Returns

res – Nestle object with the nested sampling results, including the posterior samples.

Return type

nestle object

Notes

Nested Sampling is a computational approach for integrating posterior probability in order to compare models in Bayesian statistics. It is similar to Markov Chain Monte Carlo (MCMC) in that it generates samples that can be used to estimate the posterior probability distribution. Unlike MCMC, the nature of the sampling also allows one to calculate the integral of the distribution. It also happens to be a pretty good method for robustly finding global maxima.

Nestle documentation: http://kbarbary.github.io/nestle/

Convergence: http://kbarbary.github.io/nestle/stopping.html Nested sampling has no well-defined stopping point. As iterations continue, the active points sample a smaller and smaller region of prior space. This can continue indefinitely. Unlike typical MCMC methods, we don’t gain any additional precision on the results by letting the algorithm run longer; the precision is determined at the outset by the number of active points. So, we want to stop iterations as soon as we think the active points are doing a pretty good job sampling the remaining prior volume - once we’ve converged to the highest-likelihood regions such that the likelihood is relatively flat within the remaining prior volume.

Method: The trick in nested sampling is to, at each step in the algorithm, efficiently choose a new point in parameter space drawn with uniform probability from the parameter space with likelihood greater than the current likelihood constraint. The different methods all use the current set of active points as an indicator of where the target parameter space lies, but differ in how they select new points from it:

  • “classic” is close to the method described in Skilling (2004).

  • “single” (Mukherjee, Parkinson & Liddle 2006) determines a single ellipsoid that bounds all active points, enlarges the ellipsoid by a user-settable factor, and selects a new point at random from within the ellipsoid.

  • “multiple” (Shaw, Bridges & Hobson 2007; Feroz, Hobson & Bridges 2009, Multinest): in cases where the posterior is multi-modal, the single-ellipsoid method can be extremely inefficient. In such situations, there are clusters of active points on separate high-likelihood regions separated by regions of lower likelihood. Bounding all points in a single ellipsoid means that the ellipsoid includes the lower-likelihood regions we wish to avoid sampling from. The solution is to detect these clusters and bound them in separate ellipsoids. For this, we use a recursive process where we perform K-means clustering with K=2. If the resulting two ellipsoids have a significantly lower total volume than the parent ellipsoid (less than half), we accept the split and repeat the clustering and volume test on each of the two subsets of points. This process continues recursively. Alternatively, if the total ellipse volume is significantly greater than expected (based on the expected density of points), this indicates that there may be more than two clusters and that K=2 was not an appropriate cluster division. We therefore still try to subdivide the clusters recursively, but only accept the final split into N clusters if the total volume decrease is significant.
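
Example (illustrative sketch): a minimal call for a grid with two free parameters plus the photometric radius, followed by the inspection of the results with nested_sampling_results. The observed spectrum (lbda_obs, spec_obs, err_obs), the distance d_st, the grid lists (teff_list, logg_list) and the user-provided my_model_reader routine are all assumed to be defined; the numerical values are hypothetical.

>>> from special.nested_sampling import nested_spec_sampling, nested_sampling_results
>>> labels = ('Teff', 'logg', 'R')
>>> init = (1600., 3.5, 1.3)  # hypothetical initial guess
>>> bounds = {'Teff': (1200., 2000.), 'logg': (3.0, 4.5), 'R': (0.5, 3.0)}
>>> res = nested_spec_sampling(init, lbda_obs, spec_obs, err_obs, d_st,
...                            [teff_list, logg_list], labels, bounds,
...                            model_reader=my_model_reader, npoints=100,
...                            output_dir='special/', grid_name='resamp_grid.fits')
>>> final_res = nested_sampling_results(res, labels, burnin=0.4, save=False, plot=True)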

special.spec_corr

Module to estimate the spectral correlation between channels of an IFS datacube.

special.spec_corr.combine_spec_corrs(arr_list)[source]

Combines the spectral correlation matrices of different instruments into a single square matrix (required for input of spectral fits).

Parameters

arr_list (list or tuple of numpy ndarrays) – List/tuple containing the distinct square spectral correlation matrices OR ones (for independent photometric measurements).

Returns

combi_corr – 2d square ndarray representing the combined spectral correlation.

Return type

numpy 2d ndarray
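
Example (illustrative sketch): corr_ifs1 and corr_ifs2 stand for the spectral correlation matrices of two hypothetical IFS datasets; the combined matrix can then be passed as instru_corr to the fitting routines.

>>> from special.spec_corr import combine_spec_corrs
>>> combi_corr = combine_spec_corrs([corr_ifs1, corr_ifs2])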

special.spec_corr.spectral_correlation(array, awidth=2, r_in=1, r_out=None, pl_xy=None, mask_r=4, fwhm=4, sp_fwhm_guess=3, full_output=False)[source]

Computes the spectral correlation between (post-processed) IFS frames, as a function of radius, implemented as Eq. 7 of Greco & Brandt 2017. This is a crucial step for an unbiased fit of a measured IFS spectrum to either synthetic or template spectra.

Parameters
  • array (numpy ndarray) – Input cube or 3d array, of dimensions n_ch x n_y x n_x; where n_y and n_x should be odd values (star should be centered on central pixel).

  • awidth (int, optional) – Width in pixels of the concentric annuli used to compute the spectral correlation as a function of radial separation. Greco & Brandt 2017 noted no significant differences for annuli between 1 and 3 pixels width on GPI data.

  • r_in (int, optional) – Innermost radius where the spectral correlation starts to be computed.

  • r_out (int, optional) – Outermost radius where the spectral correlation is computed. If left as None, it will automatically be computed up to the edge of the frame.

  • pl_xy (tuple of tuples of 2 floats, optional) – x,y coordinates of all companions present in the images. If provided, a circle centered on the location of each companion will be masked out for the spectral correlation computation.

  • mask_r (float, optional) – if pl_xy is provided, this should also be provided. Size of the aperture around each companion (in terms of fwhm) that is discarded to not bias the spectral correlation computation.

  • fwhm (float, optional) – if pl_xy is provided, this should also be provided. By default we consider a 2FWHM aperture mask around each companion to not bias the spectral correlation computation.

  • sp_fwhm_guess (float, optional) – Initial guess on the spectral FWHM of all channels.

  • full_output (bool, opt) – Whether to also output the fitted spectral FWHM for each channel, and the vector of radial separation at which each spectral correlation matrix is calculated.

  Note: radii that are skipped will be filled with zeros in the output cube.

Returns

  • sp_corr (numpy ndarray) – 3d array of spectral correlation, as a function of radius with dimensions: n_rad x n_ch x n_ch, where n_rad = int((r_out-r_in)/2)

  • sp_fwhm (numpy ndarray) – (if full_output is True) 2d array containing the spectral fwhm at each radius, for each spectral channel. Dims: n_rad x n_ch

  • sp_rad (numpy ndarray) – (if full_output is True) 1d array containing the radial separation of each measured spectral correlation matrix. Dims: n_rad
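
Example (illustrative sketch): cube is assumed to be a post-processed IFS cube (n_ch x n_y x n_x, star on the central pixel) with a companion at hypothetical pixel coordinates (100, 112), masked with a 4-fwhm aperture.

>>> from special.spec_corr import spectral_correlation
>>> sp_corr = spectral_correlation(cube, awidth=2, r_in=1,
...                                pl_xy=((100, 112),), mask_r=4, fwhm=4)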

special.spec_indices

Module with utilities to estimate the spectral type and gravity of an object based on spectral indices.

special.spec_indices.digit_to_spt(idx, convention='splat')[source]

Converts an integer index into spectral type.

Parameters
  • idx (float or int) – Index value of the spectral type

  • convention (str, optional {'splat', 'Allers+07'}) – Which convention to use to convert digit into spectral type. Convention from splat: K0 = 0, M0=10, L0=20, T0=30, Y9 = 49 Convention from Allers+07: M0 = 0, L0 = 10, …

Returns

spt – String representing the spectral type

Return type

str

special.spec_indices.sp_idx_to_gravity(idx, name='Na-1.1')[source]

Provides a qualitative estimate of the gravity/youth based on a gravity-sensitive spectral index. Implemented so far:

  • the Na-1.1 index (Allers et al. 2007)

  • the CO-2.3 index (Gorlova et al. 2003)

Parameters
  • idx (float) – Value of spectral index

  • name (str, optional {'Na-1.1', 'CO-2.3'}) – The name of the spectral index.

special.spec_indices.sp_idx_to_spt(idx, name='H2O-1.5', idx_err=None, young=True)[source]
Estimates a spectral type from a spectral index. Implemented so far:
  • the H2O 1.3 mu index (Gorlova et al. 2003)

  • the H2O 1.5 mu index (Allers et al. 2007)

  • the H2O-2 index (Slesnick et al. 2004)

Note on scale of SpT: 0 = M0, 10 = L0.

Parameters
  • idx (float) – Value of spectral index

  • name (str, optional {'H2O-1.3', 'H2O-1.5', 'H2O-2'}) – The name of the spectral index.

  • idx_err (float, optional) – Uncertainty on the spectral index value

  • young (bool, opt) – Whether the object is likely young (only used for ‘H2O-1.3’ index, which is gravity dependent)

Returns

  • spt (float) – Value of the spectral type

  • spt_err (float) – [if idx_err is provided] Uncertainty on the spectral type.

special.spec_indices.spectral_idx(lbda, spec, band='H2O-1.5', spec_err=None, verbose=False)[source]
Computes a spectral index. Implemented so far:
  • the Na 1.1 mu index (Allers et al. 2007)

  • the H2O 1.3 mu index (Gorlova et al. 2003)

  • the H2O 1.5 mu index (Allers et al. 2007)

  • the H2O 2 index (Slesnick et al. 2004)

  • the CO 2.3 index (Gorlova et al. 2003).

Parameters
  • lbda (numpy ndarray) – 1d numpy array containing the wavelengths of the spectrum in microns.

  • spec (numpy ndarray) – 1d numpy array containing the measured flux (arbitrary units accepted).

  • band (str, optional {'H2O-1.5', 'H2O-1.3', 'H2O-2', 'Na-1.1', 'CO-2.3'}) – Name of the band where the spectral index is defined (spectral feature + wavelength in mu)

  • spec_err (numpy ndarray, optional) – 1d numpy array containing the uncertainties on the measured flux (arbitrary units accepted). If provided the uncertainty on the spectral index will also be returned.

  • verbose (bool, optional) – Whether to print more information.

Returns

  • index (float) – Value of the spectral index

  • index_err (float) – [if spec_err is provided] Uncertainty on the spectral index.
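
Example (illustrative sketch): chaining the routines of this module to go from a measured spectrum (lbda in mu, spec and spec_err assumed defined) to a spectral type string; the Allers+07 convention is used since it matches the SpT scale quoted above (0 = M0, 10 = L0).

>>> from special.spec_indices import spectral_idx, sp_idx_to_spt, digit_to_spt
>>> idx, idx_err = spectral_idx(lbda, spec, band='H2O-1.5', spec_err=spec_err)
>>> spt, spt_err = sp_idx_to_spt(idx, name='H2O-1.5', idx_err=idx_err)
>>> spt_str = digit_to_spt(spt, convention='Allers+07')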

special.spec_indices.spt_to_digit(spt, convention='splat')[source]

Converts a string representing spectral type into an integer index. Convention (from splat): K0 = 0, M0=10, L0=20, T0=30, Y9 = 49

Parameters
  • spt (str) – String representing the spectral type

  • convention (str, optional {'splat', 'Allers+07'}) – Which convention to use to convert digit into spectral type. Convention from splat: K0 = 0, M0=10, L0=20, T0=30, Y9 = 49 Convention from Allers+07: M0 = 0, L0 = 10, …

Returns

idx – Index value of the spectral type

Return type

float or int

special.template_fit

Module for simplex or grid search of best fit spectrum in a template library.

special.template_fit.best_fit_tmp(lbda_obs, spec_obs, err_obs, tmp_reader, search_mode='simplex', n_best=1, lambda_scal=None, scale_range=(0.1, 10, 0.01), ext_range=None, simplex_options=None, dlbda_obs=None, instru_corr=None, instru_fwhm=None, instru_idx=None, filter_reader=None, lib_dir='tmp_lib/', tmp_endswith='.fits', red_chi2=True, remove_nan=False, nproc=1, verbosity=0, force_continue=False, min_npts=1, **kwargs)[source]

Finds the best fit template spectrum to a given observed spectrum, within a spectral library. By default, a single free parameter is considered: the scaling factor of the spectrum. A first automatic scaling is performed by comparing the flux of the observed and template spectra at lambda_scal. Then a more refined scaling is performed, either through simplex or grid search (within scale_range). If ext_range is provided, the differential extinction is also considered as a free parameter.

Parameters
  • lbda_obs (numpy 1d ndarray or list) – Wavelength of observed spectrum. If several instruments, should be ordered per instrument, not necessarily as monotonically increasing wavelength. Hereafter, n_ch = len(lbda_obs).

  • spec_obs (numpy 1d ndarray or list) – Observed spectrum for each value of lbda_obs.

  • err_obs (numpy 1d/2d ndarray or list) – Uncertainties on the observed spectrum. If 2d array, should be [2,n_ch] where the first (resp. second) column corresponds to lower (upper) uncertainty, and n_ch is the length of lbda_obs and spec_obs.

  • tmp_reader (python routine) – External routine that reads a model file and returns a 3D numpy array, where the first column corresponds to wavelengths, the second contains flux values, and the third the uncertainties on the flux.

  • search_mode (str, optional, {'simplex','grid'}) – How is the best fit template found? Simplex or grid search.

  • n_best (int, optional) – Number of best templates to be returned (default: 1)

  • lambda_scal (float, optional) – Wavelength where a first scaling will be performed between template and observed spectra. If not provided, the middle wavelength of the observed spectra will be considered.

  • scale_range (tuple, opt) – If grid search, this parameter should be provided as a tuple of 3 floats: lower limit, upper limit and step of the grid search for the scaling factor to be applied AFTER the first rough scaling (i.e. scale_range should always encompass 1).

  • ext_range (tuple or None, opt) – If None: differential extinction is not to be considered as a free parameter. Elif a tuple of 3 floats is provided, differential extinction will be considered, with the floats as lower limit, upper limit and step of the grid search. Note: if simplex search, the range is still used to set a chi of np.inf outside of the range.

  • dlbda_obs (numpy 1d ndarray or list, optional) – Spectral channel width for the observed spectrum. It should be provided IF one wants to weigh each point based on the spectral resolution of the respective instruments (as in Olofsson et al. 2016).

  • instru_corr (numpy 2d ndarray or list, optional) – Spectral correlation throughout post-processed images in which the spectrum is measured. It is specific to the combination of instrument, algorithm and radial separation of the companion from the central star. Can be computed using distances.spectral_correlation(). In case of a spectrum obtained with different instruments, build it with distances.combine_corrs(). If not provided, it will consider the uncertainties in each spectral channels are independent. See Greco & Brandt (2017) for details.

  • instru_fwhm (float OR list of either floats or strings, optional) – The instrumental spectral fwhm provided in nm. This is used to convolve the model spectrum. If several instruments are used, provide a list of instru_fwhm values, one for each instrument whose spectral resolution is coarser than the model - including broad band filter FWHM if relevant. If strings are provided, they should correspond to filenames (including full paths) of text files containing the filter information for each observed wavelength, in the strict format expected by filter_reader (see below).

  • instru_idx (numpy 1d array, optional) – 1d array containing an index representing each instrument used to obtain the spectrum, label them from 0 to n_instru. Zero for points that don’t correspond to any instru_fwhm provided above, and i in [1,n_instru] for points associated to instru_fwhm[i-1]. This parameter must be provided if the spectrum consists of points obtained with different instruments.

  • filter_reader (python routine, optional) – External routine that reads a filter file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains transmission values. Important: if not provided, but strings are detected in instru_fwhm, the default file reader will be used. It assumes the following format for the files: - first row containing header - starting from 2nd row: 1st column: wavelength, 2nd col.: transmission - Unit of wavelength can be provided in parentheses of first header key name: e.g. “WL(AA)” for angstrom, “wavelength(mu)” for micrometer or “lambda(nm)” for nanometer. Note: Only what is in parentheses matters. Important: filter files should all have the same format and WL units.

  • simplex_options (dict, optional) – The scipy.optimize.minimize simplex (Nelder-Mead) options.

  • red_chi2 (bool, optional) – Whether to compute the reduced chi square. If False, considers chi^2.

  • remove_nan (bool, optional) – Whether to remove NaN values from template spectrum BEFORE resampling to the wavelength sampling of the observed spectrum. Whether it is set to True or False, a check is made just before chi^2 is calculated (after resampling), and only non-NaN values will be considered.

  • nproc (None or int, optional) – The number of processes to use for parallelization. If set to None, will automatically use half of the available CPUs on the machine.

  • verbosity (0, 1 or 2, optional) – Verbosity level. 0 for no output and 2 for full information.

  • force_continue (bool, optional) – In case of issue with the fit, whether to continue regardless (this may be useful in an uneven spectral library, where some templates have too few points for the fit to be performed).

  • min_npts (int or None, optional) – Minimum number of (resampled) points to consider a template spectrum valid in the minimization search.

  • **kwargs (optional) – Optional arguments to the scipy.optimize.minimize function

Returns

  • final_tmpname (tuple of n_best str) – Best-fit template filenames

  • final_tmp (tuple of n_best 3D numpy array) – Best-fit template spectra (3D: lbda+spec+spec_err)

  • final_chi (1D numpy array of length n_best) – Best-fit template chi^2

  • final_params (2D numpy array (2xn_best)) – Best-fit parameters (optimal scaling and optical extinction). Note if extinction is not fitted, optimal AV will be set to 0.
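
Example (illustrative sketch): a simplex search through a hypothetical template library, with tmp_reader a user-provided routine following the contract described above and the observed spectrum arrays assumed to be defined.

>>> from special.template_fit import best_fit_tmp
>>> final_tmpname, final_tmp, final_chi, final_params = best_fit_tmp(
...     lbda_obs, spec_obs, err_obs, tmp_reader, search_mode='simplex',
...     n_best=3, ext_range=(-1., 3., 0.1), lib_dir='tmp_lib/',
...     tmp_endswith='.fits', nproc=2, verbosity=1)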

special.template_fit.get_chi(lbda_obs, spec_obs, err_obs, tmp_name, tmp_reader, search_mode='simplex', lambda_scal=None, scale_range=(0.1, 10, 0.01), ext_range=None, dlbda_obs=None, instru_corr=None, instru_fwhm=None, instru_idx=None, filter_reader=None, simplex_options=None, red_chi2=True, remove_nan=False, force_continue=False, min_npts=1, verbose=False, **kwargs)[source]

Routine calculating chi^2, optimal scaling factor and optimal extinction for a given template spectrum to match an observed spectrum.

Parameters
  • lbda_obs (numpy 1d ndarray or list) – Wavelength of observed spectrum. If several instruments, should be ordered per instrument, not necessarily as monotonically increasing wavelength. Hereafter, n_ch = len(lbda_obs).

  • spec_obs (numpy 1d ndarray or list) – Observed spectrum for each value of lbda_obs.

  • err_obs (numpy 1d/2d ndarray or list) – Uncertainties on the observed spectrum. If 2d array, should be [2,n_ch] where the first (resp. second) column corresponds to lower (upper) uncertainty, and n_ch is the length of lbda_obs and spec_obs.

  • tmp_name (str) – Template spectrum filename.

  • tmp_reader (python routine) – External routine that reads a model file and returns a 3D numpy array, where the first column corresponds to wavelengths, the second contains flux values, and the third the uncertainties on the flux.

  • search_mode (str, opt {'simplex', 'grid'}) – How is the best fit template found? Simplex or grid search.

  • lambda_scal (float, optional) – Wavelength where a first scaling will be performed between template and observed spectra. If not provided, the middle wavelength of the observed spectra will be considered.

  • scale_range (tuple, opt) – If grid search, this parameter should be provided as a tuple of 3 floats: lower limit, upper limit and step of the grid search for the scaling factor to be applied AFTER the first rough scaling (i.e. scale_range should always encompass 1).

  • ext_range (tuple or None, opt) – If None: differential extinction is not to be considered as a free parameter. Elif a tuple of 3 floats is provided, differential extinction will be considered, with the floats as lower limit, upper limit and step of the grid search. Note: if simplex search, the range is still used to set a chi of np.inf outside of the range.

  • dlbda_obs (numpy 1d ndarray or list, optional) – Spectral channel width for the observed spectrum. It should be provided IF one wants to weigh each point based on the spectral resolution of the respective instruments (as in Olofsson et al. 2016).

  • instru_corr (numpy 2d ndarray or list, optional) – Spectral correlation throughout post-processed images in which the spectrum is measured. It is specific to the combination of instrument, algorithm and radial separation of the companion from the central star. Can be computed using distances.spectral_correlation(). In case of a spectrum obtained with different instruments, build it with distances.combine_corrs(). If not provided, it will consider the uncertainties in each spectral channels are independent. See Greco & Brandt (2017) for details.

  • instru_fwhm (float or list, optional) – The instrumental spectral fwhm provided in nm. This is used to convolve the model spectrum. If several instruments are used, provide a list of instru_fwhm values, one for each instrument whose spectral resolution is coarser than the model - including broad band filter FWHM if relevant.

  • instru_idx (numpy 1d array, optional) – 1d array containing an index representing each instrument used to obtain the spectrum, label them from 0 to n_instru. Zero for points that don’t correspond to any instru_fwhm provided above, and i in [1,n_instru] for points associated to instru_fwhm[i-1]. This parameter must be provided if the spectrum consists of points obtained with different instruments.

  • filter_reader (python routine, optional) – External routine that reads a filter file and returns a 2D numpy array, where the first column corresponds to wavelengths, and the second contains transmission values. Important: if not provided, but strings are detected in instru_fwhm, the default file reader will be used; it assumes the following format for the files: first row contains a header; from the 2nd row onwards, 1st column: WL in mu, 2nd column: transmission. Note: files should all have the same format and wavelength units.

  • red_chi2 (bool, optional) – Whether to compute the reduced chi square. If False, considers chi^2.

  • remove_nan (bool, optional) – Whether to remove NaN values from template spectrum BEFORE resampling to the wavelength sampling of the observed spectrum. Whether it is set to True or False, a check is made just before chi^2 is calculated (after resampling), and only non-NaN values will be considered.

  • simplex_options (dict, optional) – The scipy.optimize.minimize simplex (Nelder-Mead) options.

  • force_continue (bool, optional) – In case of issue with the fit, whether to continue regardless (this may be useful in an uneven spectral library, where some templates have too few points for the fit to be performed).

  • verbose (bool, optional) – Whether to print more information when the fit fails.

  • min_npts (int or None, optional) – Minimum number of (resampled) points to consider a template spectrum valid in the minimization search. A NaN value will be returned for chi if the condition is not met.

  • **kwargs (optional) – Optional arguments to the scipy.optimize.minimize function.

Returns

  • best_chi (float) – goodness of fit scored by the template

  • best_scal – best-fit scaling factor for the considered template

  • best_ext – best-fit optical extinction for the considered template

special.utils_mcmc

Module with utility functions for the MCMC (emcee) sampling used for parameter estimation.

special.utils_mcmc.auto_window(taus, c)[source]
special.utils_mcmc.autocorr(y, c=5.0)[source]
special.utils_mcmc.autocorr_func_1d(x, norm=True)[source]
special.utils_mcmc.autocorr_test(chain)[source]
special.utils_mcmc.gelman_rubin(x)[source]

Determine the Gelman-Rubin hat{R} statistical test between Markov chains.

Parameters

x (numpy.array) – The numpy.array on which the Gelman-Rubin test is applied. This array should contain at least 2 sets of data, i.e. x.shape >= (2,).

Returns

out – The Gelman-Rubin hat{R}.

Return type

float

Example

>>> x1 = np.random.normal(0.0,1.0,(1,100))
>>> x2 = np.random.normal(0.1,1.3,(1,100))
>>> x = np.vstack((x1,x2))
>>> gelman_rubin(x)
1.0366629898991262
>>> gelman_rubin(np.vstack((x1,x1)))
0.99
special.utils_mcmc.gelman_rubin_from_chain(chain, burnin)[source]

Pack the MCMC chain and determine the Gelman-Rubin hat{R} statistical test. In other words, two sub-sets are extracted from the chain (burnin parts are taken into account) and the Gelman-Rubin statistical test is performed.

Parameters
  • chain (numpy.array) – The MCMC chain with the shape walkers x steps x model_parameters

  • burnin (float in [0,1]) – The fraction of a walker which is discarded.

Returns

out – The Gelman-Rubin hat{R}.

Return type

float
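
Example (illustrative sketch): a mock chain with shape walkers x steps x model_parameters; values of hat{R} close to 1 indicate well-mixed chains.

>>> import numpy as np
>>> from special.utils_mcmc import gelman_rubin_from_chain
>>> chain = np.random.normal(0., 1., (20, 1000, 3))  # mock chain: 20 walkers, 1000 steps, 3 params
>>> rhat = gelman_rubin_from_chain(chain, burnin=0.3)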

special.utils_spec

Utility functions for spectral fitting.

special.utils_spec.akaike(LnL, k)[source]

Computes the Akaike Information Criterion: 2k-2ln(L), where k is the number of estimated parameters in the model and LnL is the max ln-likelihood for the model.

Parameters
  • LnL (float) – Max ln-likelihood for the considered model.

  • k (int) – Number of estimated parameters in the model.

Returns

aic – Akaike Information Criterion

Return type

float
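
Example (illustrative sketch): comparing two hypothetical fits with the criterion defined above; the model with the lower AIC is preferred.

>>> from special.utils_spec import akaike
>>> aic_simple = akaike(-150.3, 4)   # max ln-likelihood of -150.3 with 4 free parameters (hypothetical)
>>> aic_complex = akaike(-148.9, 6)  # slightly better likelihood obtained with 2 extra parameters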

special.utils_spec.blackbody(lbda, T)[source]

Planck function. Returns specific intensity for an input wavelength vector lbda (in micrometers) and a given input temperature.

Parameters
  • lbda (numpy array) – 1d numpy array corresponding to the wavelengths (in microns) for the desired output specific intensities.

  • T (float) – Temperature

Returns

B_lambda – Specific intensity corresponding to the Planck function.

Return type

float
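
Example (illustrative sketch): evaluating the Planck function on a near-infrared wavelength grid (temperature assumed to be in K).

>>> import numpy as np
>>> from special.utils_spec import blackbody
>>> lbda = np.linspace(1.0, 5.0, 200)  # wavelengths in microns
>>> bb_1500 = blackbody(lbda, 1500.)   # specific intensity of a 1500 K blackbody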

special.utils_spec.convert_F_units(F, lbda, in_unit='cgs', out_unit='si')[source]

Function to convert Flux density between [ergs s-1 cm-2 um-1], [W m-2 um-1] and [Jy].

Parameters
  • F (float or 1d array) – Flux

  • lbda (float or 1d array) – Wavelength of the flux (in um)

  • in_unit (str, opt, {"si", "cgs", "jy", "cgsA"}) – Input flux units. ‘si’: W/m^2/mu; ‘cgs’: ergs/s/cm^2/mu ‘jy’: janskys ‘cgsA’: erg/s/cm^2/AA

  • out_unit (str, opt {"si", "cgs", "jy"}) – Output flux units.

Returns

Return type

Flux in output units.
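
Example (illustrative sketch): converting a hypothetical flux density measured at 1.6 mu from cgs to SI and to Jy.

>>> from special.utils_spec import convert_F_units
>>> F_cgs = 2.1e-15  # hypothetical flux in ergs/s/cm^2/mu
>>> F_si = convert_F_units(F_cgs, 1.6, in_unit='cgs', out_unit='si')
>>> F_jy = convert_F_units(F_cgs, 1.6, in_unit='cgs', out_unit='jy')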

special.utils_spec.convert_F_vs_mag(value, F_0=None, band='H', system='Johnson', conversion='to_mag')[source]

Function to convert Flux density (in Jy) to magnitude in a given band, or the opposite.

Sources for zero points:
Parameters
  • value (float) – Flux or magnitude to be converted.

  • F_0 (float, opt) – Zero-point flux. If provided will take precedence over band.

  • band (str, opt) – Band of the given flux or magnitude. Choice between: {‘U’,’B’,’V’, ‘R’, ‘I’, ‘J’, ‘H’, ‘K’, “L”, “L’”, ‘M’, ‘N’, ‘O’} (but not for all band systems).

  • system (str, opt) – Band system. Choice between: {'Johnson', '2MASS', 'UKIRT', 'ESO'}

  • conversion (str, opt) – In which sense to convert: flux to mag (‘to_mag’) or mag to flux (‘to_flux’)

Returns

Return type

Converted flux or magnitude.
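
Example (illustrative sketch): converting a hypothetical H-band flux density (in Jy) to a 2MASS magnitude, and back.

>>> from special.utils_spec import convert_F_vs_mag
>>> h_mag = convert_F_vs_mag(1.2e-3, band='H', system='2MASS', conversion='to_mag')
>>> h_flux = convert_F_vs_mag(h_mag, band='H', system='2MASS', conversion='to_flux')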

special.utils_spec.extinction(lbda, AV, RV=3.1)[source]

Calculates the A(lambda) extinction for a given combination of A_V and R_V. If R_V is not provided, assumes an ISM value of R_V=3.1 Uses the Cardelli et al. (1989) empirical formulas.

Parameters
  • lbda (1d np.ndarray) – Array with the wavelengths (um) for which the extinction is calculated.

  • AV (float) – Extinction (mag) in the V band.

  • RV (float, opt) – Reddening in the V band: R_V = A_V / E(B-V)

Returns

Albda – Extinction (mag) at wavelengths lbda.

Return type

1d np.ndarray
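
Example (illustrative sketch): computing A(lambda) for a hypothetical A_V and applying it to a model spectrum spec_mod (assumed defined); the 10**(-0.4*A) factor is the standard conversion from magnitudes of extinction to a flux attenuation factor.

>>> import numpy as np
>>> from special.utils_spec import extinction
>>> lbda = np.linspace(1.0, 2.5, 100)          # wavelengths in microns
>>> A_lbda = extinction(lbda, 1.5)             # extinction in mag for A_V = 1.5, default R_V = 3.1
>>> spec_red = spec_mod * 10**(-0.4 * A_lbda)  # reddened model spectrum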

special.utils_spec.find_nearest(array, value, output='index', constraint=None, n=1)[source]

Function to find the indices, and optionally the values, of an array’s n closest elements to a certain value.

Parameters
  • array (1d numpy array or list) – Array in which to check the closest element to value.

  • value (float) – Value for which the algorithm searches for the n closest elements in the array.

  • output (str, opt {'index','value','both' }) – Set what is returned

  • constraint (str, opt {None, 'ceil', 'floor'}) – If not None, will check for the closest element larger than value (ceil) or closest element smaller than value (floor).

  • n (int, opt) – Number of elements to be returned, sorted by proximity to the values. Default: only the closest value is returned.

Returns

  • Either – (output=’index’): index/indices of the closest n value(s) in the array; (output=’value’): the closest n value(s) in the array; (output=’both’): closest value(s) and index/-ices, respectively. By default, only the index/indices are returned.

  • Possible constraints: ‘ceil’, ‘floor’ or None (“ceil” will return the closest element with a value greater than ‘value’, “floor” the opposite).
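
Example (illustrative sketch): finding the element(s) of a small array closest to a test value.

>>> import numpy as np
>>> from special.utils_spec import find_nearest
>>> arr = np.array([1.1, 1.6, 2.2, 3.5])
>>> idx = find_nearest(arr, 2.0)                        # index of the closest element
>>> idx_up = find_nearest(arr, 2.0, constraint='ceil')  # closest element larger than 2.0
>>> vals = find_nearest(arr, 2.0, output='value', n=2)  # the two closest values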

special.utils_spec.inject_em_line(wl, flux, lbda, spec, width=None, height=0.1, em=True)[source]

Injects an emission (or absorption) line in a spectrum.

Parameters
  • wl (float) – Wavelength of the line

  • flux (float) – Flux of the line to be injected

  • lbda (1d np.ndarray) – Array with the wavelengths (um) of the input spectrum.

  • spec (1d np.ndarray) – Input spectrum fluxes

  • width (float, opt) – Full width of the line in mu (see also height). The line will be injected assuming a gaussian profile. If not provided, the width will be set to the ‘equivalent width’ of the line.

  • height (float, opt) – Ratio to peak where the line width is considered. E.g. if height=10%, the width will be the full width at 10% maximum.

  • em (bool, opt) – Whether emission (True) or absorption (False) line.

Returns

spec – Spectrum with the injected line

Return type

1d np.ndarray
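
Example (illustrative sketch): injecting a Brackett-gamma emission line at 2.1667 mu into a spectrum (lbda and spec assumed defined); the line flux value is hypothetical.

>>> from special.utils_spec import inject_em_line
>>> spec_wline = inject_em_line(2.1667, 2.6e-17, lbda, spec)  # hypothetical line flux, gaussian profile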

special.utils_spec.mj_from_rj_and_logg(rp, logg)[source]

Estimates a planet mass in Jupiter mass for a given radius in Jupiter radius and the log of the surface gravity.

Parameters
  • rp (float) – Planet radius in Jupiter radii

  • logg (float) – Log of the surface gravity

Returns

mj – Planet mass in Jupiter masses

Return type

float

special.utils_spec.nrefrac(wavelength, density=1.0)[source]

Calculates the refractive index of air from the Cauchy formula. For comparison to measurements from the ground, the wavelengths of model spectra must be slightly shifted using: lbda_shift = lbda_model/(1+(nrefrac*1e-6)).

Input: wavelength in Angstrom, Returns N = (n-1) * 1.e6. Credit: France Allard.

Parameters
  • wavelength (numpy array) – 1d numpy array corresponding to the wavelengths of the input spectrum in Angstrom

  • density (float) – density of air in amagat (relative to STP, e.g. ~10% decrease per 1000m above sea level).

Returns

N – Refractive index

Return type

float
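
Example (illustrative sketch): shifting model wavelengths to air, following the formula quoted above; lbda_model is assumed to be in mu, hence the 1e4 factor to convert to Angstrom before calling nrefrac.

>>> import numpy as np
>>> from special.utils_spec import nrefrac
>>> lbda_aa = lbda_model * 1e4                # model wavelengths converted from mu to Angstrom
>>> N = nrefrac(lbda_aa)                      # N = (n-1) * 1e6
>>> lbda_shift = lbda_model / (1 + N * 1e-6)  # air-shifted model wavelengths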