
MCFOST API

Various functions to make communication between xenomorph and MCFOST a bit easier. These are broadly written with running on an HPC cluster in mind.

generate_para(paraname, density_file, distance=2400, photons=1e7, T_photons=2e6, resolution=600, gas_2_dust=100, root_dir='')

WIP -- Generates a .para file to be used in the MCFOST run.

Todo:
 - allow multiple stars to be generated in the para file.
 - generate the para file inside the root_dir?

Parameters:

paraname : str (required)
    The name that you'd like to give to this parameter file. A '.para' is automatically appended to the end of this input.
density_file : str (required)
    The name of the .fits file generated by xmc.mcfost_points().
distance : float (default: 2400)
    The distance (in pc) to the system.
photons : int (default: 1e7)
    The number of photons to use in the imaging computation.
T_photons : int (default: 2e6)
    The number of photons to use in the T computation.
resolution : int (default: 600)
    The pixel side length of the computed image.
gas_2_dust : float (default: 100)
    The mass of gas relative to the dust in each particle.
root_dir : str (default: '')
    The directory where you'd like the output of the job to be.
Source code in xenomorph\mcfost.py
def generate_para(paraname, density_file, distance=2400, photons=1e7, T_photons=2e6, resolution=600, gas_2_dust=100, root_dir=''):
    '''WIP -- Generates a .para file to be used in the MCFOST run. 
    Todo:
     - allow multiple stars to be generated in the para file.
     - generate the para file inside the root_dir?
    Parameters
    ----------
    paraname : str
        The name that you'd like to give to this parameter file. A '.para' is automatically appended to the end of this input.
    density_file : str
        The name of the .fits file generated by `xmc.mcfost_points()`. 
    distance : float
        The distance (in pc) to the system. 
    photons : int
        The number of photons to use in the imaging computation.
    T_photons : int
        The number of photons to use in the T computation.
    resolution : int
        The pixel side length of the computed image.
    gas_2_dust : float 
        The mass of gas relative to the dust in each particle
    root_dir : str
        The directory where you'd like the output of the job to be.
    '''
    # first, let's start by calculating our (physical) image size (in au) from the given density file 
    data = fits.open(density_file)
    particles = data[1].data
    particles = np.abs(particles)
    img_bound = 1.05 * 2 * np.max(particles[:, :2]) # want it slightly bigger than twice the radius from the origin, in the projection of the sky (hence the :2 slice)

    parafile = io.open(f'{paraname}.para', 'w', newline='\n')

    parafile.write("4.1                      mcfost version\n\n"+\
                    "#Number of photon packages\n"+\
                    f"  {T_photons:.2e}                  nbr_photons_eq_th  : T computation\n"+\
                    "  1.28e3	          nbr_photons_lambda : SED computation\n"+\
                    f"  {photons:.2e}                  nbr_photons_image  : images computation\n\n")
    parafile.write("#Wavelength\n"+\
                    "  100  0.02 3000.0          n_lambda, lambda_min, lambda_max [mum] Do not change this line unless you know what you are doing\n"+\
                    "  T F T 		  compute temperature?, compute sed?, use default wavelength grid for output ?\n"+\
                    "  IMLup.lambda		  wavelength file (if previous parameter is F)\n"+\
                    "  F F			  separation of different contributions?, stokes parameters?\n\n")
    parafile.write("#Grid geometry and size\n"+\
                    "  2			  1 = cylindrical, 2 = spherical (a Voronoi mesh is selected automatically with -phantom)\n"+\
                    " 100 70 1 20              n_rad (log distribution), nz (or n_theta), n_az, n_rad_in\n\n")
    parafile.write("#Maps\n"+\
                    f"  {resolution} {resolution} {int(img_bound)}.            grid (nx,ny), size [AU]\n"+\
                    "  0.  0.  1  F           RT: imin, imax, n_incl, centered ?\n"+\
                    "  0    0.   1             RT: az_min, az_max, n_az angles\n"+\
                    f"  {distance:.1f}			  distance (pc)\n"+\
                    "  -90.			  disk PA\n\n")
    parafile.write("#Scattering method\n"+\
                    "  2	                  1=exact phase function, 2=hg function with same g (2 implies the loss of polarizarion)\n\n"+\
                    "#Symmetries\n"+\
                    "  F	                  image symmetry\n"+\
                    "  F	                  central symmetry\n"+\
                    "  F	                  axial symmetry (important only if N_phi > 1)\n\n")
    parafile.write("#Disk physics\n"+\
                    "  0     0.50  1.0	  dust_settling (0=no settling, 1=parametric, 2=Dubrulle, 3=Fromang), exp_strat, a_strat (for parametric settling)\n"+\
                    "  F                       dust radial migration\n"+\
                    "  F		  	  sublimate dust\n"+\
                    "  F                       hydostatic equilibrium\n"+\
                    "  F  1e-5		  viscous heating, alpha_viscosity\n\n"+\
                    "#Number of zones : 1 zone = 1 density structure + corresponding grain properties\n"+\
                    "  1                       needs to be 1 if you read a density file (phantom or fits file)\n\n")
    parafile.write("#Density structure\n"+\
                    "  3                       zone type : 1 = disk, 2 = tappered-edge disk, 3 = envelope, 4 = debris disk, 5 = wall\n"+\
                    f"  1.e-15    {gas_2_dust:.1f}		  dust mass,  gas-to-dust mass ratio\n"+\
                    "  10.  100.0  2           scale height, reference radius (AU), unused for envelope, vertical profile exponent (only for debris disk)\n"+\
                    "  100.0  0.0    300.  100.  Rin, edge, Rout, Rc (AU) Rc is only used for tappered-edge & debris disks (Rout set to 8*Rc if Rout==0)\n"+\
                    "  1.125                   flaring exponent, unused for envelope\n"+\
                    "  -0.5  0.0    	          surface density exponent (or -gamma for tappered-edge disk or volume density for envelope), usually < 0, -gamma_exp (or alpha_in & alpha_out for debris disk)\n\n")
    parafile.write("#Grain properties\n"+\
                    "  1  Number of species\n"+\
                    "  Mie  1 2  0.0  1.0  0.9 Grain type (Mie or DHS), N_components, mixing rule (1 = EMT or 2 = coating),  porosity, mass fraction, Vmax (for DHS)\n"+\
                    "  Draine_Si_sUV.dat  1.0  Optical indices file, volume fraction\n"+\
                    "  1	                  Heating method : 1 = RE + LTE, 2 = RE + NLTE, 3 = NRE\n"+\
                    "  0.03  1.0 3.5 100 	  amin, amax [mum], aexp, n_grains (log distribution)\n\n")
    parafile.write("#Molecular RT settings\n"+\
                    "  T T T                   lpop, laccurate_pop, LTE\n"+\
                    "  0.05 km/s		  Turbulence velocity, unit km/s or cs\n"+\
                    "  1			  Number of molecules\n"+\
                    "  co.dat 6                molecular data filename, level max up to which NLTE populations are calculated\n"+\
                    "  T 1.e-4 abundance.fits.gz   cst molecule abundance ?, abundance, abundance file\n"+\
                    "  T  2                       ray tracing ?,  number of lines in ray-tracing\n"+\
                    "  2 3	 		  transition numbers\n"+\
                    "  -10.0 10.0 40     	  vmin and vmax [km/s], number of velocity bins betwen 0 and vmax\n\n")
    parafile.write("#Atoms settings / share some informations with molecules\n"+\
                    " 1		number of atoms\n"+\
                    " H_6.atom	all levels treated in details at the moment\n"+\
                    " F		non-LTE ?\n"+\
                    " 0		initial solution, 0 LTE, 1 from file\n"+\
                    " 1000 101	vmax (km/s), n_points for ray-traced images and total flux\n"+\
                    " T 1		images (T) or total flux (F) ? Number of lines for images\n"+\
                    " 3 2		upper level -> lower level (Atomic model dependent)\n\n")
    parafile.write("#Star properties\n"+\
                    " 1 Number of stars\n"+\
                    " 60000.0	6.0	15.0	0.0	0.0	0.0  T Temp, radius (solar radius),M (solar mass),x,y,z (AU), automatic spectrum?\n"+\
                    " lte4000-3.5.NextGen.fits.gz\n"+\
                    " 0.0	2.2  fUV, slope_fUV\n")

    parafile.close()
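generate_para sizes the image from the particle positions themselves: it takes the largest sky-projected coordinate, doubles it to span both sides of the origin, and pads by 5%. A minimal, self-contained sketch of that calculation (the synthetic `particles` array here is a stand-in for the data read from the density .fits file):

```python
import numpy as np

# Synthetic stand-in for the particle array read from the density .fits file:
# rows are points, columns are (x, y, z) coordinates in au.
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 50.0, size=(1000, 3))

# Mirror generate_para's sizing logic: the [:, :2] slice keeps only the
# sky-plane axes, the factor 2 spans both sides of the origin, and the 1.05
# pads by 5% so the structure doesn't touch the image edge.
img_bound = 1.05 * 2 * np.max(np.abs(particles[:, :2]))

print(f"image size: {int(img_bound)} au")
```

The resulting `int(img_bound)` is what gets written into the "#Maps" block of the .para file as the physical image size in au.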

generate_slurm(slurmname, wavelength, para_file, density_file, cpus=4, run_hours=10, memory=4, root_dir='', job_name='mcfost-transfer', email='ryan.white1@hdr.mq.edu.au', mcfost_setup='~/setup_mcfost')

Generates a slurm script to run a single radiative transfer calculation on a xenomorph-generated spiral. The script will generate the temperature profile of the (currently default-only) dust, and then image it at the specified wavelength.

Parameters:

slurmname : str (required)
    The name that you'd like to give to this slurm script. A '.q' is automatically appended to the end of this input.
wavelength : int or float (required)
    The wavelength (in microns) that you'd like to generate the image for.
para_file : str (required)
    The name of the MCFOST parameter file for the system.
density_file : str (required)
    The name of the .fits file generated by xmc.mcfost_points().
cpus : int (default: 4)
    The number of CPUs you'd like to allocate to running this MCFOST job. A power of 2 is usually good.
run_hours : int (default: 10)
    How many (wall-time) hours you'd like to set the job to run before the scheduling system cancels it.
memory : float or int (default: 4)
    The amount of memory (in GB) to allocate to the job. In my experience jobs typically use of order a few hundred MB, so setting 1-4 GB is usually safe.
root_dir : str (default: '')
    The directory where you'd like the output of the job to be.
job_name : str (default: 'mcfost-transfer')
    A name for the job (only visible on the HPC queue system).
email : str (default: 'ryan.white1@hdr.mq.edu.au')
    The email of the user, should they want email updates on how the job is doing. Mine by default :)
mcfost_setup : str (default: '~/setup_mcfost')
    The location (and name) of the mcfost_setup file. This file is meant to be read each time an MCFOST job is run on a HPC. A typical ~/setup_mcfost file will look like:
        export PATH=/<path to mcfost>/mcfost/bin:${PATH}
        export MCFOST_UTILS=/<path to mcfost>/mcfost/utils
Source code in xenomorph\mcfost.py
def generate_slurm(slurmname, wavelength, para_file, density_file, cpus=4, run_hours=10, memory=4, root_dir='', job_name="mcfost-transfer",
                   email='ryan.white1@hdr.mq.edu.au', mcfost_setup='~/setup_mcfost'):
    ''' Generates a slurm script to run a *single* radiative transfer calculation on a xenomorph-generated spiral. 
    The script will generate the temperature profile of the (currently default-only) dust, and then image it at the specified wavelength.

    Parameters
    ----------
    slurmname : str
        The name that you'd like to give to this slurm script. A '.q' is automatically appended to the end of this input.
    wavelength : int or float
        The wavelength (in microns) that you'd like to generate the image for.
    para_file : str
        The name of the MCFOST parameter file for the system.
    density_file : str
        The name of the .fits file generated by `xmc.mcfost_points()`. 
    cpus : int
        The number of CPUs you'd like to allocate to running this MCFOST job. A power of 2 is usually good.
    run_hours : int
        How many (wall-time) hours you'd like to set the job to run before the scheduling system cancels it.
    memory : float or int
        The amount of memory (in GB) to allocate to the job. 
        In my experience they typically use of order a few hundred MB, so setting 1-4GB is usually safe.
    root_dir : str
        The directory where you'd like the output of the job to be. 
    job_name : str
        A name for the job (only visible on the HPC queue system).
    email : str
        The email of the user, should they want email updates on how the job is doing. Mine by default :)
    mcfost_setup : str
        The location (and name) of the mcfost_setup file. This file is meant to be read each time an MCFOST job is run on a HPC. 
        A typical ~/setup_mcfost file will look like:
            export PATH=/<path to mcfost>/mcfost/bin:${PATH}
            export MCFOST_UTILS=/<path to mcfost>/mcfost/utils 
    '''
    if root_dir != '':
        write_dir = f'{root_dir}/'
    else:
        write_dir = ''
    if email != '':
        email_lines = ["#SBATCH --mail-type=BEGIN\n", "#SBATCH --mail-type=FAIL\n", "#SBATCH --mail-type=END\n",
                      f"#SBATCH --mail-user={email}\n", '\n']
    else:
        email_lines = ['\n']

    hashed_lines = ["#!/bin/bash\n", "#SBATCH --ntasks=1\n", f"#SBATCH --cpus-per-task={cpus}\n",
                    f"#SBATCH --job-name={job_name}\n", f"#SBATCH --output={write_dir + job_name + str(wavelength)}.qout\n",
                    f"#SBATCH --time=0-{run_hours}:00:00\n", f"#SBATCH --mem={memory}G\n"]
    hashed_lines.extend(email_lines)

    initialise_lines = ['echo "HOSTNAME = $HOSTNAME"\n', 'echo "HOSTTYPE = $HOSTTYPE"\n',
                        'echo Time is `date`\n', 'echo Directory is `pwd`\n', '\n']

    environment_lines = ['ulimit -s unlimited\n', f'source {mcfost_setup}\n', f'export OMP_NUM_THREADS={cpus}\n', 
                         '\n', 'echo "Starting mcfost..."\n', '\n']

    mcfost_lines = [f'mcfost {para_file} -df {density_file} -fix_star -star_bb -root_dir {root_dir}\n',
                    f'mcfost {para_file} -df {density_file} -fix_star -star_bb -img {wavelength} -root_dir {root_dir}\n']

    slurmfile = io.open(f'{slurmname}.q', 'w', newline='\n')

    slurmfile.writelines(hashed_lines)
    slurmfile.writelines(initialise_lines)
    slurmfile.writelines(environment_lines)
    slurmfile.writelines(mcfost_lines)

    slurmfile.close()
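The SBATCH header that generate_slurm assembles can be sketched on its own; the values below are illustrative stand-ins for the function's arguments, not its defaults in all cases:

```python
# Illustrative stand-ins for generate_slurm's arguments.
cpus, run_hours, memory = 4, 10, 4
job_name, wavelength, write_dir = "mcfost-transfer", 1.6, ""

hashed_lines = [
    "#!/bin/bash\n",
    "#SBATCH --ntasks=1\n",                   # one task; MCFOST parallelises via OpenMP threads
    f"#SBATCH --cpus-per-task={cpus}\n",
    f"#SBATCH --job-name={job_name}\n",
    f"#SBATCH --output={write_dir + job_name + str(wavelength)}.qout\n",
    f"#SBATCH --time=0-{run_hours}:00:00\n",  # D-HH:MM:SS wall-time limit
    f"#SBATCH --mem={memory}G\n",
]
script_header = "".join(hashed_lines)
print(script_header)
```

Note that `OMP_NUM_THREADS` is later set to the same `cpus` value, so MCFOST's thread count matches the cores Slurm actually grants the job.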

mcfost_points(stardata, shells, shell_mass, filename, n_t=1000, n_points=400, root_dir='')

Generates points that can be fed into a new version of MCFOST for radiative transfer calculations. Once generated, you could use these points with MCFOST via:

mcfost <system>.para -df <filename> -force_Mgas -fix_star -star_bb
mcfost <system>.para -df <filename> -force_Mgas -fix_star -star_bb -img 1.6

Parameters:

stardata : dict (required)
    The system parameter file.
shells : int (required)
    The number of shells to generate.
shell_mass : float (required)
    The mass (in units of Solar masses) of each dust shell. This assumes MCFOST is using a 1:100 dust:gas mass ratio, so the actual mass will be multiplied by 100. That is, the user-entered mass here is just for the dust.
filename : str (required)
    The name of the file to save the points into. This must include a '.fits' suffix.
n_t : int (default: 1000)
    The number of rings to generate in each shell.
n_points : int (default: 400)
    The number of points to generate in each ring.
root_dir : str (default: '')
    The directory where you'd like the output of the job to be.
Source code in xenomorph\mcfost.py
def mcfost_points(stardata, shells, shell_mass, filename, n_t=1000, n_points=400, root_dir=''):
    ''' Generates points that can be fed into a new version of MCFOST for radiative transfer calculations.
    Once generated, you could use these points with MCFOST via:

    mcfost <system>.para -df <filename> -force_Mgas -fix_star -star_bb
    mcfost <system>.para -df <filename> -force_Mgas -fix_star -star_bb -img 1.6

    Parameters
    ----------
    stardata : dict
        The system parameter file.
    shells : int
        The number of shells to generate
    shell_mass : float
        The mass (in units of Solar masses) of *each* dust shell. This assumes MCFOST is using a 1:100 dust:gas mass ratio, so the actual mass will be
        multiplied by 100. That is, the user-entered mass here is just for the dust.
    filename : str
        The name of the file to save the points into. This *must* include a '.fits' suffix.
    n_t : int
        The number of rings to generate in each shell
    n_points : int
        The number of points to generate in each ring
    root_dir : str
        The directory where you'd like the output of the job to be.
    '''
    if root_dir != '':
        save_dir = root_dir + '/'
    else:
        save_dir = root_dir
    particles, weights = gm.dust_plume_custom(stardata, shells, n_t=n_t, n_points=n_points)

    particles = jnp.tan(particles / (60 * 60 * 180 / jnp.pi)) * (stardata['distance'] * 3.086e13) # convert from angular coords back to physical coords
    particles /= gm.AU2km  # MCFOST needs distances in au

    # get rid of any points that have no mass
    filter = np.where(weights > 0)[0]
    particles = particles[:, filter]
    weights = weights[filter]

    # assign masses to each point
    masses = 1e2 * shell_mass * shells * weights / jnp.sum(weights)     # the 1e2 factor assumes a 1:100 dust:gas mass ratio, and these are actually the gas masses

    # particles += np.random.normal(0, 5e-2, size=(3, len(weights)))    # add small random offsets to the particles (in au) to stop the tesselation from crashing (strange bug with mcfost!)
    particles *= np.random.normal(1, 5e-5, size=(3, len(weights)))    # add small random offsets to the particles (in au) to stop the tesselation from crashing (strange bug with mcfost!)

    fits_masses = fits.PrimaryHDU(masses)
    fits_positions = fits.ImageHDU(particles.T)     # transposed to get the orientation right (although it doesn't affect the calc at all)
    hdul = fits.HDUList([fits_masses, fits_positions])
    hdul.writeto(save_dir + filename, overwrite=True)
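The mass assignment in mcfost_points is a normalisation: per-point weights are rescaled so the masses always sum to 100 x shells x shell_mass (the gas mass under the assumed 1:100 dust:gas ratio), whatever the weight distribution. A minimal sketch with hypothetical values:

```python
import numpy as np

shells, shell_mass = 5, 1e-7                # illustrative: 5 shells, 1e-7 solar masses of dust each
weights = np.array([0.0, 1.0, 2.0, 3.0])    # per-point density weights, some zero

# Drop massless points, as mcfost_points does before building the HDUs.
keep = np.where(weights > 0)[0]
weights = weights[keep]

# Distribute the total *gas* mass over the points: the 1e2 factor converts
# the user-supplied dust mass to gas mass under a 1:100 dust:gas ratio.
masses = 1e2 * shell_mass * shells * weights / np.sum(weights)

# The masses sum to the full gas mass regardless of the weight values.
print(masses.sum(), 1e2 * shell_mass * shells)
```

Because the weights only set the relative split, doubling every weight leaves the output masses unchanged; only shells and shell_mass control the total.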