Notes on creating spectral maps from Orion West slit spectra

1 Weekly meetings

1.1 [2016-03-04 Fri]

  • Will is going to make figures of the isovelocity maps
    • Alba suggested showing pairs of maps, one annotated (or just annotations)
  • Alba is going to revise the positions of the sample regions for bowshock and background
    • Take BG on either side of the shock where possible
    • Use slit as close to perpendicular to shock as possible
    • Estimate uncertainty in background
  • Alba will revise line color choice in Figure 6
    • Maybe use histogram for the observed profile

1.2 [2016-03-08 Tue]

  • We discussed the different options for figure 6 that Alba had generated
    • We decided to use the thin blue lines for the individual components
    • For the observed profile, Will prefers the steps
      • But Alba needs to adjust it so that the data points are at the step centers, not edges
      • Also investigate if the steps can be shown as a filled area (light gray) with no outline
  • We discussed the BG subtraction for the red bow shock profiles
    • Avoid the region around LL2 because the LL2 bowshock dominates the spectra
    • Use only those BG regions that are “outside” the bow shock where possible
    • Try to construct a mean +/- variation BG profile from various slits and apply it to all spectra in a given region (see the sketch after this list)
    • For the southern bow shock arcs, use BG from horizontal slits
  • Will still needs to do the isovelocity figures
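
A minimal sketch of the mean +/- variation BG construction referred to above, assuming bg_profiles is a list of 1-D background spectra from several slits interpolated onto a common velocity grid, and spec is a spectrum to correct (all names hypothetical):

import numpy as np

# Stack the per-slit BG samples: shape (n_slits, n_velocities)
bg = np.stack(bg_profiles)
bg_mean = bg.mean(axis=0)   # mean BG profile for the region
bg_sigma = bg.std(axis=0)   # slit-to-slit variation (the "+/-")
# Apply the same mean BG profile to every spectrum in the region
spec_bgsub = spec - bg_mean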

1.3 [2016-03-15 Tue]

  • Alba has redone the fits to the red bow shocks, taking larger sample regions
    • All are now fine except for the NW shock, where she needs to use a BG sample to the W of the source sample, even though the blue knots there may make it impossible to do a good subtraction on the blue side of the line
  • Most of the red bow shock samples have peak velocities close to 25 km/s heliocentric
    • This is close to the systemic velocity of the star cluster and molecular cloud
    • One explanation could be that the bow shocks are moving in the plane of the sky
    • But since we are looking at the limb-brightened edge of a spatially resolved bow shock, the relative radial velocity should always be roughly zero, even if the bow shock axis is inclined to the plane of the sky (see the formula after this list)
  • We are provisionally accepting Will’s brown color scheme for nebular core in the figure that shows the slits - maybe revisit later
  • Will still needs to finish the isovelocity image annotations
    • Alba will send the final sample regions used for the knots
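
In shorthand, the geometric argument is that the observed radial velocity of a shocked parcel moving at speed v at an angle θ to the line of sight is

    v_obs = v_sys + v cos θ

and on the limb of a spatially resolved bow the line of sight is tangent to the shell, so θ ≈ 90° and v_obs ≈ v_sys, whatever the inclination of the bow axis.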

1.4 [2016-03-29 Tue]

1.5 [2016-04-05 Tue]

  • Task for Will: write to Tere about correction to 6716/31 ratio
  • Alba: will investigate knots in [S II] and [O III] spectra
    • [O III]: only if we see some emission that is not also in Ha
    • [S II]: calculate knot and nebula densities once we have the photometric correction to the 6716/6731 ratio

1.6 [2016-04-08 Fri]

  • Will has to read latest draft
  • Will should look at [O III] calibration
  • Alba will measure the (uncalibrated) line ratios for [O III] and [S II], with the idea of fixing the photometric calibration later

1.7 [2016-04-12 Tue]

  • We measured the position of one of the [O III] slits
    • spec228-oiii.fits has delta of 61.7 pixels between image and slit spectrum
    • With luck, the same solution will work for the [S II] spectrum taken at the same time
  • In the [O III] slit from the LL1 position, there is a blue knot in the far north
    • Alba to check if this corresponds to one of our identified knots
  • We still haven’t found the line ratio correction factors
    • Will must carry on looking

1.8 [2016-04-15 Fri]

  • We now have the correction factors for the 6716/6731 ratio
  • We also have a correction factor for the [O III]/Ha ratio
    • But we don’t have a clear idea of how we can do a reliable calibration of the [O III] line
  • We extended the table of slits to include four sii slits and two oiii slits
  • We modified the program slit-calibration.py so it can deal with spectra that are not Ha+NII
    • But in such cases it misses out the flux calibration part
  • We have run the program on the sii and oiii slits, so we now have FITS files in the homogeneous 3D format that the other programs expect
  • Immediate remaining tasks:
    • [X] Will Re-try the background fit and subtraction
    • [ ] Will Run spec_augment_wcs.py on the sii and oiii spectra to put them in heliocentric velocity and map pixels
    • [ ] Alba Finish analysing the knots in [S II] and [O III]

2 Proper motions for red bow

  • We will measure some proper motions by hand

2.1 First epoch is

~/Work/OrionTreasury/acs/hlsp_orion_hst_acs_strip1r_f775w_v1_drz.fits

2.2 Second epoch is

~/Dropbox/OrionTreasury2015/jcol09010_drc.fits

2.3 Aligning the two epochs

  • We will adjust the WCS of the second epoch so that a star is aligned
  • Table of differences in RA, Dec after manual alignment

    | Epoch | RA (deg)   | Dec (deg)  | Frame | Image x   | Image y   | Value    |
    |-------+------------+------------+-------+-----------+-----------+----------|
    | 2004  | 83.624942  | -5.4214582 | fk5   | 11154.632 | 4004.983  | 4201.042 |
    | 2015  | 83.624713  | -5.4213777 | fk5   | 2832.6278 | 4385.8105 | 218.56   |
    | Δ     | -0.0002290 | 0.0000805  |       |           |           |          |

  • The last row above gives the differences, which must be subtracted from the CRVAL of the second epoch:
    • CRVAL1: 83.6107386987505 - (-0.0002290) = 83.610968
    • CRVAL2: -5.41305277159652 - 0.0000805 = -5.4131333
  • Corrected WCS is in ~/Dropbox/OrionTreasury2015/jcol09010_drc_fix.wcs
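
A minimal sketch of applying this correction with astropy, assuming the WCS to be fixed lives in the SCI extension of the drizzled file (writing a corrected FITS copy here is a hypothetical alternative to the .wcs file above):

from astropy.io import fits

# Offsets (2015 - 2004) in degrees, from the table above
DRA, DDEC = -0.0002290, 0.0000805

# Assumes the file has been copied to the working directory
with fits.open('jcol09010_drc.fits') as hdulist:
    hdr = hdulist['SCI'].header
    hdr['CRVAL1'] -= DRA    # 83.6107387 -> 83.610968
    hdr['CRVAL2'] -= DDEC   # -5.4130528 -> -5.4131333
    hdulist.writeto('jcol09010_drc_fix.fits', overwrite=True)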

3 Finishing the paper

3.1 [2017-01-18 Wed] Summary of where we are and what needs to be done

3.2 TODO [2/3] Things to do for Alba [2017-01-18 Wed]

  • [X] Region files for new knots
    • All are in Will-Regions-2016-12/
    • [X] Boxes for PV images
      • Each box is a single velocity feature in a single slit
        • So several boxes may contribute to each knot
      • There is a separate region file for each slit
      • Original ones have names such as
        • pvboxes-YY1401-2013-02-024.reg
      • They were done by hand in Put PV boxes by hand on to the PV spectra
      • Final ones have names such as
        • pvboxes-knots-XX1141-2007-01b-2061.reg
      • They have the knot ID for each box in OW coordinate format, which was done in Retrofit knot assignments to the PV boxes
    • [X] Bars for the images
      • There is one-to-one correspondence between the bars and the boxes
      • Original files were generated automatically in Automatically construct bars for the maps
      • One file with all features, plus one for each velocity class:
        • bars-from-boxes-all.reg
        • bars-from-boxes-fast.reg
        • bars-from-boxes-slow.reg
        • bars-from-boxes-ultra.reg
      • Then in Combine bars by hand into knots they are sorted into knots, which are indicated as DS9 region groups, saved in
        • bars-from-boxes-fast-groups.reg
        • bars-from-boxes-slow-groups.reg
        • bars-from-boxes-ultra-groups.reg
      • Note that the knots in the above files have only a sequential ID, such as Ultra 006 (-80)
      • All the above have coordinates in pixels (IMAGE frame), but for some uses (for example, with aplpy) it is necessary to have celestial coordinates (WCS frame), so we have also written
        • bars-from-boxes-fast-groups-wcs.reg
        • bars-from-boxes-slow-groups-wcs.reg
        • bars-from-boxes-ultra-groups-wcs.reg
    • [X] Knots for the images
      • These are polygons that were created to envelop the bars by routines in Utility functions for boxes and bars. In brief:
        • Find convex hull of all bars in knot
          • This is done on the 2048x2048 pixel image
        • Dilate it by disk of 4 pixels diameter so it is not too spindly
        • Convert the shape to vector polygon
        • Simplify the polygon so it only has a handful of vertices
        • Find coordinate center of knot and convert to an OW-style ID
        • Save as DS9 region
      • There is one file for each velocity class (these are in IMAGE coordinates)
        • knots-fast.reg
        • knots-slow.reg
        • knots-ultra.reg
      • And we also have versions in WCS coordinates
        • knots-fast-wcs.reg
        • knots-slow-wcs.reg
        • knots-ultra-wcs.reg
  • [X] Table of knot properties
  • [ ] Figures of my new stuff

3.3 Will comments [2016-10-12 Wed]

3.3.1 TODO Red bows

  • [ ] Alba Fig 6 - also show the nebular component that was subtracted.

3.3.2 TODO Physical conditions in knots

  • We can do a lot more analysis with the limited data that we have for the majority of knots:
    1. Plot V([N II]) vs V(Ha)
      • In principle, they should be equal
      • The dispersion about a straight line could be taken as an indication of the uncertainty in the velocities (see the sketch after this list)
    2. Plot W(Ha) vs W([N II])
      • Check that it is consistent with a constant temperature
      • Is the temperature any different from the nebula (should not be)
      • I already did this in Plot Ha width versus [N II] width
    3. Plot [N II]/Ha for the knots vs the nebula
      • Is the behaviour that we see in Fig 9 typical of the other knots?
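
A minimal sketch of item 1, assuming vha and vnii are matched arrays of fitted knot velocities (hypothetical names; in practice they would be read back from the saved fit results):

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter(vha, vnii)
vgrid = np.linspace(-120.0, 0.0)
ax.plot(vgrid, vgrid, ls='--', label='1:1')
# If both lines have equal independent errors, the dispersion of the
# difference is sqrt(2) times the per-line velocity uncertainty
sigma = np.std(vnii - vha) / np.sqrt(2.0)
ax.set(xlabel='V(Ha), km/s', ylabel='V([N II]), km/s')
ax.legend(title='sigma = {:.1f} km/s'.format(sigma))
fig.savefig('vha-vs-vnii.pdf')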

3.4 [4/9] Alba message [2016-10-10 Mon]

3.4.1 TODO Sect. 1:

Write the introduction. Shall I get started on this?

3.4.2 DONE Sect 2.2:

As far as I'm concerned we can leave out Figure 2; it doesn't seem that
important to me. What do you think?

Agreed.

3.4.3 DONE Sect 3:

- We need to define the WW. If you like, I can do it, although I don't know
what to take as a reference because there is nothing known nearby. I'll just
describe it and leave it at that.

3.4.4 DONE Fig 5:

- Fig 5: isovelocity map in the red ranges showing the 4-5 bow shocks
described. I'm sending you the file bowshocks_arcs.reg.
[X] WILL
Make a figure showing the Red arcs superimposed on the WFI image

3.4.5 TODO Sect 4:

Fig 4: map in the blue ranges plus a finding chart with the blue knots. I
attach blue_knots_final.reg with the knots (I had already sent you this one).
[ ] WILL
Finding chart for blue knots
  • With or without image as background?
    • Jane prefers not

3.4.6 DONE Sect 4.1:

In the last paragraph I discuss knot 4261-352 (in which only [SII]6731 is
measured); if you can, review what I say there so as to complete it.

Similar to HH 201 and the other Orion Bullets?

3.4.7 TODO Sect. 5.1:

Discussion of the red bow shocks. With the information we have, there isn't
really much to discuss. Any ideas?
  • Largest bow shocks seen in Orion Nebula
    • Analogy with giant bow shock of HH1 and other famous HH objects
    • John Bally 1997
  • Calculate dynamical time (see the estimate after this list)
    • But this needs the plane of sky velocity
    • [ ] Make a rough estimate of the proper motions
  • Also estimate mass loss rate?
  • What is the origin of this flow?
    • Axis is aligned with HH 269, but it has the wrong radial velocity
      • HH 269 is blueshifted
    • [ ] Are there any redshifted flows coming out of the core of the nebula in this general direction?
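
For the dynamical time, a back-of-envelope version (assuming a distance of roughly 410 pc to the nebula):

    t_dyn = θ / μ,   with v_POS = 4.74 μ d

so t_dyn [yr] ≈ 4.74 d[pc] θ[arcsec] / v_POS[km/s] ≈ 1900 θ[arcsec] / v_POS[km/s] at d = 410 pc, where θ is the angular distance travelled and μ is the proper motion in arcsec/yr.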

3.4.8 TODO Sect 5.2:

In the end, did you find shock models that we could put on the diagnostic
diagram? The final paragraph is a summary for ourselves; I will redo it when
we finish the shock work.

No. The shock models are now deferred until a later paper.

3.4.9 TODO Sect 5.3:

This is unfinished because I don't really have much to say. It is more
description than discussion. Fig 10 is provisional, just so you can see what
I am describing. I'm sending you the files with the arcs and jets and the
associated knots (arcs_knots_all.reg, jetD_knots.reg, and jetH_knots.reg).
Jet D
  • Could be an extension of the blue-shifted jet from LL2
    • Compare velocities
  • Also may be linked with Arc D and some of the knots

3.4.10 Raw message

Hi Will,

it's been a long time since we last met. I am a bit stuck with the paper;
quite a lot is written, but the discussion needs to be finished.



I'm sending you the latest version that I have. Here are my comments on what
needs to be done:

Sect. 1: write the introduction. Shall I get started on this?

Sect 2.2: as far as I'm concerned we can leave out Figure 2; it doesn't seem
that important to me. What do you think?

Sect 3:
- We need to define the WW. If you like, I can do it, although I don't know
what to take as a reference because there is nothing known nearby. I'll just
describe it and leave it at that.
- Fig 5: isovelocity map in the red ranges showing the 4-5 bow shocks
described. I'm sending you the file bowshocks_arcs.reg.

Sect 4: Fig 4: map in the blue ranges plus a finding chart with the blue
knots. I attach blue_knots_final.reg with the knots (I had already sent you
this one).

Sect 4.1: in the last paragraph I discuss knot 4261-352 (in which only
[SII]6731 is measured); if you can, review what I say there so as to
complete it.

Sect. 5.1: discussion of the red bow shocks. With the information we have,
there isn't really much to discuss. Any ideas?

Sect 5.2: in the end, did you find shock models that we could put on the
diagnostic diagram? The final paragraph is a summary for ourselves; I will
redo it when we finish the shock work.

Sect 5.3: this is unfinished because I don't really have much to say. It is
more description than discussion. Fig 10 is provisional, just so you can see
what I am describing. I'm sending you the files with the arcs and jets and
the associated knots (arcs_knots_all.reg, jetD_knots.reg, and jetH_knots.reg).


Well, once you've read it, let me know what you think.

Regards

4 Classification of knots in systems

4.1 Individual features in the blue knot systems

4.1.1 Color coding by velocity

| Velocity (km/s) | Color | Name        |
|-----------------+-------+-------------|
| -10             | #e82  | orange      |
| -20             | #ca2  | gold        |
| -30             | #de2  | yellow      |
| -40             | #8d2  | apple green |
| -50             | #2d8  | mint        |
| -55             | #6a8  | gray-green  |
| -60             | #2c9  | turquoise   |
| -65             | #2aa  | sky blue    |
| -70             | #29c  |             |
| -75             | #28d  |             |
| -80             | #44d  |             |
| -90             | #82d  |             |
| -95             | #a2c  |             |
| -100            | #a2a  |             |
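
A minimal lookup sketch for the table above (the helper name is hypothetical), returning the nearest tabulated color for a given velocity:

VCOLORS = [(-10, '#e82'), (-20, '#ca2'), (-30, '#de2'), (-40, '#8d2'),
           (-50, '#2d8'), (-55, '#6a8'), (-60, '#2c9'), (-65, '#2aa'),
           (-70, '#29c'), (-75, '#28d'), (-80, '#44d'), (-90, '#82d'),
           (-95, '#a2c'), (-100, '#a2a')]

def vcolor(v):
    """Nearest tabulated DS9 color for velocity v in km/s"""
    return min(VCOLORS, key=lambda vc: abs(vc[0] - v))[1]

assert vcolor(-72) == '#29c'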

4.1.2 Terminal zones

  • I have identified 7 zones
    • Labelled A to G, going from N to S
  • They are all regions of “spikes” or low-velocity knots


4.1.3 Knots in amongst the red bows

  • Alba Knot 4331-453 is part of a larger structure with complicated kinematics
    • Blue-shifted velocities increase
      • from -10 at E side (new knot 4345-505)
        • This extends over at least 20 arcsec in the N-S direction
        • But the N boundary is not constrained since we run out of slits
        • May be related to the boundary of the big blue sheet that starts about 10 arcsec to the E of it and covers the S part of the LL2 slits
      • to -30 at W side (4331-453 proper)
    • With red-shifted spike (+40) at new knot 4339-456
      • This is the only red-shifted spike that I have found (apart from ones not associated with a brightness peak, which don’t really count)
  • These are associated with knotty emission seen in the ACS images
    • Located just to E of a foreground globule (seen in extinction, scattered continuum, with Ha skin)
  • May also be related to knot 4921-429, making a low projected-velocity flow
    • But if so, it is not clear in which direction the flow is going: E or W?
  • Alternatively, these may just be photoevaporation flows - but would that really give -30 and +40 km/s?

4.2 TODO [#C] Include the other LL1 slits

  • Just for completeness
  • And to see if there is anything interesting in the south
  • Steps to achieve:
    1. Add the spectra to Table of all slits
    2. Run slit-calibration.py
      • Use command line argument to restrict which datasets are processed
      • The main pain will be finding the y-offset along the slits
    3. Everything else should be automatic
    4. Then re-generate all the maps I suppose

4.3 TODO New methodology for finding knots

  • [2016-11-16 Wed] Started on this, but it is a tedious job
  • [2016-12-02 Fri 12:30] Re-started with refined methodology:
    • Put boxes on each slit spectrum, then all the rest is automatic

4.3.1 New method: boxes → bars → knots

4.3.1.1 DONE Put PV boxes by hand on to the PV spectra

  • Make them more compact than before to better isolate the line
  • No labels or anything
  • Which are finished?
    • [X] vert
    • [X] horiz
    • [X] ll2
    • [X] ll1
    • [X] east
4.3.1.1.1 Utility scripts for moving regions
These scripts assume that the shell variables line (e.g. ha or nii) and slits are already set elsewhere, so that DS9=${line}-${slits} names the relevant DS9 window.
4.3.1.1.1.1 Hα → [N II]
Save the boxes from the current window; the line suffix is stripped from the region file name, so the same file can then be loaded into the window for the other line:
REGDIR=Will-Regions-2016-12
DS9=${line}-${slits}
prefix=$(basename $(xpaget $DS9 file) -${line}-vhel.fits)
regfile=pvboxes-$prefix.reg
xpaset -p $DS9 regions save $PWD/$REGDIR/$regfile
4.3.1.1.1.2 [N II] → Hα
Replace the regions in the current window with the ones saved from the other line:
REGDIR=Will-Regions-2016-12
DS9=${line}-${slits}
prefix=$(basename $(xpaget $DS9 file) -${line}-vhel.fits)
regfile=pvboxes-$prefix.reg
xpaset -p $DS9 regions delete all
xpaset -p $DS9 regions load $PWD/$REGDIR/$regfile
4.3.1.1.1.3 Convert to image coords
Loop over all frames, converting each slit’s region file to the IMAGE coordinate system and re-saving it:
REGDIR=Will-Regions-2016-12
DS9=${line}-${slits}
for i in $(xpaget $DS9 frame all); do
    sleep 1
    xpaset -p $DS9 frame $i
    prefix=$(basename $(xpaget $DS9 file) -${line}-vhel.fits)
    regfile=pvboxes-$prefix.reg
    sleep 1
    xpaset -p $DS9 regions delete all
    xpaset -p $DS9 regions load $PWD/$REGDIR/$regfile
    xpaset -p $DS9 regions select all
    xpaset -p $DS9 regions system image
    xpaset -p $DS9 regions select none
    xpaset -p $DS9 regions save $PWD/$REGDIR/$regfile
done
4.3.1.1.1.4 Load all the boxes at once
Loop over all frames, loading the knot-tagged box file for each slit:
REGDIR=Will-Regions-2016-12
DS9=${line}-${slits}
for i in $(xpaget $DS9 frame all); do
    sleep 1
    xpaset -p $DS9 frame $i
    prefix=$(basename $(xpaget $DS9 file) -${line}-vhel.fits)
    regfile=pvboxes-knots-$prefix.reg
    xpaset -p $DS9 regions delete all
    xpaset -p $DS9 regions load $PWD/$REGDIR/$regfile
done

4.3.1.2 DONE Automatically construct bars for the maps

  • [X] Write colored bars
  • [X] Split them up into the 3 velocity classes
    • We have a small overlap between the classes:
      • -45 → -35 are in both fast and slow
      • -80 → -70 are in both ultrafast and fast
import pyregion
from astropy.io import fits
from astropy.wcs import WCS
import glob
import os
import seaborn as sns

DEBUG = True
REGION_DIR = 'Will-Regions-2016-12'
FITS_DIR = 'Calibrated/BGsub'
BOX_PATTERN = 'pvboxes-*.reg'
BAR_HEADER = '''# Region file format: DS9 version 4.1
global color=green dashlist=8 3 width=1 font="helvetica 10 normal roman" select=1 highlite=1 dash=0 fixed=0 edit=1 move=1 delete=1 include=1 source=1
image
'''
BAR_FMT = ('line({x1:.1f},{y1:.1f},{x2:.1f},{y2:.1f}) # '
           + 'line=0 0 color={color} width={width} '
           + 'text={{{v:d}}} dash={dashed}')
BAR_FILE = 'bars-from-boxes.reg'

BRIGHT_LEVELS = [0.001, 0.003, 0.009, 0.027]
def find_width(b, hdu):
    shapelist = pyregion.ShapeList([b])
    m = shapelist.get_mask(hdu=hdu)
    box_bright = hdu.data[m].mean()
    width = 1
    dashed = 1
    for ib, blevel in enumerate(BRIGHT_LEVELS):
        if box_bright >= blevel:
            width = ib + 1
            dashed = 0
    return width, dashed


VMIN, VMAX = -110.0, 0.0
NC = int(VMAX - VMIN) + 1
rgblist = sns.hls_palette(NC)
def find_color(v):
    ic = int(VMAX - v)
    ic = max(0, min(ic, NC-1))
    r, g, b = rgblist[ic]
    # Clamp to 15 so that a channel value of 1.0 does not overflow one hex digit
    return '#{:01x}{:01x}{:01x}'.format(*(min(15, int(16*c)) for c in (r, g, b)))


box_files = glob.glob(os.path.join(REGION_DIR, BOX_PATTERN))

VLIMITS = {
    'all': [-200.0, 200.0],
    'slow': [-45.0, 0.0],
    'fast': [-80.0, -35.0],
    'ultra': [-150.0, -70.0]}

bar_lists = {'all': [], 'slow': [], 'fast': [], 'ultra': []}
for box_file in box_files:
    # Each box_file has the boxes for one slit
    slit_boxes = pyregion.open(box_file)
    # Also open the fits file associated with this slit
    slit_name = box_file.replace(
        os.path.join(REGION_DIR, 'pvboxes-'), '').replace('.reg', '')
    fits_name = os.path.join(FITS_DIR, slit_name) + '-ha-vhel.fits'
    hdu, = fits.open(fits_name)
    # Get the normal WCS together with the 'V' alternative WCS
    w = WCS(hdu)
    ww = WCS(hdu, key='V')
    # Check if horizontal or vertical
    is_horizontal = slit_name.startswith('YY')
    if DEBUG:
        print('Extracting boxes from', slit_name)
    for b in slit_boxes:
        # Check that it really is a box and that coordinates are in
        # the correct format
        if b.name == 'box' and b.coord_format == 'image':
            # Extract slit pixel coordinates
            # ii is along velocity axis
            # jj is along slit length
            ii, jj, dii, djj, angle = b.coord_list
            # Find the start/end coordinate along the slit
            jj1, jj2 = jj - 0.5*djj, jj + 0.5*djj
            # Then use alt WCS to find velocity plus both x and y
            [v, _], [x1, x2], [y1, y2] = ww.all_pix2world(
                [ii, ii], [jj1, jj2], [0, 0], 0)
            # Convert velocity from m/s -> km/s
            v /= 1000.0

            width, dashed = find_width(b, hdu)
            color = find_color(v)

            bar_region = BAR_FMT.format(
                x1=x1, y1=y1, x2=x2, y2=y2,
                v=int(v), width=width, dashed=dashed, color=color)

            for vclass, (v1, v2) in VLIMITS.items():
                if v1 <= v <= v2:
                    bar_lists[vclass].append(bar_region)


for vclass, bar_list in bar_lists.items():
    bar_file = BAR_FILE.replace('.reg', '-' + vclass + '.reg')
    with open(os.path.join(REGION_DIR, bar_file), 'w') as f:
        f.write(BAR_HEADER + '\n'.join(bar_list))
python boxes-to-bars.py 

4.3.1.3 DONE Combine bars by hand into knots

  • Knots will have an ellipse representation for plotting
    • And color indicating velocity as before
  • But the boundaries can be more irregular
4.3.1.3.1 DONE Use groups in DS9
  • Procedure to make a new group
    1. Select the bars that we want in the group (shift click)
    2. Choose Region->New Group from menu
    3. Pattern for group name is “Fast 001 (-70)”
      • Velocity rounded to nearest 10
      • We will construct the coordinate names automatically later
  • [X] Also, consider the overlap ranges and assign each group of bars to one velocity class or the other
    • Delete the bars from the unwanted velocity class
  • [X] First pass at assigning bars to knots
  • [X] Check for orphan bars
    • Look for lines without the word “tag”
    • grep -nH -v tag *-groups.reg
  • [X] Check for bars assigned to multiple knots
    • Look for lines with more than one “tag”
    • grep -nH -e 'tag.*tag' *-groups.reg

4.3.1.4 Utility functions for boxes and bars

Load an enormously long list of third-party libraries

import os
import glob
import numpy as np
import pyregion
import skimage
import skimage.morphology
import skimage.draw
import rasterio
import rasterio.features
import shapely
import shapely.geometry
from astropy.io import fits
from astropy.wcs import WCS
from astropy import units as u
from astropy.coordinates import SkyCoord

Replace the # signs in the color specifications, since pyregion does not like them

def load_regions(region_file):
    with open(region_file) as f:
        region_string = f.read()
    # Workaround for bug in pyregion.parse when color is of form '#fff'
    region_string = region_string.replace('color=#', 'color=')
    regions = pyregion.parse(region_string)
    return regions

Extract the knot assignment from the tag field of each bar, so that we have a dict of knots, with each entry consisting of a list of (x1, y1, x2, y2) for each bar

def sort_bars_into_knots(shapelist):
    """Make a dict of knots, each with a list of bar parameters"""
    knots = {}
    for shape in shapelist:
        if shape.name == 'line' and shape.coord_format == 'image':
            _, shape_dict = shape.attr
            tags = shape_dict['tag']
            assert len(tags) == 1, 'Each bar should belong to one knot only'
            for knot_id in tags:
                if knot_id not in knots:
                    knots[knot_id] = {'coords': [], 'width': [], 'vel': []}
                knots[knot_id]['coords'].append(shape.coord_list)
                knots[knot_id]['width'].append(int(shape_dict['width']))
                knots[knot_id]['vel'].append(int(shape_dict['text']))
    return knots
  • Series of routines that create an image mask that encloses all the bars in a given knot.
  • We use the convex hull, which is a bit too big in some cases where the mask outline is irregular
    • But it is hard to see how that could be improved
  • We use a dilation with a disk to ensure that the knot width is always at least 4 pixels
MAP_SHAPE = 2048, 2048
def blank_mask(shape=MAP_SHAPE):
    """Make a blank mask"""
    mask = np.zeros(shape, dtype=bool)
    return mask


def paint_line_on_mask(x1, y1, x2, y2, mask):
    """Paint a single line on an image mask"""
    # Draw line between endpoints
    # (skimage always puts rows before columns)
    rr, cc = skimage.draw.line(y1, x1, y2, x2)
    mask[rr, cc] = True
    return mask


def nint(x):
    """Nearest integer value"""
    return int(x + 0.5)


def find_hull_mask(line_coord_list, min_size=4.0):
    """Given a list of line regions return an image mask of enclosing hull"""
    # Start with all blank
    mask = blank_mask()
    for x1, y1, x2, y2 in line_coord_list:
        # Add on each bar
        mask = paint_line_on_mask(nint(x1), nint(y1), nint(x2), nint(y2), mask)
    # Find the convex hull that encloses all the bars
    mask = skimage.morphology.convex_hull_image(mask)
    if min_size > 0.0:
        selem = skimage.morphology.disk(min_size/2)
        mask = skimage.morphology.dilation(mask, selem=selem)
    return mask
  • Now take that image mask and turn it back into a vector shape
  • The rasterio routine produces a shape with many vertices that follows all the corners of the individual pixels
  • We then use shapely to simplify the shape
def vector_polygon_from_mask(mask, tolerance=2):
    """Find vertices from a polygonal image mask r
    Return vertices as two arrays: x, y"""
    # Use rasterio to get corners of polygon
    shapes_generator = rasterio.features.shapes(mask.astype(np.uint8), mask=mask)
    # There should be only one of them, and we throw away the image value
    shape_dict, _ = next(shapes_generator)
    # Now import it into shapely (note that asPolygon does not work)
    polygon = shapely.geometry.asShape(shape_dict)
    # And simplify the boundary 
    boundary = polygon.boundary.simplify(tolerance)
    # Return array of x values, array of y values
    return boundary.xy
  • Convert the polygon vertex coordinates to a polygon region that can be read by DS9
def polygon_region_string(x, y, color=None, text=None):
    """Return pyregion polygon region as string"""
    coords = []
    for xx, yy in zip(x, y):
        coords.extend([xx, yy])
    string = 'polygon({})'.format(','.join(['{:.1f}'.format(v) for v in coords]))
    string += ' # '
    if color is not None:
        string += 'color={{{}}} '.format(color)
    if text is not None:
        string += 'text={{{}}} '.format(text)
    return string
  
  • Put together all the previous routines in order to translate the bar regions into knot regions
    • Now we set the label to the coordinate ID determined below
BAR_REGION_HEADER = """# Region file format: DS9 version 4.1
global color=yellow dashlist=8 3 width=1 font="helvetica 10 normal roman" select=1 highlite=1 dash=0 fixed=0 edit=1 move=1 delete=1 include=1 source=1
image
"""

def convert_bars_to_knots(bar_region_file, knot_region_file):
    """Write DS9 region file of polygonal knots

    The knots enclose various bars, which are read from another region
    file in which each bar is tagged with the knot that it belongs to

    """

    bars = load_regions(bar_region_file)
    knots = sort_bars_into_knots(bars)
    coord_ids = find_knot_coord_ids(knots)
    region_strings = []
    for knot_id, knot_data in knots.items():
        m = find_hull_mask(knot_data['coords'])
        x, y = vector_polygon_from_mask(m)
        region_strings.append(polygon_region_string(
            x, y, text=coord_ids[knot_id]))
    with open(knot_region_file, 'w') as f:
        f.write(BAR_REGION_HEADER + '\n'.join(region_strings))
  • Find a coordinate ID for each knot, with two parts
    1. OW coordinate, such as 4235-645
    2. Nominal velocity to nearest 5 km/s
def radec2ow(ra, dec):
    """Implement the O'Dell & Wen coordinate designation

    Note (G1): Sources identified as <[RRS2008] NNNN-NNNN> in Simbad:
         * NNNN-NNN  : MSSs-MSS   (position: 5 3M SS.s -5 2M SS)
         * NNN-NNN   : SSs-MSS    (position: 5 35 SS.s -5 2M SS)
         * NNN-NNNN  : SSs-MMSS   (position: 5 35 SS.s -5 MM SS)
         * NNNN-NNNN : MSSs-MMSS  (position: 5 3M SS.s -5 MM SS)
    """
    c = SkyCoord(ra, dec, unit='deg')
    assert c.ra.hms.h == 5.0
    assert abs(c.ra.hms.m - 35) < 5.0
    rastring = '{:03d}'.format(int(0.5 + 10*c.ra.hms.s))
    if c.ra.hms.m != 35.0:
        rastring = str(int(c.ra.hms.m - 30.0)) + rastring
    assert c.dec.dms.d == -5.0
    decstring = '{:02d}'.format(int(-c.dec.dms.m))
    decstring += '{:02d}'.format(int(0.5 - c.dec.dms.s))
    if decstring.startswith('2'):
        decstring = decstring[1:]
    return '-'.join([rastring, decstring])
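
# A hypothetical worked example of the designation: RA = 5h 35m 12.0s,
# Dec = -5d 25m 12s gives the SSs-MSS form, since the RA minute is 35:
#     radec2ow(83.80, -5.42)  ->  '120-512'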


def find_knot_coord_ids(knots):
    """Find coordinate ID for each knot"""
    coord_ids = {}
    imhdu = fits.open('new-slits-ha-allvels.fits')['scaled']
    imwcs = WCS(imhdu.header)
    for knot_id, knot_data in knots.items():
        x = [0.5*(x1 + x2) for x1, _, x2, _ in knot_data['coords']]
        y = [0.5*(y1 + y2) for _, y1, _, y2 in knot_data['coords']]
        weights = knot_data['width']
        x0 = np.average(x, weights=weights)
        y0 = np.average(y, weights=weights)
        [ra], [dec] = imwcs.all_pix2world([x0], [y0], 0)
        coord_ids[knot_id] = radec2ow(ra, dec)
        v0 = np.average(knot_data['vel'], weights=weights)
        coord_ids[knot_id] += ' ({})'.format(int(round(v0/5.0)*5.0))
    return coord_ids
  • Take the knot information back to the boxes
    • This is mainly so that we can do the Gaussian fits and sort the results on a per-knot basis
  • Unfortunately, I never assigned a sequential ID to the boxes when they were first converted to bars
    • This means that I am now stuck with trying to match bars to boxes based on coordinates
    • Using floats as a dict key is a non-starter due to FP representation issues
    • So I use the 1-decimal-place string representation ‘{:.1f}’
  • Some boxes fail to match. Can be divided into categories:
    • Zero-size boxes. These must be there by mistake. Just delete them
    • [ ] Red-shifted knots. These do need dealing with eventually
      • XX1204-2007-01b-2045 has a +50 km/s knot
      • (‘1540.1’, ‘1207.5’, ‘1539.1’, ‘1224.8’)
        • From XX1549-2010-01-206
        • Just too red, maybe (around +3 km/s)
      • (‘1498.5’, ‘1069.1’, ‘1479.5’, ‘1068.9’)
        • From YY1068-2013-02-154
        • Some of these are very red! Maybe +76 km/s for this one
    • [X] Near misses:
      • (‘1552.2’, ‘999.1’, ‘1550.8’, ‘1023.7’)
        • From XX1549-2010-01-206
        • Could be (‘1552.2’, ‘998.7’, ‘1550.8’, ‘1023.3’)
      • (‘1552.9’, ‘1019.3’, ‘1538.9’, ‘1019.2’)
        • From YY1019-2013-02-226.reg
        • Could be (‘1552.9’, ‘1018.9’, ‘1538.9’, ‘1018.8’)
  • Dealing with the failed matches
    • Red boxes. Punt to a later date
    • Near misses. Change the bar coordinates to match since they obviously got moved by mistake
def find_bar2knot_map(shapelist, coord_ids):
    """Create a mapping between bar and knot coordinate ID.
    Bar is specified by tuple: (x1, y1, x2, y2)"""
    map_ = {}
    for shape in shapelist:
        if shape.name == 'line' and shape.coord_format == 'image':
            _, shape_dict = shape.attr
            knot_id, = shape_dict['tag']
            key = tuple(['{:.1f}'.format(_) for _ in shape.coord_list])
            map_[key] = coord_ids[knot_id]
    return map_

# This is largely copied from up above
FITS_DIR = 'Calibrated/BGsub'
BOX_FMT = 'box({:.1f},{:.1f},{:.1f},{:.1f},{:.1f}) # text={{{}}}'
BOX_HEADER = """global color=white font="helvetica 5 normal"
image
"""
def update_box_file(box_file, bar2knot_map):
    """Add the knot coordinate ID into all the boxes"""
    # Each box_file has the boxes for one slit
    slit_boxes = pyregion.open(box_file)
    # Also open the fits file associated with this slit
    slit_name = box_file.replace(
        os.path.join(REGION_DIR, 'pvboxes-'), '').replace('.reg', '')
    fits_name = os.path.join(FITS_DIR, slit_name) + '-ha-vhel.fits'
    hdu, = fits.open(fits_name)
    # Get the normal WCS together with the 'V' alternative WCS
    w = WCS(hdu)
    ww = WCS(hdu, key='V')
    newboxes = []
    for b in slit_boxes:
        # Check that it really is a box and that coordinates are in
        # the correct format
        if b.name == 'box' and b.coord_format == 'image':
            # Extract slit pixel coordinates
            # ii is along velocity axis
            # jj is along slit length
            ii, jj, dii, djj, angle = b.coord_list
            # Find the start/end coordinate along the slit
            jj1, jj2 = jj - 0.5*djj, jj + 0.5*djj
            # Then use alt WCS to find velocity plus both x and y
            [v, _], [x1, x2], [y1, y2] = ww.all_pix2world(
                [ii, ii], [jj1, jj2], [0, 0], 0)
            # Convert velocity from m/s -> km/s
            v /= 1000.0
            # Use tuple of rounded coordinates as the key
            key = tuple(['{:.1f}'.format(_) for _ in [x1, y1, x2, y2]])
            try: 
                coord_id = bar2knot_map[key]
                bars_remaining.remove(key)
            except KeyError:
                print('  '*2, 'Failed to match key', key)
                print('  '*3, ii, jj, dii, djj)
                if v > 0.0:
                    coord_id = 'RED KNOT ({:+.0f})'.format(5.0*round(v/5))
                else:
                    coord_id = 'LOST KNOT ({:+.0f})'.format(5.0*round(v/5))
                print('  '*3, coord_id)
                  
            newbox = BOX_FMT.format(ii, jj, dii, djj, angle, coord_id)
            newboxes.append(newbox)


    newbox_file = box_file.replace('pvboxes', 'pvboxes-knots')
    with open(newbox_file, 'w') as f:
        f.write(BOX_HEADER)
        f.write('\n'.join(newboxes))
    return None

REGION_DIR = 'Will-Regions-2016-12'
bars_remaining = []
def retrofit_knots_on_boxes():
    boxfiles = glob.glob(os.path.join(REGION_DIR, 'pvboxes-[XY]*.reg'))
    barfiles = glob.glob(os.path.join(REGION_DIR, 'bars-from-boxes-*-groups.reg'))

    # Get list of all knots with data
    knots = {}
    bar2knot_map = {}
    print('Creating Bar -> Knot map ...')
    for barfile in barfiles:
        print('  ', barfile)
        bars = load_regions(barfile)
        knots.update(sort_bars_into_knots(bars))
        coord_ids = find_knot_coord_ids(knots)
        bar2knot_map.update(find_bar2knot_map(bars, coord_ids))

    print('Updating boxes with knot info ...')
    bars_remaining[:] = list(bar2knot_map.keys())
    for boxfile in boxfiles:
        print('  ', boxfile)
        update_box_file(boxfile, bar2knot_map)

    if bars_remaining:
        print('Bars remaining:')
        for bar in bars_remaining:
            print('  ', bar)

4.3.1.5 DONE Make region files for the knots

import os
import boxbar_utils

REGION_DIR = 'Will-Regions-2016-12'
bar_pattern = 'bars-from-boxes-{}-groups.reg'
knot_pattern = 'knots-{}.reg'
for group in 'slow', 'fast', 'ultra':
    bar_file = os.path.join(REGION_DIR, bar_pattern.format(group))
    knot_file = os.path.join(REGION_DIR, knot_pattern.format(group))
    boxbar_utils.convert_bars_to_knots(bar_file, knot_file)

4.3.1.6 TODO How to plot the knots

  • Now that they are not ellipses, we could just plot the convex hull of the end-points of the bars
    • skimage.morphology.convex_hull_image
  • Then we could use dilation to make sure we don’t have any regions that are too thin
    • skimage.morphology.dilation
    • We could use a circle as the selem argument
      • skimage.morphology.disk(radius)
  • [X] Then we can convert back to ds9 region as polygon
    • Format: polygon(x1, y1, x2, y2, ...)
    • Use a combination of rasterio and shapely
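
A minimal overlay sketch, assuming the map new-slits-ha-allvels.fits used elsewhere in these notes and the IMAGE-coordinate knot files written above; the display limits here are guesses:

import pyregion
from astropy.io import fits
import matplotlib.pyplot as plt

hdu = fits.open('new-slits-ha-allvels.fits')['scaled']
knots = pyregion.open('Will-Regions-2016-12/knots-fast.reg')

fig, ax = plt.subplots()
ax.imshow(hdu.data, origin='lower', cmap='gray_r', vmin=0.0, vmax=0.05)
# pyregion hands back matplotlib artists directly
patches, texts = knots.get_mpl_patches_texts()
for p in patches:
    ax.add_patch(p)
for t in texts:
    ax.add_artist(t)
fig.savefig('knots-fast-overlay.pdf')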

4.3.1.7 DONE Retrofit knot assignments to the PV boxes

  • Actually, we should already have this automatically
    • All we have to do is add the knot ID as a label to the PV box
    • And maybe add the color too
  • We will then use the PV boxes (not the ellipses) for extracting the knot spectra
  • Question is, which knot ID to use:
    • The sequential one: e.g., Fast 066 (-55)
    • Or the coordinate one: e.g., 4416-357 (-20)
    • Or both!
import boxbar_utils
boxbar_utils.retrofit_knots_on_boxes()
python retrofit-knots-to-boxes.py

Testing this worked:

  1. Load up the slits
  2. Load the boxes too

4.3.2 DONE Previous version (now superseded)

4.3.2.1 Horizontal and vertical bars from each slit spectrum

4.3.2.2 Combining the bars to make knots

4.3.2.3 DONE [1/6] Progress on each area of the map

  • [X] Rich field of bowshocks
    • [X] Bars
    • [X] Combined
  • [-] Jets coming in
    • [X] Bars
    • [ ] Combined
  • [ ] LL2 region
    • [ ] Bars
    • [ ] Combined
  • [ ] N bowshocks
    • [ ] Bars
    • [ ] Combined
  • [ ] S knots
    • [ ] Bars
    • [ ] Combined
  • [ ] Slower knots
    • [ ] Bars
    • [ ] Combined

4.4 TODO Fitting Gaussians to the new knots

  • [2016-11-19 Sat] With luck we can get estimates of parameter errors while we do this

4.4.1 Using Saba and Sherpa

  • Sherpa is a general fitting package for python
    • Seems to work with python 3 or 2
  • Saba is a bridge to astropy.modeling
    • Says it only works with Python 2, but this may be out of date
  • [X] Install saba in py27
    • This was difficult - couldn’t use the dev version of astropy
  • [X] Test that it works
  • [X] Try it in 3.5 too
  • [X] Test it on real data
  • [2016-12-12 Mon] UPDATE
    • I had to install both saba and sherpa from source since they stopped working!
    • But seems to be sorted now

4.4.2 Strategy for fitting different sorts of knot

  • I am trying the most difficult cases, such as the -100 km/s knots.
  • This works out fine in the end, implemented in saba-sherpa-test.ipynb
  • First fit the region [-80, 0] (for nii) or [-80, -10] (for ha)
    • Use 4 Gaussians
    • Use simulated annealing
    • Limits on parameters:
      • amplitude: [0, None]
      • stddev: [0.0, 20.0]
        • Maybe should increase these for ha
  • Subtract that from the profile, and then fit in the range [-150, -50]
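
A minimal sketch of this two-stage strategy with saba, assuming vels (km/s), spec, and err are 1-D numpy arrays for a [N II] profile; note that the fitter here, like the one in knot_fit_utils.py below, actually uses the Nelder-Mead optimizer rather than true simulated annealing:

from saba import SherpaFitter
from astropy.modeling.models import Gaussian1D

bounds = {'amplitude': [0, None], 'stddev': [0.0, 20.0]}
core_init = (Gaussian1D(1.0, 0.0, 5.0, bounds=bounds, name='G1')
             + Gaussian1D(1.0, 10.0, 5.0, bounds=bounds, name='G2')
             + Gaussian1D(1.0, 20.0, 5.0, bounds=bounds, name='G3')
             + Gaussian1D(1.0, 30.0, 5.0, bounds=bounds, name='G4'))
fitter = SherpaFitter(statistic='chi2', optimizer='neldermead',
                      estmethod='covariance')

# Stage 1: fit 4 Gaussians to the nebular core
m = (vels >= -80.0) & (vels <= 0.0)
core = fitter(core_init, vels[m], spec[m], err=err[m])

# Stage 2: fit the knot in the core-subtracted residual
resid = spec - core(vels)
m2 = (vels >= -150.0) & (vels <= -50.0)
knot_init = Gaussian1D(resid[m2].max(), -100.0, 5.0, bounds=bounds)
knot = fitter(knot_init, vels[m2], resid[m2], err=err[m2])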

4.4.3 knot_fit_utils.py - routines to fit knots

4.4.3.1 Tweaks to make it work with new box regions

  • [X] Change pattern of region files
  • [X] Make sure we get the knot name somehow
  • [X] Boxes instead of ellipses
  • [X] Make sure we extract nominal velocity correctly

4.4.3.2 Imports and plot setup

import os
import numpy as np
from scipy.ndimage import generic_filter

from saba import SherpaFitter
from astropy.modeling.models import Gaussian1D, Lorentz1D, Const1D
from astropy.io import fits
from astropy.wcs import WCS
import pyregion

import matplotlib.pyplot as plt
import seaborn as sns
sns.set(context='notebook', 
        style='whitegrid', 
        palette='dark',
        font_scale=1.5,
        color_codes=True)

4.4.3.3 Function get_knots_from_region_file to get dict of knot coordinates

def get_knots_from_region_file(fn):
    """Return dict of all knots in region file `fn`

    Dict is keyed by region name, with values: j1, j2, u0
    """
    knot_regions = pyregion.open(fn)
    knots = {}
    for r in knot_regions:
        if r.name == 'box' and r.coord_format == 'image':
            k = r.attr[1]['text']
            x0, y0, dx, dy, theta = r.coord_list
            j1 = int(y0 - dy)
            j2 = int(y0 + dy)
            u0 = vel_from_region_text(k)
            knots[k] = j1, j2, u0
    return knots

4.4.3.4 Function vel_from_region_text to extract knot velocity

def vel_from_region_text(text):
    '''Try to parse something like "4299-524 (-70)" to find velocity'''
    # Try and get something like "(-70)"
    maybe_parens = text.split()[-1]
    if maybe_parens.startswith('(') and maybe_parens.endswith(')'):
        vstring = maybe_parens[1:-1]
        try:
            v0 = float(vstring)
        except ValueError:
            v0 = None
    else:
        v0 = None
    return v0

4.4.3.5 Function fit_knot to fit a single knot in a single slit

# No upper bound on constant term by default
CORE_CMAX = None

def _init_bgmodel(lorentz_mean=15.0):
    """Initialize model for background: constant plus Lorentzian"""
    lorentz_fixed = {'x_0': True, 'fwhm': True}
    lorentz_bounds = {'amplitude': [0, None]}
    constant_bounds = {'amplitude': [0, CORE_CMAX]}
    bgmodel = (Lorentz1D(0.1, lorentz_mean, 100.0, name='Lorentz',
                         bounds=lorentz_bounds, fixed=lorentz_fixed)
		 + Const1D(1.0e-4, bounds=constant_bounds, name='Constant'))
    return bgmodel


# Don't allow core components to intrude into knot velocity space
CORE_VMIN = -10.0
# Should not be narrower than instrumental profile
CORE_WMIN = 3.0
# And not too wide or they compete with Lorentzian
CORE_WMAX = 25.0

def _init_coremodel():
    """Initialize model for core of profile: sum of 5 Gaussians"""
    bounds = {'amplitude': [0, None],
		'stddev': [CORE_WMIN, CORE_WMAX],
		'mean': [CORE_VMIN, None]}
    coremodel = (Gaussian1D(1.0, 5.0, 5.0, bounds=bounds, name='G1') 
                 + Gaussian1D(5.0, 10.0, 5.0, bounds=bounds, name='G2')
                 + Gaussian1D(5.0, 15.0, 5.0, bounds=bounds, name='G3')
                 + Gaussian1D(5.0, 20.0, 5.0, bounds=bounds, name='G4')
                 + Gaussian1D(1.0, 40.0, 5.0, bounds=bounds, name='G5')
    )
    return coremodel


KNOT_VMIN = -120.0
KNOT_VMAX = 0.0
KNOT_WMIN = 3.0
KNOT_WMAX = 30.0

def _init_knotmodel(amp_init=0.01, v_init=-60.0):
    """Initialize model for knot: a single Gaussian"""
    bounds = {'amplitude': [0, None],
		'stddev': [KNOT_WMIN, KNOT_WMAX],
		'mean': [KNOT_VMIN, KNOT_VMAX]}
    knotmodel = Gaussian1D(amp_init, v_init, 5.0, bounds=bounds) 
    return knotmodel


# Knot is fitted in region +/- KNOT_WIDTH around the nominal velocity
# The same region is omitted from the core fit
KNOT_WIDTH = 30.0

# Highest value of reduced chi2 that will still allow estimating
# confidence bounds on the fit parameters.  We increase this from the
# default value of 3 since we sometimes have fits that are worse than
# that :(
MAX_RSTAT = 30.0

# Scale for sqrt(N) contribution to the error budget.  Strictly, we
# should go back to the data in electron counts before calibration and
# continuum removal in order to calculate this.  But that is too much
# work, so we just treat it as a free parameter.  Overestimating it is
# harmless.
POISSON_SCALE = 0.02

def fit_knot(hdu, j1, j2, u0):

    NS, NV = hdu.data.shape
    w = WCS(hdu.header)
    vels, _ = w.all_pix2world(np.arange(NV), [0]*NV, 0)
    vels /= 1000.0

    # Ensure we don't go out of bounds
    j1 = max(j1, 0)
    j2 = min(j2, NS)
    print('Slit pixels {}:{} out of {}'.format(j1, j2, NS))

    knotspec = hdu.data[j1:j2, :].sum(axis=0)
    # make sure all pixels are positive, since that helps the fitting/plotting
    knotspec -= knotspec.min()

    # Levenberg-Marquardt for easy jobs
    lmfitter = SherpaFitter(statistic='chi2',
                            optimizer='levmar',
                            estmethod='confidence')
    # Nelder-Mead simplex for trickier jobs
    safitter = SherpaFitter(statistic='chi2',
                            optimizer='neldermead',
                            estmethod='covariance')

    # First do the strategy for typical knots (u0 = [-30, -80])

    # Estimate error from the BG: < -120 or > +100
    bgmask = np.abs(vels + 10.0) >= 110.0
    bgerr = np.std(knotspec[bgmask]) * np.ones_like(vels)

    # Fit to the BG with constant plus Lorentz
    try: 
        vmean = np.average(vels, weights=knotspec)
    except ZeroDivisionError:
        vmean = 15.0

    bgmodel = lmfitter(_init_bgmodel(vmean),
			 vels[bgmask], knotspec[bgmask],
			 err=bgerr[bgmask])
    # Now freeze the BG model and add it to the initial core model
    bgmodel['Lorentz'].fixed['amplitude'] = True
    bgmodel['Constant'].fixed['amplitude'] = True

    # Increase the data err in the bright part of the line to mimic Poisson noise
    # Even though we don't know what the normalization is really, we will guess ...
    spec_err = bgerr + POISSON_SCALE*np.sqrt(knotspec)

    # Fit to the line core
    knotmask = np.abs(vels - u0) <= KNOT_WIDTH
    coremodel = safitter(_init_coremodel() + bgmodel,
                         vels[~knotmask], knotspec[~knotmask],
                         err=spec_err[~knotmask])
    core_fit_info = safitter.fit_info

    # Residual should contain just knot
    residspec = knotspec - coremodel(vels)

    # Calculate running std of residual spectrum
    NWIN = 11
    running_mean = generic_filter(residspec, np.mean, size=(NWIN,))
    running_std = generic_filter(residspec, np.std, size=(NWIN,))

    # Increase error estimate for data points where this is larger
    # than spec_err, but only for velocities that are not in knotmask
    # Copy bgerr so that the in-place updates below do not also modify it
    residerr = bgerr.copy()
    # residerr = spec_err
    mask = (~knotmask) & (running_std > bgerr)
    residerr[mask] = running_std[mask]
    # The reason for this is so that poor modelling of the core is
    # accounted for in the errors.  Otherwise the reduced chi2 of the
    # knot model will be too high

    # Make an extended mask for fitting the knot, omitting the
    # redshifted half of the spectrum since it is irrelevant and we
    # don't want it to affect the chi2 or the confidence intervals
    bmask = vels < 50.0

    # Fit single Gaussian to knot 
    amplitude_init = residspec[knotmask].max()
    if amplitude_init < 0.0:
        # ... pure desperation here
        amplitude_init = residspec[bmask].max()
    knotmodel = lmfitter(_init_knotmodel(amplitude_init, u0),
                         vels[bmask], residspec[bmask],
                         err=residerr[bmask])

    # Calculate the final residuals, which should be flat
    final_residual = residspec - knotmodel(vels)

    # Look at stddev of the final residuals and use them to rescale
    # the residual errors.  Then re-fit the knot with this better
    # estimate of the errors.  But only if rescaling would reduce the
    # data error estimate.
    residerr_rescale = final_residual[bmask].std() / residerr[bmask].mean()
    if residerr_rescale < 1.0:
        print('Rescaling data errors by', residerr_rescale)
        residerr *= residerr_rescale
        knotmodel = lmfitter(knotmodel,
                             vels[bmask], residspec[bmask],
                             err=residerr[bmask])
    else:
        residerr_rescale = 1.0

    knot_fit_info = lmfitter.fit_info
    lmfitter._fitter.estmethod.config['max_rstat'] = MAX_RSTAT
    if knot_fit_info.rstat < MAX_RSTAT:
        knot_fit_errors = lmfitter.est_errors(sigma=3)
    else:
        knot_fit_errors = None

    return {
        'nominal knot velocity': u0,
        'velocities': vels,
        'full profile': knotspec,
        'error profile': residerr,
        'core fit model': coremodel,
        'core fit profile': coremodel(vels),
        'core fit components': {k: coremodel[k](vels) for k in coremodel.submodel_names},
        'core fit info': core_fit_info,
        'core-subtracted profile': residspec,
        'knot fit model': knotmodel,
        'knot fit profile': knotmodel(vels),
        'knot fit info': knot_fit_info,
        'knot fit errors': knot_fit_errors,
        'error rescale factor': residerr_rescale,
    }

A different strategy would be to fit all the components at once:

Advantages
  1. No more worries about masking out the core when fitting the knot, and masking out the knot when fitting the core.
  2. More naturally deal with overlapping knots with different velocities
    • We just select out the component that is closest to the nominal velocity.
Disadvantages
  1. It will be very expensive to estimate the errors with the confidence-level algorithm.
    • We will probably have to just use covariance matrix instead
    • Alternatively, once we have the fit, we can subtract off the components that are nowhere near our knot velocity. And also remove them from the model. Then fit again to the partial residual, and calculate the confidence level for that. That might work
  2. Might be some instabilities in the fits
    • Degeneracy between wide component and multiple narrow components
    • If there are qualitative differences between the [N II] and Ha core fits, then this might affect the knots
      • One approach would be to fit [N II] first, and then use the same components, with the same relative velocity widths for Ha
      • But allow the absolute velocity to slide and the widths and relative heights to change
Mitigation
Disadvantage (2) can probably be overcome with some judicious limits on the fit parameters.
One idea
Determine mean velocity in [-10, 50] window and use that to define components that are restricted to certain deltas around the mean:
  • Peak component: [-15, +15]
  • 1 or 2 Blue components: [-30, 0]
  • 1 or 2 Red components: [0, 30]
Another crazy idea
Freeze the width of each component at combined thermal+instrumental width, or maybe slightly larger
  • So that would be about 21 km/s FWHM => 9 km/s sigma for Ha
  • And FWHM = 8 km/s => sigma = 3.5 km/s for [N II]
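
The conversion assumed here is the Gaussian relation (the same factor np.sqrt(8.0*np.log(2.0)) that appears in get_statistics below):

    FWHM = 2 sqrt(2 ln 2) sigma ≈ 2.355 sigma

so 21 km/s FWHM gives sigma ≈ 8.9 km/s, and 8 km/s FWHM gives sigma ≈ 3.4 km/s.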
def fit_knot_unified(hdu, j1, j2, u0, lineid='nii'):

    NS, NV = hdu.data.shape
    w = WCS(hdu.header)
    vels, _ = w.all_pix2world(np.arange(NV), [0]*NV, 0)
    vels /= 1000.0

    # Ensure we don't go out of bounds
    j1 = max(j1, 0)
    j2 = min(j2, NS)
    print('Slit pixels {}:{} out of {}'.format(j1, j2, NS))

    knotspec = hdu.data[j1:j2, :].sum(axis=0)
    # make sure all pixels are positive, since that helps the fitting/plotting
    knotspec -= knotspec.min()

    # Levenberg-Marquardt for easy jobs
    lmfitter = SherpaFitter(statistic='chi2',
                            optimizer='levmar',
                            estmethod='confidence')

    # Nelder-Mead simplex for trickier jobs
    safitter = SherpaFitter(statistic='chi2',
                            optimizer='neldermead',
                            estmethod='covariance')

    # The idea is that this strategy should work for all knots

    # Estimate error from the BG: < -120 or > +100
    bgmask = np.abs(vels + 10.0) >= 110.0
    bgerr = np.std(knotspec[bgmask]) * np.ones_like(vels)

    # Define core as [-10, 50], or 20 +/- 30
    coremask = np.abs(vels - 20.0) < 30.0

    # Fit to the BG with constant plus Lorentz
    try: 
        vmean = np.average(vels[coremask], weights=knotspec[coremask])
    except ZeroDivisionError:
        vmean = 15.0

    bgmodel = lmfitter(_init_bgmodel(vmean),
			 vels[bgmask], knotspec[bgmask],
			 err=bgerr[bgmask])
    # Now freeze the BG model and add it to the initial core model
    #bgmodel['Lorentz'].fixed['amplitude'] = True
    #bgmodel['Constant'].fixed['amplitude'] = True

    # Increase the data err in the bright part of the line to mimic Poisson noise
    # Even though we don't know what the normalization is really, we will guess ...
    spec_err = bgerr + POISSON_SCALE*np.sqrt(knotspec)


    ## Now for the exciting bit, fit everything at once
    ##
    knotmask = np.abs(vels - u0) <= KNOT_WIDTH
    # For low-velocity knots, we need to exclude positive velocities
    # from the mask, since they will have large residual errors from
    # the core subtraction
    knotmask = knotmask & (vels < 0.0)

    # Start off with the frozen BG model
    fullmodel = bgmodel.copy()
    core_components = list(fullmodel.submodel_names)

    # Add in a model for the core
    DV_INIT = [-15.0, -5.0, 5.0, 10.0, 30.0]
    NCORE = len(DV_INIT)
    BASE_WIDTH = 10.0 if lineid == 'ha' else 5.0
    W_INIT = [BASE_WIDTH]*4 + [1.5*BASE_WIDTH]
    for i in range(NCORE):
        v0 = vmean + DV_INIT[i]
        w0 = W_INIT[i]
        component = 'G{}'.format(i)
        fullmodel += Gaussian1D(
            3.0, v0, w0,
            bounds={'amplitude': [0, None],
                    'mean': [v0 - 10, v0 + 10],
                    'stddev': [w0, 1.5*w0]},
            name=component)
        core_components.append(component)

    # Now, add in components for the knot to extract
    knotmodel_init = Gaussian1D(
        0.01, u0, BASE_WIDTH,
        # Allow +/- 10 km/s leeway around nominal knot velocity
        bounds={'amplitude': [0, None],
                'mean': [u0 - 10, u0 + 10],
                'stddev': [BASE_WIDTH, 25.0]},
        name='Knot')
    fullmodel += knotmodel_init
    knot_components = ['Knot']
    other_components = []

    # Depending on the knot velocity, we may need other components to
    # take up the slack too
    if u0 <= -75.0 or u0 >= -50.0:
        # Add in a generic fast knot
        fullmodel += Gaussian1D(
            0.01, -60.0, BASE_WIDTH,
            bounds={'amplitude': [0, None],
                    'mean': [-70.0, -50.0],
                    'stddev': [BASE_WIDTH, 25.0]},
            name='Fast other')
        other_components.append('Fast other')

    if u0 <= -50.0:
        # Add in a generic slow knot
        fullmodel += Gaussian1D(
            0.01, -30.0, BASE_WIDTH,
            bounds={'amplitude': [0, None],
                    'mean': [-40.0, -10.0],
                    'stddev': [BASE_WIDTH, 25.0]},
            name='Slow other')
        other_components.append('Slow other')

    if u0 >= -75.0:
        # Add in a very fast component
        fullmodel += Gaussian1D(
            0.001, -90.0, BASE_WIDTH,
            bounds={'amplitude': [0, None],
                    'mean': [-110.0, -75.0],
                    'stddev': [BASE_WIDTH, 25.0]},
            name='Ultra-fast other')
        other_components.append('Ultra-fast other')

    if u0 <= 30.0:
        # Add in a red-shifted component just in case
        fullmodel += Gaussian1D(
            0.01, 40.0, BASE_WIDTH,
            bounds={'amplitude': [0, None],
                    'mean': [30.0, 200.0],
                    'stddev': [BASE_WIDTH, 25.0]},
            name='Red other')
        other_components.append('Red other')




    # Moment of truth: fit models to data
    fullmodel = safitter(fullmodel, vels, knotspec, err=spec_err)
    full_fit_info = safitter.fit_info

    # Isolate the core+other model components 
    coremodel = fullmodel[core_components[0]]
    for component in core_components[1:] + other_components:
        coremodel += fullmodel[component]

    # Subtract the core model from the data
    residspec = knotspec - coremodel(vels)

    # Now re-fit the knot model to the residual

    # Calculate running std of residual spectrum
    NWIN = 11
    running_mean = generic_filter(residspec, np.mean, size=(NWIN,))
    running_std = generic_filter(residspec, np.std, size=(NWIN,))

    # Increase error estimate for data points where this is larger
    # than spec_err, but only for velocities that are not in knotmask
    # Copy bgerr so that the in-place updates below do not also modify it
    residerr = bgerr.copy()
    # residerr = spec_err
    mask = (~knotmask) & (running_std > bgerr)
    residerr[mask] = running_std[mask]
    # The reason for this is so that poor modelling of the core is
    # accounted for in the errors.  Otherwise the reduced chi2 of the
    # knot model will be too high

    # Make an extended mask for fitting the knot, omitting the
    # redshifted half of the spectrum since it is irrelevant and we
    # don't want it to affect the chi2 or the confidence intervals
    bmask = vels < 50.0

    knotmodel = lmfitter(knotmodel_init,
                         vels[bmask], residspec[bmask],
                         err=residerr[bmask])

    # Calculate the final residuals, which should be flat
    final_residual = residspec - knotmodel(vels)

    # Look at stddev of the final residuals and use them to rescale
    # the residual errors.  Then re-fit the knot with this better
    # estimate of the errors.  But only if rescaling would reduce the
    # data error estimate.
    residerr_rescale = final_residual[bmask].std() / residerr[bmask].mean()
    if residerr_rescale < 1.0:
        print('Rescaling data errors by', residerr_rescale)
        residerr *= residerr_rescale
        knotmodel = lmfitter(knotmodel,
                             vels[bmask], residspec[bmask],
                             err=residerr[bmask])
    else:
        residerr_rescale = 1.0

    knot_fit_info = lmfitter.fit_info
    lmfitter._fitter.estmethod.config['max_rstat'] = MAX_RSTAT
    if knot_fit_info.rstat < MAX_RSTAT:
        knot_fit_errors = lmfitter.est_errors(sigma=3)
    else:
        knot_fit_errors = None

    return {
        'nominal knot velocity': u0,
        'velocities': vels,
        'full profile': knotspec,
        'error profile': residerr,
        'core fit model': coremodel,
        'core fit profile': coremodel(vels),
        'core fit components': {k: coremodel[k](vels) for k in coremodel.submodel_names},
        'core fit info': full_fit_info,
        'core-subtracted profile': residspec,
        'knot fit model': knotmodel,
        'knot fit profile': knotmodel(vels),
        'knot fit info': knot_fit_info,
        'knot fit errors': knot_fit_errors,
        'error rescale factor': residerr_rescale,
        'knot j range': (j1, j2),
    }

4.4.3.6 Utility functions to calculate summary statistics of line profile

def find_fwhm(f, v, frac=0.5):
    """Find literal FWHM of discretely sampled profile f(v) by linear interpolation

    STILL NOT FULLY TESTED

    Based on the Fortran implementation in 
    /Users/will/Work/BobKPNO/src/newlinemod.f90
    """
    ipeak = np.argmax(f)
    fpeak = f[ipeak]
    # Indices of first and last samples that lie above the threshold
    m = f >= frac*fpeak
    indices = np.nonzero(m)[0]
    ileft, iright = indices[0], indices[-1]
    if ileft <= 0:
        # Left crossing falls off the edge of the velocity grid
        uleft = v[0]
    else:
        # Interpolate for the crossing between samples ileft-1 and ileft
        uleft = (
            v[ileft] -
            (v[ileft] - v[ileft-1]) * (f[ileft] - frac*fpeak)
            / (f[ileft] - f[ileft-1])
        )
    if iright >= len(f) - 1:
        # Right crossing falls off the edge of the velocity grid
        uright = v[-1]
    else:
        # Interpolate for the crossing between samples iright and iright+1
        uright = (
            v[iright] +
            (v[iright+1] - v[iright]) * (f[iright] - frac*fpeak)
            / (f[iright] - f[iright+1])
        )
    return uright - uleft


def get_statistics(f, v):
    """Find mean, sigma, flux, fwhm
    """
    flux = np.trapz(f, v)
    try: 
        vbar = np.average(v, weights=f)
        sigma = np.sqrt(np.average(np.square(v - vbar), weights=f))
    except ZeroDivisionError:
        vbar = np.nan
        sigma = np.nan

    # fwhm = find_fwhm(f, v)
    fwhm = sigma * np.sqrt(8.0*np.log(2.0))
    return {'flux': flux, 'mean velocity': vbar, 'sigma': sigma, 'FWHM': fwhm}
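
As a quick sanity check (illustrative only), the moment-based sigma and FWHM should recover the analytic values for a pure Gaussian profile:

import numpy as np

# Unit-amplitude Gaussian with sigma = 10 km/s, sampled over +/- 10 sigma
v = np.linspace(-100.0, 100.0, 201)
f = np.exp(-0.5*(v/10.0)**2)
stats = get_statistics(f, v)
# Expect sigma ~ 10 and FWHM ~ 23.55 = sqrt(8 ln 2) * 10
print(stats['sigma'], stats['FWHM'])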

4.4.3.7 Save the fit data as JSON file

import json
from astropy.utils.misc import JsonCustomEncoder

def save_fit_data(kn, save_dir, line_id, slit_id):
    """Save all the fit data for knot and core"""
    knot_id = os.path.basename(save_dir)
    jsonfile = os.path.join(save_dir,
                            '{}-{}-{}.json'.format(line_id, knot_id, slit_id))

    # Start with copy of input data dict
    data = kn.copy()          # should this be a deep copy?

    # Add basic info
    data['knot'] = knot_id
    data['slit'] = slit_id
    data['emission line'] = line_id

    # Add some more summary statistics
    data['core fit moments'] = get_statistics(
          data['core fit profile'], data['velocities'])

    data['knot fit moments'] = get_statistics(
          data['knot fit profile'], data['velocities'])

    data['full profile moments'] = get_statistics(
          data['full profile'], data['velocities'])

    # Take a slightly more generous knot window for calculating residual stats
    m = np.abs(data['velocities']
               - data['nominal knot velocity']) <= 1.5*KNOT_WIDTH

    data['core-subtracted profile moments'] = get_statistics(
          data['core-subtracted profile'][m], data['velocities'][m])

    # Re-write the confidence levels as per the graphics program
    if data['knot fit errors'] is not None:
        p = {k: (_v, _p if _p else np.nan, _m if _m else np.nan)
             for k, _v, _p, _m in zip(*data['knot fit errors'])}
        p['FWHM'] = [np.sqrt(8.0*np.log(2.0))*_w for _w in p['stddev']]
        p['confidence level'] = '3-sigma'
    else:
        p = {'confidence level': 'MAX CHI-SQUARED EXCEEDED!'}

    p['reduced chi^2'] = data['knot fit info'].rstat
    data['knot fit parameters'] = p

    # Extract the core fit parameters from the best-fit model -
    # don't bother with error estimates
    m = data['core fit model']
    data['core fit parameters'] = {
          mn: dict(zip(m[mn].param_names, m[mn].parameters))
          for mn in m.submodel_names}
    data['core fit parameters']['reduced chi^2'] = data['core fit info'].rstat

    # Remove items that we don't want to save to JSON
    del data['core fit components']
    del data['core fit model']
    del data['core fit profile']
    del data['full profile']
    del data['core-subtracted profile']
    del data['error profile']
    del data['velocities']
    del data['knot fit model']
    del data['knot fit profile']
    del data['knot fit errors']               
    del data['knot fit info']
    del data['core fit info']

    with open(jsonfile, 'w') as f:
        json.dump(data, f, indent=4, cls=JsonCustomEncoder,
                  default=lambda x: repr(x).split('\n'))

4.4.3.8 Function process_slit to deal with all regions in a given slit

SLIT_DIR = 'Calibrated/BGsub'
REGION_DIR = 'Will-Regions-2016-12'
REGION_PREFIX = 'pvboxes-knots'
KNOTS_DIR = 'Knot-Fits-Final'
STRATEGY = 'unified'

def process_slit(fn):
    print('-*^*- '*10)
    print('Processing', fn)

    if fn.startswith(SLIT_DIR):
        fits_path = fn
    else:
        fits_path = os.path.join(SLIT_DIR, fn)
    hdu, = fits.open(fits_path)

    # Rejig the slit name into a slit_id and a line_id
    slit_name, _ = os.path.splitext(os.path.basename(fits_path))
    # e.g., XX1620-2010-01-236-ha-vhel
    _pos, _y, _m, _n, line_id, _ = slit_name.split('-')
    slit_id = '-'.join([_pos, _y, _m, _n])

    region_path = os.path.join(REGION_DIR,
				 '{}-{}.reg'.format(REGION_PREFIX, slit_id))
    try: 
        knots = get_knots_from_region_file(region_path)
    except FileNotFoundError:
        print('No knots in this slit')
        return

    for name, data in knots.items():
        print('Processing knot', name, 'in slit', os.path.basename(fits_path))
        knot_id = name.split()[0]
        save_dir = os.path.join(KNOTS_DIR, knot_id)
        if not os.path.isdir(save_dir):
            os.makedirs(save_dir)

        if STRATEGY.lower() == 'unified':
            kn = fit_knot_unified(hdu, *data)
        else:
            kn = fit_knot(hdu, *data)

        save_fit_data(kn, save_dir, line_id, slit_id)
        plot_core_fit(kn, save_dir, line_id, slit_id)
        plot_knot_fit(kn, save_dir, line_id, slit_id)

4.4.3.9 Process all the slits

Since glob.glob does not support brace expansion, the easiest thing is to have a list of glob patterns.

import glob

PATTERNS = ['[XY][XY]*-ha-vhel.fits', '[XY][XY]*-nii-vhel.fits']
def process_all_slits(patterns=PATTERNS):
    slit_list = []
    for pattern in patterns:
        slit_list += glob.glob(os.path.join(SLIT_DIR, pattern))
    for slit in slit_list:
        process_slit(slit)

4.4.3.10 Plotting functions

LINE_LABEL = {'ha': 'Ha 6563',  'nii': '[N II] 6583'}

def plot_core_fit(kn, save_dir, line_id, slit_id):
    fig, ax = plt.subplots(1, 1)
    ax.plot('velocities', 'full profile', '.', data=kn)
    fullfit = kn['core fit profile'] + kn['knot fit profile']
    ax.plot(kn['velocities'], fullfit)
    ax.errorbar('velocities', 'full profile',
                'error profile', data=kn, fmt='none', alpha=0.4, errorevery=4)
    for k, v in kn['core fit components'].items():
        ax.plot(kn['velocities'], v, '--', alpha=0.6, lw=1, color='k')
    ax.plot(kn['velocities'], kn['knot fit profile'], '--',
            alpha=0.6, lw=2, color='r')

    ax.fill_betweenx([0.0, 100.0], 
                     [kn['nominal knot velocity'] - KNOT_WIDTH]*2,
                     [kn['nominal knot velocity'] + KNOT_WIDTH]*2, 
                     alpha=0.1)

    ax.set(xlim=[-150, 200],
           yscale='log', ylim=[0.001, None],
           xlabel='Heliocentric Velocity',
           ylabel='Line profile',
           title='{:s} - {:s} - {:s}'.format(os.path.basename(save_dir),
                                             slit_id, LINE_LABEL[line_id]),
    )
    fig.set_size_inches(8, 6)
    knot_id = os.path.basename(save_dir)
    plotfile = os.path.join(save_dir,
                            '{}-core-fit-{}-{}.png'.format(
                                line_id, knot_id, slit_id))
    fig.savefig(plotfile, dpi=200)
    # Important to close figure explicitly so as not to leak resources
    plt.close(fig)


def plot_knot_fit(kn, save_dir, line_id, slit_id):
    fig, ax = plt.subplots(1, 1)
    ax.plot('velocities', 'core-subtracted profile', '.', data=kn)
    ax.plot('velocities', 'knot fit profile', data=kn)
    ax.errorbar('velocities', 'core-subtracted profile',
                'error profile', data=kn, fmt='none', alpha=0.4, errorevery=4)
    ax.axvline(kn['nominal knot velocity'], lw=0.5, ls='--')


    param_errors = kn['knot fit errors']
    if param_errors is not None:
        p = {k: (_v, _p if _p else np.nan, _m if _m else np.nan)
             for k, _v, _p, _m in zip(*param_errors)}

        knotmodel = kn['knot fit model']

        knot_min_a = knotmodel.copy()
        knot_min_a.amplitude.value += p['amplitude'][1]

        knot_max_a = knotmodel.copy()
        knot_max_a.amplitude.value += p['amplitude'][2]

        knot_min_v = knotmodel.copy()
        knot_min_v.mean.value += p['mean'][1]
        if not np.isfinite(knot_min_v.stddev.value):
            knot_min_v.stddev.value = KNOT_VMIN

        knot_max_v = knotmodel.copy()
        knot_max_v.mean.value += p['mean'][2]
        if not np.isfinite(knot_max_v.stddev.value):
            knot_max_v.stddev.value = KNOT_VMAX

        knot_min_w = knotmodel.copy()
        knot_min_w.stddev.value += p['stddev'][1]
        if not np.isfinite(knot_min_w.stddev.value):
            knot_min_w.stddev.value = KNOT_WMIN

        knot_max_w = knotmodel.copy()
        knot_max_w.stddev.value += p['stddev'][2]
        if not np.isfinite(knot_max_w.stddev.value):
            knot_max_w.stddev.value = 1.5*KNOT_WMAX

        vels = kn['velocities']
        alpha = 0.15
        ax.fill_between(vels, knot_min_a(vels), knot_max_a(vels),
                        color='k', alpha=alpha)
        ax.fill_between(vels, knot_min_v(vels), knot_max_v(vels),
                        color='k', alpha=alpha)
        ax.fill_between(vels, knot_min_w(vels), knot_max_w(vels),
                        color='k', alpha=alpha)


        ptext = 'Knot fit parameters' + '\n'
        ptext += r'($3\sigma$-confidence interval)' + '\n'
        # Reduced chi2
        ptext += r'$\mathrm{Reduced\ }\chi^2 = '
        ptext += r'{:.2f}$'.format(kn['knot fit info'].rstat) + '\n'
        # Amplitude
        ptext += r'$\mathrm{Amplitude} = '
        ptext += '{:.3f}_{{{:+.3f}}}^{{{:+.3f}}}$'.format(*p['amplitude']) + '\n'
        # Mean
        ptext += r'$\mathrm{Mean\ velocity} = '
        ptext += '{:.1f}_{{{:+.1f}}}^{{{:+.1f}}}$'.format(*p['mean'])
        ptext += r'$\mathrm{\ km\ s^{-1}}$' + '\n'
        # Width
        ptext += r'$\mathrm{FWHM} = '
        fwhm = [np.sqrt(8.0*np.log(2.0))*_ for _ in p['stddev']]
        ptext += '{:.1f}_{{{:+.1f}}}^{{{:+.1f}}}$'.format(*fwhm)
        ptext += r'$\mathrm{\ km\ s^{-1}}$'

        ax.text(0.95, 0.95, ptext.replace('nan', r'\infty'),
                ha='right', va='top', fontsize='small',
                transform=ax.transAxes,
                bbox=dict(facecolor='white', alpha=0.7))

    ax.set(xlim=[-150, 200],
           xlabel='Heliocentric Velocity',
           ylabel='Core-subtracted profile',
           title='{:s} - {:s} - {:s}'.format(os.path.basename(save_dir),
                                             slit_id, LINE_LABEL[line_id]),
    )
    fig.set_size_inches(8, 6)
    knot_id = os.path.basename(save_dir)
    plotfile = os.path.join(save_dir,
                            '{}-knot-fit-{}-{}.png'.format(
                                line_id, knot_id, slit_id))
    fig.savefig(plotfile, dpi=200)
    # Important to close figure explicitly so as not to leak resources
    plt.close(fig)

4.4.4 Fit all the knots

import sys
import knot_fit_utils

patterns = sys.argv[1:]
if patterns:
    knot_fit_utils.process_all_slits(patterns)
else:
    knot_fit_utils.process_all_slits()

Run this in the shell since it might take a long time

time python fit-all-knots.py

Do the same for the slow knots

import sys
import knot_fit_utils

knot_fit_utils.REGION_DIR = 'Will-Regions-2016-11/will-knots-blue-slow-SLITS'

patterns = sys.argv[1:]
if patterns:
    knot_fit_utils.process_all_slits(patterns)
else:
    knot_fit_utils.process_all_slits()

Now try it with a different strategy.

import sys
import knot_fit_utils

knot_fit_utils.REGION_DIR = 'Will-Regions-2016-11/will-knots-blue-slow-SLITS'
knot_fit_utils.KNOTS_DIR = 'Knot-Fits-Unified'
knot_fit_utils.STRATEGY = 'unified'

patterns = sys.argv[1:]
if patterns:
    knot_fit_utils.process_all_slits(patterns)
else:
    knot_fit_utils.process_all_slits()

And the same again for the fast knots.

import sys
import knot_fit_utils

knot_fit_utils.REGION_DIR = 'Will-Regions-2016-11/will-knots-blue-fast-SLITS'
knot_fit_utils.KNOTS_DIR = 'Knot-Fits-Unified'
knot_fit_utils.STRATEGY = 'unified'

patterns = sys.argv[1:]
if patterns:
    knot_fit_utils.process_all_slits(patterns)
else:
    knot_fit_utils.process_all_slits()

4.5 [2/3] Wrangling data from knot fits

4.5.1 DONE knot_table_utils.py - Merge knot info into big table

from collections import OrderedDict
import json
import glob
import numpy as np
import pandas as pd

def data_from_json(fn):
    """Returns all data from json file named `fn` in form of dict"""
    with open(fn) as f:
        data = json.load(f)
    return data


# Include 'Red other' component, since this often is quite bright and
# probably isn't a knot at all
CORE_COMPONENTS = ['G0', 'G1', 'G2',
                   'G3', 'G4', 'Red other']
def find_core_moments(data):
    """Estimate moments from the fit parameters for the core

    Returns (mean, sigma)
    """
    vels = []
    fluxes = []
    sigmas = []
    for name, params in data.items():
        if name in CORE_COMPONENTS:
            vels.append(params['mean'])
            fluxes.append(params['amplitude'])
            sigmas.append(params['stddev'])

    vels = np.array(vels)
    fluxes = np.array(fluxes)
    sigmas = np.array(sigmas)

    m = np.isfinite(vels) & np.isfinite(fluxes) & np.isfinite(sigmas)

    # Law of total variance for the mixture: flux-weighted variance of
    # the component mean velocities, plus flux-weighted mean of the
    # component variances
    vmean = np.average(vels[m], weights=fluxes[m])
    variance = np.average((vels[m]-vmean)**2, weights=fluxes[m])
    variance += np.average(sigmas[m]**2, weights=fluxes[m])

    return vmean, np.sqrt(variance)


def summarise_data(d):
    """Summarise data from json file into form suitable for table

    Returns OrderedDict so we have control over the column order
    """
    out = OrderedDict()
    out['line'] = d['emission line']
    out['knot'] = d['knot']
    out['slit'] = d['slit']
    out['Vnom'] = d['nominal knot velocity']
    # out['Wc'] = d['core fit moments']['FWHM']
    # out['Vc'] = d['core fit moments']['mean velocity']
    u0, sigma = find_core_moments(d['core fit parameters'])
    out['Vc'] = u0
    out['Wc'] = np.sqrt(8*np.log(2.0)) * sigma
    out['Fc'] = d['core fit moments']['flux']
    out['F'] = d['knot fit moments']['flux']
    try:
        out['A'] = d['knot fit parameters']['amplitude'][0]
        out['dA-'] = d['knot fit parameters']['amplitude'][1]
        out['dA+'] = d['knot fit parameters']['amplitude'][2]
        out['V'] = d['knot fit parameters']['mean'][0]
        out['dV-'] = d['knot fit parameters']['mean'][1]
        out['dV+'] = d['knot fit parameters']['mean'][2]
        out['W'] = d['knot fit parameters']['FWHM'][0]
        out['dW-'] = d['knot fit parameters']['FWHM'][1]
        out['dW+'] = d['knot fit parameters']['FWHM'][2]
        out['chi2'] = d['knot fit parameters']['reduced chi^2']
        # Quality of fit: amplitude / amplitude error
        out['Q'] = 2*out['A']/(out['dA+'] - out['dA-'])
        out['chi2c'] = d['core fit parameters']['reduced chi^2']
    except KeyError:
        out['Q'] = 0.0
        for k in ['A', 'dA-', 'dA+',
                  'V', 'dV-', 'dV+',
                  'W', 'dW-', 'dW+',
                  'chi2', 'chi2c']:
            # Fill with NaN only those items that we failed to get
            if k not in out:
                out[k] = np.nan

    return out

JSON_FILE_GLOB = 'Knot-Fits/*/*.json'

def _dictlist_from_json_files(debug=False):
    """Returns list of OrderedDict rows"""
    dictlist = []
    for fn in glob.glob(JSON_FILE_GLOB):
        if debug:
            print('Appending data from', fn)
        dictlist.append(summarise_data(data_from_json(fn)))
    return dictlist


def _dataframe_from_dictlist(data):
    """Return a pandas dataframe"""
    df = pd.DataFrame(data=data, columns=data[0].keys())
    # Use a MultiIndex 
    df = df.set_index(['knot', 'slit', 'line'])
    # And move the line (ha or nii) to the columns
    df = df.unstack()
    return df


def get_dataframe():
    return  _dataframe_from_dictlist(_dictlist_from_json_files())

4.5.2 DONE Write out big data table for Alba

import os
import knot_table_utils
import pandas as pd
import numpy as np
from astropy.table import Table

fitdir = 'Knot-Fits-Final'
knot_table_utils.JSON_FILE_GLOB = fitdir + '/*/*.json'
d = knot_table_utils.get_dataframe()
# Switch over the levels in the column multiindex so that we can
# easily extract the 'ha' and 'nii' parts
d = d.swaplevel(0, 1, axis=1)
for line in 'ha', 'nii':
    # Use reset_index() to move the pandas dataframe index to normal
    # columns so that the astropy table converter will pick them up
    t = Table.from_pandas(d[line].reset_index())
    t.meta['ID'] = fitdir
    tabfile = os.path.join(fitdir,  'all-knots-{}-table.fits'.format(line))
    t.write(tabfile, overwrite=True, format='fits')
    t.write(tabfile.replace('.fits', '.xml'),
            overwrite=True, format='votable')
    t.write(tabfile.replace('.fits', '.tab'),
            format='ascii.commented_header')

Knot-Fits-Final/all-knots-ha-table.tab

4.5.3 TODO Merge data from all slits for given knot

  • Only use the highest quality fits

4.6 Make graphs of the new knot data

  • Width-width
    • Select out the best-determined values
    • Does measured [N II] width increase for noisy data?
  • Correlation with velocities
  • Line ratios
    • Nebula versus knot

4.6.1 Histograms of different speed classes

  • We can divide the knots into 3 populations according to the fitted peak velocity (see the sketch after this list):
    • Ultra-fast: Vhel < -75 km/s
    • Fast: Vhel = -75 → -40 km/s
    • Slow: Vhel = -40 → -5 km/s
  • The fast/slow division is very clear, since there is a distinct lack of knots with V ≈ -40 km/s
  • The fast/ultra-fast division is based on the fact that there are no bright knots with V < -75 km/s
  • The velocity differences cannot be due only to projection angle because the other properties also differ between the populations. In particular:
    1. Linewidths
      • The ultra-fast population has narrowest linewidths, peaking around 10 km/s for the [N II] FWHM (after subtracting instrumental width)
      • The fast population has the majority of widths in range 12 → 20
      • The slow population has a much broader range of widths. About half of them are similar to fast population, but the other half are much broader: 25 → 50 km/s
      • The Nebular widths should be (min, mean, max) = (14, 22, 30) km/s
        • [-] TODO They are currently much broader, but I can fix that in post
      • Interpretation:
        • From HRH, the FWZI is equal to the bowshock velocity, independent of orientation
        • For a triangular line profile: FWHM = 0.5 FWZI, but for other shapes this might vary. For a very flat-nosed bowshock, the wings hardly contribute and so FWHM ≪ FWZI
    2. Brightness
    3. [N II] / Ha ratio
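
A minimal sketch of how the speed classes might be assigned from the fitted peak velocity (illustration only; the thresholds are the ones listed above):

def speed_class(vhel):
    """Assign a knot to a population from its fitted peak velocity in km/s"""
    if vhel < -75.0:
        return 'ultra-fast'
    elif vhel < -40.0:
        return 'fast'
    elif vhel < -5.0:
        return 'slow'
    else:
        return 'other'

# Hypothetical example velocities
for vhel in -90.0, -60.0, -20.0:
    print(vhel, '->', speed_class(vhel))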

4.6.1.1 PDF Files

4.7 TODO Revisiting making region files per slit

  • Now that I am making more regions, I need to re-open this task

4.7.1 [0/2] Create high and low velocity region files

  • [-] knots faster than -50 km/s
    • [X] Test it with the knots I have
    • [ ] Re-run when I have all the knots
  • [-] knots slower than -50 km/s
    • [X] Test it with the knots I have
    • [ ] Re-run when I have all the knots

4.7.2 DONE Adaptation for new Will region files

  • [2016-11-18 Fri] There are a few minor issues with using knot-regions-for-slits.py for my new knot regions
  • [X] Don’t get velocities from the table any more
    • We parse the velocity from the text field
    • And use nominal widths
  • [X] Filter out the stars and arrows - anything that is not a knot
    • Add an if clause to the knot_dict comprehension
  • [X] Work around color parsing bug in pyregion
    • Problem is with reading colors that contain hash symbols:
      import pyregion
      badstring = 'fk5;ellipse(5:34:26.665,-5:26:16.18,6.62208",12.3556",345) # color=#6a8 width=4 text={4267-616 (-55)} tag={Fast blue}'
      goodstring = badstring.replace('#6a8', '6a8')
      
      badregion, =  pyregion.parse(badstring)
      goodregion, = pyregion.parse(goodstring)
      
      print('pyregion version', pyregion.__version__)
      print('Bad region:', badregion.attr)
      print('Good region:', goodregion.attr)
              

4.8 DONE Making region files of all the knots in each slit

  • This will be a little complicated, but we should be able to do it:
    1. Read in the region file to get the shapes of each knot
    2. Read the table of velocities and widths of each knot too
    3. Create a mask in RA, Dec for each knot
    4. Loop over all the slits
      • (a) Initialize empty list of regions for this slit
      • (b) Find the intersection between slit and each knot region
      • (c) For each knot that overlaps the slit
        • I. Convert to our big image pixel coords: (X, Y)
        • II. Construct a region in P-V space
        • III. Append to list of regions for this slit
      • (d) Write out a region file for knots in the slit

4.8.1 knot-regions-for-slits.py

4.8.1.1 Imports and command line args

import sys
import os
import glob
import numpy as np
from astropy.io import fits
from astropy.table import Table
from astropy.wcs import WCS
from astropy.wcs.utils import pixel_to_skycoord, skycoord_to_pixel
import pyregion

try: 
    knot_region_file = sys.argv[1]
    line_id = sys.argv[2]
    region_frame = sys.argv[3]
except IndexError:
    sys.exit('Usage: {} KNOT_REGION_FILE (ha|nii) (linear|image)'.format(sys.argv[0]))

4.8.1.2 Step 1: Read in region file and turn into a dict of masks

First read in the file using pyregion

with open(knot_region_file) as f:
    knot_region_string = f.read()
# Workaround for bug in pyregion.parse when color is of form '#fff'
knot_region_string = knot_region_string.replace('color=#', 'color=')
knot_regions = pyregion.parse(knot_region_string)

We now have a pyregion.ShapeList object, in which each element is a pyregion.Shape object. We want to construct a dict keyed on the knot name, and with value being single-element pyregion.ShapeList that contains only one knot.

knot_dict = {knot.attr[1]['text']: pyregion.ShapeList([knot])
             for knot in knot_regions
             if 'text' in knot.attr[1] and knot.name == 'ellipse'}

We can now use the get_mask(hdu=XXX) method on each value in knot_dict to get the mask corresponding to each knot for a particular FITS HDU.
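
For example (a minimal sketch: the particular knot name here is hypothetical, and imhdu is the image HDU that gets loaded in Step 3 below):

knotmask = knot_dict['4280-551 (-60)'].get_mask(hdu=imhdu)
print(knotmask.sum(), 'image pixels inside this knot')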

4.8.1.3 Step 2: Read table of velocities

tab = Table.read('alba-knots-frompdf.tab',
                 format='ascii', delimiter='\t')
vcol = {'ha': 'V(Ha)', 'nii': 'V([N II])'}
wcol = {'ha': 'W(Ha)', 'nii': 'W([N II])'}

The columns that we will need from this table are:

  • knot
  • V(Ha) and V([N II])
  • W(Ha) and W([N II])

4.8.1.4 Step 3: Load the image and all the PV spectra

  • We already have the regions from Step 1. But we also need to set up the HDU for the slit positions on the plane of the sky.
  • We use a trimmed-down version of what we already did in Program to generate spectral map: spectral-map.py
  • In this step, we just get the HDU from the image and construct a WCS object for good measure
    • The HDU is used to obtain the mask for each knot in image coordinates
imhdu = fits.open('new-slits-{}-allvels.fits'.format(line_id))['scaled']
imwcs = WCS(imhdu.header)
  • We also will need 2D arrays that correspond to the image pixels (X, Y)
ny, nx = imhdu.data.shape
Y, X = np.mgrid[0:ny, 0:nx]
  • And then get the list of spectra, which we wrangle into a dict
  • Note that we are using the ones that start XX or YY, which have a default WCS that has been calibrated in heliocentric velocity and in image pixels from the new-slits map.
speclist = glob.glob('Calibrated/BGsub/*-{}-vhel.fits'.format(line_id))
specdict = {fn.split('/')[-1].split('.')[0]: fits.open(fn)[0] for fn in speclist}
  • Setup the region files that we are going to write for each slit
    • We will write the knot regions as ellipses in image coordinates
region_template = 'ellipse({1:.1f},{2:.1f},{3:.1f},{4:.1f},0) # text={{{0}}}'
region_header_lines = [
    'global color=green font="helvetica 10 normal"', 
    region_frame,
]
slit_region_dir = knot_region_file.replace('.reg', '-SLITS')
if not os.path.isdir(slit_region_dir):
    os.mkdir(slit_region_dir)

4.8.1.5 Step 4: Process each slit

4.8.1.5.1 Parse the knot_id to try and find a velocity
def look_for_velocity(knot_id, line_id):
    '''Try to parse something like "4299-524 (-70)"
    Returns (velocity, width)
    '''
    # Try and get something like "(-70)"
    maybe_parens = knot_id.split()[-1]
    if maybe_parens.startswith('(') and maybe_parens.endswith(')'):
        vstring = maybe_parens[1:-1]
        try:
            v0 = float(vstring)
            w = 30.0 if line_id == 'ha' else 20.0
        except ValueError:
            v0, w = None, None
    else:
        v0, w = None, None
    return v0, w
4.8.1.5.2 Find the mask that defines the slit in image pixels
  • We want to find specmask which defines the slit position in the image pixel (X, Y) frame
def get_specmask(specwcs, imshape, slit_pix_width=4):
    """Find image mask that corresponds to a given slit

    `specwcs` is a WCS for the slit spectrum, which should have the second pixel dimension along the slit and the X, Y coords as the second and third world coordinates.  `imshape` is the shape (ny, nx) of the desired mask
    """

    # Length of slit in slit pixels
    ns = specwcs._naxis2
    # Shape of image mask
    ny, nx = imshape

    # Coord arrays along the slit
    V, X, Y = specwcs.all_pix2world([0]*ns, range(ns), [0]*ns, 0)

    # Initialize empty mask
    specmask = np.zeros(imshape).astype(bool)

    # Fill in the mask pixel-by-pixel along the slit
    for x, y in zip(X, Y):
        # Find output pixels corresponding to corners of slit pixel
        # (approximate as square)
        i1 = int(0.5 + x - slit_pix_width/2)
        i2 = int(0.5 + x + slit_pix_width/2)
        j1 = int(0.5 + y - slit_pix_width/2)
        j2 = int(0.5 + y + slit_pix_width/2)
        # Make sure we don't go outside the output grid
        i1, i2 = max(0, i1), max(0, i2)
        i1, i2 = min(nx, i1), min(nx, i2)
        j1, j2 = max(0, j1), max(0, j2)
        j1, j2 = min(ny, j1), min(ny, j2)

        specmask[j1:j2, i1:i2] = True

    return specmask
4.8.1.5.3 Loop over the individual slits to do the work
for specname, spechdu in specdict.items():
    print(specname)
    pvregions = []
    # WCS transform for the slit
    specwcs = WCS(spechdu.header, key='V')
    specmask = get_specmask(specwcs, imhdu.data.shape)
    if (X[specmask].max() - X[specmask].min()
        > Y[specmask].max() - Y[specmask].min()):
        orient = 'horizontal'
    else:
        orient = 'vertical'

    slit_region_file =  slit_region_dir + '/' + specname + '.reg'
    for knot_id, knot_region in knot_dict.items():
        # Find mask for knot and overlap with the slit mask
        knotmask = knot_region.get_mask(imhdu)
        overlap = knotmask & specmask
        # Number of pixels in overlap region
        n = overlap.sum()
        if n > 0:
            # New Will knots have velocity encoded in knot_id
            v0, dv = look_for_velocity(knot_id, line_id)
            if v0 is None:
                # But fall back on table look-up for the original Alba knots
                if knot_id not in tab['knot']:
                    # If not there either, then skip this region 
                    print('Warning: Knot', knot_id, 'not found in table!')
                    continue
                # Extract row from table
                knotrow = tab[tab['knot'] == knot_id][0]
                v0, dv = knotrow[vcol[line_id]], knotrow[wcol[line_id]]
            if region_frame == 'image':
                # Find j-pixel coordinates along the slit that correspond to this knot
                _, jslit, _ = specwcs.all_world2pix([0]*n, X[overlap], Y[overlap], 0)
                j1, j2 = jslit.min(), jslit.max()
                # Find i-pixel coordinates corresponding to knot velocity +/- width
                # Make sure to convert from km/s -> m/s since wcs is in SI
                v1, v2 = 1000*(v0 - dv/2), 1000*(v0 + dv/2)

                [i1, i2], _, _ = specwcs.all_world2pix([v1, v2], [0, 0], [0, 0], 0)
                i0, w = 0.5*(i1 + i2), i2 - i1
                j0, h = 0.5*(j1 + j2), j2 - j1
                pvregions.append([knot_id, i0, j0, w, h])
            elif region_frame == 'linear':
                # Regions written in x = km/s and y = map X or Y,
                # depending on orientation
                S = X if orient == 'horizontal' else Y
                s1, s2 = S[overlap].min(), S[overlap].max()
                s0, ds = 0.5*(s1 + s2), (s2 - s1)
                pvregions.append([knot_id, v0, s0, dv, ds])    
            else:
                raise NotImplementedError('Region frame must be "linear" or "image"')
    # If there are any knot regions for this slit, then write them out
    if pvregions:
        print(len(pvregions), 'regions found')
        region_lines = [region_template.format(*data) for data in pvregions]
        with open(slit_region_file, 'w') as f:
            f.write('\n'.join(region_header_lines + region_lines))

The box regions look like this

box(39,239.5,13.8,8.5,0) # text={4280-551}

while the ellipse region looks like this

ellipse(445.54987,1190.0392,41.5,12,3.4650748e-06) # color=yellow text={050-422}
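
These can be read back with pyregion (an illustrative check; note the explicit 'image' coordinate specifier, in the same style as the 'fk5;ellipse(...)' example earlier):

import pyregion

shape, = pyregion.parse('image;box(39,239.5,13.8,8.5,0) # text={4280-551}')
print(shape.name, shape.coord_list, shape.attr[1]['text'])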

4.8.2 Run it to make the region files

Takes a while, so do it in eshell

time python knot-regions-for-slits.py Alba-Regions-2016-10/blue_knots_final.reg ha image

4.9 TODO Making plots of the PV spectra with knots indicated

  • This will be like the separate ds9 instances that I ran in Trying out the spectra display
  • But will be done with aplpy like for The isovelocity images
  • Layout
    • [X] Ha and [N II] spectrum side by side
    • [X] With image of the slits underneath as a key, indicating which slit this is
    • [ ] Somehow incorporate coordinate info on the PV diagrams
      • Currently it is in map pixels
      • Put an OW linear scale on the coordinate key:
        • RA: 4240, 4360, 4480, (5)000, (5)120
        • Dec: 000, 200, 400, 600, 800
        • This is of course a nightmare because the numbers are not decimal
          • But rather some digits are in base-60!

4.9.1 owutil.py - Convert to and from OW96 coordinates

def ra_ow(ra):
    """Convert astropy.coordinates RA to OW96 scheme"""
    h, m, s = ra.hms
    assert(int(h) == 5 and int(m/10) == 3)
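    # RA code: minutes past 5h30m in the thousands digit, followed by
    # 10*seconds, e.g. 5:34:26.6 -> '4266'; a leading '5' (from m = 35)
    # is dropped below, giving the "(5)000" style codes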
    ra_code = "{:04d}".format(int((m - 30)*1000 + 10*(s + 0.05)))
    if ra_code.startswith('5'):
        ra_code = ra_code[1:]
    return ra_code

def dec_ow(dec):
    """Convert astropy.coordinates Dec to OW96 scheme"""
    d, m, s = dec.dms
    assert(int(d) == -5)
    dec_code = "{:04d}".format(int(abs(m)*100 + abs(s) + 0.5))
    if dec_code.startswith('2'):
        dec_code = dec_code[1:]
    return dec_code


def ow_from_coord(c):
    return "{}-{}".format(ra_ow(c.ra), dec_ow(c.dec))


if __name__ == '__main__':
    from astropy import coordinates as coord
    import astropy.units as u

    # th1c
    c = coord.get_icrs_coordinates('tet01 ori c')
    print('theta 1 C Ori is', ow_from_coord(c))

    # LL2
    c = coord.SkyCoord('5:34:40.854 -5:22:42.58', unit=(u.hourangle, u.deg))
    print('LL Ori is', ow_from_coord(c))
python owutil.py

4.9.2 multi-slit-pv-graphs.py - generate figures of each slit PV spectrum

4.9.2.1 Run the PV figure program

{ time date; } 2>&1
{ time python multi-slit-pv-graphs.py vert; }  2>&1

python multi-slit-pv-graphs.py horiz
python multi-slit-pv-graphs.py ll2

python multi-slit-pv-graphs.py east
python multi-slit-pv-graphs.py ll1

4.9.2.2 Imports and command line arg to choose dataset

import sys
import os
import glob
import numpy as np
from astropy.io import fits
from astropy.table import Table
from astropy.wcs import WCS
from astropy.wcs.utils import pixel_to_skycoord, skycoord_to_pixel
from astropy import coordinates as coord
import astropy.units as u
import owutil
import pyregion
import matplotlib
matplotlib.use("Agg")
import aplpy
from matplotlib import pyplot as plt
from matplotlib import cm, colors

try: 
    dataset_id = sys.argv[1]
except IndexError:
    sys.exit('Usage: {} DATASET_ID'.format(sys.argv[0]))

4.9.2.3 Read in all the spectra

  • Patterns for finding all the slits in each dataset
glob_patterns = {
    'll2': 'XX1[123]*',
    'horiz': 'YY[01][019]*',
    'vert': 'XX1[56]??-2010-01-*',
    'east': 'YY1[234]*',
    'll1': 'XX0[45]*',
}
  • Generate list of FITS files
hfiles = glob.glob('Calibrated/BGsub/' + glob_patterns[dataset_id] + '-ha-vhel.fits')
nfiles = glob.glob('Calibrated/BGsub/' + glob_patterns[dataset_id] + '-nii-vhel.fits')
map_fn = 'new-slits-ha-allvels.fits'
  • Location of the region files:
slit_region_dir = 'Alba-Regions-2016-10/blue_knots_final-SLITS'
  • Figure layout parameters:
    • Spectra above, map below
    • Ha and [N II] spectra side by side
figwidth, figheight = 12, 12
subplot_windows = { 
    # x0, y0, dx, dy in fractions of figure size
    'ha': [0.08, 0.4, 0.44, 0.58],
    'nii': [0.54, 0.4, 0.44, 0.58],
    'map': [0.15, 0.06, 0.7, 0.28]
}
  • Where to center the slits in each dataset
  • And how long the total window needs to be, so that we do not cut anything off
  • These are used in the recenter method below.
XYcenter = {
    'll2': 1290.0,
    'horiz': 1475.0,
    'vert': 1050.0,
    'east': 800.0,
    'll1': 925.0,
}
XYlength = {
    'll2': 720.0,
    'horiz': 720.0,
    'vert': 720.0,
    'east': 1200.0,
    'll1': 720.0,
}
  • Style for slit PV plots
    • Contour levels show the bright parts
    • Color scale shows the faint parts
pv_contour_style = {
    'levels': [0.05, 0.0707, 0.1, 0.141, 0.2, 0.282, 0.4, 0.564],
    'colors': 'k',
}
pv_colorscale_style = {
    'aspect': 'auto', 'cmap': 'CMRmap', 'stretch': 'sqrt',
    'vmin': -0.0003, 'vmax': 0.05}
  • A black box to make the labels stand out better
blackbox = {'facecolor': 'black', 'alpha': 0.7}
  • Fix up the WCS of the slit spectra
    1. [X] Convert m/s -> km/s
      • For this, it was only necessary to fool WCSlib by changing the coordinate type from VOPT to offset, so that astropy.wcs wouldn’t apply any “fixes”
    2. [X] Convert pixels -> RA or Dec
      • This is difficult to get to work with aplpy since we can’t have a strictly 2D wcs in which only one of the axes is celestial.
      • There might be a way to get it to work with a pseudo-3d wcs and slicing it, but it seems that would only work if the pixel axes are orthogonal and in same order as the ra, dec axes
      • Might be better to use WCSaxes instead
      • So for the moment, we will ditch this
    3. [X] The horizontal slits come out upside down. This seems to be because CDELT2 is negative.
      • The only way to fix this would seem to be to
        • flip the data array
        • change CRPIX2 to be 1 + NAXIS2 - CRPIX2
        • change sign of CDELT2
      • But of course this causes a problem with the region files if they are in image coordinates
        • [2016-10-29 Sat 20:02] I tried to switch to world coordinates for the region file
          • But that was a can of worms and I am abandoning it
            1. There was a problem with aplpy for non-celestial frames, which I have fixed by monkey-patching
            2. But pyregion also has problems with these, which I can’t easily fix
        • So, back to the image coordinates
          • Which means that I have to apply the axis inversion to the regions too
          • FIXED [2016-10-29 Sat 23:58]
def fix_pv_wcs(hdr, use_celestial=False):
    newhdr = hdr.copy()

    newhdr['CTYPE1'] = 'offset'

    if use_celestial:
        for k in 'CTYPE', 'CRVAL', 'CRPIX', 'CDELT':
            newhdr[k+'2'] = hdr[k+'3A']
        newhdr['CDELT2'] *= hdr['PC3_2A']

    return newhdr

def invert_second_fits_axis(hdu):
    """Flip the second (Y) axis of a FITS image in `hdu`

Modifies the HDU in place.  Does not return a value

    """
    # Flip the y-axis of the data array, which is the first python axis
    hdu.data = hdu.data[::-1, :]
    # We need to also operate on the alternative 'A' WCS because we
    # use it for placing the OW labels
    for key in '', 'A':
        # Move reference pixel
        hdu.header['CRPIX2' + key] = (1 + hdu.header['NAXIS2']
                                      - hdu.header['CRPIX2' + key])
        # Change sign of pixel scale
        hdu.header['CDELT2' + key] *= -1.0

    return None

def invert_second_region_axis(regions, ny):
    """Flip the second (Y) axis of RegionList `regions`.  

Second argument, `ny` is length of the y-axis.  

All y coordinates are transformed to 1 + ny - y

Modifies the regions in place.  Does not return a value

    """
    for region in regions:
        region.coord_list[1] = 1 + ny - region.coord_list[1]
    return None
  • Re-use the function to Find the mask that defines the slit in image pixels
<<get_specmask>>
  • Loop over the spectra
    • Read in the HDUs from the FITS files
      • For map of slits, make one mask for all slits and another mask for only this slit
      • For slit PV diagrams, fix the WCS info
    • Set up the figure sub-panels
    • Plot into each sub-panel
      • Slit spectra:
        1. colorscale
        2. contours
        3. Regions on Ha only
        4. Coordinate grid
        5. [ ] Label of which emission line
      • Map of slit positions
        • [X] Zoom in on region of interest
        • [X] Highlight the current slit
    • Save figure file
for hfn, nfn in zip(hfiles, nfiles):
    fig = plt.figure(figsize=(figwidth, figheight))

    hhdu = fits.open(hfn)[0]
    nhdu = fits.open(nfn)[0]
    map_hdu = fits.open(map_fn)['scaled']
    hregfile = os.path.basename(hfn).replace('.fits', '.reg')
    try:
        regions = pyregion.open(os.path.join(slit_region_dir, hregfile))
    except FileNotFoundError:
        regions = None

    all_slits = np.isfinite(map_hdu.data)
    this_slit = get_specmask(WCS(hhdu.header, key='V'), map_hdu.data.shape)
    map_hdu.data[all_slits] = 1.0
    map_hdu.data[this_slit] = 10.0

    hhdu.header = fix_pv_wcs(hhdu.header)
    nhdu.header = fix_pv_wcs(nhdu.header)

    if hhdu.header['CDELT2'] < 0.0:
        invert_second_fits_axis(hhdu)
        invert_second_fits_axis(nhdu)
        if regions is not None:
            invert_second_region_axis(regions, ny=hhdu.header['NAXIS2'])

    hf = aplpy.FITSFigure(data=hhdu, figure=fig, subplot=subplot_windows['ha'])
    nf = aplpy.FITSFigure(data=nhdu, figure=fig, subplot=subplot_windows['nii'])
    mf = aplpy.FITSFigure(data=map_hdu, figure=fig, subplot=subplot_windows['map'])

    for f in hf, nf:
        f.recenter(25.0, XYcenter[dataset_id],
                   width=300.0, height=XYlength[dataset_id])
        f.show_colorscale(**pv_colorscale_style)
        f.show_contour(**pv_contour_style)
        f.add_grid()
        f.grid.set_alpha(0.3)

    if regions is not None:
        hf.show_regions(regions, text_offset=0.0)

    nf.hide_yaxis_label()
    nf.hide_ytick_labels()

    # Add labels for OW coords at y tick points
    w = WCS(nhdu.header, key='A').celestial
    yticks = nf._ax1.get_yticks()
    cc = coord.SkyCoord.from_pixel(yticks, np.zeros_like(yticks), w)
    ows = [owutil.ow_from_coord(c) for c in cc]
    x = nf._ax1.get_xticks()[-1] 
    for y, ow in zip(yticks, ows):
        nf._ax1.text(x, y, ow,
                     bbox=blackbox, color='orange', ha='right', va='center')

    # Add labels for each emission line
    hf.add_label(0.95, 0.95, 'H alpha', relative=True,
                 bbox=blackbox, size='large',
                 horizontalalignment='right', color='yellow')
    nf.add_label(0.05, 0.95, '[N II]', relative=True,
                 bbox=blackbox, size='large',
                 horizontalalignment='left', color='yellow')

    # Deal with the slit position map at the bottom
    mf.recenter(83.6875, -5.4167, width=0.15, height=0.15)
    mf.show_colorscale(aspect='equal', vmin=0.0, vmax=5.0, cmap='RdPu')
    mf.add_grid()
    mf.grid.set_color('black')
    mf.grid.set_alpha(0.1)
    # Include the WFI map for orientation
    mf.show_contour('WFI-Images/Orion_H_A_deep.fits',
                    levels=[3, 4.5, 6, 9, 12, 15, 20, 40, 80, 160, 320],
                    filled=True, alpha=0.3, cmap='Blues',
                    norm=colors.LogNorm(), vmin=3.0, vmax=400.0, overlap=True,
    )

    figfile = hfn.replace('-ha-vhel.fits', '-plot.jpg')
    fig.savefig(figfile, dpi=300)
    print(figfile) 

5 Slits we haven’t used

5.1 Spatial alignment of [O III] slit in W region

  • spec228-oiii.fits
    • Peak brightness at y = 401.6
  • spm231.fits (image + slit in [O III] filter)
    • Peak brightness at y = 339.9
    • Declination: -5:24:05.49
  • Difference: 61.7

5.2 Message from Alba [2016-04-04 Mon]

I have now reviewed all the observations that we have. I have some doubts about positions that have not been used for the Ha and [N II] cubes, and I don't know why. To summarize:

- In the 2007a observations there are 9 unused positions: spec014-transf.fits spec016-transf.fits spec021-transf.fits spec025-transf.fits spec029-transf.fits spec036-transf.fits spec040-transf.fits spec044-transf.fits spec078-transf.fits

- In 2007b only one is unused (spec023-transf.fits), which is horizontal while all the others are vertical. I imagine that is why it was not used?

- In 2013b, spm087_bcrx.fits is unused.

As for the [O III] and [S II] observations, there are 6 spectra taken in the western zone that could be useful to us. They are these:

- From 2007a: spec060-transf.fits in [O III], spec065-transf.fits in [S II]

- From 2010: spm082-083h.fits in [S II], spm086-088h.fits in [S II], spm225-226h.fits in [S II], spm228-229h.fits in [O III]

They are not enough to make a cube, but the spectra could still be useful for the analysis of the knots (not for the red bow shocks, since none of them crosses those positions). I'm sending you a .reg file with the six positions (some of them overlap!).

What do you think? Will you look it over so we can discuss it tomorrow?

6 New figures for the paper

6.1 The calibration figures

  • Alba didn’t like the color scheme (all pinks and purples)
  • But to change it I need to re-run the calibration program
    • Although I only need to do it for the slits that we are going to show
    • [ ] So find out from Alba what they are

6.2 The isovelocity images

  • Use Aplpy to make 3-color plots
  • This seems to be more involved than I remember it, but never mind

6.2.1 Get file names and brightness limits from ds9

6.2.1.1 Utility script for getting info from DS9 about current frame

for chan in red green blue; do
    echo "#### $chan channel ####"
    xpaset -p ds9 rgb $chan
    xpaget ds9 file
    xpaget ds9 scale limits
done

6.2.1.2 Ha line core

6.2.1.3 [N II] line core

6.2.1.4 H a near blue

6.2.1.5 H a far blue

6.2.1.6 H a wide blue

6.2.1.7 [N II] wide blue

6.2.2 DONE Make RGB isovelocity image with AplPy

  • Center of FOV converted to degrees:
    • RA 5:34:45 = 83.6875 deg
    • Dec -5:25:00 = -5.41666666667 deg
  • Box size
    • RA 1 minute = 0.25 deg
    • Dec 10 arcmin = 0.166666666667 deg
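
These conversions can be double-checked with astropy (a quick sanity check, separate from the figure script below):

from astropy.coordinates import Angle
import astropy.units as u

print(Angle('5h34m45s').to(u.deg))   # 83.6875 deg
print(Angle('-5d25m0s').to(u.deg))   # -5.4166... deg
print(Angle('0h1m0s').to(u.deg))     # 0.25 deg
print(Angle('0d10m0s').to(u.deg))    # 0.1666... deg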
from astropy.io import fits
import aplpy
figfile = 'rgb-isovel-{}-{}.pdf'.format(SPECIES, SUFFIX)
rgbfiles = ['new-slits-{}{}-multibin.fits'.format(SPECIES, vrange)
            for vrange in RANGES.split()]

# Unpack the channel brightness limits from table
[r1, r2], [g1, g2], [b1, b2] = LIMTAB

# aplpy can only deal with the primary headers, so sort that out first
template = 'rgb-for-aplpy-{}.fits'
channels = ['red', 'green', 'blue']
newfiles = [template.format(chan) for chan in channels]
for newfn, fn in zip(newfiles, rgbfiles):
    hdu = fits.open(fn)['SCALED']
    hdu.writeto(newfn, overwrite=True)

aplpy.make_rgb_image(newfiles, 'rgb-for-aplpy.png',
                     vmin_r=r1, vmin_g=g1, vmin_b=b1,
                     vmax_r=r2, vmax_g=g2, vmax_b=b2,
                     stretch_r=STRETCH, stretch_g=STRETCH, stretch_b=STRETCH, 
                     make_nans_transparent=True)
f = aplpy.FITSFigure(newfiles[0])
f.show_rgb('rgb-for-aplpy.png')
f.recenter(83.6875, -5.4167, width=0.25, height=0.167)
if '.reg' in REGIONS:
    f.show_regions(REGIONS)
f.add_grid()
f.grid.set_color('white')
f.grid.set_alpha(0.2)
f.save(figfile)

Channel brightness limits (vmin, vmax) for the R, G, B channels:

| 0.075 | 0.7 |
| 0.3   | 3.3 |
| 0.045 | 1.5 |

Now we have a sqrt stretch on the intensity

| 0.55 | 4  |
| 1.2  | 12 |
| 0.45 | 7  |

Old limits when we had a linear stretch

| 0.5 | 2.4 |
| 1.0 | 6.4 |
| 0.4 | 3.9 |

| 0.05 | 1.0 |
| 0.0  | 0.3 |
| 0.0  | 0.1 |

| 0.01  | 0.3  |
| 0.002 | 0.12 |
| 0.001 | 0.05 |

| 0.0 | 0.1 |
| 0.0 | 0.2 |
| 0.1 | 0.6 |

| 0.0 | 0.03 |
| 0.0 | 0.1  |
| 0.1 | 0.6  |

| 0.0  | 0.02 |
| 0.0  | 0.05 |
| 0.02 | 0.2  |

[2016-10-13 Thu] New versions of these images with the knot regions superimposed. First, Alba’s version of the knot regions in Alba-Regions-2016-10/blue_knots_final.reg, which have somewhat random colors.

Next, my original version in blue-knots.reg, which have a somewhat logical color scheme, but no labels

And thirdly, another version I found in new_blue-knots_id.reg

6.2.3 Figure showing the slit positions

import matplotlib
matplotlib.use("Agg")
from astropy.io import fits
from astropy.wcs import WCS
import aplpy
import numpy as np
from matplotlib import cm, colors

def extract_window_hdu(hdu, x1=0.0, x2=0.6, y1=0.3, y2=1.0):
    """Extract a window from the image in `hdu`

    The window is specified by corners `x1`, `x2`, `y1`, `y2` in
    fractional coordinates. 
    Returns a new `astropy.io.fits.ImageHDU`

    """
    ny, nx = hdu.data.shape
    xslice = slice(int(x1*nx), int(x2*nx))
    yslice = slice(int(y1*ny), int(y2*ny))
    w = WCS(hdu.header)
    newdata = hdu.data[yslice, xslice]
    newheader = w.slice((yslice, xslice)).to_header()
    return fits.ImageHDU(data=newdata, header=newheader)



fn = 'WFI-Images/Orion_H_A_deep.fits'
slit_fn = 'new-slits-ha-allvels.fits'
# cmap = cm.PuRd
# cmap = cm.magma_r
cmap = cm.copper_r
slit_hdu = fits.open(slit_fn)['scaled']
shallow_hdu = fits.open(fn.replace('deep', 'shallow'))[0]
m = np.isfinite(slit_hdu.data)
slit_hdu.data[m] = 1.0
slit_hdu.data[~m] = 0.0
figfile = 'fov-with-slits.pdf'
f = aplpy.FITSFigure(fn)
f.recenter(83.7375, -5.4167, width=0.35, height=0.25)
f.show_grayscale(pmin=65.0, pmax=95, stretch='sqrt')
f.show_contour(extract_window_hdu(shallow_hdu),
               levels=[20.0, 30.0, 40.0, 50.0,
                       70.0, 100.0, 200.0, 400.0, 800.0],
               norm=colors.LogNorm(), vmin=0.3, vmax=1000.0,
               cmap=cmap, filled=True, alpha=0.5, overlap=True)
f.show_contour(slit_hdu,
               levels=[0.01, 10.0],
               filled=True, alpha=0.4, colors='#00a0ff', overlap=True)
f.add_grid()
f.grid.set_color('white')
f.grid.set_alpha(0.2)
f.save(figfile)
f.save(figfile.replace('.pdf', '.jpg'), dpi=300, format='jpeg')
print(figfile)

Run this in the shell since it can take 2 minutes if using filled contours

time python fov-with-slits.py

6.3 Finding charts

6.3.1 TODO Finding chart for Red Arcs

  • This is going to use the WFI image as a backdrop, so it is similar to the above figure
  • How do we want to put the regions on?
import sys
import matplotlib
matplotlib.use("Agg")
from astropy.io import fits
import aplpy
import numpy as np
from matplotlib import cm, colors

fn = 'WFI-Images/Orion_H_A_deep.fits'
figfile = sys.argv[0].replace('.py', '.pdf')
regfile = 'Alba-Regions-2016-10/bowshocks_arcs.reg'
f = aplpy.FITSFigure(fn)
f.recenter(83.6458, -5.4167, width=0.15, height=0.15)
f.show_grayscale(pmin=65.0, pmax=95, stretch='sqrt')
f.add_grid()
f.grid.set_color('white')
f.grid.set_alpha(0.2)
f.show_regions(regfile)
f.save(figfile)
print(figfile)
python red-bow-finding-chart.py
INFO: Auto-setting vmin to 3.191e+00 [aplpy.core]
INFO: Auto-setting vmax to 2.112e+01 [aplpy.core]
INFO: Auto-setting resolution to 292.762 dpi [aplpy.core]
red-bow-finding-chart.pdf

6.3.2 DONE Finding chart for blue knots

import sys
import matplotlib
matplotlib.use("Agg")
from astropy.io import fits
import aplpy
import numpy as np
from matplotlib import cm, colors

fn = 'WFI-Images/Orion_H_A_deep.fits'
figfile = sys.argv[0].replace('.py', '.pdf')
regfile = 'Alba-Regions-2016-10/blue_knots_final.reg'
f = aplpy.FITSFigure(fn)
f.recenter(83.6458, -5.4167, width=0.15, height=0.15)
f.show_grayscale(pmin=65.0, pmax=95, stretch='sqrt')
f.add_grid()
f.grid.set_color('white')
f.grid.set_alpha(0.2)
f.show_regions(regfile)
f.save(figfile)
print(figfile)
python blue-knot-finding-chart.py

blue-knot-finding-chart.pdf

6.3.3 Finding chart for new Will knots

import sys
import logging
logging.disable(logging.INFO)
import matplotlib
matplotlib.use("Agg")
from astropy.io import fits
import aplpy
import numpy as np
from matplotlib import cm, colors

try:
    vclass = sys.argv[1]
except IndexError:
    vclass = 'fast'

fn = 'WFI-Images/Orion_H_B_deep.fits'
figfile = sys.argv[0].replace('.py', '-{}.pdf'.format(vclass))
# regfile = 'Will-Regions-2016-12/knots-{}-wcs.reg'.format(vclass)
regfile = 'Will-Regions-2016-12/bars-from-boxes-{}-groups-wcs.reg'.format(vclass)
f = aplpy.FITSFigure(fn)
f.recenter(83.6458, -5.4167, width=0.15, height=0.15)
f.show_grayscale(pmin=65.0, pmax=95, stretch='sqrt')
f.add_grid()
f.grid.set_color('red')
f.grid.set_alpha(0.2)
f.show_regions(regfile)
f.save(figfile)
print(figfile, end='')
python new-knot-finding-chart.py ultra

new-knot-finding-chart-ultra.pdf

python new-knot-finding-chart.py fast

new-knot-finding-chart-fast.pdf

python new-knot-finding-chart.py slow

new-knot-finding-chart-slow.pdf

6.3.3.1 Color the regions according to velocity

  • If we use aplpy’s FitsFigure.show_regions, then we have the problem that pyregion cannot deal with the colors of the form #fff
  • Possible plan to get round this
    • Read the region file first with load_regions from boxbar_utils.py, which removes hash from colors (see down here)
    • But then we have to decide between options:
      1. Somehow work out how to represent the colors in the pyregion.ShapeList so that FitsFigure.show_regions will automatically do the right thing
      2. Or, use FitsFigure.show_polygons instead, in which case we would have to do the work of extracting the coordinates and attributes from the ShapeList
      3. Or even, ditch aplpy entirely and use wcsaxes instead. In that case, we would just use matplotlib’s normal ax.add_patch with a matplotlib.patches.Polygon(xy, ..., transform=ax.get_transform('world'))
        • Note that this is now incorporated as astropy.visualization
    • I think that 2 or 3 would be easier and more robust
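
A minimal sketch of option 3 (the vertices here are hypothetical, just to show the world-coordinate transform):

import numpy as np
from astropy.io import fits
from astropy.wcs import WCS
from matplotlib import pyplot as plt
from matplotlib.patches import Polygon

hdu = fits.open('WFI-Images/Orion_H_B_deep.fits')[0]
ax = plt.subplot(projection=WCS(hdu.header))
ax.imshow(hdu.data, origin='lower', cmap='gray_r')
# Triangle with vertices given as (RA, Dec) in degrees; the 'world'
# transform converts them to pixel positions for drawing
xy = np.array([[83.64, -5.42], [83.65, -5.42], [83.645, -5.41]])
ax.add_patch(Polygon(xy, closed=True, edgecolor='cyan', facecolor='none',
                     transform=ax.get_transform('world')))
plt.savefig('polygon-sketch.pdf')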

6.3.3.2 Re-implementation using wcsaxes

A sketch that replaces each aplpy call with its wcsaxes, matplotlib, and pyregion equivalents:

import sys
import logging
logging.disable(logging.INFO)
import matplotlib
matplotlib.use("Agg")
from astropy.io import fits
from astropy.wcs import WCS
from astropy.wcs.utils import proj_plane_pixel_scales
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm, colors
import pyregion

try:
    vclass = sys.argv[1]
except IndexError:
    vclass = 'fast'

fn = 'WFI-Images/Orion_H_B_deep.fits'
figfile = sys.argv[0].replace('.py', f'-{vclass}.pdf')
regfile = f'Will-Regions-2016-12/knots-{vclass}-wcs.reg'

hdu, = fits.open(fn)
wcs = WCS(hdu.header)
fig, ax = plt.subplots(subplot_kw=dict(projection=wcs))

# Grayscale between the 65 and 95 percentile brightness with a sqrt
# stretch, equivalent to show_grayscale(pmin=65.0, pmax=95, stretch='sqrt')
vmin, vmax = np.nanpercentile(hdu.data, [65.0, 95.0])
ax.imshow(hdu.data, origin='lower', cmap='gray_r',
          norm=colors.PowerNorm(0.5, vmin=vmin, vmax=vmax))

# Equivalent of recenter(): convert the FOV center and size from
# degrees to pixel limits
x0, y0 = wcs.all_world2pix(83.6458, -5.4167, 0)
xscale, yscale = proj_plane_pixel_scales(wcs)  # deg/pixel
ax.set_xlim(x0 - 0.075/xscale, x0 + 0.075/xscale)
ax.set_ylim(y0 - 0.075/yscale, y0 + 0.075/yscale)

# Coordinate grid
ax.coords.grid(color='red', alpha=0.2)

# Draw the regions as matplotlib artists, with the usual workaround
# for colors of the form #fff that pyregion cannot parse
with open(regfile) as f:
    region_string = f.read().replace('color=#', 'color=')
regions = pyregion.parse(region_string).as_imagecoord(hdu.header)
patches, artists = regions.get_mpl_patches_texts()
for p in patches:
    ax.add_patch(p)
for a in artists:
    ax.add_artist(a)

fig.savefig(figfile)
print(figfile, end='')

7 Large scale structures in Orion

7.1 TODO The giant red bow

  • Connection with red-shifted “bar” to the S of LL 1?
    • You can sort of imagine the continuation of the lower wing of the bow reaching this feature
    • But to be honest I do not think they are related
      • So far out in the wing of the bowshock, the velocities should not be significantly redshifted anyway
  • Connection to foreground scattering clouds
    • There is a bright line of scattering cloud that runs parallel to the N wing of the bow, but offset to the east by a few arc minutes
    • Perhaps it is related
    • We could maybe do an image of F435W / F555W to highlight the scattering

8 Knot measurements by Alba

[2016-01-14 Thu]

8.1 Plan of things to do

8.1.1 DONE Tasks for Will

  • [X] Move all the original spectra to somewhere that Alba can find them
  • [X] Show Alba where to find the HST images

8.1.2 TODO More tasks for Will

  • [X] Check positions of red bowshock from Alba region files
  • [ ] Look for previous mentions of the “wall”
  • [X] Look into Alba’s other region files, such as the new_X.reg files
  • [ ] try out google hangouts

8.1.3 Identify coherent structures in the map

  • [ ] Plot positions of Alba’s knots

8.2 New tables [2016-01-19 Tue]

8.2.1 Dealing with the ajustes files

8.2.1.1 Clean up and rationalize the data

First convert it to tab-separated and strip out the anomalous lines with metadata about each knot’s fit

for emline in 'nii', 'ha':
    rslt = []
    print('Cleaning', emline, 'ajustes data')
    with open('ajustes_{}.dat'.format(emline)) as f:
        stanzas = f.read().split('\n\n')
        print('Found {} stanzas'.format(len(stanzas)))
        col_labels = stanzas[0].split('\n')[0].split()
        rslt.append('\t'.join(col_labels))
        for stanza in stanzas[1:]:
            # skip first line since it is only metadata
            lines = stanza.split('\n') 
            for line in lines[1:]:
                values = line.split()
                # pad short lines with '-' placeholders
                values += ['-'] * (len(col_labels) - len(values))
                rslt.append('\t'.join(values))

    with open('ajustes_{}_cleaned.tab'.format(emline), 'w') as f:
        f.write('\n'.join(rslt))
python clean_ajustes_data.py

Then compute the knot and background fluxes in each line and form the [N II]/Ha ratios:
import numpy as np
from matplotlib import pyplot as plt
from astropy.table import Table
import seaborn as sns
hatab = Table.read('ajustes_ha_cleaned.tab',
                 format='ascii.tab'
)
niitab = Table.read('ajustes_nii_cleaned.tab',
                 format='ascii.tab'
)
outtab = [
    ['knot', 'knot flux Ha', 'BG flux Ha', 'knot flux N II', 'BG flux N II', 'knot ratio', 'BG ratio'], None,
]
knots = sorted(set(hatab['KNOT']))
for knot in knots:
    haflux = 0.0
    bghaflux = 0.0
    niiflux = 0.0
    bgniiflux = 0.0
    for _, zona, _, _, flux, _, _ in hatab[hatab['KNOT']==knot]:
        if zona.startswith('Core'):
            bghaflux += flux
        else:
            haflux += flux
    for _, zona, _, _, flux, _, _ in niitab[niitab['KNOT']==knot]:
        if zona.startswith('Core'):
            bgniiflux += flux
        else:
            niiflux += flux
    knot_ratio = niiflux/haflux
    bg_ratio = bgniiflux/bghaflux

    outtab.append([knot,
                   '{:.2f}'.format(haflux),
                   '{:.1f}'.format(bghaflux),
                   '{:.2f}'.format(niiflux),
                   '{:.1f}'.format(bgniiflux),
                   '{:.3f}'.format(knot_ratio),
                   '{:.3f}'.format(bg_ratio)])
| knot     | knot flux Ha | BG flux Ha | knot flux N II | BG flux N II | knot ratio | BG ratio |
|----------+--------------+------------+----------------+--------------+------------+----------|
| 050-422  |  2.14 | 186.6 | -1.00 | 17.8 | -0.467 | 0.095 |
| 4242-458 |  4.21 |  98.4 |  0.81 | 32.3 |  0.191 | 0.328 |
| 4244-554 | 12.33 | 163.1 | 10.00 | 54.4 |  0.811 | 0.334 |
| 4245-742 |  0.47 |  21.5 |  0.12 |  7.3 |  0.254 | 0.339 |
| 4252-608 |  0.99 |  42.2 |  0.39 | 15.6 |  0.388 | 0.369 |
| 4252-616 | 10.19 |  80.2 |  6.39 | 30.3 |  0.627 | 0.377 |
| 4254-551 |  6.41 |  70.6 |  1.99 | 23.3 |  0.310 | 0.330 |
| 4258-744 |  2.01 |  36.6 |  0.58 | 12.4 |  0.289 | 0.339 |
| 4260-624 | 18.63 | 175.8 |  8.11 | 74.7 |  0.435 | 0.425 |
| 4261-633 |  6.14 | 149.6 |  3.27 | 65.7 |  0.532 | 0.439 |
| 4261-656 |  0.49 |  23.0 |  0.16 |  7.2 |  0.332 | 0.314 |
| 4263-460 |  1.36 |  57.8 |  0.24 | 17.2 |  0.178 | 0.298 |
| 4265-637 |  6.00 |  39.9 |  1.80 |  9.8 |  0.300 | 0.246 |
| 4266-615 | 18.44 |  77.7 |  5.78 | 23.9 |  0.313 | 0.308 |
| 4268-413 |  1.26 |  68.8 |  0.36 | 22.7 |  0.286 | 0.330 |
| 4272-545 |  3.66 |  69.8 |  0.98 | 23.4 |  0.267 | 0.336 |
| 4272-628 |  0.48 |  18.3 |  0.08 |  6.1 |  0.159 | 0.330 |
| 4273-625 |  2.52 |  32.4 |  0.78 | 10.4 |  0.310 | 0.321 |
| 4274-439 |  1.14 |  54.9 |  0.14 | 15.7 |  0.126 | 0.286 |
| 4277-539 |  0.98 |  22.4 |  0.21 |  5.8 |  0.210 | 0.259 |
| 4280-551 |  5.47 |  44.5 |  1.69 | 12.3 |  0.309 | 0.276 |
| 4280-658 |  0.42 |  21.8 |  0.09 |  7.5 |  0.220 | 0.344 |
| 4284-308 |  3.42 |  73.5 |  1.46 | 21.0 |  0.426 | 0.286 |
| 4285-444 |  0.59 |  36.6 |  0.11 | 10.3 |  0.186 | 0.282 |
| 4289-524 |  0.82 |  52.0 |  0.22 | 16.7 |  0.272 | 0.320 |
| 4292-323 |  3.30 |  76.9 |  2.02 | 22.3 |  0.613 | 0.290 |
| 4293-557 |  1.11 |  27.3 |  0.22 |  8.0 |  0.195 | 0.295 |
| 4320-626 |  1.54 |  39.7 |  0.24 | 11.4 |  0.156 | 0.288 |
| 4331-453 |  1.63 |  47.6 |  0.31 | 13.0 |  0.191 | 0.273 |
| 4332-401 |  0.15 |  24.8 |  0.06 |  6.3 |  0.392 | 0.255 |
| 4335-207 | 94.23 | 179.3 | 22.59 | 54.3 |  0.240 | 0.303 |
| 4359-521 |  1.90 |  52.1 |  0.31 | 14.2 |  0.164 | 0.273 |
| 4374-457 |  1.84 |  34.7 |  0.48 |  8.5 |  0.261 | 0.243 |
| 4376-329 |  0.52 |  38.7 |  0.09 | 10.8 |  0.172 | 0.279 |
| 4377-526 |  0.52 |  33.7 |  0.06 |  7.4 |  0.125 | 0.219 |
| 4378-414 |  0.44 |  25.3 |  0.05 |  6.1 |  0.105 | 0.243 |
| 4378-434 |  0.22 |  43.8 | -1.00 | 11.2 | -4.545 | 0.256 |
| 4383-411 |  0.39 |  37.1 |  0.04 | 10.0 |  0.099 | 0.270 |
| 4385-243 |  0.33 |  36.8 | -1.00 |  9.1 | -3.054 | 0.248 |
| 4389-327 |  0.59 |  68.1 |  0.08 | 17.9 |  0.129 | 0.263 |
| 4396-541 |  0.87 |  74.4 | -1.00 | 14.6 | -1.155 | 0.196 |
| 4402-400 |  0.54 |  55.5 |  0.10 | 12.1 |  0.190 | 0.219 |
| 4405-349 |  0.14 |  36.1 | -1.00 |  8.1 | -7.052 | 0.225 |
| 4406-330 |  0.24 |  39.4 |  0.03 |  9.0 |  0.131 | 0.229 |
| 4407-229 |  0.19 |  29.3 |  0.03 |  7.7 |  0.168 | 0.261 |
| 4409-243 |  0.18 |  48.5 | -1.00 | 14.1 | -5.522 | 0.291 |
| 4456-324 |  0.74 |  79.3 | -1.00 | 19.0 | -1.354 | 0.239 |
  • We export this with C-c t e to alba-knots-ratios.tab
  • And then graph the ratios in Plot [N II]/Ha ratio in knots vs nebula

8.2.2 Original files

8.2.3 Message from Alba [2016-01-18 Mon]

I am sending you all the fits (separating Ha and [NII]). For each knot I specify whether I fit the knot together with the core (when they are close in velocity) or fit them separately, and also the total number of Gaussians used.

I am also sending the file with the sizes. I calculated them taking into account the aperture used and the resolution given in the headers of each 2D spectrum. The third column indicates whether the slit is vertical or horizontal.

By the way, I forgot to mention that there are two somewhat problematic knots: 4272-628, which has a very different velocity in Ha and [NII], so I may not be measuring the same thing in both lines; and 4335-207, where the knot and the core are not resolved and the fit to what counts as the knot is very different in each line. I do not think it is a good idea to compare the two lines there.

Finally, the program I use to fit the parameters does not output the parameter errors, only the chi2.

8.3 Things that are missing

  • [X] Error estimates
    • not available
  • [X] Sizes of knots
    • How many pixels were taken along slit?
    • Do any cross multiple slits?
      • only measured in brightest slit
  • [X] Background brightness

8.4 Table of velocity, fwhm, flux

| knot     | V Ha   | V [NII] | W Ha  | W [NII] | F Ha  | F [NII]   | NOTAS:    | [N II]/Ha |
|----------+--------+---------+-------+---------+-------+-----------+-----------+-----------|
| 050-422  | -38.91 |         | 27.06 |         |  2.14 | 7.47E-004 | [NII]=ms  | 3.5e-4    |
| 4242-458 | -20.90 | -22.54  | 30.59 | 17.71   |  4.21 | 0.81      |           | 0.19      |
| 4244-554 | -20.02 | -12.31  | 27.03 | 23.53   | 12.33 | 10.00     |           | 0.81      |
| 4245-742 | -61.27 | -62.00  | 32.27 | 14.96   |  0.47 | 0.12      |           | 0.26      |
| 4252-608 | -80.00 | -82.25  | 29.17 | 19.31   |  0.99 | 0.39      |           | 0.39      |
| 4252-616 | -21.62 | -18.73  | 28.24 | 28.24   | 10.19 | 6.39      |           | 0.63      |
| 4254-551 | -68.29 | -68.67  | 26.12 | 17.48   |  6.41 | 1.99      |           | 0.31      |
| 4258-744 | -59.53 | -62.18  | 33.56 | 27.97   |  2.01 | 0.58      |           | 0.29      |
| 4260-624 | -58.57 | -60.41  | 35.15 | 33.22   | 18.63 | 8.11      |           | 0.44      |
| 4261-633 | -59.68 | -60.86  | 31.01 | 26.47   |  6.14 | 3.27      |           | 0.53      |
| 4261-656 | -58.25 | -64.69  | 25.29 | 18.58   |  0.49 | 0.16      |           | 0.33      |
| 4263-460 | -31.88 | -32.05  | 32.94 | 23.53   |  1.36 | 0.24      |           | 0.18      |
| 4265-637 | -53.71 | -55.21  | 37.65 | 21.99   |  6.00 | 1.80      |           | 0.30      |
| 4266-615 | -57.04 | -57.68  | 31.83 | 23.17   | 18.44 | 5.78      |           | 0.31      |
| 4268-413 | -63.15 | -64.41  | 28.02 | 16.38   |  1.26 | 0.36      |           | 0.29      |
| 4272-545 | -71.89 | -73.00  | 28.32 | 19.28   |  3.66 | 0.98      |           | 0.27      |
| 4272-628 | -65.06 | -75.11  | 32.94 | 18.28   |  0.48 | 0.08      |           | 0.17      |
| 4273-625 | -53.35 | -55.76  | 28.24 | 27.47   |  2.52 | 0.78      |           | 0.31      |
| 4274-439 | -37.60 | -44.98  | 28.24 | 12.59   |  1.14 | 0.14      |           | 0.12      |
| 4277-539 | -77.01 | -77.05  | 28.24 | 14.41   |  0.98 | 0.21      |           | 0.21      |
| 4280-551 | -64.00 | -64.11  | 30.26 | 20.76   |  5.47 | 1.69      |           | 0.31      |
| 4280-658 | -61.88 | -63.76  | 28.24 | 24.65   |  0.42 | 0.09      |           | 0.21      |
| 4284-308 | -36.73 | -39.87  | 28.24 | 20.12   |  3.42 | 1.46      |           | 0.43      |
| 4285-444 | -60.10 | -60.23  | 25.87 | 22.01   |  0.59 | 0.11      |           | 0.19      |
| 4289-524 | -71.51 | -73.52  | 25.16 | 17.25   |  0.82 | 0.22      |           | 0.27      |
| 4292-323 | -54.91 | -56.00  | 28.58 | 23.53   |  3.30 | 2.02      |           | 0.61      |
| 4293-557 | -40.98 | -41.06  | 32.94 | 18.83   |  1.11 | 0.22      |           | 0.20      |
| 4320-626 | -64.73 | -65.89  | 27.27 | 12.33   |  1.54 | 0.24      |           | 0.16      |
| 4331-453 | -30.21 | -32.39  | 28.24 | 18.83   |  1.63 | 0.31      |           | 0.19      |
| 4332-401 | -70.99 | -69.93  | 15.27 | 13.83   |  0.15 | 0.06      |           | 0.40      |
| 4335-207 | -19.36 | -21.52  | 27.50 | 15.61   | 94.23 | 22.59     |           | 0.24      |
| 4359-521 | -37.00 | -38.58  | 31.18 | 16.00   |  1.90 | 0.31      |           | 0.16      |
| 4374-457 | -20.08 | -16.17  | 23.53 | 18.86   |  1.84 | 0.48      |           | 0.26      |
| 4376-329 | -63.16 | -66.86  | 32.94 | 19.15   |  0.52 | 0.09      |           | 0.17      |
| 4377-526 | -70.96 | -71.50  | 28.24 | 13.30   |  0.52 | 0.06      |           | 0.12      |
| 4378-414 | -67.13 | -69.64  | 31.14 | 17.17   |  0.44 | 0.05      |           | 0.11      |
| 4378-434 | -68.21 |         | 28.24 |         |  0.22 | 0.01      | [NII]=FBL | 0.05      |
| 4383-411 | -60.48 | -62.30  | 37.26 | 11.48   |  0.39 | 0.04      |           | 0.10      |
| 4385-243 | -77.62 |         | 31.38 |         |  0.33 | 0.02      | [NII]=FBL | 0.06      |
| 4389-327 | -63.34 |         | 34.58 |         |  0.59 | 5.68E-004 | [NII]=ms  | 9.6e-4    |
| 4396-541 | -90.00 | -91.25  | 23.59 | 13.72   |  0.87 | 0.10      |           | 0.11      |
| 4402-400 | -75.26 | -78.97  | 23.53 | 13.15   |  0.54 | 0.10      |           | 0.19      |
| 4405-349 | -78.13 |         | 20.73 |         |  0.14 | 7.93E-004 | [NII]=ms  | 5.7e-3    |
| 4406-330 | -74.93 | -77.70  | 23.53 | 10.03   |  0.24 | 0.03      |           | 0.13      |
| 4407-229 | -78.61 | -80.39  | 25.93 | 11.72   |  0.19 | 0.03      |           | 0.16      |
| 4409-243 | -67.17 |         | 22.16 |         |  0.18 | 5.41E-004 | [NII]=ms  | 3.0e-3    |
| 4456-324 | -68.95 |         | 19.23 |         |  0.74 | 8.39E-004 | [NII]=ms  | 1.1e-3    |

8.5 Key to Notes column

[NII]=FBL
Flux below the line ("flujo bajo línea")
[NII]=ms
Upper limit? ("¿límite superior?")

8.6 New version of the table, copy-pasted from PDF

  • Flux upper limits are indicated by 0.0 in the F(Ha) or F([N II]) column, with the limit appearing in the dF column
  • Exported to alba-knots-frompdf.tab
| knot     | V(Ha)  | V([N II]) | W(Ha) | W([N II]) | F(Ha) | dF(Ha) | F([N II]) | dF([N II]) | size |
|----------+--------+-----------+-------+-----------+-------+--------+-----------+------------+------|
| 050-422  | -38.91 | -38.21    | 27.06 | 15.99     |  2.14 | 0.16   | 0.0       | 0.10       |  4.4 |
| 4242-458 | -20.90 | -22.54    | 30.59 | 17.71     |  4.21 | 0.42   | 0.91      | 0.04       |  9.8 |
| 4244-554 | -16.55 | -15.23    | 28.24 | 21.23     | 15.85 | 1.49   | 8.26      | 0.27       | 14.2 |
| 4245-742 | -61.27 | -61.96    | 32.27 | 14.96     |  0.47 | 0.07   | 0.13      | 0.06       |  3.1 |
| 4252-608 | -80.10 | -82.25    | 29.17 | 19.31     |  0.99 | 0.10   | 0.44      | 0.02       |  3.7 |
| 4252-616 | -16.75 | -17.62    | 37.65 | 30.12     | 13.83 | 1.02   | 7.71      | 0.79       |  6.9 |
| 4254-551 | -68.29 | -68.67    | 26.12 | 17.48     |  6.41 | 0.11   | 2.25      | 0.12       |  8.1 |
| 4258-744 | -59.53 | -62.18    | 33.56 | 27.97     |  2.01 | 0.17   | 0.66      | 0.09       |  6.9 |
| 4260-612 | -64.32 | -63.24    | 38.75 | 34.10     |  3.59 | 0.34   | 1.43      | 0.26       |  4.4 |
| 4261-352 | -48.29 | -47.96    | 22.98 | 14.99     |  0.0  | 0.42   | 0.0       | 0.11       | 14.3 |
| 4261-422 | -57.81 | -57.74    | 35.09 | 28.40     |  1.31 | 0.11   | 0.30      | 0.04       |  3.1 |
| 4261-626 | -59.67 | -61.19    | 35.22 | 26.62     |  5.05 | 0.30   | 2.99      | 0.22       |  5.6 |
| 4263-460 | -31.88 | -32.05    | 32.94 | 23.53     |  1.36 | 0.22   | 0.27      | 0.06       |  5.7 |
| 4265-630 | -54.69 | -55.00    | 34.06 | 29.13     | 10.08 | 0.74   | 3.77      | 0.42       | 10.6 |
| 4266-615 | -57.04 | -57.68    | 31.83 | 23.17     | 18.44 | 0.62   | 6.53      | 0.44       | 16.9 |
| 4268-414 | -42.08 | -41.91    | 37.92 | 22.98     |  0.50 | 0.13   | 0.0       | 0.13       |  3.1 |
| 4271-440 | -60.68 | -64.06    | 30.20 | 16.39     |  1.26 | 0.24   | 0.40      | 0.03       |  5.6 |
| 4272-545 | -64.20 | -63.92    | 28.33 | 19.96     |  7.31 | 0.28   | 2.64      | 0.27       | 10.0 |
| 4272-622 | -54.43 | -55.64    | 31.17 | 21.42     |  2.53 | 0.15   | 0.84      | 0.09       |  6.9 |
| 4273-639 | -65.21 | -64.84    | 32.94 | 19.99     |  0.48 | 0.05   | 0.0       | 0.12       |  4.4 |
| 4273-704 | -57.61 | -64.84    | 33.26 | 21.98     |  1.40 | 0.25   | 0.0       | 0.31       | 16.8 |
| 4277-539 | -77.01 | -77.05    | 28.24 | 14.41     |  0.98 | 0.09   | 0.23      | 0.05       |  4.4 |
| 4280-551 | -72.92 | -76.29    | 27.42 | 22.45     |  0.63 | 0.06   | 0.21      | 0.02       |  3.7 |
| 4284-308 | -36.73 | -39.87    | 28.24 | 20.12     |  3.42 | 0.44   | 1.65      | 0.13       |  5.6 |
| 4285-444 | -60.15 | -59.97    | 25.87 | 22.01     |  0.59 | 0.08   | 0.12      | 0.04       |  4.4 |
| 4289-524 | -71.51 | -73.52    | 25.16 | 17.25     |  0.82 | 0.08   | 0.25      | 0.04       |  4.4 |
| 4289-647 | -32.43 | -28.28    | 28.24 | 20.71     |  0.50 | 0.17   | 0.18      | 0.03       |  4.4 |
| 4292-323 | -54.91 | -56.00    | 28.58 | 23.53     |  3.30 | 0.26   | 2.29      | 0.19       |  6.9 |
| 4293-557 | -41.21 | -41.14    | 32.94 | 18.83     |  1.11 | 0.08   | 0.24      | 0.03       |  4.8 |
| 4320-626 | -64.73 | -65.89    | 27.27 | 12.33     |  1.54 | 0.10   | 0.27      | 0.02       |  6.9 |
| 4331-453 | -30.21 | -32.39    | 28.24 | 18.83     |  1.63 | 0.21   | 0.35      | 0.08       |  6.8 |
| 4332-401 | -71.00 | -69.93    | 15.27 | 13.83     |  0.15 | 0.05   | 0.07      | 0.02       |  3.1 |
| 4334-560 | -67.06 | -72.08    | 16.64 | 17.10     |  0.16 | 0.04   | 0.09      | 0.04       |  5.6 |
| 4335-207 | -19.36 | -21.52    | 27.50 | 15.61     | 94.23 | 1.29   | 25.53     | 0.55       | 23.0 |
| 4359-521 | -37.34 | -38.58    | 31.18 | 16.00     |  1.90 | 0.32   | 0.35      | 0.05       |  6.9 |
| 4374-457 | -66.67 | -67.16    | 24.58 | 14.99     |  0.19 | 0.07   | 0.0       | 0.03       |  2.6 |
| 4376-329 | -79.92 | -81.66    | 26.27 | 15.68     |  0.49 | 0.10   | 0.13      | 0.02       |  4.3 |
| 4377-526 | -70.96 | -71.50    | 28.24 | 13.30     |  0.52 | 0.06   | 0.07      | 0.02       |  3.7 |
| 4378-434 | -68.00 | -68.44    | 28.24 | 16.99     |  0.22 | 0.04   | 0.0       | 0.03       |  4.3 |
| 4381-411 | -67.31 | -69.64    | 30.71 | 17.17     |  0.44 | 0.05   | 0.05      | 0.01       |  3.1 |
| 4383-343 | -63.12 | -65.41    | 38.22 | 16.46     |  0.40 | 0.08   | 0.07      | 0.02       |  3.1 |
| 4385-243 | -77.62 | -78.31    | 31.38 | 18.99     |  0.33 | 0.05   | 0.0       | 0.03       |  5.6 |
| 4389-327 | -63.03 | -65.38    | 44.71 | 15.99     |  0.67 | 0.20   | 0.15      | 0.01       |  6.8 |
| 4396-541 | -90.00 | -91.25    | 23.59 | 13.72     |  0.87 | 0.12   | 0.11      | 0.02       |  8.1 |
| 4402-400 | -75.26 | -78.97    | 23.53 | 13.15     |  0.54 | 0.08   | 0.12      | 0.03       |  5.6 |
| 4405-349 | -78.13 | -77.93    | 20.73 | 15.99     |  0.14 | 0.03   | 0.0       | 0.04       |  4.3 |
| 4406-330 | -68.93 | -69.20    | 23.53 | 13.99     |  0.27 | 0.05   | 0.0       | 0.03       |  5.6 |
| 4407-229 | -78.61 | -80.39    | 25.93 | 11.72     |  0.19 | 0.04   | 0.04      | 0.01       |  4.4 |
| 4409-243 | -67.17 | -66.78    | 22.16 | 12.99     |  0.18 | 0.04   | 0.0       | 0.02       |  5.6 |
| 4456-324 | -68.95 | -71.43    | 19.23 | 8.99      |  0.74 | 0.20   | 0.14      | 0.01       |  6.8 |

8.7 Graphs of Alba’s knots

8.7.1 Plot Ha width versus [N II] width

import numpy as np
from matplotlib import pyplot as plt
from astropy.table import Table
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('talk')
sns.set_color_codes()
tab = Table.read('alba-knots-frompdf.tab',
                 format='ascii', delimiter='\t',
)

tab['[N II]/Ha'] = tab['F([N II])'] / tab['F(Ha)']
tab['dW([N II])'] = tab['W([N II])'] * tab['dF([N II])'] / tab['F([N II])']
tab['dW(Ha)'] = tab['W(Ha)'] * tab['dF(Ha)'] / tab['F(Ha)']
tab.sort('W(Ha)')

wmax = 48.0
mask = np.isfinite(tab['[N II]/Ha']) 
figfile = 'alba-knots-widths.pdf'
sns.set_palette("Oranges_d")
fig, ax = plt.subplots(1, 1)
w0 = np.linspace(0.0, wmax)
fwhm_over_sigma = 2*np.sqrt(2*np.log(2.0))
sig2_fs = 10.233
Te_factor = 82.5*(1.0 - 1.0/14.0)
for T4 in [0.0, 0.5, 1.0, 1.5, 2.0]:
    ax.plot(w0, np.sqrt(w0**2 + Te_factor*T4*fwhm_over_sigma**2 + sig2_fs*fwhm_over_sigma**2),
            label='T = {:.0f} K'.format(1e4*T4),
            lw=4,
            alpha=0.6)
s = 5*(20 + 20*np.log10(tab[mask]['F(Ha)']))
# cmap = sns.diverging_palette(240, 10, as_cmap=True)
# cmap = 'coolwarm'
cmap = 'Blues'
ax.scatter('W([N II])', 'W(Ha)', s=s, c='[N II]/Ha', data=tab[mask], 
           cmap=cmap, zorder=100, alpha=0.95, label=None)
ax.errorbar('W([N II])', 'W(Ha)', xerr='dW([N II])', yerr='dW(Ha)',
            fmt='none', ecolor='b', data=tab[mask], alpha=0.3, errorevery=2, label=None)
ax.set_xlim(0, wmax)
ax.set_ylim(0, wmax)
ax.set_xlabel('[N II] FWHM, km/s')
ax.set_ylabel('Ha FWHM, km/s')
ax.legend(loc='lower right', frameon=True)
fig.set_size_inches(6, 6)
fig.tight_layout()
fig.savefig(figfile)

8.7.1.1 Empirical description of the width-width plot

  • The symbol color indicates the knot [N II]/Ha ratio (darker colors = higher ratio)
  • The symbol size indicates the log of the Ha flux from the knot
  • The relative error on width is assumed to be equal to the relative error in the flux
    • To reduce confusion, error bars are only shown on every second point
  • The curves account for the different atomic weight of H and N, together with the fine-structure broadening of H alpha
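
For reference, the relation traced by the curves, with the constants read directly from the code above (writing \(T_4 = T/10^4\) K):

\[
W(\mathrm{H\alpha})^2 = W([\mathrm{N\,II}])^2 + 8\ln 2\,\left[\,82.5\,T_4\,(1 - \tfrac{1}{14}) + \sigma_{\mathrm{fs}}^2\,\right]
\]

where \(82.5\,T_4\) km²/s² is the hydrogen thermal variance \(kT/m_{\mathrm{H}}\), the factor \(1 - 1/14\) subtracts the corresponding nitrogen variance (atomic weight 14), \(\sigma_{\mathrm{fs}}^2 = 10.233\) km²/s² is the Hα fine-structure variance, and \(8\ln 2\) converts variances to squared FWHMs.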

8.7.1.2 Interpretation of the width-width plot

  • Given the width uncertainties, all the points are consistent with a constant thermal width for all the knots, corresponding to T ~= 10,000 K
  • The [N II]/Ha ratio tends to be higher in knots with higher non-thermal linewidths, as measured by W([N II]).

8.7.2 Plot [N II]/Ha ratio in knots vs nebula

import numpy as np
from matplotlib import pyplot as plt
from astropy.table import Table, join
from astropy.io import fits
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('talk')
sns.set_color_codes()
rtab = Table.read('alba-knots-ratios.tab',
                 format='ascii', delimiter='\t',
)
vtab = Table.read('alba-knots-cleaned.tab',
                 format='ascii', delimiter='\t',
)
ptab = Table.read('alba-knots-frompdf.tab',
                 format='ascii', delimiter='\t',
)
tab = join(join(rtab, vtab, keys='knot'), ptab, keys='knot', join_type='outer')


## Read in the new tables from Alba 24 Oct 2016
## These have only the ratios - no errors or anything
ktab = Table.read('Alba-Ratios-2016-10/NIIHa_knot.dat',
                  format='ascii.no_header', names=['knot', 'Alba ratio'], 
)
ntab = Table.read('Alba-Ratios-2016-10/NIIHa_neb.dat',
                  format='ascii.no_header', names=['knot', 'Alba BG ratio'], 
)

## Merge Alba's tables into the big table
tab = join(join(tab,
                ktab, keys='knot', join_type='outer'),
           ntab, keys='knot', join_type='outer')

## Calculate contrast with BG and estimate error in ratio
tab['log Ha contrast'] = np.log10(tab['knot flux Ha']/tab['BG flux Ha'])
tab['[N II]/Ha new'] = tab['F([N II])'] / tab['F(Ha)']
tab['error'] = tab['[N II]/Ha new']*np.sqrt(
    (tab['dF([N II])']/tab['F([N II])'])**2 + (tab['dF(Ha)']/tab['F(Ha)'])**2
)
# The upper limits are for cases where F([N II]) = 0.0
mask = tab['F([N II])'] == 0.0
tab['upper limit'] = tab['dF([N II])'] / tab['F(Ha)']
tab['upper limit'].mask = ~mask

## Save the big table with all the columns
selected_columns = ['knot',
                    'BG ratio', 'Alba BG ratio',
                    '[N II]/Ha new', 'knot ratio', 'Alba ratio', 'upper limit']
tab[selected_columns].write('big-ratio-table.tab', format='ascii.tab',
                            formats={'[N II]/Ha new': '{:0.3f}',
                                     'upper limit': '{:0.3f}'})

## Potentially switch between the old values and the new tables from Alba
if False:
    bgratio_name = 'BG ratio'
    ratio_name = '[N II]/Ha new'
else:
    bgratio_name = 'Alba BG ratio'
    ratio_name = 'Alba ratio'

figfile = 'alba-knots-ratios.pdf'
sns.set_palette("Oranges_r")
fig, ax = plt.subplots(1, 1)

s = 20*np.sqrt(tab['BG flux Ha'])
sv = (-10 - tab['V(Ha)'])*2
sd = 150*np.sqrt(tab['F(Ha)']/tab['size'])
# cmap = sns.diverging_palette(240, 10, as_cmap=True)
cmap = 'Oranges'

# Deal with upper limits - take 3 x sigma
#mask = tab['NOTAS:'] == '[NII]=ms'

ax.scatter(bgratio_name, ratio_name, s=sd[~mask], c='log Ha contrast',
           data=tab[~mask],
           cmap=cmap, zorder=100, alpha=0.8)
ax.errorbar(bgratio_name, ratio_name, yerr='error', fmt='none',
            data=tab[~mask],
            alpha=0.3, errorevery=2)
# and plot the upper limits separately
ax.scatter(bgratio_name, 'upper limit', s=sd[mask], c='log Ha contrast',
           data=tab[mask],
           cmap='Purples', zorder=100, alpha=0.6, marker='v')
ax.plot([0, 1], [0, 1], c='k', ls=':', alpha=0.6)

# Plot histogram of values from the inner nebula
hdu = fits.open('../Ratios-for-Alba/ratio-6583-6563.fits')[0]
H, edges = np.histogram(hdu.data, bins=300, range=[0.0, 0.71], density=True)
x = 0.5*(edges[:-1] + edges[1:])
ax.fill_between(x, H/10, alpha=0.1, color=(0.15, 0.05, 0.3))

ax.set_xlim(0.0, 0.81)
ax.set_ylim(0.0, 0.81)
ax.set_xlabel('Nebula [N II]/Ha')
ax.set_ylabel('Knot [N II]/Ha')
fig.set_size_inches(6, 6)
fig.tight_layout()
fig.savefig(figfile)
import numpy as np
from matplotlib import pyplot as plt
from astropy.table import Table
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('talk')
sns.set_color_codes()
tab = Table.read('big-ratio-table.tab',
                 format='ascii', delimiter='\t', fill_values=[('--', 'nan')])

upper_mask = tab['[N II]/Ha new'] == 0.0

figfile = 'alba-ratios-discrepancy.pdf'
fig, ax = plt.subplots(1, 1)

ax.scatter('BG ratio', 'Alba BG ratio', data=tab,
           label='Nebula [N II]/Ha',
           s=100, alpha=0.6, c='y')
ax.scatter('[N II]/Ha new', 'Alba ratio', data=tab[~upper_mask],
           label='Knot [N II]/Ha',
           s=100, alpha=0.6, c='c')
ax.scatter('upper limit', 'Alba ratio', data=tab[upper_mask],
           label='Upper limit [N II]/Ha',
           s=50, c='k', alpha=0.3)
m = np.isfinite(tab['knot ratio']) & (tab['knot ratio'] > 0.0)
ax.scatter('knot ratio', 'Alba ratio', data=tab[m],
           label='Original knot [N II]/Ha',
           s=50, alpha=0.6, c='r')

ax.plot([0.0, 1.0], [0.0, 1.0], '--', c='k', alpha=0.2, label=None)
ax.plot([0.0, 1.0], [0.0, 1.13], ':', c='k', alpha=0.4, label=None)
ax.legend(frameon=True, loc='lower right')
ax.set(xlabel='Will ratio', ylabel='Alba ratio',
       xlim=[0.0, 0.81], ylim=[0.0, 0.81])

fig.set_size_inches(6, 6)
fig.tight_layout()
fig.savefig(figfile)

8.7.2.1 Empirical description of the ratio-ratio plot

  • The symbol color indicates the Knot/BG brightness contrast ratio (darker colors = higher contrast)
  • The symbol size indicates the knot density, as estimated from sqrt(F(Ha)/size)
  • The downward pointing triangles are upper limits on the knot [N II]/Ha ratio
  • Out of date
    • The symbol size USED TO indicate the knot velocity (larger symbols = more blue-shifted)

8.7.2.2 Correlations apparent in the ratio-ratio plot

  • Use Rneb and Rknot for the two ratios:
    • Rknot shows a much larger range of variation than Rneb:
      • Rneb = 0.2 → 0.45
      • Rknot = <0.05 → 0.8
    • There is a tendency for Rknot to be higher when Rneb is higher
    • Also, higher Rneb correlates with higher contrast
    • And correlates with higher density
    • There is no apparent correlation with knot velocity
    • We can divide the points into 3 clusters
      1. Rknot <= 0.23
      2. Rknot = 0.23 → 0.35
      3. Rknot >= 0.35
    • The min, median, max value of Rneb for each group is
      1. 0.19, 0.28, 0.34
      2. 0.24, 0.32, 0.34
      3. 0.25, 0.35, 0.43
    • Alternatively, we can divide according to Rneb (a sketch for recomputing these quartiles follows this list)
      Rneb < 0.3
      • N = 27 (quartiles 25, 50, 75 = 7, 14, 28 )
      • Rknot/Rneb quartiles: 25, 50, 75 = 0.4, 0.5, 0.9
      Rneb > 0.3
      • N = 22 (quartiles 25, 50, 75 = 6, 11, 16 )
      • Rknot/Rneb quartiles: 25, 50, 75 = 0.85, 0.9, 1.1
  • The general behavior is consistent with what we see in Alba’s Fig 9:
    • The majority of knots show Rknot < Rneb
    • Some knots with high Rneb show Rknot > Rneb
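
A minimal sketch of how the Rknot/Rneb quartiles quoted above can be recomputed, assuming the big-ratio-table.tab file written earlier (with masked cells saved as "--") and taking the Alba ratio and Alba BG ratio columns as Rknot and Rneb:

import numpy as np
from astropy.table import Table

tab = Table.read('big-ratio-table.tab', format='ascii.tab',
                 fill_values=[('--', 'nan')])
rknot = np.asarray(tab['Alba ratio'], dtype=float)
rneb = np.asarray(tab['Alba BG ratio'], dtype=float)
good = np.isfinite(rknot) & np.isfinite(rneb)
for label, grp in [('Rneb < 0.3', rneb < 0.3), ('Rneb > 0.3', rneb >= 0.3)]:
    m = good & grp
    quartiles = np.percentile(rknot[m]/rneb[m], [25, 50, 75])
    print(label, 'N =', m.sum(), 'Rknot/Rneb quartiles =', quartiles)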

8.7.2.3 Possible explanations for the behavior of the ratios

  • The [N II]/Ha ratio depends on
    1. Local ionization parameter: \(U = F / c\, n\)
      • [N II]/Ha is decreasing function of \(U\) as N/N+ goes down
    2. Temperature
      • For temperatures increasing above 10^4 K, [N II]/Ha will increase with T due to the increased collisional excitation rate and reduced recombination rate
      • But at high enough temperatures (T > 40,000 K), collisional ionization will convert N+ to N++, so [N II]/Ha will start to go down again.
        • This requires a shock Mach number > 3.5 or so.
  • If density goes up, then \(U\) should go down, so [N II]/Ha should go up
    • if the flux is equal
    • For the knots with measured [S II] ratios, knots are higher density than the nebula
      • Although there are only 3 with measured densities, most are upper or lower limits
      • Typically n(knot) ≈ 2-3 n(neb)
      • [ ] We should try and estimate densities from Hα surface brightness
      • [ ] For knot 4261-422 in Table 5, the errors on N_e are too small, given the stated errors on 6716/6731
  • So we can easily explain the cases where [N II]/Ha goes up
    • It could be either decreased \(U\) due to compression, or could be contribution from cooling zone
    • For the one knot in this category that also has [O III], the [O III]/Ha is also increased in knot
      • Implying that it is not a change in \(U\) in this case
  • Harder to explain [N II]/Ha going down
    • Lower [N II]/Ha in knot seems to be associated with lower contrast: knot/nebula in H alpha
    • Also with lower densities
    • Also with lower W([N II])
    • Lower [N II]/Ha in knot is correlated with higher T determined from linewidths
      • This is shown in the following plot: Plot Kinematic T vs [N II]/Ha ratio
      • But given that these are also the fainter knots, I suspect that it may just be an increase in uncertainty in the T estimates
        • Unfortunately, we have no estimate of the uncertainty in the linewidths
  • Explanation of larger range in [N II]/Ha for knots, as compared with nebula:
    • Nebula is an average over a long line of sight (ionization bounded) that samples regions of varying ionization parameter
    • Knots are more compact -> small range of z -> selecting a particular value of U
      • In particular the low [N II]/Ha ratios must come from knots in the more highly ionized interior of the nebula, where He is ionized and F is significantly higher than on average for that line of sight, such that even with the extra compression, U is still high.
      • This seems more likely than collisional ionization as an explanation.

8.7.3 Plot Kinematic T vs [N II]/Ha ratio

import numpy as np
from matplotlib import pyplot as plt
from astropy.table import Table
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('talk')
sns.set_color_codes()
tab = Table.read('alba-knots-frompdf.tab',
                 format='ascii', delimiter='\t',
)
tab['[N II]/Ha'] = tab['F([N II])'] / tab['F(Ha)']

mupper = tab['F([N II])'] == 0.0
tab['upper limits'] = tab['dF([N II])'] / tab['F(Ha)']

fwhm_over_sigma = 2*np.sqrt(2*np.log(2.0))

sig2_ha = (tab['W(Ha)']/fwhm_over_sigma)**2
sig2_n2 = (tab['W([N II])']/fwhm_over_sigma)**2
sig2_fs = 10.233
Te_factor = 82.5*(1.0 - 1.0/14.0)
tab['Te'] = 1e4*(sig2_ha - sig2_n2 - sig2_fs)/Te_factor
mask = np.isfinite(tab['[N II]/Ha']) & np.isfinite(tab['Te'])
figfile = 'alba-knots-Te-Nrat.pdf'
sns.set_palette("Purples_r")
fig, ax = plt.subplots(1, 1)
# s = 10 + 30*np.sqrt(tab['F(Ha)'])
s = 10*tab['size']
s = 100*np.sqrt(tab['F(Ha)']/tab['size'])
# cmap = sns.diverging_palette(240, 10, as_cmap=True)
# cmap = 'coolwarm'
cmap = 'Oranges'
m = mask & (~mupper)
ax.scatter('[N II]/Ha', 'Te', s=s[m], c='W([N II])', data=tab[m], 
           cmap=cmap, zorder=100, alpha=0.8, label=None)
m = mask & (mupper)
ax.scatter('upper limits', 'Te', s=s[m], c='W([N II])', data=tab[m], 
           cmap='Purples', zorder=100, alpha=0.6, label=None, marker='<')
ax.set_xlim(0, 0.85)
ax.set_ylim(0, 3.e4)
ax.set_xlabel('[N II] / H a')
ax.set_ylabel('Temperature, K')
ax.legend(loc='lower right')
fig.set_size_inches(6, 6)
fig.tight_layout()
fig.savefig(figfile)
  • We currently have the symbol size set to a proxy for the density:
    • \(\sqrt{F / \ell}\)
    • It looks like higher density correlates with higher [N II]/Ha, which is what we would expect
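
The logic of the proxy: the Hα surface brightness scales as the emission measure, \(F \propto n^2 \ell\), so for a knot of measured size \(\ell\) the implied density is \(n \propto \sqrt{F/\ell}\).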

8.7.4 Histogram of [N II]/Ha ratio from MUSE data of inner Orion Nebula

import numpy as np
from matplotlib import pyplot as plt
from astropy.io import fits
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('talk')
sns.set_color_codes()

figfile = 'nii-ha-histogram-muse.pdf'
sns.set_palette("Purples_r")
fig, ax = plt.subplots(1, 1)

huygens_ratios = fits.open('../Ratios-for-Alba/ratio-6583-6563.fits')[0].data
H, edges = np.histogram(huygens_ratios, bins=300, range=[0.0, 0.71], density=True)
x = 0.5*(edges[:-1] + edges[1:])
# ax.plot(x, H, drawstyle='steps-mid', c='y')
ax.fill_between(x, H, alpha=0.3, color='y', label='Huygens')

eon_ha = fits.open('new-slits-ha-allvels.fits')['scaled']
eon_nii = fits.open('new-slits-nii-allvels.fits')['scaled']
eon_ratios = 1.13*eon_nii.data / eon_ha.data

# Also write out the ratio image
fits.PrimaryHDU(header=eon_nii.header,
                data=eon_ratios).writeto('new-slits-nii-ha-ratio-allvels.fits',
                                         clobber=True)


H, edges = np.histogram(eon_ratios, bins='auto', range=[0.0, 0.71], density=True)
x = 0.5*(edges[:-1] + edges[1:])
# ax.plot(x, H, drawstyle='steps-mid')
ax.fill_between(x, H, alpha=0.3, color='g', label='EON')
#ax.set(xlim=[0.05, 0.8], xscale='log')
ax.legend()

fig.set_size_inches(6, 6)
fig.tight_layout()
fig.savefig(figfile)

So the Western region covered by our slits has a tendency for slightly higher [N II]/Ha ratios than in the inner nebula. But there is considerable overlap in the distributions. The positions of the knots seem to be entirely representative of the area as a whole in terms of the nebular ratio.

8.7.5 Correlation between [N II]-Ha velocity difference and line ratio

8.7.5.1 Create velocity difference map from MUSE

import os
from astropy.io import fits

mapdir = '../OrionMuse/LineMaps'
outdir = '../Ratios-for-Alba'

nhdu, = fits.open(os.path.join(mapdir, 'mean-N_II-6583.fits'))
hhdu, = fits.open(os.path.join(mapdir, 'mean-H_I-6563.fits'))

nhdu.data -= hhdu.data

nhdu.writeto(os.path.join(outdir, 'delta-V-6583-6563.fits'), clobber=True)

9 TODO Yet more datasets to look at

9.1 Orion S horizontal slits

  • These are the os000.fits files from 2007-01-11
  • In ~/Work/SPM2007/
  • I think these are ha+nii

9.2 LL Ori later slits

10 Final products of calibration

10.1 Spectral images

  • new-slits-*.fits

10.2 Calibrated slit spectra

  • Calibrated/BGsub/{XX,YY}*-vhel.fits

11 DONE New general version of the slit flux calibration and astrometry

  • It would be better to use the WFI images to flux calibrate the spectra
    • As well as being more reliable, this is necessary in the case of some of the LL2 slits, since the Image+slit was taken in [S II] instead of H alpha
  • We will apply it first to the LL2 slits, plus any other odd slits that are lying around
    • With the plan being to add in the original horizontal and vertical slits afterwards
  • As well as flux calibration, it will also help with the offset along the slit

11.1 General policy for the workflow

  • The previous version had used org-babel source blocks that read directly from org-mode tables.
    • This is convenient for rapid development
    • But it makes the overall logic hard to follow
    • And hard to share with anyone not using emacs
  • So for this implementation I plan to do the following:
    1. Tangle all python scripts to files
    2. Write all data tables to files
      • Initial input table can be written as TSV from org table
      • Read in with astropy tables

11.2 Outline of steps

  1. We use the image+slit in order to find the slit reference position
    • The image-spectrum offset along the slit has to be found iteratively through a later step
  2. Compare the spectrum profile (spec) with a synthetic slit (image) on the WFI image
    • Similar to Application to the vertical slits and Repeat for the horizontal slits
    • Find the jshift value that best matches up the profiles
    • Find brightness normalization
      • Check for calibration gradients along the slit by plotting spec/image against s (distance along slit)
        • [2015-09-08 Tue] It appears that the trends are not linear. They tend to either curve down near the left hand edge, or curve up at the right hand edge, combined with a more linear increase from left to right
        • A cubic would probably capture the behavior well
        • [X] Fit a chebychev polynomial to the ratio
      • Check for zero point errors by plotting spec against image
      • [X] One problem will be disentangling the above two effects, because if there is a strong systematic brightness gradient along the slit then their effects will be degenerate
        • It turns out this is not too difficult in most cases, so long as we assume that the along-slit variation is a low-order polynomial
        • Some slits have a compact region with strong brightness gradients near the middle of the slit, which allows one to clearly see if a zero-point offset is present
    • Write out the slit astrometry and flux calibration to a table, similar to how we did here for the vertical slits and here for the horizontal slits
  3. Project the calibrated spectra onto a common spatial grid
    • For different velocity ranges
      • This requires finding the heliocentric correction (see the sketch after this list)
    • This is what we already did for the vertical and horizontal slits
      • We need to generalise this slightly so we can treat any PA (maybe that will work out of the box) and both cases for the dimension order of the spectra files (Y-V, or V-Y)
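
A minimal sketch of how the heliocentric correction could be computed with astropy (this assumes a reasonably modern astropy with SkyCoord.radial_velocity_correction; the OAN-SPM site coordinates and the example pointing and date are illustrative assumptions, not values taken from this document):

import astropy.units as u
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation

# Assumed approximate OAN-SPM observatory location
spm = EarthLocation(lat=31.0442*u.deg, lon=-115.4637*u.deg, height=2830*u.m)
# Illustrative pointing near the Orion Nebula and an arbitrary observation time
target = SkyCoord(ra=83.82*u.deg, dec=-5.39*u.deg)
vcorr = target.radial_velocity_correction(
    'heliocentric', obstime=Time('2013-12-15T06:00:00'), location=spm)
# Add vcorr to topocentric velocities to put them on the heliocentric scale
print(vcorr.to(u.km / u.s))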

11.3 Table of all slits

  • Note taken on [2015-09-01 Tue 15:20]
    I haven’t quite decided what is going into this table yet.
    • It will at least have the dataset (YYYY-MM) and the ids for the image+slit and/or the spectrum.
    • Currently it also has a column for notes, so I can remember what is what.
    • Do I want to add the values if we determine them by hand?
      • Position of slit on image
      • Offset image-spectrum
      • Flux calibration
    • Or alternatively, could we determine all of those automatically?
      • Probably not
  • This does not have all the slits yet, but it will eventually (I hope)
    • [2015-09-16 Wed] Adding the original horizontal slits
  • The Notes column is just to remind us of stuff
  • After editing, remember to export the table to file with C-c t e
  • [2015-09-04 Fri] Added some new columns:
    saxis
    dimension along the slit length in IMAGE coordinates (1 for x-axis; 2 for y-axis)
    • This is checked by hand in DS9
    islit
    pixel position of slit along the perpendicular axis (y-axis for saxis=1, x-axis for saxis=2)
    • This is measured by hand in DS9
    shift
    pixel offset along the slit between image+slit and spectrum
    • This has to be found by trial and error (from plotting slit profile from spectrum and comparing with synthetic slit)
    norm
    the factor we divide the spectrum by so that it is equal to the flux calibration image
    • This is for the center of the slit, since we will need to include a polynomial correction term later
    • Also found by trial and error - it does not need to be exact since any inaccuracy will get compensated by the leading term of the Chebyshev fit
    zero
    Correction to the zero point of the spectra
    • Found by looking at the background in the ha and nii spectra, and a little bit of trial and error so that the upper graph panel of spectrum versus image passes through the origin
| Dataset   | imid | specid    | saxis | islit | shift | norm | zero | r(nii) | Notes                |
|-----------+------+-----------+-------+-------+-------+------+------+--------+----------------------|
| 2006-02   | 326  | 324       |     2 | 283.5 |   74. | 1000 |  1.0 |    1.8 | LL2 first epoch      |
| 2006-02   | 318  | 319       |     2 | 280.0 |   78. | 3300 |    3 |    1.8 | LL2 first epoch      |
| 2006-02   | 260  | 261       |     2 | 251.4 |   55. | 3600 |    1 |    1.8 | LL2 first epoch      |
| 2006-02   | 270  | 271       |     2 | 253.5 |   60. | 6500 |    4 |    1.8 | LL2 first epoch      |
| 2006-02   | 276  | 277       |     2 | 256.0 |   60. | 7000 |    6 |    1.8 | LL2 first epoch      |
| 2006-02   | 281  | 282       |     2 | 257.7 |   64. | 3600 |    8 |    1.8 | LL2 first epoch      |
| 2006-02   | 286  | 287       |     2 | 261.0 |   64. | 4000 |    8 |    1.8 | LL2 first epoch      |
| 2006-02   | 291  | 292       |     2 | 262.7 |   65. | 6000 |    6 |    1.8 | LL2 first epoch      |
| 2006-02   | 296  | 297       |     2 | 267.0 |   66. | 3000 |    8 |    1.8 | LL2 first epoch      |
| 2006-02   | 303  | 304       |     2 | 270.5 |   70. | 3300 |    8 |    1.8 | LL2 first epoch      |
| 2006-02   | 313  | 312       |     2 | 278.0 |   70. | 2600 |    6 |    1.8 | Image follows spec!  |
| 2007-01b  | 2061 | 2062-c    |     2 | 256.0 |   65. | 7000 |   20 |    1.8 | LL2 second epoch     |
| 2007-01b  | 2037 | 2038-2040 |     2 | 249.0 |   60. | 7000 |   16 |    1.8 | LL2 second epoch     |
| 2007-01b  | 2041 | 2042-2044 |     2 | 250.0 |   63. | 7000 |   21 |    1.8 | LL2 second epoch     |
| 2007-01b  | 2045 | 2046-2048 |     2 | 251.5 |   64. | 6500 |   25 |    1.8 | LL2 second epoch     |
| 2007-01b  | 2049 | 2050-2052 |     2 | 253.0 |   65. | 7000 |   25 |    1.8 | LL2 second epoch     |
| 2007-01b  | 2053 | 2054-2056 |     2 | 254.5 |   68. | 7000 |   25 |    1.8 | LL2 second epoch     |
| 2007-01b  | 2057 | 2058-2060 |     2 | 254.5 |   66. | 7000 |   25 |    1.8 | LL2 second epoch     |
| 2007-01   | 054  | 055       |     2 | 262.5 |   68. | 5000 |   10 |    1.9 | LL1 extreme N        |
| 2007-01   | 067  | 068       |     2 | 273.0 |   72. | 5000 |   10 |    1.9 | LL1 extreme N        |
| 2007-01   | 072  | 073       |     2 | 277.5 |   74. | 5200 |   10 |    1.8 | LL1 extreme N        |
| 2010-01   | 078  | 079-080   |     2 | 248.0 |  62.5 |  500 |  2.0 |    1.8 | Vertical W of LL2    |
| 2010-01   | 202  | 203-204   |     2 | 233.6 |  52.5 | 6500 |  2.0 |    1.8 | Vertical Far West    |
| 2010-01   | 124  | 125-127   |     2 | 235.5 |  54.5 | 3400 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 206  | 207-208   |     2 | 234.6 |    54 | 7700 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 128  | 129-130   |     2 | 237.5 |    57 | 3500 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 133  | 134-135   |     2 | 239.4 |    58 | 5500 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 210  | 211-212   |     2 | 236.4 |    55 | 7000 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 137  | 138-139   |     2 | 241.3 |    60 | 3600 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 214  | 215-216   |     2 | 238.4 |    60 | 7500 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 145  | 146-147   |     2 | 244.3 |    65 | 1000 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 248  | 249-250   |     2 | 254.9 |    68 | 3300 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 219  | 220-221   |     2 | 240.0 |  57.5 | 7700 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 157  | 158-159   |     2 | 252.2 |    65 | 1600 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 236  | 237-238   |     2 | 247.0 |    63 | 7200 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 240  | 241-242   |     2 | 249.5 |    65 | 5500 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 244  | 245-246   |     2 | 251.6 |    64 | 4500 |   2. |    1.8 | Vertical Far West    |
| 2010-01   | 252  | 253-254   |     2 | 258.2 |    69 | 5700 |   2. |    1.8 | Vertical Far West    |
| 2013-02   | 024  | 025       |     1 | 465.0 | -73.1 |  500 |  4.5 |    1.8 | Horizontal LL2       |
| 2013-02   | 165  | 166-167   |     1 | 468.5 | -67.3 |  500 |  5.5 |    1.8 | Horizontal West      |
| 2013-02   | 169  | 170-171   |     1 | 468.5 | -65.3 |  200 |  5.5 |    1.8 | Horizontal West      |
| 2013-02   | 237  | 238-239   |     1 | 467.0 | -74.4 |  500 |    7 |    1.8 | Horizontal West      |
| 2013-02   | 232  | 233-234   |     1 | 468.5 | -78.4 |  800 |    9 |    1.8 | Horizontal West      |
| 2013-02   | 226  | 227-228   |     1 | 468.5 | -75.4 |  200 |   10 |    1.8 | Horizontal West      |
| 2013-02   | 149  | 150-151   |     1 | 468.0 | -70.3 |  600 |  3.5 |    1.8 | Horizontal West      |
| 2013-02   | 154  | 155-156   |     1 | 467.2 | -67.3 |  600 |  4.5 |    1.8 | Horizontal West      |
| 2013-02   | 159  | 160-161   |     1 | 467.5 | -70.3 |  600 |  5.0 |    1.8 | Horizontal West      |
| 2013-02   | 033  | 034-035   |     1 | 467.0 | -70.3 |  800 |  6.0 |    1.8 | Horizontal West      |
| 2013-02   | 029  | 030-031   |     1 | 465.2 | -76.3 |  800 |  4.5 |    1.8 | Horizontal West      |
| 2013-12   | 116  | 117-118   |     1 | 395.0 | -160. |  400 | 10.5 |    1.8 | Horizontal below LL2 |
| 2013-12   | 086  | 088       |     1 | 404.0 | -155. |  300 | 11.0 |    1.8 | W of HH269           |
| 2013-12   | 090  | 089       |     1 | 404.5 | -160. |  400 | 10.5 |    1.8 | Image follows spec   |
| 2013-12   | 102  | 103-104   |     1 | 402.5 | -150. |  350 | 10.5 |    1.8 | V faint image        |
| 2013-12   | 111  | 112-113   |     1 | 400.0 | -145. |  400 | 10.5 |    1.8 | W of HH269           |
| 2015-02   | 0003 | 0004      |     1 | 463.0 | -80.1 |  300 |    0 |    1.8 |                      |
| 2015-02   | 0012 | 0013      |     1 | 464.0 | -60.1 |  300 |    0 |    1.8 |                      |
| 2007-sii  | 063  | 065       |     2 | 269.0 |  70.5 | 5000 |    0 |   1.17 | [S II] LL1           |
| 2010-sii  | 078  | 082       |     2 | 248   |    63 | 5500 |    0 |   1.17 | [S II] west          |
| 2010-sii  | 085  | 086       |     2 | 252.7 |    66 | 5500 |    0 |   1.17 |                      |
| 2010-sii  | 224  | 225       |     2 | 243   |    61 | 7700 |    0 |   1.17 |                      |
| 2007-oiii | 059  | 060       |     2 | 265   |    71 | 5000 |    0 |    1.6 | [O III] LL1          |
| 2010-oiii | 224  | 228       |     2 | 243   |    62 | 7700 |    0 |    1.6 | [O III] west         |

Run it for a single slit

python slit-calibration.py 2015-02 # 1>&2

11.3.1 Different binning between image and spectrum

  • Note that binning along the slit axis differs between image+slit and spectrum for these 2013-12 datasets:
    • 086
    • 090
    • 102
    • 111
  • It is x2 for the image+slit, but x3 for the spectrum
  • This is now dealt with in the find_slit_coords function

11.3.2 DONE Allow for offsets perpendicular to slit

  • In some cases, the pointing may have drifted in between the image+slit exposure and the spectrum exposure
  • I suspect this is happening with the 2013-12 spectra at least
  • [2015-10-06 Tue] Cancel this since it is not that important

11.3.3 DONE Problems with the flux zero-point of the spectra

  • The original way I was dealing with this was to use the zero column in the table to finesse things by hand so that the “calib image” versus “integrated spectrum” plot goes through the origin
    • However, I don’t like this approach because there is no check that the “continuum” parts of the spectrum (in between the emission lines) still have a sensible value (at the very least, non-negative!)
  • We can do better. By looking at vignetted edge of the untrimmed spectra exposures, we can see whether the “continuum” values are real continuum or not.
    • In many cases, it is obvious that there is still a residual constant value that needs to be subtracted
    • For instance, with 2013-12-086 dataset:
      • ~/Dropbox/papers/LL-Objects/SPMDIC13/
      • The raw spectrum is spm088.fits
      • The reduced spectrum is spm088_bcrx.fits, spec088-ha.fits, spec088-nii.fits
      • Unfortunately, they are not all the same
        • spm088_bcrx.fits looks like its flux zero point is 16
        • spec088-ha.fits looks like its flux zero point is 11
        • spec088-nii.fits looks like its flux zero point is 12
      • We will try using 11.0
  • So plan is to re-use the zero column to be the value that we subtract from the ha and nii spectra pixels before summing in wavelength.
  • This works more or less well for 2013-12 and 2010
  • [X] Linear trend in zero-point for 2007-01 LL1 slits
    • The “continuum” goes negative at the N end of the slits
    • This would be best dealt with by modifying the images before using them
    • The problem is seen in all the spectra - it seems to be due to having subtracted the image in superbias.fits, which is obviously wrong, since it has a linear gradient from 1150 to 1230, which is a delta of 80
      • I think the real value should be about 1160 to 1170
      • So if we map the y-axis onto [0, 1], then we need to add 80*y - 10
      • This is done below
  • Now that we have fixed up the LL1 slits, we still get reasonable calibrations, even when we determine zero directly from the spectral images
    • The only problem is that the Bright Bar is slightly less prominent on the spectra than it is on the calibration image
    • It can be largely fixed by pushing zero up to about 10
    • This is fine for Ha, but is a bit low for nii in the S end of the slit
    • But the high vel components are in the N end and not seen in nii, so it doesn’t matter

11.3.3.1 Script to fix the 2007 bias subtractions

import glob
import numpy as np
from astropy.io import fits
fnlist = glob.glob('spec*-ha.fits') + glob.glob('spec*-nii.fits')
for fn in fnlist:
    print(fn)
    hdu, = fits.open(fn)
    ny, nx = hdu.data.shape
    y = np.linspace(0.0, 1.0, ny).reshape((ny, 1))
    hdu.data += 0.5*(80*y - 5.0)
    hdu.writeto(fn.replace('.fits', '-fix.fits'), clobber=True)

11.3.3.2 DONE Script to cut out the ha and nii spectra from the 2015 slits

  • We only have a lamp spectrum for the 0013 setting
    • So I calculate an offset by hand for the 0004 exposure
    • 656.76 - 529.148 = 127.612
    • 654.275 - 525.139 = 129.136
    • Tweaked it by hand by aligning on WCS in ds9
      • Strangely, the shift was slightly different for nii and ha
import numpy as np
from astropy.io import fits
jwin = 160
for slitid, lineid, j0 in  [['0004', 'ha', 570], ['0004', 'nii', 930],
                            ['0013', 'ha', 440], ['0013', 'nii', 810],]:
    fn = 'spm{}o_sub.fits'.format(slitid)
    hdu, = fits.open(fn)
    hdu.data = hdu.data[j0:j0+jwin]
    hdu.header['CRPIX2'] -= j0
    # if (slitid, lineid) == ('0004', 'nii'):
    #     hdu.data *= 3.0         # forced to use weaker doublet component
    if (slitid, lineid) == ('0004', 'ha'):
        hdu.header['CRPIX2'] += 127.0
    if (slitid, lineid) == ('0004', 'nii'):
        hdu.header['CRPIX2'] += 125.0
    hdu.writeto(                                                                
        fn.replace('.fits', '-{}.fits'.format(lineid)),
        clobber=True
    )

11.4 Program to perform flux calibration: slit-calibration.py

11.4.1 Imports

import os
import sys
import numpy as np
import astropy
import astropy.io.ascii  # needed for the converter used when reading the slit table
from astropy.table import Table
from astropy.io import fits
from astropy.wcs import WCS
from astropy.wcs.utils import pixel_to_skycoord
from matplotlib import pyplot as plt
import seaborn as sns
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.modeling import models, fitting

11.4.2 Read in the table of all slits

  • We want the ID columns to be read as strings since they contain leading zeros in some cases, which need to be preserved
  • This was not happening automatically for the imid column so I use a custom converter
converters = {'imid': [astropy.io.ascii.convert_numpy(str)]}
tab = Table.read('all-slits-input.tab',
                 format='ascii.tab', converters=converters)

11.4.3 Fits files for the spectra and image+slit

  • These are kept in nested dicts of dicts of template formats, which are keyed
    1. By the file type (see below)
    2. By the date of observations (YYYY-MM)
      • with a suffix b to distinguish different sets from the same observing season (temporada)
  • The file types are:
    fullspec
    The original full spectrum file, which includes ha and nii
    • This is only used for the flux calibration and positioning, summing up in the wavelength direction
    • [2015-09-11 Fri] UPDATE: I don’t use these any more
    • So it doesn’t matter that it isn’t rectified
    • [X] Ideally it will be CR-rejected and bias-subtracted, but I am not sure I have those for all datasets
      • [2015-09-11 Fri] This wasn’t important since I have ended up using the ha and nii spectra instead
    • Note that some of the datasets where two exposures are summed use a format like 117-118 for the file name for the full spectrum
    • Also note that for the 2013-02 dataset I have made symbolic links into the WesternShocks/ folder for the full spectrum files, so that we don’t have to know about the individual night folders (150213/ and 160213/) any more.
    ha
    The extracted Ha line
    • In cases where the full spectrum has form like 117-118, then the extracted Hα spectrum just uses 117
    • This is implemented in the function find_fits_filepath()
    nii
    The extracted [N II] line
    • Same as for Hα
    image
    The image+slit exposure
    • I only use this for finding the ra, dec of the slit center
file_templates = {
    'fullspec' : {
        '2006-02': 'Work/SPM2005/pp{}.fits',
        '2007-01b': 'Work/SPM2007/Reduced/HH505/slits/reducciones/spec{}.fits',
        '2007-01': 'Work/SPM2007/Reduced/spec{}-transf.fits',
        '2007-sii': 'Work/SPM2007/Reduced/spec{}-transf.fits',
        '2007-oiii': 'Work/SPM2007/Reduced/spec{}-transf.fits',
        '2010-01': 'Dropbox/SPMJAN10/reducciones/spm{}h.fits',
        '2010-sii': 'Dropbox/SPMJAN10/reducciones/spm{}h.fits',
        '2010-oiii': 'Dropbox/SPMJAN10/reducciones/spm{}h.fits',
        '2013-02': 'Dropbox/SPMFEB13/WesternShocks/spm{}_bcr.fits',
        '2013-12': 'Dropbox/papers/LL-Objects/SPMDIC13/spm{}_bcrx.fits',
        '2015-02': 'Dropbox/SPMFEB15/archivos/spm{}o_bcrx.fits',
    },
    'siis' : {
        '2007-sii': 'Work/SPM2007/Reduced/spec{}-siis.fits',
        '2010-sii': 'Dropbox/SPMJAN10/reducciones/spec{}-siis.fits',
    },
    'siil' : {
        '2007-sii': 'Work/SPM2007/Reduced/spec{}-siil.fits',
        '2010-sii': 'Dropbox/SPMJAN10/reducciones/spec{}-siil.fits',
    },
    'oiii' : {
        '2007-oiii': 'Work/SPM2007/Reduced/spec{}-oiii.fits',
        '2010-oiii': 'Dropbox/SPMJAN10/reducciones/spec{}-oiii.fits',
    },
    'ha' : {
        '2006-02': 'Work/SPM2007/Reduced/HH505/slits/SPMha/spec{}-halpha.fits',
        '2007-01b': 'Work/SPM2007/Reduced/HH505/slits/reducciones/spec{}-ha.fits',
        '2007-01': 'Work/SPM2007/Reduced/spec{}-ha-fix.fits',
        '2010-01': 'Dropbox/SPMJAN10/reducciones/spec{}-ha.fits',
        '2013-02': 'Dropbox/SPMFEB13/WesternShocks/spec{}-ha.fits',
        '2013-12': 'Dropbox/papers/LL-Objects/SPMDIC13/spec{}-ha.fits',
        '2015-02': 'Dropbox/SPMFEB15/archivos/spm{}o_sub-ha.fits',
    },
    'nii' : {
        '2006-02': 'Work/SPM2007/Reduced/HH505/slits/SPMnii/spec{}-nii.fits',
        '2007-01b': 'Work/SPM2007/Reduced/HH505/slits/reducciones/spec{}-nii.fits',
        '2007-01': 'Work/SPM2007/Reduced/spec{}-nii-fix.fits',
        '2010-01': 'Dropbox/SPMJAN10/reducciones/spec{}-nii.fits',
        '2013-02': 'Dropbox/SPMFEB13/WesternShocks/spec{}-nii.fits',
        '2013-12': 'Dropbox/papers/LL-Objects/SPMDIC13/spec{}-nii.fits',
        '2015-02': 'Dropbox/SPMFEB15/archivos/spm{}o_sub-nii.fits',
    },
    'image' : {
        '2006-02': 'Dropbox/Papers/LL-Objects/feb2006/pp{}-ardec.fits',
        '2007-01b': 'Work/SPM2007/Reduced/HH505/slits/reducciones/spm{}-ardec.fits',
        '2007-01': 'Work/SPM2007/Reduced/spm{}-ardec.fits',
        '2007-sii': 'Work/SPM2007/Reduced/spm{}-ardec.fits',
        '2007-oiii': 'Work/SPM2007/Reduced/spm{}-ardec.fits',
        '2010-01': 'Dropbox/SPMJAN10/reducciones/posiciones/spm{}-ardec.fits',
        '2010-sii': 'Dropbox/SPMJAN10/reducciones/posiciones/spm{}-ardec.fits',
        '2010-oiii': 'Dropbox/SPMJAN10/reducciones/posiciones/spm{}-ardec.fits',
        '2013-02': 'Dropbox/SPMFEB13/WesternShocks/spm{}_ardec.fits',
        '2013-12': 'Dropbox/papers/LL-Objects/SPMDIC13/spm{}-ardec.fits',
        '2015-02': 'Dropbox/SPMFEB15/archivos/spm{}-ardec.fits',
    },
}

def find_fits_filepath(db, filetype):
    """Return path to the FITS file for an image or spectrum 
    """
    id_ = db['imid'] if filetype == 'image' else db['specid']
    id_ = str(id_)
    if filetype in ('ha', 'nii') and db['Dataset'] not in ['2013-12']:
        id_ = id_.split('-')[0]
    template = file_templates[filetype][db['Dataset']]
    path = template.format(id_)
    print('~/'+path)
    homedir = os.path.expanduser('~')
    return os.path.join(homedir, path)

11.4.4 Construct the synthetic slit from the reference image

A function to trace the profile of a slit
  • Input are arrays of RA and Dec coordinates
  • Together with the image itself and its WCS object
  • Output is an array of the profile along the slit
    • Although the function makes no assumption about the geometry of the coordinate arrays, so it doesn’t need to be a slit
  • [ ] Currently the output profile is simply calculated from the nearest pixel, but I have grander plans for this eventually:
    • I should construct a logical mask for each slit pixel, based on the pixel size, and then average all the image pixels for which the mask is True
    • This will be pretty slow if I am using the entire reference image array every time
    • So I should first extract a sub-image, given by the limits of the slit (a sketch of this idea follows the function below)
def slit_profile(ra, dec, image, wcs):
    """
    Find the image intensity for a list of positions (ra and dec)
    """
    xi, yj = wcs.all_world2pix(ra, dec, 0)
    # Find nearest integer pixel
    ii, jj = np.floor(xi + 0.5).astype(int), np.floor(yj + 0.5).astype(int)
    print(ra[::100], dec[::100])
    print(ii[::100], jj[::100])
    return np.array([image[j, i] for i, j in zip(ii, jj)])
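
A minimal sketch of that masked-average idea (a hypothetical helper, not used anywhere yet; it assumes the whole slit lies comfortably inside the reference image, and the pixel_radius parameter is an invention for illustration):

def slit_profile_masked(ra, dec, image, wcs, pixel_radius=1.0):
    """Like slit_profile, but average all image pixels that lie within
    pixel_radius image pixels of each slit pixel center"""
    xi, yj = wcs.all_world2pix(ra, dec, 0)
    # Work on a sub-image bracketing the slit, for speed
    i1, i2 = int(xi.min()) - 2, int(xi.max()) + 3
    j1, j2 = int(yj.min()) - 2, int(yj.max()) + 3
    subim = image[j1:j2, i1:i2]
    jgrid, igrid = np.indices(subim.shape)
    profile = []
    for x, y in zip(xi - i1, yj - j1):
        # Boolean mask of sub-image pixels close to this slit pixel
        mask = np.hypot(igrid - x, jgrid - y) <= pixel_radius
        profile.append(subim[mask].mean())
    return np.array(profile)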

The actual photometric standard image we are going to use. This is from Massimo’s ground-based program. The pixel size is 0.238 arcsec

wfi_dir = '/Users/will/Work/OrionTreasury/wfi'
photom, = fits.open(os.path.join(wfi_dir, 'Orion_H_A_deep.fits'))
wphot = WCS(photom.header)

11.4.5 Find the world coordinates of each pixel along the slit

  • [2015-09-06 Sun] To make this more useful, I will return the entire array of RA and Dec for each pixel along the slit (instead of just RA0, Dec0 for the slit center)
    • This means that the ds and PA parameters will no longer be needed, but I will leave them in anyway.
  • Similar to what I did here and here
    • But simpler really
  • We need to find the following:
    • (RA0, Dec0) of the slit center
      • This comes from the WCS header of the image+slit, together with the islit and shift values from the table above
    • Pixel scale along the slit ds
    • PA of the slit
  • Note that we do the conversion to ICRS frame. Some of the earlier images are in FK4, which is 1950 epoch!
    • We fix this using astropy.coordinates.SkyCoord with the coordinate frame taken from the radesys WCS parameter.
def find_slit_coords(db, hdr, shdr):
    """Find the coordinates of all the pixels along a spectrograph slit

    Input arguments are a dict-like 'db' of hand-measured values (must
    contain 'saxis', 'islit' and 'shift') and a FITS headers 'hdr' from
    the image+slit exposure and 'shdr' from a spectrum exposure

    Returns a dict of 'ds' (slit pixel scale), 'PA' (slit position
    angle), 'RA' (array of RA values in degrees along slit), 'Dec'
    (array of Dec values in degrees along slit)

    """
    jstring = str(db['saxis'])  # which image axis lies along slit
    dRA_arcsec = hdr['CD1_'+jstring]*3600*np.cos(np.radians(hdr['CRVAL2']))
    dDEC_arcsec = hdr['CD2_'+jstring]*3600
    ds = np.hypot(dRA_arcsec, dDEC_arcsec)
    PA = np.degrees(np.arctan2(dRA_arcsec, dDEC_arcsec))

    # Pixel coords of each slit pixel on image (in 0-based convention)
    if jstring == '1':
        # Slit is horizontal in IMAGE coords
        ns = shdr['NAXIS1']
        iarr = np.arange(ns) - float(db['shift'])
        jarr = np.ones(ns)*float(db['islit'])
        try:
            image_binning = hdr['CBIN']
            spec_binning = shdr['CBIN']
        except KeyError:
            image_binning = hdr['CCDXBIN']
            spec_binning = shdr['CCDXBIN']

        # correct for difference in binning between the image+slit and the spectrum
        iarr *= spec_binning/image_binning
    elif jstring == '2':
        # Slit is vertical in IMAGE coords
        ns = shdr['NAXIS2']
        iarr = np.ones(ns)*float(db['islit'])
        jarr = np.arange(ns) - float(db['shift'])
        try:
            image_binning = hdr['RBIN']
            spec_binning = shdr['RBIN']
        except KeyError:
            image_binning = hdr['CCDYBIN']
            spec_binning = shdr['CCDYBIN']

        jarr *= spec_binning/image_binning
    else:
        raise ValueError('Slit axis (saxis) must be 1 or 2')

    print('iarr =', iarr[::100], 'jarr =', jarr[::100])
    # Also correct the nominal slit plate scale
    ds *= spec_binning/image_binning

    # Convert to world coords, using the native frame
    w = WCS(hdr)
    observed_frame = w.wcs.radesys.lower()
    # Note it is vital to ensure the pix2world transformation returns
    # values in the order (RA, Dec), even if the image+slit may have
    # Dec first
    coords = SkyCoord(*w.all_pix2world(iarr, jarr, 0, ra_dec_order=True),
                      unit=(u.deg, u.deg), frame=observed_frame)
    print('coords =', coords[::100])
    print('Binning along slit: image =', image_binning, 'spectrum =', spec_binning)
    # Make sure to return the coords in the ICRS frame
    return {'ds': ds, 'PA': PA,
            'RA': coords.icrs.ra.value,
            'Dec': coords.icrs.dec.value}

11.4.6 Package up the slit coordinates for use in a FITS header

We also need to create the WCS keywords so that the slit coordinates can be reconstructed from a FITS header. The general approach is as follows:
  • We use the standard FITS notation for axes: i, j
    • i = 1, 2, … are the world coordinate axes
    • j = 1, 2, … are the image (data array) coordinate axes
    • Python arrays have the order reversed
    • Reference pixel has CRPIXj and CRVALi
    • Scale is specified with CDELTi and PCi_j
      • CDELTi has the pixel scale in arcsec and wavelength
      • PCi_j just has the rotation matrix according to the PA of the slit
  • The calibrated FITS spectra will be regularized so that the image x-axis (j=1 in FITS parlance) is wavelength, and the image y-axis (j=2) is displacement along slit.
    • Any necessary transpose of the data array is done in the main loop
    • [ ] We could also have a degenerate third image axis that is perpendicular to the slit (dimension of 1 along this axis). Is this necessary? Is it wise?
  • We use 3 world coordinate axes, even though there are only two true image axes, so we can specify the variation of both RA and Dec along the slit
    • The provisional order of the axes is
      i = 1
      Wavelength
      i = 2
      RA
      i = 3
      Dec
    • This is to have the best mapping between the i and j axis orderings
  • We get all the wavelength info directly from the WCS of the original spectrum
    • [ ] Later, we will want to put it in velocity units, possibly as an alternate WCS axis
  • We get the celestial coordinate info from the list of slit_coords
    • We convert these to astropy.coord.SkyCoord form and use the separation() and position_angle() methods between each pair of adjacent pixels
    • We then check that each pair has the same separation and PA as for the first pair
      • This works fine for the separations using the np.allclose default relative tolerance of 1e-5, but for the PAs I had to relax it to 1e-4
      • Then we use the separation to set CDELTi values and the PA to set elements of the PCi_j matrix
  • We set a load of extra WCS keywords
    • The coordinate types and units
    • The reference frame and date of observations
  • At the end, we calculate the coordinates of the slit pixels using this WCS that we have created
    • In theory these should be the same as the slit_coords values that were fed in
    • I see differences of order 0.01 arcsec by the end of the slit
      • Not sure why, but I’m not too worried
def make_slit_wcs(db, slit_coords, spechdu):
    # Input WCS from original spectrum
    wspec = WCS(spechdu.header)
    wspec.fix()

    #
    # First find wavelength scale from the spectrum  
    #

    # For original spectrum, the wavelength and slit axes are 0-based,
    # but in FITS axis order instead of python access order, since
    # that is the way that that the WCS object likes to do it
    ospec_wavaxis = 2 - db['saxis']
    ospec_slitaxis = db['saxis'] - 1

    # The rules are that CDi_j is used if it is present, and only if
    # it is absent should CDELTi be used
    if wspec.wcs.has_cd():
        dwav = wspec.wcs.cd[ospec_wavaxis, ospec_wavaxis]
        # Check that the off-diagonal terms are zero
        assert(wspec.wcs.cd[0, 1] == wspec.wcs.cd[1, 0] == 0.0)
    else:
        dwav = wspec.wcs.cdelt[ospec_wavaxis]
        if wspec.wcs.has_pc():
            # If PCi_j is also present, make sure it is identity matrix
            assert(wspec.wcs.pc == np.eye(2))
    wav0 = wspec.wcs.crval[ospec_wavaxis]
    wavpix0 = wspec.wcs.crpix[ospec_wavaxis]

    #
    # Second, find the displacement scale and ref point from the slit_coords
    #
    # The slit_coords should already be in ICRS frame
    c = SkyCoord(slit_coords['RA'], slit_coords['Dec'], unit=u.deg)
    # Find vector of separations between adjacent pixels
    seps = c[:-1].separation(c[1:])
    # Ditto for the position angles
    PAs = c[:-1].position_angle(c[1:])
    # Check that they are all the same as the first one
    assert(np.allclose(seps/seps[0], 1.0))
    # assert(np.allclose(PAs/PAs[0], 1.0, rtol=1.e-4))
    # Then use the first one as the slit pixel size and PA
    ds, PA, PA_deg = seps[0].deg, PAs.mean().rad, PAs.mean().deg
    # And for the reference values too
    RA0, Dec0 = c[0].ra.deg, c[0].dec.deg

    #
    # Now make a new shiny output WCS, constructed from scratch
    #
    w = WCS(naxis=3)

    # Make use of all the values that we calculated above
    w.wcs.crpix = [wavpix0, 1, 1]
    w.wcs.cdelt = [dwav, ds, ds]
    w.wcs.crval = [wav0, RA0, Dec0]
    # PC order is i_j = [[1_1, 1_2, 1_3], [2_1, 2_2, 2_3], [3_1, 3_2, 3_3]]
    w.wcs.pc = [[1.0, 0.0, 0.0],
                [0.0, np.sin(PA), -np.cos(PA)],
                [0.0, np.cos(PA), np.sin(PA)]]

    #
    # Finally add in auxiliary info
    #
    w.wcs.radesys = 'ICRS'
    w.wcs.ctype = ['AWAV', 'RA---TAN', 'DEC--TAN']
    w.wcs.specsys = 'TOPOCENT'
    w.wcs.cunit = [u.Angstrom, u.deg, u.deg]
    w.wcs.name = 'TopoWav'
    w.wcs.cname = ['Observed air wavelength', 'Right Ascension', 'Declination']
    w.wcs.mjdobs = wspec.wcs.mjdobs
    w.wcs.datfix()              # Sets DATE-OBS from MJD-OBS

    # Check the new pixel values
    npix = len(slit_coords['RA'])
    check_coords = pixel_to_skycoord(np.arange(npix), [0]*npix, w, 0)
    # These should be the same as the ICRS coords in slit_coords
    print('New coords:', check_coords[::100])
    print('Displacements in arcsec:', check_coords.separation(c).arcsec[::100])
    # 15 Sep 2015: They seem to be equal to within about 1e-2 arcsec

    return w

Unfortunately, DS9 does not understand this lovely conformant and informative WCS structure that we have devised, so we need to dumb it down:

  • Actually I don’t use this any more
def fixup4ds9(w):
    w.wcs.ctype  = ['LINEAR', 'LINEAR', 'LINEAR']
    # w.wcs.cdelt[1:] *= 3600
    # w.wcs.units[1:] = 'arcsec', 'arcsec'
    w.wcs.crval[1], w.wcs.crval[2] = 0.0, 0.0
    w.wcs.name = 'TopoWavDS9'
    return w

11.4.7 Fit Chebyshev polynomials to along-slit variation

def fit_cheb(x, y, npoly=3, mask=None):
   """Fits a Chebyshev poly to y(x) and returns fitted y-values"""
   fitter = fitting.LinearLSQFitter()
   p_init = models.Chebyshev1D(npoly, domain=[x.min(), x.max()])
   if mask is None:
       mask = np.ones_like(x).astype(bool)
   p = fitter(p_init, x[mask], y[mask])
   print(p)
   return p(x)

11.4.8 Make some useful and pretty plots

  • Three-pane plot that we use for manually adjusting the calibration parameters
  • [2015-09-22 Tue] Add the nii/ha ratio to third pane
sns.set_palette('RdPu_d', 3)
def make_three_plots(spec, calib, prefix, niirat=None):
    assert spec.shape == calib.shape
    fig, axes = plt.subplots(3, 1)

    vmin, vmax = 0.0, np.median(calib) + 5*calib.std()

    ypix = np.arange(len(calib))
    ratio = spec/calib
    mask = (ypix > 10.0) & (ypix < ypix.max() - 10.0) \
           & (ratio > np.median(ratio) - 2*ratio.std()) \
           & (ratio < np.median(ratio) + 2*ratio.std()) 
    try:
        ratio_fit = fit_cheb(ypix, ratio, mask=mask)
    except Exception:
        # Fall back to a flat correction if the fit fails
        ratio_fit = np.ones_like(ypix)

    alpha = 0.8

    # First, plot two profiles against each other to check for zero-point offsets
    axes[0].plot(calib, spec/ratio_fit, '.', alpha=alpha)
    axes[0].plot([vmin, vmax], [vmin, vmax], '-', alpha=alpha)
    axes[0].set_xlim(vmin, vmax)
    axes[0].set_ylim(vmin, vmax)
    axes[0].set_xlabel('Calibration Image')
    axes[0].set_ylabel('Corrected Integrated Spectrum')

    # Second, plot each against slit pixel to check spatial offset
    axes[1].plot(ypix, calib, alpha=alpha, label='Calibration Image')
    axes[1].plot(ypix, spec/ratio_fit, alpha=alpha, lw=1.0, label='Corrected Integrated Spectrum')
    axes[1].plot(ypix, spec, alpha=alpha, lw=0.5, label='Uncorrected Integrated Spectrum')
    axes[1].set_xlim(0.0, ypix.max())
    axes[1].set_ylim(vmin, vmax)
    axes[1].legend(fontsize='xx-small', loc='lower right')
    axes[1].set_xlabel('Slit pixel')
    axes[1].set_ylabel('Profile')

    # Third, plot ratio to look for spatial trends
    axes[2].plot(ypix, ratio, alpha=alpha)
    axes[2].plot(ypix, ratio_fit, alpha=alpha)
    if niirat is not None:
        axes[2].plot(ypix, niirat, 'b')
    axes[2].set_xlim(0.0, ypix.max())
    axes[2].set_ylim(0.0, 1.5)
    axes[2].set_xlabel('Slit pixel')
    axes[2].set_ylabel('Ratio: Spec / Calib')


    fig.set_size_inches(5, 8)
    fig.tight_layout()
    fig.savefig(prefix+'.png', dpi=300)

    return ratio_fit

11.4.9 Use command line argument to restrict which datasets are processed

  • Read a single command line argument to choose which datasets to process
    • It can be as specific as a single position:
      • E.g, 2006-02-281
    • Or all positions from one set:
      • E.g, 2007-01b
    • Or all from one year:
      • E.g, 2013
    • Or whatever
  • The purpose of this is to speed things up when iterating to find the offsets and norms
if len(sys.argv) > 1:
    selector_pattern = sys.argv[1]
else:
    selector_pattern = ''

11.4.10 DONE Remove background and sum over wavelength across line

  • Although in principle some of the “background” is real continuum, most of it is not
  • We can get better results by simply removing it, as I did down here
  • But for the time being we will stick to the original simple version
def extract_profile(data, wcs, wavrest, dw=7.0):
    data = remove_bg_and_regularize(data, wcs, wavrest)
    # pixel limits for line extraction
    lineslice = wavs2slice([wavrest-dw/2, wavrest+dw/2], wcs)
    return data[:, lineslice].sum(axis=1)

New version, adapted from what I did in the ratio section

def wavs2slice(wavs, wcs):
    """Convert a wavelength interval `wavs` (length-2 sequence) to a slice of the relevant axis`"""
    assert len(wavs) == 2
    isT = row['saxis'] == 1
    if isT:
        _, xpixels = wcs.all_world2pix([0, 0], wavs, 0)
    else:
        xpixels, _ = wcs.all_world2pix(wavs, [0, 0], 0)
    print('Wav:', wavs, 'Pixel:', xpixels)
    i1, i2 = np.maximum(0, (xpixels+0.5).astype(int))
    return slice(min(i1, i2), max(i1, i2))

def remove_bg_and_regularize(data, wcs, wavrest, dwbg_in=7.0, dwbg_out=10.0):
    '''
    Transpose data if necessary, and then subtract off the background (blue and red of line)
    '''
    isT = row['saxis'] == 1
    # Make sure array axis order is (position, wavelength)
    if isT:
        data = data.T
    if row['Dataset'] == '2015-02':
        # Don't try this for the newest data, I already removed the BG
        return data
    # pixel limits for blue, red bg extraction
    bslice = wavs2slice([wavrest-dwbg_out/2, wavrest-dwbg_in/2], wcs)
    rslice = wavs2slice([wavrest+dwbg_in/2, wavrest+dwbg_out/2], wcs)
    # extract backgrounds on blue and red sides
    bgblu = data[:, bslice].mean(axis=1)
    bgred = data[:, rslice].mean(axis=1)
    # take weighted average, accounting for cases where the bg region
    # does not fit in the image: a partially clipped window gets a
    # proportionally smaller weight via .size
    # (NB: a fully clipped window would give mean() = NaN, which the
    # zero weight would not cancel)
    weight_blu = data[:, bslice].size
    weight_red = data[:, rslice].size
    print('Background weights:', weight_blu, weight_red)
    bg = (bgblu*weight_blu + bgred*weight_red)/(weight_blu + weight_red)
    return data - bg[:, None]
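
A toy numpy check of the weighting logic, with made-up numbers: a partially clipped background window simply contributes with a proportionally smaller weight (the NaN caveat for a fully clipped window is noted in the comments above):

import numpy as np

data = np.arange(40.0).reshape(4, 10)        # toy (position, wavelength) array
bslice, rslice = slice(0, 3), slice(9, 10)   # red window clipped to one column
bgblu = data[:, bslice].mean(axis=1)         # [ 1. 11. 21. 31.]
bgred = data[:, rslice].mean(axis=1)         # [ 9. 19. 29. 39.]
weight_blu = data[:, bslice].size            # 12
weight_red = data[:, rslice].size            # 4
bg = (bgblu*weight_blu + bgred*weight_red)/(weight_blu + weight_red)
print(bg)                                    # [ 3. 13. 23. 33.]: blue side dominates 3:1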

11.4.10.1 Original version

def extract_profile(data):
    return remove_background(data).sum(axis=wavaxis)


def remove_background(data):
    return data - row['zero']

11.4.11 Loop over the slit positions and process each spectrum

# Emission lines included in each type of full spectrum
linesets  = {
    'sii' : ['siil', 'siis'],
    'oiii': ['oiii'],
    'default': ['ha', 'nii'],
}

restwavs = {
    'ha': 6562.79,
    'nii': 6583.45,
    'siis': 6716.44,
    'siil': 6730.816,
    'oiii': 5006.84,
}

for row in tab:
    if row['Dataset'].endswith('sii'):
        lineset = linesets['sii']
    elif row['Dataset'].endswith('oiii'):
        lineset = linesets['oiii']
    else:
        lineset = linesets['default']
    
    full_id = row['Dataset'] + '-' + row['imid']
    if not full_id.startswith(selector_pattern):
        continue
    print(row)
    imslitfile = find_fits_filepath(row, 'image')


    line_hdus = []
    for line_id in lineset:
        line_hdus.append(fits.open(find_fits_filepath(row, line_id))[0])
    
    imhdu = fits.open(imslitfile)[0]

    # World coordinates along slit
    slit_coords = find_slit_coords(row, imhdu.header, line_hdus[0].header)

    # Find synthetic profile from calibration image
    calib_profile = slit_profile(slit_coords['RA'], slit_coords['Dec'],
                                 photom.data, wphot)

    ratio = None
    if lineset == ['ha', 'nii']:
        # This part is too difficult to generalise to other lines for the moment
        hahdu, niihdu = line_hdus
        # Find actual profile along slit from spectrum
        wavaxis = row['saxis'] - 1  # This always seems to be true
        ha_profile = extract_profile(hahdu.data, WCS(hahdu.header), 6562.79)
        # Take the nii/ha calibration correction factor from the table
        nii_profile = row['r(nii)']*extract_profile(niihdu.data, WCS(niihdu.header), 6583.45)
        spec_profile = (ha_profile + 1.333*nii_profile)/row['norm']  # 1.333 = 1 + 1/3: presumably includes the 6548 component, fixed at one third of 6583
        plt_prefix = 'plots/{:03d}-{}-calib'.format(row.index, full_id)
        ratio = make_three_plots(spec_profile, calib_profile, plt_prefix, niirat=nii_profile/ha_profile)

    #
    # Save calibrated spectra to files
    #

    for lineid, hdu in zip(lineset, line_hdus):
        restwav = restwavs[lineid]
        print('Saving', lineid, 'calibrated spectrum')
        # Apply basic calibration zero-point and scale
        hdu.data = remove_bg_and_regularize(hdu.data, WCS(hdu.header), restwav)/row['norm']
        # Regularize spectral data so that wavelength is x and pos is y
        # This is now done by the bg subtraction function

        # Apply polynomial correction along slit
        if ratio is not None:
            hdu.data /= ratio[:, None]
        # Extend in the third dimension (degenerate axis perp to slit)
        hdu.data = hdu.data[None, :, :]

        # Create the WCS object for the calibrated slit spectra
        wslit = make_slit_wcs(row, slit_coords, hdu)
        # Set the rest wavelength for this line
        wslit.wcs.restwav = (restwav*u.Angstrom).to(u.m).value
        # # Remove WCS keywords that might cause problems
        # for i in 1, 2:
        #     for j in 1, 2:
        #         kwd = 'CD{}_{}'.format(i, j)
        #         if kwd in hdu.header:
        #             hdu.header.remove(kwd) 
        # Then update the header with the new WCS structure as the 'A'
        # alternate transform
        hdu.header.update(wslit.to_header(key='A'))
        # Also save the normalization factor as a per-slit weight to use later
        hdu.header['WEIGHT'] = row['norm']

        # And better not to change the original WCS at all
        # Unless we have transposed the array, which we have to compensate for
        if row['saxis'] == 1:
            for k in ['CRPIX{}', 'CRVAL{}', 'CDELT{}', 'CD{0}_{0}']:
                hdu.header[k.format('1')], hdu.header[k.format('2')] = hdu.header[k.format('2')], hdu.header[k.format('1')] 
        # # And write a bowdlerized version that DS9 can understand as the main WCS
        # hdu.header.update(fixup4ds9(wslit).to_header(key=' '))
        calibfile = 'Calibrated/{}-{}.fits'.format(full_id, lineid)
        hdu.writeto(calibfile, overwrite=True)  # 'clobber' was renamed 'overwrite' in newer astropy

11.4.12 Test what is going on

# print(wphot.wcs)
# for row in tab:
#     print([row[x] for x in ('Dataset', 'imid', 'specid', 'Notes')])

11.5 Run slit-calibration.py

  • For debugging we can redirect stdout to stderr with 1>&2
    • This means that the normal output (e.g., print statements) will show up in the *Org-Babel Error Output* buffer
python slit-calibration.py  # 1>&2

12 Fix the issue with the [N II]/Ha ratio

12.1 DONE [1/1] Program to check the [N II]/Ha ratios: slit-ratio-check.py

  • We want to check the individual ha and nii images we are using
  • And also double check against the full spectrum image wherever possible

12.1.1 Re-use stuff from slit-calibration.py

  <<slit-calib-imports>>
from astropy.convolution import convolve_fft, Box1DKernel
  <<read-slit-table>>
  <<slit-calib-filenames>>

12.1.2 Convert wavelength to pixel

def wav2pix(wav, wcs, nwav, isT):
    if isT:
        _, (xpix,) = wcs.all_world2pix([0], [wav], 0)
    else:
        (xpix,), _ = wcs.all_world2pix([wav], [0], 0)
#    print(wcs.wcs.crpix, wcs.wcs.crval, wcs.wcs.get_cdelt(), wcs.wcs.get_pc())
    print('Wav:', wav, 'Pixel:', xpix)
    return max(0, min(nwav, int(xpix+0.5)))

12.1.3 Make a sensible WCS (even if wavelength info missing)

0.0994382022472 is (6583.45 - 6548.05)/356: the Angstrom/pixel scale implied by the 356-pixel separation between the two [N II] lines measured in section 12.2.2, consistent with the dwav = 0.1 entries in wcs_extra below.

wcs_extra = {
    '2007-01': (440, 6583.45, 0.1),
    '2006-02': (446, 6583.45, 0.1),
    ('2006-02', '323'): (442, 6583.45, 0.1),
    ('2006-02', '318'): (442, 6583.45, 0.1),
    ('2006-02', '260'): (491, 6583.45, 0.1),
    '2007-01b': (440, 6583.45, 0.1),
    '2010-01': (440, 6583.45, 0.1),
    '2013-02': (167, 6583.45, -0.056),
    ('2015-02', '0003'): (1015, 6583.45, 0.05775),
    ('2015-02', '0012'): (888, 6583.45, 0.05775),
}
def makeWCS(hdr, dset, imid, jwav):
    w = WCS(hdr)
    dwav = w.wcs.get_cdelt()[jwav]*w.wcs.get_pc()[jwav, jwav]
    if dwav == 1.0:
        # No WCS info from header, so fix it by hand
        extras = wcs_extra.get((dset, imid)) or wcs_extra.get(dset)
        if w.wcs.has_cd():
            w.wcs.crpix[jwav], w.wcs.crval[jwav], w.wcs.cd[jwav, jwav] = extras
        else:
            w.wcs.crpix[jwav], w.wcs.crval[jwav], w.wcs.cdelt[jwav] = extras
        print('Fixing WCS to CRPIX = {}, CRVAL = {}, CDELT = {}'.format(*extras) )
        print('Confirmation CRPIX = {}, CRVAL = {}, CDELT = {}'.format(w.wcs.crpix[jwav],
                                                                       w.wcs.crval[jwav],
                                                                       w.wcs.get_cdelt()[jwav]) )
    return w
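
The lookup order in `makeWCS` matters: a per-image `(dataset, imid)` key in `wcs_extra` takes precedence over the per-dataset default. For example:

# Per-image entry wins over the per-dataset fallback
extras = wcs_extra.get(('2006-02', '323')) or wcs_extra.get('2006-02')
print(extras)   # -> (442, 6583.45, 0.1), not the dataset default (446, 6583.45, 0.1)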

12.1.4 DONE Extract profile along slit for an isolated line

def extract_profile(hdu, wavrest, dset, imid,
                    dw=4.0, dwbg_in=6.0, dwbg_out=8.0,
                    isT=False, smooth=10):
    jwav = 1 if isT else 0
    w = makeWCS(hdu.header, dset, imid, jwav)
    # Make sure array axis order is (position, wavelength)
    data = hdu.data.T if isT else hdu.data
    nslit, nwav = data.shape
    dwav = w.wcs.get_cdelt()[jwav]*w.wcs.get_pc()[jwav, jwav]
    sgn = np.sign(dwav)         # Need to take slices backwards if this is negative
    print('Check: wavrest = {}, dwav = {}, nslit = {}, nwav = {}'.format(wavrest, dwav, nslit, nwav))
    # pixel limits for line extraction
    i1 = wav2pix(wavrest-dw/2, w, nwav, isT)
    i2 = wav2pix(wavrest+dw/2, w, nwav, isT)
    # pixel limits for blue bg extraction
    iblu1 = wav2pix(wavrest-dwbg_out/2, w, nwav, isT)
    iblu2 = wav2pix(wavrest-dwbg_in/2, w, nwav, isT)
    # pixel limits for red bg extraction
    ired1 = wav2pix(wavrest+dwbg_in/2, w, nwav, isT)
    ired2 = wav2pix(wavrest+dwbg_out/2, w, nwav, isT)
    print(iblu1, iblu2, i1, i2, ired1, ired2)
    # extract backgrounds on blue and red sides
    bgblu = data[:, iblu1:iblu2:sgn].mean(axis=1)
    bgred = data[:, ired1:ired2:sgn].mean(axis=1)
    # take weighted average, accounting for cases where the bg region
    # does not fit in the image
    weight_blu = data[:, iblu1:iblu2:sgn].size
    weight_red = data[:, ired1:ired2:sgn].size
    bg = (bgblu*weight_blu + bgred*weight_red)/(weight_blu + weight_red)
    data -= bg[:, None]

    profile = data[:, i1:i2:sgn].sum(axis=1)
    if smooth is not None:
        profile = convolve_fft(profile, Box1DKernel(smooth))
    return profile
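
For reference, the `Box1DKernel` convolution at the end is just a running boxcar average along the slit; a minimal stand-alone sketch with a toy noisy profile (the numbers are made up):

import numpy as np
from astropy.convolution import convolve_fft, Box1DKernel

profile = np.random.default_rng(42).normal(100.0, 5.0, size=512)  # toy noisy profile
smoothed = convolve_fft(profile, Box1DKernel(10))  # 10-pixel boxcar, as in extract_profile
print(profile.std(), smoothed.std())  # pixel-to-pixel noise drops by roughly sqrt(10)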

12.1.5 Find the celestial coordinates along the slit by using the WCS of the calibrated spectrum

def slit_coords_from_wcs(w, isT, nslit):
    """Input arguments: `w`, a WCS object with 3 dimensions, i axis
order (wav, ra, dec), and j axis order (wav, parallel, perp); `isT`, a
flag that is True for horizontal slits; `nslit`, the number of pixels
along the slit.  Returns: `coords`, `coord_label`, where `coords` is
an array of the major coordinate for each slit pixel and
`coord_label` is its coordinate name (RA or Dec)

    """
    # Note that j axis order is always wavelength, then along slit,
    # then across slit
    wav, ra, dec = w.all_pix2world([0]*nslit, range(nslit), [0]*nslit, 0)
    if isT:
        # Slit axis mainly along RA
        coord_label = w.wcs.cname[1]
        coords = ra
    else:
        # Slit axis mainly along Dec
        coord_label = w.wcs.cname[2]
        coords = dec
    print('Major coordinate:', coord_label)
    print(coords[::100])
    return coords, coord_label

12.1.6 Loop over all the slits and check the ratios

datasets = set(tab['Dataset'])
MAXLINES = 18
sns.set_palette("deep", 6)
# sns.set_palette(sns.husl_palette(MAXLINES//3, l=0.4))
# sns.set_palette(sns.diverging_palette(10, 220, sep=80, n=MAXLINES/2, center='dark'))
ratio_types = 'nii-ha', 'nii-ha-full', 'nii-nii-full'
fig_ax_dict = {(ds, rtype): plt.subplots(1, 1)
               for ds in datasets for rtype in ratio_types}
def get_plot_dict(iline):
    """Lines get thicker and fainter as iline increases"""
    x = iline/MAXLINES
    return {"lw": 0.8 + 1.2*x, "alpha": 1.0 - 0.6*x}

for row in tab:
    print(row['Dataset'], row['imid'], row['specid'])
    full_id = row['Dataset'] + '-' + row['imid']
    specfile = find_fits_filepath(row, 'fullspec')
    hafile = find_fits_filepath(row, 'ha')
    niifile = find_fits_filepath(row, 'nii')
    calibfile = 'Calibrated/{}-{}.fits'.format(full_id, 'ha')
    spechdu = fits.open(specfile)[0]
    hahdu = fits.open(hafile)[0]
    niihdu = fits.open(niifile)[0]
    calhdu = fits.open(calibfile)[0]
    calw = WCS(calhdu.header, key='A')

    isT = row['saxis'] == 1
    dset = row['Dataset']
    imid = row['imid']

    # First use the extracted ha and nii spectra, plotted against RA or Dec
    ha = extract_profile(hahdu, 6562.79, dset, imid, isT=isT)
    nii = extract_profile(niihdu, 6583.45, dset, imid, isT=isT)
    fig, ax = fig_ax_dict[(dset, 'nii-ha')]
    coords, coord_label = slit_coords_from_wcs(calw, isT, len(ha))
    ax.plot(coords, nii/ha, label=str(row['imid']), **get_plot_dict(len(ax.lines)))
    ax.set_xlabel(coord_label)
    if coord_label == 'Declination':
        ax.set_xlim(-5.51, -5.33)
    else:
        ax.set_xlim(83.85, 83.55)  # RA should increase right-to-left

    # Then use the full spectrum, just plotted against pixel
    ha = extract_profile(spechdu, 6562.79, dset, imid, isT=isT)
    nii = extract_profile(spechdu, 6583.45, dset, imid, isT=isT)
    niib = extract_profile(spechdu, 6548.05, dset, imid, isT=isT)
    fig, ax = fig_ax_dict[(dset, 'nii-ha-full')]
    ax.plot(nii/ha, label=str(row['imid']), **get_plot_dict(len(ax.lines)))
    fig, ax = fig_ax_dict[(dset, 'nii-nii-full')]
    ax.plot(niib/nii, label=str(row['imid']), **get_plot_dict(len(ax.lines)))


for (ds, rtype), (fig, ax) in fig_ax_dict.items():
    ax.legend()
    ax.set_ylim(0.0, 0.5)
    if not rtype == 'nii-ha':
        ax.set_xlabel('Pixel')
    if 'ha' in rtype:
        ax.set_ylabel('6583 / 6563')
    else:
        ax.set_ylabel('6548 / 6583')
    fig.savefig('plots/{}-check-{}.png'.format(rtype, ds), dpi=300)

12.2 Check the pixel bounds for bg and line

12.2.1 Script to check the grating angles and other observational parameters

  <<slit-calib-imports>>
  <<read-slit-table>>
  <<slit-calib-filenames>>
outtab = [['ID', 'Spec', 'Date', 'Detector', 'Aperture', 'Grating', 'Gain'], None]  # column order matches the hdr.get() calls below
for row in tab:
    full_id = row['Dataset'] + '-' + row['imid']
    specfile = find_fits_filepath(row, 'fullspec')
    hdr = fits.open(specfile)[0].header
    outtab.append([full_id, row['specid'], hdr.get('DATE-OBS'),
                   hdr.get('DETECTOR') or hdr.get('CAMERA'),
                   hdr.get('APERTURE'), hdr.get('GRATING'), hdr.get('GAIN')])
| ID            | Spec      | Date       | Detector | Aperture    | Grating | Gain |
|---------------|-----------|------------|----------|-------------|---------|------|
| 2006-02-326   | 324       | 2006-02-05 | SITE3    | 150 microns | -11     | 4    |
| 2006-02-318   | 319       | 2006-02-05 | SITE3    | 150 microns | -11     | 4    |
| 2006-02-260   | 261       | 2006-02-05 | SITE3    | 150 microns | -7      | 4    |
| 2006-02-270   | 271       | 2006-02-05 | SITE3    | 150 microns | -4      | 4    |
| 2006-02-276   | 277       | 2006-02-05 | SITE3    | 150 microns | -4      | 4    |
| 2006-02-281   | 282       | 2006-02-05 | SITE3    | 150 microns | -6      | 4    |
| 2006-02-286   | 287       | 2006-02-05 | SITE3    | 150 microns | -7      | 4    |
| 2006-02-291   | 292       | 2006-02-05 | SITE3    | 150 microns | -8      | 4    |
| 2006-02-296   | 297       | 2006-02-05 | SITE3    | 150 microns | -8      | 4    |
| 2006-02-303   | 304       | 2006-02-05 | SITE3    | 150 microns | -8      | 4    |
| 2006-02-313   | 312       | 2006-02-05 | SITE3    | 150 microns | -10     | 4    |
| 2007-01b-2061 | 2062-c    | 2007-01-13 | SITE3    | 150 microns | 0       | 4    |
| 2007-01b-2037 | 2038-2040 | 2007-01-13 | SITE3    | 150 microns | 0       | 4    |
| 2007-01b-2041 | 2042-2044 | 2007-01-13 | SITE3    | 150 microns | 0       | 4    |
| 2007-01b-2045 | 2046-2048 | 2007-01-13 | SITE3    | 150 microns | 0       | 4    |
| 2007-01b-2049 | 2050-2052 | 2007-01-13 | SITE3    | 150 microns | 0       | 4    |
| 2007-01b-2053 | 2054-2056 | 2007-01-13 | SITE3    | 150 microns | 0       | 4    |
| 2007-01b-2057 | 2058-2060 | 2007-01-13 | SITE3    | 150 microns | 0       | 4    |
| 2007-01-054   | 055       | 2007-01-10 | SITE3    | 150 microns | 0       | 4    |
| 2007-01-067   | 068       | 2007-01-10 | SITE3    | 150 microns | 0       | 4    |
| 2007-01-072   | 073       | 2007-01-10 | SITE3    | 150 microns | 0       | 4    |
| 2010-01-078   | 079-080   | 2010-01-15 | SITE3    | 150 micron  | -8      | 4    |
| 2010-01-202   | 203-204   | 2010-01-17 | SITE3    | 150 micron  | 1       | 4    |
| 2010-01-124   | 125-127   | 2010-01-16 | SITE3    | 150 micron  | -1      | 4    |
| 2010-01-206   | 207-208   | 2010-01-17 | SITE3    | 150 micron  | 1       | 4    |
| 2010-01-128   | 129-130   | 2010-01-16 | SITE3    | 150 micron  | -1      | 4    |
| 2010-01-133   | 134-135   | 2010-01-16 | SITE3    | 150 micron  | -1      | 4    |
| 2010-01-210   | 211-212   | 2010-01-17 | SITE3    | 150 micron  | 1       | 4    |
| 2010-01-137   | 138-139   | 2010-01-16 | SITE3    | 150 micron  | -1      | 4    |
| 2010-01-214   | 215-216   | 2010-01-17 | SITE3    | 150 micron  | 1       | 4    |
| 2010-01-145   | 146-147   | 2010-01-16 | SITE3    | 150 micron  | -1      | 4    |
| 2010-01-248   | 249-250   | 2010-01-17 | SITE3    | 150 micron  | 1       | 4    |
| 2010-01-219   | 220-221   | 2010-01-17 | SITE3    | 150 micron  | 1       | 4    |
| 2010-01-157   | 158-159   | 2010-01-16 | SITE3    | 150 micron  | 1       | 4    |
| 2010-01-236   | 237-238   | 2010-01-17 | SITE3    | 150 micron  | 1       | 4    |
| 2010-01-240   | 241-242   | 2010-01-17 | SITE3    | 150 micron  | 1       | 4    |
| 2010-01-244   | 245-246   | 2010-01-17 | SITE3    | 150 micron  | 1       | 4    |
| 2010-01-252   | 253-254   | 2010-01-17 | SITE3    | 150 micron  | 0       | 4    |
| 2013-02-024   | 025       | 2013-02-16 | e2vm2    | 70 micron   | -4      | 2.2  |
| 2013-02-165   | 166-167   | 2013-02-18 | e2vm2    | 150 micron  | 13      | 2.2  |
| 2013-02-169   | 170-171   | 2013-02-18 | e2vm2    | 150 micron  | 10      | 2.2  |
| 2013-02-237   | 238-239   | 2013-02-19 | e2vm2    | 150 micron  | 11      | 2.2  |
| 2013-02-232   | 233-234   | 2013-02-19 | e2vm2    | 150 micron  | 9       | 2.2  |
| 2013-02-226   | 227-228   | 2013-02-19 | e2vm2    | 150 micron  | 9       | 2.2  |
| 2013-02-149   | 150-151   | 2013-02-18 | e2vm2    | 150 micron  | 4       | 2.2  |
| 2013-02-154   | 155-156   | 2013-02-18 | e2vm2    | 150 micron  | 7       | 2.2  |
| 2013-02-159   | 160-161   | 2013-02-18 | e2vm2    | 150 micron  | 8       | 2.2  |
| 2013-02-033   | 034-035   | 2013-02-16 | e2vm2    | 150 micron  | -5      | 2.2  |
| 2013-02-029   | 030-031   | 2013-02-16 | e2vm2    | 150 micron  | -4      | 2.2  |
| 2013-12-116   | 117-118   | 2013-12-11 | e2vm2    | 150         | 0       | 2.2  |
| 2013-12-086   | 088       | 2013-12-11 | e2vm2    | 150         | -2      | 2.2  |
| 2013-12-090   | 089       | 2013-12-11 | e2vm2    | 150         | -2      | 2.2  |
| 2013-12-102   | 103-104   | 2013-12-11 | e2vm2    | 150         | -2      | 2.2  |
| 2013-12-111   | 112-113   | 2013-12-11 | e2vm2    | 150         | 0       | 2.2  |
| 2015-02-0003  | 0004      | 2015-02-03 | e2vm2    | 70 microns  | 9       | 2.2  |
| 2015-02-0012  | 0013      | 2015-02-03 | e2vm2    | 70 microns  | 2       | 2.2  |

12.2.2 2006-02 dataset

xpaset -p ds9 fits ~/Work/SPM2005/pp324.fits
  • Measurements of X at top of image for blue component of 6583
  • Pixel difference between 6583 and 6548 is 356 => dwav = 0.1
A
  • X = 442
  • Grating = -11
  • 324, 319
B
  • X = 491
  • Grating = -7
  • 261
C
  • X = 446
  • Grating = -4, -8, -7 (!!!), -10
    • 287 has grating of -7, which is the same as 261
    • but the position of the spectrum on the chip is the same as for all the others
    • So we can’t use the Grating value for anything
  • 271, 277, 282, 287, 292, 297, 304, 312

12.2.3 2007-01b dataset

xpaset -p ds9 fits ~/Work/SPM2007/Reduced/HH505/slits/reducciones/spec2058-2060.fits
  • All the same
  • X = 440
  • 2062, 2038, 2042, 2046, 2050, 2054, 2058
  • Grating = 0

12.2.4 2007-01 dataset

xpaset -p ds9 fits ~/Work/SPM2007/Reduced/spec055-transf.fits
  • All the same
  • X = 440

12.2.5 2010-01 dataset

xpaset -p ds9 fits ~/Dropbox/SPMJAN10/reducciones/spm079-080h.fits
  • Only tiny changes in position between X=440 and X=443

12.2.6 2013-02 dataset

xpaset -p ds9 fits ~/Dropbox/SPMFEB13/WesternShocks/spm025_bcr.fits
  • dwav = -0.056
  • Y = 167 (6583), Y = 799 (6548)
  • Changes to Y = 165 in 238, 233

12.2.7 2013-12 dataset

xpaset -p ds9 fits ~/Dropbox/papers/LL-Objects/SPMDIC13/spm112-113_bcrx.fits
  • dwav = 0.05664
  • Y = 843 (6583), Y = 218 (6548)
  • But this already has a decent WCS

12.2.8 2015-02 dataset

xpaset -p ds9 fits ~/Dropbox/SPMFEB15/archivos/spm0013o_bcrx.fits
0004
  • Y = 1015 (6583), Y = 402 (6548)
  • dwav = 0.05775
0013
  • Y = 888 (6583), Y = 271 (6548)
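
As a cross-check, the dwav values quoted in the subsections above follow directly from the measured pixel separations of the two [N II] lines:

# dwav = (6583.45 - 6548.05) / (pixel separation of the [N II] lines)
print(35.4 / 356)           # 2006-02: 0.0994, consistent with dwav = 0.1
print(35.4 / (167 - 799))   # 2013-02: -0.0560
print(35.4 / (843 - 218))   # 2013-12: 0.05664
print(35.4 / (1015 - 402))  # 2015-02: 0.05775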

12.2.9 Looking at the calibrated spectra

xpaset -p ds9 fits $PWD/Calibrated/2006-02-303-ha.fits

So it seems like all the trouble was due to differences in the zero-level between the nii and the ha spectra!

12.3 Run slit-ratio-check.py

python slit-ratio-check.py # 1>&2
