Astrophotography is really two distinct sets of skills, each with its own requirements, and each has to be mastered before you can produce quality images. Needless to say, you can never get too good at either.
The first set is what I call data acquisition. This set of skills covers everything needed to acquire data: the mount, the telescopes, the cameras, the acquisition software, etc. This is the hardest to teach because no two systems are alike.
The second set of skills is processing: the techniques and methods used to turn the raw data into presentable images. It includes calibrating data, stacking, bad data rejection, levels, curves, etc.
On this page I will try to outline my process, or as it is commonly called, my workflow. By no means is this meant to be comprehensive! It is meant to be an outline. In addition, I use a one shot color (OSC) camera. That fundamentally changes the process, as you have to add a couple of steps to the flow.
For software I use two programs: 1) CCDStack and 2) Photoshop CS4. There are other programs that will work just as well.
Although it is called calibration, I prefer to think of it as just another step. All imaging systems (at least the ones we can afford) suffer from some level of vignetting, which shows up as a darkening toward the edges of the image, as shown below.
Notice how the image is darker toward the edges and lighter toward the middle. That's vignetting. It is corrected by dividing the image by what is called a flat. Collecting flats is part of data acquisition and will not be covered here; I assume you have collected a suitable set. To further correct for camera properties you must also subtract the read bias from the image. Again, I assume you have collected those frames. There is one more calibration frame that most people have to use but I don't. I use a QHY8 camera, whose chip has very little dark current. Dark current is the signal that builds up in the CCD chip as it sits there, even when no light falls on it. If you take a 5 minute exposure in the dark and subtract from it a 0.001 sec exposure, the resulting signal is the dark current. In my case the result is practically zero. The advantage is that I don't have to spend time collecting dark frames at specific exposure times and temperatures.
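The arithmetic behind this step is simple, and worth seeing in miniature. Here is a hedged sketch in Python with NumPy, assuming `raw`, `master_bias`, and `master_flat` are same-sized arrays (CCDStack does all of this for you; this is just the idea):

```python
import numpy as np

def calibrate(raw, master_bias, master_flat):
    """Bias-subtract, then divide by the normalized flat.

    Dark subtraction is omitted on the assumption (as with the QHY8
    described above) that dark current is negligible.
    """
    flat = master_flat - master_bias       # remove the bias from the flat too
    flat_norm = flat / np.mean(flat)       # normalize so the flat averages 1.0
    return (raw - master_bias) / flat_norm

# Tiny worked example: a vignetted frame divided by its flat comes out even.
bias = np.full((2, 2), 100.0)
flat = bias + np.array([[200.0, 100.0], [100.0, 200.0]])  # the vignette shape
raw  = bias + np.array([[ 40.0,  20.0], [ 20.0,  40.0]])  # same shape in the data
print(calibrate(raw, bias, flat))   # every pixel comes out 30.0
```

Because the raw frame carries the same vignette as the flat, the division leaves a perfectly even field.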
Step 1 - Make master flat and master bias
To get a good master flat and master bias I take 20 exposures of each. In CCDStack you make the masters with "Process/Calibrate/Make Master Flat (Bias)" and tell it where to get the files. The results of both are below.
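What "make master" does under the hood is combine the individual frames into one low-noise frame. A minimal sketch, assuming a list of 2D arrays and a median combine (the actual combine method CCDStack uses is configurable):

```python
import numpy as np

def make_master(frames):
    """Median-combine a stack of frames into a master.

    The median is robust to outliers (cosmic ray hits, hot pixels),
    which is why it is a common choice for building masters.
    """
    return np.median(np.stack(frames), axis=0)

# Example: five bias frames, one of which caught a cosmic-ray spike.
frames = [np.full((2, 2), 100.0) for _ in range(5)]
frames[2][0, 0] = 5000.0              # the spike
master = make_master(frames)
print(master[0, 0])                   # 100.0: the spike does not survive
```

This is also why you want many exposures: one bad frame out of 20 simply drops out of the median.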
After each master is created and saved, do the actual calibration by first loading your data files ("Files/Open") and selecting all the files you want to work with. After they are loaded, go to "Process/Calibrate". For a one shot color camera you use the flat frame for the flat but put the bias frame in the dark frame spot. Not sure why this works, but it does (most likely because, with negligible dark current, a bias frame is effectively a zero-second dark). The results are shown below.
You now have a set of images with the vignetting and bias signal removed. On to stacking!
An OSC camera contains all the RGB data needed for a complete color picture, but it is stored in a way that requires a little work to get it out. Each "pixel" of the camera is actually four pixels: a red pixel, a blue pixel, and two green pixels. We want to process them separately, as this gives the best result. With all your calibrated images loaded, go to "Color/Convert Bayer (OSC)". Here is where it gets a little tricky. All of the standard settings should be good, but you need to know which Bayer pattern is right for your camera; all of the manufacturers have this information available. Mine is the one in the upper right of the section (RGGB). Click apply to all. The program will now create three separate sets of images: one for the red channel, one for the green, and one for the blue. I save these in a directory called "working". Once those files are saved, clear them all out of the list and reload the red channel images (all of them).
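To make the Bayer layout concrete, here is a sketch of splitting an RGGB mosaic into channels. It assumes an even-sized mosaic and simply averages the two green samples per cell; CCDStack's actual conversion may handle the greens differently:

```python
import numpy as np

def split_rggb(mosaic):
    """Split an RGGB Bayer mosaic into R, G, B channel images.

    Layout assumed per 2x2 cell:   R G
                                   G B
    The two green samples are averaged; each output channel is half
    the mosaic's width and height.
    """
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return r, (g1 + g2) / 2.0, b

# One 2x2 Bayer cell: R=10, greens 20 and 22, B=30.
mosaic = np.array([[10.0, 20.0],
                   [22.0, 30.0]])
r, g, b = split_rggb(mosaic)
print(r, g, b)   # [[10.]] [[21.]] [[30.]]
```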
Stacking your images improves the signal to noise ratio: the signal adds together coherently while the random noise is suppressed (roughly, the signal to noise ratio grows with the square root of the number of frames). The process is pretty simple. First load all your calibrated images (or just leave them loaded from the calibration step) and go to "Stack/Register". Registration matches all the images to the same scale and position. When the dialog box opens it gives you several options. I have found that the standard "Star Match" with the "Automatic Standard Settings" works just fine. After the stars have been matched you have to "Apply" it. The "Bicubic B-Spline" fit works just fine.
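Registration itself is the hard part, and CCDStack's star matching handles sub-pixel shifts, rotation, and scale. As a toy stand-in, here is the simplest version of the idea: finding a whole-pixel translation between two frames by cross-correlation, then shifting one to match (wrap-around edges, no rotation):

```python
import numpy as np

def find_shift(reference, image):
    """Integer-pixel translation between two frames via FFT cross-correlation.

    A toy stand-in for CCDStack's star matching, which also handles
    sub-pixel shifts, rotation, and scale.
    """
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(image)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Map wrap-around peak positions to signed shifts.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx

def register(reference, image):
    """Shift `image` so it lines up with `reference` (edges wrap around)."""
    dy, dx = find_shift(reference, image)
    return np.roll(image, (dy, dx), axis=(0, 1))

# A single "star" at (4, 4), and the same field shifted down 1, right 2.
ref = np.zeros((16, 16)); ref[4, 4] = 1.0
img = np.roll(ref, (1, 2), axis=(0, 1))
aligned = register(ref, img)
print(np.argwhere(aligned == 1.0))   # the star is back at [[4 4]]
```

Real frames need sub-pixel accuracy, which is where the "Bicubic B-Spline" resampling choice comes in.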
Next you have to normalize the data. This makes the backgrounds and highlights of all the images equal in value. Go to "Stack/Normalize/Control/Both". The program will ask you to select a background area; just click and drag to pick a small one. The same thing happens for the highlight. It will look like nothing happened, but it did.
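Under the hood, normalizing to a background and a highlight amounts to a linear rescale of each frame so that those two levels match a reference frame. A minimal sketch, where the box arguments stand in for the click-and-drag selections:

```python
import numpy as np

def normalize(frame, ref, bg_box, hi_box):
    """Linearly rescale `frame` so its background and highlight levels
    match `ref` in the two selected regions.

    bg_box and hi_box are (row_slice, col_slice) pairs, standing in
    for the regions CCDStack asks you to drag out.
    """
    fb, fh = frame[bg_box].mean(), frame[hi_box].mean()
    rb, rh = ref[bg_box].mean(),   ref[hi_box].mean()
    scale = (rh - rb) / (fh - fb)
    return (frame - fb) * scale + rb

ref   = np.array([[10.0, 10.0], [10.0, 110.0]])  # background 10, highlight 110
frame = ref * 2.0 + 5.0                          # same field, different stretch
bg_box = (slice(0, 1), slice(0, 1))              # a background corner
hi_box = (slice(1, 2), slice(1, 2))              # the bright corner
print(normalize(frame, ref, bg_box, hi_box))     # matches ref again
```

Once every frame is on the same scale, the statistical rejection in the next step can compare pixels across frames fairly.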
Now it gets tricky. It turns out that the data behaves in a statistically predictable way. In this step you examine the data pixel by pixel to see whether an individual image has a value that falls outside this normal range. If it does, that pixel is "rejected" and replaced by a "normal" value. You control what "normal" means and how the pixel is replaced. Go to "Stack/Data Reject/Procedures". In the drop down pick "STD Sigma Reject". Don't worry about all the other choices in the section; the std sigma method works fine. I am sure there are differences between them all, but I have never noticed them. Once the dialog box opens there is only one setting that matters: "factor". This number determines the sensitivity of the rejection; the lower the number, the more aggressive the rejection. You vary the number, generally between 2 and 3, to get about 1% rejection. Play with this until you get the rejection right. The method I use is to select an area to look at and run all the images with a given factor. In the "information" box I look at the % rejected stat. I vary the factor to get around 1% rejection, then check all of the photos to see that they are about the same.
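Sigma rejection is easy to show in code. For each pixel position, compute the mean and standard deviation down the stack, and mark any frame's value that sits more than "factor" standard deviations away. A minimal sketch, assuming the frames are already registered and normalized:

```python
import numpy as np

def sigma_reject(stack, factor=2.5):
    """Mark pixels more than `factor` sigma from the per-pixel mean.

    `stack` has shape (n_frames, H, W). Returns a masked array;
    lower factors reject more aggressively.
    """
    mean  = stack.mean(axis=0)
    sigma = stack.std(axis=0)
    mask  = np.abs(stack - mean) > factor * sigma
    return np.ma.masked_array(stack, mask=mask)

# Six frames agree at 100, except one satellite-trail pixel at 1000.
stack = np.full((6, 2, 2), 100.0)
stack[3, 0, 0] = 1000.0
clipped = sigma_reject(stack, factor=2.0)
print(clipped.mask.sum())   # exactly one pixel rejected
```

This is how a satellite trail or cosmic ray that appears in only one frame gets thrown out without touching the good data.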
After that is done you have to "interpolate rejected pixels" and apply to all. That's it. There is no point in posting a picture, since you can't really see much of a difference.
This is what we have been waiting for. You now have images that have been calibrated, registered, normalized, and data rejected. Go to "Stack/Combine". From here there are several options. I have compared median, mean, and sum. To be honest I can't really tell the difference, and I am sure there is some mathematical reason to prefer one over another, but I always use mean. After the computer finishes you have an image named "mean". You did it! Save the file as both a .fits file and a .tif file.
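The combine step ties the last two steps together: with the rejected pixels masked out, a mean combine simply averages whatever survived at each position. A small sketch using a masked array:

```python
import numpy as np

# Mean-combining a stack where rejected pixels are masked: the masked
# array's mean skips them, which is the combine step in miniature.
stack = np.ma.masked_array(
    np.array([[100.0], [102.0], [98.0], [1000.0]]),
    mask=np.array([[False], [False], [False], [True]]),  # outlier rejected
)
combined = stack.mean(axis=0)
print(combined[0])   # 100.0, the rejected outlier plays no part
```

Whether you pick mean, median, or sum, the point is the same: many noisy frames collapse into one clean one.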
Don't forget, we have only completed the red channel. You must do the same thing for the green and blue channels, starting with registration.