Very interesting. How can one learn more about how to do the post-processing for Astrophotography?
That's a long and complicated subject. It can take years to learn -- astrophotography is far more complicated than regular photography.
For starters... image acquisition:
The Earth is rotating on its axis -- moving from West to East. This creates the illusion that the sky is moving from East to West. If you use a _very_ wide angle lens to capture a huge section of sky, you may be able to get away with a tripod. You can use something called the "Rule of 600" to find the exposure time. The "Rule of 600" says that if you divide 600 by the focal length of the lens (whatever that happens to be), the result is the number of seconds you can expose before the stars start to "elongate" (grow tails due to the movement of the sky). The rule assumes a 35mm film camera (or full frame digital camera). With a full frame camera and a 14mm lens, you could do 600 ÷ 14 = 42.9 (about 43 seconds). But if you are using a crop-frame camera, you have to divide that value by your crop factor -- 1.5 to 1.6 depending on your camera. That drops you down to about 27 to 29 seconds per frame. With an 18-55mm kit lens on an APS-C DSLR at the 18mm end, it's about 21 seconds. Go longer than that and you risk elongated stars UNLESS your camera is mounted on something that tracks the sky. Sometimes you deliberately want star trails... that's a completely different topic, but for that you would, of course, go longer.
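The arithmetic above can be sketched as a tiny helper (a rough rule of thumb only -- some imagers prefer a stricter "Rule of 500"; the function name and defaults here are my own, not from any standard tool):

```python
def max_exposure_seconds(focal_length_mm, crop_factor=1.0, rule=600):
    """Longest exposure (in seconds) before stars visibly elongate,
    per the "Rule of 600". crop_factor is 1.0 for full frame,
    ~1.5-1.6 for APS-C."""
    return rule / (focal_length_mm * crop_factor)

# Full frame with a 14mm lens: 600 / 14, about 43 seconds
print(max_exposure_seconds(14))
# APS-C (1.5x crop) kit lens at 18mm: 600 / (18 * 1.5), about 22 seconds
print(max_exposure_seconds(18, crop_factor=1.5))
```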
When imaging a specific object (such as I did here), the field of view is considerably narrower and this _requires_ tracking. This gets complicated... quickly. If you are using a telescope on an alt/az mount (level to the horizon) then the image will appear to "twist" as you track it -- and that's no good... it'll result in blurred images. The mount has to be equatorially aligned.
It's not *just* a matter of using an equatorial mount (a mount where the major axis of rotation -- the "right ascension" axis -- is parallel to Earth's axis of rotation). As the Earth rotates from West to East... the mount rotates from East to West... at exactly the same angular speed. This keeps objects in the field of view. But getting the mount aligned accurately is VERY important. For visual use it just needs to be close. For astro-imaging purposes it really needs to be accurate. There are numerous techniques to accurately align the mount. One of the most popular is something called the "drift alignment" method. I'll skip explaining that because it'd be a whole post in itself. You can find numerous articles on the net that explain how to do it.
The mount needs to be SOLID.... I mean REALLY SOLID. I can't emphasize that enough. Serious imagers spend a lot of money on their scopes... but they probably spend even MORE money on their mounts. If your mount isn't rock solid then it doesn't really matter how good your camera and scope are. A decent mount for astro-imaging probably starts at around $1000-1500 and goes up from there (to be clear... I'm JUST talking about the mount... that's not the price of the scope.) Also you don't want to overload the mount. Too much weight causes the mount to flex and you get bad tracking. Take whatever the "marketing" version is of how much weight the mount can handle... divide by 2. Try to avoid going over that much weight (some people say 60% of whatever the manufacturer "claims" is the real limit you should use.)
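That derating guideline is simple arithmetic, but it's worth writing down (the 50%/60% factors are the rules of thumb above, not anything from a manufacturer spec sheet):

```python
def safe_payload_kg(rated_capacity_kg, derate=0.5):
    """Conservative imaging payload: use ~50% of the manufacturer's
    rated capacity (the more optimistic camp says 60%). Purely a
    rule-of-thumb calculation -- the numbers are illustrative."""
    return rated_capacity_kg * derate

# A mount "rated" for 20 kg should carry about 10 kg of imaging gear
print(safe_payload_kg(20))        # 10.0
# ...or about 12 kg if you go with the 60% camp
print(safe_payload_kg(20, 0.6))   # 12.0
```

Remember the payload includes everything: scope, camera, guide scope, guide camera, rings, and cables.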
And even with all this care and precision... the tracking will STILL blur. So the next element is to use auto-guiding. This involves a 2nd camera and usually (but not always) a 2nd scope. These are both on the same mount. This adds weight to the mount so of course all this needs to be factored into the mount capacity. The auto-guider takes an initial shot of the sky using a wide field view. You pick a suitable "guide" star within that frame. The tracking software then does some automatic calibration to learn the responsiveness of the mount and then it's ready to go (PHD is the most popular auto-guiding software and it's free). Once tracking begins, give the scope about 5 minutes to let the auto-guider settle and then you can start imaging.
The ability to knock back noise follows Poisson statistics... the random noise can be reduced by the square root of the number of "light" frames that you shoot. E.g. if you shoot 9 frames then you can reduce the noise by a factor of 3 (the square root of 9). Shoot 16 frames and you can reduce noise by a factor of 4 (the square root of 16). Most imagers indicate there's not much point going over 25 light frames. These frames are not used to increase light gathering... they are used to reduce noise and create a smooth image.
The idea is that there will be a TON of noise in the image... but also a lot of faint detail. It's VERY hard to tell the difference. Take enough frames and the faint detail will consistently appear at the same spot in each frame. The noise will usually be random. Some noise will be consistent (pattern noise)... but a technique called "dithering" can eliminate that. When "dithering", the image acquisition software (I used "Backyard EOS") will communicate with the auto-guider. Between each "light" frame, the image acquisition software will tell the auto-guiding software to perform a random shift of the image. The faint detail which is real will shift in the image frame according to how much the scope moved. The pattern noise, on the other hand, will stay put. This makes it easier for the stacking software to determine which data is "real" and which is noise.
You also need "dark" frames... these are frames shot at the same temperature, ISO, and shutter speed as the "light" frames... but with the shutter closed (or scope capped). The general guideline is to shoot half as many dark frames as you did light frames. These frames contain noise caused by the camera sensor itself. This helps the stacking software determine how much noise is normally present in your camera (at that temperature... noise is related to the physical temperature of the sensor... hotter temps produce more noise. Professional imaging cameras for astrophotography use "cooled" cameras to reduce the noise.)
You also need something called "flat" frames. All camera lenses have some vignetting. Turns out telescopes are basically big camera lenses and, not surprisingly, they also have vignetting. Typically the center of an image is brighter than the edges. While this normally is not noticeable, it's a problem for astrophotography because the image data has to be "stretched" to tease out the detail. This "stretching" process also has the undesirable side-effect of exaggerating the vignetting problem. The "flat" frames are a series of images taken by the same camera, scope, focal length... in order to detect _very_ slight differences in light distribution across the frame. Believe it or not the image stacking software can detect these faint differences (not normally visible to the human eye) and fix them. If you don't fix them then you WILL notice the uneven lighting across the frame once the data is stretched.
You can also gather something called "bias" frames. A "bias" frame is technically a 0 second exposure. In order for your camera to work, it has to apply a charge to the sensor and then do a readout. The idea is to find out how much of a charge is on the sensor just to make it operate at all. That's the "bias". The point of this is to give the software the bias as a baseline. If you subtract the bias from the dark, you end up with the amount of noise over and above the baseline that comes just from sensor temperature, exposure duration, and ISO. This helps the stacking software do a better job processing out noise and determining what data is "real".
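Putting the light, dark, flat, and bias frames together, the core arithmetic the stacking software performs looks roughly like this (a simplified sketch -- real tools like DeepSkyStacker also register/align the frames and do outlier rejection, which I've left out):

```python
import numpy as np

def calibrate(lights, darks, flats, biases):
    """Simplified frame calibration: subtract the master dark from each
    light, then divide by a normalized master flat to remove vignetting.
    All inputs are lists of same-shaped 2D arrays."""
    master_bias = np.mean(biases, axis=0)
    master_dark = np.mean(darks, axis=0)                 # includes bias
    master_flat = np.mean(flats, axis=0) - master_bias   # bias-subtracted
    master_flat /= master_flat.mean()                    # normalize to ~1.0
    return [(light - master_dark) / master_flat for light in lights]
```

Dividing by the flat is what evens out the vignetting; subtracting the dark removes the thermal noise pattern so it doesn't get stretched along with the real signal.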
Learning to stack is a process in itself. I'd start with something called "Registax" if what you are imaging are planets. If you are imaging deep space objects then I'd start with DeepSkyStacker. Both are free. They are not as advanced as some other tools... but free is good when you're learning. Some guys use Photoshop (the top imager in our club uses Photoshop -- I find it awkward to work with for astrophotography because of some of the steps involved.) I use PixInsight (PixInsight is not free and it's got a bit of a learning curve... but I think it's very good.)
Once you have stacked data and the preprocessing is done (btw... it can take a LONG time to preprocess the data depending on your computer. On a very high end computer it might crank for 15-20 minutes. On an older slower computer it might work on the data for quite a few hours) you can start 'stretching' the data. This is the process of trying to tease out the detail and further minimize the background noise. This is a bit of a dark art. My astronomy club has two astrophotography groups and quite a few imagers who meet twice per month. You can spend years working on learning the techniques to process images.
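To give a flavor of what "stretching" means numerically: it's a nonlinear curve that brightens faint data far more than bright data. Here's a minimal sketch using an arcsinh stretch, which is one common choice (the `strength` parameter is made up for illustration; real tools like PixInsight give you far more sophisticated controls):

```python
import numpy as np

def asinh_stretch(image, strength=50.0):
    """Nonlinear stretch: lift faint detail much more than bright
    stars. Higher `strength` pulls up fainter data more aggressively."""
    # Normalize to 0..1, apply arcsinh, renormalize so the max stays 1.
    x = (image - image.min()) / (image.max() - image.min() + 1e-12)
    return np.arcsinh(strength * x) / np.arcsinh(strength)

# A pixel at 1% of full scale ends up around 10% after the stretch,
# while the brightest pixel stays at 1.0
faint = np.array([0.0, 0.01, 0.1, 1.0])
print(asinh_stretch(faint))
```

The whole game of stretching is choosing curves like this (and masks, and noise reduction passes) so the faint nebulosity comes up without blowing out the stars or amplifying the background noise.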