After my initial attempts at M81 (and missing my target), I had a second attempt just a few days later, on May 16, 2020. Again, the camera in use was my Nikon D90 with a Nikkor zoom lens set to 135 mm.
This time, the target was the southern region of Leo, specifically the area where I assumed the “Leo Triplet” to be. The “Leo Triplet” consists of three galaxies very close together: Messier 65, Messier 66, and NGC 3628.
This time, my aim was spot on – not too difficult, since a bright star, Theta Leonis (also known as “Chertan”), made a good aiming point. And this is the annotated version:
Zooming in to a 1:1 ratio, the structure of the spiral galaxies this time becomes “visible” (though you still need a good idea of what you are looking at) – but again: just a DSLR and a 135 mm lens.
If you want to try for yourself – here is the finding chart:
By now, it has been 18 months since Covid-19 brought me back into Amateur Astronomy and Astrophotography. Time to recap a little bit and go back to the first sessions taken from my backyard.
I have kept all “not completely messed up” images taken – way back when with my old Nikon D90 – but my post-processing software and skills have developed a little bit.
The single exposure above is one of a set of 33 – and the only information I have about it is that I tried to aim at the Ursa Major area. What exactly is the image showing? I don’t know. The easiest way to get an idea of what is in that image is http://nova.astrometry.net/ – simply upload the file and wait for the result.
The beauty of Astrometry.net: it blind-solves without you having to install a platesolver locally. Once done, I can continue to work with PixInsight (which from my perspective is a “must have” when it comes to working with astrophotos).
With the center coordinates now known, I run the ImageSolver process in PixInsight, provide the coordinates for RA and Dec as well as the focal distance (105 mm) and the pixel size of the Nikon D90 sensor (5.5 micrometers), and hit OK.
Why perform this step (again) in PixInsight? So that I can update the image with the relevant data and re-use the information in other processing steps – such as creating the finding chart straight from within PixInsight.
Looking at this, it seems that I had aimed at M81 and M82 but missed by a tiny fraction. Back then, I was simply using my tripod and “visual aiming” – a close miss but still a miss. So what else is in that series of images (after all, I did spend time on it so why simply throw them out?)
First of all, the plate solver has also “told” us that the image is rotated by 92.1° – but PixInsight can take care of that as well, so I rotate the image by 90° counter-clockwise. I then have to repeat the plate solve because the rotation invalidated the earlier solution. Finally, the AnnotateImage process puts a layer of object annotations over the rotated image.
Well – I have missed M81 and M82, but I got NGC3359 and IC2574 (not in the cropped image above) in the overall image. Time to “develop” the image…
PixInsight – Debayering and Stacking
Since the images were taken with a color camera, the first thing we have to do is take care of the so-called “Bayer matrix”. PixInsight can do this – the process is called Debayer. Given that I do not have any matching dark frames, bias frames, or flat frames, this is the first step that can be done prior to aligning the images. The good thing about PixInsight is that it does not alter the original images but instead creates a set of new ones, so if something goes wrong (or if you later want to come back to old data), you have unchanged originals to start over with.
Now we have 33 images of the same area – what we want to do is “place them over each other” to enhance the signal. However, the Earth rotated while these images were being taken, and therefore the stars do not “align” across the frames.
A process called StarAlignment is what solves this “problem” – again, it can be done inside PixInsight.
Again, the aligned images are written to a dedicated location, creating yet another set of images along the line. The process can take a little while, depending on the number of images and your processing power (or rather that of your computer) – in my case, all images were aligned after roughly 2 minutes.
Stacking the aligned Images
With all the images debayered and aligned, we are ready to “stack” them on top of each other. While it is not “as easy as just that”, the ImageIntegration process in PixInsight takes care of that.
Other than adding the input images, the only other setting is the rejection algorithm – this is what takes care of “unwanted information” such as satellite trails and other disturbances.
The stacked image itself now looks like this:
Noticed that “dreadful” background pattern, with the image getting darker towards the edges? This is caused by the Nikon lens, and if I had taken flats, it would probably not show. But again, there is a solution in PixInsight, called AutomaticBackgroundExtractor.
It is the “lazy approach” to a “not-so-great” image – I ask PixInsight to automatically place a grid over the image in regular intervals, determine the background brightness, construct a background map and then subtract the background map from the actual image.
Removing the background has a stunning impact on the image…
Even though I used the automated process, the impact is significant – the number of stars visible has suddenly jumped up and the circular pattern is (mostly) gone.
Performing Color Calibration & Noise Reduction
The next steps – although not really required for this image – are color calibration and then (necessary) noise reduction.
Camera sensors never provide a clean, evenly colorized area – instead, the background noise fluctuates at a low level, represented by the various shades of dark blacks, blues, greens, etc.
This could be reduced significantly by using dark frames and flat frames (which I do not have for that session), but PixInsight comes with a series of (complex) noise reduction algorithms that require a bit of time to understand and test…
Stretching the Image
The data in the image is generally “very dark” – most of what was photographed is “black void”, after all. Which also means that the image’s histogram shows very little “dynamic range”. “Stretching” the image from its linear to its non-linear state means adjusting the histogram without losing too much information. At the end, a slight transformation of the luminance curve increases the contrast, and some morphological transformation can help reduce the visibility of too many stars (to pronounce the ones that remain) – and there we go:
And the annotated version as well:
The “rest” is an artist’s freedom of image development – leaving the image “as is” or, for example, using Photoshop filters to add star spikes and flares is up to every photographer herself or himself… there is no “right or wrong”, there is only “taste”.
Conclusions: well, for a very first image, this was not bad. Especially not if you consider that the camera used was a regular DSLR with a 105 mm lens that is usually used for macrophotography. Also, the camera was mounted on a regular tripod, which means no compensation for Earth’s rotation at all.
There are some very faint objects in that image – but IC2574 and NGC3359 are the ones that are clearly visible. Obviously, at 105 mm, not in any detail. But one should consider: IC2574 is a dwarf galaxy some 13 million light-years from us, and NGC3359 is even further out: approx. 50 million light-years…
After weeks and weeks of clouds, I do have some clear nights in the forecast – time to do some planning and also explaining how the planning is done.
Weather is an important part of the planning – no need to plan if there is nothing to see. There are two great web sites that can help with a sneak preview of what is to come for your local place. The first one is clearoutside.com. Simply enter your home location and enjoy the forecast.
The second one is windy.com – you need to tweak the settings a bit (see the bar at the lower edge of the screenshot) but this is my “backup” weather forecast, just to get a different opinion.
The next step is observation planning – to know up front what you will be after means less time during the observation window wasted on deciding on targets, etc.
I am using a software called KStars – but that is really only because I also run KStars/Indi to control my telescope and observation sessions. Any planetarium software will do, preferably one that allows eyepiece and camera-Field of View (FoV) simulation.
The first decision is which equipment to use – I have a choice of running a ZWO ASI 533 with a Nikon lens of my choice (35 mm, 50 mm, 105 mm, 300 mm) or the ZWO ASI 533 with a Skywatcher 72ED refractor. The latter is equipped with a ZWO EAF motor focuser and a Skywatcher 0.85x flattener, giving it an effective focal length of 357 mm.
With this data, KStars simulates the FoV, and I just need to set the time for the observation window. I am looking at three “time boxes”: one that starts as soon as it is astronomically dark (which in mid-February here is 19:30 local time), one four hours later (23:30), and one in the “early morning”, maybe 02:00.
My KStars already has a bunch of photo targets marked. At this time of year, Orion will be high in the southern sector, so that is a perfect target. Orion is rich in interesting objects, but I think I will go after two of them with 4 h of data each (I have more than one night, so it will be 2 h per target on each of the first two nights): M42 (the “Orion Nebula”) and IC434 (the “Horsehead Nebula”).
A rough framing in KStars shows: both targets fit the FoV of camera/scope nicely.
Much better planning comes with CCDGuide (available at www.ccdguide.com), which is provided by the Astronomischer Arbeitskreis Salzkammergut in Austria for a small price.
CCDGuide allows for two important things: a FoV calculation with a reference image and calculation of the image center, and a look into the images provided with the software, incl. their image data.
There are roughly 30 images of M42 included with the current release of CCDGuide, and I have added my own to the user database.
Being able to keep track of images you have taken yourself and those that others have taken, to compare the exposure times, focal lengths, filters used, etc. is a tremendous help in planning your own sessions.
As the night continues, Orion will move westward, and sooner or later it will vanish behind some trees in my garden. I am not even sure I can get the 4 hours between 19:30 and 23:30, but I have another target: NGC 3189 and a group of galaxies near the head of the Lion.
Again, CCDGuide will help to determine whether the target isn’t too small for my focal length… the group of galaxies is actually a target for telescopes with twice my focal length or more, but this is what CCDGuide “predicts”:
This is a good target to spend about 30-60 minutes on at first, just to see if I am capturing enough information to make it worthwhile… I can later add more time, if the verdict is positive.
The last image that I am planning for uses a faint galaxy by the name of IC 3393 as the center. This covers parts of the constellation Virgo and the southern edge of Coma Berenices. This area of the sky is “full of galaxies” and what I am after is referred to as Markarian’s Chain.
For a bunch of projects, I am envisioning the use of the BME/BMP280 Environmental Sensor. After having received a bunch from AZ Delivery, I hooked them up to an Arduino Nano – and failed. So just to make sure that the sensor and the code are actually working, I switched over to an Arduino Uno board.
I decided to use the lower 3.3V power supply, wired this one up and connected GND. The unit is addressed via the I2C-Bus, consequently, the other two wires for SCL and SDA are going to their respective counterparts on the Arduino Uno.
Coding the Sensor
I actually did not expect many issues accessing the sensor through code – but the battle was a bit harder than anticipated: the sensor was not identified at first.
Using the two libraries Adafruit Sensor (Version 1.1.4) and Adafruit BME280 Library (Version 2.1.2), the initial code is as follows:
I expected that the only thing required now would be to initialize the BME280:
bool status = bme.begin();
But apparently, that was a bit too naive – there was no response from the sensor at all. After a bit of searching and trial & error, it turns out that the circuit board has three solder jumpers next to the chip and they are responsible for defining the I2C Address of the sensor.
Almost all libraries are assuming 0x77 as the default address but the solder jumpers – at least on this circuit board – are set to 0x76.
Which means the only thing that really needed to change (in my case) was the following correction to the last line of code:
bool status = bme.begin(0x76);
With this, the sensor initialized and started providing values immediately. Just three calls are required to read the data – temperature, pressure, and humidity.
The seaLevelForAltitude compensation is required to convert the air pressure appropriately, taking the sensor’s actual altitude into consideration. At some later point in time, I hope to combine the BME280 with a GPS sensor and provide that value automatically. For the moment, the 658 m of my home location are hard-coded into the sketch.
…and back to the Arduino Nano V3.0
After I was sure the sensor was working, I wanted to see why I had failed to do the same on the Arduino Nano V3.0 – so I exchanged the Arduino Uno for the Arduino Nano again.
Long story short (and for all those that don’t know: the I2C pins on the Nano V3.0 are A4 (SDA) and A5 (SCL)): I have no clue why it did not work before… now it simply compiled and worked…
Ever since I started taking images of the nightly skies, one of my biggest issues was “focus”. Not the focus on the topic itself but literally getting the stars into perfect focus. One of the difficulties was buried in my setup: I am using a Raspberry Pi and I am doing a remote desktop session to control it – unfortunately, setting focus manually also means that there is a significant delay of the image from the ZWO ASI Camera to the Raspberry to the Cell Phone running Teamviewer…
The software I am using on the Raspberry, StellarmateOS, supports an auto-focus feature with a motorized focuser and although I initially planned “to build my own”, I eventually succumbed to simply buying the ZWO EAF (Electronic Automatic Focuser). Here are the images of mounting the device to my Skywatcher Evostar 72ED telescope.
Step 1: Make sure your scope is secure!
You want to make sure that your scope and all your other equipment is secure and cannot be damaged or dropped during the installation. The best possible way: put the scope on a table in front of you and remove any attached equipment. It may rest in its mounting plate, but you want to turn it upside down to see the underside of the manual focuser.
Step 2: Remove the Manual Focus Unit
This might sound a bit awkward at first but trust me, this is nothing more than four screws and a basic mechanical disassembly/assembly.
It is worth paying attention to the use of the screws: the four red ones are the mounting screws that are fixing the unit to the telescope. The blue ones are the actual “fix focus” screw and a blind screw (no use). The three green ones control the pressure used to press the axle against the focus unit, the center one is pressure, the left and right ones are balance.
To remove the unit, unscrew the four red ones and carefully lift the unit from the scope. There are four rubber rings below the red screws, make sure they are staying in place on the telescope!
You are now holding the Manual Focus Unit, and you can see just how simple the mechanism really is: the axle is pressed to the underside of the moving tube, and by rotating it, it “rolls” the tube in and out.
If you turn the unit 90°, you can also see how the pressure of the axle against the moving tube is controlled.
See? A very simple mechanism (but as long as it works…) – keeps Skywatcher’s prices lower than a more complex mechanism here. But back to the installation of the ZWO EAF unit.
Step 3: Remove the single-speed Focus Knob
In order to attach the ZWO EAF to the Manual Focus Unit, you need to remove the single-speed focus handle.
The knobs are fixed to the axle by a screw you can access through the small hole (red circle above), but in order to find the screw underneath, you need to turn the handle until screw and hole line up. Then slightly loosen the screw and pull the knob away from the axle.
Step 4: Installing the Flexible Coupling
The removed knob is replaced by the flexible coupling device that came with the ZWO EAF. Pick the one that fits the diameter of the axle best.
Things could have been so easy, but unfortunately the Skywatcher’s focus axle is either too long or too short, take your pick: in order to fasten the telescope-side screw, the coupling needs to either sit outside the focuser’s mounting or line up with the hole as the knob’s screw did. You can push it back in far enough, no problem, but then the axle also blocks the second screw.
The solution: also loosen the other side of the axle and push the axle out enough to fix the coupling dead center. Then shift the axle back into its original position (the coupling will nicely move inside the housing) and tighten the screws on the other side as well. If you now rotate the focus knob that is left, the flexible coupling should also rotate.
Step 5: Fixing the Bracket to the ZWO EAF Unit
This is a preliminary step, so don’t tighten it too much. This is merely to make sure that bracket and focus unit can be attached properly – we will do some fine adjustment later.
You can also do a “test assembly” with the Manual Focus Unit to see that everything falls into place and that the two outer center holes (the one with the blind screw and the one that originally took the fix focus screw) line up with the bracket’s mounting holes. But do not attach the two units to each other yet!
Step 6: Putting the Manual Focus Unit back onto the Telescope
First, the Manual Focus Unit goes back to the telescope, and you need to put in the four mounting screws. Make sure the rubber rings stay in place!
Step 7: Mount the ZWO EAF Unit
In a final assembly step, mount the ZWO EAF focuser using the two outer center holes and the two screws mounting the bracket to the ZWO EAF focuser unit. Make sure all screws are sitting tight, and also make sure that the flex coupling is not touching the Manual Focus Unit’s metal frame (you might have to push the ZWO EAF a little bit “down” before tightening the two screws on the side).
Some additional Notes
When the ZWO EAF Unit is assembled, the manual focus will no longer work! However, for testing (and in case you do need it) you can simply loosen the two screws on the ZWO EAF’s flex coupling device.
My focus unit gave some “eerie sound” once everything was reassembled – it came from the flex coupling device actually having contact with the metal frame of the Manual Focus Unit. So a bit of playing with the screws (including the Manual Focus Unit’s pressure and balance screws!) might be required.
Finally, I had everything put back together, connected the camera to my Raspberry Pi, and configured my EKOS/INDI profile to include the ZWO EAF focuser. Started, connected, and did a manual focus in and out…