Whenever we visit a place on vacation or for a picnic, we grab the sunsets (most commonly) and the sunrises (rarely, and those too with great effort), and we end up with pictures that look very similar to many previous sunset pictures.
Recently, I read an article about the Google Pixel and its use for night-time photography. The author started by saying that he had taken a night-time picture in the USA using a DSLR, and then shot the same scene using a Google Pixel and a Google Nexus.
The author did multiple experiments and has detailed the steps taken.
While the images have come out very well, the most important thing to note from the blog is that the author took multiple images using the phones, which were then combined into a single image. If a similar technique had been used with the DSLR, its images would have been equally good.
My point is simple. To make a fair comparison, ALL parameters in an experiment have to be the same. Changing the parameters for only one of the elements being compared is unfair.
- Experimental Nighttime Photography with Nexus and Pixel, https://research.googleblog.com/2017/04/experimental-nighttime-photography-with.html
Over a recent weekend, I took rather too many photographs. I had around 560 on a 16GB card, around 320 on an 8GB card and around 200 on another 16GB card. So that is over 1,000 pictures.
The total number of files, RAW and JPG combined, was 2,186, and the total size was 31.8 GB.
Now sorting the pictures is a headache.
When looking at prize-winning images or 'perfect' images, we read about the number of shots taken to achieve the desired one. Sometimes this number is in the hundreds, sometimes in the thousands.
Recently, a bird was building a nest in a tree near my apartment. I wanted to grab a picture of the bird in mid-flight. The tree lies between two buildings and is in shade for most of the day; only when the sun is overhead does it get direct sunlight.
I tried many settings and got many blurred shots; the shade adds to the problem. Initially I was shooting RAW + JPG, which gave fewer frames per burst, so I switched to JPG-only expecting more. The camera is supposed to shoot at 5fps, but I doubt that: after the initial burst of 3 shots, it seems to drop to about 1fps.
The maximum shutter speed is 1/4000s in shutter priority. I got a few grabs, but in most cases the frames are empty. I am amazed at the speed with which the bird flies, particularly when leaving the nest. While landing on the nest, the speed is slower (obviously).
Then I had a brilliant idea: take a video of the bird and extract frames from it. Shooting at 30fps, I was hopeful of getting some good freeze-frames, but it did not work that way. In fact, I found that the bird was a blur in the video. But that video was of the bird leaving the nest; I still need to try a video of the bird coming into the nest.
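A rough calculation suggests why the video frames were blurred while a fast still shutter was not. Assuming a small bird flying at about 10 m/s (an illustrative figure, not a measurement) and a video exposure of about 1/60s per frame (a common default for 30fps footage), the bird moves a long way while each frame is being exposed:

```python
# Rough motion-blur estimate: how far does the subject move during one exposure?
# The 10 m/s flight speed and the 1/60s video exposure are assumptions.

def blur_mm(speed_m_s, shutter_s):
    """Distance (in mm) the subject moves while the shutter is open."""
    return speed_m_s * shutter_s * 1000

bird_speed = 10.0  # m/s, assumed

# Typical per-frame exposure for 30fps video
video_blur = blur_mm(bird_speed, 1 / 60)
# The camera's fastest still-photo shutter, 1/4000s
still_blur = blur_mm(bird_speed, 1 / 4000)

print(f"Video frame (1/60s): bird moves ~{video_blur:.0f} mm")
print(f"Still at 1/4000s:    bird moves ~{still_blur:.1f} mm")
```

Under these assumptions the bird travels roughly 17cm during one video frame but only about 2.5mm at 1/4000s, which is why frames pulled from video are smeared while a fast still shutter can freeze the action.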
With this experience, I am able to appreciate the effort needed in taking that ‘ultimate’ shot.
- After 6 Years And 720,000 Attempts, Photographer Finally Takes Perfect Shot Of Kingfisher, http://www.boredpanda.com/perfect-kingfisher-dive-photo-wildlife-photography-alan-mcfadyen/
- A Perfect Photo of a Kingfisher, 720K Pictures in the Making, https://www.wired.com/2016/01/alan-mcfadyen-kingfisher-dive/
The third dilemma I am facing: To resize or not to resize.
Given that an 18-megapixel image is very large, and I rarely view images at 100% resolution or intend to print them larger than 4 inches by 6 inches, I wonder if I should resize the images, thus saving valuable disk space.
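A back-of-the-envelope check of what resizing would save. Assuming a 4in × 6in print at 300 dpi (a common print resolution; the exact figure is an assumption here), the pixel count actually needed is far below 18 megapixels:

```python
# How many pixels does a 4x6-inch print actually need?
# 300 dpi is a commonly quoted print resolution, assumed here.

PRINT_DPI = 300
width_px = 6 * PRINT_DPI    # 1800 pixels on the long side
height_px = 4 * PRINT_DPI   # 1200 pixels on the short side
needed_mp = width_px * height_px / 1e6   # megapixels needed for the print

camera_mp = 18.0            # the sensor's resolution
ratio = camera_mp / needed_mp

print(f"Pixels needed for a 4x6 print: {width_px} x {height_px} = {needed_mp:.2f} MP")
print(f"An 18 MP image has about {ratio:.1f}x more pixels than the print needs")
```

Since JPG file size scales roughly with pixel count, resizing could shrink the archive by a similar factor, though it permanently discards detail that a future crop or a larger print might have used.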
The second dilemma I am facing is: To crop or not to crop.
Many of my photos still have the subject in the centre. While I do recompose after locking focus, many photos remain 'centered'.
So, my second dilemma is, should I crop the image to make it more ‘artistic’ or should I preserve the ‘original’ image?
- Dilemma 1 –
After I purchased a DSLR in mid-2015, I have been facing three dilemmas.
The first dilemma is: To delete or not to delete.
Earlier, I never deleted any photo, irrespective of its quality. Now, sticking to that decision is becoming a problem, as the size of the files has gone up significantly with the DSLR.
Recently I read a review of the Canon G3, which has a lens that goes from 24mm to 600mm, and my 'zoom' grouse immediately came into play. While I am happy with the image quality and flexibility of my Canon 700D, I am yet to overcome the fact that it goes only up to 250mm, while my older Kodak could reach 380mm.
A part of the review mentions that the picture goes a bit dark at the longer end of the lens (higher zoom). This tendency of lenses to shift to a larger f/stop (smaller aperture) at the long end has always puzzled me. Based on what I have read on the Internet, the Panasonic Lumix FZ1000 maintains a constant f/2.8, but is quite expensive. Apparently, maintaining a large aperture at longer focal lengths is hard and very expensive.
Some time later, I thought I had found the answer. Here is my explanation for the higher f/stop at longer zooms; let me know if it is correct.
The reason for a higher f/stop comes down to the plain, simple physics of light. When the aperture is small (large f/stop), light focuses properly on the sensor. When the aperture is large, light from a wider area enters the camera and its focus falls short of the sensor, unless a lot of correction is done inside the lens.
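For comparison, the usual optics explanation is a matter of definition: the f-number N is the focal length divided by the diameter of the entrance pupil (N = f/D), so D = f/N. Holding a small f-number constant at long focal lengths therefore demands physically huge, and thus expensive, glass. A quick sketch of the arithmetic (the f/6.3 long-end figure is an illustrative superzoom value):

```python
# f-number N = focal_length / pupil_diameter, so the required pupil is D = f / N.
# This shows why a constant f/2.8 gets expensive at long focal lengths.

def pupil_diameter_mm(focal_length_mm, f_number):
    """Entrance-pupil diameter (mm) needed for a given focal length and f-number."""
    return focal_length_mm / f_number

# A 600mm lens at f/2.8 needs an entrance pupil over 21cm across
print(pupil_diameter_mm(600, 2.8))   # ~214 mm
# Letting the f-number rise to f/6.3 at the long end shrinks it dramatically
print(pupil_diameter_mm(600, 6.3))   # ~95 mm
# A 250mm telephoto at f/5.6 needs only a modest front element
print(pupil_diameter_mm(250, 5.6))   # ~45 mm
```

A larger entrance pupil means larger lens elements throughout the optical path, which is why superzooms let the maximum aperture shrink (the f-number rise) as you zoom in, and why constant-aperture long lenses cost so much.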