Around two years ago, when I decided to purchase a DSLR (after owning a Kodak digital camera for 10 years), I read up on DSLR photography, primarily because a DSLR is a different beast compared to a high-zoom point and shoot (AKA bridge camera). While my Kodak had PASM functionality, I never explored it, except for a few odd shots, probably fewer than 50 in a span of 10 years.

One of the points that stuck in my mind was the use of f/8 as the aperture priority setting. This has become my 'go to' aperture setting and I have ended up using it for most of my shots.

Even today, I am puzzled when photographers use f/2.8 for portraits, as I wonder how sharp the image will be. It goes without saying that I need to put together an experiment: shoot the same scene at various apertures. Only then will I understand the difference made by the aperture, as well as the DoF at each aperture setting. Obviously, I will need to use an object with depth and cannot use a 'flat' object.
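Before running the experiment, the expected difference can be estimated from the standard hyperfocal approximation for depth of field. The sketch below is illustrative only; the lens (50 mm), subject distance (2 m) and APS-C circle of confusion (0.019 mm) are my assumed values, not measurements.

```python
# Approximate depth-of-field limits using the common hyperfocal
# approximation: H = f^2 / (N * c) + f, then
#   near = H*s / (H + (s - f)),  far = H*s / (H - (s - f)).
# All distances are in millimetres.

def dof_limits(f_mm, aperture, subject_mm, coc_mm=0.019):
    """Return (near, far) limits of acceptable focus in mm."""
    h = f_mm ** 2 / (aperture * coc_mm) + f_mm   # hyperfocal distance
    near = h * subject_mm / (h + (subject_mm - f_mm))
    if subject_mm >= h:                           # focused at/past hyperfocal
        return near, float("inf")
    far = h * subject_mm / (h - (subject_mm - f_mm))
    return near, far

# Assumed setup: 50 mm lens, subject at 2 m, APS-C CoC ~0.019 mm
for n in (2.8, 8):
    near, far = dof_limits(50, n, 2000)
    print(f"f/{n}: in focus from {near:.0f} mm to {far:.0f} mm "
          f"(about {far - near:.0f} mm deep)")
```

With these assumptions the numbers roughly bear out the intuition: f/2.8 leaves only around 17 cm in acceptable focus, while f/8 gives close to half a metre, which is why f/2.8 portraits can still look sharp as long as the eyes sit inside that thin slice.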

Hard Disk Scare 3



About two years ago, my Windows XP desktop was performing poorly, so I decided to upgrade the hardware. I added a new 500 GB HDD for the OS and used the old 80 GB and 160 GB HDDs for data.

Again life was fun.


Using Google PhotoScan technique with the DSLR



Google launched a new program named Google PhotoScan, specifically targeted towards ‘scanning’ photographs using a mobile camera.

I was wondering if the same technique could be used with a DSLR. Would it give better results? Recently I tried to 'scan' older photographs using my DSLR and got decent results. I shot in RAW mode, as I found that tweaking the settings helped produce a 'scanned' image that matched the actual photograph in terms of colour.

Will taking multiple pictures of the same photograph and then stitching them back as a panorama give better results as compared to a single photograph?

For example, while taking a single picture, I have encountered glare in a region of the photo due to light reflection. By using multiple pictures, will I be able to eliminate or reduce this glare? I wonder.

I need to do a few experiments.


Google PhotoScan



Google PhotoScan is an interesting Android application.

It claims to take pictures that are free from the typical glare / reflection that we come across when trying to 'scan' a photograph (more so a glossy finish one) using a mobile.

To do this, it asks us to take multiple pictures of the photograph, which it then merges as a panorama. A very interesting technique.

I had tried it initially and got decent results. I need to try the application again now that it has undergone a few updates. I will post some samples in a few days.



The irony of sharing is that many people post pictures on social media for consumption by friends. They get many comments and likes.

Then you post a picture. And it gets a muted response.

Looking at the response, you wonder if the picture conveys the same message as the one you had in mind when you took the picture.