Resizing bitmap (pixel-based) images, whether scaling up (enlarging/blowing up) or down (reducing/shrinking), always affects image quality. The degradation, however, is far more obvious when images are scaled up than when they are scaled down.
Personally, I am not all that sensitive to the degradation that comes with scaling images down. To tell you frankly, I don’t notice any loss of quality at all when scaling down. There are numerous tutorials on the web that “teach” you how to scale down images “without losing image quality”, but honestly, I haven’t bothered reading any of them, since the issue just doesn’t feel that important to me. So I am in no position to tell any of you whether those tutorials are any good.
However, despite my “disbelief” in the degradation caused by scaling down images, I think I can explain what contributes to the “loss” in quality that supposedly occurs, based on my experience and my own personal understanding of how pixels and digital images work.
It’s quite simple, actually. When you scale down an image, the number of pixels that make up that image is reduced, which means some pixels are eliminated or discarded. And with fewer pixels, the image can also show fewer colors. Hence the degradation in image quality.
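To make the idea concrete, here is a toy sketch in Python of the crudest possible downscaling: halving a row of pixels by simply throwing every other one away. (This is just an illustration of “pixels get discarded”; real graphics software uses smarter filters, such as area averaging, but pixels are still lost either way.)

```python
# Toy illustration: downscale a 1-D row of pixel values by a factor of 2
# by discarding every other pixel (naive decimation).
# Real resamplers average neighbouring pixels instead of dropping them
# outright, but the end result is the same: fewer pixels survive.

def downscale_by_two(row):
    """Keep every other pixel; the rest are simply discarded."""
    return row[::2]

row = [10, 20, 30, 40, 50, 60, 70, 80]   # 8 pixel values
small = downscale_by_two(row)
print(small)  # [10, 30, 50, 70] -- half the pixels are gone for good
```

Notice that the values 20, 40, 60, and 80 are unrecoverable afterwards, which is exactly why scaling down loses information even when the result still looks fine.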
As I’ve already mentioned, the degradation in image quality is far more severe when scaling up images. For that, I can give you a very simple explanation by means of an analogy.
Let’s say you have a piece of paper, any size. Think of how you could enlarge the physical dimensions of that piece of paper. Obviously, you can’t, at least not without tearing the paper into pieces, laying them down, and arranging them with gaps in between. This is quite similar to what happens when you scale up an image, except that instead of leaving gaps, your graphics software fills them with additional pixels.
I cannot tell you exactly how graphics software decides on the color of those additional pixels, but I’m inclined to believe it’s probably some sort of “averaging”.
Say, for example, you have two differently colored pixels sitting side by side. The software you use (whatever it may be) probably calculates the average of the two colors and assigns that color to a new pixel placed between the two originals. You would now have three pixels, so, technically, you have “scaled up” two pixels into three. From a distance, or at a certain magnification, the three pixels will look quite similar to the original two. (This is just a personal hunch. By no means am I claiming that this is indeed what happens, so you can probably just forget you ever read this paragraph.)
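As it happens, the “averaging” hunch above isn’t far off: inserting a new pixel whose value is the average of its two neighbours is essentially linear interpolation, one of the simpler resampling methods real software does use. Here is a toy sketch of that step on a 1-D row of pixel values:

```python
# Toy illustration of the "averaging" hunch: between every pair of
# neighbouring pixels, insert a new pixel whose value is their average.
# This is essentially 1-D linear interpolation. Note that the new pixels
# carry no new information -- they are computed entirely from the old ones.

def upscale_with_averages(row):
    """Insert the average of each neighbouring pair between them."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) // 2)  # new pixel: average of its two neighbours
    out.append(row[-1])
    return out

row = [100, 200]                   # two pixels...
print(upscale_with_averages(row))  # [100, 150, 200] -- ...become three
```

The invented middle values only blend what was already there, which is why blowing up an image makes it look soft rather than revealing new detail.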
Anyway, to put it simply, you cannot expect your computer, or whatever software you are using, to produce data where there is none.
Take this, for instance: a small photo of a person, in which the eyes are made up of only a few pixels, so few that you cannot make out the individual strands of the eyelashes. You should not, and cannot, expect your computer or software to make those lashes appear when you scale up the image! It just can’t be done, period. (Well, at least not in this day and age. Perhaps, just perhaps, someday when someone develops something like “nanopixels”, that might become a possibility.)
I hope this was helpful to some of you. ‘Til next time.