How Does a Camera or a Lens Work?


If you were to guess how many smartphone pictures will be taken throughout 2018, what would you guess? Perhaps a billion? Or is it closer to a trillion? Or is it even higher, at 50 trillion or 1 quadrillion?


Here are some figures to help you out. There are 7.6 billion humans on Earth, and about 43% of people across the globe own a smartphone.

And let’s say each person takes around one photo a day. That works out to roughly 1.2 trillion photos, so 1 trillion is a pretty good guess. That’s an astounding number of pictures, but how many different parts of your phone have to work together to take just one of them?
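As a quick sanity check, that back-of-the-envelope estimate can be sketched in a few lines of Python, using the rough population, ownership, and photos-per-day figures assumed above:

```python
# Rough estimate of smartphone photos taken in a year,
# using the assumed figures from the text above.
world_population = 7.6e9         # people on Earth
smartphone_share = 0.43          # fraction who own a smartphone
photos_per_day = 1               # assumed photos per owner per day
days_per_year = 365

photos_per_year = world_population * smartphone_share * photos_per_day * days_per_year
print(f"{photos_per_year:.2e}")  # about 1.2e12, i.e. ~1.2 trillion photos
```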

That’s the question we’re going to explore: how do smartphones take pictures? So let’s dive into this complex system.

To start, we are going to divide the system into its components, or sub-systems, and lay them out in a systems diagram. First of all, we need input to tell the smartphone to load the camera app and take a picture. This input is read via a screen that measures changes in capacitance and outputs the X and Y coordinates of one or multiple touches. The input signal feeds into the central processing unit, or CPU, and random access memory, or RAM. The CPU acts as the brain and thinking power of the smartphone, while the RAM is the working memory; it’s kind of like whatever you are thinking about at any given moment.


Software and programs such as the camera app are moved from the smartphone’s storage, which in this case is a solid-state drive, into the random access memory.

It would be wasteful if your smartphone always kept the camera app loaded in its active working memory, or RAM. It’s like always thinking about what you are going to eat at your next meal: tasty, but not efficient. Once the camera software is loaded, the camera is activated: a light sensor measures the brightness of the environment, and a laser rangefinder measures the distance to the objects in front of the camera.
Based on these readings, the CPU and software set the electronic shutter to limit the amount of incoming light, while a miniature motor moves the camera’s lens forward or backward to bring the objects into focus.


The live image from the camera is sent to the display, and, depending on the environment, an LED light is used to illuminate the scene. Finally, when the camera is triggered, a picture is taken and sent to the display for review and to the solid-state drive for storage. That is a lot of rather complex components; however, there are still two more critical pieces of the puzzle: the power supply and the wires.

All of the components use electricity provided by the battery pack and power regulator. Wires carry this power to each component, while separate wires carry electrical signals that allow the components to communicate with one another. This is a printed circuit board, or PCB, and it is where many components such as the CPU, RAM, and solid-state drive are mounted. It may look really high-tech, but it is nothing more than a multilayered labyrinth of wires connecting the components mounted to it.
If you want, you can add other components to the diagram of your system; however, we limited our selection to these. So, now that you have the system layout, let’s make a comparison, or analogy, between this system and the human body.

Can you think of parts of the human body that might provide a similar function as those we have described for the sub-systems of a smartphone? For example, the CPU is like the brain’s problem-solving area while the RAM is short-term memory. These are some of the comparisons that we came up with.
It’s interesting to find so many commonalities between two things that are so very different. For example, nerves and signal wires both transmit high-speed signals to different areas of the body and the smartphone via electrical pulses, yet one is made of copper while the other is made of cells. Also, the human mind has levels of memory similar to those of a CPU, RAM, and solid-state drive. What do you all think? Overall, it takes a complete system of complex, interconnected components to take just a single picture. Each of these components has its own set of sub-components, details, a long history, and many future improvements. This layout is starting to resemble the branches of a tree. Each element will be explored in detail in other episodes; however, for the rest of this episode, we will focus our attention on the camera.

But before we give you an exploded diagram of the camera and get into all of its intricate details, let’s first take a look at the human eye. In the human eye, the cornea is the outer lens that takes in a wide angle of light and focuses it. Next, the amount of light passing into the eye is limited by the iris.

 A second lens, whose shape can be changed by the muscles around it, bends the light to create a focused image. This focused image travels through the eye until it hits the retina. Here, a massive grid of cone cells and rod cells absorb the photons of light and output electrical signals to a nerve fiber that goes to the brain for processing.




Rods can absorb all the colors of visible light and output a black-and-white image, whereas three types of cone cells absorb red, green, or blue light and provide a colored image. Now, this brings us to a key question: if your eyes only have three different types of cone cells, each of which can only absorb red, green, or blue, how do we see the entire spectrum of colors? The answer comes in two parts.

First, each red, green, and blue cone absorbs a range of light, not just a single color, or wavelength, of light. This means that the blue cone picks up a little light in the purple range as well as a little in the aqua range. Second, our eyes don’t detect just a single wavelength of light at a time, but rather a mix of wavelengths, and this mix is interpreted as a unique color. It’s kind of like cooking a bowl of soup: it takes many ingredients, chopped up and mixed together, to make a complex flavor. If you look closely, individual ingredients can be identified, but those ingredients taste very different on their own compared to the whole soup. This is why colors like pink and brown, which are combinations of colors, can be found on a color wheel but not on the spectrum of visible light.

So, if this episode is about how a smartphone takes pictures, why are we talking about the human eye? Well, it’s because both of these systems share a lot of commonalities. A smartphone camera has a set of lenses with a motor that allows the camera to change its focus. These lenses take in a wide angle of light and focus it to create a clear image.
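Returning to color mixing for a moment: the way a combination of red, green, and blue responses yields a single perceived color can be sketched with simple additive averaging. This is purely illustrative, not a physiological model of vision:

```python
# Illustrative additive mixing of (R, G, B) channel values, averaged
# channel by channel. A sketch of how a combination of primaries
# yields a color, like pink, that is not on the visible spectrum.
def mix(*colors):
    """Average a set of (R, G, B) tuples channel by channel."""
    n = len(colors)
    return tuple(sum(c[i] for c in colors) // n for i in range(3))

red = (255, 0, 0)
white = (255, 255, 255)
print(mix(red, white))  # (255, 127, 127): a pink, absent from the rainbow
```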

Next, there is an electronic shutter that controls the amount of light that hits the sensor. At the back of the camera is a massive grid of microscopic light-sensitive squares. The grid and its nearby circuitry are called an image sensor, while each individual light-sensitive square in the grid is called a pixel.
A 16-megapixel camera has about 16 million of these tiny light-sensitive squares, or pixels, in a rectangular grid. Here we have a zoomed-in image of an actual sensor, as well as an even more zoomed-in cross-section of a pixel. A microlens and a color filter are placed on top of each individual pixel, first to focus the light and then to designate each pixel as red, green, or blue, allowing only that specific range of colored light to pass through and trigger the pixel. The highlighted zone is the actual light-sensitive region, called a photodiode. A photodiode functions much like a solar panel: both absorb photons and convert that absorbed energy into electricity.
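The per-pixel color filters described above are typically laid out in a repeating mosaic. A sketch of one common arrangement, a 2x2 RGGB Bayer tile, might look like this (the tiling here is an assumption for illustration, not the layout of any specific sensor):

```python
from collections import Counter

# One common color-filter mosaic: a repeating 2x2 RGGB Bayer tile.
# Each pixel in the grid is designated red, green, or blue by its filter.
def filter_color(row, col):
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

# Tally the filter colors over a small 4x4 patch of the grid.
counts = Counter(filter_color(r, c) for r in range(4) for c in range(4))
print(counts)  # green sites outnumber red and blue in this tiling
```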


The basic mechanic is this: when a photon hits this junction of materials in the photodiode, called a PN junction, an atom’s electron absorbs the photon’s energy and, as a result, jumps up to a higher energy state and leaves the atom. Usually, the electron would just recombine with the atom and the extra energy would be converted back into light. Here, however, due to the electric field across the junction, the ejected electron is pushed away so that it can’t recombine with the atom. When a lot of photons eject electrons, a current of electrons builds up, and this current can be measured. Massive grids of solar panels don’t measure this buildup of electric current but rather use the current to do work. As mentioned before, there are about 16 million of these tiny light-sensitive circuits in a camera’s image sensor.
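The photon-to-current idea can be put into rough numbers. Assuming, for simplicity, that every absorbed photon frees exactly one electron (real photodiodes have a quantum efficiency below 1), the current is just the electron rate times the elementary charge:

```python
# Idealized photodiode: each absorbed photon frees one electron,
# and the stream of freed electrons is the measurable photocurrent.
E_CHARGE = 1.602e-19  # elementary charge in coulombs

def photocurrent(photons_per_second, quantum_efficiency=1.0):
    """Current (amperes) from a given photon arrival rate."""
    electrons_per_second = photons_per_second * quantum_efficiency
    return electrons_per_second * E_CHARGE

print(photocurrent(1e9))  # a billion photons per second -> ~1.6e-10 A
```

Even at a billion photons per second, the current is tiny, which is why the sensor accumulates charge over the exposure rather than sensing it instantaneously.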

For reference, the human eye has around 126 million light-sensitive cells, and on top of that, eagles can have up to 5x the density of light-sensitive cells that humans do! These cameras are indeed amazing, but they still have a way to go. Getting back to the sensor: beyond the grid of photodiodes, a lot of additional circuitry is required to read and record the value of each of the 16 million light-sensitive squares. The most common method for reading out this grid of electric currents is row by row; at any given time, only one row is read out to an analog-to-digital converter. A rolling electronic shutter is timed with the row readout to turn off the sensor’s sensitivity to light. The analog-to-digital converter interprets the buildup of electrons and converts it into a digital value from 0 to 4095, which gets stored in a 12-bit memory location.
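That 12-bit conversion step can be sketched as a simple quantizer: the accumulated charge, expressed here as a fraction of the pixel's full capacity (a normalization assumed for illustration), is mapped onto one of 4096 integer levels:

```python
# Sketch of a 12-bit analog-to-digital conversion: map an analog
# signal (as a fraction of full scale) to an integer from 0 to 4095.
def adc_12bit(signal, full_scale=1.0):
    level = int(signal / full_scale * 4095)
    return max(0, min(4095, level))  # clamp into the 12-bit range

print(adc_12bit(0.5))  # mid-scale charge -> 2047
print(adc_12bit(1.2))  # overexposed pixel clips at 4095
```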

Once all 2,998 rows, totaling 16 million values, are stored, the overall image is sent to the CPU for processing. So, now that we have gone into some depth, let’s take a step back and think about a few of these concepts. It’s pretty strange that both the human eye and a smartphone camera have only three color sensors: red, green, and blue. Why do humans and cameras share the trait of having sensors for only these three colors when there is a massive range of other colors? Also, why this specific section of the entire electromagnetic spectrum? Microwaves, X-rays, and radio waves are all photons, so why aren’t our eyes or our smartphones able to detect those photons, while being great at detecting visible-light photons? Well, the answer all comes down to the sunlight that we see on Earth. The Sun emits this spectrum of light.

The Y-axis is the intensity of the light emitted, while the X-axis is the wavelength, or color. After the sunlight passes through the atmosphere, the spectrum looks like this, because some of the light was absorbed by ozone, oxygen, and other atoms and molecules in the atmosphere. It makes sense that, because these colors of light are the most abundant around us, the earliest organisms first developed photoreceptors, or light-sensitive cells, to pick up on these colors of light.
And after millions of years, humans evolved with photoreceptors that still react to these same colors of light, and following that, we designed our smartphone cameras to capture the same colors of light that our eyes expect to see. It is possible to use other colors for the filters in the grid; however, the resulting image would look a little different. Another fun fact: if you look at your smartphone display through a microscope, you will see a similar red, green, and blue pattern. So now we will leave you with a final question: why are there twice as many green photocells in this pixel array? Perhaps it is related to why plants are green, or why, at a stoplight, the green light looks a lot brighter than the yellow and red lights? Furthermore, what would life be like on an exoplanet whose star emits an entirely different spectrum of light?


So, guys, I’ll see you in my next blog post. If you have any suggestions for me, let me know in the comment section.
Thank you very much for giving me your valuable time.
Goodbye.









