Earlier this year I got an email from a fan of my podcast who wanted to help my listeners and readers better understand 8bit vs. 10bit. The topic was prompted by my interview with Jonathan Yi. I’ve been meaning to post this for months. If you have anything to add, please do so in the comments.
Many thanks to Vasili Pasioudis of Aegean Films for this information.
8bit colour acquisition Vs higher bit depths.
Low and high colour bit depths.
Just about all but the most expensive monitors display 8bit color, so why should you care about acquiring 10bit footage? In short: to get a better image, even though you are viewing it on an 8bit monitor. The higher the bit depth, the more colors you are capturing. 8 bpc (8 bits per channel; 3 x 8 bpc = a 24bit image) essentially means you can capture 256 shades of color/luminosity values (0-255) on each of the RGB channels: 256 x 256 x 256 = 16million+ colors. Is this not enough? In some cases yes, but not on demanding jobs.
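The arithmetic is easy to check for yourself; here is a quick Python sketch (the function name is just for illustration):

```python
# Shades per channel and total RGB colors at a given bit depth.
def colors_at(bits_per_channel):
    shades = 2 ** bits_per_channel   # e.g. 256 at 8 bpc, 1024 at 10 bpc
    return shades, shades ** 3       # per-channel shades, total RGB colors

print(colors_at(8))   # (256, 16777216)  -> ~16.7 million colors
print(colors_at(10))  # (1024, 1073741824)  -> ~1.07 billion colors
```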
In some lighting conditions, where the difference between the brightest and darkest parts of the image (the dynamic range) is not huge and can be easily expressed within an 8 bpc color depth, you may compare 8bit and 10bit acquired footage and say there’s not much of a difference. Most of the time, though, we shoot beyond the range that can easily and accurately be expressed within an 8bit depth.
When we use graduated ND filters, we are often trying to limit the dynamic range and bring it within what the medium we are recording on can handle. If shooting an indoor scene during the day, for example, we might place a graduated ND over the window to avoid blowing out the highlights. Essentially, anyone who reaches for a graduated ND understands that the scene they want to capture exceeds the dynamic range their recording medium is capable of, and the filter is a way to ‘push’ the colors or exposure back within that range. The flip side is that if you cannot use ND filters (say you forgot to bring them to the shoot), you may be forced to pump more light into the interior to balance against the light coming from the window. If you use neither NDs nor extra light in this situation, you will record an unsatisfactory image: either a dark interior or a blown-out exterior through the window. Imagine an alternate universe where a 16bit cinema camera capable of recording 20 f-stops of dynamic range in raw uncompressed format could be bought for the price of a 5D3. Would this be good or bad? Both, probably: good because it further democratises film making, and bad because DoPs would take less care during shooting as the ‘fix-it-in-post’ attitude increases along with the technology.
Higher bit depths give us more colors to ‘pull from’, or more ‘latitude’, in post color grading, where we pull from a larger pool of colors and push them into a smaller range of colors, since the result will ultimately be encoded to an 8bit video file and viewed on 8bit monitors by the masses. (There are higher-than-8 bpc video files and monitors, but most of us won’t get to see those.) The 5D3 and other DSLRs, like most of today’s still cameras, actually see color at 14bit, and this is recorded at 14bit when shooting raw stills, but only 8bit survives when shooting video or jpg stills. If we could bypass the h264 video compression and capture the stream coming straight off the chip, we would have a good argument for why we need to spend 20x more on a digital cinema camera.
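That grading latitude can be sketched in Python. Assume a hypothetical grade that stretches a low-contrast region to full range; the code-value ranges below are made up for illustration:

```python
# 8-bit capture: the low-contrast region spans codes 100-150 (51 values).
# Stretch it to the full 0-255 output range.
graded_from_8 = sorted({round((v - 100) / 50 * 255) for v in range(100, 151)})

# 10-bit capture of the same scene: codes 400-600 (201 values),
# graded the same way, then delivered as 8-bit.
graded_from_10 = sorted({round((v - 400) / 200 * 255) for v in range(400, 601)})

print(len(graded_from_8))   # 51 distinct output levels -> visible banding
print(len(graded_from_10))  # 201 distinct output levels -> far smoother
```

Both deliver 8-bit in the end, but the 10-bit source fills roughly four times as many of the available output levels after the stretch.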
Since the advent of colour television 50+ years ago, we have always had the argument of why we should shoot a higher quality image when our audience will only get to see a lower quality one. In the 80s, when video was not so ‘good’, TV ads were shot on film even though they were scanned to analogue Betacam SP or 1-inch reel-to-reel video tape at about 500 lines of resolution and broadcast at about 330 lines, and those who had VHS recorders could record the signal at about 220 lines. An ad originally shot on film looked much better than one shot on a ‘flavor’ of Betacam video, even though both were recorded at about 500 lines and broadcast at about 330 lines. Why? Because film had a much higher dynamic range, especially compared to the video of the day, and all of that dynamic range was ‘squeezed’ into the broadcast analogue image.
Low Vs higher bit depth comparison: Example 1.
Have you seen ‘banding’ in blue skies? Well, that’s because the number of shades of blue required to ‘describe’ the sky’s color as you currently see it is greater than the palette of blues contained in an 8bit image or footage. If we had more colors to describe the sky, we wouldn’t get as much banding.
Granted, some of the banding is also due to the H264 compression, as compression schemes naturally target flat areas of color more aggressively than more detailed areas of the image.
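Sketching that sky in Python: suppose the gradient only spans about 10% of the brightness range (that fraction is an assumption, just for illustration):

```python
# Distinct code values available inside a narrow slice of the range.
def shades_in_band(bits_per_channel, fraction=0.10):
    return round(2 ** bits_per_channel * fraction)

print(shades_in_band(8))   # ~26 shades of blue: wide steps you see as bands
print(shades_in_band(10))  # ~102 shades: steps four times finer
```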
Low Vs higher bit depth comparison: Example 2.
Another way to look at 8bit Vs 10 or more bits is to compare a ‘regular’ image or footage with an HDR image, which looks smoother because there is a greater pool of colors available to describe it.
Low Vs higher bit depth comparison: Example 3.
Here is another way to compare: a black and white image made from just two colors, yep, two colors, black and white (1bit), Vs an image made from a ‘grey scale’: black, white and a bunch of greys in between. The 2-color BW image looks ‘rough’ because shades of grey have to be created from dots of pure black or white, arranged densely or sparsely depending on the shade of grey you are trying to imitate. This arrangement of dots and colour is effectively what an inkjet printer driver does: although your printer may have 4 to 8 different ink colours, the driver tells the printer how to arrange the dots of colour to achieve all the other colours in the image. An 8-ink printer produces better results because it has more colours (dynamic range) to use to describe all the others. With an electronic image, different brightness levels are achieved by applying different voltages to each of 3 sub-pixels (RGB), which combine to make one pixel. A black and white image produced from, say, 256 shades of grey (8bit) is much ‘smoother’ than one created from just two shades, and the same can be said of an image made from 16 colors rather than 16million. Well… guess what… an 8bit image has 256 shades of grey (0-255) Versus 1024 shades of grey in a 10bit image, which at 10 bpc gives over 1billion colors (1024 Red x 1024 Green x 1024 Blue) = 30bit color; when 2 more bits of alpha channel are added, that makes 32bit total.
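The printer-dot idea can be sketched in Python with a tiny 1bit ordered dither; the 2x2 Bayer matrix and the ASCII ‘dots’ are just for illustration:

```python
# A 1bit image can only fake grey by arranging black (#) and white (.) dots.
BAYER2 = [[0, 2],
          [3, 1]]          # 2x2 ordered-dither thresholds

def dither_row(grey_levels, y=0):
    """grey_levels: 0..255 values; returns '#' (ink) or '.' (paper) per pixel."""
    out = []
    for x, g in enumerate(grey_levels):
        threshold = (BAYER2[y % 2][x % 2] + 0.5) / 4 * 255
        out.append('.' if g > threshold else '#')
    return ''.join(out)

# Mid grey (128) comes out as a checkerboard of dots:
print(dither_row([128] * 8, y=0))  # .#.#.#.#
print(dither_row([128] * 8, y=1))  # #.#.#.#.
```

Pure black and pure white come through as solid `####` and `....`; everything in between is faked by dot spacing, exactly as with the printer.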
Low Vs higher bit depth comparison: Example 4.
Have you done some graphics in AE where you primarily used one background color more than another? Switching your project from 8 bpc to 16 bpc or 32 bpc (also called floating point) will give a smoother result in most cases, but will require a faster processor and more RAM.
Depending on which photographer you talk to: if you talk to one who is as much a technician as an artist, he/she will say shooting raw is better because the images hold up much better outside of normal shooting conditions. Still cameras today actually capture 14bit images Vs 8bit when shooting jpgs, so with raw stills you not only get a huge dynamic range but also about 2 extra stops of latitude and a better ability to reduce noise in post, as the noise-reduction algorithms in Lightroom work much better on a 14bit raw file than on an 8bit jpg image.