HDR technique applied to Super 8

Forum covering all aspects of small gauge cinematography! This is the main discussion forum.

Moderator: Andreas Wideroe

christoph
Senior member
Posts: 2486
Joined: Fri Jul 25, 2003 2:36 pm
Location: atm Berlin, Germany
Contact:

Post by christoph »

VideoFred wrote:Color depth is the most important factor for HDR.
Blown-out whites are something different, then.

But then the analog signal must have the most information..
The bottleneck is the A/D converter.
dynamic range of the original scene (or film) and bit depth of the digital image are two different things..

in a digital signal, you need a number for black and a number for white (and some numbers in between if you like midtones ;).

with 8bit, black is mapped to 0 and white to 255 (2 to the power of 8, minus 1)
with 16bit, black is 0 and white is 65535 (photoshop actually maps white to 32768 internally, but the principle is the same)

or in normalised terms, black is 0 and white is 1

true HDR images are stored in a so-called "float" color space; those are usually 32bit and can use numbers lower than 0 and higher than 1 to store additional information, which leaves a lot of room for manipulation. but to display them, they eventually have to be converted to a normal digital image, where 0 is the lowest number you can use and 1 is the highest; everything higher or lower will clip back to 1 or 0 respectively (which usually looks ugly if you don't use proper tone-mapping routines).

you could theoretically use an 8bit storage system to hold a float HDR image, but due to the nature of those images you waste a lot of precision in areas you will never use, which means very few numbers are left to store the important part of the image. so once you adjust it to an image with normal contrast, chances are you'll get heavy banding and the image breaks up.

on the other hand, there's nothing wrong with using 8bit images as a source for HDR images if you use a 16bit or 32bit float container to store the result in, since the originals will usually already have a viewing gamma applied, meaning they store the image close to what we want to see.

also note that working in float space can be preferable even with normal images in some situations; however, since it takes quite a bit of digging to understand what's going on, it's generally only used in high-end film or photo work.

so in short: float space is needed to hold values outside the black/white range, and high bit depth is needed to keep enough precision in the conversion, because the pixel color values get changed so massively.
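
to put a rough number on the precision point (just an illustrative back-of-the-envelope example): suppose the merge leaves the final visible image in only a quarter of the container's range. in 8bit that's 256/4 = 64 steps between black and white, which will band badly once you stretch it back to full contrast; in 16bit you still have 65536/4 = 16384 steps, so the same stretch is invisible.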

++ christoph ++
VideoFred
Senior member
Posts: 1940
Joined: Tue May 25, 2004 10:15 am
Location: Flanders - Belgium - Europe
Contact:

Post by VideoFred »

christoph wrote:
true HDR images are stored in a so-called "float" color space,
Very fine!
But euch.... how do we do this?
Is it enough to set the software to 16 or 32 bits?


Fred.
my website:
http://www.super-8.be

about film transfering:
https://www.youtube.com/channel/UC_k0IKckACujwT_fZHN6jlg
christoph
Senior member
Posts: 2486
Joined: Fri Jul 25, 2003 2:36 pm
Location: atm Berlin, Germany
Contact:

Post by christoph »

VideoFred wrote:
christoph wrote: true HDR images are stored in a so-called "float" color space,
Very fine!
But euch.... how do we do this?
Is it enough to set the software to 16 or 32 bits?
no, usually 16bit is mapped to a color space with fixed black and white levels. you can work around this with some conversion: think for example what happens if you squeeze all the 0-1 pixel values into 0-0.5 .. suddenly you have a lot of headroom for whites brighter than 0.5 (originally 1). unfortunately it's not quite that simple because of the gamma applied to normal images; if you're really interested, this is a good starting point: http://generalspecialist.com/2006/05/fl ... no-one.asp (also stu's old articles about eLin cover good points about using linear 16bit space to get some float benefits).
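
to make the squeeze idea a bit more concrete, a rough avisynth sketch (the file name is made up, and this only illustrates the remapping, it's not a recommended workflow):

# squeeze the full 0..255 range into 0..128, so the top half is free for "superwhites"
clip = AviSource("e:\telecine\Bright input.avi").ConvertToRGB32()
clip.Levels(0, 1.0, 255, 0, 128, coring=false)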

most software that has a 32bit option deals with these problems more elegantly.. photoshop has improved HDR editing with CS3, but they hide the math from you (which can be good or bad, depending on how you look at it). high-end film software has been working in float space for a long time.

as a final note, working in float is only really worth it if you use operations that benefit from superwhites (a classic example is pulling down the exposure of a clipped-looking sky: in float the detail above 1.0 comes back, in a clamped space it just turns grey); otherwise the added processor load and workflow complications bring more downsides than benefits.
++ christoph ++
TheoRI
Posts: 2
Joined: Thu Jan 26, 2012 9:08 pm
Real name: Theo
Contact:

Re: HDR technique applied to Super 8

Post by TheoRI »

Dear Fred and other members,

I am using a modified projector with a 5MP industrial CMOS camera to capture our Super 8 film. During development it has become evident that the camera does not have the dynamic range to capture all the detail available on the film. So I am contemplating running two capture passes at different exposures/light intensities and then "combining the frames" using the fusion script:
http://forum.doom9.org/showthread.php?t=152109

I capture a Bright and a Dark sequence and would then like to join them together.
The AviSynth script I use is below.

The problem I have is that, due to small position differences, the frames are not aligned in x-y, as they are captured at different times. This results in some "ghosting" around objects. Is it possible to align the frames using an AviSynth script before "fusing" them? If so, how? Please update the script below.

The output of this script will then be fed into the script that VideoFred developed for Super 8 restoration (thanks a million!)

Kind Regards
Theo

# HDR script using Fusion plugin
# http://forum.doom9.org/showthread.php?t=152109
#
#=============================================================================================


Dark="e:\telecine\Dark input.avi" # source clip, please specify the full path here
Bright="e:\telecine\Bright input.avi" # source clip, please specify the full path here

exposure = 0.5

#----------------------------------------------
loadplugin("fusion.dll")
SetMemoryMax(1024)

a = AviSource(Dark).ConvertToRGB32().Trim(0,0)      # Trim(0,0) = use the whole clip
b = AviSource(Bright).ConvertToRGB32().Trim(0,0)

# horizontal stack of Dark, Fuse, Bright, useful for checking the result side by side:
# StackHorizontal(a.Subtitle("Dark"), fuse(a,b,fusemask(a,b,exp=exposure)).Subtitle("Fuse"), b.Subtitle("Bright"))

fuse(a,b,fusemask(a,b,exp=exposure))
wado1942
Posts: 932
Joined: Fri Dec 15, 2006 5:46 am
Location: Idaho, U.S.A.
Contact:

Re: HDR technique applied to Super 8

Post by wado1942 »

Unfortunately, I don't think there's a very reliable way to do this. If you capture the image wide enough to get some of the perforations in the transfer, you can try using "Deshaker" for VirtualDub, focused solely on the perfs. You'd want to set the smoothing to -1 (infinity). Then theoretically, you should be able to line up the transfers manually and they'll stay together. I say "theoretically" because nothing works as well in the real world as on paper.
How are you doing these transfers? If you're shooting off a real object like a screen or a piece of paper, you can turn on a light nearby to bring up some of the shadow information a little. You'll lose saturation, though.
I may sound stupid, but I hide it well.
http://www.gcmstudio.com
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: HDR technique applied to Super 8

Post by carllooper »

wado1942 wrote:Then theoretically, you should be able to line up the transfers manually and they'll stay together. I say "theoretically" because nothing works as well in the real world as on paper.
This idea should work fine, and will at least be better than not doing so. Any errors in the registration algorithm should be, at worst, subpixel, but in relation to such errors we can note:

1. The physical/geometrical relationship between an image and its sprocket hole remains the same regardless of the algorithm being applied.

2. The same algorithm applied to the same image/sprocket pair (differing only in x,y) should yield not just the same approximate registration but also the same error in that approximation.

In other words the passes should line up exactly.

One possible error that won't be the same in each pass, and would therefore interfere with this cosy picture, is lens distortion. However, such distortion will typically be so small as to be imperceptible. If perceptible, it is also correctable using a fine-tuned lens-undistort algorithm prior to registration. A related (but also typically very tiny) error occurs if the scanning camera is not precisely front-on. This would require its own correction prior to lens distortion correction.

Another subtle error that won't be the same in each pass is any distortion in the illumination. If the illumination is not even across the frame, then the relationship between illumination and image will be different on each pass. The resulting effect (following registration) is a jittering of the light source. This is best corrected by ensuring the illumination is even in the first place. But if that proves impossible, capture the illumination without any film in place and use it as a template for "undistorting" the illumination when an image is in place. You would do this prior to registration.
Carl Looper
http://artistfilmworkshop.org/
TheoRI
Posts: 2
Joined: Thu Jan 26, 2012 9:08 pm
Real name: Theo
Contact:

Re: HDR technique applied to Super 8

Post by TheoRI »

Thanks for the quick reply to my question.

I have used Deshaker in VirtualDub, but only on a single video file.

How would I use Deshaker in this situation, where there are two files with "identical" frames exposed a bit differently and with a small registration difference?
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: HDR technique applied to Super 8

Post by carllooper »

TheoRI wrote:Thanks for the quick reply to my question.

I have used Deshaker in VirtualDub, but only on a single video file.

How would I use Deshaker in this situation, where there are two files with "identical" frames exposed a bit differently and with a small registration difference?
I haven't used "Deshaker", but if it involves identifying an area to use as a reference for stabilisation, you would select the sprocket hole. Do the same thing for both scans. You'll have two outputs. You then need to manually align (by eye) one stabilised scan with respect to the other stabilised scan (using any frame pair for the purpose). This manual alignment on one pair of frames will automatically be the same alignment required for all frames. You can then blend the two streams together.
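
A minimal AviSynth sketch of that last step, assuming both scans have already been stabilised on the sprocket hole and you have measured the residual offset by eye (the file names and the 4/2 pixel offsets are just placeholders):

# place the bright scan 4 pixels left and 2 pixels up relative to the dark scan,
# then do a plain 50/50 blend (in practice you would feed the two aligned streams
# into the fusion script instead of blending them here)
dark   = AviSource("dark_stabilised.avi").ConvertToRGB32()
bright = AviSource("bright_stabilised.avi").ConvertToRGB32()
Overlay(dark, bright, x=-4, y=-2, opacity=0.5, mode="blend")
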
Carl Looper
http://artistfilmworkshop.org/
VideoFred
Senior member
Posts: 1940
Joined: Tue May 25, 2004 10:15 am
Location: Flanders - Belgium - Europe
Contact:

Re: HDR technique applied to Super 8

Post by VideoFred »

TheoRI wrote: The problem I have is that, due to small position differences, the frames are not aligned in x-y, as they are captured at different times. This results in some "ghosting" around objects.
Hi Theo,

There's only one good solution for this: capture the dark and bright frames at the same position. This is possible with a machine-vision camera with trigger input and some modified software. The AVI file will then contain a dark and a bright version of the same frame. They can easily be separated in AviSynth with SelectEvery() or SelectEven() and SelectOdd().
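
A small sketch of that separation step, assuming the interleaved capture is a single AVI with the dark frame first in every pair (the file name is just an example):

both   = AviSource("e:\telecine\interleaved.avi").ConvertToRGB32()
dark   = both.SelectEven()   # frames 0,2,4,... = dark exposures
bright = both.SelectOdd()    # frames 1,3,5,... = bright exposures
# dark and bright are now perfectly registered pairs, ready for the fusion script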

Please have a look at Frank's website:
http://www.cine2digits.co.uk/

With his CFLed system, all of this will be possible, and much more: triple RGB exposure with a B/W camera, for example.

many greetings,
Fred.
my website:
http://www.super-8.be

about film transfering:
https://www.youtube.com/channel/UC_k0IKckACujwT_fZHN6jlg
DonFito
Posts: 69
Joined: Tue Jan 08, 2008 4:08 am
Location: San Jose, CA
Contact:

Re: HDR technique applied to Super 8

Post by DonFito »

Although not HDR, DPX 10-bit log files are regularly used when scanning film, as this allows very wide latitude for color grading.
Cheers,

Rafael Rivera
www.donfito.com