The $3,000 open source multi-format 8/16/35 scanner project

Forum covering all aspects of small gauge cinematography! This is the main discussion forum.

Moderator: Andreas Wideroe

Nicholas Kovats
Posts: 772
Joined: Sat Mar 25, 2006 7:21 pm
Real name: Nicholas Kovats
Location: Toronto, Canada
Contact:

The $3,000 open source multi-format 8/16/35 scanner project

Post by Nicholas Kovats »

This is another amazing open source scanner project... this time by Matthew Epler.

Watch the presentation video here: http://mepler.com/Kinograph. Here is his thesis presentation, where he details his work and relates the amazing anecdote whereby the King of Jordan injects $10K in startup funds following the discovery of historical 35mm film of his predecessor, his father: https://vimeo.com/66766340#. The official Kinograph web site: http://kinograph.cc/
Nicholas Kovats
Shoot film! facebook.com/UltraPan8WidescreenFilm
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

Yeah - that's way cool.

That was my next project - 3D printing the film transport system.

I use OpenCV as well, for doing the sprocket and image registration (amongst other things, such as optical flow computation).

But it's the film transport system they've done which is the inspiring thing for me.

One of the questions on their site is:

> Why are you using a DSLR? Won't the camera mechanics break eventually?

>> It was important in the first design to show that digitization could be thought of in a "worst-case scenario" instead of the "best-case really expensive only institutions flush with cash can afford it" scenario. Also, this project is inspired by a collection of films I found in Jordan where I used to teach film history. I know that my former students have DSLRs and the design is built to work with materials that are readily available. I am currently testing electronic shutter options which should address this problem.

I've solved this problem. The mechanical shutter can be programmed to just stay open for the entire capture session, i.e. not opening and closing on each frame. It opens at the start of a session, all the frames are captured, and then it is closed - just like when you are shooting video on a DSLR.

C
Carl Looper
http://artistfilmworkshop.org/
hirudin
Posts: 49
Joined: Fri Feb 24, 2012 6:58 am
Real name: Jesse Andrewartha
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by hirudin »

Fantastic work... I like how $2000 of that cost appears to be the camera, so in reality it's a $1200 printer and you use your family camera. Though I'd have to get a 3D printer, so that's an added cost.
milesandjules
Posts: 258
Joined: Wed Mar 09, 2005 11:22 am
Location: brisbane australia
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by milesandjules »

That is a cool project.....but yeah, that camera shutter will only last 100k to 200k frames before it dies.....that's a $2000 camera dead after two hours of footage....why doesn't he just capture the HD video feed?

Did anyone see where his sprockets stl files were?...I want to print them. :D
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

I've sent him an email. No reply just yet.

C
Carl Looper
http://artistfilmworkshop.org/
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

Matthew's efforts are commendable. The best part of his story is actually the project on which he is working (cultural memory in Jordan), more so than the tech.

One of the benefits of a flash system - which is why it was introduced in modern scanners - is that it lets you avoid sprockets/claws. This is particularly important when digitising archival material rather than fresh material. Over time film shrinks, so the distance between sprocket holes shrinks, to the point where the film won't run on a sprocket/claw system, or worse, it runs but the sprocket/claw tears the film. Avoiding a sprocket drive also avoids the problem of film with torn sprocket holes.

So Matthew has adopted a flash system, but not the real purpose of one. He'll eventually need to address this if the archive is to remain his ongoing interest.

The reason he is currently using a sprocket drive is to synchronise camera capture with film frame position, but there are other ways he could do that, either mechanically and/or in the digital domain, at open source prices. For example, if the film is going to be digitally registered anyway, then there is no need to mechanically register it; or rather, one can be a little more relaxed on the mechanical side (so cheaper), for example by having the camera capture at roughly twice the frame rate of the feed and doing a stitch in the digital domain. Digital stitching is well researched, and there are OpenCV (open source) routines that can be used to perform such stitching (not just roughly but exactly). Of course it's a lot more work on the digital side (programming/maths/etc.), but in the open source, low cost, DIY domain, work is what you end up doing (on the smell of an oily rag, etc.). The main payoff is kudos, one might say.

Matthew's real kudos is in the identification and rescue of cultural history in a difficult environment. The news story should have spent more time on this, and ending the story on the unregistered material doesn't really do justice to what Matthew has achieved so far.
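The kind of digital registration described above is usually done with phase correlation, which recovers the translation between two overlapping captures. The OpenCV routine for this is cv2.phaseCorrelate; as an illustration of the underlying idea (my own sketch, not Kinograph code), here is a minimal numpy version:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (row, col) shift that maps image a onto image b."""
    cross = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.real(np.fft.ifft2(cross))     # impulse at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts beyond half the image wrap around to negative values.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, a.shape))

# Two "captures" of the same frame content, offset by a known amount.
rng = np.random.default_rng(1)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -7), axis=(0, 1))
print(phase_correlate(frame, shifted))      # -> (3, -7)
```

In a real scanner pipeline the recovered shift would drive the alignment of successive overlapping captures before stitching; OpenCV's versions add sub-pixel refinement and windowing on top of this basic scheme.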

Now a flash system requires some thought on what the results will be.

For example, suppose the flash is 1/500th of a second. This means the film should not move any faster than 1 pixel per 1/500th second, otherwise motion blur will start to build up (the faster the film moves, or the longer the flash duration, the more blur).

For example, for a 5K capture of 16mm (as I'm otherwise doing), 1 camera pixel sees 0.002mm of 16mm film, so on a continuous feed system, if 5K definition were to be maintained, the motion of the 16mm film could be no more than:

0.002mm per 1/500th sec = 1mm per sec = 1 frame per 7.49 secs (for 16mm)
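Carl's arithmetic can be reproduced directly. The figures below assume the standard 16mm camera aperture of roughly 10.26mm x 7.49mm (my assumption for where the 0.002mm per pixel and the 7.49 secs per frame come from):

```python
# Feed-speed limit for a continuous-motion scan with a flashed exposure.
# Assumed: 16mm camera aperture ~10.26mm x 7.49mm, 5K pixels across the frame.
frame_width_mm = 10.26
frame_height_mm = 7.49
sensor_width_px = 5000
flash_duration_s = 1 / 500

pixel_pitch_mm = frame_width_mm / sensor_width_px   # ~0.002mm of film per pixel
max_speed_mm_s = pixel_pitch_mm / flash_duration_s  # at most 1 pixel of travel per flash
secs_per_frame = frame_height_mm / max_speed_mm_s

print(round(max_speed_mm_s, 2))   # ~1.03 mm per sec
print(round(secs_per_frame, 2))   # ~7.3 secs per 16mm frame
```

Shortening the flash scales the permissible feed speed in direct proportion, which is the tradeoff discussed in the following paragraph.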

Using a shorter flash interval would make it possible to feed the film faster. Reciprocally, using a longer duration flash would require feeding the film more slowly, or living with a lower definition scan. Interestingly, motion blur in one dimension can be computationally factored out, recovering quite a good signal, so that's another technique that can be taken into account.

C
Carl Looper
http://artistfilmworkshop.org/
aj
Senior member
Posts: 3556
Joined: Thu Oct 02, 2003 1:15 pm
Real name: Andre
Location: Netherlands
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by aj »

I think it is not correct to assume that the allowable movement is one pixel length in the flash duration. The image will then be unsharp: if the film moved even half a pixel, a given point on the film would be spread across two pixels, causing softening, loss of contrast and more.

And 1/500 isn't really a flash duration. A big tube flash will unload fully in about 1/300. A partial unload is more like 1/20000.
Kind regards,

André
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

aj wrote:I think it is not correct to assume that the allowable movement is one pixel length in the flash duration. The image will then be unsharp: if the film moved even half a pixel, a given point on the film would be spread across two pixels, causing softening, loss of contrast and more.

And 1/500 isn't really a flash duration. A big tube flash will unload fully in about 1/300. A partial unload is more like 1/20000.
Quite right. Half a pixel would be a much better figure to use: the resulting motion blur would then be only half a pixel rather than one pixel. Of course any motion, even the smallest, must add some motion blur, so it's really a question of what you want to live with (the feed speed/definition tradeoff). The best results are, of course, when the film isn't moving at all; a stepper motor could be used for that. Alternatively, the results of motion blur deconvolution can be really quite extraordinary, especially if the blur is only in the vicinity of a pixel or so and you know in advance (or "a priori", to use Kantian language) exactly what the blur length is. Indeed, one could record a feed rate signal during capture to subsequently drive a digital deconvolution pass on the data.
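The one-dimensional deconvolution described above is commonly done with a frequency-domain Wiener filter. Here is a generic, self-contained sketch (numpy only, circular blur model, blur length known in advance, as Carl stipulates); it is an illustration of the technique, not code from any actual scanner:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_power=1e-6):
    """Undo a known 1-D blur via a Wiener filter (circular convolution model)."""
    n = len(blurred)
    H = np.fft.fft(kernel, n)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)   # regularised inverse filter
    return np.real(np.fft.ifft(np.fft.fft(blurred) * G))

# Simulate a scanline smeared over 5 pixels by the film feed, then recover it.
rng = np.random.default_rng(0)
signal = rng.random(256)
kernel = np.ones(5) / 5                               # uniform (box) motion blur
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, 256)))
recovered = wiener_deconvolve(blurred, kernel)
# recovered matches the original signal to within a few percent
```

The noise_power term is what keeps the inverse filter stable at frequencies the box blur nearly cancels; in practice it would be tuned to the sensor's actual noise floor.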

The figure of 1/500 sec for the flash was purely an example. LED panels can be electronically controlled to turn on/off in very short intervals; indeed, LEDs are typically driven with very fast on/off cycles to control their brightness. Certainly, if you can unload enough photons for exposure in 1/20000 sec, then 1/20000 sec would be the duration to use when calculating film feed speed.

C
Carl Looper
http://artistfilmworkshop.org/
Nicholas Kovats
Posts: 772
Joined: Sat Mar 25, 2006 7:21 pm
Real name: Nicholas Kovats
Location: Toronto, Canada
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by Nicholas Kovats »

Fantastic exchange, people! I can't remember where I saw it, but I commend you, Carl, for making first contact with Matthew Epler. Eventually the apostles of small film formats shall debut a low cost DIY hi-rez scanner for their numerous formats.

I myself would like to see a 3D printed 14 perf 70mm sprocket for my forthcoming footage in that specific format; otherwise it is prohibitively expensive. And of course the 2 perf 16mm width of UP8 2.8 and 3.1.

Onward!
Nicholas Kovats
Shoot film! facebook.com/UltraPan8WidescreenFilm
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

Regarding the mirror issue on Canon EOS cameras.

On almost all of the EOS cameras, the mirror and shutter have to be programmed in software to stay open for a session if one wants to avoid them flipping in and out on each and every frame capture.

However, Canon have also released a compact camera (the last of the major brands to do so) called the EOS-M, which does away with the mirror altogether and is supported by the same EOS Software Development Kit as the other EOS cameras. The EOS-M has the same 18 MP sensor (5K width) as the 60D and the same processing chip. It can be used with the same EF lenses as the other EOS cameras, via an adapter ring.

Reviews of the camera mark it down somewhat on battery life and autofocus, but for film scanning purposes those features are irrelevant.

Carl
Carl Looper
http://artistfilmworkshop.org/
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

freedom4kids wrote:Fantastic exchange, people! I can't remember where I saw it but I commend you Carl for making first contact with Matthew Epler. Eventually the apostles of small film format shall debut a low cost DIY hiz rez scanner for their numerous formats.

I myself would like to see a 3D printed 14 perf 70mm sprocket for my forthcoming footage in that specific format. Otherwise it is prohibitive. And of course the 2 perf 16mm width of UP8 2.8 and 3.1.

Onward!
Good idea. I'll create an STL file for you. I found this place where you could get it printed: http://3dprintingservicescanada.com/

C
Carl Looper
http://artistfilmworkshop.org/
RCBasher
Posts: 456
Joined: Thu Jan 11, 2007 9:27 am
Location: UK
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by RCBasher »

The calculations are wrong ;) As you are talking about a Canon SLR, you are talking about a sensor with a Bayer RGBG layout. For an 18Mp APS-C sized sensor, you will only have a vertical true resolution of around 865 pixels for R & B and 1730 for G. De-Bayering will interpolate some apparent resolution back into the image, and this may actually be assisted rather than hindered by a small amount of blur. In reality, there is seldom enough image detail in the film to cause a practical problem when scanning. Not because there can't be in theory, but because the likelihood is that the taking camera and/or the subject matter will not have been clamped down to a bed of granite, and exposures will have been in milliseconds, not microseconds. The image data will be blurred to some degree.

As some here will know, I developed a very high power adjustable RGB system for use with continuous film transport systems. During my tests, I was rather surprised how long the exposure could be before I could detect any difference between a stationary frame and one moving at 24fps. My camera only has 1036 vertical pixels (518 G, 259 B/R), but with bilinear HQ de-Bayering it doesn't do a bad job on an SMPTE R32 S8 frame. I found that at 100us exposure (1/10000) the moving frame was, for all practical purposes, indistinguishable from the static frame. I settled on that as the top limit for exposure time, with my system going as short as 10us (1/100000) when the film density allows it.

As has been well discussed before, by far the bigger problem in digitising is the dynamic range of the sensor, limited by the noise floor. Theoretical film resolution goes many orders higher than any current digital sensor, but the intensity of the higher frequencies gradually drops off, requiring a sensor not just with an enormous pixel count (think Gpixels for 35mm film) but with a significantly higher dynamic range than anything available today in order to pull out these high frequency / low amplitude features. Digital, by comparison, gives a very "abrupt" frequency response drop-off, which in turn gives the appearance of sharpness but not of extra-fine detail.

Anyway, the point of what I've written above? Don't get too hung up on whether you will have one or two pixels of theoretical motion blur (assuming a reasonably sized sensor), because it will not be the true limiting factor in image quality.

Frank
Of all the things I've lost, I miss my mind the most.
JeremyC
Posts: 153
Joined: Fri Jun 01, 2012 2:51 pm
Real name: Jeremy Cavanagh
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by JeremyC »

RCBasher,

Very informative post, as ever. I just wanted to ask: in your opinion, do the high end industry film scanners 'miss' any significant details or aspects when scanning 35mm film or similar?

Perhaps off topic, but I am finding the advertising for '4K' TV sets here in London quite ironic. As source material to show off these sets in stores and push them, they are having to use a 55 year old film, Lawrence of Arabia, made with 100 year old technology.
RCBasher
Posts: 456
Joined: Thu Jan 11, 2007 9:27 am
Location: UK
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by RCBasher »

I only scratched the surface… there are many other factors to consider too, like the slight distorting effect of the microlenses above each pixel!

Hard to answer whether "significant" detail is lost. A lot is subjective and a matter of personal opinion. If I watch a film, I'm primarily interested in the story content. If the image quality were so bad it couldn't resolve the knife used to kill someone, it would have a significant impact on my viewing experience. If, on the other hand, I could see the knife very clearly, but perhaps not the hairs on the back of the assailant's hand, this would not have a significant impact. Some exaggeration obviously, but I'm trying to make the point that we are only deciding what quality level we are happy with. Theoretically, Fuji Velvia 50 will resolve 160 lp/mm with a high contrast target (1000:1), at which point the MTF will be close to zero. To recover all of that would require a true RGB sensor of about 47 Mpixels for 35mm format cine. Now scale that up for a Bayer sensor…
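Frank's 47 Mpixel figure is easy to check, assuming a 35mm full-aperture cine frame of roughly 24.9mm x 18.7mm (my assumption) and the Nyquist rule of two pixels per line pair:

```python
# Pixels needed to capture 160 lp/mm over a 35mm full-aperture cine frame.
# Assumed frame dimensions (~24.9mm x 18.7mm) are the standard full-aperture gate.
lp_per_mm = 160
px_per_mm = 2 * lp_per_mm          # two pixels per line pair (Nyquist)
width_mm, height_mm = 24.9, 18.7

total_px = (width_mm * px_per_mm) * (height_mm * px_per_mm)
print(round(total_px / 1e6, 1))    # ~47.7 Mpixels, matching the ~47M estimate
```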

4K TV? Ha! Even with today's HD, the hell is compressed out of it to make it broadcastable. The images may have 1920x1080 pixels, but how many of them are useful and an accurate representation of the source? So again, it is all about what quality level we get in practice and are willing to settle for, not what is theoretically possible. Just my humble opinion, of course…
Of all the things I've lost, I miss my mind the most.