The $3,000 open source multi-format 8/16/35 scanner project

Forum covering all aspects of small gauge cinematography! This is the main discussion forum.

Moderator: Andreas Wideroe

carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

RCBasher wrote:The calculations are wrong ;) As you are talking about a Canon SLR, then you are talking about a sensor with a Bayer RGBG layout. For an 18Mp APS-C sized sensor, you will only have a vertical true resolution of around 865 pixels for R & B and 1730 for G.
Well, I don't know that they are "wrong" per se. It just depends on what your application happens to be and what limits you are looking at. The Bayer filter just means the colour component of the resulting digital signal isn't quite as good as the monochromatic aspect of the resulting signal. That doesn't suggest, in any way whatsoever, that you should just ignore any of that monochromatic information. To put it another way: what if the source material was black and white? Would you still use 865 as your figure? And if not, then why use it in general?
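(For what it's worth, my guess at the arithmetic behind those figures, assuming the 5184 x 3456 layout of Canon's 18Mp sensors: R or B sites occur on every second row, so 3456 / 2 = 1728 samples per column, and at the Nyquist limit 1728 / 2 = 864 ≈ 865 resolvable cycles; G sites occur on every row, so 3456 samples per column and 3456 / 2 = 1728 ≈ 1730 cycles. But those are limits on the colour components taken separately, not on the mono signal the full mosaic carries.)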
RCBasher wrote:DeBayer will interpolate some apparent resolution back into the image and this may actually be assisted rather than hindered by some small amount of blur.
Debayering algorithms do a far better job of interpolating or mixing the colour information than motion blur would do. For one thing, the algorithms leave the higher-def mono signal intact. Why allow some blurring on it? One can always blur it later if that is required. The fable of the fox and the grapes he couldn't reach springs to mind here: he ended up arguing that he didn't like grapes.
RCBasher wrote:In reality, there is seldom enough image detail in the film to cause a practical problem when scanning. Not because there can't be in theory, but because the likelihood is that the taking camera and/or the subject matter will not have been all clamped down to a bed of granite and exposures will have been in the milliseconds of time, not microseconds. The image data will be blurred to some degree.
Very true - but it all depends on what you are working with, or what you are prepared to work with. When working out the theory you can leave the practical limits as variables to be tweaked in practice, and leave decisions about practice to what you can actually do in practice (economics of production, etc). Getting "hung up" in the theory lets you think about how you might make the tradeoffs in practice, or see what tradeoffs have already been made in practice (mounts, camera exposure time, etc). Do I get a brighter LED panel? Do I slow the feed? Can I afford the time it takes to scan if I use this motor rather than that one? And so on.
RCBasher wrote:As some here will know, I developed a very high power adjustable RGB system for use with continuous film transport systems. During my tests, I was rather surprised how long the exposure could be before I could detect any difference between a stationary frame and one moving at 24fps. My camera only has 1036 vertical pixels (518 G, 259 B/R) but with BiLinear HQ de-Bayer it doesn’t do a bad job on an SMPTE R32 S8 frame. I found that at 100us exposure (1/10000) the moving frame was, for all practical purposes, indistinguishable from the static frame. I settled on that as a top limit for exposure time, with my system going as short as 10us (1/100000) when the film density allows it.
Yes, I find actual empirical experiments, ie. in reality (so to speak), end up being far more interesting than the numerical theory. And using an eyeball measurement for setting limits is quite a legitimate way of doing it. Indeed, one could say that using an eyeball is just as good a theoretical approach to this sort of thing as what is normally meant by "theory". But it depends. For example, in my work the data has to pass through an algorithm pipeline which can "see" more information than I can see with an eyeball measurement. It is in relation to what the algorithm sees, rather than what my biological apparatus sees, that the data needs to be theoretically framed a bit more closely, or acquired in a way that is as good as it can practically get. For example, some of the algorithms I use can factor out motion blur. Allowing more blur during transfer, just because the camera has introduced blur, only makes it harder for the algorithm. And it's a strange position to adopt anyway, even without algorithms: blur + blur = more blur. My application is still, in the end, what I or an audience sees, but only after it has made its way through a machine to the screen, rather than what it might be at any particular point during that process.
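To put rough numbers on Frank's 100us figure above (a back-of-envelope only, assuming Super 8's 4.23mm frame pitch and his 518 green pixels of vertical coverage): at 24fps a continuously fed film travels 4.23 x 24 ≈ 101.5 mm/s, so during a 100us exposure it advances about 0.0102mm, ie. ~10.2um. One pixel spans roughly 4230 / 518 ≈ 8.2um of film, so the theoretical blur is on the order of 10.2 / 8.2 ≈ 1.2 pixels - small enough that it's no surprise the eye struggles to pick it.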
RCBasher wrote:As has been well discussed before, by far the bigger problem in digitising is the dynamic range of the sensor, limited by the noise floor. Theoretical film resolution goes many orders higher than any current digital sensor, but the intensity of the higher frequencies gradually drops off, requiring a sensor not just with an enormous pixel count (think Gpixels for 35mm film) but a significantly higher dynamic range than anything we have available today in order to pull out these high frequency / low amplitude features. Digital, by comparison, gives a very "abrupt" frequency response drop-off, which in turn gives the appearance of sharpness but not of extra-fine detail.
Yes, film has a lot of information, so why sacrifice any of that when digitising? Even if film had little information, why sacrifice any of it? Usually the reasons are purely practical (such as what one can afford) rather than theoretical. The grapes are out of reach rather than unattractive. But it's not a simple mapping between quality and cost. One can get better results more cheaply if the right strategy is adopted, and theory allows one to think through different strategies. For eyeball applications it may not be necessary to get "hung up" in the theory, but neither does it hurt. I don't seem to require any Panadol after a session with the calculator. But that could just be me.

One additional point I'll make with respect to eyeball measurement is that you can sometimes get fooled by what is "good enough" when looking at single frames. In a motion picture there is perceivable information in the difference between one frame and the next. Movement, for example, is a perceivable attribute of motion pictures, but you can't see movement in a single frame. Bad theory concludes that movement is therefore an illusion. But it's not, or not just an illusion. It is a visible phenomenon (it can be seen), it has a corresponding effect on the film (it can be captured or encoded), and it turns up in the numerical data derived from such. To put it another way, what might look good enough in a single frame can look somewhat dodgier when animated.

Every time I've increased the definition of a scan from a previous one, it has always looked better (when animated), even when the single frame of the lower definition scan appeared to the eye (or algorithm) to offer next to no difference from the next higher-def scan. To the eye, cinematography of an MTF chart can depict the amplitude as next to zero in a still (ie. noisy grey), yet when animated one can see (with one's eyes) what was not visible in the still - the faint impression of lines. That information is there in the data. From where else would it become visible, ie. become a phenomenological experience, if not from information in the film/data? It's just not necessarily visible in a single frame, strange as that might seem.

Basically, the movement image is not the animation of stills. Stills are "de-animations" of the movement image, or what can be understood as only a partial decryption of the image proper. They are not the best guide to what the motion picture result will look like. When eyeballing limits, one should do so in terms of the movement image proper rather than a still. Time is an important and peculiar frame of reference: the attributes of an image are statistical in both space and time, and the image proper is in both space and time. In the context of motion pictures I haven't personally found where the "good enough" threshold exists. It seems more like my equipment is doing the deciding on that. And the equipment can be changed.
RCBasher wrote:Anyway, the point of what I’ve written above? Don’t get too hung up whether you will have one or two pixels of theoretical motion blur (assuming a reasonable sized sensor) because it will not be the true limiting factor in image quality. Frank
Yes, the limiting factor will be the assumptions you make, whether in theory or in practice.
Carl Looper
http://artistfilmworkshop.org/
PyrodsTechnology
Posts: 64
Joined: Sat Dec 11, 2010 10:45 pm
Real name: Roberto Pirodda
Location: Italy
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by PyrodsTechnology »

Since the film is in continuous movement, why not use a line scan sensor?
Andreas Wideroe
Site Admin
Posts: 2276
Joined: Tue Apr 30, 2002 4:50 pm
Real name: Andreas Wideroe
Location: Kristiansand, Norway
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by Andreas Wideroe »

PyrodsTechnology wrote:Since the film is in continuous movement, why not use a line scan sensor?
Exactly!
Andreas Wideroe
Filmshooting | Com - Administrator

Please help support the Filmshooting forum with donations
RCBasher
Posts: 456
Joined: Thu Jan 11, 2007 9:27 am
Location: UK
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by RCBasher »

Too much vertical weave in the scan. There are some seriously expensive scanners with line sensors out there which exhibit this problem.
Of all the things I've lost, I miss my mind the most.
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

You can think of a 2D sensor as a large array of line sensors, and carry over how you would capture and process information from a line sensor to how you might capture and process data with a 2D sensor.

This leads to a design where you might have the 2D sensor at a slight angle to the film frame (as a mechanical debayering strategy) and take a number of 2D captures per frame rather than just one. What one line misses, another captures. What two lines see of the same information, you merge.

The 2D sensor as a single line sensor in parallel universes.
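A rough sketch of that merge step in OpenCV/C++ terms, assuming each capture's small rotation and offset relative to the film frame have already been estimated (the Pose struct and the estimates themselves are placeholders, not part of any actual design):

// Sketch: merge several slightly rotated / offset captures of one film frame
// onto a common grid by warping each into place and averaging.
#include <opencv2/opencv.hpp>
#include <vector>

struct Pose { double angleDeg; cv::Point2d shift; };  // estimated per capture (placeholder)

cv::Mat mergeCaptures(const std::vector<cv::Mat>& caps, const std::vector<Pose>& poses)
{
    CV_Assert(!caps.empty() && caps.size() == poses.size());
    cv::Mat sum = cv::Mat::zeros(caps[0].size(), CV_32FC(caps[0].channels()));
    for (size_t i = 0; i < caps.size(); ++i)
    {
        // Fold the estimated rotation and translation into one affine warp.
        cv::Point2f centre(caps[i].cols * 0.5f, caps[i].rows * 0.5f);
        cv::Mat M = cv::getRotationMatrix2D(centre, poses[i].angleDeg, 1.0);
        M.at<double>(0, 2) += poses[i].shift.x;
        M.at<double>(1, 2) += poses[i].shift.y;
        cv::Mat warped;
        cv::warpAffine(caps[i], warped, M, sum.size(), cv::INTER_CUBIC);
        cv::accumulate(warped, sum);  // what two lines see of the same thing, merge
    }
    sum /= (double)caps.size();       // average: overlapping samples reinforce
    cv::Mat out;
    sum.convertTo(out, caps[0].type());
    return out;
}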


Capture is a process by which an image is encoded. Encrypted. Hidden. Murdered. Buried in a crypt. A photomechanical and/or digital tomb. Decoding is that process by which the dead is then reconstructed, decoded, de-crypted, ie. producing an image proper. Return from the crypt. As a ghost. An apparition. Visibility. An image is what you experience with your brain. The image (a ghost or poltergeist, since it can alter materials) is the strangest part of the entire thing. Until the dead is used to modulate available light, or the more controlled light of a projector, or the subdivided light from a digital screen, and ultimately touches and affects the retina/brain, the dead remains dead - in an encoded state, one might say hibernating, or in suspended animation, photomechanically and/or digital-electronically. In an encoded state it can occupy any kind of state and any kind of material. The world of the invisible or hidden. The dead. Reversibility is the main thing. You can encode (kill) an image in any way you like, using any materials you like, and so long as you can reverse the process in some sort of way, you can restore the image: create a photographic or cinematographic image. A ghost.

But is it right to speak of encoding or killing an image? Does the image exist or live prior to the moment it is supposedly encoded by a camera or an eyeball? The image is the strangest thing. Where does it live? The easiest part of the process is how one conceives it while it's dead, encoded, in materials. But encoding an image, properly speaking, results in something that is not an image. It results in data. Whether photomechanical or digital. The data is not an image. It is some sort of transformation of the image. And an image that might not yet exist, if the transforms are asymmetrical. Prior to encoding an image, and after decoding such, is where the image proper lives and remains perfectly strange. But it is the image proper (the visible), whether high definition or grainy, that is ultimately the more fascinating thing. Where it is, while the data sleeps in a crypt, can defy comprehension. If the image is that which is visible (and it is), then there is no image during the transcoding process. There is only the promise of such. A strange link back to the image it was and/or the image it will be. During transcoding the image is removed from the universe. Gone somewhere else for a while. On a vacation. Or begins on vacation. Was born on holiday. It is then invoked. Restored. Or created. Brought back from where it went, or asked to come over, into our universe.

Now the grainier an image, the more fascinating it can be. It can exhibit its ghostly aspect more, and its weird relationship to the materials that would act as its substitute or method of invocation. A grainy image is spookier. Watching a film projected on a wall taps right into this peculiarity. It remains peculiar. To reconstruct this peculiarity in a digital channel always seems very difficult. But it stands to reason that if one samples the film at ever increasing definition, beyond any assumed MTF limits, one should get closer to this raw peculiarity of film projected on a wall ... where each pin prick of light, of nanometer proportions and sub-atomic variability, whether one calls it noise or not, attacks the optic nerve and delivers its tiny statistical part of the collective spatio-temporal punch.

C
Carl Looper
http://artistfilmworkshop.org/
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

Conventional philosophy, of the idealist Platonic/Kantian variety, opposes that philosophy born of empiricism (the Stoics and so on). In Kant the image would be the data, the data would be the image; they would be the same thing. In Kant the image (the sensory) is to be regarded as synthetic: a composite of otherwise a priori components, the reality behind which is inaccessible (to both reason and sense). But from the point of view of empiricism, the image is to be identified with reality. It is the image and reality that are the same thing. Plato mocks empiricism in the parable of the cave. "They think the image is real", he says. Yes. The image is real. But if by "image" we mean that which is real, then there is no basis for the implied critique. Images are real. This empirical use of the word "reality" is echoed when we say the 'theory' says this, but in 'reality' we see that. By 'theory' here we mean that conventional philosophy of Plato, that Kant inherits, and that theoretical physics inherits, where it would be mathematics and geometry which, if not real, were closer to the real than is the sensory. And in the same breath, by 'reality' we mean the visible, ie. the image, the image proper. We mean the sensory. This way of speaking we owe to empiricism and the Stoics. Not to Kant.

But this does not mean empiricism is not theoretical. It is only a particular history, and its language, which teaches us to mean, by "in theory", something other than the sensory. Against this history, empiricism can be regarded as a form of theorisation that follows a different trajectory from Kantian reason (or against theory in the conventional sense), where the sensory has its own strange intelligence that operates at a tangent to the world of Kantian reason (mathematics etc), where that which is visible, in a sensory sense, guides a different form of reasoning - one that we might even call "unreasonable" out of respect for Kant, but not, therefore, just something to be dismissed. We can call it an unreasonable tangibility, with its own specific and peculiar unreasonable logic, for which artist/philosophers are sometimes able to find the right sensory concepts, or the right words, or the right paint, or the right chemistry, to speak in a language peculiar to it: of the senses, of images and sounds, of experience.

There is a pivotal brief moment in Bazin's Ontology of the Photographic Image where he crosses the dividing line from Kantian reason to empirical sense:

"No matter how fuzzy, or lacking in documentary value the image may be, it shares, by virtue of the very process of it's becoming, the being of the model of which it is the reproduction; it is the model." (page 14. What is Cinema, Vol 1, Andre Bazin, University of California Press)

It is the last three words that guarantee Bazin's escape from any simple criticism of the realism he is posing. There is a quantum jump here, so to speak, from the image as a reproduction of the model (the image as copy of reality), to the image as the model. As reality. Without an understanding of the logic of empiricism, this would sound like some sort of flourish or poetic license. But it's not. This is literally what Bazin means. The photographic image can be regarded as a model for a particular type of painting (baroque painting) that had been evolving prior to photography. The photographic image is that which painting was painting. The photographic image is the model. It is not the reproduction of the model. It becomes a reality in its own right.

Idealist criticism of the realism Bazin is elaborating assumes some correlation Bazin is suggesting between an image and a reality that was not an image. But that is an assumption on the side of idealism. It is also, admittedly, an assumption to which Bazin himself often succumbs. But what motivates his writing is the way in which the photographic image exceeds this concept (of image on the one hand and reality on the other). This is because Bazin's image is not in the materials. The image is a reality that occupies its own strange universe, and with which the materials of photography are capable of interacting. The photographic image is not in the materials. An image (or its encryption) can move from one material to another.

Modernist idealist criticisms of Bazin's realism say things like "reality is not flat, black and white or ninety minutes long". As if this was all that was required to knock down Bazin.

But if that which is flat, or black and white, or ninety minutes long, is not real, then what is it? Or more to the point, what is reality? If we go back to Kant we see that reality is actually kept as an inaccessible. An unobservable. It is not until materialism surfaces that reality is relocated to materials and their theoretical models. What gets missed, ignored, or not understood (if discovered), is the idea of an equation between reality and the image. It is only authors like Bazin who are able to see their way through to this idea (the image as a reality, rather than of a reality). They are not as hindered by the duopoly between reason and materialism. They can see something different. The visible.

C
Carl Looper
http://artistfilmworkshop.org/
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

Some References for context:
carllooper wrote:criticisms of Bazin's realism say things like "reality is not flat, black and white or ninety minutes long". As if this was all that was required to knock down Bazin.
More importantly for our discussion, semiotics hits a problem with the theory of representation. If we accept that any particular utterance -- be it a sentence, a photograph or a website -- is the product of the language system -- of writing, photography or web-design -- we should also accept that it is subject to certain kinds of structuring constraints: the sentence will be grammatical, the photograph will be flat, the web-page will be rectangular. Let's take the example of film which for some critics of the 1940s and 50s (Bazin 1967, 1971; Kracauer 1960) had been seen as the exemplary medium for the depiction of reality. The ability to capture light mechanically, to record movement and sound and to synchronise them, and in Bazin's case the specifically cinematic techniques of deep-focus and the long take, contributed to film's destiny, the revelation of reality stripped of the banal familiarity with which we normally view it. But from a semiotic perspective, the system of film -- its techniques of camerawork and editing, its tricks of the trade, its codes of story-telling and comprehension -- mean that it can never capture reality. Reality is not flat, or black and white, or ninety minutes long; it does not have a story; crime isn't always punished, nor virtue rewarded.

Sean Cubitt, Simulation and Social Theory, page 12
http://www.academia.edu/217836/Simulati ... ial_Theory

carllooper wrote:But if that which is flat, or black and white, or ninety minutes long, is not real, then what is it? Or more to the point, what is reality?
The question "what is reality" has the same rhetorical purpose as "what is truth" to which Kant is not without an answer:

The old question with which people sought to push logicians into a corner, so that they must either have recourse to pitiful sophisms or confess their ignorance, and consequently the vanity of their whole art, is this: "What is truth?" The definition of the word truth, to wit, "the accordance of the cognition with its object," is presupposed in the question; but we desire to be told, in the answer to it, what is the universal and secure criterion of the truth of every cognition.

To know what questions we may reasonably propose is in itself a strong evidence of sagacity and intelligence. For if a question be in itself absurd and unsusceptible of a rational answer, it is attended with the danger—not to mention the shame that falls upon the person who proposes it—of seducing the unguarded listener into making absurd answers, and we are presented with the ridiculous spectacle of one (as the ancients said) "milking the he-goat, and the other holding a sieve."

If truth consists in the accordance of a cognition with its object, this object must be, ipso facto, distinguished from all others; for a cognition is false if it does not accord with the object to which it relates, although it contains something which may be affirmed of other objects. Now an universal criterion of truth would be that which is valid for all cognitions, without distinction of their objects. But it is evident that since, in the case of such a criterion, we make abstraction of all the content of a cognition (that is, of all relation to its object), and truth relates precisely to this content, it must be utterly absurd to ask for a mark of the truth of this content of cognition; and that, accordingly, a sufficient, and at the same time universal, test of truth cannot possibly be found. As we have already termed the content of a cognition its matter, we shall say: "Of the truth of our cognitions in respect of their matter, no universal test can be demanded, because such a demand is self-contradictory."

Immanuel Kant, Critique of Pure Reason, III. Of the Division of General Logic into Analytic and Dialectic.
http://www2.hn.psu.edu/faculty/jmanis/k ... son6x9.pdf

carllooper wrote:Idealist criticism of the realism Bazin is elaborating assumes some correlation Bazin is suggesting between an image and a reality that was not an image. But that is an assumption on the side of idealism. It is also, admittedly, an assumption to which Bazin himself often succumbs. [...] They [Bazin and others] are not as hindered by the duopoly between reason and materialism. They can see something different. The visible.
If Cubitt puts Bazin in a camp opposed to Kantian reason (Idealism), there is the explicit assumption that Bazin must therefore be speaking from the point of view of materialism - ie. that this would be assumed before anything Bazin might say, or not say, in support of it:

Idealism (I will use the capital letter to distinguish the philosophical usage from the everyday usage as the opposite of selfishness) is that school of philosophy that believes that the material world, for one reason or another, cannot prove or explain its own existence. For the Idealist, the world is a result of something else that is not the world: either an act of Divine Creation, the product of a universal Mind, the unfolding of an immaterial Reason, or the visible form of an invisible Idea. The opposite mode of philosophy, Materialism, refuses to look beyond the material world for explanations and causes. Instead it follows the scientific model, and restricts its enquiries to what can be physically accounted for, without recourse to the capital letters that tend to decorate Idealism's roster of Mind, Idea, Reason and God. As we shall see, Materialism has its own problems, not least in defining what it means by physical or material reality (for example, is something like the law of gravity physical and material?).

Sean Cubitt, Simulation and Social Theory, page 3
http://www.academia.edu/217836/Simulati ... ial_Theory


This last reference is perhaps the most important. If we consider Cubitt's duopoly between Idealism and Materialism, Bazin's realism fits neither. Bazin's image is not the "visible form of some invisible idea". Nor is it the causeless world of materialism. The image (the visible) is the cause. And effect. Bazin's image is, for want of another way of saying it, the visible form of a visible idea. Bazin's realism requires that which Deleuze unearths and elaborates: the image as a reality in its own right, a concept in addition to the triptych of God, world, mind. The difficulty of this alternative is not to be underestimated. Its philosophical traces have been few and far between. There is not a ready-made framework into which one can readily relax. Deleuze proposes "Transcendental Empiricism" (in deference to Kant), which might fulfill this role. It would concern the visible, but without any requirement for whom, or what, or to which it was visible.
Carl Looper
http://artistfilmworkshop.org/
aj
Senior member
Posts: 3556
Joined: Thu Oct 02, 2003 1:15 pm
Real name: Andre
Location: Netherlands
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by aj »

Anybody familiar with OpenCV? From a brief scan I gather that it is suitable for doing analysis of moving images, rather than judging/processing a collection of separate stills by some sort of recipe.
Kind regards,

André
Andreas Wideroe
Site Admin
Posts: 2276
Joined: Tue Apr 30, 2002 4:50 pm
Real name: Andreas Wideroe
Location: Kristiansand, Norway
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by Andreas Wideroe »

RCBasher wrote:Too much vertical weave in the scan. There are some seriously expensive scanners with line sensors out there which exhibit this problem.
I see your point, but I don't agree. Today there are encoders available that can resolve thousands of lines. These are synced with the camera. There are also off-the-shelf capstan motors and other parts that can be used for film transport which are better than what was in the older telecine machines - at a fraction of the cost.

There are many advantages with linescan cameras, but there are also occasions where an areascan camera would work better.

/Andreas
Andreas Wideroe
Filmshooting | Com - Administrator

Please help support the Filmshooting forum with donations
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

Matthew Epler has replied to my email.

He has a contact list if you want to write to him. He'll have design plans and software code up sometime in August, and will be posting updates to those on the contact list.

C
Carl Looper
http://artistfilmworkshop.org/
RCBasher
Posts: 456
Joined: Thu Jan 11, 2007 9:27 am
Location: UK
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by RCBasher »

awand wrote:
RCBasher wrote:Too much vertical weave in the scan. There are some seriously expensive scanners with line sensors out there which exhibit this problem.
I see your point, but I don't agree. Today there are encoders available that can resolve thousands of lines. These are synced with the camera. There are also off-the-shelf capstan motors and other parts that can be used for film transport which are better than what was in the older telecine machines - at a fraction of the cost.

There are many advantages with linescan cameras, but there are also occasions where an areascan camera would work better.

/Andreas
I look forward to seeing your design for a film transport synchronised to an encoder which is stable enough to generate around 2000 clock signals per frame ;) Even if you could do it, you then have to take into account differential movement (vibration) between the camera, lens elements and the film transport. This would not be an amateur project, I fear. Also, there is no real advantage over a full-frame sensor, other than perhaps using one of the 9-line RGB sensors which would allow for on-the-fly HDR.
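By way of scale: 2000 lines over a Super 8 frame pitch of roughly 4.23mm means one line trigger per ~2um of film travel, so the transport, encoder and optics would all need micron-order steadiness for the duration of the scan.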

Frank
Of all the things I've lost, I miss my mind the most.
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

aj wrote:Anybody familiar with OpenCV? From a brief scan I gather that it is suitable for doing analysis of moving images, rather than judging/processing a collection of separate stills by some sort of recipe.
OpenCV is a library of useful computer vision ("CV") algorithms written in C++. To use the library you will need to know how to write and compile programs in C++. The algorithms in the library are basically recipes for performing specific computer vision tasks. Generally you will combine the tasks in different ways to obtain a particular result.

For example, one algorithm I use is one which can be used to calibrate a lens. You hold up a chessboard in front of a camera and shoot the chessboard from different angles. You then run the captured images through the algorithm, which is able to recognise the squares of the chessboard (in the images) and use that information to compute the particular distortion of the lens that was used. Once the distortion of a lens is known in this way, you can use the known nature of the lens to undistort anything else subsequently shot through the same lens.

http://docs.opencv.org/doc/tutorials/ca ... nt-calib3d

By "distortion" here is just meant in what way a real camera lens differs from an ideal pinhole camera. By transforming a real camera shot to fit this assumption (called "undistortion") you can then make use of other algorithms that assume this ideal camera model. The algorithms that assume this ideal camera model are easier to write. In much the same way that planetary motion is a lot easier to calculate if you assume planets rotating around the sun rather than around the Earth. Assuming planets rotating around the Earth is not really wrong as such - it just makes calculating planetary motion so much more difficult. Medieval astronomers, despite using an earth centric model, were still able to predict quite a lot about the motion of heavenly bodies. Anyway ...

In a film transfer system, where you might have a continuous feed, you could design the system to take a couple of shots per film frame as it passes the lens, ie. captures of the film in different positions with respect to the lens. If the lens of the transfer system is calibrated, it means you can undistort the captured images to ensure the only difference between them is purely translational (in x and y) - ie. no lens curvature. By allowing for movement in two dimensions one solves not only for the direction of the feed, but also for any sideways motion produced by a less than ideal transport setup. It then becomes possible to stitch the captures together, in 2D space, to sub-pixel accuracy, using fairly straightforward algorithms such as phase correlation, which assumes only a 2D translational difference between captures:

http://docs.opencv.org/modules/imgproc/ ... ecorrelate
http://en.wikipedia.org/wiki/Phase_correlation
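As a sketch of how little code the alignment measurement needs (assuming two overlapping, already-undistorted, single-channel captures of the same strip region):

// Sketch: recover the sub-pixel (x, y) shift of capture b relative to capture a.
#include <opencv2/opencv.hpp>

cv::Point2d findShift(const cv::Mat& a, const cv::Mat& b)  // grayscale, same size
{
    cv::Mat fa, fb, window;
    a.convertTo(fa, CV_32F);  // phaseCorrelate wants single-channel float
    b.convertTo(fb, CV_32F);
    cv::createHanningWindow(window, fa.size(), CV_32F);  // damp edge effects
    return cv::phaseCorrelate(fa, fb, window);
}

Shift one capture by the returned amount (eg. with cv::warpAffine) and the pair line up to sub-pixel accuracy, ready to stitch.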

The result effectively becomes a one-to-one reproduction of the entire film strip. From there one can run this virtual film strip through a virtual projector in order to spit out a digital movie file. The virtual projector would use algorithms capable of recognising the images of the sprocket holes, in the virtual film strip, to align each frame for subsequent output to a digital movie file. Or it could use algorithms that recognise frame lines instead. Or use the image content itself.
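And the sprocket-hole recognition in such a virtual projector can be as plain as normalised template matching - a sketch, where the reference image of a single hole is a hypothetical input:

// Sketch: locate the best sprocket-hole match in a strip image.
#include <opencv2/opencv.hpp>

cv::Point findSprocket(const cv::Mat& strip, const cv::Mat& hole)  // both 8-bit gray
{
    cv::Mat score;
    cv::matchTemplate(strip, hole, score, cv::TM_CCOEFF_NORMED);
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(score, &minVal, &maxVal, &minLoc, &maxLoc);
    return maxLoc;  // top-left of best match; its offset registers the frame
}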

Either way, you can build up a particular algorithmic pipeline that allows the mechanical side of the system to be much more relaxed than it otherwise might be, and correspondingly cheaper to put together. You can use a cheaper lens, for example, because your lens calibration algorithm will allow you to digitally compensate for any distortions a cheaper lens might introduce. Indeed, even the most well engineered lens can have some measure of distortion (with respect to an ideal camera model) and arguably requires some calibration - and if you'd have to calibrate a lens anyway, then why bother with a more expensive one? You could just use a cheaper one instead.

C
Carl Looper
http://artistfilmworkshop.org/
wado1942
Posts: 932
Joined: Fri Dec 15, 2006 5:46 am
Location: Idaho, U.S.A.
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by wado1942 »

This is all fun and interesting, but the sound is not good at all. The lack of optical resolution, the click every 1/24th of a second from slight misalignment of each frame, etc. might leave this in a "find out what this film is" scenario, but not good enough for proper restoration. Speaking of which, why not turn the camera sideways and get some more resolution? Would the microlens angular refraction issue be too obvious if the film frame took up more of the digital frame?
I may sound stupid, but I hide it well.
http://www.gcmstudio.com
carllooper
Senior member
Posts: 1206
Joined: Wed Nov 03, 2010 1:00 am
Real name: Carl Looper
Contact:

Re: The $3,000 open source multi-format 8/16/35 scanner proj

Post by carllooper »

wado1942 wrote:This is all fun and interesting, but the sound is not good at all. The lack of optical resolution, the click every 1/24th of a second from slight misalignment of each frame, etc. might leave this in a "find out what this film is" scenario, but not good enough for proper restoration. Speaking of which, why not turn the camera sideways and get some more resolution? Would the microlens angular refraction issue be too obvious if the film frame took up more of the digital frame?
The transport system is adequate. It just needs more work on the camera used to do the capture and the algorithms used to re-register the film, and a matrix of RGB LEDs rather than white LEDs. Turning the camera sideways is a good idea, as is capturing more than one digital frame per film frame. The more digital frames captured per film frame, the greater the definition of the integrated result.

Multi-capture won't improve pixel resolution, but it will improve apparent resolution. Multi-capture plus integration improves three things: artifacts introduced by the Bayer filter of the digital capture device; the dynamic range (transferring more of what the film has); and thirdly (and most interestingly) the signal to noise ratio. Integrating two or more digital images of the same film, taken at different random locations and then realigned together, considerably improves the reconstruction of the film signal - there is less noise in the result. Not less than that of the original film, but less than that which a single capture acquires.

For mathematical/statistical reasons, a single capture amplifies the grain of the transfer. And the lower the resolution of the digital capture device, the more amplified this transferred grain becomes. It has to do with interference between the AM (amplitude modulation) signal assumption and regularly spaced pixels of digital capture devices, versus the SFM (spatial frequency modulated) signal and irregularly located silver particles (or dye clouds) of the film. There is a mathematical/statistical mismatch between the two systems, which results in digital transfers producing (strangely) a grainier version of the original film. This has nothing to do with any noise in the capture device. It is the difference in the sampling theories underpinning the two systems. A similar kind of thing occurs in computer generated graphics, where you have to apply an "anti-alias" function to remove aliasing ("staircasing"). In the film to digital transfer the aliasing is a little different.

However, the remedy is simple: higher resolution (eg. 4K) capture plus multi-capture/integration brings the grain levels of the transfer back towards that of the original film (ie. towards what you would otherwise appreciate when projecting original film on a wall). It doesn't take too many frames to get the grain levels very close to that of the original film. Once you have a good signal in the 4K domain you can then downscale the digital signal to 2K or 1.5K for delivery, and it will exhibit far less noise, and far more apparent resolution, than single-frame captures done originally at those lower definitions. And this is before one has done any clever algorithmic enhancement of the signal. Indeed it can avoid the need to do anything more at all.
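The statistical core of it is plain averaging: if the error contributed by each capture is independent from capture to capture (the film sitting at a different random position on the pixel grid each time), then the mean of N realigned captures reduces that error's standard deviation by a factor of sqrt(N) - four captures halve it, sixteen quarter it - while whatever is common to every capture, including the film's own grain structure, passes through intact.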

C
Carl Looper
http://artistfilmworkshop.org/