RCBasher wrote: The calculations are wrong ;) As you are talking about a Canon SLR, then you are talking about a sensor with a Bayer RGBG layout. For an 18Mp APS-C sized sensor, you will only have a vertical true resolution of around 865 pixels for R & B and 1730 for G.

Well, I don't know if they are "wrong" per se. It just depends on what your application happens to be and what limits you are looking at. The Bayer filter just means the colour component of the resulting digital signal isn't quite as good as the monochromatic aspect of that signal. That doesn't suggest, in any way whatsoever, that you should simply ignore the monochromatic information. To put it another way: what if the source material was black and white? Would you still use 865 as your figure? And if not, then why use it in general?
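Just to put some rough numbers on the Bayer point, here is my own back-of-envelope sketch, not anyone's published figures. It assumes an RGGB tile with the R sample at the top-left of each 2x2 block, and a 5184x3456 photosite grid (a typical 18Mp APS-C layout; neither figure is stated above). It only counts raw photosites per channel; it says nothing about what a good debayer recovers, which is really the point at issue.

[code]
# Rough sanity check. Assumptions (mine): RGGB tile, 5184x3456 photosites.
width, height = 5184, 3456

# Each 2x2 RGGB tile holds 1 R, 2 G and 1 B photosite.
red_sites   = (width // 2) * (height // 2)        # ~4.5 Mp
blue_sites  = (width // 2) * (height // 2)        # ~4.5 Mp
green_sites = 2 * (width // 2) * (height // 2)    # ~9.0 Mp

# Vertical sampling per channel: R and B sit on every other row,
# while every row contains some G samples.
print("R/B rows sampled:", height // 2)           # 1728
print("G rows sampled:  ", height)                # 3456
print("R sites:", red_sites, "G sites:", green_sites, "B sites:", blue_sites)
[/code]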
RCBasher wrote: DeBayer will interpolate some apparent resolution back into the image and this may actually be assisted rather than hindered by some small amount of blur.

Debayering algorithms do a far better job of interpolating or mixing the colour information than motion blur would do. For one thing, the algorithms leave the higher-definition mono signal intact. Why allow any blurring of it? One can always blur it later if that is required. The proverb of the fox and the grapes he couldn't reach springs to mind here. He ended up arguing that he didn't like grapes.
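For anyone who wants to see what "interpolating the colour information" amounts to in the simplest case, here is a minimal bilinear green-channel sketch, assuming an RGGB mosaic with R at the top-left of each tile. It is purely illustrative toy code of mine, not any camera's or library's algorithm, and real debayerers do considerably more than this.

[code]
import numpy as np

def bilinear_green(raw):
    """Bilinear interpolation of the green channel from an RGGB mosaic.

    raw: 2D array of raw sensor values, RGGB tiling assumed.
    A minimal sketch only -- real debayer code also uses colour
    correlation, edge direction, white balance, etc.
    """
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    g_mask = (yy % 2) != (xx % 2)          # green sites: row+col is odd
    green = np.where(g_mask, raw, 0.0).astype(float)

    # At each non-green site, average whichever of the four
    # cross-neighbours actually carry a green sample.
    padded = np.pad(green, 1, mode="edge")
    pmask = np.pad(g_mask, 1, mode="edge")
    cross_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:])
    cross_cnt = (pmask[:-2, 1:-1].astype(int) + pmask[2:, 1:-1].astype(int) +
                 pmask[1:-1, :-2].astype(int) + pmask[1:-1, 2:].astype(int))
    return np.where(g_mask, green, cross_sum / np.maximum(cross_cnt, 1))

# usage: full_green = bilinear_green(raw_mosaic)
[/code]

The point is simply that the measured green samples, the densest and most luma-like channel, pass through untouched; only the gaps are estimated from their neighbours. Blurring the capture to "help" the interpolation gains nothing.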
RCBasher wrote: In reality, there is seldom enough image detail in the film to cause a practical problem when scanning. Not because there can't be in theory, but because the likelihood is that the taking camera and/or the subject matter will not have been all clamped down to a bed of granite and exposures will have been in the milliseconds of time, not microseconds. The image data will be blurred to some degree.

Very true, but it all depends on what you are working with, or what you are prepared to work with. When working out the theory you can leave the practical limits as variables to be tweaked in practice. You can leave decisions on practice to what you can actually make in practice (economics of production, etc.). Getting "hung up" in the theory allows you to think about how you might make the tradeoffs in practice, or what tradeoffs have already been made in practice (mounts, camera exposure time, etc.). Do I get a brighter LED panel? Do I slow the feed? Can I afford the time it takes to scan if I use this motor rather than that one? And so on.
RCBasher wrote: As some here will know, I developed a very high power adjustable RGB system for use with continuous film transport systems. During my tests, I was rather surprised how long the exposure could be before I could detect any difference between a stationary frame and one moving at 24fps. My camera only has 1036 vertical pixels (518 G, 259 B/R) but with BiLinear HQ de-Bayer it doesn't do a bad job on an SMPTE R32 S8 frame. I found that at 100us exposure (1/10000) the moving frame was, for all practical purposes, indistinguishable from the static frame. I settled on that as a top limit for exposure time, with my system going as short as 10us (1/100000) when the film density allows it.

Yes, I find actual empirical experiments, i.e. in reality (so to speak), end up being far more interesting than the numerical theory. And using an eyeball measurement for setting limits is quite a legitimate way of doing it. Indeed, one could say that using an eyeball is just as much a good theoretical approach to this sort of thing as what is normally meant by "theory". But it depends. For example, in my work the data has to pass through an algorithm pipeline which can "see" more information than I can see with an eyeball measurement. It is in relation to what the algorithm sees, rather than what my biological apparatus sees, that the data needs to be theoretically framed a bit more closely, or acquired in a way that is as good as it can practically get. For example, some of the algorithms I use can factor out motion blur. Allowing more blur during transfer just because the camera has already introduced blur only makes it harder for the algorithm. And it's a strange position to adopt anyway, even without algorithms. Blur + blur = more blur. My application is still, in the end, what I or an audience sees, but only after it has made its way through a machine to the screen, rather than what it might be at any particular point during that process.
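To put the 100us / 10us figures above into pixel terms, here is the back-of-envelope blur calculation for a continuous transport: the film advances one frame height every 1/24 s, so the blur in sensor pixels is roughly exposure x frame rate x vertical pixels per frame. The assumption that the 1036 vertical pixels span exactly one frame height is mine.

[code]
def transport_blur_px(exposure_s, fps, frame_height_px):
    """Approximate motion blur, in sensor pixels, for a continuously
    moving film transport: the film advances one frame height every
    1/fps seconds, so during the exposure it moves
    exposure_s * fps frame-heights."""
    return exposure_s * fps * frame_height_px

# Figures quoted above: 1036 vertical pixels, 24 fps transport.
for exposure in (100e-6, 10e-6):        # 1/10000 s and 1/100000 s
    print(f"{exposure * 1e6:>5.0f} us -> "
          f"{transport_blur_px(exposure, 24, 1036):.2f} px of blur")
# roughly 2.5 px at 100 us and 0.25 px at 10 us
[/code]

Those figures are roughly in line with the "one or two pixels of theoretical motion blur" mentioned at the end of the post.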
RCBasher wrote: As has been well discussed before, by far the bigger problem in digitising is the dynamic range of the sensor, limited by the noise floor. Theoretical film resolution goes many orders higher than any current digital sensor, but the intensity of the higher frequencies gradually drops off, requiring a sensor not just with an enormous pixel count (think Gpixels for 35mm film) but a significantly higher dynamic range than anything we have available today in order to pull out these high frequency / low amplitude features. Digital, by comparison, gives a very "abrupt" frequency response drop-off, which in turn gives the appearance of sharpness but not of extra-fine detail.

Yes, film has a lot of information, so why sacrifice any of that when digitising? Even if film had little information, why sacrifice any of it? Usually the reasons are purely practical (such as what one can afford) rather than theoretical. The grapes are out of reach rather than unattractive. But it's not a simple mapping between quality and cost. One can get better results more cheaply if the right strategy is adopted, and theory allows one to think through different strategies. For eyeball applications it may not be necessary to get "hung up" in the theory, but neither does it hurt. I don't seem to require any Panadol after a session with the calculator. But that could just be me.

One additional point I'll make with respect to eyeball measurement is that you can sometimes get fooled by what is "good enough" when looking at single frames. In a motion picture there is perceivable information in the difference between one frame and the next. For example, movement is a perceivable attribute of motion pictures, yet you can't see movement in a single frame. Bad theory concludes that movement is therefore an illusion. But it's not, or not just an illusion. It is a visible phenomenon (it can be seen), it has a corresponding effect on the film (it can be captured or encoded), and it turns up in the numerical data derived from the film. To put it another way, what looks good enough in a single frame can look somewhat dodgier when animated. Every time I've increased the definition of a scan over a previous one, it has looked better when animated, even when a single frame of the lower-definition scan appeared to the eye (or the algorithm) to offer next to no difference from the next higher-definition scan.

To the eye, cinematography of an MTF chart can depict the amplitude as next to zero in a still (i.e. noisy grey), yet when animated one can see, with one's own eyes, what was not visible in the still: the faint impression of lines. That information is there in the data. From where else would it become visible, i.e. become a phenomenological experience, if not from information in the film/data? It's just not necessarily visible in a single frame, strange as that might seem. Basically, the movement image is not the animation of stills. Stills are "de-animations" of the movement image, or what can be understood as only a partial decryption of the image proper. They are not the best guide to what the motion picture result will look like. When eyeballing limits one should do so in terms of the movement image proper rather than a still. Time is an important and peculiar frame of reference: the attributes of an image are statistical in both space and time, and the image proper is in both space and time. In the context of motion pictures I haven't personally found where the "good enough" threshold exists. It seems more like my equipment is doing the deciding on that. The equipment can be changed.
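A quick way to convince yourself that detail invisible in any one frame can still be in the data is the standard noise-averaging argument: independent noise shrinks as you combine frames while a fixed pattern does not. The eye watching 24 frames a second isn't literally computing an average, so this is only an analogy, but the little simulation below (all numbers invented purely for illustration) shows a bar pattern at a tenth of the per-frame noise floor becoming measurable across 48 frames.

[code]
import numpy as np

rng = np.random.default_rng(0)

# A faint line pattern well below the per-frame noise floor
# (the 0.2 amplitude and 2.0 noise sigma are illustrative only).
x = np.arange(512)
pattern = 0.2 * np.sign(np.sin(2 * np.pi * x / 16))   # faint bars
frames = [pattern + rng.normal(0.0, 2.0, x.size) for _ in range(48)]

def snr(signal_est):
    """Crude SNR estimate: pattern amplitude over residual noise sigma."""
    residual = signal_est - pattern
    return 0.2 / residual.std()

print("single frame SNR:   ", round(snr(frames[0]), 3))            # ~0.1
print("48-frame average SNR:", round(snr(np.mean(frames, 0)), 3))  # ~0.7
[/code]

Averaging N independent frames improves the signal-to-noise ratio by roughly the square root of N, which is one way to see how faint MTF lines lost in the grain of any one still can emerge over a couple of seconds of footage.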
RCBasher wrote: Anyway, the point of what I've written above? Don't get too hung up on whether you will have one or two pixels of theoretical motion blur (assuming a reasonable sized sensor) because it will not be the true limiting factor in image quality. Frank

Yes, the limiting factor will be the assumptions that you make, whether in theory or in practice.