MovieStuff wrote: SD technology/quality comes in many varieties and I've never seen any that looks as good as oversampling in HD and down converting to SD.
Referring to HD scanning as "oversampling" is interesting.
This is because even the best SD cameras have a massive sharpening circuit that all imagery is processed through; otherwise it would be incredibly soft and out of focus.
No matter how good the SD sharpening circuitry is, it can't make the image any sharper than the cutoff frequency of the sensor cells allows. All it can ever do is sharpen information that was below the cutoff frequency of the sensor cells; anything above it was never captured.
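To make that concrete, here's a toy numerical sketch (the sample counts and frequency are made-up illustrations, not real scanner specs): a pattern finer than the sampling grid's Nyquist limit aliases at capture, and sharpening afterwards only amplifies the aliased component; it can never restore the original frequency.

```python
import numpy as np

# Illustrative sample counts (not real raster widths): "HD" vs "SD" lines.
n_hd, n_sd = 1920, 720
freq = 500  # cycles across the frame -- above SD's Nyquist limit of 360

hd = np.sin(2 * np.pi * freq * np.arange(n_hd) / n_hd)  # HD resolves it
sd = np.sin(2 * np.pi * freq * np.arange(n_sd) / n_sd)  # SD aliases it

def peak_freq(signal):
    """Dominant frequency (in cycles per frame) of a sampled signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    return int(np.argmax(spectrum[1:])) + 1  # skip the DC bin

# A simple unsharp mask: boost whatever differs from a local average.
blurred = np.convolve(sd, np.ones(5) / 5, mode="same")
sharpened = sd + 0.5 * (sd - blurred)

print(peak_freq(hd))         # 500 -- the real pattern
print(peak_freq(sd))         # 220 -- aliased (720 - 500); the pattern is gone
print(peak_freq(sharpened))  # 220 -- sharpening can't bring 500 back
```

Sharpening boosts contrast in what was captured; the 500-cycle detail was never captured, so it stays lost.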
Thus a large degree of perceived detail in an SD image is totally artificial. It works much better than it should, honestly, and to be fair HD and even DSLR imaging also use sharpening, but not as much because they have more pure resolution to start with.
Indeed. An HD scan starts off with more "pure resolution" (more of the original image) than an SD scan, and it can see more of the original image than SD ever can, no matter how good the SD sharpening circuitry.
However, the bottom line is that an SD image starts "soft" and is then artificially sharpened to SD quality.
Absolutely.
That is going to look markedly different than starting with something already "sharp" like an HD image and then down-rezzing it to SD resolution.
Yes. That's right. The SD scan, no matter how good its sharpening circuitry, can never be sharper than the down-rezzed HD. The down-rezzed HD marks the point beyond which an SD scan cannot improve, which is why it makes a good reference.
The down-rezzed HD represents the best that SD could ever do.
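As a sketch of what "down-rezzed HD" means here (the block averaging and the neat integer factor are simplifying assumptions; real SD/HD rasters don't divide this cleanly), each SD pixel can be taken as the area average of a block of HD pixels, computed offline with no real-time constraint:

```python
import numpy as np

def downrez(hd_frame, factor):
    """Area-average an HD frame down by an integer factor.

    An idealized offline down-conversion: every output pixel is the mean
    of a factor x factor block of input pixels, so nothing SD could have
    captured is thrown away.
    """
    h, w = hd_frame.shape
    return hd_frame.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A hypothetical 4x4 "HD" frame reduced to 2x2:
hd_frame = np.arange(16, dtype=float).reshape(4, 4)
print(downrez(hd_frame, 2))
# [[ 2.5  4.5]
#  [10.5 12.5]]
```

Because this runs as a software render, the filter can be as careful as you like; a real-time SD capture chain has no such luxury.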
If anyone thinks that's really the same, then capturing in HD will look exactly the same as capturing with a DSLR and down converting to HD. Obviously that isn't true, either.
As if that was ever the argument. Kent makes the following proposition: "Yes I compared a SD transfer to a HD transfer. You have not done this, and that is what makes your test flawed." He then proceeds to argue why they are not the same (and they are not) but doesn't argue why that makes my test flawed. He is shifting the argument to whether SD and down-rezzed HD are the same, rather than addressing what exactly is flawed about how I did it.
There are inherent flaws in any capture format, no matter how large the image. But, the smaller the image, the more these flaws must be artificially compensated for through real time digital processing to make it acceptable. The subtle nuances created by this processing throughout the SD image are unique to that format and impossible (and not desirable) to recreate via down converting from HD.
All those nuances are irrelevant to the debate. If the down-rezzed HD represents
the best that SD could ever do, then it can stand in for any real-world SD scan.
Not only does a larger image start with fewer flaws, but down converting via software faces no time constraints on quality, whereas the original SD imaging must attenuate and process the soft raw image in real time, which is never ideal.
Yes. A larger image (HD) starts off with fewer flaws. An SD scan can't look any better than an HD scan no matter how good its circuitry.
That's also why SD footage upconverted to HD via a software render always looks better than a real time uprezz during display.
Another good point.
So when someone says they are going to do an SD transfer to see what it looks like, it should be assumed they are doing an actual SD transfer with SD technology and not an oversampled HD transfer that is then down converted using software to SD.
Yes. There is a difference. The problem is that this difference is not the issue. When someone compares an HD scan with a down-rezzed version of it, one can assume they are comparing the HD scan with something better than any SD scan could ever achieve (or, at worst, the same). They are not proposing that down-rezzed HD is the same as SD. They are proposing that the down-rezzed HD represents
the best that SD could ever achieve (or, at worst, the same).
Which is the actual argument. The argument that HD is not overkill is based on the proposition that down-rezzed HD = best/ideal SD. And if there is a difference (other than noise) between an HD scan and the best SD could ever do, then that is the same as saying the HD scan gives us something more than the
best that SD could ever achieve.
And if HD does this, if HD is better than
the best that SD could ever do, then HD scanning is something one can consider. It provides something more than what
SD could ever achieve. It answers the thread question in a perfectly valid way.
We arrive at the same conclusion as Kent (that HD scanning has something to offer) without ever doing a real-world SD test, precisely because we can recreate
the best that SD could ever do from the HD scan. Kent's proposition that my test is invalid doesn't hold.
Or to put it another way, if there were no difference (other than noise) between HD and
the best that SD could ever do, then and only then might HD possibly be overkill. But since that isn't the case, there is no need for further testing in the lower definition range (whether real world or simulated). It is towards higher definition scans one would have to go to find the point at which a higher definition scan becomes overkill.
And so far I haven't found where that limit could be. I've scanned up to the equivalent of 16mm at 24K, and there is still a real world signal to be found in such scans (not just the noise of the medium), i.e. when comparing them with
the best a lower definition scan could ever achieve.
Now if the signals that exist at these ultra high definitions don't matter to you, then you can treat them as irrelevant and the ultra high def scans as overkill, but it won't be because those signals aren't there. They are.
But one has to analyse what those signals are. They certainly belong to the real world signal, but if the real world signal is "out of focus" all one would be seeing in increasingly higher definition scans is the difference in intensity value along an otherwise smooth gradient, something the mind can interpolate anyway. It is only those signals the mind can't interpolate that would be relevant. That's where resolution charts come in handy. They provide a pattern that, if interpolated (by the brain, or by the lens, the transfer, etc.), would result in a smeared signal. It is where a resolution chart fails to cut through the signal (becomes smeared) that one can draw a line and say that scanning any higher won't get a signal any better than what the brain would interpolate anyway. That's where one can say the scan threatens to become overkill. However, what we now have, that our forefathers didn't, is the ability to digitally analyse these signals with increasingly useful algorithms, to tease out more information than we otherwise could.
For example, an "out of focus" signal can be brought back into focus (to a certain extent), and the higher the definition of the scan the better such algorithms work. Denoising algorithms also work better with higher definition scans than lower ones, so there's another reason (if you don't like noise) to do a higher definition scan. And there is information to be extracted in the time domain as much as the spatial domain that benefits from higher definition scans. So even when one has found the limit according to a resolution chart (and I failed to find such a limit scanning Super8 @ 2.5K, or 16mm @ 5K, or 35mm @ 10K), even that won't be the actual limit.
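A minimal sketch of that refocusing idea (assuming the blur kernel is known exactly, which real restoration has to estimate; the kernel width and the regularization constant `k` are made up for illustration): a Wiener-style frequency-domain filter recovers much of the detail the blur suppressed.

```python
import numpy as np

n = 1024
x = np.arange(n)
signal = (np.sin(2 * np.pi * 60 * x / n) > 0).astype(float)  # sharp bars

# An "out of focus" blur: circular Gaussian point-spread function.
sigma = 4.0
d = np.minimum(x, n - x)  # circular distance from index 0
psf = np.exp(-d**2 / (2 * sigma**2))
psf /= psf.sum()

H = np.fft.fft(psf)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

# Wiener-style inverse: boost what the blur attenuated, damp what it
# (and noise) destroyed, instead of dividing by H outright.
k = 1e-3  # noise-dependent regularization constant (assumed)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + k)))

err_blur = np.mean((blurred - signal) ** 2)
err_rest = np.mean((restored - signal) ** 2)
print(err_rest < err_blur)  # True: the restoration tracks the original better
```

The recovery is only partial: frequencies the blur pushed below the noise floor stay lost, which is exactly why starting from a higher definition scan gives such algorithms more to work with.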
Carl