Maniphest T21889

YUV->RGB: Color Clamping 16-235 fix
Closed, Archived (PATCH)

Assigned To
Peter Schlaile (schlaile)
Authored By
Troy Sobotka (sobotka)
Apr 3 2010, 1:28 AM
Tags
  • Nodes
  • BF Blender
Subscribers
Brecht Van Lommel (brecht)
Matt Ebb (broken)
Peter Schlaile (schlaile)
Troy Sobotka (sobotka)
Xavier Thomas (xat)
yellow .... (yellow)

Description

Summary: This patch corrects an issue regarding YUV->RGB import of motion picture files. The problem manifests itself in both the Sequencer and the Nodal Compositor. It sets colorspace details so that subsequent calls to sws_scale pull full 0-255 values out of the translation table.

On input of all motion picture files (likely including textures), Blender uniformly clamps values from their original 0-255 range to a broadcast 16-235 mapping. This reduces the quality of import and export.

Manifests in:
* Sequencer
* Nodal Compositor
* Anywhere else motion picture files are used including textures.

Additional Information: The patch was taken from ffmpeg code used to work around similar issues in PNG output. Link is here http://www.pasteall.org/12035.

Event Timeline

Troy Sobotka (sobotka) edited a custom field. Apr 3 2010, 1:28 AM
Troy Sobotka (sobotka) attached 1 file(s): F13049: yuvtorgb-patch.txt.

More information to be found at http://yellowsblog.wordpress.com/2009/09/10/video-import-into-blender/

Matt Ebb (broken) added a comment. Apr 3 2010, 8:40 AM

Summary of my discussions with Troy about this patch:

* FFMPEG/swscale is used to import files such as H264, which are stored internally in YUV format, into Blender, which uses RGB; it converts from YUV to RGB in the process.

* By default (as some bizarre historical artifact), the YUV->RGB conversion only takes the 'broadcast safe' colours (8bit range 16-235), and scales that to the full RGB 0-255. While perhaps better for analog monitors etc, this is nasty for Blender - in effect it's throwing away all the colours in the range of 0-16 and 235-255, and stretching the remainder to fill 0-255. This basically discards 15% of the already limited dynamic range of 8bit imagery, which is not nice at all.

* New (svn) versions of FFMPEG contain new commands/flags to tell it to retrieve the full 0-255 range of the YUV source, to convert it to 0-255 RGB. The patch essentially enables this.

* However this functionality is not in the current 0.5.1 FFMPEG release, which is what has been precompiled for OSX and perhaps Windows too in blender's /lib svn. Linux users can compile blender against the FFMPEG version on their system, which may well be a newer version.

* I'm not really involved (nor have I ever been) in the process of deciding what ffmpeg libraries to use in svn etc, but I presume it's not a good idea to rely upon a development svn version of FFMPEG for blender.

* Therefore, the best solution I can come up with for the time being, is to:
a) get someone (on linux, who can test it, etc) to commit the patch, but ifdefed out for now, so people with newer ffmpegs can manually enable it themselves to get this fixed functionality.
b) keep waiting for a new ffmpeg release and update our precompiled versions when that happens
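
For reference, the 16-235 to 0-255 stretch Matt describes can be sketched numerically. This is a hedged illustration of the default behavior, not the actual swscale code; the function name and the rounding/clamping details are my own:

```python
def expand_limited_to_full(y):
    """Map a 'broadcast safe' luma code (16-235) onto full range (0-255),
    the stretch applied by default; anything outside 16-235 clips away."""
    v = round((y - 16) * 255 / 219)
    return min(max(v, 0), 255)

# 16 -> 0 and 235 -> 255, but everything below 16 or above 235 is lost:
print(expand_limited_to_full(16))   # 0
print(expand_limited_to_full(235))  # 255
print(expand_limited_to_full(8))    # 0   (super-black discarded)
print(expand_limited_to_full(245))  # 255 (super-white discarded)

# Only 220 of the 256 input codes survive as distinct output values:
print(len({expand_limited_to_full(y) for y in range(16, 236)}))  # 220
```

This is the "throwing away ~15% of the 8bit range" effect: 36 of the 256 possible codes are clipped before the remainder is stretched.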

yellow .... (yellow) added a comment. Apr 3 2010, 10:26 AM

Ok

A bit more history. Peter Schlaile is also aware of this problem; I submitted a bug report for 2.49b about 12 months ago. Peter pointed to swscale being hard coded.

I built FFMPEG from git last night, including a separate swscale folder that I placed inside the ffmpeg folder; however, due to problems with Blender's scons build when trying to use a localised FFMPEG, I had to install it system-wide. So I have unstable git FFMPEG system wide now. Already reported the bug on the scons problems; Nathan and Brecht are aware.

I applied the pasteall patch, applied this patch to 2.5, and left the scons script pointing to /usr for FFMPEG.

I get confirmation that I'm using the git version of FFMPEG on the command line, but when testing using Troy's FFMPEG line I don't see any change in the PNG luma range, testing MPEG2 HD source. I can test DV etc. later. :-(

In 2.5 I see no change via Nodes or VSE either. :-(

But I'm not that familiar with building FFMPEG. Troy, can you suggest the command line config? Do I need to build swscale separately? I didn't see any error messages, and FFMPEG appears to have been built and installed.

The conversion matrix is an added consideration, re Xavier's recent patch for Rec709. Blender has always been Rec601, but the choice of matrix should really be exposed in the YCbCr -> RGB conversion as well.

Great work Troy. Looking forward to using this.

yellow .... (yellow) added a comment. Apr 3 2010, 10:29 AM

And sorry for the .png-heavy post on my blog that Troy points to; really bad show on my part. Must change them to .jpg. :-)

Although it will probably all be history very soon when this and the FFMPEG patches take over. :-)

yellow .... (yellow) added a comment. Apr 3 2010, 10:40 AM

Last comment. :-)

Scaling 16-235 to full range is typical in a conversion to RGB; otherwise RGB black will sit at 16 (more a grey), not 0.

I think a decent levels tool would be beneficial, where a user can map the YUV range to the RGB range. I know Colour Balance exists, but all apps have that plus a basic levels tool.

If users want broadcast range they can set levels accordingly; if users want to map their minimum populated YUV black level to RGB 0 they can. Video cams don't necessarily capture as low as YUV 0 or as high as 255; typically YUV 10 to 16 is the lowest reached.

If a user keeps the full range, where video black stays at RGB 16, there will be problems compositing with other graphics where black is 0, for example.

Troy Sobotka (sobotka) added a comment. Apr 3 2010, 5:17 PM

WRT cameras: most newer HD cams that record x264 use full range. It is a legacy issue bumping up against the current one, but in this discussion it is entirely moot. For post processing, grading, etc., it is entirely plausible and expected that you are granted the full dynamic range of your source material, whether or not it is properly scaled for a digital delivery format.

ffmpeg on Linux will always default to the system libs. This means that if you have _any_ traces of ffmpeg in your default libraries, it will use them.

In the end, there is odd mojo there.

I'll try my best to pop a screenshot of the histogram and submit it here.

Troy Sobotka (sobotka) added a comment. Apr 3 2010, 5:53 PM

I am attaching a PNG here for sampling.

The main issue here now is whether or not ffmpeg is making an optimal decision on the YUV to RGB translation, and likely that analysis would only be possible via output in something like Nuke.

For the time being, however, the behavior this patch induces via ffmpeg is preferred by an order of magnitude.

Unique colours in the selected frame under the previous YUV transform numbered 145081. After the YUV color space transform patch the count is 157217.

I believe the before and after quick image supports that claim via the scopes and internal histogram.

Troy Sobotka (sobotka) attached 1 file(s): F13052: samp-a.png. Apr 3 2010, 5:53 PM
Troy Sobotka (sobotka) attached 1 file(s): F13054: samp-b.png.
Troy Sobotka (sobotka) attached 1 file(s): F13055: b-a.png.
Troy Sobotka (sobotka) added a comment. Apr 3 2010, 6:06 PM

You can download the source footage here.

http://sample-images.s3.amazonaws.com/MVI_1394.MOV

Troy Sobotka (sobotka) added a comment. Apr 3 2010, 6:09 PM

Oh drat.

Re-read your last comment.

You _only_ need to apply the patch to Blender that is attached to this thread.

There is _no_ need to apply the PasteAll patch. This is specifically Blender related.

The issue Matt cites is that the ffmpeg included with Blender doesn't have the ffmpeg functionality to properly use the patch attached.

So @yellow, you have a few missteps from the misinformation I apparently accidentally presented to you. Apologies.

Again, the proper solution is to 1) Use an SVN ffmpeg and 2) Apply this patch to Blender. That's it.

Troy Sobotka (sobotka) added a comment. Apr 3 2010, 6:17 PM

Ugh. Typo in the patch. It worked fine but included a duplicate assert for anim being non-null. Sorry.

Troy Sobotka (sobotka) attached 1 file(s): F13059: yuvtorgb-fixed-patch.txt. Apr 3 2010, 6:17 PM
Troy Sobotka (sobotka) attached 1 file(s): F13062: nuke-out.png. Apr 4 2010, 4:52 AM

Don't tell anyone, but I think Nuke is erroneously dealing with luminance values too.

I have attached both sRGB output from Nuke and REC709.

You be the judge.

Troy Sobotka (sobotka) attached 1 file(s): F13065: nuke-out-709.png. Apr 4 2010, 4:52 AM
Troy Sobotka (sobotka) added a comment. Apr 4 2010, 5:37 AM

After a bit more toying, setting Nuke to REC709 at source is the only version that gets a histogram / scope readout that is even remotely close. That said, there is definitely some stretching going on in the blacks.

From what I can see, Blender's output thus far yields the largest gamut of differing colours. Whether or not there is some fudging going on at the YUV to RGB level is unknown.

Interesting however...

yellow .... (yellow) added a comment. Apr 4 2010, 9:45 AM

I've tested Troy's patch on Linux (Karmic Koala) and find that for true progressive source material the patch gives unscaled luma as proposed. Unsure as to colour accuracy at this point.

However, the patch does not appear to work with interlaced sources: neither fielded sources where there is a time difference between fields, i.e. top field or bottom field first,

nor PSF ('Progressive Segmented Frame') type interlaced sources, which are really progressive, with two segments rather than fields and no time difference, but in a container that can get flagged as an interlaced source.

So currently the patch will fail on the majority of source video that the average user would import, whether from SD or HD cameras; only users with DSLRs would be successful.

Whether a video source is interlaced or not matters for the YUV -> RGB conversion; it needs to be known, including field order, and PSF material needs to be accurately detected.

I think FilterY in blender handles this?

So maybe it's better to expose the colour matrix (601 or 709), an interlaced yes/no, and field order options to the user before conversion?

Also, deinterlacing in the VSE doesn't give a deinterlaced preview.

'Full Range' is not only the preserve of new cameras; even old DV cams have the full range to capture within, but in reality, unless a user underexposes substantially, black values will struggle to reach below YUV 10.

YUV to RGB luma mapping doesn't mean YUV 0 is equivalent to RGB 0 and so on. Dynamic range runs from the lowest captured black to the highest white, not necessarily 0 and 255.

As per my original bug report from Sept 09, there should be an option to import video sources without luma scaling in the conversion to RGB, possibly as the default operation. But not everyone requires this: it adds steps on import and export, optimising levels by taking the minimum populated black and maximum populated white, remapping them to RGB 0 and 255, and mapping those back to 16-235 for DVD, as an example.

Be prepared for comments like 'Why does my video look washed out?' after import if no scaling is the default and no choice is given.

Extra info on the histogram, like min and max luma, plus a levels tool for quick remapping, would also help.
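
The levels workflow described above (remap the populated black/white points on import, and squeeze back to broadcast range on export) can be sketched like this. The helper names are hypothetical, and the measured black of YUV 10 is just an example value from the comment above, not a standard:

```python
def clamp(v, lo=0, hi=255):
    """Limit v to the [lo, hi] interval."""
    return min(max(v, lo), hi)

def levels(v, in_min, in_max, out_min=0, out_max=255):
    """Hypothetical 'levels' remap: map the populated input range
    [in_min, in_max] linearly onto [out_min, out_max]."""
    scaled = (v - in_min) * (out_max - out_min) / (in_max - in_min) + out_min
    return clamp(round(scaled))

# Footage whose blacks bottom out at YUV 10 and whites at 250:
print(levels(10, 10, 250))           # 0   - populated black mapped to RGB 0
print(levels(250, 10, 250))          # 255 - populated white mapped to RGB 255
# Mapping full-range RGB back into 16-235 for a DVD master:
print(levels(0, 0, 255, 16, 235))    # 16
print(levels(255, 0, 255, 16, 235))  # 235
```

The same linear remap covers both directions; only the input and output ranges change.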

Matt Ebb (broken) added a comment. Apr 4 2010, 2:54 PM

Re interlaced: I noticed there is another separate (not sure why) area in the code, above the patched area, that deals with interlaced footage; it may need these additional flags added there too. It's hard to add/test it myself though, since I can't compile it with the FFMPEG version I have here on OSX.

Maybe Troy can have a test?

Troy Sobotka (sobotka) added a comment. Apr 4 2010, 6:03 PM

@yellow:

Two things.

First, attach a sample. Five megs would suffice.

Second, on the issue of scaled luma or not: the point is that within Blender it isn't an illogical choice to use the full gamut. And yes, while older cameras have ranges beyond the luma expected, one would still likely want that information in a post production environment.

I can't think of a circumstance where this would _not_ be desirable during work. That said, there could easily be a checkbox at render time to provide a 0-255 > 16-235 mapping for destination preview, perhaps?

If you have an interlaced or PSF sample, see if you can dig one up. I may have one kicking around from my older GH1, but if you have alternate formats it would be appreciated.

yellow .... (yellow) added a comment. Apr 5 2010, 1:32 AM

psf25 PAL HDV 1440x1080 sample.

yellow .... (yellow) added a comment. Apr 5 2010, 2:16 AM

A link might help. :-)

www.jazmotion.pwp.blueyonder.co.uk/psf25.m2t

Brecht Van Lommel (brecht) added a comment. Apr 6 2010, 12:53 PM

We don't need to upgrade everyone's ffmpeg version to commit the patch, this code can be #ifdef'd out based on the ffmpeg version number, see e.g. the attached patch.

Brecht Van Lommel (brecht) attached 1 file(s): F13073: patch_with_version_check.txt. Apr 6 2010, 12:53 PM
Troy Sobotka (sobotka) added a comment. Apr 6 2010, 9:33 PM

I can't seem to spot where exactly sws_scale would handle things any differently, as the context seems to remain the same. I tried moving the setColorspaceDetails call around to no avail.

If I had to wager, I would suspect that the interlaced issues are likely ffmpeg related - most likely tied to the newer colorspace code landing.

Matt Ebb (broken) attached 1 file(s): F13074: interlace_ffmpeg.txt. Apr 8 2010, 5:39 AM

ack, of course it can be ifdefed based on version, silly.

Ok, with that version check, it compiles and runs just fine on my system, so I've committed it - let me know if there are any troubles.

As for interlaced sources, does it work to just copy that code block up higher in the part dealing with interlaced?

I've attached a patch, interlace_ffmpeg.txt, but I can't actually test it here. Give it a try and see what happens.

Troy Sobotka (sobotka) added a comment. Apr 8 2010, 6:02 AM

Yeah I spotted that too Matt.

I already attempted what you suggested ;) to no avail. I don't see where anim->img_convert_ctx would change for interlaced or non-interlaced content. If you can see where it would grab a new anim->img_convert_ctx then we should place it there.

It is pretty easy to spot the scaling in Blender's Seq simply by looking at the luma scope. If you see dark black bands in roughly 10% increments, you can tell immediately that the source is scaled.

If I had to wager, I'd say this is ffmpeg missing the interlaced colorspace adjustments.

Xavier Thomas (xat) added a comment. Apr 9 2010, 7:54 PM

Hello,

I can't add a new file to this thread :(.

I made a patch to clean this one up (after it was committed, unfortunately); you can find it here:

http://www.pasteall.org/12303/diff

It also silences a couple of warnings.
Tested here and it works fine.

However, I am a little more pessimistic about its magic!
I think this only tries to detect the YCC colorspace details (matrix coefficients and range) from the codec used. This might be useful for ffmpeg, which can read/write JPEG, PNG, etc. For Blender, however, I suppose the range parameter is only useful when reading Motion JPEG. The matrix coefficient is nice, though it should not really produce tremendous differences.

What most digital camera users want (those that will post process the footage) is to force a JFIF conversion (instead of ITU601), or a custom conversion with the same matrix coefficients as ITU709 but a 0-255 range (instead of true ITU709).

This is still not possible: the patch makes ffmpeg try to autodetect these parameters, with no way to force them. And MPEG codecs will be detected as using the 16-235 range, because they do use this range.

Xavier
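
To illustrate Xavier's distinction between the matrix coefficients and the range, here is a hedged sketch of a full-range (JFIF-style) YCbCr -> RGB conversion parameterised by the 601/709 luma coefficients. The helper is my own illustration, not Blender's or ffmpeg's code:

```python
def ycbcr_to_rgb(y, cb, cr, kr, kb):
    """Full-range (0-255) YCbCr -> RGB for a given matrix.
    kr/kb are the luma coefficients: BT.601 uses 0.299/0.114,
    BT.709 uses 0.2126/0.0722."""
    kg = 1.0 - kr - kb
    r = y + 2 * (1 - kr) * (cr - 128)
    b = y + 2 * (1 - kb) * (cb - 128)
    # G is recovered from Y = kr*R + kg*G + kb*B:
    g = y - (2 * kb * (1 - kb) * (cb - 128) + 2 * kr * (1 - kr) * (cr - 128)) / kg
    clip = lambda v: min(max(round(v), 0), 255)
    return clip(r), clip(g), clip(b)

# Neutral greys are identical under either matrix...
print(ycbcr_to_rgb(128, 128, 128, 0.299, 0.114))    # (128, 128, 128)
print(ycbcr_to_rgb(128, 128, 128, 0.2126, 0.0722))  # (128, 128, 128)
# ...while a saturated colour shifts between 601 and 709:
print(ycbcr_to_rgb(81, 90, 240, 0.299, 0.114))
print(ycbcr_to_rgb(81, 90, 240, 0.2126, 0.0722))
```

Picking the wrong matrix shifts saturated colours, but (as Xavier notes) the effect is far smaller than decoding with the wrong range, which compresses or clips the whole tonal scale.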

Xavier Thomas (xat) added a comment. Apr 9 2010, 8:07 PM

Would be interesting to check the colorspace details while encoding also.

Troy Sobotka (sobotka) added a comment. Apr 9 2010, 9:42 PM

@xat:

It sure would.

This was a desperation hack on my behalf so that I didn't have to manually output a PNG series with a patched ffmpeg.

What we _really_ need to attack is the matrix (perhaps we just let the technician select 601/709 and force it if possible?) and whether to broadcast clamp or not (again, let the technician choose) for output / preview / etc.

You likely know more than I do by an order of magnitude on this front, Xat, so I'd hope you can help bring the Blender VSE to a new quality level.

It should also be noted that this isn't strictly a VSE issue as the same code is used everywhere when it comes to motion picture files. As such, the quality level impact is rather profound throughout Blender.

Xavier Thomas (xat) added a comment. Apr 9 2010, 11:32 PM

I did test with the Big Buck Bunny h264 (mov), and the detected range was 0 (16-235), which I expected because MPEG is known to use this range. However, I tried debugging another time with the test vid one of you linked (MVI_1394.MOV), and the detected range for this one is 1 (0-255). The difference compared with unpatched Blender is tremendous, especially in the blacks and the saturated yellow.

So I suppose some codecs support both ranges and ffmpeg detects it correctly. H264 in the QuickTime format seems to be one of those cases.

Still no way to post my patch here?!
http://www.pasteall.org/12303/diff


Xavier

Troy Sobotka (sobotka) added a comment. Apr 10 2010, 4:01 AM

Xat, are you not able to submit at the bottom of this form?

I find it odd that h264 doesn't work as it is YUV. Ideas?

Xavier Thomas (xat) added a comment. Apr 10 2010, 4:18 AM

I can't post the patch because there is no "attach file" at the end of this page (but I see it in other threads). Maybe there is a maximum number of attached files.

I don't know about h264, but I know MPEG2 can optionally embed "only":
- color primaries
- transfer characteristics (gamma transfer function)
- matrix coefficient (for YCbCr conversion)

and for these parameters you can choose:
- BT.709 / SMPTE.274M
- FCC
- BT.470
- SMPTE.170M
- SMPTE.240M

and I am pretty sure all of these map YCbCr to the 16-235 / 16-240 range.

So I suppose this is a feature of the quicktime container.

Troy Sobotka (sobotka) added a comment. Apr 10 2010, 6:33 AM

Adding Xat's patch because he couldn't see the submit box for some reason.

Troy Sobotka (sobotka) attached 1 file(s): F13081: yuv-patch-xat.patch. Apr 10 2010, 6:33 AM
yellow .... (yellow) added a comment. Apr 10 2010, 10:19 AM

Hi

The Sequence Display Extension flag in the video file header needs to be set by the encoder for a 'correct' matrix decoding choice.

But it is not always set by the encoder: some set it fixed, some let you choose, some are oblivious. From the few tests I've done, FFMPEG doesn't set the flag for any of the codecs it implements that I have tested.

Media player playback varies: some players will ignore the SDE flag, others respect it. DVD, for example, is generally played back as 601 regardless of the matrix used.

Some software media players ignore the matrix flag too, and they'll always clamp to 16-235 on RGB playback, like FFMPEG does. Even if the user sets the matrix flag, it's not 100% certain they will see what they encoded correctly, depending on how the playback device handles it.

If a user wants to use the full range luma, take HD to SD encoding where the original is 709 and 0-255 (re the BBB DVD from full range RGB): the user must bring the 0-255 range down within the 16-235 zone and do a matrix conversion to 601 when encoding for DVD playback devices, otherwise the out-of-range values will just be ignored, or clipped and crushed, by FFMPEG-based software players, for example. So choice at the encoder is needed.

Perhaps BBB was encoded with a limited 16-235 luma range, depending on the codec choices. Were the matrix and luma scaling different between the BBB versions (DVD, web, HD, for example), or was it 16-235 across all versions? This would have to be set in the encoder by the user if there were a choice.

I mentioned further up this thread that I feel the user needs to be able to set the colour matrix they want to use in the conversion to RGB or RGB to YCbCr, and whether they use 16-235 or 0-255. Whether the source is interlaced or not is involved in the conversion to RGB and back too, I believe.

Using MediaInfo, the MOVs have 709 colour primaries but the colour matrix used is 601; perhaps that has an effect.

The QuickTime container can use YUV or RGB, and 16-235 or 0-255 for 8bit (and however many more for 10bit); so can .avi.

I think it is the implementation of the codec, and what is exposed to the user as encoder/decoder preferences, that determines what a container format's codec holds with regard to luma and colour matrix.

I believe some players choose based on frame size, for example 720x576 being decoded as 601 and 16-235, and 1920x1080 as 709 and 0-255.

Don't know if any of the above is accurate, useful or necessary. :-)
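
The frame-size heuristic mentioned above can be sketched as follows. This is purely illustrative: real players differ, and the 720-line threshold is my own assumption, not a documented rule:

```python
def guess_matrix(width, height):
    """Hypothetical heuristic some players fall back on when no matrix
    flag is set in the stream: SD frame sizes decode as BT.601, HD as
    BT.709. The >= 720 line cut-off is an assumption for illustration."""
    return "BT.709" if height >= 720 else "BT.601"

print(guess_matrix(720, 576))    # BT.601 (SD PAL frame)
print(guess_matrix(1920, 1080))  # BT.709 (HD frame)
```

Such guessing is exactly why the thread argues the matrix and range should be explicit user choices rather than inferred.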


Xavier Thomas (xat) added a comment. May 5 2010, 2:25 AM

Committed some time ago.
Closing.

Xavier Thomas (xat) changed the task status from Unknown Status to Unknown Status. May 5 2010, 2:25 AM
Fred Feuerstein (boogiwoogie) mentioned this in T66049: sequencer scales rgb values to 16-235. Jun 23 2019, 4:09 PM