ChrisH wrote: @Hans & salass00 Thanks for the info... but in the end, for the sake of compatibility with uh "other OSes", I've decided to keep the alpha channel as part of the bitmap. I guess it might be slightly faster as well!
Having the alpha channel in the bitmap itself is actually the "normal" way of doing compositing (no need to add a source-mask). I just assumed that you needed a separate mask for some reason.
@Hans As I recall, I asked whether it was possible to have a separate alpha channel bitmap, as this would make it easier to implement. So no need to explain yourself!
I subsequently discovered that some "other OSes" don't support a separate alpha channel bitmap, so I'll have to go the more complex route to support alpha channels.
@Hans I've been puzzling why CompositeTagList() wasn't working. It appears that OS4's CyberGraphics emulation doesn't support PIXFMT_ARGB32, because when allocating such a bitmap with depth=32, I still get a 24-bit bitmap back!
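The allocation attempt is roughly the following (a sketch of the approach rather than my exact code; it assumes the usual BMF_SPECIALFMT + SHIFT_PIXFMT() route from cybergraphx/cybergraphics.h):

    #include <proto/graphics.h>
    #include <cybergraphx/cybergraphics.h>

    /* Ask for a 32-bit ARGB bitmap via the CGX pixel-format flags */
    struct BitMap *bm = AllocBitMap(width, height, 32,
                                    BMF_SPECIALFMT | SHIFT_PIXFMT(PIXFMT_ARGB32),
                                    NULL);
    /* ...yet GetBitMapAttr(bm, BMA_DEPTH) comes back as 24, not 32 */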
ChrisH wrote: @Hans I've been puzzling why CompositeTagList() wasn't working. It appears that OS4's CyberGraphics emulation doesn't support PIXFMT_ARGB32, because when allocating such a bitmap with depth=32, I still get a 24-bit bitmap back!
That sounds like a bug to me. Please send Hyperion a bug report against Picasso96's emulation library.
@Hans It does appear to be a bug... but not with the CGX emulation: I rewrote my code to use p96AllocBitMap(), and the problem still happens!
To p96AllocBitMap() I am supplying Depth=32, Flags=0, friend_bitmap=0, and rgbFormat=RGBFB_A8R8G8B8. When I use GetBitMapAttr(bitmap, BMA_DEPTH) on the returned bitmap, I get the value 24 (and CompositeTagList() doesn't see an alpha channel despite my code *hopefully* being correct).
(And yes, I know I need Flags=BMF_DISPLAYABLE for hardware acceleration, but that happens later. I first need to allocate a bitmap not in video memory. And anyway, using that flag makes no difference.)
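In code terms the allocation and check are roughly this (a sketch of what I described above, assuming the usual Picasso96API includes; width/height and the depth check are just illustrative):

    #include <proto/graphics.h>
    #include <proto/Picasso96API.h>   /* p96AllocBitMap(), RGBFB_A8R8G8B8 */

    /* 32-bit ARGB bitmap, not (yet) displayable, no friend bitmap */
    struct BitMap *bm = p96AllocBitMap(width, height, 32, 0, NULL, RGBFB_A8R8G8B8);
    if (bm != NULL)
    {
        ULONG depth = GetBitMapAttr(bm, BMA_DEPTH);  /* returns 24 here, not 32 */
    }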
This is VERY weird. Any thoughts gratefully received. (My evidence points to an OS/driver bug, but that seems unlikely since SwampDefence etc. work fine. It seems more plausible that I'm doing something wrong, but I'm not sure what I could be doing that would affect both the CGX emulation & P96.)
ChrisH wrote: @Hans It does appear to be a bug... but not with the CGX emulation: I rewrote my code to use p96AllocBitMap(), and the problem still happens!
I had a look at my Composite3DDemo code, and I'm getting perfectly usable ARGB bitmaps with the following line:
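(Reconstructed from memory rather than pasted verbatim; it is a call of this form, though the exact flags in the demo may differ:)

    struct BitMap *argb_bm = p96AllocBitMap(width, height, 32,
                                            BMF_DISPLAYABLE,   /* assumed; could be 0 */
                                            NULL, RGBFB_A8R8G8B8);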
Since the call that you are making is more or less identical, you should have a valid ARGB bitmap. Maybe the returned depth is wrong, and you aren't writing to the alpha channel properly.
@Hans OK, thanks for checking. p96GetBitMapAttr() reports that the bitmap has depth=24 *but* it also reports that rgbFormat=RGBFB_A8R8G8B8 !
Since I'm not doing the allocation wrong, I'm going to have to assume the depth really is 32. Unfortunately the wrongly reported depth breaks my code, and is *a* reason for the alpha channel not working... but it seems it is not the only reason (since I have now worked around that problem).
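For anyone else who hits this, my work-around boils down to trusting the reported pixel format rather than the reported depth (a sketch):

    ULONG depth = p96GetBitMapAttr(bm, P96BMA_DEPTH);      /* reports 24 */
    ULONG fmt   = p96GetBitMapAttr(bm, P96BMA_RGBFORMAT);  /* reports RGBFB_A8R8G8B8 */

    /* If the format says ARGB, treat the bitmap as 32-bit regardless */
    if (fmt == RGBFB_A8R8G8B8)
        depth = 32;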
EDIT: Found the *other* reason for my problems. I was using COMPOSITE_SRC, when I should have been using COMPOSITE_SRC_OVER_DEST .
Why don't you use BytesPerPixel instead? Because RGB is 24-bit and alpha is 8-bit, even though ARGB is 32-bit; it seems there is some confusion about the definition.
(X * BytesPerPixel) + (Y * BytesPerRow)
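For example (a sketch; the base address and the two attribute values are assumed to come from locking/querying the bitmap first):

    ULONG bytesPerPixel = p96GetBitMapAttr(bm, P96BMA_BYTESPERPIXEL);
    ULONG bytesPerRow   = p96GetBitMapAttr(bm, P96BMA_BYTESPERROW);
    ULONG offset = (x * bytesPerPixel) + (y * bytesPerRow);

    /* e.g. write one A8R8G8B8 pixel, 'base' (a UBYTE *) being the bitmap's memory address */
    *(ULONG *)(base + offset) = 0xFF0000FF;   /* opaque blue */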
@LiveForIt Thanks for the suggestion. *Bits*PerPixel seems to do what I wanted.
That combined with me now using COMPOSITE_SRC_OVER_DEST (instead of COMPOSITE_SRC, which I had stupidly copied from my scaling routine) has basically solved my problem.
@Hans I have discovered that if I allocate bitmaps larger than 4032 wide, they don't go into video memory. Do you know if this is a specific limitation of RadeonHD cards, and if so is there any way to easily find out what the limit might be on other cards?
Oh, and do you know what (if any) height limitation there may be? I guess it would be the same as the width?
ChrisH wrote: @Hans I have discovered that if I allocate bitmaps larger than 4032 wide, they don't go into video memory. Do you know if this is a specific limitation of RadeonHD cards, and if so is there any way to easily find out what the limit might be on other cards?
Oh, and do you know what (if any) height limitation there may be? I guess it would be the same as the width?
I don't know where that limitation comes from. RadeonHD cards have a limitation of 8192x8192 for bitmaps/textures. Older Radeon cards have a limitation of 2048x2048 (or is it 2047x2047?) for textures, which affects compositing because compositing is done by the 3D GPU, instead of the 2D blitter unit.
To my knowledge, there is currently no way for applications to query these limits.
If you have huge bitmaps, then you will need to cut them into tiles.
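Roughly along these lines (just a sketch of the idea; the 2048x2048 tile size is arbitrary and should really match the card's texture limit):

    #include <proto/graphics.h>
    #include <proto/Picasso96API.h>

    #define TILE_W 2048
    #define TILE_H 2048

    /* Cut a huge RAM bitmap (bigBM, bigWidth x bigHeight) into VRAM-sized tiles */
    for (ULONG ty = 0; ty < bigHeight; ty += TILE_H)
    {
        for (ULONG tx = 0; tx < bigWidth; tx += TILE_W)
        {
            ULONG w = (bigWidth  - tx < TILE_W) ? bigWidth  - tx : TILE_W;
            ULONG h = (bigHeight - ty < TILE_H) ? bigHeight - ty : TILE_H;

            struct BitMap *tile = p96AllocBitMap(w, h, 32, BMF_DISPLAYABLE,
                                                 NULL, RGBFB_A8R8G8B8);
            if (tile != NULL)
            {
                /* Copy this region of the big bitmap into the tile (minterm 0xC0 = copy) */
                BltBitMap(bigBM, tx, ty, tile, 0, 0, w, h, 0xC0, 0xFF, NULL);
                /* ...then composite the tile, or keep it for later use... */
            }
        }
    }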
I don't suppose anyone knows why COMPOSITE_SRC is causing my source alpha values (sometimes 0x00 sometimes 0xFF) to always become 0xAA in the target? All I am doing is something like this:
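(Paraphrased rather than my literal code; the operator/flag spellings are as in the SDK headers:)

    /* srcBM and dstBM are both ARGB bitmaps */
    CompositeTags(COMPOSITE_Src, srcBM, dstBM,
                  COMPTAG_Flags, COMPFLAG_IgnoreDestAlpha,
                  TAG_DONE);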
ChrisH wrote: I don't suppose anyone knows why COMPOSITE_SRC is causing my source alpha values (sometimes 0x00 sometimes 0xFF) to always become 0xAA in the target? All I am doing is something like this:
COMPOSITE_Src ignores the destination completely BUT the source is first scaled to fit the destination. So the resulting alpha could be from the scaling if the sizes differ.
The COMPFLAG_IgnoreDestAlpha flag should have no effect with the COMPOSITE_Src operand (as the destination is ignored anyway).
IIRC, COMPFLAG_IGNOREDESTALPHA disables writing the alpha value to the destination bitmap. 0xAA will be the value that you had in there beforehand.
Thanks, that was the answer! I'd have never guessed that, since the SDK *can* be read as saying it only affects the reading of the alpha: Quote:
COMPFLAG_IgnoreDestAlpha If this flag is set, the destination bitmap is assumed to be without alpha channel, i.e. Alpha is assumed to be one. On some hardware this greatly enhances the performance or reduces the overhead of the function (for example, it frees the R200 driver from allocating temporary texture storage). Use this flag whenever possible.
ChrisH wrote: Thanks, that was the answer! I'd have never guessed that, since the SDK *can* be read as saying it only affects the reading of the alpha.
That might have confused me a little too, but remember: if you composite a pixel with an alpha of 0.5 over a pixel with an alpha of 1.0, then the result has an alpha of 1.0. So initially you might assume that the dest alpha will be 1.0 after the operation; but of course it's ignoring the dest alpha and assuming that it's already 1.0 (or 0xFF in an 8-bit channel), so it doesn't set it to 1.0, as that would be inefficient when it's already 1.0.
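In other words (standard "over" arithmetic for the alpha channel, just to spell it out):

    /* alpha of "src over dest": out_a = src_a + dest_a * (1 - src_a) */
    float src_a  = 0.5f;
    float dest_a = 1.0f;
    float out_a  = src_a + dest_a * (1.0f - src_a);   /* = 0.5 + 0.5 = 1.0 */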
As for the "Use this flag whenever possible", it's clearly not "possible" in this case.