Because I only use the vertex version for affine transforms. I don't know if there is any speed difference between the two, but if you are just plotting square letters, then there is no need to use triangulations.
Errr, because it would magically speed up software that doesn't use CompositeTags (for whatever reason). If the overhead is not negligible, then they can always try using CompositeTags later.
I really don't understand the logic that "we won't optimise an OS function because the speed-up would be smaller than everyone rewriting their apps".
That's because you're not factoring in the limited resources that we have, and the large number of other higher impact tasks that OS developers have. Once you factor those in, things look different.
AFAIK, BltBitMapTags() didn't exist before AmigaOS 4 update 1, so its use will be limited, and it's highly likely that the software in question is being actively developed. There's always a chance that someone will enhance BltBitMapTags(), but there are plenty of other (higher impact) tasks demanding attention. So, I wouldn't count on it happening soon.
In the meantime, I'm encouraging developers to use CompositeTags() directly.
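For reference, a minimal alpha-blend with CompositeTags() looks roughly like the sketch below. It's only a sketch: srcBM, dstBM and the sizes are placeholders, and the tag and flag names should be double-checked against the graphics.library compositing autodoc before use.

Code:
#include <exec/types.h>
#include <proto/graphics.h>
#include <graphics/gfx.h>

/* Sketch: alpha-blend a 32-bit ARGB source BitMap over a destination
 * BitMap.  srcBM, dstBM, width and height are placeholders; verify the
 * tag and flag names against the compositing autodoc. */
void blend_over(struct BitMap *srcBM, struct BitMap *dstBM,
                uint32 width, uint32 height)
{
    IGraphics->CompositeTags(COMPOSITE_Src_Over_Dest, srcBM, dstBM,
        COMPTAG_SrcWidth,   width,
        COMPTAG_SrcHeight,  height,
        COMPTAG_DestX,      0,
        COMPTAG_DestY,      0,
        COMPTAG_DestWidth,  width,
        COMPTAG_DestHeight, height,
        COMPTAG_Flags,      COMPFLAG_IgnoreDestAlpha,
        TAG_DONE);
}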
That's not the window's RastPort/BitMap, it's a temporary BitMap which then gets blitted to the window later (via the RastPort).
Okay. Thought I'd mention it as there was an rp->BitMap term, and it's an easy mistake to make when changing functions.
Would be really useful to know if there was a way to set the "damaged area" of a rastport when blitting to its bitmap.
Clearly there is a way, as dedicated functions with the rport as target do so, but I've not seen any public code to do it (might well use private methods and structures, of course).
It doesn't work on old Radeons yet. Well, it does but I haven't released it.
@Chris
Looks correct from here. Have you tried disabling DestAlphaIgnore? Are you sure that the AlphaMask is given as a bitmap? I can't remember, and I don't have access to the autodocs at the moment.
I actually added the IgnoreDestAlpha flag to try and fix the problem.
SrcAlphaMask is a struct BitMap* according to the AutoDocs, so I think that's right.... ah, but the glyph itself is raw data, not a struct BitMap. That means I need to blit that - with alpha - to a BitMap before doing the Composite. That's the blit I'm doing anyway, so the Composite is actually an extra operation in this scenario.
So, I repeat my original question, is there any way to hardware-accelerate the blitting of ALPHATEMPLATE bitmaps (which are raw data returned from BulletAPI, not struct BitMap*)?
Quote:
That means I need to blit that - with alpha - to a BitMap before doing the Composite. That's the blit I'm doing anyway, so the Composite is actually an extra operation in this scenario.
Not really; blitting ARGB32 data into an 'empty bitmap' doesn't require any alpha blending, so whilst it may not be hardware accelerated, it's not necessarily slow either.
Quote:
So, I repeat my original question, is there any way to hardware-accelerate the blitting of ALPHATEMPLATE bitmaps (which are raw data returned from BulletAPI, not struct BitMap*)?
You need your data on the graphics card, the only way to get it onto the card is via a bitmap. So no. Not in the way you ask the question.
As far as I can see your fastest process would be:
Acquire data from BulletAPI
Blit to friend bitmap of screen (I suppose you could lock the bitmap and manually write the data if you think you can do a better job of optimising that transfer, unlikely IMHO). This stage might be cacheable if you are reusing glyphs a lot.
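Something along these lines for the second step. Just a sketch: it assumes you've already expanded the glyph to 32-bit ARGB in RAM, and the BLITA_/BLITT_ names should be checked against the BltBitMapTags() autodoc.

Code:
#include <exec/types.h>
#include <proto/graphics.h>
#include <graphics/gfx.h>
#include <graphics/blitattr.h>

/* Sketch: push glyph pixels to the graphics card by blitting a RAM buffer
 * into a friend BitMap of the screen.  glyphARGB is assumed to be the glyph
 * already expanded to 32-bit ARGB in RAM. */
struct BitMap *upload_glyph(struct BitMap *screenBM, uint32 *glyphARGB,
                            uint32 width, uint32 height)
{
    /* Friend bitmap, so it lands in the same format/memory as the screen */
    struct BitMap *bm = IGraphics->AllocBitMap(width, height, 32,
                                               BMF_DISPLAYABLE, screenBM);
    if (bm != NULL)
    {
        IGraphics->BltBitMapTags(
            BLITA_Source,         glyphARGB,
            BLITA_SrcType,        BLITT_ARGB32,
            BLITA_SrcBytesPerRow, width * 4,
            BLITA_Dest,           bm,
            BLITA_DestType,       BLITT_BITMAP,
            BLITA_Width,          width,
            BLITA_Height,         height,
            TAG_DONE);
    }
    return bm;  /* composite this over the window bitmap later, then cache or free it */
}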
It's ALPHA8 rather than ARGB32, but good point on the blending. Allocate bitmap, blit, composite, free bitmap seems a bit heavy on overhead, but I'll give it a try (and, as you say, I might be able to cache the glyphs, or re-use the bitmap, or something).
Caching the glyphs is definitely the way to go. Otherwise you'll be copying the glyphs from RAM to VRAM for every character. Also, if you cache multiple glyphs into one bitmap like OGLFT does, then you could render multiple characters with one CompositeTags() call using vertex arrays.** It would be great if we had a shared library that handled this, so that people wouldn't have to reimplement this from scratch.
Hans
** Once again, note that there is no software fallback for vertex arrays, so you'd need your own fallback for graphics cards that don't support this feature.
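To illustrate the vertex-array idea, here's a sketch for drawing a single glyph out of a cache bitmap as a quad (two triangles). The COMPTAG_VertexArray/VertexFormat/NumTriangles names and the per-vertex X, Y, S, T, W layout are from memory, and I'm not certain whether S/T are in source pixels or normalised, so check the autodoc before relying on it.

Code:
#include <proto/graphics.h>

/* Sketch: render one glyph from a glyph-cache bitmap as a textured quad
 * (two triangles) via the vertex-array form of CompositeTags().
 * Assumed vertex layout: X, Y, S, T, W per vertex (COMPVF_STW0_Present),
 * with S/T in source-bitmap pixels and W = 1.0 -- verify in the autodoc.
 * Remember there is no software fallback for this path (see footnote). */
void draw_cached_glyph(struct BitMap *cacheBM, struct BitMap *destBM,
                       float sx, float sy,   /* glyph position in the cache */
                       float dx, float dy,   /* where to draw it            */
                       float w,  float h)    /* glyph width and height      */
{
    float verts[] = {
        /*   X,      Y,      S,      T,     W  */
        dx,     dy,     sx,     sy,     1.0f,
        dx + w, dy,     sx + w, sy,     1.0f,
        dx,     dy + h, sx,     sy + h, 1.0f,

        dx + w, dy,     sx + w, sy,     1.0f,
        dx + w, dy + h, sx + w, sy + h, 1.0f,
        dx,     dy + h, sx,     sy + h, 1.0f
    };

    IGraphics->CompositeTags(COMPOSITE_Src_Over_Dest, cacheBM, destBM,
        COMPTAG_VertexArray,  verts,
        COMPTAG_VertexFormat, COMPVF_STW0_Present,
        COMPTAG_NumTriangles, 2,
        TAG_DONE);
}

For a whole string you'd append six vertices per glyph to the same array and raise COMPTAG_NumTriangles accordingly, so the whole run of text goes out in one call.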
What do you mean "transparent areas"? Are you talking about text rendering (a la Qt) or BananaBar?
In BananaBar, the transparency is controlled via a ClipRegion, which points to an ALPHA8 bitmap, which then designates the degree of transparency. The problem here is painting to the Alpha bitmap in a system compliant way.
In Qt clipping is done via Qt's own clipping system (which is really dumb and complicated, by the way), and for every drawing operation I need to manually specify COMPTAG_DestX, DestY, DestWidth and DestHeight for the call to CompositeTags.
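i.e. roughly this for each operation (only a sketch; clipX/clipY/clipW/clipH stand for whatever rectangle Qt's clipping hands me, and srcBM/dstBM are placeholders):

Code:
/* Sketch: restrict one compositing operation to Qt's current clip rectangle
 * by passing it as the destination rectangle. */
IGraphics->CompositeTags(COMPOSITE_Src_Over_Dest, srcBM, dstBM,
    COMPTAG_DestX,      clipX,
    COMPTAG_DestY,      clipY,
    COMPTAG_DestWidth,  clipW,
    COMPTAG_DestHeight, clipH,
    TAG_DONE);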
But then again, I don't really know what your problem is.
Which blitting function are you using? What do you mean by "not clipped"? For transparency effects, you need to somehow isolate the alpha channel and copy it to a separate ALPHA8 bitmap, which you then attach to a ClipRegion. And by the way: every time you update this alpha bitmap, you need to make a corresponding call to SetWindowAttrs() to reset the clip region (can't remember the name of the tag).
Also: which compositing operation are you using? Src_Over_Dest or just Src or something else? Maybe you should post a code snippet here to make it clearer what you are doing.