I can't even understand what kind of bug it could be.
Apart from memory trashes it could be something like the compiler generating wrong code. Sometimes it may not even be 100% the compiler's fault, like with the strict-aliasing stuff, where it could be more of a fault in the sources. Ever tried compiling with -O0 and/or things like -fno-strict-aliasing? Maybe also have another look at the disassembled function which calls aglCreateContextTags() (the real function, not the dummy one, because it may be that correct code is generated for the dummy one but not for the real one, which does more things, incl. the dprintf etc.).
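For illustration only (not taken from the cadog/gl4es sources), the classic kind of source-level fault that -fno-strict-aliasing papers over is type punning through a pointer cast; the helper names below are made up just to show the pattern:

/* Illustrative sketch, not from the actual sources: reading a float through
 * an unsigned int pointer violates C's strict-aliasing rules, so at -O2 the
 * compiler may legally reorder or drop accesses, while -O0 or
 * -fno-strict-aliasing makes it behave as naively expected. */
#include <stdio.h>
#include <string.h>

static unsigned int float_bits_bad(float f)
{
    return *(unsigned int *)&f;      /* undefined behaviour: aliasing violation */
}

static unsigned int float_bits_ok(float f)
{
    unsigned int u;
    memcpy(&u, &f, sizeof u);        /* well-defined way to reinterpret the bits */
    return u;
}

int main(void)
{
    printf("%08x %08x\n", float_bits_bad(1.0f), float_bits_ok(1.0f));
    return 0;
}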
Or it could be inside OGLES2's aglCreateContextTags(), which must extract/cast/convert the "..." correctly into a struct TagItem * before passing it on to aglCreateContext(). On the various OSes/CPUs and versions of them and their compilers there are different methods for that (&lastparam + 1, va_start() & co., va_startlinear() & co.).
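To illustrate what that extraction amounts to, here is a portable sketch only (not the real OGLES2 code): the names myCreateContextTags/myCreateContext are placeholders, MAX_TAGS is an arbitrary limit, and control tags like TAG_MORE/TAG_SKIP are not handled.

/* Sketch: turn the "..." of a varargs tag call into a flat struct TagItem
 * array before handing it to the array-taking variant. */
#include <stdarg.h>
#include <exec/types.h>
#include <utility/tagitem.h>

#define MAX_TAGS 32

extern void *myCreateContext(struct TagItem *tags);   /* placeholder array-taking variant */

void *myCreateContextTags(Tag firstTag, ...)
{
    struct TagItem tags[MAX_TAGS];
    va_list ap;
    int i = 0;
    Tag tag = firstTag;

    va_start(ap, firstTag);
    while (tag != TAG_DONE && i < MAX_TAGS - 1) {
        tags[i].ti_Tag  = tag;
        tags[i].ti_Data = va_arg(ap, ULONG);   /* the data word follows each tag ID */
        i++;
        tag = va_arg(ap, Tag);                 /* next tag ID */
    }
    va_end(ap);
    tags[i].ti_Tag = TAG_DONE;                 /* terminate the array */

    return myCreateContext(tags);              /* pass the flat array on */
}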
Please upload your minimal test examples (like a stripped-down Cadog) to my FTP. The less complex a test program is, the better.
Uploaded reports_here/cadog_test.lha. Inside are 2 binaries: one with debug printfs before aglCreateContextTags(), and another without debug printfs before aglCreateContextTags().
After running it you can navigate to "exit" and exit from there; all the other stuff has been removed (most of it).
Run the bins like this: "bin_name -w - s 640x480". The one without debug printfs before aglCreateContextTags() should show the background picture; the other one, with debug printfs before it, shows a white background.
Though when you run the "no debug printf" one for the first time, it may also not show the background, but if you run it a second time, it will (which all points to those random issues).
Quote:
Also note that the aforementioned fixed ogles2.lib 1.20 WIP version (also containing some other new features) is on the FTP too.
Yeah, will test it all now with all the games.
@Georg Quote:
Or it could be inside OGLES2's aglCreateContextTags(), which must extract/cast/convert the "..." correctly into a struct TagItem * before passing it on to aglCreateContext(). On the various OSes/CPUs and versions of them and their compilers there are different methods for that (&lastparam + 1, va_start() & co., va_startlinear() & co.).
But then pure aglCreateContext() should work fine. And while it does with Cadog, the memory trashing in the LettersFall game is still here.
But I will recheck with Daniel's latest ogles2.library with all those new createcontext functions, to see how it behaves now.
Quote:
But I will recheck with Daniel's latest ogles2.library with all those new createcontext functions, to see how it behaves now.
It won't behave differently (well, if it does, then it's just a side effect of something else), and it is certainly not the cause of any of our issues here. As said: while Georg was of course absolutely right in pointing out the tags-wrong-IDs bug in ogles2, there is nothing more there. And unless you used TAG_SKIP or other special tags (which AFAIK you did not), it simply did what you expected it to do. Other than this now-fixed incompatibility with standard tags processing, there is / was nothing wrong here. Don't forget to read the changelog.
@kas1e It's simply not the culprit; just like every 2nd adjustment here and there, it may simply change the symptoms somewhat. That's the way it is when there's some undefined stuff happening somewhere.
Quote:
Should I also upload a new cadog archive with aglCreateContextTags2, or is it enough to have the first version?
Thanks, no need to, the first version is enough. As said: whether you use the old aglCreateContextTags with the non-conforming tag IDs or aglCreateContextTags2 with the new ones makes no difference for what's happening under the hood, and it's not the source of the problems.
EDIT: both your builds (like the ones before) produce correct-looking results here, no menu distortion at all.
@kas1e That's what I meant: no background or whatsoever distortions at all here with cadog (only checked the sam460 so far, will check the x5000 later too). Yes, it doesn't come as a surprise to me either. Go on, upload what you've got, the more the merrier.
Just one thing: with -O0 instead of -O2 I get these errors at the linking stage:
Quote:
libgl4es.a(list.o): In function `rlVertex4f': list.c:(.text+0xe838): undefined reference to `rlVertexCommon'
libgl4es.a(list.o): In function `rlVertex3fv': list.c:(.text+0xe938): undefined reference to `rlVertexCommon'
libgl4es.a(list.o): In function `rlVertex4fv': list.c:(.text+0xea14): undefined reference to `rlVertexCommon'
libgl4es.a(list.o): In function `rlNormal3f': list.c:(.text+0xf780): undefined reference to `rlNormalCommon'
libgl4es.a(list.o): In function `rlNormal3fv': list.c:(.text+0xf800): undefined reference to `rlNormalCommon'
libgl4es.a(list.o): In function `rlColor4f': list.c:(.text+0xf860): undefined reference to `rlColorCommon'
libgl4es.a(list.o): In function `rlColor4fv': list.c:(.text+0xf8f4): undefined reference to `rlColorCommon'
collect2: ld returned 1 exit status
Which, firstly, is kind of a surprise, because if an optimisation switch can cause such errors, then...
And secondly, those words like "vertexcommon" and "normalcommon" make me think of the workaround we added for Q3 before, which we thought was about "normalisation".
In other words, can that shit be the culprit? I will of course ask the gl4es guy about it as well, but the more we know about it, the better.
EDIT: it also builds fine with -O1; only -O0 gives those undefs.
Just like on my sam460ex, I have exactly zero issues with any cadog build version. No distortions at all. The only true difference between our two systems is the RadeonHD.chip driver version (well, and the concrete gfx card), IIRC. Mine is 1.17. Maybe we should try what happens if we temporarily swap yours for mine.
EDIT: For whatever reason the compiler apparently gets confused by the "inline" of rlVertexCommon. Looks like with -O0 it doesn't inline it but creates a function call to it inside rlVertex4f etc.; however, it doesn't emit that function, which makes the linking fail. Maybe you can make it link by adding a static in front of the inline, or by temporarily removing the inline hint. What compiler (version) are you using?
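For illustration, a stripped-down sketch of the symptom and the suggested fix; this is a hypothetical file, not the actual gl4es list.c:

/* The failing pattern was roughly:
 *
 *     inline void rlVertexCommon(float x, float y, float z) { ... }
 *
 * With C99/gnu99 inline semantics such a plain "inline" definition emits no
 * out-of-line body, and at -O0 GCC does not inline the calls either, so
 * rlVertex4f() etc. end up with "undefined reference to rlVertexCommon".
 * "static inline" (or simply dropping the inline hint) always gives the
 * translation unit its own callable copy: */
static inline void rlVertexCommon(float x, float y, float z)
{
    (void)x; (void)y; (void)z;   /* ...common vertex bookkeeping would go here... */
}

void rlVertex3f(float x, float y, float z)
{
    rlVertexCommon(x, y, z);     /* links at any optimisation level */
}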
The issue with -O0 and the undefs at linking was just a glitch of our GCC version: with -O0 it drops the inline function's body but somehow still references it at link time. I removed those "inline"s in front, and everything builds fine.
Now I tested LettersFall with gl4es compiled with -O0 and -fno-strict-aliasing: the same damn issues.
@Daniel With 1.17 the issues in LettersFall changed a bit (they're less visible), but they're still here. Try going to the "options" and playing with them (up/down many times, pressing on gadgets, etc., etc.). At least then I see the distortion.
EDIT: about your edit about the compiler issue: right, it was the inline. But sadly -O0 and -fno-strict-aliasing don't change shit :) I use the 4.4.3 one.
@kas1e Oh, I *do* have a bug; I only have to start that Q3 build. The thing is simply that, by "luck", Cadog shows no symptoms here. Btw.: please also upload the Q3 hack version which shows no symptoms on your side.
@Daniel Yep, it's just that with the stripped version of Cadog it can be easier than with Q3. Will upload the hack-"fixed" Q3 in a few hours, as well as a small example from the Irrlicht engine with and without the hack (it behaves like Q3 on my side).
ioquake3_hacked.lha: 2 binaries inside, built with the same (latest) version of gl4es, just one built as is and the other one with our hack.
irrlicht_example.lha: that one is one of the demo examples which come with their engine. The same 2 binaries, hacked and not hacked. But run them as they are: they are placed in bin/amigaos4/, and they will ask for the "media" directory in the root. I.e. just unpack everything as it is and run them from where they are.
Just in case we have differences between our setups, this is how that example looks without the hack on my setup:
And this is how it looks with the hack (and how it should be):
All the bins are compiled against libgl4es built with -O0 and -fno-strict-aliasing, just in case, as maybe optimisation would make it harder to play with, but probably not worse anyway :)
Seriously, don't do that! Always use the utility.library tag API to handle tags. It's so lightweight and easy to use, there is no excuse for writing your own code.
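For instance (a hedged sketch, not code from ogles2 or cadog): letting GetTagData() walk the list handles defaults and control tags like TAG_MORE/TAG_SKIP for free. The tag IDs and the function name below are made up for illustration; on OS4 the calls go through the IUtility interface (IUtility->GetTagData(...)) unless the inline stubs are enabled.

#include <exec/types.h>
#include <utility/tagitem.h>
#include <proto/utility.h>

#define MYTAG_Width  (TAG_USER + 1)   /* hypothetical tag IDs */
#define MYTAG_Height (TAG_USER + 2)

static void parse_my_tags(struct TagItem *taglist)
{
    /* ti_Data of the first matching tag, or the default if it's absent */
    ULONG width  = GetTagData(MYTAG_Width,  640, taglist);
    ULONG height = GetTagData(MYTAG_Height, 480, taglist);

    (void)width; (void)height;        /* ...use them... */
}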
@broadblues I think this little (in this case even uncritical) issue has already been covered in great depth when I said "Nice great stupidity by me indeed" and "I just fixed it for the next lib release."
@kas1e Performance is a disaster, it has extreme hiccups, but well, -O0 and your hack's temporary array alloc/free may easily be the explanation for this. Q3 looks good that way here too (those other issues like the wireframe effect and the depth-buffer sky issues aside), so at least here we seem to have more or less matching results.
I looked around a bit in the cadog source. What happens if, in sdlgl.c, you comment out all calls to SDL_FreeSurface()?
I don't really know GL, but what I see in there is code like this:
if (SDL_MUSTLOCK(img)) SDL_LockSurface(img);
glTexImage2D(..., img->pixels);
if (SDL_MUSTLOCK(img)) SDL_UnlockSurface(img);
And my thinking is this: if glTexImage2D() is hw-accelerated/asynchronous (no idea if it is), then after unlocking, the glTexImage2D() could still be in progress when the img (== its BitMap) is being killed/freed from memory. So img->pixels is valid when glTexImage2D() starts, but it may no longer be valid by the time it completes.
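One hedged way to test that theory (a sketch, not the actual cadog code; the function name and the GL_RGBA format parameters are assumptions): force the upload to complete before the surface backing img->pixels goes away. Per the GL spec glTexImage2D() has already copied the client pixel data when it returns, so this should only make a difference if a driver bug effectively makes the copy asynchronous.

#include <SDL/SDL.h>
#include <GL/gl.h>

static void upload_texture(SDL_Surface *img)
{
    if (SDL_MUSTLOCK(img)) SDL_LockSurface(img);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 img->w, img->h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, img->pixels);

    if (SDL_MUSTLOCK(img)) SDL_UnlockSurface(img);

    glFinish();              /* block until all pending GL work is done... */
    SDL_FreeSurface(img);    /* ...only then let the pixel memory go away */
}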