@Georg Thanks for looking at it, but that's probably not Cadog's problem, as the same Cadog code built over MiniGL gives no problems (with the same SDL). It's something in gl4es, or ogles2, or warp3dnova, or sdl1, or the way I added it to sdl1.
Quote:
dprintf: do you have it set up to show up in a console window (Sashimi)? What if you make it go to a file or NIL:?
I have a real serial cable connected to a PC running PuTTY, so I log everything on the PC notebook without setting up any additional software on OS4. In our case dprintf is just a #define for IExec->DebugPrintF.
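Roughly like this (just a sketch; the exact form of the local macro may differ):

#include <proto/exec.h>
/* sketch of the local define, not necessarily the exact one we use */
#define dprintf(fmt, ...) IExec->DebugPrintF(fmt, ##__VA_ARGS__)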
It's not even only dprintfs; a pure printf can shift the issues too. It's like you add some function call somewhere (any function, not necessarily printf or dprintf) and the behaviour may or may not change. Some random crap..
Tried to port CUBE2 for our memory-trashing tests, and while the binary runs:
The game sadly fails to start because our shader support doesn't handle arrays :) The shaders created for that game are very small, but still all of them use arrays.
Have you tried to modify a simple example by adding some prints, or maybe context modifications, to see if it is possible to trigger the bad behaviour that way?
@Capehill Yep, but the problem is that on Daniel's setup it didn't reproduce most of the time (except quake3, and probably the Irrlicht engine example). For example, the shortened version of Cadog and the LettersFall game produce no errors and no visual glitches for him, which means that the memory trashing depends on how the whole binary is placed in memory, how the code and data sections of the binary are parsed and where in memory they are placed, how elf.library splits it all up in memory, etc. Then, later, something overwrites some areas somewhere (which may or may not be in the same place on different setups), and that causes all the fancy random problems in the end..
It may also be the code execution speed which changes by adding/changing code (especially with slow debug output over serial), so instead of printf/dprintf try adding a delay with a simple for() loop which does nothing:
/* global variable! */
int delaycount = 10000000;
int delaycounter;
This way you can try to reproduce problems/make them go away again by testing various values of delaycount. And the code is always the same, so there is no chance of shifting things around in memory. The only thing that changes is the contents of the delaycount variable.
That is, in case it is delay related rather than related to placement in memory.
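Something like this, just as a sketch (the exact loop form is an assumption, only the declarations above are given):

/* at the place where you would otherwise put a printf/dprintf: */
for (delaycounter = 0; delaycounter < delaycount; delaycounter++)
{
    /* intentionally empty; declare delaycounter volatile if the
       optimizer removes the loop */
}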
Tried to build the NeverBall/NeverPutt games today: some random funny shit again :)
First, I built them with gl4es compiled with -O0; the games run, but then crash randomly in glDrawArrays.
That is with "stack 2000000" set before running the games.
Then I rebuilt gl4es with -O2 as it was before, and now the game doesn't run anymore and crashes on startup every time, in glDrawElements. Here is the crashlog in case it is of any help at all:
Both neverball and neverputt crash the same way now.
But then I tried to run those -O0 versions: they crash the same way! Even after powering off the computer for a while!
Then I tried to play with the stack, and found that if I set "stack 2000000" before running "neverputt_O0", it crashes on start. But if I set "stack 10000", it runs, and often crashes on exit in glDrawElements again.
Dunno if that points to any stack trashing for us or not..
I uploaded the whole archive to "Report_here/neverball.lha".
There are 4 binaries:
neverball (built with -O2 gl4es)
neverball_O0 (built with -O0 gl4es)
neverputt (built with -O2 gl4es)
neverputt_O0 (built with -O0 gl4es)
Just unpack it as is, and before running set some stack value to play with.
I also tested playing with the stack in that cut-down "cadog" version you have no issues with: it makes no difference in that case.
Quote:
Thanks for looking at it, but that's probably not Cadog's problem, as the same Cadog code built over MiniGL gives no problems (with the same SDL).
Testing with SDL_FreeSurface() commented out would not be about a problem in Cadog's code but would be a workaround for a theoretical problem in the outside driver/gl/wrapper/whatever code, in a theoretical scenario like this (see the sketch after the list):
- the SDL surface pixels happen to be in VRAM when Cadog locks the surface and calls TexImage2D
- the driver/gl/warp3d (whatever) happens to have an accelerated function to move raw pixel data (no bitmap handle) from one place in VRAM to another place in VRAM, maybe code similar to that used for a function like WritePixelArray() where the source is just a raw pixel pointer.
- in the driver/gl/warp3d the TexImage2D implementation uses some movetoVRAM() function which looks at the source address, sees that it happens to be in VRAM, and mistakenly decides to use the accelerated but asynchronous helper function instead of the normal CPU-based CPU-to-VRAM transfer.
- the source texture data passed to TexImage2D() is reused too early by other things, because there's nothing which waits for the async transfer of data into the texture to complete.
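A rough sketch of the kind of upload path and test meant here (the surface handling and file name are hypothetical, not the actual Cadog code):

#include <SDL/SDL.h>
#include <GL/gl.h>

static void upload_background_texture(void)
{
    /* hypothetical Cadog-style texture upload, only to illustrate the test */
    SDL_Surface *surf = SDL_LoadBMP("gfx/background.bmp");  /* pixels could end up in VRAM */

    SDL_LockSurface(surf);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surf->w, surf->h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, surf->pixels);
    SDL_UnlockSurface(surf);

    /* SDL_FreeSurface(surf); */  /* left out for the test: if the driver started an
                                     asynchronous VRAM-to-VRAM copy of surf->pixels,
                                     freeing/reusing the surface too early would feed
                                     the texture with trashed data */
}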
@Georg Tried commenting out SDL_FreeSurface() in that cadog test case and built versions with and without debugprintf(): no difference (i.e. without printfs it shows the background, with a debug printf before context creation it does not).
Then I tried the "delay" way instead of printfs: tried everything: 1, 10, 100, 1000, 10000, 100000, 1000000, 10000000, 50000000, 999999999: in all cases I see no difference. But once I put in IExec->DebugPrintF("a"); the background disappears. Though a pure "printf" makes no change now.
I also dumped all the info via readelf (sections, headers, etc.) just in case it may lead to any idea (16 MB text file):
@Daniel I rebuilt gl4es with debug enabled for fpe.c and ran neverball over it: the same crashes for me as before of course (it's not very random for me now), but for the sake of more info I noted that crashlog too:
I checked the index array at the time of the crash. It contains, let's say, about 80% garbage (tons of 0,0,X "triangles", lots of 0xFFFF indices; it looks like semi-randomly trashed memory), while the last maybe 20% looks like somewhat valid indices inside the expected range.
Btw.: there are two glDrawElements calls before the one that crashes; those look absolutely sane (reasonable indices etc.) and they work flawlessly.
Until the point of the crash the whole lib seems to work correctly, with no sign of any lib or other corruption (and quite a lot happens under the hood until then), but then it's fed with this invalid index array and says goodbye.
Of course I cannot say who *really* corrupts it in the first place (its corruption can also be a side-effect of something else). But I can say that it is already corrupt *before* ogles2 does any work inside glDrawElements; it is being sent to ogles2 in a corrupted state.
Maybe some endianness problem. Has gl4es ever been tested on Linux PowerPC?
Endianness problems sometimes appear in unexpected places. In AmigaOS/Exec/MakeLibrary(), for example, there's the table of functions which can contain function pointers (4-byte entries) or, if the first WORD in the table == 0xFFFF, offsets instead (2-byte entries). The check "if (*(WORD *)funcInit == -1)" does not work on little endian (AROS), which was discovered more or less by "luck" when by pure coincidence the first function pointer happened to be an address ending in 0xFFFF (0x????FFFF). And so the *(WORD *) read saw 0xFFFF there and assumed the table contained offsets instead of absolute addresses (a function on x86 may start at an odd address).
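Condensed, the problematic pattern looks roughly like this (not the actual AROS source, just to show why it misfires on little endian):

#include <exec/types.h>   /* WORD */

/* funcInit points either to a table of absolute 4-byte function pointers or,
   if its first WORD is 0xFFFF (-1), to a table of 2-byte offsets. */
static int table_contains_offsets(const void *funcInit)
{
    /* On big endian this reads the HIGH 16 bits of the first function pointer,
       which is not 0xFFFF for any sane code address. On little endian it reads
       the LOW 16 bits, so a pointer like 0x0804FFFF gets misdetected as the
       "offsets" marker even though the table holds absolute addresses. */
    return (*(const WORD *)funcInit == -1);
}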
How does the context creation code look today? I checked your repo and it hasn't changed for a while.
It's still kind of the same; I didn't change it in the repo as it's all about local tests at the moment.. If you are interested in adding it to sdl1, I can make all the latest changes, and that will at least help avoid redoing the same thing every time a new version comes out :)
Quote:
I haven't checked the readelf dump, but do you see any difference if you dump the "OK" and "NOK" binaries?
Haven't compared them at the moment, but I will.
@Daniel
Quote:
Of course I cannot say who *really* corrupts it in the first place (its corruption can also be a side-effect of something else). But I can say that it is already corrupt *before* ogles2 does any work inside glDrawElements; it is being sent to ogles2 in a corrupted state.
Damn, that's bad! But as it crashes for you too, at least it's something reproducible. For the sake of testing, and just to be sure the game code is fine, I recompiled the same source code over MiniGL: all works fine, as expected.
@Georg Quote:
Maybe some endianness problem. Has gl4es ever been tested on Linux PowerPC?
As far as I know, no. At least when we started, we fixed some big-endian issues in gl4es, but they were mostly just about colors. Wrong alpha, wrong texture colors, i.e. the usual crap.
Probably I need to install Linux on my x5k and test gl4es on it. Though the problem is that it's unknown whether any Linux on the x5k works well enough yet to have OpenGL ES, SDL, all the developer tools, etc.
Two weeks ago I tried to set up virtual Linux PPC in QEMU, but it's slow like hell from hell even on a fast enough PC. Just impossible to work with.
@Georg I checked all the gl4es sources for just "unsigned", and found that it's mostly "unsigned int", "unsigned char", "unsigned short". No "unsigned word" found, but a few "unsigned long"s.
Probably if anything, it's list.c / list.h. At least in list.c I can see "unsigned long cap", and later that "cap" is used. And in list.h there are some unsigned longs in the structures too.
In streaming.c (which contains helper functions for streaming textures, which I dunno if are used at all in our cases), there are also some:
That gles_glDrawElements(mode, count, type, indices); at the end is the actual call into the GLES2 driver. So at this stage it goes directly to amiglDrawElements, which calls OGLES2->glDrawElements.
And no printfs happen before the crash. It didn't find any 0xffff before sending.
I even added a 0xffff check in amiglDrawElements in amigaos.c:
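Roughly like this (just a sketch of the idea, not the exact code I used):

/* sketch of the check added in amiglDrawElements() before the call into ogles2 */
if (type == GL_UNSIGNED_SHORT && indices)
{
    const GLushort *idx = (const GLushort *)indices;
    GLsizei i;
    for (i = 0; i < count; i++)
    {
        if (idx[i] == 0xFFFF)
            IExec->DebugPrintF("0xffff index at position %ld\n", (long)i);
    }
}
/* ...then the normal call into the OGLES2 driver follows */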
So that means: no 0xffff comes from gl4es, and ogles2 already receives it with 0xffff. That means the memory trashing happens after we make the actual call to the ogles2 driver, and before the ogles2 driver does whatever it does once it starts handling the call.
Of course, that's assuming there is always some trashed 0xffff present.
The gl4es author says it would be interesting to have the same kind of test inside OGLES2, and also to see what the addresses of those 0xffff values are (to compare with the beginning of the indices array).
I replaced that code part as well, and also tried stack 200000000: still the same result.
But I found something interesting. If I run the neverputt binary with the default stack (65 528), it crashes. If I set anything larger, it also crashes. But if I set less than 65 528 (let's say 65 000, or anything less), it runs. Though it's also kind of random. One time it was just 65 528 that crashed and 65 527 did not. Trying to reproduce that after a reboot: can't. This time both 65 528 and 65 527 crash, but 65 200 doesn't.
At least in that example with neverputt, setting the stack makes a difference somehow. But with neverball it makes no difference (same with cadog and the others).
@kas1e It doesn't *have* to be 0xFFFF on your side; that is (among many other nonsense values) just part of the invalid (random) indices on *my* side, and your array may contain other random stuff. As I said, the last ~20% of the array looks plausible here, so at first glance it looks as if I'd get something like &indices[-4000]. Then again, those valid-looking indices are rather high, so it actually looks more as if the 80% before that data was overwritten with crap.
Anyway, please do the following: upload a version to my FTP which spits out the count / pointer etc. at your glDrawElements like you did above, and then simply also print out, let's say, the first 100 ushort indices, comma separated.
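Something along these lines would do (a sketch, assuming the indices are ushort as in the crashing call):

/* sketch: dump count / pointer and the first 100 ushort indices, comma separated */
const GLushort *idx = (const GLushort *)indices;
int i;
IExec->DebugPrintF("glDrawElements: count=%ld indices=0x%08lx\n",
                   (long)count, (unsigned long)indices);
for (i = 0; i < 100 && i < count; i++)
    IExec->DebugPrintF("%ld%s", (long)idx[i],
                       (i + 1 < 100 && i + 1 < count) ? "," : "\n");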