@Capehill Tested the latest version: independent start time & duration work fine, and the output in the info log is good too :) Thanks a bunch for such fast fixes/changes.
I did notice a few little nitpicks which we can skip, but they may be worth noting (or adding to BZ for 0.4 or some future version):
1) As STARTTIME and DURATION can only be used together with PROFILE, it may be worth checking the arguments when glSnoop starts: if PROFILE was not given but STARTTIME, DURATION, or both were, then either exit with a message like "sorry, starttime & duration can be used in PROFILE mode only", or continue with the same message plus "skipping them and continuing without".
That would be good feedback for the user.
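To illustrate, a minimal sketch of the kind of check I mean (the Args struct and validate_args() are hypothetical names for illustration, not glSnoop's actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical parsed-arguments struct; glSnoop's real parser differs. */
typedef struct {
    bool profile;
    bool starttime;
    bool duration;
} Args;

/* Warn and return false when STARTTIME/DURATION appear without PROFILE. */
bool validate_args(const Args *a)
{
    if (!a->profile && (a->starttime || a->duration)) {
        puts("sorry, starttime & duration can be used in PROFILE mode only");
        return false;
    }
    return true;
}
```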
2) It doesn't work in GUI mode, but that is probably expected? To make it work in the GUI, the same argument check would be needed (if PROFILE is used, and STARTTIME and/or DURATION are used, all together with GUI), and then the GUI could run with all its buttons disabled. Or, instead of buttons, display in the middle of the GUI window: "you can't control tracing/profiling when starttime/duration is used".
3) If you run glSnoop like "glsnoop PROFILE STARTTIME 10" (for example) and then immediately press Ctrl+C, then after the words "*** Control-C detected ***" we get "*** Delay timer triggered ***", even though it wasn't actually triggered (firstly we hadn't started anything yet, and secondly 10 seconds hadn't passed to trigger the delay timer). I think an if/else may be missing somewhere?
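The if/else I have in mind could look roughly like this (a sketch with made-up signal bit names; glSnoop's real Wait() handling on AmigaOS will differ):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical signal bits, mirroring AmigaOS-style Wait() masks. */
#define SIG_CTRL_C (1u << 0)
#define SIG_TIMER  (1u << 1)

/* Decide what to report after waking up: check the Ctrl-C bit first,
   and only report the timer when its signal actually arrived. */
const char *wakeup_message(unsigned signals)
{
    if (signals & SIG_CTRL_C) {
        return "*** Control-C detected ***";
    } else if (signals & SIG_TIMER) {
        return "*** Delay timer triggered ***";
    }
    return "(spurious wakeup)";
}
```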
None of this is really important, of course; I'm just trying to find anything that could be improved even a little bit :)
ps. And I have had not a single crash or issue so far when using glSnoop in all conditions. Pretty stable and helpful.
ps2. And I checked "tequila" too, of course; pretty good :)
2) Yes, GUI mode is not supported. I don't want to duplicate all that timer logic on the GUI side, and the GUI is pretty pointless if you are using start/duration params, in my opinion. If it can be implemented without code duplication, I might enable it in the future.
3) It's probably because the timer is not cancelled upon Ctrl-C. EDIT: the "delay timer" (or "patch cooldown timer") is triggered in all cases: IExec has been patched, and those patches must be removed "safely".
Btw, sorry for saying this, I am sure you know it all well, but I see you always change the build date manually, which takes your time for nothing. Instead, you can use "date" for both the cross-compiler and native builds. Here is an adapted glSnoop makefile:
ifneq ($(shell uname), AmigaOS)
CC = ppc-amigaos-gcc
DELETE = rm -f
STRIP = ppc-amigaos-strip
NEWDATE = $(shell date +"%-d.%-m.%Y")
else
CC = gcc
DELETE = delete
STRIP = strip
NEWDATE = $(shell date LFORMAT=%-d.%-m.%Y)
endif
See, there I add a different date call for native and cross-compiler builds so both produce the same output, and in CFLAGS I add a new __COMPDATE__ define which takes that date.
So in the code for both glSnoop and tequila you can then use __COMPDATE__ without needing to worry about setting the date manually anymore.
It only needs to be done once, you can forget about it for good, and the date will always be 100% accurate.
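For completeness, the C side of the idea could look like this (a sketch; the exact CFLAGS line shown in the comment and the build_date() helper are my assumptions, to be adapted to glSnoop's sources):

```c
/* In the makefile, pass NEWDATE into the code, e.g.:
 *   CFLAGS += -D__COMPDATE__=\"$(NEWDATE)\"
 */
#include <assert.h>
#include <stdio.h>

#ifndef __COMPDATE__
#define __COMPDATE__ "unknown" /* fallback when built without the flag */
#endif

/* Returns the compile date baked in by the makefile. */
const char *build_date(void)
{
    return __COMPDATE__;
}
```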
See, the first shader starts dumping fine, with a carriage return after each ";", then that disappears and after that everything runs together on one line. And the second shader is not dumped in full.
1) That's because the shader string doesn't contain '\n' characters. Apparently only the prefix part does.
2) It seems to be chopped after about 1 kilobyte, while the glSnoop log buffer is 16 kilobytes. Does it happen with or without Sashimi? At least the DebugPrintF() function is able to print longer buffers (> 1024 chars).
I think there is a bug in glSnoop in the case where strings aren't NUL-terminated and a "length" array was passed, because the lengths are ignored as far as I can see, but this is a separate issue.
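For reference, glShaderSource() defines a string as NUL-terminated only when the lengths array is NULL or the corresponding entry is negative; otherwise exactly that many bytes are part of the source. A sketch of honoring that (the source_length() helper is hypothetical, not glSnoop's code):

```c
#include <assert.h>
#include <string.h>

/* Length of source string i, following glShaderSource() semantics:
   NULL lengths array or a negative entry means NUL-terminated,
   otherwise exactly lengths[i] bytes belong to the source. */
size_t source_length(const char *str, const int *lengths, int i)
{
    if (lengths == NULL || lengths[i] < 0) {
        return strlen(str);
    }
    return (size_t)lengths[i];
}
```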
Quote:
1) That's because the shader string doesn't contain '\n' characters. Apparently only the prefix part does.
That is probably because the prefix part is added by gl4es at the top of the original shader, and the other part is then taken as-is without modification. So probably not something we need to worry about then (?).
Quote:
2) It seems to be chopped after about 1 kilobyte, while the glSnoop log buffer is 16 kilobytes. Does it happen with or without Sashimi? At least the DebugPrintF() function is able to print longer buffers (> 1024 chars).
It was a pure "dumpdebugbuffer" in the console (and I even tried redirecting it to a file, same result). When I just trace the shader functions there is no big need for Sashimi, as there isn't that much to dump, so no slowdowns.
And when I checked, the string wasn't exactly 1024 characters, but something like 1068 or so. Yeah, "almost" the 1024 limit, but a bit more.
I haven't been able to reproduce the issue. Maybe you can add a normal printf() somewhere around https://github.com/capehill/glsnoop/bl ... ter/ogles2_module.c#L2259 for comparison.
What do you think, is it worth creating a BZ entry about "put human-readable values in ogles2 tracing for glTexParameteri() instead of numeric values"?
For example, right now I need to trace some bug, to check why I get a distorted picture, so I need to know what I set via glTexParameteri(). Currently the output looks like this, for example:
So currently I had to refer to something like https://developer.android.com/reference/android/opengl/GLES20 to see the actual meaning of "pname" and "param". It would definitely be better if actual names appeared there instead of numbers. That would mean no extra copy+compare+rewrite-log work :)
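To illustrate the idea, a minimal sketch of such a mapping (a hypothetical helper; the hex values are the standard GLenum constants from the GL headers, and the table would of course need to be extended to cover all pname/param values):

```c
#include <assert.h>
#include <string.h>

/* Map a few glTexParameteri() enum values to their names. */
const char *gl_enum_name(unsigned value)
{
    switch (value) {
        /* pname */
        case 0x2800: return "GL_TEXTURE_MAG_FILTER";
        case 0x2801: return "GL_TEXTURE_MIN_FILTER";
        case 0x2802: return "GL_TEXTURE_WRAP_S";
        case 0x2803: return "GL_TEXTURE_WRAP_T";
        /* param */
        case 0x2600: return "GL_NEAREST";
        case 0x2601: return "GL_LINEAR";
        case 0x2901: return "GL_REPEAT";
        case 0x812F: return "GL_CLAMP_TO_EDGE";
        default:     return "UNKNOWN";
    }
}
```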
@Capehill Tried to profile a game whose loading takes about 40 seconds on the x5000, and only 5 seconds on some modern win32 box. During loading, all that happens is a bunch of textures being uploaded to GPU video memory (about 500-600 MB).
And when I run the game, both cpu_monitor and tequila show the binary itself taking 96% of the CPU load. So something heavy happens there. But then, looking at the profiling log, it looks like warp3dnova and ogles2 only take ~10% of the CPU load there, right? (At least that's what I see checking the last column.)
Or is the column I need to watch the one before last, with that "% of 6041.878015 ms"?
@Capehill It's the x5000 with NGFS, and in general I notice that slow loading everywhere a game needs to put lots of stuff into memory.
Besides, I'm surprised by the slow compile times. Remember I said before that compiling the Irrlicht engine takes about 20 minutes (or so) on the x5000, while just a few on win32. That's kind of strange...