@Raziel I myself have never really used GLEW, but:
GLES (OpenGL ES) is the same as OGL (OpenGL), just with some stuff removed. We have OpenGL ES because it was easier to implement than full-blown OpenGL. So they are on the same level and can be compared.
Now, GLEW is a completely different thing: it's an add-on which allows you to load extensions (extensions are just additional functions which are not present in core OpenGL or OpenGL ES) in an easy and cross-platform way (at least that's how the authors describe it). You can probably use it both with full OpenGL and with OpenGL ES (not sure, but probably).
In general the whole "extension" thing is mostly a problem of other OSes with many different drivers, versions, hardware, etc. For example, they have OpenGL 1.0. It has a fixed set of functions which is guaranteed to work everywhere. Then some driver or some other hardware can add extra functionality, but you can't put new functions into a "standard" which is already set, so you have "extensions": you check for them in the code, and if one is present you load it and use it; if your driver/hardware doesn't have it, the app just doesn't use it, and so there is no crash from calling a function which is not present in the chosen driver.
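To picture the pattern, here is a minimal sketch (the extension and function names are just examples; SDL_GL_GetProcAddress stands in for the platform loaders like wglGetProcAddress / glXGetProcAddress):

#include <string.h>
#include <SDL2/SDL.h>
#include <SDL2/SDL_opengl.h>

typedef void (APIENTRY *ACTIVETEXTUREPROC)(GLenum texture);
static ACTIVETEXTUREPROC pglActiveTexture = NULL;

void load_multitexture_if_present(void)
{
    /* check the extension string the driver reports at run-time */
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (ext && strstr(ext, "GL_ARB_multitexture")) {
        pglActiveTexture = (ACTIVETEXTUREPROC)SDL_GL_GetProcAddress("glActiveTextureARB");
    }
    /* if pglActiveTexture stays NULL, the app just skips that code path
       instead of crashing on a function the driver doesn't have */
}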
For us on AmigaOS4 it makes no sense to have extensions as extensions, as we have just one ogles2 driver with a stable set of functions. Lately, for the sake of easy porting, Daniel added "aglGetProcAddress" to ogles2.library, so code from other platforms like win32/unixes can be compiled "as is" (i.e. we call the extension-loading code, while under the hood it could just be a direct call to the function).
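So the same loader code can stay as-is on OS4; a tiny sketch (I'm assuming the usual GetProcAddress-style prototype here, check the ogles2 SDK for the exact one, and "glSomeFunc" is just a placeholder name):

/* assumed prototype: void *aglGetProcAddress(const char *name); */
void (*pglSomeFunc)(GLuint) = (void (*)(GLuint))aglGetProcAddress("glSomeFunc");
if (pglSomeFunc)
    pglSomeFunc(handle);   /* behaves like a direct ogles2.library call */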
So, GLEW is just another (and for some people easier) way to load those extensions. You can skip porting it entirely: just comment it out and call directly those functions which GLEW would have loaded as extensions.
I remember that back in the past it was a problem to make a GLEW port on top of MiniGL, probably because MiniGL lacked too many of the extensions GLEW expects, so there wasn't much sense in it.
With GL4ES, which gives us a better version of OpenGL than MiniGL, using GLEW can probably work. If I remember right, GLEW is just one header file plus some little source file, but I can be wrong.
Quote: "The OpenGL Extension Wrangler Library (GLEW) is a cross-platform open-source C/C++ extension loading library. GLEW provides efficient run-time mechanisms for determining which OpenGL extensions are supported on the target platform. OpenGL core and extension functionality is exposed in a single header file." - http://glew.sourceforge.net/
@Daniel I wrote you a mail 2 days ago about a problem we have with SDL2 fullscreen/window mode switching, but as you didn't answer, maybe it just landed in your spam box and you didn't see it, so I will try here:
SDL 2.0 can toggle windowed/fullscreen and back on OpenGL windows without losing the GL context (hooray!) via the SDL_SetWindowFullscreen() function. That means the context is not destroyed/recreated, just the window.
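A minimal sketch of the toggle (win is the SDL_Window the GL context was created on):

/* toggle between fullscreen and windowed without touching the context */
static int fullscreen = 0;
fullscreen = !fullscreen;
SDL_SetWindowFullscreen(win, fullscreen ? SDL_WINDOW_FULLSCREEN : 0);
/* the SDL_GLContext and all GL objects are supposed to survive this */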
With MiniGL all works fine, as MiniGL's context doesn't use the window pointer, but with OGLES2, once we try to switch between window/fullscreen, we crash in ogles2's ContextSwapBuffers(): when the window is destroyed, the OGLES2 context seems to be left with a dangling window pointer.
The question is: does OGLES2 have any way to update the context's window pointer? Then we can add the necessary code to the SDL2 / OGLES2 side to make it work. And if it doesn't, how hard / how possible would it be to implement?
It's a while since I did it, but IIRC porting GLEW was tedious rather than difficult; there's a MiniGL-based port in the src tree of Blender. I vaguely remember a lot of include editing and cross-referencing...
In the Blender 2.48/9 port it allows it (amongst other things) to know when shaders are available in the game engine and GUI (i.e. that they aren't) without any OS4-specific code in the main program.
You would need a separate port of GLEW for each GL implementation.
@kas1e Man, you sent me a mail on Friday at 6 pm. Not even my very best long-time customer would expect an answer over the weekend... Anyway, from what it sounds like, I'm assuming that you don't use ogles2 correctly. There are two ways to do a fullscreen <-> window toggle (which usually means a window change):
1. Destroy the old context and create a new context with the new window setup. This implies recreating all gles2 objects of course (no matter what SDL2 thinks it can preserve, it has to recreate the stuff if going that route).
2. Use the convenience helper function aglSetParamTags(OGLES2_CCT_WINDOW,Window*,TAG_DONE); (it was added ~ a year ago, see changelog 2018-02-18). SDL2 must use that one if it wants to preserve the context, no way around that.
So, if you don't use (2) correctly, then it won't work. (2) is actually exactly what you asked for in the last line above.
EDIT: actually the function's name is aglSetParams2 / aglSetParamsTags2. The others are deprecated and should be avoided!
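In SDL2 terms that presumably boils down to something like this sketch (only the tag call itself comes from the post/changelog above; the window re-creation around it is pseudocode, and I'm assuming aglSetParamsTags2 is the varargs variant mirroring the deprecated call shown above):

struct Window *newWin = /* ... re-open the intuition window for the new mode ... */;
aglSetParamsTags2(OGLES2_CCT_WINDOW, newWin, TAG_DONE);
/* after this the existing GLES2 context should render into the new window,
   so no gles2 objects need to be recreated */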
Some of you may read Daniel's Facebook, where he posts details about the new version of ogles2.library in which he optimized quite a lot of stuff, which in the end results in a pretty big performance boost in some places.
Then he optimized it even more, and we got some more performance gain again.
I will post here results with the current public 1.22 version and the latest beta, so you can see the differences. All done on my x5000 / r7-250. I will also put up MiniGL results where MiniGL builds are possible. So, first let's see the ones which have MiniGL versions:
Feel the difference! It is very visible, and there is no mistake: 61 fps vs 378 fps.
Foobillard++ SDL2 (soon to be released)
minigl 2.21: 18.2 max
ogles2 1.22: 54.5 max
ogles2 beta: 61.5 max
Those are the maximum possible fps; sometimes it drops quite a bit, depending on the situation. The game itself is also quite CPU-demanding; the original readme says "with all details at low, you need minimum 1.6GHz cpu", so for my 2.2GHz x5000 with the latest ogles2 beta I can say it's well playable on middle settings (on which I did those tests). The MiniGL version is fully unplayable and slow like a slideshow: 18 fps is the maximum possible, sometimes it's even 5 fps.
So in Quake 3 the gain is not that much, just 2 more fps. But that is with patched endian-conversion code inside ogles2.library, and when the Warp3D Nova conversion code is fixed (the optimisation didn't work for that code for some reason), the patch code in ogles2 can be removed, and that will give (we can hope) a few more fps.
Now the ones which have no MiniGL versions:
Fricking Shark:
ogles2 1.22: at some check point: 83 fps
ogles2 beta: at some check point: 111 fps
+28 fps!
Prototype:
ogles2 1.22:
-- first text scroll: 531
-- menu: 306
-- at start of level1: 243 max
ogles2 beta:
-- first text scroll: 575
-- menu: 330
-- at start of level1: 319 max
That one has a huge gain too!
Barony:
That one shows no difference. In some previous beta version of ogles2 it even had a slight decrease in fps, but in the very latest beta it's now the same as with 1.22. That's no surprise, as the game itself is more CPU-demanding than GPU-demanding and mostly wants raw CPU power.
In the end, as you can see, it depends: some things get much faster, others not that much. On the whole, everything done over gl4es/ogles2/warp3dnova is always faster than over MiniGL, sometimes even by quite a lot. Even that damn Quake 3, with its old-school way of doing things, is on the same level as MiniGL (it just loses 3-4 fps, while giving better quality as a result: it has no rendering bugs in a few places where the MiniGL version has them).
Probably, on the ogles2.library side everything is already optimized quite well (we know Daniel's skills from all the games and things he does in terms of optimisation). Surely gl4es itself can be optimized more. Probably warp3dnova can be optimized a bit as well (and the shader issues fixed :) ), but on the whole, if you take into account that the chain is gl4es -> ogles2 -> warp3dnova -> graphics.library -> kernel, then it's quite cool :)
So, thanks to Daniel for the ogles2.library work, to Hans for warp3dnova, and to ptitSeb for gl4es!
That's quite interesting. Anyway, is there any case (a software port that can be compiled for both) that doesn't use this GL4ES wrapper at all, so we could see a "real" comparison in terms of performance between MiniGL and OpenGL ES?
@samo79 The SDL2 renderer benchmark which we checked in the SDL2 thread a year ago, running on all the latest stuff, shows a 2-3x speedup for ogles2 over OpenGL everywhere, and the fillrects() test even shows an 8x speedup (the MiniGL one 430, the ogles2 one 3700).
Except for ReadPixels(), which is on the same level, but as we know that's a limitation of the x5k not having DMA in graphics.library.
ptitSeb keeps repeating it's not gl4es, Daniel says it's probably also not ogles2, and Hans says that as there is no Warp3D to be seen in the crashlog, it's not Warp3D :)
So I will try to debug it myself until I understand where the issue is.
Just to understand more: you said before that "judging from the crash log it's likely that it's trying to use a non-existent vertex attribute array. It's loading data from address 0xFFFFFFFF (i.e., the end of addressable memory)."
So yeah, GPR #8 (r8) contains "FFFFFFFF", and the disassembly shows that it crashes on lbzu r6,1(r8), so it tried to read from the end of addressable memory.
But where that 0xFFFFFFFF comes from is another question then.
Is there anything else that can be read from the crashlog which could point us at something?
While glRasterPos2d() is the trigger, it looks like some previous render call is setting up rendering incorrectly. I'm guessing that bitmap_flush() is called to execute any pending render operations before the raster state is updated. So, the crash's root cause is probably one of those pending render operations. It'll be something that is drawn via gl4es_blitTexture().
NOTE: It could be the frickingshark code itself that's at fault. Don't let "it works on other platforms" fool you. I've repeated this story a few times, but it's relevant here: one bug that was initially blamed on MiniGL was actually an uninitialized array in the game code. It happened to work on other platforms because other platforms zero newly allocated memory, which just happened to be a good default value.
@Hans Thanks for the additional ideas, I will check it more today.
Quote:
Don't let "it works on other platforms" fool you. I've repeated this story a few times, but it's relevant here: one bug that was initially blamed on MiniGL was actually an uninitialized array in the game code. It happened to work on other platforms because other platforms zero newly allocated memory, which just happened to be a good default value.
Yeah, a funny example was the foobillard++ port (I plan to release it once ogles2.1 is out for the public): the original code has 4 real bugs: 2 of them uninitialized memory, 1 a heap overflow, and 1 just a botched copy of arrays to the wrong destinations (wrongly initialized memory). And the funny thing is: none of them show up in the win32 or linux/Pandora builds. Everything works as if all is fine. It seems that both linux and win32 not only deal with uninitialized memory internally, but also have guards against more kinds of bugs. No surprise then that when we try to port something to OS4, most of the time we hit new bugs which the others didn't have...
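Just to picture that bug class, a tiny made-up example (not from foobillard itself):

#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    int *flags = malloc(16 * sizeof *flags);  /* contents are undefined! */
    if (flags && flags[0])                    /* reads garbage: "works" on platforms
                                                 where fresh pages happen to be zeroed */
        puts("path taken by luck, not by design");
    free(flags);
    return 0;
}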
Yes, but I also said that IMHO it's most likely gl4es. I have no other reason for my guess than the rather low code quality of gl4es around gl4es_blitTexture_gles2, though, plus the fact that I see no way how ogles2 could produce such a crash without being fed garbage from outside in the first place. Nova seems to be not guilty here.
As Hans pointed out, that garbage from outside may come from Fricking Shark itself, though. Either way, it looks as if ogles2 is being fed invalid vertex array pointers and/or falsely enabled vertex arrays.
And yes, that "it works on other platforms" is nothing to count on. We had such issues here before where it ultimately turned out to be a gl4es issue despite "it worked on others". Or take a look at T57: the PC version has literally trillions of uninitialized-variable bugs, but the coder didn't notice because "it worked on his system". Of course it blew up on Amigas.
We just checked whether the vertex arrays are set up correctly by adding these debug printfs to src/gl/blit.c (you can press on the link to see the code):
LOAD_GLES2(glGetVertexAttribiv);
/* dump the enabled/disabled state of every vertex attrib array */
for (int i = 0; i < hardext.maxvattrib; ++i) {
    int ok = 0;
    gles_glGetVertexAttribiv(i, GL_VERTEX_ATTRIB_ARRAY_ENABLED, &ok);
    printf("vertexarray[%d] = %d\n", i, ok);
}
And, as he expected, only the first 2 vertex arrays are enabled (1) and all the others are disabled (0). And that doesn't change at the crash either.
Next, he agrees with what you are both saying, but the problem is that the gl4es_blitTexture(...) function is mostly self-contained. This function simply blits a texture to the framebuffer. In this case it is most probably the text being shown with F4 (that's a bitmap, so it is fed into a temporary texture and then blitted). The blit function disables all vertex arrays except the first 2 (which, as checked above, is correct), then draws a simple triangle fan. He's fairly confident the 2 vertex arrays are correctly set up.
So yeah, the issue is probably FrikingShark itself overwriting some memory it shouldn't, I guess.
@Capehill It doesn't happen only when you die, but also when you pass a level. I.e. when everything clears to black and new things should show up, then it crashes.
So, what it shows is that both VAs seem to be correctly set up, with sane values for all the fields. So if we assume it is a VA issue, we still don't know where the 0xFFFFFFFF pointer comes from.
To go further, it would be interesting to have similar printfs from inside OGLES2, to understand where this bad address comes from.
@Daniel Is it possible for you to build a test ogles2.library which printfs the VA values (size, type, norm, stride, pointer) to serial or something?
Then even if it turns out not to be ogles2, we can at least find out where the 0xFFFFFFFF comes from (if it is about the VAs at all...).
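Until then, the same five fields can at least be dumped from the app/gl4es side with standard GLES2 queries; a sketch:

#include <GLES2/gl2.h>
#include <stdio.h>

void dump_vertex_attribs(int maxAttribs)
{
    for (int i = 0; i < maxAttribs; ++i) {
        GLint on = 0, size = 0, type = 0, norm = 0, stride = 0;
        void *ptr = NULL;
        glGetVertexAttribiv(i, GL_VERTEX_ATTRIB_ARRAY_ENABLED, &on);
        glGetVertexAttribiv(i, GL_VERTEX_ATTRIB_ARRAY_SIZE, &size);
        glGetVertexAttribiv(i, GL_VERTEX_ATTRIB_ARRAY_TYPE, &type);
        glGetVertexAttribiv(i, GL_VERTEX_ATTRIB_ARRAY_NORMALIZED, &norm);
        glGetVertexAttribiv(i, GL_VERTEX_ATTRIB_ARRAY_STRIDE, &stride);
        glGetVertexAttribPointerv(i, GL_VERTEX_ATTRIB_ARRAY_POINTER, &ptr);
        printf("VA[%d]: enabled=%d size=%d type=0x%x norm=%d stride=%d ptr=%p\n",
               i, on, size, type, norm, stride, ptr);
    }
}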