Maybe, but I'd much rather migrate to std::shared_ptr<>.
@kas1e
Quote:
Btw, does -O1 already cause problems? I assume it does, since you say you disabled optimization entirely, via -O0 probably? If -O1 already breaks it, then the individual flags that make up the -O1 level can be added one by one manually, as Daniel said, until you find the guilty one (or ones), which will point at the root of the problem. At least you will know exactly which optimization causes it.
From memory, -O1 used to work until I updated GCC to yet another new version. After that, I had to disable optimization entirely. I never had the time to look into which specific optimization triggered it, nor do I have that time now.
I do hope to take another look at this at some point, but for now it's working reliably and I really do have other priorities.
[quote]I feel you're already a bit burned out with all those new projects Matthew throws at you; maybe keep them all on pause, and instead focus on Nova again for 3-4 weeks? :)[/quote] I have more work than only what Matthew gives me. My top priority right now is to increase my income, and that's *not* going to come from A-EON.
@Hans De nada, you know that you're always welcome, although there wasn't so much of an opinion in there but rather questions that got skipped (e.g. what's so hard about the shared_ptr replacement), hints on how to easily get back some speed, and an analysis of the situation.
@Capehill Quote:
If you need only boost::shared_ptr, I would imagine you can import a more recent version to your project in a reasonable time. It shouldn't require porting.
Very very very good point!
@Hans Quote:
Maybe, but I'd much rather migrate to std::shared_ptr<>.
And a very, very, very sad answer, considering that you already said you had given up on this before for unknown reasons, despite being so convinced that this old boost::shared_ptr is the culprit. Capehill's approach would likely be a matter of minutes! It shouldn't interfere too much with any other stuff on the todo list, even less than trying to add optimizer flags manually.
But well, I see that you won't move no matter what, so be it.
Are you able to provide a test build with instructions? I built the latest ScummVM but cannot get the themes to work - only the built-in green/black one works for me.
With glSnoop in place, ScummVM starts slowly enough to see it. 1) The screen that opens is not black but a mush of pixel lines in different colors, and the same goes for the mouse pointer; once the screen is fully loaded, both go back to normal. 2) If I return to the launcher without closing ScummVM (RTL from within a game), the mouse pointer will be in either of two states: 1 - a box of mushed-up gfx garbage, or 2 - completely invisible (this is the case about 75% of the time).
I have also seen some transient visual issues with ScummVM/OGLES2 version ("first frame broken", "strange mouse pointer"). Could have something to do with texture update.
Added a frame counter print and some delay after SDL_GL_SwapWindow call in ScummVM code. The first frame is mostly pink, with some red and black pixels. The second frame is OK. If I disable either FBO or multitexturing in context.cpp, the first frame becomes black while the second is ok. Need more experimenting and understanding, and maybe some FBO test example. I have never used one, maybe it's time to try.
void main (void)
{
    vec4 texcolor = gl_Color;
    texcolor *= texture2D(Texture0, gl_TexCoord[0].xy);
    // FOG
    texcolor.rgb = mix(texcolor.rgb, gl_Fog.color.rgb, g_fFogFactor);
    // ---
    gl_FragColor = texcolor;
}
With those shaders, on win32 and on the Pandora with gl4es, we get this (press open in new tab for fullscreen):
So everything renders correctly, but on AmigaOS 4 it fails like this (press open in new tab for fullscreen):
The paths to the textures are fine, of course (because when shaders are disabled, everything renders fine from the same code).
Also, this is how those shaders look after gl4es's shaderconv regenerates them and sends them to ogles2; these are the ones we can test on our side (thanks to glSnoop for making it easy to capture them!).
So now those shaders are getting really small; I hope it's more understandable what's wrong.
Maybe you have an idea of what to change for testing's sake?
I will also try to create some simple SDL2 test cases which use those 2 shaders and see how they behave on win32/SDL2 and on AOS4/SDL2 (to rule out the more involved parts).
@All Can anyone write some test code which uses those 2 shaders? As I understand it, 1 texture needs to be drawn, but the vertex shader with all that matrix and camera stuff makes me feel sad :)
Have you tried simplifying the fragment shader down to just the following?
uniform sampler2D Texture0;
// FOG
varying float g_fFogFactor;
// ---

void main (void)
{
    vec4 texcolor;
    texcolor = texture2D(Texture0, gl_TexCoord[0].xy);
    gl_FragColor = texcolor;
}
If that's not working, then you could try adding: texcolor.a = 1.0; That's just in case the alpha channel is missing. If the shader above works on AmigaOS 4 (as in, you see the texture), then you can gradually add back in the other code until you hit whatever breaks it.
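For reference, the alpha-fallback variant suggested above would look like this (just a sketch of that one-line change applied to the reduced shader):

```glsl
uniform sampler2D Texture0;

void main (void)
{
    vec4 texcolor = texture2D(Texture0, gl_TexCoord[0].xy);
    texcolor.a = 1.0; // force full opacity, in case the texture's alpha channel is missing
    gl_FragColor = texcolor;
}
```

If the texture appears with this but not without the `texcolor.a = 1.0` line, that would point at the alpha channel rather than the texture coordinates.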
I just totally removed the fog from both the vertex and the fragment shader now, as well as some other parts. This is how they look now:
Vertex:
void main (void)
{
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    gl_Position = ftransform();
}
Fragment:
uniform sampler2D Texture0;

void main (void)
{
    gl_FragColor = texture2D(Texture0, gl_TexCoord[0].xy);
}
So on win32/OpenGL and on Pandora/gl4es I can still see the textures, while on AmigaOS 4 there are still no textures.
Here is the win32 look with those last reduced shaders (press open in new tab for fullsize):
And here is the AmigaOS 4 one (press open in new tab for fullsize):
As you can see, the original shaders are now very basic, but I assume the problem for us is in the ones generated from them by gl4es, which ogles2 actually receives:
void main (void)
{
    gl_FragColor = texture2D(Texture0, _gl4es_TexCoord[0].xy);
}
Probably the next step is to take some SDL2/ogles2 draw-texture tutorial, put those shaders generated by gl4es in, and use them. If it works on win32 and fails on AOS4, then we're almost there.
Any ideas welcome :) Maybe a problem with sampler/texture/array mixing?
void main (void)
{
    gl_FragColor = texture2D(Texture0, _gl4es_TexCoord_0.xy);
}
As you can see, the bug is gone as soon as we get rid of array usage in the matrices and in the texCoords.
And this is what ptitSeb says:
--- It's something about texture matrices, yes, but what exactly I'm not sure. Because, from a shader's point of view, there is no such thing as a texture matrix. It's just a matrix.
I'm unsure what's wrong. You should ask Hans what could be wrong; I don't see any obvious way to explain the issue. ---
And it's not an issue of arrays + texCoords, it's exactly an issue with arrays + matrices. I.e. we even tried these shaders and they fail on AmigaOS 4 (i.e. without arrays in the texCoords, just in the matrices):
void main (void)
{
    gl_FragColor = texture2D(Texture0, _gl4es_TexCoord_0.xy);
}
So it's exactly a problem when matrices are used in an array. I.e. the issue is "uniform highp mat4 _gl4es_TextureMatrix[1];" + "_gl4es_TexCoord_0 = _gl4es_TextureMatrix[0] * _gl4es_MultiTexCoord0;".
We thought that maybe the uniform was not found, but we did a test case, and that's not it (at least not in our simple test case).
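Put together as a self-contained vertex shader, the suspect pattern would look roughly like this (a hypothetical minimal repro, reusing the names from the gl4es output above; `a_position` is an assumed position attribute, not from the real generated shader):

```glsl
attribute highp vec4 a_position;            // assumed; stands in for the real position input
attribute highp vec4 _gl4es_MultiTexCoord0;
uniform highp mat4 _gl4es_TextureMatrix[1]; // the one-element matrix array under suspicion
varying highp vec4 _gl4es_TexCoord_0;

void main (void)
{
    // Indexing a uniform mat4 array is the construct that seems to break on ogles2
    _gl4es_TexCoord_0 = _gl4es_TextureMatrix[0] * _gl4es_MultiTexCoord0;
    gl_Position = a_position;
}
```

Replacing the array with a plain `uniform highp mat4 _gl4es_TextureMatrix;` should be the only change needed to flip this between the working and failing cases.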
@kas1e Hans is probably not guilty this time; it's more likely me. I added code for GLSL input arrays en passant back in early 2018 for version 1.18, when Nova finally supported structs and I implemented support for those, but the array stuff was never tested (actually I didn't even mention it in the relnotes because of this). It looks like the array part is broken; at least that would explain what's happening (although a not-found uniform would have been the kind of problem I'd have expected first; maybe it's Nova after all - but let me check first). Please upload your most minimal test to my FTP, thanks!
@Daniel Sadly we don't have a most-minimal test at the moment :( It was all tested inside gl4es compiled into Fricking Shark, with the test shaders replacing the ones in Fricking Shark.
I was about to start creating a test case which uses exactly those shaders, taking as a base some simple SDL2+OGLES2 example with texture drawing, but I didn't get far, as the shaders in question use matrix calculations while the simple test case I tried to use them in does not. But maybe it will be easy for you to fix the test case so it works while keeping the original shaders. As far as I understand, some dummy (or not) matrix-calculation code needs to be created to make those shaders work as intended.
At the top, the original shaders are commented out (you can uncomment them to see that the original code, drawing a triangle with a texture over it, works), and currently uncommented (but of course not working either) are the gl4es shaders with arrays which need to be used in that test case.
The compile lines are at the top of the hello_texture_frickingshark_test.cpp file; for OS4 it will be:
@kas1e Thanks, that will do! The weird part is that a uniform array with just 1 element should not break things inside ogles2, even if there's a bug in the array-support code in general. But well, I'll check it out.
Hans is probably not guilty this time, it's more likely me
Don't worry, it seems I found 2 other shader issues :) (not related to arrays), but I'm waiting for confirmation from ptitSeb that on his side, on gl4es/Pandora, all is OK before posting them here.
So, about those 2 new bugs I found in the Fricking Shark shaders when testing with LIBGL_NOTEXARRAY:
issue #1:
There is something wrong with the lighting. First, the whole game is "low-lighting". Second, when you hit the enemies everything goes black, i.e. at the moment of the explosion - probably when the fog starts (i.e. other shader effects). That bug disappears only when the lighting code is fully deleted from the shaders; when the fog code is removed from the shaders, the bug is still there. Also, as you will see in the video, all textures are "low-lighting", i.e. the textures of the ships in the menu and the whole gameplay in the levels.
As can be seen later in the video, everything except the "water-ripple" effect and the "fog/fire" starts to go black. So only the pure textures remain, and not the other shader effects.
issue #2:
There is a "water-ripple" effect: in the menu at the beginning, and here and there in the game. After the game starts, if you watch the water-ripple effect somewhere (be it the menu or the game), then after about 150-200 seconds that effect starts to disappear bit by bit, until it is fully gone. The only way to get it back is to quit and re-run the game (so the shaders get reinitialized).
I recorded another video to show this: I start the 1st level, fly to a spot where the water-ripple effect is visible, then press "Esc" (for the menu), and wait 150 seconds.
In the video it's best to just move the slider to 3:20 and watch how it all starts from 3:30:
I asked ptitSeb, of course, whether everything is fine for him; he says yes, with or without LIBGL_NOTEXARRAY everything works fine. His idea is that the lighting issue (issue #1) may be a precision issue in the light calculation (and issue #2 could have the same roots). gl4es uses high-precision floats in the fragment shader there, so maybe it's related to that.
And here are the shaders in question (the original ones, before gl4es conversion, and pretty big ones):
@kas1e Quote:
I asked ptitSeb, of course, whether everything is fine for him; he says yes, with or without LIBGL_NOTEXARRAY everything works fine. His idea is that the lighting issue (issue #1) may be a precision issue in the light calculation (and issue #2 could have the same roots). gl4es uses high-precision floats in the fragment shader there, so maybe it's related to that.
We're using 32-bit floats (that's what the hardware supports), so it's not a precision issue. Do you know which variable is responsible for #1 happening? Once we know which variable is causing it, then we can track down where it's going wrong.
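A common way to narrow down which variable is responsible is to write the suspect intermediate straight to the output, one candidate at a time. A generic sketch (the `v_lighting` name is made up here; substitute whatever the real shader computes):

```glsl
varying highp vec3 v_lighting; // hypothetical name for the computed light term

void main (void)
{
    // Output the suspect term directly instead of the normal shading:
    // black areas then show exactly where the value collapses to zero.
    gl_FragColor = vec4(v_lighting, 1.0);
}
```

Repeating this for each term in the light calculation (ambient, diffuse, attenuation, fog factor) usually isolates the one that goes wrong at the moment of the explosion.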
For #2, what value does the CurrentRealTime variable get up to when the waves start reducing? Perhaps the hardware cosine instruction doesn't work well with large values.
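If large time values do turn out to be the problem, a common workaround is to wrap the time into a single period before it reaches the cosine. A sketch, assuming the ripple phase is something like `frequency * CurrentRealTime` (the `ripple` helper and its parameter are illustrative, not from the actual Fricking Shark shaders):

```glsl
uniform highp float CurrentRealTime; // keeps growing for as long as the game runs

const float TWO_PI = 6.28318530718;

float ripple(float frequency)
{
    // mod() keeps the cosine argument in [0, 2*pi), so the result does not
    // degrade as CurrentRealTime grows over minutes of play.
    float phase = mod(CurrentRealTime * frequency, TWO_PI);
    return cos(phase);
}
```

Alternatively, the application could wrap the time on the CPU side before uploading the uniform, which also avoids any precision loss in the shader's multiply.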