To make it short: some old 68k code has a very ugly FLC player routine, and there is a very simple and clean SDL-based player: https://www.libsdl.org/projects/flxplay/.
So I just want to start a game, attach to the screen it opens, play the video via SDL code, and then return to the original game.
I also want to add "auto-scale" to this player if possible, so the video is played full-screen, but I don't know how easy or how good it will be with SDL; maybe compositing or something else would be better.
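I guess the "attach to the screen it opens" part would look roughly like what testnative does with SDL_CreateWindowFrom(). An untested sketch only; the nativeWindow pointer here is a placeholder for the game's already-open window, details may differ on AmigaOS 4:

```c
#include <SDL2/SDL.h>

/* Sketch: wrap an already-open native window in an SDL_Window.
 * "nativeWindow" is a placeholder for the game's window pointer. */
SDL_Window *attach_sdl_to_native(void *nativeWindow)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        return NULL;
    }
    SDL_Window *win = SDL_CreateWindowFrom(nativeWindow);
    if (!win) {
        SDL_Log("SDL_CreateWindowFrom failed: %s", SDL_GetError());
    }
    return win;
}
```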
The testnative example uses SDL_Renderer, so it should be able to cope with scaling. But I expect there will be some issues; it's not tested much, after all.
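The scaling itself should be simple with SDL_Renderer. A rough sketch; "win" and "frameTexture" are placeholder names, and 320x200 is only an example size, not taken from flxplay:

```c
#include <SDL2/SDL.h>

/* Sketch: render the decoded FLC frame at its native size and let the
 * renderer scale it to whatever size the window has. */
static void present_scaled(SDL_Window *win, SDL_Texture *frameTexture)
{
    SDL_Renderer *ren = SDL_GetRenderer(win);
    if (!ren) {
        ren = SDL_CreateRenderer(win, -1, 0);
    }
    /* Tell SDL the source resolution; output is scaled to the window */
    SDL_RenderSetLogicalSize(ren, 320, 200);

    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, frameTexture, NULL, NULL);  /* NULL dest = full output */
    SDL_RenderPresent(ren);
}
```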
Event handling could be problematic. If you check testnative, you can see that it gets the shared WA_UserPort message port from SDL. This must be given to the window so that SDL can receive events. I'm not sure whether it can be changed after window creation (should be checked in the Autodocs). But maybe it's enough for you to handle only application events, not SDL ones.
As you might see from the screenshot, the app looks bad anywhere text is drawn. I tested it on multiple OS4 computers.
In parallel, I am working on the MorphOS version and tested it on a 16-bit screen as well, and the problem does not appear there. It was tested on the same machine with the exact same graphics card: my main X5000 with a Radeon HD 6850. And the code is the same on both systems.
So, my question is: is there something in the SDL port that breaks things with the rendered text?
I will continue the investigation, in case there is something that could help.
@Capehill But it's a 16-bit mode he requests. Does he really need to use a palette; shouldn't he use something like SetRGB-style mapping for the colours he needs? The picture looks like a common RGB format error, for instance ARGB versus ABGR (examples only); the screenmode prefs program should tell which one it is (I think)?!
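For example, instead of hard-coding a channel order, mapping colours through the surface's own format should be safe on any screen. Sketch only; "surf" is whatever surface is being written to:

```c
#include <SDL2/SDL.h>

/* Sketch: let SDL pack the components for the surface's actual format
 * (ARGB, ABGR, RGB565, ...) instead of assuming a fixed channel order. */
static Uint32 map_colour(SDL_Surface *surf, Uint8 r, Uint8 g, Uint8 b)
{
    return SDL_MapRGB(surf->format, r, g, b);
}
```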
Yes, I'll check the return values; I just made a quick & dirty "update to SDL2". TiA
EDIT: I'm not sure, but maybe the COLORS256() and DECODE_COLOR() functions in flxplayer.c need to be adapted for SDL2, or I'm using a very, very wrong pixel format in SDL_CreateTexture().
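Something like this is what I mean, as an untested sketch only (the 320x200 size, the helper names and the choice of ARGB8888 are just examples, not from flxplay):

```c
#include <SDL2/SDL.h>

/* Sketch: create the streaming texture with one explicit pixel format... */
static SDL_Texture *create_frame_texture(SDL_Renderer *ren)
{
    return SDL_CreateTexture(ren,
                             SDL_PIXELFORMAT_ARGB8888,
                             SDL_TEXTUREACCESS_STREAMING,
                             320, 200);
}

/* ...and decode the FLC palette entries into that same format
 * (DECODE_COLOR-style helper, packed ARGB8888). */
static Uint32 decode_colour_argb8888(Uint8 r, Uint8 g, Uint8 b)
{
    return (0xFFu << 24) | ((Uint32)r << 16) | ((Uint32)g << 8) | b;
}
```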
If the window framebuffer is used instead of SDL_Renderer, it depends on the screen depth of the window. But this cannot work if the application assumes a 32-bit bitmap. The application must inspect the surface and adapt: pitch, colour mapping and so on.
If the same (window-framebuffer-using) code works on MorphOS, it probably means MorphOS uses a 32-bit bitmap for framebuffers, so the application's expectations are fulfilled.
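Roughly like this, as a sketch of the idea (not from flxplay); the point is to respect the surface's pitch and bytes per pixel instead of assuming 32-bit:

```c
#include <SDL2/SDL.h>

/* Sketch: fill one row of the window framebuffer, adapting to whatever
 * depth the window surface actually has. */
static void fill_row(SDL_Window *win, int y, Uint8 r, Uint8 g, Uint8 b)
{
    SDL_Surface *surf = SDL_GetWindowSurface(win);
    const Uint32 pixel = SDL_MapRGB(surf->format, r, g, b);
    Uint8 *row = (Uint8 *)surf->pixels + y * surf->pitch;  /* pitch, not w * 4 */

    for (int x = 0; x < surf->w; x++) {
        if (surf->format->BytesPerPixel == 2) {
            ((Uint16 *)row)[x] = (Uint16)pixel;   /* 16-bit screen */
        } else if (surf->format->BytesPerPixel == 4) {
            ((Uint32 *)row)[x] = pixel;           /* 32-bit screen */
        }
    }
    /* SDL_UpdateWindowSurface(win); once the frame is complete */
}
```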
When SDL_Renderer is enabled, the app works fine even on 16-bit screens, but unfortunately it gets slower as well.
So I prefer not to have it enabled and get more speed. In both the OS4 and MorphOS builds I tested with 16-bit, SDL_Renderer was not used.
Quote:
...it probably means MorphOS uses a 32-bit bitmap for framebuffers, so the application's expectations are fulfilled
I see. And what is the right way to do that? I mean, should the application adapt to the screen depth, or should this happen automatically in SDL? Would something like that be beneficial for applications and games ported from other systems, where they are used to always having a 32-bit colour depth screen?
The operating system has to convert the data from 32-bit to 16-bit. When testing SDL2 (the testsprite2 program) on a 16-bit screen mode, I noticed the slowdown seems to depend on the render driver used (compositing vs. opengles2 vs. software).
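If you want to compare them, the driver can be requested with a hint before the renderer is created. A small sketch; the driver names mentioned above ("compositing", "opengles2", "software") depend on the SDL2 build:

```c
#include <SDL2/SDL.h>

/* Sketch: list the available render drivers, then ask for one explicitly. */
static void pick_render_driver(void)
{
    for (int i = 0; i < SDL_GetNumRenderDrivers(); i++) {
        SDL_RendererInfo info;
        if (SDL_GetRenderDriverInfo(i, &info) == 0) {
            SDL_Log("Render driver %d: %s", i, info.name);
        }
    }
    /* Must be set before SDL_CreateRenderer() */
    SDL_SetHint(SDL_HINT_RENDER_DRIVER, "compositing");
}
```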
Quote:
I see. And what is the right way to do that? I mean, should the application adapt to the screen depth, or should this happen automatically in SDL? Would something like that be beneficial for applications and games ported from other systems, where they are used to always having a 32-bit colour depth screen?
The application has to know the surface/bitmap format before writing to it. If the application could only render in 16-bit, then it would work incorrectly if SDL2 uses a 32-bit framebuffer.
So, if you would like to support different bitmap formats in framebuffer mode, you should branch the rendering logic.
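Or, if you don't want to branch, one possible way (untested sketch) is to keep rendering into a fixed 32-bit surface and let SDL_BlitSurface do the conversion to the window framebuffer depth. It costs a conversion per frame, though:

```c
#include <SDL2/SDL.h>

/* Sketch: one fixed-format 32-bit backbuffer for the app's own rendering. */
static SDL_Surface *make_backbuffer(int w, int h)
{
    return SDL_CreateRGBSurfaceWithFormat(0, w, h, 32, SDL_PIXELFORMAT_ARGB8888);
}

/* Blit converts to 16-bit automatically if that is what the window uses. */
static void present(SDL_Window *win, SDL_Surface *backbuffer)
{
    SDL_Surface *screen = SDL_GetWindowSurface(win);
    SDL_BlitSurface(backbuffer, NULL, screen, NULL);
    SDL_UpdateWindowSurface(win);
}
```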
@Capehill Thank you for the explanation and your help. I will be away from my Amiga for a couple of weeks, and I will recheck all of this when I am back.
I also wonder whether it would be better to call SDL_SetSurfacePalette instead of poking into the surface directly.
I haven't tried any of these palette things, so I cannot say much without experimenting.
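If you try it, I would expect it to look roughly like this (untested sketch; "colours" would be the 256 entries already converted from the FLC colour chunk):

```c
#include <SDL2/SDL.h>

/* Sketch: build a palette and attach it to the 8-bit surface instead of
 * writing into surf8->format->palette directly. */
static int apply_palette(SDL_Surface *surf8, const SDL_Color *colours)
{
    SDL_Palette *pal = SDL_AllocPalette(256);
    if (!pal) {
        return -1;
    }
    SDL_SetPaletteColors(pal, colours, 0, 256);
    int rc = SDL_SetSurfacePalette(surf8, pal);
    SDL_FreePalette(pal);   /* the surface keeps its own reference */
    return rc;
}
```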
Someone from a freelancing site rewrote this for us, and this is what he did:
1. Create an 8-bit SDL_Surface with the same palette as with SDL 1.2.
2. When we draw, convert it to a 32-bit one.
3. Copy this over to the texture which is shown in the window.
4. All the other stuff like window init/creation etc. follows the "migration guide".
He also says that textures are always ARGB-32; I don't know why he says so. And I'm not sure whether it's all correct or not, but at least it works.
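The flow is roughly this (simplified, untested sketch of what was described above; in the real code a streaming texture reused every frame would be faster than creating a new one each time):

```c
#include <SDL2/SDL.h>

/* Sketch: 8-bit paletted frame -> 32-bit surface -> texture -> window. */
static void show_frame(SDL_Renderer *ren, SDL_Surface *surf8)
{
    /* 2. convert the paletted frame to a 32-bit surface */
    SDL_Surface *surf32 = SDL_ConvertSurfaceFormat(surf8, SDL_PIXELFORMAT_ARGB8888, 0);

    /* 3. copy it into a texture and show it */
    SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, surf32);
    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, tex, NULL, NULL);
    SDL_RenderPresent(ren);

    SDL_DestroyTexture(tex);
    SDL_FreeSurface(surf32);
}
```

As far as I understand, textures are not literally always ARGB-32: SDL_CreateTextureFromSurface picks a format the renderer supports, and ARGB8888 is simply the one most renderers list first, which may be why he said that.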
Now we need to somehow add some filters, like HQ2x or similar, but I don't know which one is best for videos. As far as I know, HQ2x is only good for pictures with clean, rounded shapes.
In the meantime I use SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "2"); this is not as good as those filters, but better than nothing.
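One thing to watch out for (sketch below, names are examples): the hint is only read when a texture is created, so it has to be set before SDL_CreateTexture(), otherwise it has no effect. "0"/"nearest" is point sampling, "1"/"linear" is bilinear, and "2"/"best" is anisotropic where the backend supports it (elsewhere it falls back to linear):

```c
#include <SDL2/SDL.h>

/* Sketch: set the scale-quality hint before creating the texture it
 * should apply to. */
static SDL_Texture *create_smooth_texture(SDL_Renderer *ren, int w, int h)
{
    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");
    return SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                             SDL_TEXTUREACCESS_STREAMING, w, h);
}
```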