If you turn off streaming in the new WAV datatype, it will in turn disable the new MultiChannel and downmixing support and all the other new features of the sound datatype, as the WAV will be limited to the mono/left/right channels.
I'm not entirely sure why switching off streaming disables any of those wonderful things.
In reality, 90% of WAV files are either mono or stereo uncompressed PCM, not 7.1, 5.1, or any other fancy encoding.
Quote:
A small conditional block of 10 lines of code in your programs and you can support much richer and more modern sound features. Do we really want to be stuck in the 1990s with the old Sound DTC?
Extending the datatype with new features is fine; just don't discard the useful old ones at the same time.
Quote:
Do you want to offer your customers the best sound experience in your product?
Glames just wants to load a set of known files he provides, with standard code that has always worked and always should, so that he can play them with his own player. He doesn't want to add Enhancer-specific code to achieve that, especially since I'm not sure there is a safe way to test for the existence of the functionality on any given system.
Also bear in mind that whilst Glames may update his current code base, older releases will remain broken, giving him a bad name; not everyone reads the forums in detail to understand the issues involved.
It's highly likely that there is other software out there using similar approaches that may no longer be in development, and that will simply break and remain unfixed.
I'm not entirely sure why switching off streaming disables any of those wonderful things.
Probably because you can't return the full audio data if you're playing a live stream (e.g., internet radio). The guys who wrote the spec. were thinking ahead...
@amigakit
How about getting the datatype to generate the static sample data when its get method is called for SDTA_Sample (and others)? I'm pretty sure this could be done relatively easily, and it wouldn't waste any memory on it unless the app/game specifically requests it. Obviously, you'd still have to return NULL for live stream, but it would be fine for audio files.
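For context, the classic pattern that so many apps rely on looks roughly like this. This is a hedged sketch only, assuming the AmigaOS 4 SDK (`datatypes.library`, `soundclass`) with error handling trimmed; the function name `load_sample` is mine, not from any shipped code:

```c
/* Sketch of the common (spec-abusing) pattern under discussion:
 * load a sound file via datatypes and pull the raw sample out
 * through SDTA_Sample. AmigaOS 4 SDK assumed. */
#include <proto/datatypes.h>
#include <datatypes/soundclass.h>

BOOL load_sample(CONST_STRPTR path, BYTE **sample, ULONG *length, ULONG *freq)
{
    Object *o = IDataTypes->NewDTObject((APTR)path,
                    DTA_SourceType, DTST_FILE,
                    DTA_GroupID,    GID_SOUND,
                    TAG_END);
    if (o == NULL) return FALSE;

    /* A streaming datatype may legitimately return NULL for
     * SDTA_Sample -- which is exactly the breakage this thread
     * is about. */
    IDataTypes->GetDTAttrs(o,
        SDTA_Sample,       sample,
        SDTA_SampleLength, length,
        SDTA_Frequency,    freq,
        TAG_END);

    /* The caller must keep the object alive while using the
     * sample data, then call IDataTypes->DisposeDTObject(o). */
    return (*sample != NULL);
}
```

With the suggested workaround, such code would keep working unchanged for regular files, with `SDTA_Sample` returning NULL only for genuine live streams.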
I know this technically goes against the specs, but sometimes it's worth supporting commonly (ab)used side effects of older implementations. In this case, a lot of people have taken advantage of the original datatype's lack of streaming.
If you turn off streaming in the new WAV datatype, it will in turn disable the new MultiChannel and downmixing support and all the other new features of the sound datatype, as the WAV will be limited to the mono/left/right channels.
A small conditional block of 10 lines of code in your programs and you can support much richer and modern sound features.
Do we really want to be stuck in the 1990s with the old Sound DTC?
Do you want to offer your customers the best sound experience in your product?
Probably because you can't return the full audio data if you're playing a live stream (e.g., internet radio). The guys who wrote the spec. were thinking ahead...
I don't think the Enhancer streaming datatypes can be used for live streaming. They just stream from a file instead of loading all the audio data into memory at once. I think it would be less confusing to call them buffering datatypes. If I'm wrong about that I would invite a correction.
Amiga X1000 with 2GB memory & OS 4.1FE + Radeon HD 5450
I don't think the Enhancer streaming datatypes can be used for live streaming. They just stream from a file instead of loading all the audio data into memory at once. I think it would be less confusing to call them buffering datatypes. If I'm wrong about that I would invite a correction.
'Buffering' is the non-streaming approach: load the whole sample into a memory buffer. So you have your terminology backwards there.
Datatypes as a whole work off AmigaDOS file handles (even DTST_MEMORY has an internal 'memory-handler' that opens the memory buffer as if it were a file), so to stream from an internet source you would "simply" need an 'http-handler' that could handle HTTP streams, and then open HTTP:url/to/stream/ or similar.
However, such a stream would by its nature break 'the spec', as it could never return the stream length, which is required by the autodoc. So something has to give somewhere in terms of "strict" interpretations of the spec as defined in the autodocs.
Search for '7.1 surround sound sample wav'; that ought to get you some, though the only ones I found in a quick search were for sale, and not cheap either.
More typically, such streams are encoded as AAC and are video soundtracks rather than music, etc.
Search for '7.1 surround sound sample wav'; that ought to get you some, though the only ones I found in a quick search were for sale, and not cheap either.
The few free & downloadable ones I found appeared to use WAV as a wrapper for audio in another format. They either didn't play at all or just produced a hissing sound on my speakers.
'Buffering' is the non-streaming approach: load the whole sample into a memory buffer. So you have your terminology backwards there.
I was just using the term 'buffering' to differentiate between network streaming and file streaming. I suppose everything on a computer is buffered in some way because CPU instructions only access memory or registers. Quote:
Datatypes as a whole work off AmigaDOS file handles (even DTST_MEMORY has an internal 'memory-handler' that opens the memory buffer as if it were a file), so to stream from an internet source you would "simply" need an 'http-handler' that could handle HTTP streams, and then open HTTP:url/to/stream/ or similar.
Sounds good in theory, but I'd be surprised if anyone does it.
I converted a CD music file to WAV and played it from a USB stick. It played, but my X5000 went into slow motion. I had to click on a window close button and hold down the mouse button for several seconds to get the window to close. To start a program from Amidock I had to hold the mouse button down for a second or two on the icon. Double-clicking on a volume icon didn't work at all; no window opened. However, when I copied the music file to RAM: it seemed to play OK.
@Mattew Of course you will answer questions we didn't ask :), but:
As some of us told you before, you are now facing exactly the problems we warned about when we said there is no need to replace any working OS component.
You can't expect any developer to change their code just to support your not-fully-backward-compatible replacement instead of the OS default one.
If you make a replacement and want to make it "better", then call it something different. Or, if you call it the same (which is a big mistake), then it should be not "better", not "worse", not 99% compatible, but 100% the same. Even then it will be hard and time-consuming (as you can see now yourself, and as I told you many times before), because any code has bugs, and you will lose lots of time before everything works exactly like the original (setting aside bugs that can't be seen right away).
If you want to replace something and make it better, then call it something different; anyone who wants it will use it. By forcing the replacement of OS components, you only make things worse.
I mean, you are doing a big, good job with all these Amiga-related things, and it's all appreciated. But touching OS components is wrong, and many of us told you so from the beginning, and again in this same thread.
Just fix that datatype so it is 100% like the original, and then never touch anything else in the OS that is already fine enough :)
The os component that has been "updated" is obviously broken and needs to be fixed instantly.
Technically, the replacement datatypes implement the specification correctly; it's other software that's using them incorrectly (i.e., taking advantage of observed behaviour rather than documented behaviour).
That's perfectly understandable, though: given the general lack of streaming datatypes in the past, there was simply nothing to test that would catch this mistake.
That said, implementing a workaround makes sense given how common the mistake is.
Or maybe there is an error in the documentation. That wouldn't be the first time ;)
The same code works also for MorphOS and I guess it's the same for AROS.
Anyway, only one fact has to be taken into consideration: some software works on AmigaOS 4 but doesn't work (or only partially works) with the Enhancer software installed.
No matter how many programs are affected.
A fix is required, and it has taken too long already (8 months at minimum!).
The whole thing becomes truly elegant when the sound files are loaded using datatypes and the ahi.device is used for coordinated playback. In principle the rule is, again, to use the method already discussed: open ahi.device and datatypes.library and use IDataTypes->NewDataType() to identify the files. With its help we then fill in the AHIRequest, for which we need to ascertain a few values from the datatype, especially the address of the sample data and the playback frequency.
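The playback half described above can be sketched roughly as follows. This is a hedged illustration only, assuming the AmigaOS SDK with AHI installed; the `OpenDevice`/`MsgPort` boilerplate and error paths are trimmed, and the sample type and helper name are my assumptions, not taken from any shipped code:

```c
/* Sketch: feed the sample address and frequency obtained from the
 * sound datatype into an AHIRequest for ahi.device. AmigaOS SDK
 * and AHI assumed; device-open boilerplate omitted. */
#include <devices/ahi.h>
#include <proto/exec.h>

void play_sample(struct AHIRequest *req, BYTE *sample, ULONG length, ULONG freq)
{
    req->ahir_Std.io_Command = CMD_WRITE;
    req->ahir_Std.io_Data    = sample;     /* address from the datatype   */
    req->ahir_Std.io_Length  = length;     /* e.g. SDTA_SampleLength      */
    req->ahir_Std.io_Offset  = 0;
    req->ahir_Frequency      = freq;       /* e.g. SDTA_Frequency         */
    req->ahir_Type           = AHIST_M8S;  /* 8-bit signed mono; must match
                                            * the sample data's format    */
    req->ahir_Volume         = 0x10000;    /* full volume (fixed point)   */
    req->ahir_Position       = 0x8000;     /* centre panning              */
    req->ahir_Link           = NULL;       /* no queued follow-up request */

    IExec->DoIO((struct IORequest *)req);  /* synchronous playback */
}
```

For gapless playback of consecutive buffers, `ahir_Link` would be pointed at the previous request and `SendIO()` used instead of the synchronous `DoIO()`.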