If you are using JXFS then I guess you have OS4.1. Make sure you have the absolute latest version of XAD 7z as that will use swap space if available.
However, I suspect even with the maximum 2GB memory (some as swap partition), a 1.6GB file is going to use all that up. Both the compressed file and the decompressed data (plus some overhead) are going to come right up to the 2GB limit - it decompresses completely to RAM before dumping to disk.
Make sure you have a 1GB swap partition, and run UnArc by booting with no startup-sequence. You might get lucky. Otherwise you need to wait for OS4 to gain 64-bit memory addressing.
Ask the original uploader to upload a different archive for you? Last resort but usually a good option.
On my SAM, various 300-400MB 7z archives of ISO images would take all the memory and then dig into the SWAP.. things would slow down quite a bit for sure, but at the end of the day - and it wasn't that long - there was the ISO...
Never thought I'd need the SWAP but there you go
What about increasing the SWAP partition? I'm not aware of any limitation... I could have missed that part of the documentation, though.
~Yes I am a Kiwi, No, I did not appear as an extra in 'Lord of the Rings'~ 1x AmigaOne X5000 2.0GHz 2gM RadeonR9280X AOS4.x 3x AmigaOne X1000 1.8GHz 2gM RadeonHD7970 AOS4.x
Swoop wrote: I think the limitation is twice the installed RAM, up to a max of 2GB.
You mean 2GB in total then?
That's a bit unlucky then; with 1GB real memory it would have been nice to have 1GB real + 2GB virtual.
So that's one point in the SAM's favour... 512MB + 1GB virtual.
~Yes I am a Kiwi, No, I did not appear as an extra in 'Lord of the Rings'~ 1x AmigaOne X5000 2.0GHz 2gM RadeonR9280X AOS4.x 3x AmigaOne X1000 1.8GHz 2gM RadeonHD7970 AOS4.x
However, I suspect even with the maximum 2GB memory (some as swap partition), a 1.6GB file is going to use all that up. Both the compressed file and the decompressed data (plus some overhead) are going to come right up to the 2GB limit - it decompresses completely to RAM before dumping to disk.
Why does it decompress everything into RAM first before copying to disk?? That feels like ... well ... wrong.. If you were using a disk-based temporary directory it would be like Windows doing everything in its damn temporary folders on one disk before copying the stuff to the actual target disk, which slows down the overall process in 99% of cases, but OTOH does prevent the out-of-RAM problem, as you usually have more space on your hard disk than in RAM. Still, that's a stupid approach IMHO, doing everything in a temporary place (be it a hard disk or RAM) instead of using the actual target as far as possible.
In your case, it might seem like a good idea to first use RAM and then copy the stuff over, but as you see here, it actually isn't that good. I'd suggest using only some smaller temporary buffers in RAM and copying the data to disk whenever a buffer is full.. that would clearly remove the need for that much memory and shouldn't have any great speed impact (as long as the buffers are big enough to make use of the HD's capabilities, e.g. writing only in >= 64KB blocks, etc.).
You may have to experiment a bit with how big the RAM buffers should be.. maybe only a few hundred KB each, maybe a few MB each.. you can use multiple buffers and switch between them in round-robin fashion: first buffer full, start writing it out and decompressing into the second buffer.. second buffer full, check whether the write of the first one has finished, start writing the second if so, and decompress into the third buffer (or the first one again).. you get the idea..
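Just to illustrate the basic idea (a rough, untested sketch; decompress_chunk() is a placeholder I made up, not an actual 7zip or XAD call):

Code:

/* Untested sketch of the fill-and-flush idea only.
 * decompress_chunk() is a made-up stand-in for whatever interface the
 * real decoder offers - it is NOT an actual 7-Zip or XAD function. */
#include <stdio.h>

#define CHUNK_SIZE 65536   /* >= 64KB so the disk writes stay efficient */

extern long decompress_chunk(void *decoder, unsigned char *buf, long max); /* hypothetical */

long unpack_to_disk(void *decoder, FILE *out)
{
    /* one small reusable buffer instead of an allocation the size of the
       whole file; static so it stays off the (small) task stack */
    static unsigned char buf[CHUNK_SIZE];
    long total = 0;

    for (;;) {
        long n = decompress_chunk(decoder, buf, CHUNK_SIZE); /* hypothetical */
        if (n <= 0)
            break;                           /* end of stream, or an error */
        if (fwrite(buf, 1, (size_t)n, out) != (size_t)n)
            return -1;                       /* write error */
        total += n;
    }
    return total;
}

With two such buffers and asynchronous writes the disk I/O could even overlap with the decoding, which is basically the round-robin scheme described above.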
Of course, I don't know the inner workings of 7zip.. maybe my above suggestions aren't even practical with 7zip for whatever reason (out of interest I'd like to know the reasons, please).. but IF they are possible, you should really consider making that change, as it would make your plugin automagically work on ANY OS4 machine (maybe with at least 32, 64 or 128MB RAM), with or without a zillion GBs of RAM and SWAP.. and thus greatly improve its usefulness (thinking of today's archive sizes..)
Edit: Or is that maybe a XAD limitation? I have never used XAD in coding, so I know as much about it as about 7zip internals.. well, if it is a XAD problem, then XAD might need a revision..
AmigaOS 4 core developer www.os4welt.de - Die deutsche AmigaOS 4 Gemeinschaft
"In the beginning was CAOS.." -- Andy Finkel, 1988 (ViewPort article, Oct. 1993)
Why does it decompress everything into RAM first before copying to disk?? That feels like ... well ... wrong..
Yes, IF this is what it does, it's just wrong.
Quote:
Or is that maybe a XAD limitation? I have never used XAD in coding, so I know as much about it as about 7zip internals.. well, if it is a XAD problem, then XAD might need a revision..
I have used it to decompress big lha and old zip files, and it doesn't do that. I don't know what the case is with 7z files. If it decompresses them in RAM first, I'd say it's a fault of the client, not of the XAD system.
Why does it decompress everything into RAM first before copying to disk?? That feels like ... well ... wrong..
Yes, IF this is what it does, it's just wrong.
Well, that is exactly what Chris said:
Quote:
Chris wrote:
[...] it decompresses completely to RAM before dumping to disk.
And as he wrote the 7zip plugin, he should know
Quote:
Quote:
Or is that maybe a XAD limitation? I have never used XAD in coding, so I know as much about it as about 7zip internals.. well, if it is a XAD problem, then XAD might need a revision..
I have used it to decompress big lha and old zip files, and it doesn't do that. I don't know what the case is with 7z files. If it decompresses them in RAM first, I'd say it's a fault of the client, not of the XAD system.
Good. As said, I never used XAD, so I can't talk about its inner workings and thus I had to ask that question.
So it seems Chris has a bit of work to do
AmigaOS 4 core developer www.os4welt.de - Die deutsche AmigaOS 4 Gemeinschaft
"In the beginning was CAOS.." -- Andy Finkel, 1988 (ViewPort article, Oct. 1993)
In your case, it might seem like a good idea to first use RAM and then copy the stuff over, but as you see here, it actually isn't that good.
Actually it doesn't seem like a good idea, but that's how the internals of the LZMA SDK work.
Even if I move the XAD write function into the decoding function, the memory allocation will still be requesting enough for the entire file. It'll need a bit of reworking to do this, I think.. I'll look into it (again) at some point.
I'm using the 7-Zip code, which doesn't provide that as an option. The fact that it seems to decompress other bits of the archive even if you don't want them is potentially a bigger problem.
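For what it's worth, the SDK's low-level LzmaDec interface can emit output in small chunks, so the rework would look something along these lines (rough and untested; it only covers a raw LZMA stream whose 5-byte properties header has already been read, and the .7z container layer isn't shown):

Code:

/* Rough, untested sketch: incremental decode through the SDK's low-level
 * LzmaDec interface (LzmaDec.h), flushing each chunk to the output file
 * instead of holding the whole decompressed file in RAM. */
#include <stdio.h>
#include "LzmaDec.h"
#include "Alloc.h"   /* g_Alloc lives here in recent SDK versions */

#define IN_BUF_SIZE  (1 << 16)   /* 64KB input chunks  */
#define OUT_BUF_SIZE (1 << 16)   /* 64KB output chunks */

static int decode_to_file(FILE *in, FILE *out, const Byte *props /* LZMA_PROPS_SIZE bytes */)
{
    CLzmaDec dec;
    /* static so the buffers stay off the (small) task stack */
    static Byte inBuf[IN_BUF_SIZE], outBuf[OUT_BUF_SIZE];
    size_t inAvail = 0, inPos = 0;
    ELzmaStatus status;

    LzmaDec_Construct(&dec);
    if (LzmaDec_Allocate(&dec, props, LZMA_PROPS_SIZE, &g_Alloc) != SZ_OK)
        return -1;
    LzmaDec_Init(&dec);

    for (;;) {
        SizeT srcLen, destLen;

        if (inPos == inAvail) {               /* refill the input buffer */
            inAvail = fread(inBuf, 1, IN_BUF_SIZE, in);
            inPos = 0;
        }
        srcLen  = inAvail - inPos;
        destLen = OUT_BUF_SIZE;

        /* decode at most one output buffer's worth, then flush it to disk */
        if (LzmaDec_DecodeToBuf(&dec, outBuf, &destLen, inBuf + inPos, &srcLen,
                                LZMA_FINISH_ANY, &status) != SZ_OK)
            break;
        inPos += srcLen;

        if (destLen != 0 && fwrite(outBuf, 1, destLen, out) != destLen)
            break;                            /* write error */

        if (status == LZMA_STATUS_FINISHED_WITH_MARK) {
            LzmaDec_Free(&dec, &g_Alloc);
            return 0;                         /* finished cleanly */
        }
        if (srcLen == 0 && destLen == 0)
            break;                            /* no progress: truncated input */
    }
    LzmaDec_Free(&dec, &g_Alloc);
    return -1;
}

How cleanly something like that can be bolted onto the existing 7-Zip extraction code is another question, hence the reworking mentioned above.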
I can decompress that here without needing swap (I would pull the file to a slaved RPi I have attached to my AOS4 machine and unpack it there before accessing it from AOS4; the RPi runs a built-from-source Linux installation I compiled myself).
Maybe you can get the use of a machine to slave off AOS4 for the same type of setup where you are?
I'd forgotten this thread existed. At any rate, my recent update to the 7-Zip client was a quickie just to try and get rid of some crashing I'd been seeing, and remove the SObjs requirement which was annoying me after I tried to copy it over from my old installation. I took the opportunity to update to the latest version of the SDK as it didn't need many changes and probably has some benefit.
I spotted some new AES code but it appears to be Intel-specific and I haven't yet worked out how to run encrypted files through this if it isn't (that's not relevant to this thread, but is relevant to some old comments on os4depot).
The zlib-type interface may well exist in the C code, I haven't checked.
At any rate, that file should decompress fine using the swap partition. You might want to check that it is set up properly (assuming the archive is OK and not encrypted or using anything other than LZMA/LZMA2/PPMd/BZ2 - without seeing it I don't know!).
Quote:
I've tried the 7zip v4.16 & v4.65 binaries (locale=C, Utf16=off, HugeFiles=off, 1 CPU), which both fail with an "Unsupported Method" message.