Are you sure that there is some unused area on your SSD? The device firmware likes to move data around to minimise wear, and it can't do that easily (read: fast) if the device is full or almost full.
Yes, most (if not all?) modern SSDs have wear leveling in their firmware. Just don't assign every last bit to a partition and you should be good.
No, unless all of your partitions on the SSD are nearly 100% full. The firmware can't tell whether some space on the SSD is not partitioned at all or is simply unused space on an existing partition. (Some SSD firmware might include monitoring support for BFFS, ext2fs, NTFS, FAT, etc. file systems, but of course not for any partitions using an AmigaOS file system.)
Edit: SFS gets extremely slow on nearly full partitions, no matter if it's on a HDD or a SSD. Maybe NGFS doesn't have such problems.
The problem with slowed-down SSDs is as follows:
1. HDDs can overwrite the same location on the disk without a time-consuming "erase before write" step.
2. As a consequence, filesystems traditionally do not report to the HDD which files have been deleted. There is no need to do so.
3. SSDs do need to erase a block/page before new data can be written, and they can only erase complete blocks or pages, not single logical block addresses (LBAs). These blocks/pages contain numerous LBAs.
4. Since filesystems do not report deleted LBAs to the SSD, the SSD has no way of knowing which data is valid and which can be deleted before the next write. This slowly fills the drive with partially occupied blocks/pages.
5. Wear leveling only helps when full blocks can be moved to a new location. If even one LBA is still marked as valid by the SSD, then the whole block/page cannot be erased up front before a new write to this location is addressed.
6. When wear leveling has reached the end of its capability because the SSD is full of partially written blocks/pages, the drive will slow down: only when the filesystem writes to an LBA marked as valid does the SSD finally know that this LBA is no longer valid and the whole block/page can be erased. But this takes milliseconds before the new data can be written. Hence the slowdown.
7. To prevent this issue, a so-called Trim command can be issued to tell the SSD up front which LBAs have been deleted by the filesystem and can be erased by the SSD before future writes occur.
8. Trim is not part of the (publicly available) trackdisk specification. Filesystems do not advertise which LBAs have been deleted.
9. Leaving empty space will not solve this issue because there is no relation between an LBA and its physical location on the SSD. Wear leveling will touch all physical locations.
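The erase-block mechanics in points 3-6 can be sketched with a toy model. This is purely illustrative: real flash translation layers are far more complex, and all names and sizes here (`ToySSD`, `PAGES_PER_BLOCK`, etc.) are made up for the example.

```python
# Toy model: an erase block holds several pages/LBAs and can only be
# erased as a whole. Without Trim, the drive learns a page is stale only
# at overwrite time and must erase in-line (the slow path).

PAGES_PER_BLOCK = 4  # one erase unit spans several LBAs (made-up size)

class ToySSD:
    def __init__(self, blocks):
        # each page is either None (erased) or a payload the drive
        # still considers valid
        self.blocks = [[None] * PAGES_PER_BLOCK for _ in range(blocks)]
        self.slow_writes = 0  # writes that needed an erase first

    def _locate(self, lba):
        return divmod(lba, PAGES_PER_BLOCK)

    def write(self, lba, data):
        b, p = self._locate(lba)
        if self.blocks[b][p] is not None:
            # page still "valid" as far as the drive knows:
            # erase-before-write, costing milliseconds on real hardware
            self.blocks[b] = [None] * PAGES_PER_BLOCK
            self.slow_writes += 1
        self.blocks[b][p] = data

    def trim(self, lba):
        # filesystem reports the LBA as deleted; the block can be
        # erased in the background before the next write arrives
        b, p = self._locate(lba)
        self.blocks[b][p] = None

ssd = ToySSD(blocks=1)
ssd.write(0, "a")       # fresh write: fast
ssd.write(0, "b")       # overwrite without Trim: erase first, slow
print(ssd.slow_writes)  # -> 1

ssd2 = ToySSD(blocks=1)
ssd2.write(0, "a")
ssd2.trim(0)            # drive was told up front the LBA is stale
ssd2.write(0, "b")      # fast again
print(ssd2.slow_writes) # -> 0
```

The same write pattern costs one in-line erase without Trim and none with it, which is the slowdown described above in miniature.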
Edited by geennaam on 2023/4/6 18:53:08
@geennaam SFS doesn't, and never will, support something like TRIM, or any other SSD support.
Main problems:
1. SFS is a roughly 30-year-old file system which was implemented for HD partitions of up to about 100 MB (not GB). I removed the 128 GB partition size limit of SFS\0 in SFS\2, but that probably was a bad idea. SFS doesn't scale well to larger partitions; even FFS is better in that respect.
2. While the trackdisk API supports HD_SCSICMD for extended features of SCSI and ATAPI hardware, there is nothing like an HD_ATACMD, which would have been required at least 20 years ago already, for example for S.M.A.R.T. support. Instead, sg2's PATA/SATA/SCSI drivers and his smartctl tool used undocumented, internal functions of his drivers.
I'd suggest working together with tonyw to improve the ancient trackdisk API, add whatever is required for SATA and NVMe, and use such improvements in his NGFS.
It can be done with an offline tool. In fact, Intel recommends disabling continuous Trim and instead running it only a couple of times each year, for the sake of SSD lifetime.
Since defragmentation tools like sfsDefrag must be aware of the filesystem's LBA address map, such a tool could be used as the basis for a manual Trim command.
SFSDefrag, PartitionWizard, etc. don't know anything about the SFS LBA mappings. FFS defragmenting tools like ReOrg, my PartitionWizard, DiskMonTools and DiskOptimizer, as well as any other FFS defragmenting tools, do.
The difference between FFS and SFS is that in the case of FFS the external defragmenting tool has to do all the defragmenting work itself, while for SFS it's just a matter of using internal "start defragmenting", "report current progress" and "stop defragmenting" commands. The actual defragmenting work is done internally by SFS itself.
FFS defragmenting tools have to stop the file system (IDOS->Inhibit()), whereas SFS defragmenting can be done on a live partition while it's being used by other software at the same time.
But neither FFS nor SFS are relevant any more, everyone should use NGFS instead.
Yeah, over 10% is left free... but maybe modifying some parameters in expert mode in MediaToolbox (blocks per cylinder 2048, block size 512 and a few others, according to some threads on the Hyperion forum) caused the problem? It was fast in the beginning.
1. Is there ANY speed advantage when using the most expensive NVMe SSDs, compared to the very cheapest ones? Or are the bottlenecks of our system non-proportional and so severe that even the cheapest ones can offer the maximum speed possible?
2. Samsung mentions in the specs of its SSDs a feature named 'auto garbage collection', whereas e.g. A-Data and WD do not. Is this feature of any help in our system without Trim support, or is it meaningless and not a reason to choose Samsung in particular?
1. There is a small advantage for drives with embedded cache memory. You get a small speedup, and (once I have the HMB feature implemented) it saves precious system memory which is otherwise used as SSD cache, e.g. 128MB for each NVMe drive. But all drives are limited by AmigaOS4, the filesystem and our systems. A sub-$100 Samsung SSD with embedded cache is more than good enough.
2. Samsung's garbage collection could act as the next best solution in the absence of Trim. But it will also move around LBAs which no longer contain valid data, because the SSD still has no way of knowing which LBAs are valid and which are not. So it will impact SSD life more than it would with Trim.
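A rough illustration of why garbage collection without Trim costs extra wear (toy numbers, not from any datasheet): when the controller compacts a block, it must relocate every page it still believes is valid, and without Trim that includes deleted-but-unreported pages.

```python
# Back-of-envelope count of pages relocated per garbage-collected block.
# The function name and the numbers are made up for illustration.

def gc_copies(valid_pages, stale_unreported_pages):
    """Pages the controller must copy to reclaim one erase block.

    Without Trim, pages the filesystem has deleted but never reported
    still look valid to the drive and get copied too, adding flash
    writes (wear) for data nobody needs.
    """
    return valid_pages + stale_unreported_pages

with_trim = gc_copies(valid_pages=16, stale_unreported_pages=0)
without_trim = gc_copies(valid_pages=16, stale_unreported_pages=32)

print(with_trim, without_trim)  # -> 16 48
```

In this toy case the drive performs three times the relocation writes without Trim, which is the extra wear referred to above.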
Further questions came to my mind, concerning partitions:
What happens when you copy a partition to a new location (with dd, RAWdisk, GParted etc.) on an SSD? I suppose no 'garbage' is transferred to the new partition?
And when you delete a whole partition, what happens to the 'garbage' left after that operation? Trim does not help anymore, as there is no partition to work with. Is there some difference whether you use MediaToolbox in AmigaOS or GParted in Linux?
Trim works on a physical level, so it needs input from somewhere to work properly: drive utilization from a driveinfo-like tool, and filesystem info from the filesystem itself.
If dd or RAWdisk do a block-based copy, then you create a 1:1 copy, including the garbage.
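A minimal sketch of why a block-level copy carries deleted-file residue along. The "device image" here is just a byte buffer, and all offsets are made up; the point is only that a dd-style copy reads and writes raw blocks without interpreting the filesystem.

```python
import io

# a raw device image: one live file plus residue of a deleted file
src = bytearray(32)
src[0:4] = b"LIVE"    # data the filesystem still references
src[16:20] = b"DEAD"  # residue of a deleted file, still physically present

# dd-style copy: read fixed-size blocks, write them out, no interpretation
reader, writer = io.BytesIO(bytes(src)), io.BytesIO()
while chunk := reader.read(8):  # 8-byte "blocks" for the toy example
    writer.write(chunk)
dst = bytearray(writer.getvalue())

print(dst[16:20] == b"DEAD")  # -> True: the garbage came along
```

Since the copy never consults the filesystem, it cannot distinguish live data from garbage, which is why the 1:1 copy includes both.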
I have an HD7770 gfx card in the top slot, RadeonHD v5.14, and the M2 card in the next slot. It looks like the monitor went into standby mode and booting stopped.
Can I use the next slot, counting from top to bottom?
Edit: ...
I can answer that question myself. Those smaller PCIe slots are physically too small. The card won't fit.
The machine booted to WB with the M2 card in the top slot and the gfx card in the large slot below.
The SSD is a Kingston NV2 1TB PCIe 4.0. The adapter card is a Deltaco M.2 PCIe.
Everything seems to work fine.
Edited by TSK on 2023/4/13 22:09:41
Edit2: This is apparently a bug in the Radeon HD driver. See #100
Edit: That Kingston drive has no embedded cache. In the meantime I have implemented the Host Memory Buffer feature. It will use 64MB of main memory as NVMe cache. This will benefit durability and also give a small speedup.
I don't understand 100% what your configuration is, but if it is like this:
- X1000, RadeonHD v.5 (slot numbers according to the Nemo TRM)
- Slot 1, PCIe x16 = RadeonHD gfx
- Slot 2, PCIe x8 (x16 physical) = NVMe or any other PCIe card
then this configuration does not work - see post #49 and following. There is a reported bug in the new RadeonHD v5; Hans is working on it. It has something to do with Radeon power initialization and PCIe multiplexing of Slot 1 / Slot 2.
Workarounds are:
- Use Slot 1 for the NVMe + Slot 2 for the RadeonHD gfx, i.e. like you did
- or temporarily use the RadeonHD v3 driver - I use this solution; I have a RadeonHD v.3/v.5 selection in the boot menu.