Found out lately with Andrea that this code, while it works on all OSes, fails on OS4 with the new clib2 when getcwd() is called with a buffer larger than 1024 bytes. I.e. this is the test case:
@kas1e I don't know, but doesn't NGFS have a larger PATH_MAX than SFS (perhaps 8192 compared with 1024)? Does newlib check the filesystem and alter PATH_MAX depending on the FS in use?
IIRC 8192 is the new PATH_MAX for almost all OSes out there now; 1024 was perhaps for old OSes like AmigaOS with FFS.
As I understand it... no! It allocates memory for PATH_MAX, which is set to 1024 (AFAIU); if you try to use it with a buf or size > 1024, it will always fail:
if(len >= (int)sizeof(nti->substitute))
{
	D(("path name '%s' is too long (%ld characters total; maximum is %ld)!",name,len,sizeof(nti->substitute)-1));

	__set_errno(ENAMETOOLONG);
	goto out;
Newlib uses the same path translation so I doubt it. It just means that if you have the unix path semantics enabled (-lunix) there is little to no benefit from using a buffer larger than PATH_MAX bytes.
Actually, there is a difference between newlib and clib2 getcwd(): newlib getcwd() will return the AmigaOS-style path if __translate_amiga_to_unix_path_name() fails, whereas clib2 will just return an error. That still doesn't explain why it works when the buffer size is 1024.
PATH_MAX should not affect getcwd(), because the path returned from the system should always be less than 1024 bytes. So if you pass a larger buffer it must work (and it does work...).
@xenic The problem is that "standards" are a constantly moving target. We try to be consistent, but sometimes people just do what they want without understanding the ramifications. Other times, things are imposed on us.
Back in V1x and V2x, people used 256 byte buffers for paths and 32 byte names, and when someone made a filesystem that could handle larger names, the old software broke.
Then later on, in V3x, software started to max out the data structures in use at the time. A FIB (FileInfoBlock) could hold 107-byte names, so software started making its name buffers 108 bytes and its path buffers 256 bytes; that worked until someone decided to use ExAll().
ExAll() could effectively provide limitless-length names, but due to the usual short-sightedness of the time, file sizes were left unexpandable past 4 gig. By then people tended to limit the buffers in their software to 256 bytes for both names and paths.
Then I came along and messed up the party by creating a non-broken API that allows effectively limitless names. I kept names to 255 bytes, a length imposed on me only by legacy BSTR (BCPL string) interoperability with older filesystems, but paths are now effectively limitless too.
This is where, decades ago, I decided to finally add defines to the DOS includes, to try and limit the number of people "doing their own thing"(tm) by actually having some sort of "official" definition to reference, hopefully limiting the amount of pre-broken software being written. Those set names to 255 bytes and paths to 1024; this worked quite well until the target moved again.
The discrepancy between the relatively common 1K and the 4K lengths defined in dos.h exists because I have since added UTF-8 compatibility, and new filesystems can and do store UTF-8 encoded names. The problem with UTF-8 is that the number of bytes representing a "character" can be anywhere between 1 and 4, depending on the language.
So the old 1K is now bumped to 4K to handle a UTF-8 worst-case situation and the growing availability of resources. As long as you are not a Klingon, it probably won't matter.
However, I would implore everyone to simply define all your buffers with the DOS include values. CLIB/newlib just use the DOS calls internally, and the CLIB/newlib legacy includes should just reference the DOS include values too.
Though this is of little help with the getcwd() bug...