@sailor
Unfortunately our PowerPC processors are very constrained in how they handle PCIe, whether it's the P1022, P5020 or even the P2080. They all have predefined configurations for assigning lanes to PCIe controllers and other serial controllers (SATA, XAUI, Ethernet, etc.). You cannot assign multiple controllers to the same lanes and dynamically reconfigure depending on what you connect to those lanes.
But there's no split needed anyway unless you want to operate multiple PCIe endpoints in a single PCIe slot, for example multiple NVMe SSDs on a single PCIe card.
Unless there's a limitation in the software/firmware, you can connect a PCIe switch behind a single PCIe controller to get more physical PCIe slots. Think of the upstream link between the PCIe controller inside the PPC460EX and the PCIe switch as a backbone connection carrying multiple virtual links to the individual slots behind the switch, comparable to an Ethernet router/switch.
This is especially important for Tabor. But also nice to have for the sam460.
The X5000 works in the same way: its onboard Renesas 89H12NT12G2 PCIe switch is used to create multiple x1 ports and one x4 port (4 lanes) from a single x4 link behind a single PCIe controller inside the P5020.
The board that I have pointed to contains a five-port PCIe switch: one 5 Gbps x1 upstream port and four 5 Gbps x1 (1 lane) downstream ports.
Three of the four downstream ports are connected to a PCIe x1 connector. This limits your options to PCIe cards with x1 connectors unless you mechanically modify the PCIe slots.
There is also a similar PCIe expander board available from the same manufacturer. It uses the same ASMedia PCIe switch, but instead of routing the downstream x1 ports to x1 connectors, it routes them to four x16 connectors. Bandwidth-wise this makes no difference, but it allows you to use PCIe cards with x1, x4, x8 and x16 connectors. The cards will simply operate at x1 speed.
Here's why bandwidth doesn't really matter much for these slots:
Both the Sam460 and the ASMedia switch support PCIe generation 2. This means a 5 gigabit per second link speed per lane. With 8b/10b encoding, that results in a theoretical maximum bandwidth of 500 MByte/s. The difference between PCI and PCIe is that PCIe can send and receive in parallel, so that's 500 MByte/s in both directions at the same time.
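As a quick sanity check, the 500 MByte/s figure can be derived with a few lines of arithmetic (plain Python, just to make the 8b/10b overhead explicit):

```python
# PCIe gen 2 signals at 5 Gbit/s per lane, but 8b/10b encoding
# puts 10 bits on the wire for every 8 bits of payload.
line_rate_gbps = 5.0           # raw line rate per lane, Gbit/s
payload_bits = 8               # payload bits per encoded symbol
wire_bits = 10                 # wire bits per encoded symbol

# Effective payload rate in Gbit/s, then converted to MByte/s.
payload_gbps = line_rate_gbps * payload_bits / wire_bits
payload_mbytes = payload_gbps * 1000 / 8   # 1 Gbit = 1000 Mbit, 8 bits/byte

print(payload_mbytes)  # 500.0 MByte/s per lane, per direction
```

And because PCIe is full duplex, that 500 MByte/s is available on the receive and transmit lanes simultaneously.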
Imagine that you have a Gigabit Ethernet card connected to one slot and an NVMe SSD connected to the other slot. If you want to save a file arriving over Ethernet at 100 MByte/s to the NVMe SSD, then this is not limited by the hardware. The Ethernet stream causes 100 MByte/s on the RX (receive) lane of the upstream link for processing by the Ethernet stack. The resulting data is sent in the opposite direction to the NVMe SSD, causing a 100 MByte/s stream in parallel on the TX (transmit) lane of the upstream link.
That still leaves enough headroom for a PCIe sound card or USB card.
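To put numbers on that headroom claim, here is the same scenario as arithmetic (a sketch; the 100 MByte/s figure is the Ethernet-to-NVMe example above):

```python
lane_bw = 500.0      # MByte/s per direction on the gen-2 x1 upstream link
eth_stream = 100.0   # the Gigabit Ethernet transfer from the example

rx_used = eth_stream   # Ethernet card -> CPU on the receive lane
tx_used = eth_stream   # CPU -> NVMe SSD on the transmit lane, in parallel

rx_headroom = lane_bw - rx_used
tx_headroom = lane_bw - tx_used
print(rx_headroom, tx_headroom)  # 400.0 MByte/s left in each direction
```

Because the two transfers run on opposite lanes of the full-duplex link, neither one eats into the other's bandwidth.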
The limiting factor in bandwidth is AmigaOS4, the PPC460EX and the lack of cache coherency.
Even a modern graphics card connected to the x16 slot (which has 4 lanes) only manages up to 600 MByte/s according to GFXBench2D.
For Tabor, you would want a solution with a generation 2 x4 upstream link between Tabor and the PCIe switch, and at least one x4 downstream link in a x16 slot. The other slots can have a single lane per link. It is highly unlikely that you can max out the upstream link anyway, so you will most likely not notice that all cards share the same uplink bandwidth.
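For reference, scaling the same per-lane arithmetic to such a hypothetical gen-2 x4 uplink gives the aggregate bandwidth all the slots would share:

```python
# 500 MByte/s per lane (gen 2, 8b/10b), times 4 lanes on the uplink.
per_lane = 5.0 * 8 / 10 * 1000 / 8   # MByte/s per lane, per direction
lanes = 4
uplink_bw = per_lane * lanes

print(uplink_bw)  # 2000.0 MByte/s per direction on an x4 gen-2 uplink
```

Given the ~600 MByte/s ceiling observed on the X5000's graphics slot, 2 GByte/s of shared uplink is unlikely to be the bottleneck in practice.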
Edited by geennaam on 2023/1/17 11:35:45