So on ATX or EATX motherboards where you have like 5 or 7 PCI-E slots, you can set in BIOS how many lanes you want to use for whatever slot. So if I have a 1650v3, I have only 40 lanes. In a hypothetical situation with 6 devices, I would want: GPU gets x16, GPU gets x16, M.2 RAID gets x16, HBA gets x4, 10GbE gets x4, M.2 gets x4. But I can't have that, because that would be 60 lanes. I only get 40 lanes, so I would have to set certain things at reduced lane widths. With those 6 items I would realistically have to do something like this: GPU gets x8, GPU gets x8, M.2 RAID gets x8, HBA gets x4, 10GbE gets x4, M.2 gets x4, which is 36 lanes. If I could see and log what their PCI-E bandwidth was, I could tell what is and isn't bandwidth starved. But if the 10GbE and the single M.2 aren't heavily used, I could run those at x2 and get the extra x4 I would need to set my M.2 RAID to x16, or a GPU to x16 if those were bandwidth starved. So yes, there is a use case for this, but an extremely niche one.

Yes and no. You can choose to make a slot x16, x8, or x4/x4/x4/x4, and you can lock it to a slower speed, but locking it to a slower speed doesn't free up lanes for other use - you also can't set an M.2 drive to x2 or the like. Since each of those connections is 100% dedicated, it doesn't matter whether it's fully utilized or not - slowing it down or choosing a smaller allocation won't free up lanes for things that aren't wired. Without a PLX chip, that is - which is extremely rare these days.

Here is my MB: it has 5x 3.0 x16 slots and 1x 2.0 x16 slot (PCH maybe?). My CPU does not support 80 lanes, so if I have all 5 filled, I have to choose what I want to run at what. I am forced to decide what to run at x16 or x8 or x4 or x2 (not sure if x2 is an option, but you get my point). So if I have something at x8 and it's not maxing out, I can turn it to x4, and now I am using 4 fewer of my maximum 40 lanes, which means I can use that for something else. So if this theoretical program existed, I would be able to see what is and isn't being maxed and then optimize better. As I said, it's a weird niche use case. I would never need it, but the question was "is there a use case for this", and yes, there is.

If I set a PCIE slot to x8 instead of x16, I don't free up 2x x4 for M.2 drives, for instance - that all depends on how they tied the "optional" lanes from the chipset to things (the CPUs tend to be hard-locked to what they work on). And that optional setting tends to be very limited. In your case, you can't set the M.2 drives to x2 to "free up lanes" for the RAID device - those wires don't cross (literally), and they're tied to different things.

EDIT: the "free up lanes", dumb dumb, is freeing up my 40-lane limit. How is this so complicated? I have wasted half a day explaining this to you. NO NEED for PLX, because this has nothing to do with PLX. *walks away*

Yeah - it won't work like that. That's the fundamental issue here: the lanes are not a pool, they're literally wired in to certain things, and only certain individual items (generally under the optional list from the chipset) can be allocated around (or switched, on consumer platforms, from x16/x0 to x8/x8). Setting x8 -> x4 doesn't free up 4 lanes for something else.
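The lane arithmetic in the thread above can be sanity-checked in a few lines. A minimal sketch, using the hypothetical six-device build and the 40-lane limit of the E5-1650 v3 from the discussion (device names and widths are just the examples given there):

```python
# Check proposed PCIe lane allocations against a CPU's lane budget.
# Device list and widths are the hypothetical examples from the thread.
CPU_LANES = 40  # Xeon E5-1650 v3 exposes 40 PCIe 3.0 lanes

wishlist = {"GPU 1": 16, "GPU 2": 16, "M.2 RAID": 16,
            "HBA": 4, "10GbE": 4, "M.2": 4}
realistic = {"GPU 1": 8, "GPU 2": 8, "M.2 RAID": 8,
             "HBA": 4, "10GbE": 4, "M.2": 4}

for name, alloc in [("wishlist", wishlist), ("realistic", realistic)]:
    total = sum(alloc.values())
    verdict = "fits" if total <= CPU_LANES else "exceeds"
    print(f"{name}: {total} lanes -> {verdict} the {CPU_LANES}-lane budget")
    # wishlist: 60 lanes -> exceeds the 40-lane budget
    # realistic: 36 lanes -> fits the 40-lane budget
```

This is exactly the trade-off described: the "wishlist" allocation needs 60 lanes, so something has to drop to x8 or below before the total fits in 40.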
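On the "if this theoretical program existed" point: Linux already exposes each PCI device's negotiated link width and speed through sysfs (the `current_link_width` and `current_link_speed` attributes), which shows how each slot actually trained - though not live bandwidth utilization, so the per-device traffic logging the poster wants would still need vendor-specific tools. A minimal, Linux-only sketch (output depends entirely on the machine):

```python
# Minimal sketch: report each PCI device's negotiated link width/speed
# from sysfs on Linux. This shows the trained link, NOT live utilization.
import glob
import os

def pcie_link_status(root="/sys/bus/pci/devices"):
    """Return {device_address: (width, speed)} for devices reporting a link."""
    links = {}
    for dev in sorted(glob.glob(os.path.join(root, "*"))):
        try:
            with open(os.path.join(dev, "current_link_width")) as w, \
                 open(os.path.join(dev, "current_link_speed")) as s:
                links[os.path.basename(dev)] = (w.read().strip(),
                                                s.read().strip())
        except OSError:
            continue  # device has no link attributes, or not running on Linux
    return links

if __name__ == "__main__":
    for dev, (width, speed) in pcie_link_status().items():
        print(f"{dev}: x{width} @ {speed}")
```

The same information appears in `lspci -vv` under `LnkSta`, which is handy for spotting a card that trained at x4 in a physical x16 slot.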