SAME ISSUE
I was really hoping this was going to be it. Back to 52Pi.
Code:
massmin@massnas:~ $ sudo dd if=/dev/zero of=/home/massmin/nvme0/testfile bs=1M count=1000 &
sudo dd if=/dev/zero of=/home/massmin/nvme1/testfile bs=1M count=1000 &
sudo dd if=/dev/zero of=/home/massmin/nvme2/testfile bs=1M count=1000 &
sudo dd if=/dev/zero of=/home/massmin/nvme3/testfile bs=1M count=1000 &
wait
[1] 3189
[2] 3190
[3] 3191
[4] 3192
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 4.48754 s, 234 MB/s
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 5.15898 s, 203 MB/s
[1]   Done       sudo dd if=/dev/zero of=/home/massmin/nvme0/testfile bs=1M count=1000
[3]-  Done       sudo dd if=/dev/zero of=/home/massmin/nvme2/testfile bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 6.59622 s, 159 MB/s
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 32.615 s, 32.2 MB/s
[2]-  Done       sudo dd if=/dev/zero of=/home/massmin/nvme1/testfile bs=1M count=1000
[4]+  Done       sudo dd if=/dev/zero of=/home/massmin/nvme3/testfile bs=1M count=1000
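Side note: since dd is writing zeros through the page cache here, the headline MB/s figures partly reflect caching until a controller stalls. If anyone wants to repeat the same test with the cache bypassed, adding oflag=direct to each dd should make the per-drive numbers easier to compare (just a sketch of an alternative run, using the same mount points as above; not what I ran for the output shown):
Code:
# Same parallel-write test, but with oflag=direct so the page cache
# doesn't mask which drive is actually stalling.
sudo dd if=/dev/zero of=/home/massmin/nvme0/testfile bs=1M count=1000 oflag=direct &
sudo dd if=/dev/zero of=/home/massmin/nvme1/testfile bs=1M count=1000 oflag=direct &
sudo dd if=/dev/zero of=/home/massmin/nvme2/testfile bs=1M count=1000 oflag=direct &
sudo dd if=/dev/zero of=/home/massmin/nvme3/testfile bs=1M count=1000 oflag=direct &
wait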
Code:
massmin@massnas:~ $ dmesg -w | grep -E "pcie|nvme|md0"
[ 0.000000] Kernel command line: reboot=w coherent_pool=1M 8250.nr_uarts=1 pci=pcie_bus_safe cgroup_disable=memory numa_policy=interleave numa=fake=8 system_heap.max_order=0 smsc95xx.macaddr=2C:CF:67:8D:9F:39 vc_mem.mem_base=0x3fc00000 vc_mem.mem_size=0x40000000 console=ttyAMA10,115200 console=tty1 root=PARTUUID=2c3f7751-02 rootfstype=ext4 fsck.repair=yes rootwait
[ 1.030220] /axi/pcie@120000/rp1: Fixed dependency cycle(s) with /axi/pcie@120000/rp1
[ 1.048628] /axi/pcie@120000/rp1: Fixed dependency cycle(s) with /axi/pcie@120000/rp1
[ 1.809558] brcm-pcie 1000110000.pcie: host bridge /axi/pcie@110000 ranges:
[ 1.816554] brcm-pcie 1000110000.pcie: No bus range found for /axi/pcie@110000, using [bus 00-ff]
[ 1.825906] brcm-pcie 1000110000.pcie: MEM 0x1b80000000..0x1bffffffff -> 0x0080000000
[ 1.834211] brcm-pcie 1000110000.pcie: MEM 0x1800000000..0x1b7fffffff -> 0x0400000000
[ 1.842515] brcm-pcie 1000110000.pcie: IB MEM 0x0000000000..0x0fffffffff -> 0x1000000000
[ 1.851977] brcm-pcie 1000110000.pcie: Forcing gen 3
[ 1.857084] brcm-pcie 1000110000.pcie: PCI host bridge to bus 0000:00
[ 2.018657] brcm-pcie 1000110000.pcie: link up, 5.0 GT/s PCIe x1 (!SSC)
[ 2.582677] pcieport 0000:00:00.0: enabling device (0000 -> 0002)
[ 2.588824] pcieport 0000:00:00.0: PME: Signaling with IRQ 38
[ 2.594635] pcieport 0000:00:00.0: AER: enabled with IRQ 38
[ 2.600275] pcieport 0000:01:00.0: enabling device (0000 -> 0002)
[ 2.606476] pcieport 0000:02:01.0: enabling device (0000 -> 0002)
[ 2.612713] pcieport 0000:02:03.0: enabling device (0000 -> 0002)
[ 2.618929] pcieport 0000:02:05.0: enabling device (0000 -> 0002)
[ 2.625149] pcieport 0000:02:07.0: enabling device (0000 -> 0002)
[ 2.631432] nvme nvme0: pci function 0000:03:00.0
[ 2.636165] nvme 0000:03:00.0: enabling device (0000 -> 0002)
[ 2.726806] nvme nvme0: missing or invalid SUBNQN field.
[ 2.764106] nvme nvme0: failed to allocate host memory buffer.
[ 2.777894] nvme nvme0: 3/0/0 default/read/poll queues
[ 2.785973] nvme nvme0: Ignoring bogus Namespace Identifiers
[ 2.792667] nvme nvme1: pci function 0000:04:00.0
[ 2.797402] nvme 0000:04:00.0: enabling device (0000 -> 0002)
[ 2.888113] nvme nvme1: missing or invalid SUBNQN field.
[ 2.925402] nvme nvme1: failed to allocate host memory buffer.
[ 2.934009] nvme nvme1: 1/0/0 default/read/poll queues
[ 2.941768] nvme nvme1: Ignoring bogus Namespace Identifiers
[ 2.951111] nvme nvme2: pci function 0000:05:00.0
[ 2.955847] nvme 0000:05:00.0: enabling device (0000 -> 0002)
[ 3.049038] nvme nvme2: missing or invalid SUBNQN field.
[ 3.088970] nvme nvme2: failed to allocate host memory buffer.
[ 3.097585] nvme nvme2: 1/0/0 default/read/poll queues
[ 3.105338] nvme nvme2: Ignoring bogus Namespace Identifiers
[ 3.111995] nvme nvme3: pci function 0000:06:00.0
[ 3.116729] nvme 0000:06:00.0: enabling device (0000 -> 0002)
[ 3.209687] nvme nvme3: missing or invalid SUBNQN field.
[ 3.249266] nvme nvme3: failed to allocate host memory buffer.
[ 3.257866] nvme nvme3: 1/0/0 default/read/poll queues
[ 3.265601] nvme nvme3: Ignoring bogus Namespace Identifiers
[ 3.275086] brcm-pcie 1000120000.pcie: host bridge /axi/pcie@120000 ranges:
[ 3.282084] brcm-pcie 1000120000.pcie: No bus range found for /axi/pcie@120000, using [bus 00-ff]
[ 3.291181] brcm-pcie 1000120000.pcie: MEM 0x1f00000000..0x1ffffffffb -> 0x0000000000
[ 3.299501] brcm-pcie 1000120000.pcie: MEM 0x1c00000000..0x1effffffff -> 0x0400000000
[ 3.307805] brcm-pcie 1000120000.pcie: IB MEM 0x1f00000000..0x1f003fffff -> 0x0000000000
[ 3.316106] brcm-pcie 1000120000.pcie: IB MEM 0x0000000000..0x0fffffffff -> 0x1000000000
[ 3.325462] brcm-pcie 1000120000.pcie: Forcing gen 2
[ 3.330473] brcm-pcie 1000120000.pcie: PCI host bridge to bus 0001:00
[ 3.490656] brcm-pcie 1000120000.pcie: link up, 5.0 GT/s PCIe x4 (!SSC)
[ 3.607118] pcieport 0001:00:00.0: enabling device (0000 -> 0002)
[ 3.613257] pcieport 0001:00:00.0: PME: Signaling with IRQ 49
[ 3.619068] pcieport 0001:00:00.0: AER: enabled with IRQ 49
[ 611.718108] EXT4-fs (nvme0n1): mounted filesystem 7f06c5b2-b3df-435c-a6fa-1c90a9a460e0 r/w with ordered data mode. Quota mode: none.
[ 616.825324] EXT4-fs (nvme1n1): mounted filesystem 02aa2226-4b7f-4c48-b876-9e1a55f87814 r/w with ordered data mode. Quota mode: none.
[ 620.577818] EXT4-fs (nvme2n1): mounted filesystem 75834661-508d-4400-8125-175ccceea0c2 r/w with ordered data mode. Quota mode: none.
[ 624.764133] EXT4-fs (nvme3n1): mounted filesystem 209dc387-2df4-4a6b-a4c2-4d8a9b7d4440 r/w with ordered data mode. Quota mode: none.
[ 706.221544] nvme nvme3: controller is down; will reset: CSTS=0x3, PCI_STATUS=0x10
[ 706.291643] nvme nvme3: failed to allocate host memory buffer.
[ 706.294587] nvme nvme3: 1/0/0 default/read/poll queues
[ 706.315202] nvme nvme3: Ignoring bogus Namespace Identifiers
[ 708.013532] nvme nvme1: controller is down; will reset: CSTS=0x3, PCI_STATUS=0x10
[ 708.083635] nvme nvme1: failed to allocate host memory buffer.
[ 708.086595] nvme nvme1: 1/0/0 default/read/poll queues
[ 708.124897] nvme nvme1: Ignoring bogus Namespace Identifiers
[ 714.165445] nvme nvme0: controller is down; will reset: CSTS=0x3, PCI_STATUS=0x10
[ 714.235597] nvme nvme0: failed to allocate host memory buffer.
[ 714.243634] nvme nvme0: 3/0/0 default/read/poll queues
[ 714.251269] nvme nvme0: Ignoring bogus Namespace Identifiers
As you can see, the file transfers did eventually complete (nvme3 crawled along at 32.2 MB/s), but looking at the dmesg log you can clearly see it is having to reset the controllers (nvme0, nvme1 and nvme3), just like before!
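In case anyone wants to repeat the check on their own setup, this one-liner (nothing fancy, just standard dmesg/grep counting the "controller is down" messages shown above) tallies how many resets each controller has had since boot:
Code:
# Count "controller is down" reset events per NVMe controller since boot
sudo dmesg | grep -o 'nvme nvme[0-9]*: controller is down' | sort | uniq -c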
Statistics: Posted by rmassey — Sat Feb 15, 2025 1:42 am