Category Archives: Sys Admin
Extra Content #2: Motherboard upgrade from ASUS PRIME B550M-A to ASUS ProArt B550-CREATOR
This post is part of my video blog: In The Lab With Jay Jay.
You can support this channel on Patreon: patreon.com/JohnElliotV
This is a long and unedited video of me upgrading my computer from ‘longing’ https://www.jj5.net/sixsigma/Longing to ‘lore’ https://www.jj5.net/sixsigma/Lore
As I said, today was computer upgrade day. I made a video of me doing the upgrade; it runs for about two hours: Motherboard upgrade from ASUS PRIME B550M-A to ASUS ProArt B550-CREATOR. It was pretty uneventful and so far it seems to have been successful too. Yay.
Thanks very much for watching! And please remember to hit like and subscribe! :)
Following is a product I use picked at random from my collection which may appear in my videos. Clicking through on this to find and click on the green affiliate links before purchasing from eBay or AliExpress is a great way to support the channel at no cost to you. Thanks!
aToolTour Black Hexagon Deburring Drill Bit
Let’s go shopping!
Upgrading my server today
Well I’ve been procrastinating on, and preparing for, my server upgrade and today is gonna be the day.
Later this afternoon I will be upgrading ‘longing’ to ‘lore’.
This will involve:
- installing a new motherboard
- removing a Blu-ray drive and replacing it with a SATA mobile rack
- installing some new M.2 NVMe drives
- switching from Ubuntu to Debian
- importing my old spinning rust ZFS zpool
Fingers crossed. Wish me luck!
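For the zpool import step, the rough plan looks like this; a sketch only, with ‘tank’ standing in for my actual pool name:

$ sudo zpool import             # scan the attached disks for importable pools
$ sudo zpool import -f tank     # force the import, since the pool was last used by ‘longing’
$ sudo zpool status tank        # confirm every vdev is ONLINE before going any further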
LogoFAIL: A New Security Flaw in UEFI
Today I read New Security Flaw in UEFI Affects Millions of PCs. It’s about vulnerabilities in UEFI image-parsing code. *sigh*. The image-parsing bugs distress me the most.
ChatGPT explains ZFS scrub operation to me
Now I understand ZFS scrub better. Thanks ChatGPT! I hope we can be friends, you, and me, and all of your kind.
ChatGPT 3.5
User:
when zpool scrub reports bytes issued, what does it mean?
ChatGPT:
When zpool scrub reports “bytes issued,” it refers to the amount of data that has been read from the storage devices during the scrub operation.
A ZFS scrub is a process that checks all data in the pool for errors and automatically repairs any detected errors. It involves reading all the data in the pool and comparing it to the checksums stored with the data. If errors are found, ZFS will attempt to correct them using redundant copies (if available).
The “bytes issued” metric indicates the total amount of data read from the pool during the scrub. It’s a measure of the workload that the scrub operation has imposed on the storage subsystem. In general, a scrub is a resource-intensive operation, and the “bytes issued” metric can give you an idea of how much I/O (Input/Output) activity is being generated by the scrub process.
Keep in mind that this metric doesn’t necessarily indicate errors or problems with the pool; it’s more of a diagnostic measure to understand the activity and workload associated with the scrubbing process. If you see a high number of bytes issued, it simply means that a significant amount of data has been read from the pool during the scrubbing operation.
User:
if bytes issued is the data read, what is the bytes scanned?
ChatGPT:
In the context of a ZFS scrub operation, the “bytes scanned” metric refers to the total amount of data that has been read and checked for errors. This includes both the data that is actively used (not just the spare or redundant copies) and any parity or checksum information associated with that data.
When ZFS performs a scrub, it reads through all the data in the pool, including primary and spare copies, and it also reads and checks the checksums associated with that data. The “bytes scanned” metric represents the total size of the dataset or pool that has been read during the scrubbing process.
To summarize:
“Bytes issued” is the amount of data read, indicating the workload imposed on the storage subsystem.
“Bytes scanned” is the total amount of data, including both the active data and associated checksums, that has been read and checked for errors during the scrub operation.
These metrics are useful for understanding the scope of the scrubbing process and can be monitored to assess the health and integrity of the ZFS pool. If errors are detected during the scan, ZFS will attempt to repair them using redundancy or other available mechanisms.
User:
why am I doubtful about your answer?
ChatGPT:
If you have doubts about the information provided, I encourage you to seek additional sources or clarify specific points you find confusing. It’s always a good practice to verify technical information, especially when it comes to system administration or storage technologies.
If there’s a particular aspect of my response that you find unclear or if you have additional questions, please feel free to ask for further clarification. I’m here to assist you and provide accurate information to the best of my ability.
User:
how is bytes scanned different to bytes issued? are they both measures of bytes read?
ChatGPT:
I apologize for any confusion in my previous responses. Let me clarify the distinction between “bytes scanned” and “bytes issued.”
In the context of ZFS scrub operations:
Bytes Scanned: This refers to the total amount of data that has been read and checked for errors during the scrub operation. It includes the actual data as well as any associated checksums, metadata, or parity information. “Bytes scanned” represents the entire content of the pool that was examined during the scrub, including both active data and any additional information needed for verification.
Bytes Issued: This metric specifically measures the amount of data issued to the storage subsystem during the scrub. It includes the actual data read from the disks, but it does not include any additional metadata, checksums, or parity information. “Bytes issued” is a subset of “bytes scanned” and represents the workload imposed on the storage devices.
In summary, both metrics involve the amount of data read during a scrub, but “bytes scanned” encompasses all the data and associated information checked during the process, while “bytes issued” focuses specifically on the workload sent to the storage subsystem in the form of data reads.
User:
why are there many gigs of bytes scanned with zero bytes issued?
ChatGPT:
The scenario where you observe many gigabytes of “bytes scanned” with zero “bytes issued” in a ZFS scrub operation can occur when ZFS is checking the integrity of the data without actively reading it from the disks. Let me break down the possible reasons for this:
Checksum Verification: ZFS uses checksums to ensure data integrity. During a scrub, ZFS may verify the checksums of the data without necessarily reading the actual data from disk if the checksums match the expected values. In this case, “bytes scanned” would include the entire dataset, but “bytes issued” would be zero because the actual data doesn’t need to be read.
Caching and Prefetching: ZFS employs caching mechanisms, and if the required data is already in the cache, it might not be necessary to issue new read requests to the storage devices during a scrub. The system can use cached data for checksum verification without incurring additional I/O operations.
Redundancy and Parity Checks: If ZFS detects errors in the metadata, checksums, or redundancy information without finding errors in the actual data blocks, it may not need to read the redundant copies or parity blocks during the scrub. This could contribute to “bytes scanned” being non-zero while “bytes issued” remains at zero.
In summary, the ZFS scrub operation involves both checking data integrity and repairing errors. If the data and associated metadata are consistent and error-free, “bytes issued” may remain zero even though ZFS is scanning the entire dataset to ensure its integrity. This behavior is part of ZFS’s design to optimize performance and minimize unnecessary I/O operations during scrubbing when possible.
User:
does nonzero bytes issued imply there were checksum errors?
ChatGPT:
Yes, a nonzero value for “bytes issued” in the context of a ZFS scrub often implies that there were checksum errors detected during the scrub operation. When ZFS encounters checksum errors, it needs to issue read requests to retrieve the actual data from disk, allowing it to attempt to repair or correct the errors.
Here’s a breakdown of the process:
Checksum Verification (Bytes Scanned): ZFS first scans the entire dataset, checking the integrity of data and associated metadata using checksums. This process contributes to the “bytes scanned” metric.
Checksum Errors Detected: If ZFS detects checksum errors during the scan, it identifies the affected blocks or sectors where the errors occurred.
Data Repair (Bytes Issued): To correct the errors, ZFS issues read requests (“bytes issued”) to retrieve the actual data from redundant copies, spare devices, or parity information. The system then uses this redundant information to repair or reconstruct the corrupted data.
So, if “bytes issued” is nonzero, it indicates that ZFS had to actively read data to correct errors detected during the scrub. This is a key aspect of ZFS’s self-healing capabilities, where it can automatically repair data corruption when redundancy or spare copies are available.
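For the record, these counters show up in the “scan:” line of zpool status output on reasonably recent OpenZFS. To watch them during a scrub yourself (‘tank’ is a placeholder pool name):

$ sudo zpool scrub tank             # start a scrub
$ zpool status tank                 # the “scan:” line reports the scanned and issued totals
$ watch -n 10 zpool status tank     # or poll the progress every ten seconds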
ChatGPT on RMA for SMART errors
ChatGPT told me this:
ChatGPT 3.5
User:
are SMART errors good enough reason for RMA?
ChatGPT:
SMART (Self-Monitoring, Analysis, and Reporting Technology) errors are indicators that a storage device, such as a hard drive or SSD, has detected potential issues with its health or performance. SMART errors can cover a range of issues, including bad sectors, high temperature readings, and other abnormalities.
Whether SMART errors are considered a good enough reason for a Return Merchandise Authorization (RMA) depends on the severity and type of errors, as well as the specific policies of the manufacturer or seller. Here are some general considerations:
Severity of Errors: Some SMART errors may be relatively minor and not immediately indicative of a catastrophic failure. Others, particularly those related to critical components like bad sectors, can be more serious. If the errors suggest a high risk of imminent failure, it could be a valid reason for an RMA.
Warranty Status: If your device is still under warranty, the presence of SMART errors is often a reasonable cause for an RMA. Warranty policies vary between manufacturers, so it’s important to check the terms and conditions provided by the company.
Manufacturer’s Policies: Different manufacturers have different policies regarding what constitutes a valid reason for an RMA. Some companies may require a certain threshold of errors or specific criteria to be met before they authorize a return.
Backup and Data Loss: Before seeking an RMA, it’s crucial to back up your data, especially if the SMART errors suggest potential drive failure. Some manufacturers may request proof that you’ve backed up your data before processing an RMA.
Communication with Support: If you’re experiencing SMART errors and are considering an RMA, it’s a good idea to contact the manufacturer’s customer support. They can provide guidance on whether the errors are within acceptable limits or if they warrant a replacement.
Remember that not all SMART errors are equal, and some devices may continue to function adequately despite certain types of errors. However, if you’re concerned about the health of your storage device and it’s under warranty, it’s generally a good idea to reach out to the manufacturer for guidance on whether an RMA is necessary.
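If you do go down the RMA path, smartctl from the smartmontools package is the usual way to collect the evidence first; a quick sketch, with /dev/sdX as a placeholder:

$ sudo smartctl -H /dev/sdX            # overall health self-assessment (PASSED or FAILED)
$ sudo smartctl -a /dev/sdX            # full SMART attributes and error log, handy to attach to the RMA request
$ sudo smartctl -t long /dev/sdX       # kick off an extended self-test
$ sudo smartctl -l selftest /dev/sdX   # review the self-test results afterwards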
ChatGPT on dd block size
I was pretty impressed by this:
ChatGPT 3.5
User:
if you’re going to use dd to write random data over a hard drive device what block size is best?
ChatGPT:
When using the dd command to write random data over a hard drive device, the choice of block size can impact the performance of the operation. The optimal block size can vary depending on the specific circumstances, but there are some general guidelines you can follow.
A common recommendation is to use a block size that is a multiple of 4 KB (4096 bytes), which is often the physical sector size of modern hard drives. This alignment can improve performance because it matches the underlying hardware.
Here’s an example of using dd with a block size of 4 KB:
$ dd if=/dev/urandom of=/dev/sdX bs=4K
Replace /dev/sdX with the actual device name you want to overwrite, and adjust the block size (bs) according to your needs.
However, keep in mind that there is no one-size-fits-all answer, and performance may vary depending on your specific hardware and use case. You might want to experiment with different block sizes to see what works best for your situation.
Additionally, when overwriting a hard drive for security reasons (e.g., before decommissioning or selling it), consider using tools specifically designed for secure erasure, such as shred or wipe. These tools often implement more sophisticated algorithms to ensure data is thoroughly and securely overwritten.
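For my own notes, the variants I would actually reach for; /dev/sdX is a placeholder again, and a bigger block size such as 1M generally cuts syscall overhead compared to 4K:

$ sudo dd if=/dev/urandom of=/dev/sdX bs=1M status=progress   # larger blocks plus a live progress readout
$ sudo shred -v -n 1 /dev/sdX                                 # or one verbose random pass with the purpose-built tool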
More chromium woes
Man my chromium browser has been giving me a lot of grief lately. Twice so far I’ve needed to reboot to fix the problem. This is the third time, and rebooting is always such a hassle. :(
-------------------
Thu Nov 23 20:20:54 [bash:5.2.15 jobs:0 error:0 time:0]
jj5@charisma:/home/jj5
$ chromium
[1123/202056.945777:ERROR:elf_dynamic_array_reader.h(64)] tag not found
[1123/202056.945823:ERROR:elf_dynamic_array_reader.h(64)] tag not found
Trace/breakpoint trap (core dumped)
-------------------
New motherboard: ASUS ProArt B550-CREATOR
Ordered myself a new motherboard today, the ASUS ProArt B550-CREATOR. This new board will replace my ASUS PRIME B550M-A in my new/upgraded computer ‘lore’ (which is replacing ‘longing’).
This new board has 2x M.2 PCIe 4.0 (as with the old motherboard), 2x 2.5Gbps Ethernet (compared to 1x 1Gbps Ethernet in my old motherboard), and 3x PCIe 16x slots (compared to 1x PCIe 16x slot in my old motherboard).
I will put in 2x M.2 PCIe 4x adapters and 2x M.2 PCIe 1x adapters, which means I will be able to install 6x M.2 drives: 2x Kioxia 256GB, 2x Samsung 990 Pro 2TB, and 2x PNY 1TB.
I will also put in a quad 2.5Gbps PCIe Ethernet adapter which will give me six 2.5Gbps Ethernet ports in total (two onboard and four on PCIe 16x).
When I’m done, all four RAM slots will be full (4x Corsair Vengeance 32GB 3200MHz DDR4), both the M.2 slots will be full, all three of the PCIe 16x slots will be full, all four SATA ports will be full (3x Western Digital WD Ultrastar DC HC550 16TB and 1x ASUS BW-16D1HT PRO 16X Blu-ray burner), and both PCIe 1x slots will be full too. Also there will be a bunch of high speed USB ports available on both the front and the back of the case. I feel like I have really squeezed a lot of juice out of this system; it’s operating very close to capacity.
I’m kinda dreading plugging in the USB attachments to the front panel, but I guess if I RTFM and apply some thought I will be able to figure that out.
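Once it’s all assembled I figure a few quick checks will confirm everything enumerated properly; a sketch assuming Debian with pciutils and nvme-cli installed:

$ lspci | grep -i ethernet   # expect six 2.5Gbps NICs: two onboard plus four on the card
$ sudo nvme list             # expect all six M.2 NVMe drives
$ lsblk                      # the SATA disks, the Blu-ray burner, and the NVMe namespaces
$ free -h                    # expect 128G of RAM in total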
More chromium woes
So it seems that my trouble with chromium is not over. Today a new twist. Thank God I still have Firefox! I will try updating and rebooting again, but can’t do that at the moment.
-------------------
Mon Nov 20 10:11:08 [bash:5.2.15 jobs:0 error:0 time:0]
jj5@charisma:/home/jj5
$ chromium
[4085207:4085207:1120/101109.660602:ERROR:policy_logger.cc(157)] :components/enterprise/browser/controller/chrome_browser_cloud_management_controller.cc(163) Cloud management controller initialization aborted as CBCM is not enabled.
[4085207:4085238:1120/101109.672026:ERROR:login_database.cc(1100)] Password decryption failed, encryption_result is 2
[... the line above repeats 24 times in total ...]
[4085207:4085207:1120/101109.758579:ERROR:cmd_buffer_helper.cc(141)] ContextResult::kFatalFailure: CommandBufferHelper::AllocateRingBuffer() failed
Fontconfig error: Cannot load default config file: No such file: (null)
[1120/101109.770548:ERROR:elf_dynamic_array_reader.h(64)] tag not found
[1120/101109.770620:ERROR:elf_dynamic_array_reader.h(64)] tag not found
30185030681216:error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED:../../third_party/boringssl/src/ssl/handshake.cc:393:
Trace/breakpoint trap (core dumped)
-------------------
// 2023-11-22 jj5 – UPDATE: a reboot fixed this problem.