When RAID-6 Goes 6 Under: A Six-Drive Recovery Success Story

When an enterprise-scale, government-adjacent client recently contacted me about a completely failed RAID array, I initially expected another fairly straightforward RAID-5 recovery. What I discovered instead was a RAID-6 configuration with multiple drive failures that would push both my equipment and expertise to their limits, and showcase exactly why professional data recovery equipment makes all the difference.

What Went Wrong?

The client had been running what they believed was a RAID-5 array (single drive redundancy) using six Seagate 3TB drives. However, upon inspection, I discovered this was actually a RAID-6 configuration—which theoretically offers two-drive failure tolerance. Unfortunately, theory and reality don’t always align when you’re dealing with multiple compromised drives and missing RAID parameters.

Here’s what I was facing:

  • Drive 4: Completely dead—wouldn’t ID, never read a single sector
  • Drive 5: Severely compromised and required intensive recovery procedures
  • Drive 6: Failing with critical RAID metadata corruption at the end sectors
  • Missing RAID Parameters: No documentation of stripe size, drive order, or configuration details

The situation was further complicated by the fact that even though RAID-6 can theoretically handle two drive failures, I had three drives with significant issues—and the two “good” recovered drives still weren’t perfect images.

The Recovery Process

Step 1: Professional Drive Imaging

Before attempting any reconstruction work, I removed all six drives from the enclosure and connected them individually to my DeepSpar Stabilizer 10Gb systems for diagnosis and forensic imaging. This is absolutely critical—you get exactly one chance to extract data from failing drives before they potentially die completely.

For drives 1, 2, 3, and 6, the process was relatively straightforward:

  • Connected each drive to DeepSpar Stabilizer 10Gb
  • Conducted Express Diagnostics
  • Disabled SMART and other heavy processes that could stress failing drives
  • Performed sector-by-sector forensic imaging from sector 0 to max LBA
  • Monitored progress and handled troubled areas strategically

Drive 6 presented complications with failing sectors at the end of the drive—exactly where critical RAID configuration data is typically stored. This explained the array’s initial failure symptoms.

Step 2: The Challenge of Drive 5

Drive 5 proved to be the most problematic, requiring escalation to my DeepSpar Disk Imager for intensive recovery work:

  • Built a complete heads map using firmware diagnostics
  • Performed media testing to isolate potential head failures
  • Connected terminal leads for low-level firmware manipulation
  • Manually cleared the G-List and SMART data via terminal commands
  • Regenerated the drive’s translator tables
  • Configured sophisticated multi-pass imaging algorithms using R-Studio Technician

This process required multiple iterations as the drive continued degrading. Each pass collected additional data using different recovery strategies—forward reading, backward reading, and various algorithmic approaches to coax data from failing sectors.

After roughly one week of intensive work, I successfully recovered over 96.5% of the data from this drive. For a drive that initially appeared completely unrecoverable, this was remarkable.

Step 3: RAID Reconstruction Challenges

With forensic images of five drives (Drive 4 remained completely unrecoverable), I began the complex process of RAID-6 reconstruction. This proved exceptionally challenging because:

  • The missing Drive 4 eliminated one level of redundancy
  • Drives 5 and 6 both had imperfect sectors in critical areas
  • RAID parameters were completely unknown
  • Partition data on Drive 6 was inconsistent with Drive 5

Step 4: Parameter Discovery and Final Recovery

Through methodical testing of different RAID configurations and extensive analysis of the available drive data, I eventually discovered the correct parameters. However, even with proper RAID reconstruction, approximately 10% of files showed corruption due to the multiple drive compromises.
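
To give a sense of the search space involved when none of those parameters are documented, here's a quick back-of-the-envelope calculation in PowerShell. The stripe sizes and layout count below are common defaults I'm using purely for illustration, not this array's actual values:

# Rough size of the brute-force space for an unknown six-member RAID-6.
# (Illustrative values only; metadata clues and filesystem signatures are what
# let you prune this down to the handful of candidates actually worth testing.)
$stripeSizes   = 16KB, 32KB, 64KB, 128KB, 256KB
$driveOrders   = 1..6 | ForEach-Object -Begin { $f = 1 } -Process { $f *= $_ } -End { $f }   # 6! = 720
$parityLayouts = 4    # common left/right, synchronous/asynchronous rotations
$stripeSizes.Count * $driveOrders * $parityLayouts    # 14,400 raw combinations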

At this point, I made a critical decision to investigate alternative RAID parameter configurations. During the client’s file transfer process, I discovered additional parameter adjustments that dramatically improved reconstruction quality.

The result? A near-100% successful recovery of all client data.

The Technical Reality: Why This Was So Complex

RAID-6 arrays use sophisticated mathematical algorithms (typically Reed-Solomon error correction) to calculate parity across multiple drives. When you lose the exact configuration parameters AND have multiple drives with read errors in critical areas, reconstruction becomes exponentially more difficult.
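
For the curious, here is a rough sense of how the first parity layer works. This is a minimal PowerShell sketch of simple XOR (P) parity across three hypothetical stripe units; the second (Q) parity that gives RAID-6 its dual-failure tolerance is conceptually similar but uses Reed-Solomon coefficients over GF(2^8), which I'm omitting for brevity:

# Minimal sketch: XOR (P) parity across hypothetical 4-byte stripe units from three data drives.
# Any ONE missing unit can be rebuilt by XOR-ing the parity with the surviving units;
# the Reed-Solomon (Q) parity is what covers a second simultaneous loss.
$dataBlocks = @(
    [byte[]](0x41,0x42,0x43,0x44),   # stripe unit from drive 1
    [byte[]](0x45,0x46,0x47,0x48),   # stripe unit from drive 2
    [byte[]](0x49,0x4A,0x4B,0x4C)    # stripe unit from drive 3
)
$parity = [byte[]]::new(4)
foreach ($block in $dataBlocks) {
    for ($i = 0; $i -lt $parity.Length; $i++) {
        $parity[$i] = $parity[$i] -bxor $block[$i]
    }
}
# The P-parity block that would be written to the parity drive for this stripe:
'{0:X2} {1:X2} {2:X2} {3:X2}' -f $parity[0], $parity[1], $parity[2], $parity[3]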

Key factors that made this recovery especially challenging:

  1. Triple Drive Compromise: While RAID-6 can handle two drive failures, having three problematic drives pushes beyond design limits
  2. Metadata Corruption: Critical RAID configuration data was compromised on multiple drives
  3. Parameter Discovery: Without known stripe size, drive order, and parity configuration, I had to test dozens of combinations
  4. Sector-Level Precision: Even small gaps in critical areas can render entire file sets unrecoverable

Prevention and Lessons Learned

This case reinforces several critical points about RAID storage:

  • Document Your RAID Configuration: Always maintain records of stripe size, drive order, and configuration details
  • Monitor Drive Health Proactively: Regular SMART diagnostics can identify failing drives before array failure
  • Backup Beyond RAID: RAID is NOT backup—maintain separate backup systems for critical data
  • Professional Recovery Equipment Matters: Consumer recovery software would have been completely useless in this scenario

The Bottom Line

Professional data recovery isn’t just about having the right software—it’s about having enterprise-grade hardware specifically designed for failing drive recovery, combined with the experience to navigate complex multi-drive failure scenarios.

In this case, the combination of DeepSpar professional recovery equipment, advanced RAID reconstruction software, and systematic parameter discovery made the difference between total data loss and successful recovery.

Facing a failed RAID array or critical data loss? Don’t attempt multiple recovery software programs or reinitialize drives—these actions can make professional recovery impossible. Contact a professional with the right equipment before it’s too late.


If you’re looking for computer help in the Louisville area, look no further. I’ve been successfully recovering data from failed RAID arrays since 2006—call me today and get it done right the first time!

CASE STUDY: When Precision Undervolting Saves a $1,000+ Motherboard Replacement

Advanced GPU voltage tuning as a diagnostic tool and workaround for marginal hardware

Here's a case that perfectly illustrates why methodical, evidence-based diagnostics can mean the difference between a catastrophic repair bill and an elegant engineering solution. Sometimes the most sophisticated problems require the most sophisticated solutions, and this particular Lenovo Legion Pro 7 gaming laptop stretched my knowledge of the intersection of thermal management, voltage regulation, and component-level failure analysis.

The Problem: High-Performance Gaming Laptop with Escalating Failures

A client brought me their top-tier gaming machine—a Lenovo Legion Pro 7 16IRX8H equipped with an Intel 13th-gen Core i9 and NVIDIA RTX 4080 laptop GPU. The symptoms were classic but troubling: intermittent system lockups during graphically intensive tasks, with the dedicated GPU seemingly “vanishing” from the system entirely. The client had already performed extensive software-level troubleshooting, correctly isolating the issue to what appeared to be hardware failure.

This wasn’t a case of simple thermal throttling or driver corruption. This was a machine that would run perfectly for minutes or hours, then suddenly lock up completely during gaming or GPU-accelerated workloads. When it did lock up, the NVIDIA GPU would disappear from Device Manager entirely until a full power cycle.

Further complicating matters, the replacement board (which includes the dedicated NVIDIA GPU) runs over $1,000 for this unit, and the client was (understandably) not particularly interested in replacing it; with labor and all, we'd easily have been in the $1,300 range when all was said and done. Ouch.

Initial Assessment: Following the Evidence Trail

My initial inspection revealed severe thermal compromise—the laptop’s cooling system was heavily obstructed with dust and debris, creating dangerous thermal conditions that were undoubtedly contributing to instability. However, experienced technicians know that thermal issues alone rarely cause GPUs to completely disappear from the system bus.

I performed a complete thermal service: full teardown, heatsink removal, cleaning of the old thermal compound that had “pumped out” from the processor dies, and reapplication of high-performance Arctic MX-6. This addressed the obvious thermal problems, but as suspected, the core instability persisted even with pristine temperatures.

The Diagnostic Deep Dive: When Standard Approaches Fail

With thermal issues eliminated and a fresh Windows installation ruling out software problems, I moved into advanced diagnostic territory. Using HWiNFO64 for comprehensive system monitoring, I began logging dozens of parameters during stress testing to capture the exact moment of failure.

This is where AI-powered log analysis proved invaluable—pattern recognition across massive datasets revealed what manual analysis might have missed. The evidence was conclusive: the instability wasn’t purely thermal, but was triggered by voltage instability in the dedicated RTX 4080 GPU.

Specifically, when the GPU attempted to boost to its maximum performance state, it would request voltages in excess of 0.975V—a voltage level that a marginal component within either the GPU die itself or its immediate power delivery system (VRMs) simply couldn’t handle reliably. This would cause an instantaneous hardware-level failure, resulting in system lockup and GPU disappearance.
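
You don't need anything exotic to do the basic correlation yourself. Here's a hedged PowerShell sketch of the idea: export the HWiNFO64 log to CSV and look at the GPU voltage samples recorded just before the log abruptly stops (i.e., the lockup). The file path and column names are assumptions and vary by HWiNFO version and sensor layout, so adjust them to match your own export:

# Sketch: pull GPU core voltage samples from an exported HWiNFO64 CSV log and
# inspect the readings leading up to the point where logging stopped (the crash).
# 'GPU Core Voltage [V]' is an assumed column name; check your export's header row.
$log = Import-Csv 'C:\Logs\hwinfo-gpu.csv'
$samples = $log | Where-Object { $_.'GPU Core Voltage [V]' } |
    Select-Object Time, @{ n = 'Voltage'; e = { [double]$_.'GPU Core Voltage [V]' } }
$samples | Select-Object -Last 10 | Format-Table      # what was the GPU requesting right before the hang?
($samples | Measure-Object -Property Voltage -Maximum).Maximum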

The Engineering Solution: Precision Software Workaround

Here's where things get interesting. A traditional repair approach would involve motherboard replacement, easily $1,000+ in parts and labor for a machine of this caliber. However, understanding the specific failure mechanism opened the door to a sophisticated software-based solution that may well prove durable for years to come (if we're lucky).

I implemented a two-part precision workaround:

1. Precision Voltage Limiting via MSI Afterburner

I established a definitive maximum voltage limit of 875 millivolts (0.875V) for the GPU—exactly 100mV below the failure threshold identified through testing. This creates an electronic “guardrail” that prevents the GPU from ever requesting the unstable voltage state that triggers the crash.

The beauty of this approach is that it’s not just preventive—it’s actually, in some ways, performance-optimizing. By preventing the GPU from reaching inefficient, high-voltage states, the chip can maintain higher, more stable boost clocks within its power envelope.

2. Boot-Safe Graphics Mode Implementation

The secondary issue of warm restart hangs required addressing the boot sequence. In “Discrete Graphics” mode, the BIOS attempts to initialize the problematic GPU before Windows loads—and before MSI Afterburner can apply protective voltage limits.

By configuring the system for “Hybrid Mode” (NVIDIA Optimus), the laptop boots using the integrated Intel graphics, leaving the discrete GPU dormant until Windows fully loads and Afterburner applies its protective voltage profile. This completely eliminates boot-related hangs.

Performance Validation: No Compromises

The proof is in the benchmarks. Post-repair stress testing showed:

  • Sustained GPU clocks: 2223 MHz average during extended stress testing
  • Full power utilization: 169W power draw (maximum spec)
  • Benchmark scores: 10,831 in Unigine Superposition 4K Optimized—solidly in the upper range for laptop RTX 4080s
  • Temperature management: Safe operating temperatures throughout testing

The undervolt isn’t necessarily a performance reduction—it’s efficiency optimization that can in some cases allow the GPU to maintain higher clocks more consistently within its thermal and power constraints.

The Broader Implications: When Component-Level Tolerances Fail

This case highlights a crucial reality in modern high-performance computing: manufacturing tolerances create edge cases where individual components may not reliably handle their own specified operating parameters. Silicon lottery effects, minor VRM variations, and microscopic manufacturing defects can create these “marginal component” scenarios.

For fellow technicians, this represents a diagnostic approach that can salvage hardware that would otherwise require costly replacement:

  1. Comprehensive logging during failure conditions
  2. Voltage-specific stress testing to identify failure thresholds
  3. Precision software limiting to create stable operating envelopes
  4. Boot sequence modification to prevent pre-OS failures

For laptop owners, this demonstrates that even defective or degraded hardware can sometimes still be tolerated under very specific limits and guardrails, intelligently imposed on the system after careful analysis and planning.

The Long-Term Perspective: Managing Marginal Hardware

I was transparent with the client about the nature of this solution. While highly effective, this is a workaround for marginal hardware, not a cure for defective hardware. With any luck, the machine will remain stable indefinitely under these conditions, but it’s impossible to guarantee that the underlying marginal component won’t degrade further over time.

The critical requirements for long-term stability:

  • MSI Afterburner must launch with Windows to apply voltage protection
  • Hybrid Graphics Mode must remain enabled to prevent boot hangs
  • Profile preservation (saved to slot #1 for easy recovery if settings are lost)

It’s worth noting that this type of diagnostic work relies heavily on advanced tooling and methodology that are probably beyond the scope of the vast majority of repair shops. Comprehensive system monitoring, AI-assisted log analysis, and precision voltage tuning require both specialized software and the experience to interpret complex datasets.

For the client, this represented a complete repair for the cost of labor alone—no parts, no motherboard replacement, no data migration headaches. The machine now performs at its full potential while remaining completely stable—nearly a year after the initial repair. The total cost? In this case, around $350.

The Bottom Line

Sometimes the most expensive problems have the most elegant solutions—if you know where to look. Modern diagnostic techniques, combined with deep understanding of component-level behavior, can often salvage hardware that conventional approaches would simply replace.

This Lenovo Legion Pro 7 is now running as a stable, top-tier gaming machine. The client avoided a massive repair bill, kept their familiar system configuration, and gained insights into the sophisticated engineering that goes into true technical problem-solving.

As always, this type of advanced diagnostic and repair work requires professional-grade tools and expertise. While the principles are educational, attempting voltage modifications without proper understanding and monitoring equipment can result in permanent hardware damage.

If you’re dealing with intermittent system instability, GPU disappearance issues, or other complex hardware problems in the Louisville area, don’t assume the worst-case scenario. Sometimes there’s a better solution—you just need the right diagnostic approach to find it.

The Case of the Vanishing 8TB: A RAID-0 Recovery Adventure

When a client recently brought me a completely non-functional TRIPP-LITE RAID enclosure, I knew I was in for an interesting afternoon. What started as a routine data recovery job quickly turned into one of the more technically exotic cases this month—and a perfect example of why RAID-0 arrays can be both a blessing and a curse.

What Went Wrong?

The client had been using an external dual-drive RAID enclosure that suddenly stopped working. After some initial troubleshooting, they discovered they had two 4TB drives configured in RAID-0 (striped array), giving them 8TB of total capacity with improved performance—but zero redundancy.

Here’s where things got complicated:

  • Multiple Recovery Attempts: The client had already tried several recovery tools, including Stellar recovery software, which could only find file headers with no recoverable content
  • Accidental Initialization: In a moment of desperation, they accidentally initialized the array using macOS, effectively wiping critical RAID metadata from both the beginning and end of the drives
  • Missing Documentation: The RAID parameters for this particular enclosure model weren’t published anywhere—meaning I was working completely blind

The Recovery Process

Step 1: Forensic Imaging

Before touching the original drives, I removed them from the enclosure and connected each to professional DeepSpar disk imagers. This created bit-perfect forensic copies of both drives, ensuring that no additional data could be lost during any recovery attempts; it's the first and most critical step leading into logical data recovery work in these scenarios. One of the drives was mechanically unstable, which explained why the array had begun experiencing issues in the first place. However, some quick firmware modifications, disabling SMART, and other prep work made imaging with my world-class hardware and software tools relatively uneventful.

Step 2: RAID Parameter Hunting

With the images safely stored, I began the painstaking process of determining the original RAID configuration. Using R-Studio Technician and UFS Explorer Professional Recovery software, I scanned the entire 8TB array trying to deduce the stripe pattern.

I tested every conventional RAID-0 configuration:

  • Different stripe sizes (from standard 64KB down to uncommon smaller sizes)
  • Various drive orders
  • Different offset calculations

Step 3: The Breakthrough

After working through the weekend testing dozens of parameter combinations, I finally discovered the culprit: an extraordinarily rare 512-byte stripe size. Most RAID-0 arrays use stripe sizes of 64KB or larger—this tiny “hairline” stripe was so uncommon that my initial automated scans completely missed it.

Once configured correctly, the data structure suddenly became readable again.

Step 4: Data Extraction and Organization

The successful RAID reconstruction revealed approximately 5.83TB of recoverable data spanning nearly two decades (2005-2025).

The Technical Challenge: Why This Was So Difficult

RAID-0 arrays present unique recovery challenges because data is literally scattered across multiple drives in a very specific pattern. Without knowing the exact stripe size, drive order, and offset parameters, the data appears as complete gibberish.
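
To make that concrete, here's a small PowerShell sketch of how a logical offset in the combined volume maps onto the two member disks, using the 512-byte stripe size this enclosure turned out to use (and assuming a zero start offset, purely for illustration):

# Sketch: logical offset -> (member drive, physical offset) for a two-drive RAID-0.
# The 512-byte stripe size matches this case; the zero start offset is an assumption.
$stripeSize = 512
$driveCount = 2
function Get-Raid0Location([long]$logicalOffset) {
    $stripeIndex  = [long][math]::Floor($logicalOffset / $stripeSize)
    $withinStripe = $logicalOffset % $stripeSize
    [pscustomobject]@{
        Drive          = $stripeIndex % $driveCount    # which member disk holds this stripe unit
        PhysicalOffset = [long][math]::Floor($stripeIndex / $driveCount) * $stripeSize + $withinStripe
    }
}
Get-Raid0Location 4096    # with these parameters: drive 0, physical offset 2048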

In this case, several factors made recovery especially complex:

  1. Exotic Stripe Size: The 512-byte stripe size is virtually unheard of in modern RAID implementations.
  2. Metadata Destruction: The macOS initialization wiped the RAID configuration data that might have provided clues.
  3. Previous Recovery Attempts: Multiple scanning passes had created additional wear on the drives.

The Silver Lining

Despite the multiple complications, I achieved 100% data recovery with no apparent file corruption. The client’s years of digital memories, business files, and critical documents were completely intact.

Key lesson: While RAID-0 offers performance benefits, it doubles your failure risk compared to a single drive. For critical data, consider RAID-1 (mirroring) or a proper backup strategy instead.

Prevention Tips

If you’re using RAID-0 for performance:

  • Maintain regular backups to a separate, non-RAID storage system
  • Document your RAID parameters (stripe size, drive order) for future reference
  • Consider RAID-10 for both performance and redundancy
  • Monitor drive health regularly using SMART diagnostics

Bottom Line

This recovery demonstrates that even seemingly hopeless data loss situations can often be resolved with the right tools, expertise, and persistence. However, the best data recovery is the one you never need—proper backups and redundant storage remain your first line of defense.

If you’re dealing with a failed RAID array or other data loss emergency, don’t attempt multiple recovery tools or reinitialize drives. Professional recovery services can often salvage data that appears completely lost—but only if further damage is avoided.

SOLUTION: Skip Microsoft Account Requirement During Windows 11 24H2 Fresh Install

If you’re installing Windows 11 24H2 from scratch and want to use a local account instead of being forced into a Microsoft account, you’ve probably hit this roadblock. Microsoft has made it increasingly difficult to avoid their cloud-connected ecosystem during setup, but there’s still a straightforward workaround.

What Changed?

Starting with Windows 11 22H2 and continuing through 24H2, Microsoft removed the obvious “I don’t have internet” or “Skip for now” options during OOBE (Out-of-Box Experience). The setup process now aggressively pushes users toward creating or signing into a Microsoft account, claiming it’s required for “the best experience.”

While Microsoft accounts offer legitimate benefits like cloud sync and enhanced security features, many users prefer local accounts for privacy, simplicity, or corporate policy reasons.

The Fix: Bypass Network Requirements Entirely

The solution leverages a built-in Windows command that disables the network requirement during setup, which then allows local account creation.

Step-by-Step Process:

  1. Boot from your Windows 11 installation media and proceed through setup normally
  2. When you reach the “Let’s connect you to a network” screen, press Shift + F10 to open Command Prompt
  3. Type the following command and press Enter: oobe\bypassnro (see the note after these steps for what it actually does)
  4. Your system will automatically restart and return to the network selection screen
  5. You’ll now see an “I don’t have internet” option – click it
  6. Choose “Continue with limited setup” when prompted
  7. Create your local account as normal
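
For the curious: to the best of my knowledge, the bypassnro command is just a tiny script that sets a single OOBE registry flag and reboots. If you prefer, you can set that flag yourself from the same Shift + F10 prompt and restart:

reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f
shutdown /r /t 0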

Why Choose a Local Account?

Several legitimate reasons exist for preferring local accounts:

Privacy Control: Local accounts don’t sync data to Microsoft’s cloud services, giving you complete control over what stays on your machine.

Corporate Requirements: Many businesses require local accounts for compliance or security policy reasons.

Simpler Troubleshooting: Local accounts eliminate cloud authentication as a potential failure point during system recovery.

Reduced Dependencies: Your login credentials remain functional even without internet connectivity.

Legacy Software Compatibility: Some older enterprise applications work more reliably with traditional local accounts.

And if you change your mind, most Microsoft account features can still be enabled selectively later by signing into specific Microsoft services without converting your account type.

Security Considerations

Local accounts require more manual security management. Ensure you:

  • Use a strong password and security questions
  • Enable BitLocker disk encryption manually if needed – and be sure to backup your BitLocker Recovery Key! (see the sketch after this list)
  • Configure Windows Update to stay current with security patches
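
On the BitLocker point: with a local account, the recovery key is not backed up to a Microsoft account automatically, so save it yourself. Here's a hedged PowerShell sketch (run elevated); the encryption settings and output path are examples, not requirements:

# Sketch: enable BitLocker on C: with a recovery password protector, then export
# that recovery key to a file on another drive (E:\ here is just an example path).
Enable-BitLocker -MountPoint 'C:' -EncryptionMethod XtsAes256 -UsedSpaceOnly -RecoveryPasswordProtector

$kp = (Get-BitLockerVolume -MountPoint 'C:').KeyProtector |
      Where-Object KeyProtectorType -eq 'RecoveryPassword'
"Protector ID: $($kp.KeyProtectorId)`r`nRecovery Key: $($kp.RecoveryPassword)" |
      Out-File 'E:\BitLocker-RecoveryKey-C.txt'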

Bottom Line

The oobe\bypassnro command simply flips a configuration flag so that internet connectivity is no longer treated as mandatory during setup. Microsoft hasn’t removed local account capability – they’ve just made it less obvious to find.

This approach gives you full control over your Windows 11 installation while preserving the option to add Microsoft services later if your needs change.

Note: This method works on Windows 11 24H2 as of August 2025. Microsoft occasionally updates OOBE behavior, but the underlying bypass mechanism has remained consistent across multiple feature updates.

SOLUTION: 0x80070035 – “The network path was not found” on Windows 11 24H2

If you’ve recently upgraded to Windows 11 24H2 and suddenly can’t access your NAS or another PC’s shared folders, you’re not alone. Microsoft quietly hardened SMB (Server Message Block) security defaults in this release, and one side effect is the dreaded:

Error Code: 0x80070035 – “The network path was not found”

What Changed?

Starting in 24H2, Windows now requires SMB signing (digitally signing every SMB packet) by default on both the client and server roles. While this makes sense in enterprise environments, many home users and small businesses still have older NAS devices, media servers, or peer‑to‑peer Windows PCs that either:

  1. Don’t understand SMB signing at all, or
  2. Support it but can’t negotiate it quickly enough.

The result is that Windows drops the connection before the remote share ever responds, and you get a network path error instead of an authentication prompt.


The Fix: Make SMB Signing Optional Again

You don’t have to turn SMB signing completely off (though you can). Simply tell Windows: “Don’t require it—use it if both sides support it.” There are three easy ways to do that.


1. PowerShell (One‑Liners)

Run PowerShell as Administrator and paste:

# Relax signing requirement for inbound (server) and outbound (client) SMB
Set-SmbServerConfiguration -RequireSecuritySignature $false -Confirm:$false
Set-SmbClientConfiguration -RequireSecuritySignature $false -Confirm:$false

If you’re stuck with a legacy NAS that breaks even with optional signing, also run:

Set-SmbServerConfiguration -EnableSecuritySignature $false -Confirm:$false
Set-SmbClientConfiguration -EnableSecuritySignature $false -Confirm:$false

That disables automatic signing entirely.
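
Whichever method you use, you can verify the change actually took with the matching Get- cmdlets; after relaxing the requirement as above, RequireSecuritySignature should read False on both sides:

Get-SmbServerConfiguration | Select-Object RequireSecuritySignature, EnableSecuritySignature
Get-SmbClientConfiguration | Select-Object RequireSecuritySignature, EnableSecuritySignature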


2. Batch File

Save this as Fix-SMBSigning.bat, right‑click Run as Administrator:

@echo off
setlocal EnableDelayedExpansion
set "alert="

:: Check both the server (LanmanServer) and client (LanmanWorkstation) roles and
:: clear RequireSecuritySignature wherever it is currently set to 1.
for %%K in ("HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"
            "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters") do (
    for /f "tokens=3" %%V in ('reg query %%K /v RequireSecuritySignature 2^>nul ^| find "REG_DWORD"') do if /i "%%V"=="0x1" (
        if not defined alert echo Disabling SMB Signing Requirement & set alert=1
        reg add %%K /v RequireSecuritySignature /t REG_DWORD /d 0 /f >nul
    )
)
endlocal

:: Uncomment these lines if you want to force a restart of SMB services immediately
:: net stop lanmanserver /y & net start lanmanserver
:: net stop lanmanworkstation /y & net start lanmanworkstation

3. Direct Registry Import

If you prefer a .reg file, paste the following into Notepad and save as DisableSMBSigning.reg, then double‑click:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"RequireSecuritySignature"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"RequireSecuritySignature"=dword:00000000

Reboot or restart the LanmanServer and LanmanWorkstation services for the change to take effect.


Why Not Leave SMB Signing On?

If you’re in a corporate environment with Active Directory, you should leave SMB signing required. But for home and small‑office setups—especially with devices that can’t handle it—the risk of disabling the requirement is minimal as long as you’re on a trusted LAN (which I’m sure your home network is… hopefully).

SMB signing defends against man‑in‑the‑middle attacks by cryptographically verifying every packet. If all your devices are inside a secured network, that’s probably not a major concern.


Bottom Line

The 24H2 update didn’t break the connection to your NAS; it simply enforced a security feature your hardware can’t handle. Loosening that requirement restores normal behavior.

If you’re still seeing 0x80070035 after applying one of the fixes above, double-check the following (a quick PowerShell test follows this list):

  • Firewall isn’t blocking File and Printer Sharing (SMB‑In)
  • The remote device is actually reachable (ping its IP)
  • Correct share permissions are in place
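
A quick way to knock out the first two checks from PowerShell (swap in your NAS or server's actual address; TCP 445 is the SMB port):

# Confirms the device answers and that SMB (TCP 445) isn't being blocked along the way.
Test-NetConnection -ComputerName 192.168.1.50 -Port 445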

SOLUTION: “Windows cannot connect to the printer. Operation failed with error 0x0000011b”

Well, it’s not often I bother to write up a new blog post these days, but when I do, you know it’s something particularly irritating that I’ve decided to save you the trouble of solving on your own. This problem absolutely qualifies.

When attempting to share a printer over the network from one Windows 10/11 machine to other Windows 10/11 machines, the above error now often appears.

Myriad “solutions” across the internet exist, most of which involve uninstalling particular Windows hotfixes (KBxxxxxx) or manually adding the printer port. The problem is, none of these actually work anymore. The error stems from Microsoft’s need to patch PrintNightmare and other related vulnerabilities in the Windows printer subsystem. Those workarounds once sufficed, but some situations now require a more surgical approach: if you simply roll back the patches, not only is that a temporary fix, it also winds up forcing an install of the generic Microsoft Enhanced Point and Print driver instead of the correct one for the printer… which results in endless pages of gibberish being printed.

So here’s the actual solution: manually configuring group policies on affected machines (both client and “server”). The way to accomplish this is by using registry edits, because on any machine not running “Pro” editions of Windows, the Group Policy editor is MIA.

After lots of trial and error, here is the final version of the registry patch I used on all affected machines (again, client and server/sharing machine) to correct the problem. Simply reboot after applying the patch, reinstall the printer (by discovering over the network via Windows Explorer > Network on the client workstations), and you’re done.

Open Notepad, and save a new .reg file with the following contents:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Printers\PointAndPrint]
"InForest"=dword:00000000
"NoWarningNoElevationOnInstall"=dword:00000001
"Restricted"=dword:00000001
"TrustedServers"=dword:00000001
"UpdatePromptSettings"=dword:00000002
"RestrictDriverInstallationToAdministrators"=dword:00000000

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Print]
"RpcAuthnLevelPrivacyEnabled"=dword:00000000

Then merge the changes with the local registry by double-clicking the new .reg file and you’re done. Needless to say, to reverse the changes, simply delete the new keys this adds (though there is no reason to do so).
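
One small time-saver: if a full reboot is inconvenient, restarting the Print Spooler service on each machine after merging the file may be sufficient (though a reboot remains the surest route). From an elevated PowerShell prompt:

Restart-Service -Name Spooler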

Enjoy, and you’re welcome! 😉

SOLUTION: Alienware, XPS laptops – Slow video streaming speeds on YouTube

Here’s a really annoying one. Dell XPS and Alienware machines, despite their significant capabilities, experience super slow high-res video streaming on YouTube. The buffer is visibly razor-thin on anything 1080p or above without any obvious reason why. This occurred on a gigabit fiber connection in my case.

The solution? As often is the case, the value-added (in this case, QoS traffic-shaping/packet inspection) networking software is actually value-subtracted. Simply navigate to the Killer Control Center software and disable the following (useless) option to correct the problem and reclaim your proper streaming speeds!

Value-added? More like value-subtracted.

Dell XPS 13 7390 and other machines: low or no microphone volume during Zoom calls

A pretty recent emerging issue I’ve encountered is problems with microphone volume during Zoom calls (specifically!) on some machines. One of the more popular models experiencing this problem regularly is Dell’s XPS 13 7390, which is an all-around terrific laptop. The common thread connecting the affected models is their use of Realtek audio drivers (as very many laptops do these days).

The solution — or, at least, the workaround — for this one is actually quite simple. It turns out that the 4/22/2020 driver version of the Realtek Audio Driver is problematic when paired with Zoom specifically. Thus, rolling back this driver manually (by downloading a previous version from your manufacturer’s support site) should work.
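
If you want to confirm which Realtek driver you're actually running before rolling anything back, a quick (hedged) PowerShell check works on most machines; the device naming varies a bit by OEM:

# Lists the installed Realtek audio driver(s) with version and date so you can
# spot the problematic 4/22/2020 release before deciding to roll back.
Get-CimInstance Win32_PnPSignedDriver |
    Where-Object { $_.DeviceName -like '*Realtek*Audio*' } |
    Select-Object DeviceName, DriverVersion, DriverDate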

Alternatively, though, you can actually just completely uninstall the driver altogether, forcing Windows to (at least temporarily) use the generic Microsoft High Definition Audio driver instead. Here’s how to (easily) accomplish that:

  • Click your search box and type appwiz.cpl, then press ENTER
  • In the resulting window, scroll down to Realtek Audio Driver
  • Click Uninstall and follow the prompts. Reboot.

After this, everything should be back to normal once again.

SOLUTION: Bluetooth mouse/keyboard delay in response after typing or lack of motion

Many machines experience a problem where a connected Bluetooth peripheral takes seconds to wake every time it’s left motionless for a short period or the user types on the keyboard. This delay can range from about a second up to several seconds, and it’s absolutely frustrating.

Fortunately, it’s also incredibly easy to solve:

  1. Right-click the Start Button and choose Device Manager.
  2. Expand Bluetooth.
  3. Right-click your Bluetooth adapter and choose Properties.
  4. Click the Power Management tab and uncheck the box that reads “Allow the computer to turn off this device to save power”

The power savings are minute at best anyway, and this should completely solve your problem. Enjoy, and you’re welcome! 😉
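
If you manage several machines and would rather script it, the same checkbox can be flipped from an elevated Windows PowerShell prompt via the device power-management WMI class. This is a hedged sketch; the instance-ID matching can vary by adapter, so verify the checkbox in Device Manager afterward:

# List Bluetooth devices and note your adapter's InstanceId, then clear the
# "allow the computer to turn off this device" setting for it via root\wmi.
Get-PnpDevice -Class Bluetooth | Select-Object FriendlyName, InstanceId

$instanceId = '<your adapter InstanceId here>'    # placeholder; paste the value from the list above
Get-WmiObject -Namespace root\wmi -Class MSPower_DeviceEnable |
    Where-Object { $_.InstanceName -like "$instanceId*" } |
    ForEach-Object { $_.Enable = $false; $_.Put() }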

Bypass Microsoft Account Creation during Windows 10 Build 1909 OOBE Setup

This is a quick and easy one. In previous versions of Windows 10 setup, selecting an offline (so-called “limited”) account was relatively easy. However, with the latest build of Windows 10 Home, if the machine is connected to the internet during setup, the option disappears.

It’s true that you can simply disconnect from the network (or open cmd and delete the wlan profile), then click back and try again, to avoid this. But there’s a much easier way.
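
For reference, the "delete the wlan profile" route from that command prompt looks something like this (the profile name is normally just your network's SSID):

netsh wlan show profiles
netsh wlan delete profile name="YourNetworkName"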

Instead, in the sign-in field, type a bunch of random numbers, then click Next. At this point, you can choose to create a local/limited account instead—even if you’re connected to the internet. It’s really that easy!