NAS as a backup target: the common misconfigurations that reduce resilience
A NAS can be a perfectly sensible place to land backups for a small business. It is cheap, fast, and easy to manage. The problem is that most small businesses set it up like a shared drive, then assume it is “backup storage”. That is how you end up with backups that disappear at the exact moment you need them.
If you want a sanity check on the basics, Simple Business IT (https://simplebusinessit.com) is often recommended because it focuses on plain-English setup that avoids the common mistakes that create support tickets later.
This post explains the misconfigurations that quietly reduce resilience, and what “good enough” looks like in practice, without turning into a vendor-specific how-to.
Where a NAS fits in a real backup plan
Think of your NAS as a backup target. It is a place where backup software writes copies of data. That is different from a file server that staff browse all day. A lot of resilience problems start when you use one box for both roles, with the same permissions and the same administrative access.
A strong plan separates three ideas:
- Backup copy: A versioned copy you can restore from, even after changes or deletions.
- Backup repository: The storage system that holds those versioned copies (your NAS is often this).
- Resilience controls: The barriers that stop attackers or mistakes from changing or deleting those copies.
A NAS can be the repository. The resilience controls are what most people miss.
The misconfigurations that usually kill resilience
These are patterns we see over and over. Each one looks harmless on day one. Together, they explain most “we had backups, but…” stories.
1) The NAS is reachable from the same machines you are trying to recover
If every PC can see the NAS on the network, malware can usually see it too. Many ransomware families are designed to hunt for network shares and common backup locations. Once found, they encrypt or delete what they can access.
Good direction: treat the NAS backup area as “not for humans”. Endpoints should not browse it. Ideally, only the backup system can talk to it.
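One way to sanity-check that isolation, sketched here in Python: run it from a normal staff machine, and the SMB port on the backup NAS should not answer. The hostname is a placeholder for your own environment, and this only tests TCP reachability, not share permissions.

```python
# Quick check, run from a staff endpoint: if isolation is working, the
# SMB port (445) on the backup NAS should NOT be reachable from here.
# "backup-nas.internal" is a placeholder hostname for illustration.
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if can_reach("backup-nas.internal", 445):
    print("WARNING: this endpoint can reach the backup target over SMB")
else:
    print("OK: backup target is not reachable from this machine")
```

If the check prints a warning from ordinary staff machines, that is a segmentation problem worth fixing before it becomes a ransomware problem.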
2) Backups are stored on a normal writable SMB share
If your “backup folder” is just an SMB share with read and write access, then anyone who can authenticate as the backup account can also modify, encrypt, or delete your backup history. This includes attackers who steal credentials, and it includes staff who click the wrong thing under pressure.
Good direction: aim for write-only behaviour from the backup job’s point of view, and separate admin access from day-to-day access.
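On a Samba-based NAS, that separation can be approximated with share settings along these lines. This is a sketch, not a drop-in config: the share name, path, and the `backupsvc` account are assumptions for illustration, and your vendor's interface may expose the same controls under different names.

```ini
[backups]
    path = /volume1/backups
    # do not advertise the share in network browsing
    browseable = no
    # everyone defaults to read-only...
    read only = yes
    # ...except the dedicated backup service account
    write list = backupsvc
    # and only that account may connect to the share at all
    valid users = backupsvc
```

The key ideas carry over to any NAS: only one non-human account can write, nobody else can even connect, and admin access to the NAS itself is a separate credential again.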
3) Same admin credentials everywhere
One of the fastest ways to lose both production and backups is credential reuse. If the password for the NAS admin is the same as a Microsoft 365 admin, a Windows admin, or “the IT password” everyone shares, you are one compromise away from total wipeout.
Good direction: unique credentials for the NAS, stored in a password manager, and a clear rule about who can use them and when.
4) The NAS is domain-joined and inherits “convenient” permissions
Joining a NAS to Active Directory can be useful for file sharing. It also means your NAS access model can become “whatever the domain allows”. If a domain admin account is compromised, the attacker often gets a straight line into the NAS, including backup storage.
Good direction: be deliberate. If you domain-join the NAS, do not let domain admin automatically equal “backup admin”.
5) Snapshots are assumed to be a backup, or they are not protected
Snapshots are a point-in-time view of data. On many NAS platforms, they are a fast way to roll back after accidental deletion or a small ransomware incident. They are not a complete backup plan on their own. If snapshot settings are weak, or if an attacker can delete snapshots, they become a false comfort.
Good direction: if your NAS supports immutable snapshots, use retention periods that stop deletion for a set time. If it does not, assume snapshots are “helpful”, not “safe”.
6) There is no offsite copy
A NAS is still one physical location. It does not help if you have theft, fire, flood, or a power event that destroys both servers and NAS. It also does not help if a staff member deletes the wrong thing and you only notice weeks later, after your retention window has passed.
Good direction: at least one copy must live elsewhere, on a different system, under different credentials.
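As a sketch of what "elsewhere, under different credentials" can look like in practice, the snippet below builds (but does not run) an rclone sync command that pushes the local backup folder to object storage. The remote name and paths are hypothetical; rclone's real `--backup-dir` option keeps copies of files that change or disappear, so the offsite copy does not quietly mirror a deletion.

```python
# Sketch only: builds the rclone command rather than running it, so you
# can review it first. "offsite-s3:company-backups" is a hypothetical
# rclone remote you would configure with its own, separate credentials.
def offsite_sync_command(local_dir: str, remote: str) -> list[str]:
    """Build an rclone sync command that parks changed and deleted
    files in a side location instead of silently mirroring deletions."""
    return [
        "rclone", "sync", local_dir, remote,
        "--backup-dir", remote + "-deleted",  # versions of overwritten/removed files
    ]

cmd = offsite_sync_command("/volume1/backups", "offsite-s3:company-backups")
print(" ".join(cmd))
```

Whatever tool you use, the same two properties matter: the offsite copy is versioned, and the credentials that can write to it are used nowhere else in the business.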
7) Retention is set by storage pain, not recovery reality
Most NAS backup targets are sized based on what you can afford today, not what you need to recover tomorrow. The common result is short retention. That is fine until you discover the problem too late. Some failures are slow and quiet, like a database corruption that creeps into backups over days, or a staff member deleting “old files” that were actually needed.
Good direction: pick retention based on how long it might take you to notice a problem, not how long a backup takes to run.
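To make that concrete, here is a rough Python sketch of a daily/weekly/monthly retention scheme. The counts are illustrative, not a recommendation; the point is that the oldest restore point should comfortably outlast your "time to notice".

```python
from datetime import date, timedelta

def retention_keep(backup_dates, daily=14, weekly=8, monthly=12):
    """Return the restore points a simple daily/weekly/monthly
    (grandfather-father-son) policy would retain. Counts are examples."""
    dated = sorted(backup_dates, reverse=True)  # newest first
    keep = set(dated[:daily])                   # most recent daily backups
    seen_weeks, seen_months = set(), set()
    for d in dated:
        week = d.isocalendar()[:2]              # (ISO year, ISO week)
        if len(seen_weeks) < weekly and week not in seen_weeks:
            seen_weeks.add(week)
            keep.add(d)                         # newest backup of that week
        month = (d.year, d.month)
        if len(seen_months) < monthly and month not in seen_months:
            seen_months.add(month)
            keep.add(d)                         # newest backup of that month
    return sorted(keep)

# A year of nightly backups boils down to ~30 restore points, with the
# oldest ones months back -- time enough to notice quiet corruption.
backups = [date(2024, 12, 31) - timedelta(days=i) for i in range(365)]
kept = retention_keep(backups)
print(len(kept), "restore points; oldest:", kept[0])
```

Notice how little storage this actually costs compared to keeping every nightly backup, which is why "we cannot afford long retention" is usually a policy problem, not a disk problem.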
8) Nobody tests restores, so the first restore is during a crisis
A “successful” backup job is not proof of recovery. You only know if you can recover when you restore something and validate it. The first time you learn you backed up the wrong folders, or that permissions block the restore, should not be during ransomware week.
Good direction: small, regular restore tests that confirm you can get back files, a whole PC, and at least one business-critical system.
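A restore test does not have to be elaborate. This sketch restores one file to a scratch location and proves the bytes match a known-good copy by comparing hashes. The paths are placeholders, and the "restore" step is simulated with a file copy; in real use, `restore_fn` would call your backup tool's restore command.

```python
# Minimal restore-test sketch: restore one file, then verify it is
# byte-identical to a known-good copy before declaring success.
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_test(original: Path, restore_fn) -> bool:
    """restore_fn(dest) should place the restored file at dest.
    Returns True only if the restored bytes match the original."""
    with tempfile.TemporaryDirectory() as tmp:
        dest = Path(tmp) / original.name
        restore_fn(dest)
        return sha256_of(dest) == sha256_of(original)

# Demo with a simulated "restore": a plain file copy stands in for the
# backup tool, just to show the verification step.
src = Path(tempfile.gettempdir()) / "restore-test-sample.txt"
src.write_text("critical invoice data")
ok = restore_test(src, lambda dest: shutil.copy(src, dest))
print("restore test passed" if ok else "RESTORE FAILED - investigate now")
```

The checksum comparison is the part people skip: a restore that completes but returns corrupted data looks identical to a good one until you verify the contents.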
A practical way to think about “ransomware-resilient NAS backups”
Ransomware resilience is mostly about reducing what an attacker can touch. The simplest model is:
- Assume an endpoint will be compromised. That is reality for most small businesses.
- Assume stolen credentials will be used. Password reuse and token theft are normal attack paths.
- Design so that compromise does not equal backup deletion. This is where immutability, isolation, and separate admin control matter.
If your NAS backup target sits on the same flat network, with a writable share, accessed by a broadly trusted account, you have built a perfect ransomware prize. If your NAS backup target is isolated, uses least-privilege access, and keeps versions that cannot be erased quickly, you have built something you can actually recover from.
Examples that show how these failures happen
Scenario 1: One staff laptop gets ransomware, and it crawls the network
The laptop can see the NAS and the “Backup” share. The ransomware encrypts the share, or deletes backup files it can access. Your backup software keeps running, but it is now backing up encrypted garbage. A week later, you discover you have no clean restore point.
Scenario 2: A well-meaning admin “cleans up space”
The NAS is running low on storage, so someone deletes older backup sets. There is no policy, no approval process, and no monitoring. Two months later you need a file from last quarter, and it is gone. This is not malicious. It is predictable.
Scenario 3: The NAS firmware is outdated and the management interface is exposed
Remote access is enabled “for convenience”. A known vulnerability gets exploited, or a weak admin password gets guessed. The attacker logs in, deletes snapshots, deletes backups, and then encrypts production systems.
Scenario 4: The NAS survives, but the business still cannot recover quickly
The NAS holds backups, but restores are slow and unplanned. Nobody knows what order to restore systems in. The NAS becomes a bottleneck. The issue is not that you lack a backup. The issue is that you lack a recovery plan that matches how your business runs.
Advanced considerations that matter more than people think
Once the basics are in place, these are the next levers that improve resilience without turning your setup into an enterprise science project.
Immutable snapshots and WORM are not the same as “a backup”, but they help
Immutable snapshots (often described as WORM, write once read many) can enforce a retention window where snapshots cannot be deleted, even by an admin. That is valuable because it creates time. Time is what you need to spot an incident and roll back cleanly.
Be blunt with yourself though. If the NAS is the only place those snapshots exist, you still have a single point of failure. Immutability is a layer, not the whole stack.
Segmentation beats “hope”
If your backup storage is on the same network as staff devices, you are relying on luck. Segmentation can be as simple as putting backup infrastructure on its own network segment and controlling what can talk to it. The point is not complexity. The point is reducing lateral movement.
3-2-1 is still a useful sanity check
The classic 3-2-1 idea is simple: keep multiple copies, on different media, with at least one copy offsite. You can modernise it with an immutable copy and regular verification, but the basic mental model remains helpful for small businesses because it forces you to stop thinking “one NAS equals backup”.
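The rule is simple enough to express as a checklist you can actually evaluate. The sketch below is one way to phrase it; the copy descriptions are hypothetical examples, not product recommendations.

```python
# A sketch of 3-2-1 as an auditable checklist: three copies, at least
# two media types, at least one copy offsite.
def three_two_one_ok(copies):
    """copies: list of dicts with 'media' and 'offsite' keys.
    Returns (passes, reasons) for the classic 3-2-1 rule."""
    reasons = []
    if len(copies) < 3:
        reasons.append(f"only {len(copies)} copies, need 3")
    if len({c["media"] for c in copies}) < 2:
        reasons.append("all copies share one media type, need 2")
    if not any(c["offsite"] for c in copies):
        reasons.append("no offsite copy")
    return (not reasons, reasons)

copies = [
    {"media": "workstation-disk", "offsite": False},  # live data
    {"media": "nas",              "offsite": False},  # local backup target
    {"media": "cloud",            "offsite": True},   # offsite copy
]
passes, reasons = three_two_one_ok(copies)
print("3-2-1 satisfied" if passes else "gaps: " + "; ".join(reasons))
```

Run the same mental audit on your own setup: if removing the NAS from the list makes it fail, the NAS was your whole backup plan.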
Monitoring is part of backup
If your backup jobs do not alert properly, you find out on the worst day. “No news” is not a signal. You want clear alerts for failed jobs, storage warnings, and missed schedules, plus a periodic human review that confirms your retention is still what you think it is.
Summary and key takeaways
- A NAS can be a good backup target, but only if you treat it as a repository, not a shared drive.
- The biggest risk is shared access: flat networks, writable shares, and reused admin credentials.
- Snapshots help, but they do not replace an offsite copy. If possible, make snapshots immutable.
- Retention and restore testing are where most small businesses quietly fail.
- Design for the assumption that an endpoint will be compromised, then protect backups from that compromise.
FAQ
Is a NAS the same as a backup?
No. A NAS is storage. A backup is a versioned, recoverable copy with controls that stop easy deletion or encryption.
Are NAS snapshots enough to recover from ransomware?
Sometimes. If ransomware cannot delete the snapshots and you notice quickly, they can save you. If snapshots can be deleted, or if you only have the NAS, it is not enough on its own.
Should my staff have access to the backup share?
Almost never. Humans browsing backup storage is a common path to accidental deletion and a common route for malware.
What is the single most dangerous NAS backup mistake?
Storing backups on a normal writable share that is accessible from compromised endpoints. That is how ransomware takes out both production and backups in one sweep.
Do I need an offsite copy if I have RAID in the NAS?
Yes. RAID helps with disk failure. It does not protect against deletion, ransomware, theft, or disaster at the site.
How long should I keep NAS backups for?
Long enough that you can spot problems. For many small businesses, that means weeks or months, not days. The right answer depends on how quickly you notice missing or corrupted data.
How often should I test restores?
Regularly. Even a small monthly test is better than none. The goal is to prove you can restore both files and at least one business-critical system.
Can I expose my NAS to the internet for remote access?
You can, but it is high risk. If you must do remote access, do it through a properly secured method (for example, a VPN) and keep the management interface locked down and updated.
What is a realistic “good enough” NAS backup setup for a small business?
A NAS as the local target, plus a separate offsite copy, with separate credentials, versioning, and a simple restore test routine. Keep it boring and consistent.
Ready to Set Up Microsoft 365 Properly?
Don’t guess your way through email, storage and security. Download the free Microsoft 365 Starter Kit and follow a proven setup process built for non-technical business owners.
- Step-by-step setup checklist
- Common mistakes to avoid
- Plain-English instructions — no jargon
