How to Enable SSH on Ubuntu: Installation & Security

You’ve got a fresh Ubuntu box, you need remote access, and you want to avoid the classic mistake of making it reachable before making it safe. That’s the moment where a lot of SSH guides go shallow. They give you three commands, assume the defaults are fine, and leave out the decisions that matter once the server is public.

A production SSH setup isn’t just about getting a prompt from your laptop. It’s about creating a secure management path you can trust when you’re deploying code, rotating credentials, debugging a failed release, or recovering from a bad config change. The difference between a quick setup and a durable one is usually a handful of choices made early.

If you’re learning how to enable SSH on Ubuntu, the core workflow is simple. Install OpenSSH, make sure the service starts on boot, allow the connection through the firewall, and then harden authentication before you rely on that server for real work. The commands are easy. The judgment behind them is what keeps you from getting locked out or exposed.

Installing and Enabling the OpenSSH Server

On Ubuntu, SSH server support usually comes from the OpenSSH server package. Some Ubuntu Server installs already include it, while many Desktop installs do not. The first job is to verify what’s there instead of assuming.


Check whether SSH is already installed

Run:

systemctl status ssh

If the service exists and shows active (running), the server is already installed and started. If systemd reports that the unit doesn’t exist, install it explicitly.

A clean install looks like this:

sudo apt update
sudo apt install openssh-server

On modern Ubuntu releases like 22.04 and 24.04, sudo apt install openssh-server typically completes in well under a minute, and the package registers a proper systemd unit so the service can persist across reboots once enabled.

That timing matters less than people think. What matters more is using the distro package instead of compiling anything yourself. When you install from Ubuntu’s repositories, you get a service definition that behaves properly with systemd, package-managed updates, and a predictable file layout.

Start it now and keep it running after reboot

If the package installed but the service isn’t started yet, use:

sudo systemctl start ssh
sudo systemctl enable ssh

If you want one command that handles both:

sudo systemctl enable --now ssh

That enable step is the one junior admins often skip. The service may work right now, but if it doesn’t come back after a reboot, you’ve built a fragile dependency into your server operations.

Use these commands regularly:

sudo systemctl status ssh
sudo systemctl restart ssh
sudo systemctl stop ssh
sudo systemctl start ssh

You’ll use restart every time you change SSH configuration. You’ll use status after every meaningful change. That habit catches simple mistakes before they become outage tickets.

Practical rule: Don’t edit SSH config and walk away. Change, test, inspect systemctl status, then reconnect in a second terminal before you close the first session.

Confirm that the daemon is listening

A running service in systemd is a good sign, but it isn’t the whole picture. You also want to know whether the daemon is listening for connections.

Run:

ss -tlnp | grep :22

If SSH is using its default port, you should see a listener on port 22. If you don’t, either the daemon failed to bind, the config is invalid, or you changed the port and forgot.
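If the listener isn’t where you expect, it helps to ask sshd which port it actually resolved from its configuration, including any drop-in files. A quick check, assuming root access on the server:

```shell
# Print the effective port from sshd's fully parsed configuration;
# drop-in files under /etc/ssh/sshd_config.d/ can override the main file,
# so this is more reliable than reading sshd_config by eye.
sudo sshd -T | grep -i '^port'
```

If the port printed here doesn’t match what `ss` shows, the daemon hasn’t been restarted since the last config change.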

Why this first stage matters

This isn’t just package installation. This is the foundation for every remote operation that follows. If you’re going to deploy a backend app, manage a worker process, inspect logs, or run database maintenance, SSH becomes your control plane.

A lot of teams jump straight to application setup and treat server access as plumbing. That’s backwards. If your access path is unreliable, recovery becomes painful. Before you touch Nginx, Docker, Node.js, Laravel, or PostgreSQL, get the remote shell path stable. If you’re building out a broader machine provisioning process, this guide on how to make a server helps connect SSH setup to the larger server bootstrap sequence.

Minimal install checklist

Use this quick sequence on a new Ubuntu machine:

  1. Refresh package metadata

    sudo apt update
    
  2. Install OpenSSH server

    sudo apt install openssh-server
    
  3. Start and persist the service

    sudo systemctl enable --now ssh
    
  4. Verify service health

    sudo systemctl status ssh
    
  5. Confirm listener exists

    ss -tlnp | grep :22
    

If those five checks pass, SSH is enabled. At that point, don’t celebrate yet. A reachable SSH service with weak defaults is still an attack surface.

Opening the Firewall and Testing Your Connection

A running SSH daemon doesn’t mean you can connect. In real environments, the thing blocking you is often the firewall, not the service itself. Ubuntu’s built-in firewall tool, UFW, is straightforward, but it punishes mistakes fast.

The most common one is enabling the firewall before allowing SSH. That’s how people lock themselves out of a remote machine they just configured.


Allow SSH before you turn on UFW

If UFW isn’t active yet, add the SSH rule first:

sudo ufw allow ssh

You can also allow the default port explicitly:

sudo ufw allow 22/tcp

Both are valid. I prefer allow ssh early in the process because it’s harder to mistype and easier to read when you come back to the server later.

Now check the current firewall state:

sudo ufw status

If it’s inactive, enable it only after the SSH rule is in place:

sudo ufw enable

Then confirm the rule is present:

sudo ufw status

You want to see SSH listed as allowed. If you don’t, stop and fix that before disconnecting your current session.

Keep your current shell open while testing firewall changes. Your existing session is your recovery path if the next connection fails.

UFW and cloud firewalls are separate controls

Local testing and cloud deployment differ when it comes to firewall configuration. On a laptop or a VM on your own network, UFW may be the only firewall that matters. In AWS, Azure, and other cloud platforms, there’s usually a provider-level firewall in front of the machine as well.

That means two layers can block the same connection:

| Layer | What it controls | Common mistake |
| --- | --- | --- |
| UFW on Ubuntu | Traffic at the OS level | SSH allowed in cloud console but blocked on the server |
| Cloud firewall or security group | Traffic before it reaches the VM | UFW is correct but inbound SSH isn’t allowed in the provider |

If your SSH connection times out instead of being refused, suspect the cloud firewall first. If it is refused immediately, suspect the service state or local firewall.
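One way to separate the two cases from the client side is a plain TCP probe. This is a sketch that assumes the `nc` (netcat) utility is installed and uses a placeholder hostname:

```shell
# "Connection refused" comes back immediately: the packet reached the host
# but nothing accepted it (service down, or an OS-level reject rule).
# A hang until the 5-second timeout suggests traffic is being silently
# dropped upstream, which is typical of a cloud security group.
nc -vz -w 5 your-server-address 22
```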

Find the address you should connect to

On the server, one quick way to inspect network information is:

ip addr

In cloud environments, you’ll often connect using the public address shown in the provider console. On private infrastructure, you may connect over an internal network address or through a VPN. The key point is simple: use the address that matches the route your client machine can reach.

Test from the client side

From your laptop or workstation, try:

ssh youruser@your-server-address

On first connection, SSH asks you to verify the host key fingerprint and then stores that trust relationship locally. Accept it only if you know you’re connecting to the right machine.
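To verify rather than blindly accept, you can print the server’s host key fingerprint out-of-band, for example from the provider’s serial console, and compare it to what your client displays. A sketch, assuming the default Ed25519 host key path:

```shell
# Run this on the server itself (console or an already-trusted session);
# the fingerprint printed here should match what your ssh client shows
# on first connect before you type "yes".
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
```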

A successful first login tells you four things at once:

  • The package is installed
  • The SSH service is running
  • The firewall path is open
  • Routing from your client to the server works

That’s why this test matters. It validates the full chain, not just one layer.

What works and what doesn’t

Some habits are consistently safe. Others create needless problems.

  • Works well

    • Allow first, enable second: Add the SSH firewall rule before activating UFW.
    • Test from a second terminal: Keep one active session open while you test another login.
    • Think in layers: Check both UFW and the cloud firewall when a connection fails.
  • Usually goes wrong

    • Trusting defaults blindly: Just because OpenSSH is running doesn’t mean network access is configured.
    • Changing several things at once: If you edit firewall rules, ports, and auth settings together, troubleshooting gets messy.
    • Disconnecting too early: Always prove you can reconnect before you close the original shell.

If your first login works, the base path is ready. The next move is to harden access so you’re not exposing password-based login to the internet longer than necessary.

Essential SSH Security Hardening

A default SSH install is good enough for a lab machine and weak for a production server. The biggest improvement you can make is moving from passwords to SSH key-based authentication. That single change eliminates most of the noise and risk that comes with exposed login prompts.

Ubuntu has shipped OpenSSH as its standard remote access tool since the distribution’s earliest releases, and current installs generate strong keys by default (Ed25519, or 3072-bit RSA if you choose RSA) and negotiate modern ciphers such as AES-256 or ChaCha20. That’s a categorical improvement over telnet, which transmits credentials in plaintext and offers no protection against interception. If you want a deeper refresher on how public-key login relates to cryptography, this primer on asymmetric and symmetric encryption is worth reading.


Generate a key pair on your client machine

If you don’t already have a key pair on your local machine, create one:

ssh-keygen -t ed25519

If you need broad compatibility with older environments, RSA can still be used. But for modern Ubuntu systems, Ed25519 is a strong default.

When ssh-keygen prompts for a file location, the default is usually fine. When it asks for a passphrase, use one unless you have a very specific automation reason not to. A passphrase protects the key if your laptop is lost or compromised.

Copy the public key to the server

Use:

ssh-copy-id youruser@your-server-address

That command appends your public key to the server user’s ~/.ssh/authorized_keys file and handles the directory creation if needed. It’s much safer than manually pasting keys unless you already understand ownership and permission details.
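If ssh-copy-id isn’t available on your client (it’s missing from some minimal and Windows environments), a manual equivalent looks like this. It’s a sketch that assumes password auth still works and that your public key is at the default Ed25519 path:

```shell
# Append the local public key to the remote authorized_keys file,
# creating ~/.ssh with correct permissions if it doesn't exist yet.
cat ~/.ssh/id_ed25519.pub | ssh youruser@your-server-address \
  'mkdir -p ~/.ssh && chmod 700 ~/.ssh && \
   cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
```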

After that, test a new login:

ssh youruser@your-server-address

If the new session logs you in without prompting for the account password, you’re in much better shape.

Password login is convenient during setup and expensive later. Don’t leave it enabled longer than needed on an internet-facing host.

Why key auth is the standard

A password can be guessed, reused, phished, or brute-forced. A private key changes the problem. The server never needs your private key. It only verifies that your client can prove possession of it.

That changes the operational model in useful ways:

  • Developers authenticate as themselves, not by sharing a common password.
  • Access can be revoked per key, not per server account password rotation event.
  • Automation can use dedicated identities instead of borrowing a human credential.

This is the point where SSH starts fitting into real team workflows instead of acting like a single-user admin tool.

Disable password authentication

Only do this after you’ve proven key-based login works in a separate session. If you disable passwords first and your key setup is wrong, you can lock yourself out.

Open the SSH daemon config:

sudo nano /etc/ssh/sshd_config

Or, on systems where you prefer drop-in config files:

sudo nano /etc/ssh/sshd_config.d/00-custom.conf

Set or add:

PasswordAuthentication no
PubkeyAuthentication yes

Save the file, then reload or restart SSH:

sudo systemctl restart ssh

Now test a brand-new session. Don’t reuse an existing authenticated shell as proof. You want to know whether a fresh connection still works.
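One way to prove the fresh connection really uses keys, even before passwords are fully disabled, is to forbid password auth on the client side for that single attempt:

```shell
# Fails fast if key auth is broken, instead of silently falling back
# to a password prompt and hiding the problem.
ssh -o PasswordAuthentication=no -o PreferredAuthentications=publickey \
    youruser@your-server-address
```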

Verify before you commit

Use a short validation sequence every time you change auth settings:

  1. Check config for obvious problems

    sudo sshd -t
    
  2. Restart the daemon

    sudo systemctl restart ssh
    
  3. Inspect service health

    sudo systemctl status ssh
    
  4. Open a separate client connection

    ssh youruser@your-server-address
    

That sequence is boring. Boring is good. Most SSH outages come from skipping one of those four steps.


Don’t log in as root

Even if you haven’t disabled root login yet, build the habit of using a normal user account with sudo. Direct root SSH access removes accountability and makes it too easy to turn a key compromise into total host compromise.

A better pattern is simple:

  • authenticate as a named user
  • gain privileges with sudo when needed
  • keep audit trails tied to a person or role

That’s also easier to manage later when multiple engineers need controlled access.
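When you’re ready to enforce this at the daemon level, root login can be disabled with a single directive. A minimal drop-in sketch (the filename here is just a convention I’m assuming, not an Ubuntu default):

```
# /etc/ssh/sshd_config.d/10-no-root.conf
PermitRootLogin no
```

As with any sshd change, validate with `sudo sshd -t` and restart the service afterward, keeping an existing session open while you test.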

A practical hardening baseline

If I were checking a new Ubuntu server before calling SSH “ready,” I’d expect this baseline:

| Control | Expected state | Why it matters |
| --- | --- | --- |
| Key-based login | Enabled | Removes dependence on password strength |
| PasswordAuthentication | Disabled | Cuts off the most abused login path |
| Named user account | Used for login | Improves accountability and reduces blast radius |
| Config validation | Run before restart | Prevents simple syntax mistakes from taking access down |

This is the point where SSH stops being merely enabled and starts being production-appropriate. The remaining improvements are about reducing noise, limiting who can log in, and adding automated response when someone starts testing your perimeter.

Advanced Hardening and Automated Defense

A fresh Ubuntu server with SSH exposed to the internet starts collecting noise almost immediately. Login probes, password sprays, and scans hit port 22 within minutes on many public IPs. The basics you already set matter most. These next controls reduce noise, narrow access, and give the server a way to react when someone keeps pushing.


Change the default port if the server is public

Changing SSH from port 22 to a high, non-standard port does not fix weak authentication. It does cut down on commodity scans and log noise because many automated probes still try the default first. I treat this as an operational filter, not a security boundary.

Edit the SSH daemon config:

sudo nano /etc/ssh/sshd_config

Set a different port:

Port 2222

Open the new port in UFW before you restart SSH. That order matters. If you restart first and forget the firewall rule, you can lock yourself out of a remote box.

sudo ufw allow 2222/tcp

Then restart the service:

sudo systemctl restart ssh

Connect with the new port explicitly:

ssh -p 2222 youruser@your-server-address

Only after you’ve confirmed a login on the new port should you remove the old rule:

sudo ufw delete allow ssh

There is a real trade-off here. A custom port is easy to manage on one or two hosts. In a larger environment, every engineer, SSH config, automation job, health check, and runbook needs to match. If your team uses a central access pattern instead of host-by-host exposure, a bastion server design for AWS usually scales better than relying on port changes across many instances.
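One way to keep a custom port from becoming a daily memory test is to record it in the client’s SSH config. A sketch with a hypothetical alias:

```
# ~/.ssh/config on your workstation
Host prod-web
    HostName your-server-address
    Port 2222
    User youruser
```

After that, `ssh prod-web` is enough, and the port lives in one reviewable place instead of in everyone’s shell history.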

Install Fail2Ban for automated response

Fail2Ban watches authentication logs and temporarily blocks IPs that keep failing. It is useful because it turns repeated abuse into a short-lived firewall problem instead of a constant stream of retries in your logs.

Install it:

sudo apt update
sudo apt install fail2ban

A practical SSH jail, placed in a local override file such as /etc/fail2ban/jail.local rather than the package-managed jail.conf, can look like this:

[sshd]
enabled = true
port = 2222
logpath = %(sshd_log)s
maxretry = 5
findtime = 10m
bantime = 10m

Restart and enable the service:

sudo systemctl restart fail2ban
sudo systemctl enable fail2ban

On Ubuntu, I usually verify the jail is active instead of assuming the package defaults fit the host:

sudo fail2ban-client status
sudo fail2ban-client status sshd

That check catches a common mistake. Teams install Fail2Ban, change SSH to port 2222, and forget to update the jail config. The service runs, but it is protecting the wrong port.

Use Fail2Ban as a cleanup layer. It helps with repeated brute-force behavior, but it should sit behind stronger controls you already applied, especially key-based authentication and disabled password logins. The Fail2Ban project documentation is the right reference for jail behavior and tuning details, because settings like bantime, findtime, and backend selection vary by distro and log source.

Restrict which users may log in

SSH access should be explicit. If only two accounts should ever connect remotely, define that in sshd_config instead of trusting account sprawl not to happen later.

Add:

AllowUsers devuser deployuser

This matters more on long-lived servers. Old admin accounts, service users, and temporary troubleshooting accounts tend to accumulate. AllowUsers makes remote access a deliberate allowlist instead of an accident of local account creation.

For deployment accounts or CI users, go a step further with Match User rules. That lets you apply tighter controls to one account without affecting everyone else. For example, you can limit forwarding features for an automation user while leaving an engineer account less constrained.
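As a sketch, a Match User block for a hypothetical deploy account might lock down features a human engineer still needs:

```
# Applies only to deployuser; all other accounts keep the global settings.
Match User deployuser
    PasswordAuthentication no
    AllowTcpForwarding no
    X11Forwarding no
```

A Match block applies until the next Match line or end of file, so keep these at the bottom of sshd_config or in their own drop-in file.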

Recommended advanced controls

On an internet-facing Ubuntu host, I apply these in this order:

  • Reduce unauthenticated noise

    • Move SSH off port 22 if your team can support the extra configuration overhead
  • Automate short-term blocking

    • Install Fail2Ban and confirm the sshd jail matches the port and log source you use
  • Limit remote identities

    • Use AllowUsers so only approved accounts can log in over SSH
  • Add per-account restrictions where needed

    • Use Match User for deploy or automation accounts that need tighter boundaries

What these controls mitigate

| Control | Main threat reduced | Main cost |
| --- | --- | --- |
| Non-standard port | Commodity scans and background login noise | More configuration overhead |
| Fail2Ban | Repeated authentication abuse from the same source | Another service to monitor and tune |
| AllowUsers | Exposure of accounts that should never have remote access | Ongoing account hygiene |
| Match User | Over-permissioned automation or deploy accounts | More complex SSH configuration |

These controls do not replace good authentication. They make the SSH surface quieter, narrower, and easier to manage under real production conditions.

Common SSH Pitfalls and Troubleshooting Steps

The usual assumption is that if you followed the commands, SSH problems must come from the network. That’s often wrong. Most failures come from small local mistakes, especially after the first round of hardening.

If you get Connection refused, start with the obvious checks in the right order. Inspect sudo systemctl status ssh, confirm the daemon is listening on the port you expect, and verify your firewall rule matches that same port. If you changed SSH to 2222 but your client still tries 22, the error looks dramatic even though the mistake is simple.

Permission problems break key auth quietly

The most frustrating error is:

Permission denied (publickey)

That message usually means the server rejected your key before login even started. The common cause is bad ownership or permissive file modes in the user’s SSH directory.

Check these on the server:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

Then make sure the files are owned by the correct user account. If authorized_keys belongs to root or another user, OpenSSH may ignore it.
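Fixing ownership is a one-liner. This sketch assumes “youruser” is the account you log in as:

```shell
# Recursively hand the SSH directory back to the login user;
# sshd ignores authorized_keys owned by anyone else.
sudo chown -R youruser:youruser /home/youruser/.ssh
```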

SSH is intentionally strict about permissions. If the server thinks anyone else could alter your key files, it treats them as untrustworthy.

The bigger problem most guides skip

Basic tutorials usually stop after file permissions. That’s not enough once a team is involved. As noted in this discussion of SSH key management on Ubuntu, many guides mention chmod 700 ~/.ssh and chmod 600 ~/.ssh/authorized_keys but miss the harder operational issues around rotation, compromise response, and CI/CD integration, which turn into serious security liabilities at scale.

That’s the troubleshooting gap. A single developer can fix a bad authorized_keys file manually. A growing team needs a process for:

  • Removing stale keys when someone leaves or changes roles
  • Replacing compromised keys without disrupting deploy access
  • Separating human and automation credentials so CI doesn’t inherit personal trust
  • Auditing who still has access across multiple servers
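A starting point for auditing is listing exactly which keys a server currently trusts. ssh-keygen can fingerprint every entry in an authorized_keys file at once; this small wrapper is a sketch that also tolerates a missing file:

```shell
# Print one line per authorized key: bit size, fingerprint, comment, type.
# The comment field is why descriptive comments on keys pay off later.
audit_keys() {
    local file="${1:-$HOME/.ssh/authorized_keys}"
    if [ -f "$file" ]; then
        ssh-keygen -lf "$file"
    else
        echo "no authorized_keys at $file"
    fi
}

audit_keys
```

Run it on each server (or point it at other users’ files with sudo) and compare the comments against who should still have access.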

A short diagnostic flow that works

When SSH breaks, check in this order:

  1. Service state

    sudo systemctl status ssh
    
  2. Listener

    ss -tlnp | grep :22
    
  3. Firewall rule

    sudo ufw status
    
  4. Auth file permissions

    ls -ld ~/.ssh
    ls -l ~/.ssh/authorized_keys
    
  5. Client settings

    • username
    • port
    • expected key

If you troubleshoot in that sequence, you’ll usually find the fault quickly. If you bounce between client guesses and server edits randomly, you can make a small issue much harder to unwind.

Your Secure Gateway to Ubuntu is Ready

At this point, your Ubuntu server has moved from “reachable” to “operable.” That’s an important distinction. You installed OpenSSH, verified the service under systemd, opened the firewall safely, tested the full connection path, and then hardened authentication so the server isn’t leaning on passwords.

The final state matters more than the install command. You now have a remote access path that’s suitable for real backend work. That includes deployments, log inspection, service restarts, scheduled maintenance, and emergency fixes when something fails outside business hours.

The hardening steps also set the tone for how you manage infrastructure. Key-based access, limited users, optional port changes, and automated blocking with Fail2Ban all push SSH away from ad hoc admin access and toward a controlled operational interface.

A good next move is creating a local SSH config file so you can connect to multiple servers with aliases instead of memorizing usernames, ports, and key paths. Small improvements like that reduce mistakes over time.
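As a concrete starting point, a client-side config with hypothetical host aliases and documentation-range addresses might look like this:

```
# ~/.ssh/config
Host web-prod
    HostName 203.0.113.10
    User deployuser
    Port 2222
    IdentityFile ~/.ssh/id_ed25519

Host db-staging
    HostName 198.51.100.22
    User devuser
```

With that in place, `ssh web-prod` replaces the full command, and onboarding a teammate means sharing a config snippet instead of a wiki page of ports and usernames.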


Backend teams spend a lot of time comparing frameworks and deployment stacks, but secure access is the layer everything else depends on. Backend Application Hub publishes practical backend guides that help engineers move from quick setup to production-ready decisions across DevOps, APIs, server architecture, and modern application stacks.
