Amazon EC2 for Enterprise
When you sign up for AWS you create a root account that is defined by an email address and a password. This account is special because its permissions can never be limited and it can never be replaced. Best practice is to choose a long random password for the root account, enable second-factor authentication with a Gemalto token, and use it only when necessary. You should create a master sub-account that has full permissions and use it for regular operations. If the master sub-account is compromised, it can be deleted by the root account.
Any account that can launch new EC2 instances should be protected with strong authentication, otherwise an attacker might use it to launch thousands of machines which could incur large costs even in the few hours before a billing alert was noticed.
Purchase two Gemalto tokens, one for the root account and one for the master sub-account. Payment is accepted by the Amazon store. You can use any Amazon account for the purchase. You may use the root account, but in general you want to lock up and never use that identity. Although the root account can be used with the Amazon store, someone with the root credential can't make arbitrary purchases there because one-click checkout is disabled by default.
When the tokens arrive, you will associate each one, by serial number and consecutive OTP values, to a single account. One token cannot be used with two accounts at the same time. For the root account: log in, go to "My Account", then enable the token. For the master account: log in, go to IAM > Users > Security Credentials, then enable the token.
As your enterprise grows, you may create more sub-accounts with more restricted permissions.
IAM - Secure AWS Access Control
Login to the root account and select IAM from the AWS Management Console.
Click "Password Policy" in the sidebar and enable a strict password policy: all boxes checked, minimum length 10 characters.
Click "Dashboard" in the sidebar. You could create an account alias to have a more memorable url for sub-account logins, but after an initial sub-account login, aws.amazon.com remembers you and redirects to the appropriate long login url. In any case, note the IAM users sign-in link.
In IAM you can create users that are basically sub-accounts that sign in via username and password (and optionally a Gemalto token) via the IAM sign-in link. Users can be assigned permissions and/or groups. Groups are just sets of permissions. Roles aren't for users; they are used to grant permissions to applications that might interact with AWS on your behalf.
Click "Users" in the sidebar and create a new user called "master". You don't need to generate an access key now.
Click the created master user and on the "Permissions" tab add the "Administrator Access" permissions template which gives full control to "AWS Management Console" services. It doesn't give access to "My Account" features such as: Account Activity, Manage Your Account, Payment Method, Personal Information, Security Credentials, Usage Reports, Billing Alerts or Billing Preferences. You will have to use the root account for those features. The master user does have access to CloudWatch which can show estimated charges and can view/edit alarms such as per-instance CPU alarms or billing threshold alarms.
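For reference, the "Administrator Access" template is essentially an allow-everything policy document, along these lines (a sketch; verify against the actual template text shown in the IAM console):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
```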
On the "Security Credentials" tab click "Manage Password" and assign a custom password. When your Gemalto token arrives, come back here (as the master user) and click "Manage MFA Device" to bind the token to the master user.
Logout of the root account. When your Gemalto token arrives, log back into the root account and in "My Account > Security Credentials" bind the token to the root user. Store the root password and root Gemalto token in a secure location. Use only the master account for regular operations.
In the future, when creating other more limited sub-accounts, note that you can download an autogenerated password, but that user is not required to change their password at first login.
Note: don't be surprised if the OTP from the Gemalto token is rejected with an obtuse error. The clock often falls out of sync. You just need to follow the "Having problems with your authentication device?" link and re-sync the device.
Before you get started, you might want to create a billing alarm. Login to the master account and select CloudWatch Resource & Application Monitoring.
Click "Alarms" in the sidebar then "Create Alarm". Click "Amazon EC2" then "Continue". Specify "Billing Alert" for both the Name and Description. Choose a US dollar amount so the alarm reads something like: "This alarm will enter the ALARM state when EstimatedCharges is >= $40 for 360 minutes".
This is an alarm on the Maximum EstimatedCharges for a 6 hour period. The EstimatedCharges is an accumulating monthly amount. It starts at zero at the 1st of the month and you are billed the final amount at the end of the month. So this alarm means that 6 hours after your total monthly bill exceeds $40, you will be notified. Click "Continue" and specify your email address to complete the alarm.
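If you prefer the command line, the same alarm can be sketched with the modern AWS CLI (an assumption that it's installed and configured; the SNS topic ARN is a placeholder you must create first, and the AWS/Billing namespace is assumed here rather than taken from the console flow above):

```shell
# Sketch: create the billing alarm from the CLI instead of the console.
# 21600 seconds = 360 minutes; threshold 40 = $40 USD.
aws cloudwatch put-metric-alarm \
  --alarm-name "Billing Alert" \
  --alarm-description "Billing Alert" \
  --namespace "AWS/Billing" \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 40 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
```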
Private Networks (background info)
Amazon describes how to get started with a VPC.
The concept of a private address space is defined in RFC 1918.
The Internet Assigned Numbers Authority (IANA) has reserved the following three blocks of the IP address space for private internets:

10.0.0.0    - 10.255.255.255  (10/8 prefix)
172.16.0.0  - 172.31.255.255  (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
Any address of the form 10.x.x.x doesn't exist on the internet, and it is the job of the local network to find the appropriate local computer.
A subnet is defined by an IP prefix. CIDR notation for an IP prefix shows a decimal IP and the number of most-significant bits included in the prefix. IPv4 is four groups of 8 bits, so 192.168.1.0/24 means that only the last eight bits (i.e. the '.0' portion) are outside the prefix. Transport between subnets is achieved by logical or physical routers.
IP address      11000000.10101000.00000101.10000010   192.168.5.130
Subnet mask     11111111.11111111.11111111.00000000   255.255.255.0
Network prefix  11000000.10101000.00000101.00000000   192.168.5.0/24
Host part       00000000.00000000.00000000.10000010   0.0.0.130
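The prefix arithmetic above is just a bitwise AND of the address with the mask. A quick sketch in shell (ip_to_int and int_to_ip are hypothetical helper names invented for this example):

```shell
# Compute the network prefix and host part of 192.168.5.130/24.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
int_to_ip() {
  echo "$(( $1 >> 24 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"
}
ip=$(ip_to_int 192.168.5.130)
mask=$(ip_to_int 255.255.255.0)
int_to_ip $(( ip & mask ))                 # network prefix: 192.168.5.0
int_to_ip $(( ip & ~mask & 0xFFFFFFFF ))   # host part: 0.0.0.130
```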
Routers in networks not using private address space are expected to be configured to reject (filter out) routing information about private networks.
For example, if you have three computers and a router physically connected by ethernet, and one of these computers broadcasts a message for IP X, then all four devices see it. Since only one device thinks it is IP X, only that device operates on the packet. The router is effectively configured to match all addresses outside its subnet mask and passes those packets to whatever larger network it is attached to. The other network sees the router as a single IP, so when the router receives a response it must translate that to an internal IP and broadcast it internally.
Amazon VPC lets you define subnets and routing for your EC2 resources. AWS reserves the first four IP addresses and the last IP address in each subnet; they're not available for use.
Network prefix  00001010.00000000.00000000.00000000   10.0.0.0/24
Reserved        00001010.00000000.00000000.00000000   10.0.0.0
Reserved        00001010.00000000.00000000.00000001   10.0.0.1
Reserved        00001010.00000000.00000000.00000010   10.0.0.2
Reserved        00001010.00000000.00000000.00000011   10.0.0.3
Reserved        00001010.00000000.00000000.11111111   10.0.0.255
You can assign an Elastic IP address to any instance in your VPC. Thus by default all boxes in your VPC are unreachable from the web. If you add an Elastic IP to box1 then only it is reachable from the web, yet connectivity exists between it and all boxes inside the VPC.
You can define a VPC; attach an Internet Gateway, which connects the VPC to the internet and to AWS services such as EC2 and S3; create a subnet in your VPC; launch an EC2 instance into your subnet, which will be assigned a private IP from the subnet's range; set up a router in the VPC to connect the subnet to the internet; set up a security group to allow traffic into the EC2 instances in the subnet; and assign an Elastic IP to the EC2 instance in addition to its private IP, so that internet traffic can travel from the web, through the gateway, through the router, through the security group, to your EC2 instance. For a diagram and details see Amazon's VPC guide.
Setup a VPC
kiip provides an excellent how-to post about Amazon's VPC. Login to the master account and select VPC Isolated Cloud Resources.
Click "Start VPC Wizard" from the "VPC Dashboard". Your VPC will be created in the region currently selected in the top-right dropdown. This defaults to the region nearest your browser.
In production you would select "VPC with Public and Private Subnets and Hardware VPN Access" so devices in the private subnet can connect to your corporate network, and so that administrators from your corporate network can connect to the machines in the private subnet.
The cost for Hardware VPN Access is $440 USD/yr. The network address translation box that connects your public and private subnets isn't supported on a micro instance, so your minimum cost for an on-demand (not pre-paid) version is $570 USD/yr.
For a development environment it is sufficient to have a single semi-public subnet. This will contain public boxes attached to Elastic IPs, and private boxes only accessible indirectly via a public machine in the subnet, often referred to as a Bastion host.
Select "VPC with a Single Public Subnet Only" and accept the defaults. It's okay to leave the availability zone at "No Preference". Assuming you are in N.Virginia, i.e. us-east, then your availability zone options are: us-east-1a, us-east-1b, us-east-1c, us-east-1d. These are only meaningful if you're setting up high availability with multiple VPCs across multiple availability zones. No information is given on the specific meaning of us-east-1a, only that its failures aren't likely to coincide with failures in other availability zones. However, take note: I set up a VPC in us-east-1d only to discover that no reserved micros were available there. I had to move my VPC and all my instances to us-east-1b. So you might want to search for reserved instances from EC2 prior to selecting an availability zone for your VPC.
The wizard has created:
- 1 VPC (size /16, default DHCP, AmazonProvidedDNS)
- 1 Subnet (size /24)
- 1 Network Access Control List (control traffic at the subnet level)
- 1 Internet Gateway (attached to VPC)
- 2 Route Tables (traffic between subnet and gateway)
- 1 Security Group (firewall)
EC2 instances launched inside your VPC are private. They're not assigned a public IP address. By default, all instances in the VPC receive an unresolvable hostname that AWS assigns (e.g., ip-10-0-0-202). To assign your own domain names to your EC2 instances you must customize the DHCP options used in your VPC, and must provide your own DNS.
"domain-name-servers=AmazonProvidedDNS" is the single option contained in the default DHCP. It maps to a DNS server running on VPC IP+2, i.e. 10.0.0.2 on a 10.0.0.0/16 subnet.
The Gateway created by the wizard has been attached to your VPC ID. The custom (not main) route table created by the wizard has been attached to your VPC ID, and targets the entire VPC space 10.0.0.0/16 as local. It sends everything else (0.0.0.0/0) to your Gateway (associated by ID). The route table has been associated with your subnet (by ID) (10.0.0.0/24). This achieves internal connectivity and allows access out to the internet. The main route table is not used. Any new subnets will default to the main route table and have no direct access to the internet.
A security group acts as a virtual firewall. Security groups in a VPC are different from normal EC2 security groups. New instances belong to the VPC's default security group. Each instance can be a member of up to five security groups. Normal EC2 security groups only have inbound rules, but VPC security groups have both inbound and outbound rules. Security groups implicitly deny everything; you can only add allow rules. New security groups default to no inbound rules (deny ingress) and an allow-all outbound rule (allow egress). Responses to inbound traffic are allowed past egress rules, and vice versa. Instances in a security group therefore can't communicate unless the rules explicitly allow it. Instances in separate subnets can be in the same security group, and instances in the same subnet can be in separate security groups.
The default security group allows all inbound traffic from other boxes in the same security group (associated by ID) and allows all outbound traffic to everywhere.
For the semi-private development subnet described above, you should create two security groups: one for your webservers, the other for internal boxes. You can allow communication between these two groups and use a webserver box as a gateway for SSH access to the internal boxes.
Click "Create Security Group". Name it "PublicSG", give it description "Publicly Facing Boxes", select your VPC, confirm it and note its ID.
Click "Create Security Group". Name it "PrivateSG", give it description "Internal Boxes", select your VPC, confirm it and note its ID.
For the PublicSG, delete the outbound allow-all rule and apply something like the following Custom TCP rules. Be sure to click "Apply Rule Changes" on both your inbound and outbound modifications.
Allow inbound from <your_ip> to port 22
Allow inbound from PrivateSG (referenced by ID) to port 8443
Allow outbound to PrivateSG (referenced by ID) on ports: 22, 8443
Allow inbound from anywhere to ports: 443, 80
Allow outbound to 188.8.131.52/24 on port 80
Allow outbound to 184.108.40.206/16 on port 80
For the PrivateSG, delete the outbound allow-all rule and apply something like the following Custom TCP rules. Be sure to click "Apply Rule Changes" on both your inbound and outbound modifications.
Allow inbound from PublicSG (referenced by ID) to ports: 22, 8443
Allow outbound to PublicSG (referenced by ID) on ports: 8443
Allow outbound to 220.127.116.11/24 on port 80
Allow outbound to 18.104.22.168/16 on port 80
In the above example, you can SSH into PublicSG on 22 and then SSH further into PrivateSG on 22. Boxes on both PublicSG and PrivateSG can talk to each other over 8443. Anyone from the web can contact PublicSG over 80 and 443. Note that neither can call out to the web except for yum updates to machines on ranges 22.214.171.124/24 and 126.96.36.199/16. Also note that this is just the security group firewall. Your individual boxes should also have firewalls.
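These console steps can also be scripted. A hedged sketch with the AWS CLI (assumes the CLI is installed and configured; vpc-xxxxxxxx, sg-xxxxxxxx and YOUR_IP are placeholders you must substitute):

```shell
# Sketch: create PublicSG and open HTTP/HTTPS to the world, SSH to you.
aws ec2 create-security-group \
  --group-name PublicSG \
  --description "Publicly Facing Boxes" \
  --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --protocol tcp --port 22 --cidr YOUR_IP/32
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```

Group-to-group rules like the PrivateSG references above take --source-group instead of --cidr.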
You can launch a micro EC2 instance in the VPC subnet with the PublicSG firewall and use the VPC console to attach an elastic IP to it. The launch of an EC2 instance creates a keypair, drops the public key onto the instance for SSH authentication and prompts you to download the private key. That download is the only copy of the private key. The default linux image is already hardened and requires the private key for SSH connectivity.
Login to the master account and select EC2 Virtual Servers in the Cloud.
Click "Launch Instance". Continue with the "Classic Wizard". Select "Amazon Linux AMI" 32-bit. Note: it is easy to migrate to a larger instance at a later date as long as you don't need to change from 32-bit to 64-bit. If there is any likelihood that you will ever require more than 4 GB of RAM or any 64-bit feature, then you should choose 64-bit now.
The "T1 Micro (t1.micro, 613 MiB)" instance is the least expensive type. It can spike to two EC2 compute units (ECUs). It is designed to support tens of requests per minute. It has 613 MiB of RAM (roughly 643 MB) and 8 GiB of hard disk.
Select one T1 Micro (t1.micro, 613 MiB) instance, and launch it into your VPC. In advanced options, accept all defaults and continue. The micro instance defaults to an 8 GiB disk. You can increase that by editing it. The wizard shows it as a "delete on termination" disk, but it will in fact be backed by an EBS volume and survive reboots.
Optionally add some tags, such as "Name = My Public Server"
As part of an EC2 launch you must define the key that will be used for SSH access. The matching public key will be embedded in the instance and only connections with the private key will be accepted.
You can use existing keypairs or create new ones. Amazon only stores the public keys. You can use the same keys for multiple boxes or create keys for each box.
When you create a new keypair, you download the raw private key as a PEM file like the following. You must immediately protect this file.

-----BEGIN RSA PRIVATE KEY-----
MII53+FTW2K2gDtGkXOEBGy/7JQCqtvvs4AECgYEAWpT0wVQQaD7FtxLAbOlgRR2gVDCKtFRLL7W
lMdUowFaXb08HDrV7cwIIKIdzhA4M48K9bIxYqgLvR9aYudDxLPlBAAKCAQEAruYgXo1mqUnw...
-----END RSA PRIVATE KEY-----
I use putty on WinXP to SSH into these Linux boxes. If that's the case for you, you'll need a password-protected private key file. Download puttygen.

puttygen
conversions > import key > YourKey.pem
key passphrase: INVENT_A_STRONG_PASSWORD
confirm passphrase: STRONG_PASSWORD_AGAIN
save private key > YourKey.ppk
exit
You now have an encrypted .ppk version of your plaintext .pem key. You should delete the plaintext .pem file, or encrypt it and store it on a thumb drive in a secure location. Make sure you don't send the plaintext .pem key to your recycle bin (use SHIFT-DELETE). If you're hardcore you might use a true-delete application to scrub the disk where the private bytes were. For additional security, store the encrypted .ppk on a USB thumb drive and only connect it when necessary, so it is only prone to attack while in use.
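On Linux, the coreutils shred tool does the true-delete for you. A small demo (the YourKey.pem here is a stand-in file created just for the demo; shred assumes a filesystem where overwrite-in-place is effective, which excludes some journaling and SSD setups):

```shell
# Demo: securely delete a plaintext key file with shred.
printf 'dummy key material' > YourKey.pem
shred -u YourKey.pem   # overwrite the contents, then unlink the file
ls YourKey.pem 2>/dev/null || echo "YourKey.pem removed"
```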
Continuing with the launch wizard: When you created your VPC and subnets, you should have planned what machines are necessary and created security groups. If not, you can create new ones now. Choose the appropriate security group. Review and launch your new EC2 instance. Close the final dialog. It doesn't auto-redirect once the launch is complete.
Click on "Instances" in the EC2 sidebar and select your newly created instance. In the actions dropdown select "Change Termination Protection". Click "Yes, Enable". This prevents you from accidentally terminating the instance.
In EC2, stopping/rebooting an instance is different from terminating it. Stopping releases the virtual machine but retains your EC2 config and EBS volume which is your OS hard-disk. Stopping is like powering down a physical tower, removing the hard-disk, noting the details of the tower's hardware, and giving away the tower. Starting is like fetching a new (probably different) tower with the same brand/config of hardware, inserting your hard-disk and powering it on. Terminating an EC2 instance destroys it. You give everything away.
To connect to the new instance you will have to assign it an Elastic IP. You can't use existing EC2 Elastic IP addresses in your VPC. AWS limits you to 5 VPC Elastic IP addresses; to help conserve them, you can use a NAT instance (see "NAT Update" below).
From the VPC console click "Elastic IPs" in the sidebar, then click "Allocate New Address". Click "Yes, Allocate" for an "EIP used in: VPC". Select the new IP address and click "Associate Address".
Choose your EC2 instance (displayed by ID and name tag), which auto-populates the "Private IP address" box with an internal IP already assigned to this instance. Leave the other fields blank. The "Allow Reassociation" box would allow you to later move this IP to another internal box without first explicitly removing it from the current box.
Click "Yes, Associate" and note the IP for use below.
I use putty on WinXP to SSH into these Linux boxes.
session > saved_sessions: My Public Server
session > host: YOUR_EIP_FROM_ABOVE
session > port: 22
session > type: SSH
window > lines of scrollback: 3000
window > colours > use system colours: checked
connection > data > auto-login username: ec2-user
connection > ssh > auth > private key file for authentication > browse: YourKey.ppk
session > save
session > open
- yes, accept unknown thumbprint (only happens once)
- passphrase for key "imported-openssh-key": STRONG_PASSWORD_FROM_ABOVE
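If you're on Linux or a Mac instead of putty, the equivalent OpenSSH invocation uses the original .pem rather than a .ppk (YOUR_EIP_FROM_ABOVE is the Elastic IP associated earlier):

```shell
# OpenSSH refuses keys with loose permissions, so lock the file down first.
chmod 600 YourKey.pem
ssh -i YourKey.pem ec2-user@YOUR_EIP_FROM_ABOVE
```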
I use WinSCP on WinXP for file transfer to these Linux boxes.
WinSCP 4.3.5 Installation package
winscp435setup.exe > agree with defaults
Host: YOUR_EIP_FROM_ABOVE
Port: 22
User: ec2-user
Password: [blank]
Private key file: YourKey.ppk
File protocol: SFTP (don't allow SCP fallback)
> save > login > [enter STRONG_PASSWORD_FROM_ABOVE when prompted]
Find your timezone.
Update system timezone.
sudo ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime
sudo vi /etc/sysconfig/clock
The default AMI is already hardened. You don't have root access, but the "ec2-user" account can sudo root commands. We can use:
- lsof to see what ports are active,
- iptables to see the local firewall, and
- ifconfig to see the local networking config.
[ec2-user@ip-10-0-0-83 ~]$ sudo lsof -i
COMMAND    PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
dhclient   946     root    5u  IPv4   6283      0t0  UDP *:bootpc
sshd      1080     root    3u  IPv4   7096      0t0  TCP *:ssh (LISTEN)
sshd      1080     root    4u  IPv6   7098      0t0  TCP *:ssh (LISTEN)
ntpd      1100      ntp   16u  IPv4   7195      0t0  UDP *:ntp
ntpd      1100      ntp   17u  IPv4   7199      0t0  UDP localhost:ntp
ntpd      1100      ntp   18u  IPv4   7200      0t0  UDP 10.0.0.83:ntp
sendmail  1115     root    4u  IPv4   7263      0t0  TCP localhost:smtp (LISTEN)
sshd     19069     root    3r  IPv4  20408      0t0  TCP 10.0.0.83:ssh->YOUR_REMOTE_IP:39904 (ESTABLISHED)
sshd     19071 ec2-user    3u  IPv4  20408      0t0  TCP 10.0.0.83:ssh->YOUR_REMOTE_IP:39904 (ESTABLISHED)

[ec2-user@ip-10-0-0-83 ~]$ sudo lsof -i -P
COMMAND    PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
dhclient   946     root    5u  IPv4   6283      0t0  UDP *:68
sshd      1080     root    3u  IPv4   7096      0t0  TCP *:22 (LISTEN)
sshd      1080     root    4u  IPv6   7098      0t0  TCP *:22 (LISTEN)
ntpd      1100      ntp   16u  IPv4   7195      0t0  UDP *:123
ntpd      1100      ntp   17u  IPv4   7199      0t0  UDP localhost:123
ntpd      1100      ntp   18u  IPv4   7200      0t0  UDP 10.0.0.83:123
sendmail  1115     root    4u  IPv4   7263      0t0  TCP localhost:25 (LISTEN)
sshd     19069     root    3r  IPv4  20408      0t0  TCP 10.0.0.83:22->YOUR_REMOTE_IP:39904 (ESTABLISHED)
sshd     19071 ec2-user    3u  IPv4  20408      0t0  TCP 10.0.0.83:22->YOUR_REMOTE_IP:39904 (ESTABLISHED)

[ec2-user@ip-10-0-0-83 ~]$ sudo iptables -L -v
Chain INPUT (policy ACCEPT 10 packets, 660 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 7 packets, 980 bytes)
 pkts bytes target     prot opt in     out     source               destination

[ec2-user@ip-10-0-0-83 ~]$ sudo ifconfig
eth0      Link encap:Ethernet  HWaddr E9:02:0E:C5:98:16
          inet addr:10.0.0.83  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::cc5:fe02e9ff::9816/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3512 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4495 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:242725 (237.0 KiB)  TX bytes:355979 (347.6 KiB)
          Interrupt:25

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
This indicates that the local firewall allows everything and that the box is currently listening on ports 22 (for SSH) and 25 (for SMTP). We can see the open SSH connection to our putty terminal. Note that this box isn't aware of its public elastic IP, that is handled by the internet gateway in our VPC. Since we're protected by our security group, it isn't critical to configure the local firewall now. It's probably easier to install all your software, confirm it's working, then instate a maximally restrictive local firewall.
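When you do get around to locking down the local firewall, a maximally restrictive starting point might look like the following sketch. Add ACCEPT rules for whatever services you actually run before flipping the policy, or you will cut off your own traffic:

```shell
# Allow loopback, already-established flows, and SSH before default-denying.
# Order matters: set the DROP policy last so an open SSH session survives.
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
sudo iptables -P INPUT DROP
sudo service iptables save   # persist the rules across reboots
```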
Make the following modifications to further harden the default SSH config; the changed lines appear uncommented in the listing below. Note that each changed line may be adjacent to deleted lines not shown. Make sure to delete adjacent existing lines with differing values.
sudo vi /etc/ssh/sshd_config
# $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $

# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options change a
# default value.

#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::

# Disable legacy (protocol version 1) support in the server for new
# installations. In future the default will change to require explicit
# activation of protocol 1
Protocol 2

# HostKey for protocol version 1
#HostKey /etc/ssh/ssh_host_key
# HostKeys for protocol version 2
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key

# Lifetime and size of ephemeral version 1 server key
#KeyRegenerationInterval 1h
#ServerKeyBits 1024

# Logging
# obsoletes QuietMode and FascistLogging
#SyslogFacility AUTH
SyslogFacility AUTHPRIV
#LogLevel INFO

# --- explicitly disable weak authentication systems

# Authentication:

#LoginGraceTime 2m
PermitRootLogin no
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10

#RSAAuthentication yes
#PubkeyAuthentication yes
#AuthorizedKeysFile .ssh/authorized_keys
#AuthorizedKeysCommand none
#AuthorizedKeysCommandRunAs nobody

# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# RhostsRSAAuthentication and HostbasedAuthentication
IgnoreUserKnownHosts yes
# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes

# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no
PermitEmptyPasswords no
# EC2 uses keys for remote access

# Change to no to disable s/key passwords
ChallengeResponseAuthentication no

# Kerberos options
KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
#KerberosUseKuserok yes

# GSSAPI options
GSSAPIAuthentication no
#GSSAPIAuthentication yes
#GSSAPICleanupCredentials yes
#GSSAPICleanupCredentials yes
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
#UsePAM no
# Leaving enabled as described so that account and session checks are run
UsePAM yes

# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS

#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding no
#X11DisplayOffset 10
#X11UseLocalhost yes
#PrintMotd yes
#PrintLastLog yes
# Explicitly enable
PrintLastLog yes
#TCPKeepAlive yes
UseLogin no
UsePrivilegeSeparation yes
PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#ShowPatchLevel no
#UseDNS yes
#PidFile /var/run/sshd.pid
#MaxStartups 10
#PermitTunnel no
#ChrootDirectory none

# no default banner path
#Banner none

# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server

# Example of overriding settings on a per-user basis
#Match User anoncvs
#   X11Forwarding no
#   AllowTcpForwarding no
#   ForceCommand cvs server
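After editing, a quick sanity check helps catch a directive that didn't make it in. audit_sshd below is a hypothetical helper (not part of the AMI) that greps a config file for the hardened settings. Remember to restart sshd after changing the config (e.g. sudo service sshd restart), and verify a new login works before closing your current session:

```shell
# audit_sshd FILE: report whether each hardened directive is active.
audit_sshd() {
  conf=$1
  for want in 'Protocol 2' 'PermitRootLogin no' 'PasswordAuthentication no' \
              'PermitEmptyPasswords no' 'X11Forwarding no' 'UsePAM yes'; do
    if grep -q "^$want\$" "$conf"; then
      echo "ok: $want"
    else
      echo "MISSING: $want"
    fi
  done
}
```

On the instance, run audit_sshd /etc/ssh/sshd_config and look for any MISSING lines.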
The default image contains a weak ssh_host_key for protocol 1, a weak ssh_host_dsa_key for protocol 2, and a strong ssh_host_rsa_key for protocol 2. We have disabled protocol 1 above, and the strong ssh_host_rsa_key is used by default for protocol 2, so this is okay, but we should clean it up just to be safe.
Confirm our rsa key is strong.
ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key
2048 30:fd:ea:1c:19:98:46:bd:3e:52:02:6c:e2:2e:c7:43 /etc/ssh/ssh_host_rsa_key.pub (RSA)
Delete the other unused keys.
sudo rm -f /etc/ssh/ssh_host_key*
sudo rm -f /etc/ssh/ssh_host_dsa_key*
Note: these are not the keys used by ec2-user to connect to your box. They are the keys by which your box identifies itself. The first time you connect, you don't know your box and are prompted to add its ssh_host_rsa_key.pub to your local trust store. Thereafter, when new clients connect to this box, you can avoid that potentially risky operation by pre-distributing copies of ssh_host_rsa_key.pub, or simply the thumbprint 30:fd:ea:1c:19:98:46:bd:3e:52:02:6c:e2:2e:c7:43 displayed in the trust-prompt.
The only copy of the private key used by ec2-user to connect to your ec2 box is stored in the .ppk you created above. The corresponding public key that causes your ec2 box to trust that private key is stored in ec2-user's authorized_keys file.
ssh-rsa AAAAB3NzaC1...CRRqF sshToPublicSubnet
Now is a good time to restart the server, both to convince yourself that the disk won't be lost during reboot and to pick up OS updates. For whatever reason, they don't seem to appear during first login.
You can stop/start or reboot your instance from the actions dropdown in the EC2 console, or by issuing the restart command from the shell.
sudo shutdown -r now
See above regarding the difference between stop and terminate. Now might be a good time to confirm your termination protection is working by attempting a terminate from the actions dropdown on the EC2 console. The "Yes, Terminate" button should be greyed out.
For yum updates to work the security group must allow outbound on port 80 to something like repo.us-east-1.amazonaws.com. You will get the exact machine name in the error message when you attempt yum. For example:
[ec2-user@ip-10-0-0-197 ~]$ sudo yum list updates
Loaded plugins: priorities, security, update-motd, upgrade-helper
Could not retrieve mirrorlist http://repo.us-east-1.amazonaws.com/latest/main/mirror.list
error was 12: Timeout on http://repo.us-east-1.amazonaws.com/latest/main/mirror.list: (28, 'connect() timed out!')
Error: Cannot retrieve repository metadata (repomd.xml) for repository: amzn-main. Please verify its path and try again
[ec2-user@ip-10-0-0-197 ~]$ ping repo.us-east-1.amazonaws.com
PING s3-1-w.amazonaws.com (188.8.131.52) 56(84) bytes of data.
[ec2-user@ip-10-0-0-197 ~]$ ping repo.us-east-1.amazonaws.com
PING s3-1-w.amazonaws.com (184.108.40.206) 56(84) bytes of data.
[ec2-user@ip-10-0-0-197 ~]$ ping repo.us-east-1.amazonaws.com
PING s3-1-w.amazonaws.com (220.127.116.11) 56(84) bytes of data.
[ec2-user@ip-10-0-0-197 ~]$ ping repo.us-east-1.amazonaws.com
PING s3-1-w.amazonaws.com (18.104.22.168) 56(84) bytes of data.
[ec2-user@ip-10-0-0-197 ~]$ ping repo.us-east-1.amazonaws.com
PING s3-1-w.amazonaws.com (22.214.171.124) 56(84) bytes of data.
[ec2-user@ip-10-0-0-197 ~]$ ping repo.us-east-1.amazonaws.com
PING s3-1-w.amazonaws.com (126.96.36.199) 56(84) bytes of data.
[ec2-user@ip-10-0-0-197 ~]$ ping repo.us-east-1.amazonaws.com
PING s3-1-w.amazonaws.com (188.8.131.52) 56(84) bytes of data.
The problem is that the DNS load-balances across multiple IPs, so you need the following Security Group rules.
Allow outbound to 184.108.40.206/24 on port 80
Allow outbound to 220.127.116.11/16 on port 80
It's not clear if an instance inside a VPC without an EIP and not in a private subnet supported by a NAT box can successfully yum update. The only resolution on this amazon forum was to move the instance to the public subnet and temporarily attach an EIP for the purpose of the yum update (see "NAT Update" below).
Since I'm periodically using an EIP for direct access to my private boxes, I'm using that strategy for yum updates.
When you restart your putty session, the message of the day should show that OS updates are available. Install these now.
Using username "ec2-user".
Authenticating with public key "imported-openssh-key"
Passphrase for key "imported-openssh-key":
Last login: Wed Dec 19 16:34:16 2012 from YOUR_REMOTE_IP

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2012.09-release-notes/
There are 7 security update(s) out of 39 total update(s) available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-10-0-0-83 ~]$ sudo yum -y update
The Amazon Linux AMI is designed to be used in conjunction with online package repositories hosted in each Amazon EC2 region. These repositories provide ongoing updates to packages in the Amazon Linux AMI as well as access to hundreds of additional common open source server applications. Security updates are provided via the Amazon Linux AMI yum repositories. Security alerts are published in the Amazon Linux AMI Security Center and are available as an RSS feed.
For development it's probably sufficient to just watch the message of the day at login and apply updates whenever they're available. For production you should have a regular update schedule. Reboots are only required for kernel updates, but individual service restarts are required when a service such as httpd is updated.
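The regular production schedule mentioned above can be sketched as a nightly cron job. This is a sketch, not part of the original setup: it assumes the yum security plugin that the Amazon Linux AMI ships with, and the job name yum-security is arbitrary.

```shell
# Sketch of a nightly security-only update job (assumes the yum security
# plugin shipped with the Amazon Linux AMI; the job name is arbitrary).
CRON_JOB='#!/bin/sh
yum -y --security update'
# Install it as root:
#   printf '%s\n' "$CRON_JOB" | sudo tee /etc/cron.daily/yum-security
#   sudo chmod +x /etc/cron.daily/yum-security
printf '%s\n' "$CRON_JOB"
```

Remember that a kernel update still needs a reboot, and an updated service such as httpd still needs a restart, so an unattended job like this only covers the package installation half of the schedule.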
Because of the updates, restart now.
sudo shutdown -r now
For increased security, you could install all server software like httpd into a chrooted jail, but that probably means forgoing yum updates, which is probably a bigger vulnerability than not running in a chrooted jail. I've not had much success with my attempts at a chrooted jail, so we won't attempt that here.
To ssh from the public box to the private box, the private key stored on the public linux box must be in openssh format, not the format used by putty on windows.
puttygen
conversions > import key > YourKey.pem
key passphrase: INVENT_A_STRONG_PASSWORD
confirm passphrase: STRONG_PASSWORD_AGAIN
conversions > export openssh key > id_rsa > save
exit
Use WinSCP to copy the id_rsa file to the public box at /home/ec2-user/, where it will be owned by ec2-user. Then putty into the public box and use sudo to move the id_rsa file to a root-only, read-only location.
sudo mkdir /whatever
sudo chmod 700 /whatever
sudo mv /home/ec2-user/id_rsa /whatever
sudo chown root:root /whatever/id_rsa
sudo chmod 400 /whatever/id_rsa
Thereafter you can use it to ssh from the public box to the private box as follows.
sudo ssh -i /whatever/id_rsa ec2-user@PRIVATE_BOX_IP
However you should consider the security of your final solution. If you have a box in the PublicSG that only has 22 open from your IP, and that box is only used to further SSH into the PrivateSG, then it has good security. If, however, you only want to pay for one box in your PublicSG and want to use it to further SSH into your PrivateSG, you expose that SSH stepping stone to attacks on whatever else you have open, such as httpd on 80 from anywhere. In that case it may be more secure to simply SSH directly to your PrivateSG box, not allow SSH from the PublicSG to the PrivateSG, and not store the PrivateSG SSH key on the PublicSG box. For this to work you will need to associate an EIP with your PrivateSG box. You can have up to 5 EIPs in a VPC.
The software you install should not be run as root. If it binds to low ports it must be started as root and then changed to a less privileged user. Most server software supports this, but you may have to create a local user for this purpose.
sudo groupadd local
sudo useradd -g local -s /bin/bash -d /home/local -m local
sudo passwd local
New password: password
Retype new password: password
Having created such a user, you may only need to access your server as that user during regular operation since that user should own all the server software files you've installed and be able to restart/backup/etc your server software. This allows you to store the ec2-user credential on a disconnected thumb-drive, thus protecting it from attack. But our SSH config only allows connection via private keys, so we must additionally create a keypair for this user, register it with ssh, and move the private key to our windows machine.
sudo su - local
ls -la /home/local
ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/local/.ssh/id_rsa): [enter]
Created directory '/home/local/.ssh'.
Enter passphrase (empty for no passphrase): [enter]
Enter same passphrase again: [enter]
Your identification has been saved in /home/local/.ssh/id_rsa.
Your public key has been saved in /home/local/.ssh/id_rsa.pub.
The key fingerprint is:
0d:72:61:a0:87:2c:d8:76:38:52:9e:2d:3e:2d:77:e0 local@ip-10-0-0-61
The id_rsa.pub file has the same format as the authorized_keys file, so to allow ssh to this box as this user we only need to rename that file and move the private key to the remote client.
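Since the formats match, installing the public key is just a rename or copy. Here is a self-contained sketch you can try anywhere, using a scratch directory so nothing under ~/.ssh is touched:

```shell
# Generate a throwaway keypair and install its public half as
# authorized_keys -- the same thing the rename in the next step does.
KEYDIR="$(mktemp -d)"
ssh-keygen -q -t rsa -N "" -f "$KEYDIR/id_rsa"
# id_rsa.pub is already a valid authorized_keys line, so a copy suffices.
cp "$KEYDIR/id_rsa.pub" "$KEYDIR/authorized_keys"
chmod 700 "$KEYDIR"
chmod 600 "$KEYDIR/authorized_keys"   # sshd rejects lax permissions
```

On a box that already has keys installed, appending (>>) to authorized_keys instead of copying preserves the existing entries.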
mv /home/local/.ssh/id_rsa.pub /home/local/.ssh/authorized_keys
exit
sudo chown ec2-user:ec2-user /home/local/.ssh/id_rsa
sudo mv /home/local/.ssh/id_rsa /tmp/sshToPrivateBoxAsLocal
Currently our only access to this box is as the ec2-user, so to extract the private key we had to make it accessible to the ec2-user without sudo for export via WinSCP. Copy sshToPrivateBoxAsLocal to your windows box and delete the copy from the server.
puttygen
conversions > import key > sshToPrivateBoxAsLocal
key passphrase: INVENT_A_STRONG_PASSWORD
confirm passphrase: STRONG_PASSWORD_AGAIN
save private key > sshToPrivateBoxAsLocal.ppk
exit
Delete the raw private key file sshToPrivateBoxAsLocal, making sure not to send it to the recycle bin (SHIFT-DELETE). Now you can use this key to putty to your EC2 box as local. You can copy the ec2-user putty config from above and just update the following fields.
connection > data > auto-login username: local
connection > ssh > auth > private key file for authentication > browse: sshToPrivateBoxAsLocal.ppk
session > save
From the EC2 console click "Instances" in the sidebar and select your instance. Click "sda1" from "Root Device" in the instance details. This shows the EBS ID and the Snapshot ID. Every EBS-backed disk is created from a snapshot, and a snapshot doesn't store the entire disk, just the blocks that changed since the previous snapshot.
From the actions dropdown stop your EC2 instance.
EBS volumes and snapshots operate at a block level, so snapshots can be taken while an instance is running, but only data that is actually on disk (i.e. not in memory) will be included in the snapshot. Consequently it is recommended to stop the instance or at least cause all data to be flushed to disk and freeze the file system before starting the snapshot. You don't have to wait for the snapshot to complete before restarting the system (or unfreezing the file system). The snapshot will include the data as it was at the point in time the snapshot was initiated.
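The flush-freeze-snapshot-unfreeze sequence above can be sketched as follows. This is not from the original guide: the volume ID and the /data mount point are placeholders, it requires root plus configured AWS command line tools, and you would never freeze the root filesystem this way.

```shell
# Sketch: crash-consistent snapshot of a mounted data volume while the
# instance keeps running. vol-0123abcd and /data are placeholders.
VOLUME_ID="vol-0123abcd"
MOUNT_POINT="/data"
sync                             # flush cached writes to disk
sudo fsfreeze -f "$MOUNT_POINT"  # block new writes while the snapshot starts
aws ec2 create-snapshot --volume-id "$VOLUME_ID" --description "backup"
sudo fsfreeze -u "$MOUNT_POINT"  # unfreeze; the snapshot completes async
```

As noted above, the unfreeze can happen as soon as the create call returns; the snapshot captures the volume as it was at the instant it was initiated.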
From the EC2 console click "Snapshots" in the sidebar and click "Create Snapshot". Select the EBS ID of your EC2 root device in the "Volume" dropdown. Provide a name and description for the snapshot and click "Create". For an 8GiB disk it should take less than a minute.
If you want a yum package not provided by the aws repository, you'll have to specify the alternate repository and open the security group accordingly. For example, do the following to fetch rlwrap from the EPEL repository.
Allow Outbound 443 (HTTPS) to 18.104.22.168/32 (mirrors.fedoraproject.org)
Allow Outbound 80 (HTTP) to 22.214.171.124/32 (s3-mirror-us-east-1.fedoraproject.org)
sudo yum -y install rlwrap --enablerepo=epel
EC2 > Instances > Launch Instance > Classic Wizard > Amazon Linux AMI 32bit (or 64bit, see above) > T1 Micro > Your VPC > Your Name Tag > Your Key Pair > Your Security Group > Finish
VPC > Elastic IPs > Allocate New Address > EIP in VPC > Select Newly Created EIP > Associate Address > Your EC2 Instance > Finish
EC2 > Instances > Select Newly Created Instance > Actions > Termination Protection > Enable
Connect with putty as ec2-user
# overwrite ssh config with hardened template
sudo vi /etc/ssh/sshd_config
# apply updates
sudo yum -y update
# update timezone
sudo ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime
sudo vi /etc/sysconfig/clock
ZONE="America/New_York"
UTC=true
# reboot
sudo shutdown -r now
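The "hardened template" for sshd_config isn't reproduced here; as a sketch, the directives such a template typically sets look like the following. The AllowUsers list is an assumption based on the accounts created in this guide; adjust it to your own users.

```
Protocol 2
PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
AllowUsers ec2-user local
```

After editing, restart sshd from an open session and verify you can still log in from a second session before closing the first.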
Finally create the local user/group that will own your installs and configure ssh keypair access for that user (details above), and archive your ec2-user account. If you are only installing something like apache that creates its own user, then some of the above steps are unnecessary.
You just can't live without a NAT box. As soon as you start loadbalancing and autoscaling, the manual connection of EIPs becomes too burdensome. Luckily it appears that the NAT offered by the VPC wizard is m1.small only because the VPC didn't originally support t1.micro. whiteboardcoder gives a detailed post on how to set up your own t1.micro NAT box. Here it is in brief.
Before we get started, we need to consider security groups. Assuming your private subnet boxes will only reach out to the world on 80 and 443 you'll need to allow these inbound and outbound from your NAT box. The users of your NAT box will also have to allow outbound on 80 and 443 to your NAT box.
A private subnet box will request some webIP and it is the route table that sends it to the NAT. The private box thinks it is outbound to webIP but the security group is outside the private box and applies to the routed NAT box. Hence your private box's security group must allow outbound to the NAT box and doesn't need to allow outbound to the requested webIP. Your NAT box must allow inbound from your private box, and because it will forward the request to the world, must allow outbound to the webIP. The replies (inbound to NAT from webIP, outbound from NAT to privateBox, inbound to privateBox from NAT) are all implicitly allowed.
This raises an important security concern. If you have a private box that requires outbound to 0.0.0.0/0 on 80/443, then any other boxes in that same subnet cannot be restricted via security group to allow outbound only to a specific webIP on 80/443. This is because they only need to grant outbound to the NAT box, which, by necessity for your first requirement, allows outbound to 0.0.0.0/0. Your options are to restrict some boxes via their internal iptables or to put them in a separate subnet with a separate (more restrictive) NAT box.
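The internal-iptables option mentioned above might look like this on the restricted box. A sketch, requiring root; 203.0.113.10 is a documentation-range placeholder for the single permitted webIP:

```shell
# Allow outbound HTTPS only to one specific endpoint; drop the rest.
# 203.0.113.10 is a placeholder -- substitute the real destination.
ALLOWED_IP="203.0.113.10"
sudo iptables -A OUTPUT -p tcp --dport 443 -d "$ALLOWED_IP" -j ACCEPT
sudo iptables -A OUTPUT -p tcp --dport 443 -j DROP
```

Unlike a security group, these rules live on the box itself, so an attacker with root can remove them; they complement, rather than replace, subnet-level restrictions.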
You could have one security group for your box as if it weren't behind a NAT, and additionally a security group for your box indicating it is a NAT user. Thus when deployed in a public subnet your box can function normally, and when deployed in a private subnet behind a NAT the extra group allows the NAT connections.
AWS > VPC > Security Groups > Create Security Group
Name: NAT-group
Description: NAT-group
VPC: yourVPC > Yes, Create
AWS > VPC > Security Groups > Create Security Group
Name: NAT-users
Description: NAT-users
VPC: yourVPC > Yes, Create
AWS > VPC > Security Groups > NAT-group
Inbound
TCP, 22, yourIP > Add Rule
TCP, 80, NAT-USERS-sg-ID > Add Rule
TCP, 443, NAT-USERS-sg-ID > Add Rule > Apply Rule Changes
Outbound
[delete] existing ALL 0.0.0.0/0 rule
TCP, 80, 0.0.0.0/0 > Add Rule
TCP, 443, 0.0.0.0/0 > Add Rule > Apply Rule Changes
AWS > VPC > Security Groups > NAT-users
Outbound
[delete] existing ALL 0.0.0.0/0 rule
TCP, 80, NAT-GROUP-sg-ID > Add Rule
TCP, 443, NAT-GROUP-sg-ID > Add Rule > Apply Rule Changes
Assuming you already have SSH keys and a public subnet in a VPC, you can launch a new micro NAT box into it.
AWS > EC2 > AMIs
Viewing: Amazon Images
[search box]: nat
amazon/ami-vpc-nat-1.1.0-beta.x86-64-ebs > Launch
Number of Instances: 1
Instance Type: t1.micro
VPC: yourPublicSubnet > Continue
Termination Protection: Prevent against accidental termination > Continue > Continue
Name: NAT > Continue
Choose from your existing Key Pairs: yourKeyPair > Continue
Choose one or more of your existing Security Groups: NAT-group > Continue > Launch > Close
You must associate an EIP with the NAT box. This lets it have outbound connectivity through the internet gateway to the world. It also provides inbound from the world, say for SSH.
AWS > VPC > Elastic IPs > someIP > Associate Address
Instance: NAT (everything else default) > Yes, Associate
We also disable the "Source/Dest Check". I didn't know the reasoning for this, but there is some good detail on l33tlogic: disabling the Source/Destination check is what will allow this server to become the NAT server for your VPC. It stops the AWS firewalls from preventing traffic that isn't labeled specifically to your IP address from reaching your server.
AWS > EC2 > Instances > NAT > Actions > Change Source / Dest Check > Yes, Disable
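Inside the instance, NAT is just IP forwarding plus an iptables masquerade rule; the following is a sketch of roughly what the amazon NAT AMI configures at boot, which is also what you'd run to turn a stock instance into a NAT box yourself. It requires root, and assumes the 10.0.0.0/16 VPC used throughout this guide.

```shell
# Forward packets from the VPC and rewrite their source address to the
# NAT box's own address on its public-facing interface (eth0).
VPC_CIDR="10.0.0.0/16"
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o eth0 -s "$VPC_CIDR" -j MASQUERADE
```

The Source/Dest Check has to be disabled precisely because these forwarded packets carry the private boxes' addresses, not the NAT box's own.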
Finally we have to change the route table config. If you created a public and private subnet with an m1.small NAT instance via the AWS wizard, then you just have to tweak the config to use the new t1.micro NAT instance as described by whiteboardcoder, but if you just had a single public subnet it's a bit more complicated.
EC2 instances are launched into a subnet. A subnet has a route table. A route table says "local traffic (10.0.0.0/16) goes here, and everything else (0.0.0.0/0) goes elsewhere", where elsewhere may be an internet gateway or a NAT box. The act of associating an elastic IP registers an internal box's IP and a public IP with the internet gateway such that it supports inbound and outbound connections between that pair. But that only works for a box in a subnet with a route table that says "everything else goes to the internet gateway". The key is that "local traffic" means everything in the VPC, i.e. instances in two subnets are equally local to each other.

Thus we can put a NAT box in a subnet whose route table says "everything else to the gateway", which means we can attach an EIP to the NAT box, giving it inbound and outbound access to the world (which is why we call it the public subnet). Then we can create a new subnet and a new route table that says "everything else goes to the NAT box" (which happens to be in a different subnet). Since the new subnet doesn't have the internet gateway in its route table, instances launched into it can't be given EIPs, so we call it a private subnet.
Of course this means that you'll no longer be able to SSH directly to anything in the private subnet. But you can SSH to a box in the public subnet (like the NAT box) and then SSH into the private subnet, or you can have a proxy-box in the public subnet that port-redirects, say, 2201 to 22 on box1 and 2202 to 22 on box2. This has the advantage of not requiring any cloud box to contain SSH private keys.
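The proxy-box redirects described above can be sketched with iptables DNAT rules. This is a sketch, not from the original setup: the private-subnet IPs are placeholders, it requires root, and the proxy box also needs security group rules admitting 2201/2202 inbound and 22 outbound to the private boxes.

```shell
# Redirect 2201/2202 on the proxy box to port 22 on two private boxes.
BOX1_IP="10.0.1.11"   # placeholder private-subnet addresses
BOX2_IP="10.0.1.12"
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A PREROUTING -p tcp --dport 2201 -j DNAT --to-destination "$BOX1_IP:22"
sudo iptables -t nat -A PREROUTING -p tcp --dport 2202 -j DNAT --to-destination "$BOX2_IP:22"
sudo iptables -t nat -A POSTROUTING -j MASQUERADE
```

A client then connects with ssh -p 2201 to the proxy box's public address and lands on box1; no private key ever sits on the proxy box.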
Back to the task at hand: we must create a new subnet and a new route table. If you already have EC2 instances, some of them will have to be migrated. If the majority are private boxes then it's best to convert the existing subnet into the private subnet and make the new one the public subnet. Migrating boxes means a change of the underlying hardware and a change of the local IP. This probably means you'll have to tweak config and machine-bound passwords after the move.
AWS > VPC > Subnets > Create Subnet
VPC: yourVPC
Availability Zone: asAbove
CIDR Block: 10.0.1.0/24 > Yes, Create
// From EC2, note the ID of your NAT instance, i.e. i-abcd1234
AWS > VPC > Route Tables > Create Route Table
VPC: yourVPC > Yes, Create
Routes
Destination: 0.0.0.0/0
Target: yourNATinstanceID > Add > Yes, Create
Associations
Subnet: yourNewSubnetID > Associate > Yes, Associate
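The same console steps can be sketched with the aws command line tools. All IDs below are placeholders; in practice the create-subnet and create-route-table calls return the real subnet and route-table IDs to feed into the later commands.

```shell
# Create the private subnet, its route table, a default route through the
# NAT instance, and the subnet association. IDs are placeholders.
VPC_ID="vpc-0123abcd"
NAT_ID="i-abcd1234"
aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24
aws ec2 create-route-table --vpc-id "$VPC_ID"
aws ec2 create-route --route-table-id rtb-0123abcd \
    --destination-cidr-block 0.0.0.0/0 --instance-id "$NAT_ID"
aws ec2 associate-route-table --route-table-id rtb-0123abcd \
    --subnet-id subnet-0123abcd
```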
Now we can put our loadbalancer in the public subnet and the autoscaled loadbalanced instances in the private subnet; the instances can be reached publicly via the loadbalancer and can reach out via the NAT box.
We are left with the problem of granting restricted access to special ports (like SSH) on these boxes. I agonize over that in Port Forwarding Gateway via iptables on Linux where I end up using the port forwarding ability of SSH itself.