Security model details
HostedFTP implements a security model that ensures all files and metadata, including file names, folder names, and field names, are encrypted in transit, on arrival at our SaaS application at the AWS site, and at rest in AWS S3 storage. No data or metadata is exposed at any point; this is an exclusive design capability of HostedFTP.

Security Implementation – How security is built into the service
HostedFTP was built with security in mind from the ground up:
- File encryption – all customer files are encrypted both in transit and at rest. When you transfer files over HTTPS or FTPES, the files are encrypted in transit by TLS/SSL. When you transfer files over SFTP, the files are encrypted in transit by SSH. As each byte of a file is uploaded from the end-user to the FTP server, it is encrypted in memory using AES 256-bit encryption before it is written to the EBS storage volume (see the first sketch after this list). Once the full file is received, it is securely uploaded to S3 for permanent storage. The same process holds in reverse when downloading a file: the requested file is downloaded securely from S3 to EBS, and the FTP server reads this encrypted file from EBS, decrypts it, and streams the decrypted bytes down to the end-user as requested. At no time does an unencrypted file touch a disk or storage volume.
- Database encryption – customer metadata stored in RDS is encrypted at the database column level by the application, using AES 256-bit encryption. The Enterprise Java web application is responsible for encrypting data before it is stored in the database and for decrypting data when it is retrieved from the database (see the second sketch after this list).
- Encryption keys – customers own their own encryption keys. AWS does not have access to the encryption keys and cannot decrypt files from S3 or metadata from RDS.
- Network access – network ACLs are used to limit inbound and outbound traffic to your subnets. The principle of least privilege is used when configuring the ACLs.
- Firewalls – VPC security groups are used to limit inbound and outbound traffic to your EC2 instances. The principle of least privilege is used when configuring the security groups (see the third sketch after this list).
- AWS account segmentation – by placing your infrastructure in a separate AWS account from all other customers, there is immediate segmentation between customers. This removes the possibility of an incorrect security setting allowing one customer's infrastructure to inappropriately access another customer's infrastructure.
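To make the streaming encrypt-before-write step concrete, here is a minimal sketch in Java. It is illustrative only: the class and method names, the choice of AES-GCM as the AES-256 mode, and the key handling are assumptions rather than HostedFTP's actual implementation. The point it demonstrates is that every chunk of the upload is encrypted in memory, so only ciphertext is ever written to the EBS staging volume.

```java
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.SecureRandom;

public final class StreamingFileEncryptor {

    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    /**
     * Encrypts the upload stream chunk by chunk as it arrives and writes only
     * ciphertext to the staging volume (EBS). Plaintext never touches disk.
     * AES-GCM is an assumed mode for this sketch; the 256-bit key is supplied
     * by the caller (customer-owned key material).
     */
    public static void encryptToStaging(InputStream upload, OutputStream ebsFile, byte[] aes256Key) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        ebsFile.write(iv);                                   // store the IV alongside the ciphertext

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(aes256Key, "AES"),
                new GCMParameterSpec(GCM_TAG_BITS, iv));

        try (CipherOutputStream out = new CipherOutputStream(ebsFile, cipher)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = upload.read(buf)) != -1) {
                out.write(buf, 0, n);                        // each chunk is encrypted in memory
            }
        }
    }
}
```

Downloading reverses the flow: the same key and IV are used to initialize the cipher in decrypt mode, and decrypted bytes are streamed straight to the end-user connection rather than to disk.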
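Column-level encryption in the application follows the same pattern at a smaller scale: the web application encrypts a value just before the INSERT/UPDATE and decrypts it just after the SELECT, so RDS only ever stores ciphertext. The helper below is a hypothetical sketch (the names, cipher mode, and IV/ciphertext encoding are assumptions), not the production code.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

/**
 * Hypothetical helper for application-level column encryption: values are
 * encrypted before being written to the database and decrypted after being
 * read back, so the database only ever stores ciphertext.
 */
public final class ColumnCrypto {

    private final SecretKeySpec key;                  // customer-owned AES-256 key
    private final SecureRandom random = new SecureRandom();

    public ColumnCrypto(byte[] aes256Key) {
        this.key = new SecretKeySpec(aes256Key, "AES");
    }

    public String encrypt(String plaintext) throws Exception {
        byte[] iv = new byte[12];
        random.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        byte[] ivAndCt = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, ivAndCt, 0, iv.length);
        System.arraycopy(ct, 0, ivAndCt, iv.length, ct.length);
        return Base64.getEncoder().encodeToString(ivAndCt);   // this string goes into the column
    }

    public String decrypt(String storedValue) throws Exception {
        byte[] ivAndCt = Base64.getDecoder().decode(storedValue);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, ivAndCt, 0, 12));
        byte[] pt = cipher.doFinal(ivAndCt, 12, ivAndCt.length - 12);
        return new String(pt, StandardCharsets.UTF_8);
    }
}
```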
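Least privilege at the network layer simply means that only the ports the service actually needs are opened, and everything else stays closed by default. As an illustration only, the sketch below uses the AWS SDK for Java v2 to add two narrowly scoped ingress rules to a security group; the group ID, ports, and CIDR ranges are placeholders, and in practice such rules may just as well be managed through the console or infrastructure-as-code rather than application code.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.AuthorizeSecurityGroupIngressRequest;
import software.amazon.awssdk.services.ec2.model.IpPermission;
import software.amazon.awssdk.services.ec2.model.IpRange;

public final class LeastPrivilegeIngress {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // Allow only the protocols the file-transfer service needs.
            IpPermission https = IpPermission.builder()
                    .ipProtocol("tcp").fromPort(443).toPort(443)
                    .ipRanges(IpRange.builder().cidrIp("0.0.0.0/0").description("HTTPS").build())
                    .build();
            IpPermission sftp = IpPermission.builder()
                    .ipProtocol("tcp").fromPort(22).toPort(22)
                    .ipRanges(IpRange.builder().cidrIp("0.0.0.0/0").description("SFTP over SSH").build())
                    .build();

            ec2.authorizeSecurityGroupIngress(AuthorizeSecurityGroupIngressRequest.builder()
                    .groupId("sg-0123456789abcdef0")          // hypothetical security group ID
                    .ipPermissions(https, sftp)
                    .build());
        }
    }
}
```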
Data Management – How data is stored, protected, archived, and backed up, and how CRUD operations are managed
As part of our security implementation, there are two primary types of data:
- Files – stored in S3
- File and user metadata – stored in RDS
- The files themselves are encrypted with AES 256-bit encryption before being written to any EBS storage volume and before being uploaded to S3. The metadata is also encrypted with AES 256-bit encryption before being written to the database.
- Access to S3 is limited via S3 bucket policies and Identity and Access Management (IAM) role permissions. The S3 bucket itself is given a bucket policy that restricts access to only the elastic/static IP address(es) assigned to your EC2 server(s). Further, these EC2 servers are configured to run under a single IAM role. This IAM role provides read and write access to the S3 bucket containing your files, but no list or delete permissions. Combining these two security features ensures that only your authorized EC2 servers running on known static IP addresses are able to access the files in your S3 bucket (see the S3 access sketch after this list).
- Access to RDS is limited by a firewall/security group. Only the EC2 servers running the Enterprise Java web application on static IP addresses are permitted through the database’s firewall. Further, the MySQL connection is protected by a strong password (see the connection sketch after this list).
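The effect of the IAM role on S3 access can be sketched as follows. This is an assumption-laden illustration (the bucket name, object keys, and file paths are hypothetical): the application never embeds access keys, because the AWS SDK obtains temporary credentials from the EC2 instance's IAM role, and that role is scoped to reading and writing objects only.

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import java.nio.file.Path;
import java.nio.file.Paths;

public final class BucketAccess {
    public static void main(String[] args) {
        // No access keys appear in code or configuration: on EC2 the SDK picks up
        // temporary credentials for the instance's IAM role automatically, and that
        // role grants only read and write access to the customer's bucket.
        try (S3Client s3 = S3Client.create()) {
            Path encryptedFile = Paths.get("/mnt/ebs/staging/file-1234.enc");   // hypothetical path

            s3.putObject(PutObjectRequest.builder()
                            .bucket("customer-files-bucket")                    // hypothetical bucket name
                            .key("account-42/file-1234.enc")
                            .build(),
                    RequestBody.fromFile(encryptedFile));

            s3.getObject(GetObjectRequest.builder()
                            .bucket("customer-files-bucket")
                            .key("account-42/file-1234.enc")
                            .build(),
                    Paths.get("/mnt/ebs/staging/file-1234.download.enc"));

            // A ListObjects or DeleteObject call here would fail with AccessDenied,
            // because the role deliberately omits those permissions.
        }
    }
}
```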
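For the metadata database, the application simply opens a standard JDBC connection with credentials supplied through configuration; the security group, not the code, is what restricts which hosts can reach the database at all. A minimal sketch (the environment variable names are assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public final class MetadataDb {
    public static Connection open() throws Exception {
        // The RDS security group only accepts connections from the application
        // servers' addresses; the strong password is a second, independent control.
        String url = System.getenv("RDS_JDBC_URL");       // e.g. jdbc:mysql://<rds-endpoint>:3306/<schema>
        String user = System.getenv("RDS_USER");
        String password = System.getenv("RDS_PASSWORD");  // strong, generated password from configuration
        return DriverManager.getConnection(url, user, password);
    }
}
```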
From https://aws.amazon.com/s3/:
“Amazon S3 runs on the world’s largest global cloud infrastructure, and was built from the ground up to deliver a customer promise of 99.999999999% of durability. Data is automatically distributed across a minimum of three physical facilities that are geographically separated by at least 10 kilometers within an AWS Region, and Amazon S3 can also automatically replicate data to any other AWS Region.”
We pass along the S3 guarantees of durability and reliability as stated above. In addition, we store all of your files in a second S3 bucket that belongs to a separate AWS account and is created in a different AWS region.
For RDS we use automated backups, database snapshots, and Multi-AZ deployments to provide the highest levels of availability and durability. Please refer to https://aws.amazon.com/rds/details/ for more information on these features.