Amazon S3

Cloud-based object storage service

From Wikipedia, the free encyclopedia

Amazon Simple Storage Service (S3) is a service offered by Amazon Web Services (AWS) that provides object storage through a web service interface.[1][2] Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its e-commerce network.[3] Amazon S3 can store any type of object, which allows uses like storage for Internet applications, backups, disaster recovery, data archives, data lakes for analytics, and hybrid cloud storage. AWS launched Amazon S3 in the United States on March 14, 2006,[1][4] then in Europe in November 2007.[5]

Amazon S3
Type of site: Cloud storage
Available in: English
Owner: Amazon.com
URL: aws.amazon.com/s3/
IPv6 support: Yes
Commercial: Yes
Registration: Required (included in free tier)
Launched: March 14, 2006
Current status: Active

Technical details

Design

Amazon S3 manages data with an object storage architecture[6] which aims to provide scalability, high availability, and low latency with high durability.[3] The basic storage units of Amazon S3 are objects, which are organized into buckets. Each object is identified by a unique, user-assigned key.[7] Buckets can be managed through the Amazon S3 console, programmatically with the AWS SDK, or via the REST application programming interface. Objects can be up to five terabytes in size.[8][9]

Requests are authorized using an access control list associated with each object and bucket, and buckets support versioning,[10] which is disabled by default.[11] Since buckets are typically the size of an entire file system mount in other systems, this access control scheme is very coarse-grained: unique access controls cannot be associated with individual files.[citation needed]

Amazon S3 can be used to replace static web-hosting infrastructure with HTTP client-accessible objects,[12] index document support, and error document support.[13] The Amazon AWS authentication mechanism allows the creation of authenticated URLs that are valid for a specified amount of time.

Every item in a bucket can also be served as a BitTorrent feed. The Amazon S3 store can act as a seed host for a torrent, and any BitTorrent client can retrieve the file. This can drastically reduce the bandwidth cost of downloading popular objects. A bucket can also be configured to save HTTP log information to a sibling bucket; this can be used in data mining operations.[14]

There are various Filesystem in Userspace (FUSE)–based file systems for Unix-like operating systems (for example, Linux) that can be used to mount an S3 bucket as a file system. The semantics of the Amazon S3 file system are not those of a POSIX file system, so the file system may not behave entirely as expected.[15]
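The bucket-and-key model and the bucket-level access control described above can be sketched with a minimal in-memory model (illustrative Python, not the AWS SDK; all names are hypothetical):

```python
# Illustrative sketch, not the AWS SDK: a minimal in-memory model of S3's
# object storage concepts -- buckets holding objects under user-assigned
# keys, with access control attached at the bucket level.

class Bucket:
    def __init__(self, name, readers=None):
        self.name = name
        self.objects = {}                   # key -> bytes; keys are user-assigned
        self.readers = set(readers or [])   # bucket-level ACL, coarse-grained

    def put_object(self, key, data):
        self.objects[key] = data

    def get_object(self, key, principal):
        # Access is checked per bucket, not per object: every object in the
        # bucket is visible to any principal in the bucket's reader set.
        if principal not in self.readers:
            raise PermissionError(f"{principal} may not read bucket {self.name}")
        return self.objects[key]

logs = Bucket("access-logs", readers={"alice"})
logs.put_object("2024/01/01/requests.log", b"GET /index.html 200")
print(logs.get_object("2024/01/01/requests.log", "alice"))
```

Any principal in the bucket's reader set can fetch every key in the bucket, which is the coarse-grained behavior the paragraph describes.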

Amazon S3 storage classes

Amazon S3 offers nine different storage classes designed for different durability, availability, and performance requirements.[16]

  • Amazon S3 Standard is the default. It is general purpose storage for frequently accessed data.
  • Amazon S3 Express One Zone is a single-digit millisecond latency storage for frequently accessed data and latency-sensitive applications. It stores data only in one availability zone.[17]
  • Amazon S3 Standard-Infrequent Access (Standard-IA) is designed for less frequently accessed data, such as backups and disaster recovery data.
  • Amazon S3 One Zone-Infrequent Access (One Zone-IA) performs like the Standard-IA, but stores data only in one availability zone.
  • Amazon S3 Intelligent-Tiering moves objects automatically to a more cost-efficient storage class.
  • Amazon S3 on Outposts brings storage to installations not hosted by Amazon.
  • Amazon S3 Glacier Instant Retrieval is a low-cost storage for rarely accessed data, but which still requires rapid retrieval.
  • Amazon S3 Glacier Flexible Retrieval is also a low-cost option for long-lived data; it offers 3 retrieval speeds, ranging from minutes to hours.
  • Amazon S3 Glacier Deep Archive is the lowest cost storage for long-lived archive data that is accessed less than once per year and is retrieved asynchronously.

The Amazon S3 Glacier storage classes above are distinct from Amazon Glacier, which is a separate product with its own APIs.
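When uploading through the S3 API, a storage class is named per object. The helper below is a hypothetical heuristic, not an AWS API; it maps an access pattern to one of the class identifiers used by the S3 API, and its thresholds are illustrative assumptions, not AWS pricing guidance:

```python
# Hypothetical helper: pick an S3 storage class string from the expected
# access pattern, following the class descriptions above. The thresholds
# are illustrative assumptions.

def suggest_storage_class(accesses_per_year, needs_instant_retrieval=True):
    if accesses_per_year >= 12:       # roughly monthly or more: frequent access
        return "STANDARD"
    if accesses_per_year >= 1:        # infrequent but recurring access
        return "STANDARD_IA"
    if needs_instant_retrieval:       # rare access, but retrieval must be fast
        return "GLACIER_IR"
    return "DEEP_ARCHIVE"             # accessed less than once per year

print(suggest_storage_class(52))                                   # frequent
print(suggest_storage_class(0.5, needs_instant_retrieval=False))   # cold archive
```

A real decision would also weigh One Zone variants (lower cost, single availability zone) and Intelligent-Tiering, which makes this choice automatically per object.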

File size limits

An object in S3 can be between 0 bytes and 5 TB in size; a dataset larger than 5 TB must be split across multiple objects before uploading. A single upload (PUT) operation accepts at most 5 GB, so objects larger than 5 GB must be uploaded via the S3 multipart upload API.[18]
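The client-side decision these limits impose can be sketched as follows (the helper name and the 100 MB part size are illustrative; the two limits are the ones stated above):

```python
# Sketch of the upload decision described above, using S3's stated limits:
# a single PUT accepts at most 5 GB, and an object can be at most 5 TB.

SINGLE_PUT_LIMIT = 5 * 1024**3    # 5 GB
MAX_OBJECT_SIZE = 5 * 1024**4     # 5 TB

def plan_upload(object_size, part_size=100 * 1024**2):
    if object_size > MAX_OBJECT_SIZE:
        raise ValueError("objects cannot exceed 5 TB; split into multiple objects")
    if object_size <= SINGLE_PUT_LIMIT:
        return ["single PUT"]
    # Multipart upload: the object is sent as numbered parts and assembled
    # server-side when the upload completes.
    n_parts = -(-object_size // part_size)    # ceiling division
    return [f"part {i + 1}" for i in range(n_parts)]

print(plan_upload(1 * 1024**3))        # a 1 GB object fits in one PUT
print(len(plan_upload(6 * 1024**3)))   # a 6 GB object needs 62 parts of 100 MB
```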

Uses

Notable users

  • Photo hosting service SmugMug has used Amazon S3 since April 2006. It experienced a number of initial outages and slowdowns, but after one year described S3 as "considerably more reliable than our own internal storage" and claimed to have saved almost $1 million in storage costs.[19]
  • Netflix uses Amazon S3 as its system of record. Netflix implemented a tool, S3mper,[20] to address Amazon S3's eventual-consistency limitations.[21] S3mper stores the filesystem metadata (filenames, directory structure, and permissions) in Amazon DynamoDB.[22]
  • Reddit is hosted on Amazon S3.[23]
  • Bitcasa,[24] and Tahoe-LAFS-on-S3,[25] among others, use Amazon S3 for online backup and synchronization services. In 2016, Dropbox stopped using Amazon S3 services and developed its own cloud server.[26][27]
  • Swiftype's CEO has mentioned that the company uses Amazon S3.[28]

S3 API and competing services

The broad adoption of Amazon S3 and related tooling has given rise to competing services based on the S3 API. These services use the standard programming interface but are differentiated by their underlying technologies and business models.[29] A standard interface enables better competition from rival providers and allows economies of scale in implementation, among other benefits.[30]
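The compatibility described above is visible at the HTTP level: an S3-style request has the same verb and path shape against any provider, and switching providers amounts to changing the endpoint. A minimal sketch using path-style addressing (the hosts below are format examples, not live endpoints):

```python
# Illustration of S3 API compatibility: the request shape is identical
# across providers; only the endpoint host differs. Hosts are examples
# of the path-style addressing convention, not live endpoints.

def object_url(endpoint, bucket, key):
    return f"https://{endpoint}/{bucket}/{key}"   # path-style addressing

for endpoint in ("s3.us-east-1.amazonaws.com",        # Amazon S3 itself
                 "storage.example-competitor.com"):   # hypothetical S3-compatible service
    print("GET", object_url(endpoint, "my-bucket", "reports/2024.csv"))
```

In practice, S3-compatible services are used through the standard SDKs by overriding the endpoint, so existing tooling works unchanged.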

History

[Image: At AWS Summit 2013 NYC, CTO Werner Vogels announces 2 trillion objects stored in S3.]

Amazon Web Services introduced Amazon S3 in 2006.[31][32]

Date              Objects stored
October 2007      10 billion[33]
January 2008      14 billion[33]
October 2008      29 billion[34]
March 2009        52 billion[35]
August 2009       64 billion[36]
March 2010        102 billion[37]
April 2013        2 trillion[38]
March 2021        100 trillion[39]
March 2023        280 trillion[40]
November 2024     400 trillion[40]

In November 2017, AWS added default encryption capabilities at the bucket level.[41]

Limitations of Service Level Agreement

Amazon S3 provides a durability guarantee of 99.999999999% (referred to as "11 nines"), which primarily addresses data loss from hardware failures. This guarantee does not extend to losses resulting from human error (such as accidental deletion), misconfiguration, third-party failures and subsequent data corruption, natural disasters, force majeure events, or security breaches. Customers are responsible for monitoring SLA compliance and must submit claims for any unmet SLAs within a designated timeframe. They should also understand how deviations from SLAs are calculated, since the percentages and conditions can differ from those of other AWS services; these requirements can impose a significant burden on customers. In cases of data loss due to hardware failure attributable to Amazon, the company does not provide monetary compensation; instead, eligible users may receive service credits.[42][43][44][45][46]
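To put the "11 nines" figure in perspective, one can treat it as an annual per-object loss probability (a simplifying assumption; AWS states it as a designed-for durability figure, not an SLA formula). The expected number of lost objects then scales linearly with object count:

```python
# Back-of-the-envelope reading of "11 nines" durability, under the
# simplifying assumption that it is an annual per-object loss probability.

annual_loss_probability = 1 - 0.99999999999    # i.e. about 1e-11

for n_objects in (10_000_000, 10_000_000_000):
    expected_losses = n_objects * annual_loss_probability
    print(f"{n_objects:>14,} objects -> ~{expected_losses:.4f} expected losses/year")
```

Even at ten billion objects, hardware-driven loss is expected to claim roughly one object per decade; the exclusions listed above (deletion, misconfiguration, breaches) dominate real-world risk.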

See also

References
