
NoSQL databases comparison: Cosmos DB vs DynamoDB vs Cloud Datastore and Bigtable


Jun 08, 2023 • 22 Minute Read

  • Software Development
  • Cloud
  • AWS
  • Data
  • Azure

Which NoSQL database service is the best?

Which NoSQL DB is the GOAT? See which database service reigns supreme across the clouds: AWS DynamoDB vs. GCP Cloud Datastore and Bigtable vs. Azure Cosmos DB. Read on!

The TL;DR

The landscape of modern application development has changed. Prior to the emergence (and broad usage) of cloud platforms, everything could be thought of in finite terms. There was generally a fixed pool of compute resources available, a limited amount of bandwidth, and a predictable amount of traffic.

All of that has changed.

Vast amounts of compute, storage, and bandwidth can be brought to bear with a few clicks or API calls. The challenge for developers is now about how to manage data — because a modern application stack has a lot of it, and it's always generating more.

NoSQL vs Relational Databases

Relational databases like MySQL, PostgreSQL, and enterprise products from companies like Microsoft and Oracle have long served as the go-to backbone for data storage. However, they have their limitations. Inflexible schemas and notoriously difficult horizontal scaling mean they don't always fit well in a highly scalable and geographically distributed infrastructure stack.

NoSQL database technology was developed as an alternative to the issues a traditional relational database presents at scale. NoSQL databases scale out effectively and are much more flexible in the types of data structures they can contain.

This article will examine the primary NoSQL offerings of Amazon Web Services (AWS), Microsoft Azure, and Google Cloud (GCP).

First, it will provide some additional depth on the technical differences between NoSQL and SQL, along with the potential benefits of choosing NoSQL as a datastore.


Interested in upskilling or beginning your journey with cloud data? A Cloud Guru's AWS Data Learning Paths offer custom courses fit for beginners and advanced gurus alike!


NoSQL Database Pricing Comparison

| Service | GA since | Availability | Pricing | Scaling |
| --- | --- | --- | --- | --- |
| AWS DynamoDB | 2012-01-01 | Generally globally available | DynamoDB Pricing | Quotas and Limits |
| GCP Cloud Datastore / GCP Bigtable | Cloud Datastore: 2013-05-01; Bigtable: 2016-05-06 | Datastore: generally globally available; Bigtable: generally globally available | Cloud Datastore Pricing; Bigtable Pricing | Datastore Limits; Bigtable Limits |
| Azure Cosmos DB | 2015-04-08 | Generally globally available | Cosmos DB Pricing | Cosmos DB Service Quotas |

NoSQL for Grownups: DynamoDB Single-Table Modeling w/ Rick Houlihan
In this free on-demand webinar, Rick Houlihan, Sr. Practice Manager at AWS and inventor of single-table DynamoDB design, shows his tricks for modeling complex data access patterns in DynamoDB.


NoSQL Adoption Background

To understand what drives NoSQL adoption, it will be helpful to understand how a relational database works versus a typical NoSQL database, and what some potential limitations might be. As alluded to in the introduction, the primary points of friction are around schema and scaling.

Generally, NoSQL databases are thought of as "document" based — data is stored loosely in a document containing various data structures. In a relational database, data is stored in a table of rows and columns, defined fairly strictly with a schema.


Learn to build scalable, high-performance applications using AWS DynamoDB in this course from ACG.


Schema defines the "what" of a database. In a relational model for a hypothetical database of individuals, it tells users that the first column of data might be "firstName", the second column "lastName", the third column "age" and so on. Each row of data represents an individual person. This schema, or map of the data, defines how data is written to the database.

What if it's decided that a column for "favorite candy" needs to be added? In most cases, costly operational work must be undertaken to redefine and update the schema, along with any system that stores a copy of it, and the change can require expensive database downtime. In contrast, the document-based model of NoSQL allows applications to change and mutate data with few restrictions.
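To make the contrast concrete, here is a minimal, illustrative Python sketch (the field names are hypothetical): a relational table expects every row to match a fixed schema, while a document store lets an individual record pick up a new field like "favoriteCandy" without a migration.

```python
# Relational-style rows: every record must match the fixed (firstName, lastName, age) schema.
relational_rows = [
    ("John", "Smith", 34),
    ("Karen", "Jackson", 41),
]

# Document-style records: one document can gain a "favoriteCandy" field
# without altering a schema or touching the other documents.
documents = [
    {"firstName": "John", "lastName": "Smith", "age": 34},
    {"firstName": "Karen", "lastName": "Jackson", "age": 41, "favoriteCandy": "toffee"},
]

for doc in documents:
    print(doc.get("favoriteCandy", "n/a"))
```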

Scaling is the second part of the story where NoSQL shines. Relational databases are hard to scale without complex orchestration, and maintaining highly scaled, highly available relational database clusters capable of high write throughput is a non-trivial engineering challenge. The primary NoSQL offerings of AWS, Azure, and GCP are capable of scaling to potentially billions of daily transactions, with the operational overhead of managing the infrastructure handled almost entirely by the provider.

NoSQL Pricing and Service Limits

Pricing for these premier NoSQL offerings depends heavily on how usage metrics are defined. Each provider measures usage according to different metrics, using both explicit and time-based measurements. AWS arguably provides the most intuitive terminology for these metrics, which this article will collectively refer to as "billing units".

Something that is shared by all providers, and is a frequent gotcha for customers: network and bandwidth usage is charged separately from the underlying service. Transferring large amounts of data out of any of these services is likely to incur non-trivial charges on a billing statement, separate from usage.

The primary billing units for AWS are "write request units" and "read request units", each representing a request to write or read data from a DynamoDB table. AWS also charges for additional features, such as backups, global replication, streaming, and network egress.
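As a rough illustration of how those billing units map to API calls, here is a minimal boto3 sketch. It assumes a hypothetical DynamoDB table named "users" with a partition key "userID" and configured AWS credentials; it is not taken from the article's pricing examples.

```python
import boto3

# Assumes a hypothetical table "users" (partition key "userID") already exists.
table = boto3.resource("dynamodb").Table("users")

# A small item write like this consumes write request units (on-demand mode)
# or provisioned WCUs; larger items consume proportionally more.
table.put_item(Item={"userID": "jsmith", "firstName": "John", "lastName": "Smith"})

# A read consumes read request units / RCUs; item size and read consistency
# determine how many.
response = table.get_item(Key={"userID": "jsmith"})
print(response.get("Item"))
```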

GCP's two primary offerings in the NoSQL space are Bigtable and Firestore in Datastore mode. The Datastore offering is a traditional, document-based NoSQL service that structures billing units around usage: entity reads, writes, and deletes are billed as separate operations, along with data storage, small operations, and network egress. At the time of writing, Google is planning to move users of Datastore to Firestore in Datastore mode, where the usage billing model treats objects as "documents". This page provides more detail on the differences between the two. Datastore also comes with a monthly quota of free usage. Bigtable, in contrast, structures billing units around hourly usage of the underlying nodes in a Bigtable cluster, as well as storage, backups, and network egress. It is also important to note that while usage is based on provisioned capacity, quotas and limits are still enforced on Bigtable instances.
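For comparison, here is a minimal sketch of Firestore in Datastore mode using the google-cloud-datastore Python client, where each put and lookup is billed as an individual entity/document operation. The "Task" kind and its fields are hypothetical, and the sketch assumes an authenticated GCP project with Datastore mode enabled.

```python
from google.cloud import datastore

client = datastore.Client()

# Each put() is billed as one entity (document) write.
entity = datastore.Entity(key=client.key("Task"))
entity.update({"description": "compare NoSQL pricing", "done": False})
client.put(entity)

# Each lookup is billed as one entity (document) read.
fetched = client.get(entity.key)
print(fetched)
```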

Azure Cosmos DB provides two billing models for customers to choose from: Provisioned Throughput and Serverless. (See our post on Azure Cosmos DB APIs and their use cases and trade-offs for a deeper dive on Cosmos DB in practice.)

Provisioned Throughput measures usage in request units per second, and bills per hour. Customers can pre-pay for provisioned capacity on annual and multi-year terms as well. In addition, storage and network egress are billed.

In the serverless model, customers pay only for the usage they actually incur, which is a better fit for bursty or inconsistent workloads.
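The difference shows up at resource-creation time. Below is a minimal sketch with the azure-cosmos Python SDK; the endpoint, key, and database/container names are placeholders. In a provisioned-throughput account, the container is created with an RU/s budget that is billed per hour regardless of use, whereas a serverless account would omit the throughput setting and simply bill the RUs each request consumes.

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholders: substitute a real account endpoint and key.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists("demo-db")

# Provisioned throughput: a fixed 400 RU/s budget, billed hourly whether used or not.
container = database.create_container_if_not_exists(
    id="items",
    partition_key=PartitionKey(path="/userID"),
    offer_throughput=400,  # omit in a serverless account
)

# Each request consumes RUs; in serverless mode, only these consumed RUs are billed.
container.upsert_item({"id": "1", "userID": "jsmith", "firstName": "John"})
```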


Get a deep dive on Azure Cosmos DB in this 9-hour course from A Cloud Guru.


Large-scale database deployments of any kind, relational or NoSQL, can quickly incur sizable costs for any organization. One of the advantages of NoSQL should already be apparent: each of the three providers offers "on-demand", usage-based billing. Customers incur charges only for actual usage, rather than paying for provisioned capacity that may sit idle.

The following table highlights some of the more granular differences in pricing and quotas between the big three:

| Service* | Billing Units | Cost** |
| --- | --- | --- |
| Writes (Provisioned)*** | | |
| AWS DynamoDB | Write Capacity Unit (WCU) | 25 WCUs/month free; $0.00065 per WCU/hour |
| GCP Bigtable | Node hour | $0.65/hour per node; usage subject to quotas/limits |
| Azure Cosmos DB (single-region write account) | Request Units (RU) | 400 RU/s/month free; Autoscale: 100 RU/s x 1.5 x 1 region - $0.008/hour; Manual: 100 RU/s x 1 region - $0.008/hour |
| Writes (On-Demand)*** | | |
| AWS DynamoDB | Write Request Unit | $1.25 per 1 million write request units |
| GCP Firestore in Datastore Mode | Entity/document writes | 20,000 entity writes/day free; $0.099 per 100,000 documents |
| Azure Cosmos DB (Serverless) | Request Units (RU) | $0.25 per 1 million RUs |
| Reads (Provisioned)*** | | |
| AWS DynamoDB | Read Capacity Unit (RCU) | 25 RCUs/month free; $0.00013 per RCU/hour |
| GCP Bigtable | Node hour | $0.65/hour per node; usage subject to quotas/limits |
| Azure Cosmos DB (single-region write account) | Request Units (RU) | 400 RU/s/month free; Autoscale: 100 RU/s x 1.5 x 1 region - $0.008/hour; Manual: 100 RU/s x 1 region - $0.008/hour |
| Reads (On-Demand)*** | | |
| AWS DynamoDB | Read Request Unit | $0.25 per 1 million read request units |
| GCP Firestore in Datastore Mode | Entity/document reads | 50,000 entity reads/day free; see Cloud Datastore Pricing for per-100,000 rates |
| Azure Cosmos DB (Serverless) | Request Units (RU) | $0.25 per 1 million RUs |
| Backups*** | | |
| AWS DynamoDB | Gigabyte (GB) | Continuous backup (PITR): $0.20 per GB/month; On-demand backup: $0.10 per GB/month; Restore: $0.15/GB |
| GCP Bigtable | Gigabyte (GB) | $0.029 per GB/month |
| Azure Cosmos DB | Gigabyte (GB) | Periodic backup: 2 copies free, >2 copies $0.12/GB per copy; Continuous backup: $0.20/GB x regions; Point-in-time restore: $0.15/GB |
| Storage | | |
| AWS DynamoDB | Gigabyte (GB) | 25 GB/month free; $0.25 per GB/month |
| GCP Bigtable / GCP Firestore in Datastore Mode | Gigabyte (GB) | Bigtable SSD storage: $0.19 per GB/month; Bigtable HDD storage: $0.029 per GB/month; Datastore mode: $0.099 per GB/month |
| Azure Cosmos DB | Gigabyte (GB) | Transactional storage (row-oriented): $0.25 per GB/month; Analytical storage (column-oriented): $0.03 per GB/month |
| Replication (Provisioned) | | |
| AWS DynamoDB | Replicated Write Capacity Unit (rWCU) | $0.000975 per rWCU/hour |
| GCP Bigtable | Gigabyte (GB) | Subject to internet egress rates |
| Azure Cosmos DB | Request Units (RU) | Autoscale, single-region write account with data distributed across multiple regions (with or without availability zones): 100 RU/s x 1.5 x N regions - $0.008/hour; Autoscale, multi-region write (formerly "multi-master") account distributed across multiple regions: 100 RU/s x N regions - $0.016/hour; Standard, single-region write account distributed across N regions (excluding availability zones): 100 RU/s x N regions - $0.008/hour; Standard, single-region write account with regions using availability zones: 100 RU/s x 1.25 x N zones - $0.008/hour; Standard, multi-region write account with N regions (with or without availability zones): 100 RU/s x N regions - $0.016/hour |
| Replication (On-Demand) | | |
| AWS DynamoDB | Replicated Write Request Unit | $1.875 per 1 million replicated write request units |
| GCP Firestore in Datastore Mode | Document actions | Multiplier of the single-region price |
| Azure Cosmos DB (Serverless) | Request Units (RU) | 1.25 x N regions x $0.25 per 1 million RUs |

* This table does not comprehensively list the billed features of every provider. It primarily aims to provide a high-level comparison of common features.

** Cost can vary by region for each provider; see the pricing links provided earlier in the article for region-specific rates. US East Coast/North America pricing is used here unless otherwise specified.

*** GCP Bigtable offers only provisioned usage (and supports backups), while Firestore in Datastore mode offers only on-demand usage billing and has no backup feature.


Cloud Dictionary

Get the Cloud Dictionary of Pain
Speaking cloud doesn’t have to be hard. We analyzed millions of responses to ID the top concepts that trip people up. Grab this cloud guide for succinct definitions of some of the most painful cloud terms.


Pricing between the three providers, despite differences in billing unit nomenclature, has a fair amount of parity. The growing popularity of and demand for NoSQL database services has resulted in a competitive marketplace, both from a features and a pricing perspective. Each provider is looking to take market share not only from competitors but also from third-party solutions like MongoDB and Hadoop.

(If you're fluent in AWS speak and want to understand the basics of Azure, check out our AWS user's guide to Microsoft Azure.)

The next section will compare and contrast the features, performance, and potential use cases of each offering.

AWS, Azure and GCP NoSQL Feature comparisons

Evaluating the performance and featureset of a given NoSQL solution is very much dependent on context: What kind of data is being stored? How is that data structured? What kind of performance is needed? Is the workload consistent, or bursty?

All of the technical complexity and nuance of describing complex data storage and its use cases end-to-end could fill several books (and it already has). Instead, this section will try to distill the most important considerations into high-level categories: typical use cases, data models and queries, price efficiency for a hypothetical use case, and how each provider handles consistency.

What is NoSQL used for?

As discussed in the opening section, the general use case for NoSQL is where relational databases generally fall short: horizontal scaling, and unstructured data. Application developers are free to utilize programmatic data structures to write and query data, eschewing rigid SQL schema logic. With the managed service solutions offered by the providers, lean development teams can take advantage of highly scalable solutions without the operational overhead of maintaining the infrastructure.

| Service | Use Cases |
| --- | --- |
| AWS DynamoDB | Ad tech, gaming, retail, banking and finance, media and entertainment, software and internet |
| GCP Bigtable | Financial analysis, IoT, ad tech |
| GCP Firestore in Datastore Mode | Application developers that need a highly scalable, easy-to-use NoSQL document database |
| Azure Cosmos DB | Mission-critical applications, hybrid Cassandra workloads, near real-time analytics, real-time IoT device telemetry, real-time retail services |

This table highlights some of the commonality between the various use cases of each service.

NoSQL shines when organizations need highly performant solutions that can handle large amounts of read and write transactions, potentially involving unstructured and mutating data. An important point to consider, however, is that while there is overlap between the general use cases enumerated by the providers, the underlying technical details of each implementation have some key differences.

For instance, while DynamoDB and Bigtable share financial analytics as a use case, how they handle and store data differs. DynamoDB models data as key-value and documents, while Bigtable is a wide column store. For low-throughput use cases, this distinction might ultimately be trivial. When the architecture has to scale to billions of transactions per day, the choice of data model becomes fundamentally and critically important.

NoSQL: Schema, Data and Queries

While NoSQL doesn't have the traditional schema of relational databases, that does not mean engineering teams can expect to avoid planning the implementation. On the contrary, effective usage of NoSQL demands that close attention be paid to the type of data that will be read and written, and perhaps most importantly, the type of queries that downstream users are most likely to run.

Some could argue this is where NoSQL technically enforces a "schema": developers generally need to know the structure of potential queries in advance, or they risk running into performance problems at scale.
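As a concrete (and hypothetical) example of designing around a known query, the sketch below assumes a DynamoDB table named "orders" keyed by a "customerID" partition key and an "orderDate" sort key, so that "all orders for a customer, newest first" stays a single efficient query at any scale.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table: partition key "customerID", sort key "orderDate".
table = boto3.resource("dynamodb").Table("orders")

# Because the access pattern was known up front, this query hits a single
# partition and stays fast regardless of table size.
response = table.query(
    KeyConditionExpression=Key("customerID").eq("jsmith"),
    ScanIndexForward=False,  # newest first, via the sort key
)
print(response["Items"])
```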

| Service | Data model | Supported data types |
| --- | --- | --- |
| AWS DynamoDB | Key-value, document | Scalar: number, string, binary, Boolean, null; Document: list, map; Set: string set, number set, binary set |
| GCP Bigtable | Wide column store | All data generally treated as raw byte strings |
| GCP Firestore in Datastore Mode | Datastore: key-value; Firestore in Datastore mode: document | Array, Boolean, bytes, date/time, floating-point number, geographical point, integer, map, null, reference, text string |
| Azure Cosmos DB | Key-value, document, graph, column | Anything that can be serialized as valid JSON |

To understand how each service might handle potential data and queries, it will be helpful to understand how they model data. The underlying data model of a NoSQL database is a critical part of the planning and design of any meaningful data infrastructure.

Key Value Databases

Key-value databases (sometimes referred to as stores) function by utilizing associative arrays to store data. Most developers are probably familiar with the underlying data structure of an associative array: a dictionary or hash table. Data is stored as pairs of keys and values; keys uniquely identify the data held in values. Values can be simple data types like strings, Booleans, or integers, or they can be complete JSON structures.

A simple example, using a Python dictionary:

{"user1": "jsmith", "user2": "kjackson"}

Querying the key "user1" will return the value "jsmith", "user2" will return "kjackson", and so on. A more complicated example, with the value being a JSON object:

{"user1": {"name": "John Smith", "userID": "jsmith"}, "user2": {"name": "Karen Jackson", "userID": "kjackson"}}

Querying the "user1" key will return the JSON object as the value. It's up to the query author to develop additional logic to parse the fields inside the object.

DynamoDB, Datastore, and Azure Cosmos DB all provide key-value as a potential data model for storing data. Key-value stores are highly performant when the stored data and access patterns are fairly simple, such as caching or storing session data for a website user.

Document Databases

Document databases are what typically come to mind when NoSQL is discussed. In the document model, data exists in a type of key-value model: unique IDs identify values, which are semi-structured documents that are typically modeled after JSON objects. AWS provides a handy example of a hypothetical document that describes a book. Developers typically form queries in a given language, searching for a specific key, and then further parsing the returned document. In contrast to key-value stores, however, document data objects have some structure, and typically provide for metadata associations for various objects and fields in the document.

Firestore in Datastore mode, Azure Cosmos DB, and DynamoDB can all utilize the document model for data storage. Document databases are ideal for data that has little or no structure, or where developers need the flexibility to extend and change the data later. Typical examples are content-management systems and other highly scalable systems that need exceptional write performance.
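Here is a minimal, hypothetical document in the same spirit as the book example linked above, shown as a Python dict; fields can be nested, optional, and different from one document to the next.

```python
# A hypothetical "book" document; another book might omit "formats" entirely
# or add fields this one lacks.
book = {
    "id": "book-101",
    "title": "NoSQL in Practice",
    "authors": ["J. Smith", "K. Jackson"],
    "formats": {
        "hardcover": {"isbn": "978-0-00-000000-0"},
        "ebook": {"sizeMB": 3},
    },
}

# Queries typically fetch the document by its key, then parse nested fields in code.
print(book["formats"]["ebook"]["sizeMB"])
```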

Column Databases

Column databases have been around for some time, following a similar model to the typical relational database, except tables are stored by column, rather than row. Wide-column store databases are actually a separate, NoSQL concept in which the columns can vary in name and format between rows in the same table. Similar data are grouped into columns, with similar columns grouped into "column families". Columns can be partitioned, with single columns occupying an entire disk if needed.

Wide-column data models are supported by GCP Bigtable and Azure Cosmos DB. They are effective when dealing with very large amounts of data in distributed systems, particularly when the datastore is too large to fit on a single machine or cluster. High-volume streaming and analytical data are ideal workloads for wide-column store databases.
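A rough mental model of the wide-column layout, sketched as nested Python dicts (names are hypothetical, and real systems like Bigtable also version every cell with a timestamp): rows are addressed by a row key, columns are grouped into families, and different rows can carry different columns.

```python
wide_column_table = {
    "device#001#2023-06-08T12:00": {                  # row key
        "metrics": {"temp": 21.4, "humidity": 0.43},  # column family "metrics"
        "meta": {"firmware": "1.2.0"},                # column family "meta"
    },
    "device#002#2023-06-08T12:00": {
        "metrics": {"temp": 19.8},                    # rows need not share columns
    },
}

row = wide_column_table["device#001#2023-06-08T12:00"]
print(row["metrics"]["temp"])
```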

What is OLTP and OLAP?

Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) are methods and technology for specific types of data processing and queries that are traditionally associated with relational databases. OLTP is typically applied for small, isolated transactions that occur in large numbers, and require a guarantee of success, such as a banking transaction. OLAP transactions are more complex queries, typically involving joins and aggregations, that produce analytical insights on large datasets.

Historically, NoSQL databases have struggled with providing either OLTP or OLAP. OLTP requires Atomicity, Consistency, Isolation, and Durability (ACID) compliance. Every part of a transaction must succeed; partial success or failure results in the entire transaction being rolled back. NoSQL databases have often eschewed these guarantees in exchange for scaling, availability, and speed.

OLAP demands complex joins and aggregation logic in queries. Due to the generally unstructured nature of NoSQL data objects, joins are difficult to perform, require complex software logic, and multiple expensive queries. Implementing OLAP on NoSQL databases often means bulk exports of data to a secondary ETL platform for aggregation and analysis.

So, do any of the NoSQL services offered by the big three solve these problems? Transactions and analytics are still critical needs for data processing, and being able to combine the performance of NoSQL with either would be a powerful combination.

DynamoDB added transactional capability in 2018, bringing ACID to a NoSQL database. The atomicity and consistency needed for things like financial transactions are now possible in a NoSQL database.
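Below is a minimal sketch of what that looks like with the low-level boto3 client: the two updates either both succeed or both roll back. The "accounts" table, its keys, and the balance check are hypothetical.

```python
import boto3

client = boto3.client("dynamodb")

# Move 100 units from account A-1 to A-2 atomically; if the condition fails
# (insufficient balance), neither update is applied.
client.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "accounts",
                "Key": {"accountID": {"S": "A-1"}},
                "UpdateExpression": "SET balance = balance - :amt",
                "ConditionExpression": "balance >= :amt",
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
        {
            "Update": {
                "TableName": "accounts",
                "Key": {"accountID": {"S": "A-2"}},
                "UpdateExpression": "SET balance = balance + :amt",
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
    ]
)
```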

Conversely, GCP explicitly advises that Bigtable is not suitable for OLTP or OLAP workloads, and GCP Datastore carries the same guidance.

For Cosmos DB, the story is murkier: support for OLAP and OLTP was recently announced. However, it would still require linking different runtimes and products to provide a fully featured experience.


Automating AWS Cost Optimization
AWS provides unprecedented value to your business, but using it cost-effectively can be a challenge. In this free, on-demand webinar, you'll get an overview of AWS cost-optimization tools and strategies like data storage optimization.


NoSQL Price Efficiency Comparison

In a prior section, the basic costs for each service were laid out. While this is helpful data, an apples-to-apples comparison will give potential customers at least a rough idea of how "price efficient" each provider offering is for a given use case.

In this context, consider price efficiency to mean: "providing the lowest end-to-end cost for writing, storing, and querying a given amount of information". Consider the following hypothetical example:

Example Database (1 Month of Usage)

| Storage Size | Mean Object/Entity Size | Reads/second | Writes/second | Ingress/Egress Bandwidth |
| --- | --- | --- | --- | --- |
| 350 GB | 512 bytes | Peak: 500; Non-peak: 125 | Peak: 100; Non-peak: 25 | 53.08 GB / 265.5 GB |

The table above describes the usage for one month in a hypothetical NoSQL database implementation. This is a simplistic example and does not include things like replication, backups, or any proprietary enhancements or secondary services a provider might offer. In an effort to further simplify the example, some additional assumptions will be made:

  • Peak time is "in-effect" for 8 hours/day during weekdays
  • A 30-day month will be used
  • Free-tier usage will not be factored in
  • Total reads for a 30 day month: 518,400,000
  • Total writes for a 30 day month: 103,680,000
  • Entity/document size will always be 512 bytes
  • Storage size is the assumed billed average of total size
  • Region is US East
  • Egress will be intracontinental US

The following tables will utilize provided information from each provider to establish an estimate of the total monthly cost to implement the example database on their service.

DynamoDB - Provisioned

| Storage | Reads | Writes | Bandwidth |
| --- | --- | --- | --- |
| $81.25 | Peak: $10.40; Non-peak: $8.32 | Peak: $10.40; Non-peak: $8.32 | $23.90 |

Total Cost: $142.59/month

DynamoDB - On-demand

| Storage | Reads | Writes | Bandwidth |
| --- | --- | --- | --- |
| $81.25 | $129.60 | $129.60 | $23.90 |
Total Cost: $364.35/month
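As a rough sanity check on the on-demand figures, the read and write line items above follow directly from the workload assumptions and the unit prices listed earlier, assuming each 512-byte operation consumes one request unit:

```python
reads = 518_400_000    # total reads for the 30-day month
writes = 103_680_000   # total writes for the 30-day month

read_cost = reads / 1_000_000 * 0.25    # $0.25 per 1 million read request units
write_cost = writes / 1_000_000 * 1.25  # $1.25 per 1 million write request units

print(f"Reads:  ${read_cost:,.2f}")   # Reads:  $129.60
print(f"Writes: ${write_cost:,.2f}")  # Writes: $129.60
```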

GCP Bigtable

| Storage (SSD) | 6-node cluster | Bandwidth |
| --- | --- | --- |
| $66.50 | $3,110.40 | $31.86 |

Total Cost: $3,208.76/month

GCP Firestore in Datastore Mode

| Storage | Reads | Writes | Bandwidth |
| --- | --- | --- | --- |
| $34.65 | $171.07 | $102.64 | $31.86 |

Total Cost: $340.22/month

Azure Cosmos DB - Autoscale Provisioned

| Storage | Request Units | Bandwidth |
| --- | --- | --- |
| $87.50 | Peak: $28.80; Non-peak: $24.58 | $21.24 |

Total Cost: $162.12/month

Azure Cosmos DB - Manual Provisioned

| Storage | Request Units | Bandwidth |
| --- | --- | --- |
| $87.50 | Peak: $19.20; Non-peak: $16.38 | $21.24 |

Total Cost: $144.32/month

Azure Cosmos DB - Serverless (On-demand)

| Storage | Request Units | Bandwidth |
| --- | --- | --- |
| $87.50 | $155.52 | $21.24 |

Total Cost: $264.26/month

The low cost of the provisioned DynamoDB solution jumps out as the clear winner in the "price efficiency" discussion.

The Cosmos DB manual provisioned option comes in second.

GCP Bigtable comes in last by a large margin. However, its target users typically work with far larger amounts of data, where they would likely realize better cost efficiency.

Consistency and Replication

In a simple interpretation, achieving consistency in a database means that once some arbitrary data is written, it is immediately available to all subsequent reads of that same data.

For small, monolithic architectures like a single relational database node, this is fairly easy to achieve. However, for large-scale, distributed NoSQL architecture, it can be extremely difficult, if not impossible.

What options are there for achieving consistency in NoSQL services, and is it as important as it seems?

Organizations should evaluate the primary use case of their database architecture before spending cycles attempting to implement stronger consistency, or full ACID compliance.

If the database is a backend for a social media platform, strong consistency is probably not very important. A post from one user does not need to immediately be seen by all other users.

Conversely, in sensitive, regulation-driven environments like banking, strong consistency and full ACID compliance are vitally important. Transactions reflecting debits or credits need to be immediately respected by subsequent transactions, or costly accounting errors will occur.

The following table lays out the consistency options available for each service:

| Service | Consistency Options |
| --- | --- |
| AWS DynamoDB | Eventually consistent; Strongly consistent |
| GCP Bigtable | Eventually consistent; Read-your-writes consistent; Strongly consistent |
| GCP Firestore in Datastore Mode | Strongly consistent |
| Azure Cosmos DB | Five defined levels |

All providers offer strong consistency as a configuration option; however, it comes with some caveats. Strong consistency has a multiplicative effect on billed write usage, particularly in the case of multiple clusters, shards, or regions. It can also affect latency and overall database performance, as each write must be performed successfully multiple times before the operation can be considered "complete" and the data ready for subsequent reads. Organizations should pay careful attention to use cases, typical workload shape, and whether the performance trade-off is worth it.
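In DynamoDB, for example, this trade-off is made per read: a strongly consistent read returns the most recent committed write but consumes twice the read capacity of an eventually consistent read. A minimal sketch (the "users" table and key are hypothetical):

```python
import boto3

table = boto3.resource("dynamodb").Table("users")

# Default: eventually consistent, cheaper, may briefly return stale data.
eventual = table.get_item(Key={"userID": "jsmith"})

# Strongly consistent: reflects all prior successful writes, at double the RCU cost.
strong = table.get_item(Key={"userID": "jsmith"}, ConsistentRead=True)
```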

Self-hosting

There are several very capable third-party NoSQL solutions available for self-hosting.

  • For key-value stores, Redis is a very popular solution (see the sketch after this list).
  • MongoDB and CouchDB are well-known and widely used Document databases.
  • For wide-column style databases, Apache offers HBase and Cassandra.
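As a small taste of the self-hosted route, here is a minimal redis-py sketch for the key-value case; it assumes a Redis server reachable on localhost:6379 and a hypothetical session key.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Store a session token that expires after one hour, then read it back.
r.set("session:jsmith", "token-abc123", ex=3600)
print(r.get("session:jsmith"))
```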

Do they offer any potential advantages over the hosted solutions discussed previously?

The primary and obvious advantage of a self-hosted solution is that vendor lock-in is minimized. Any cloud provider that offers vanilla compute instances can serve as a potential platform for self-hosting. Organizations can choose the provider with the best-fitting ecosystem and price, and migrate their existing solution without costly re-training and customization.

Organizations with existing NoSQL deployments may find the cost and complexity of porting to a hosted solution too high, especially if it means giving up accumulated operational knowledge about maintenance and performance tuning.

For organizations looking to deploy their first NoSQL infrastructure, the managed offerings are more attractive. The operational and administrative burden is handled entirely by the provider, which is especially advantageous for leaner engineering teams that lack the resources to deploy and maintain a database cluster.

Factoring in that bandwidth costs will be incurred either way, in most situations it makes sense to take advantage of the existing engineering acumen of the providers and their managed offerings.

The bottom line: Which NoSQL service is the best?

Which NoSQL service is the best? It's a two-part answer.

From the perspective of price efficiency and the strength of the surrounding provider service ecosystem, DynamoDB is the best NoSQL service. Users can take advantage of DynamoDB's performance at very low cost, choosing to provision capacity up front or pay on-demand while they dial in their workloads. Features like strong consistency and transactions can be enabled, offering flexibility for organizations that require some of the stability of relational offerings.

However, that doesn't mean that the other offerings don't merit a strong look.

  • GCP Firestore in Datastore mode is compelling for small teams needing to build quick, scalable MVPs, as it integrates with other GCP-managed offerings like App Engine.
  • GCP Bigtable backs some of Google's biggest services, like Gmail and YouTube, and should merit consideration from teams that need to stream, store, and analyze terabytes or petabytes of data quickly.
  • Azure Cosmos DB brings NoSQL to the more traditional hybrid world of enterprise, as many Azure customers typically integrate cloud and on-premise solutions. Cosmos DB offers broad flexibility in both data models and supported data types, so existing Azure customers have access to a powerful NoSQL solution as needed and can utilize traditional SQL queries against a NoSQL backend.

Like any technology choice, there are always trade-offs, but in most cases the best choice will be DynamoDB.


Get the skills you need for a better career.

Master modern tech skills, get certified, and level up your career. Whether you’re starting out or a seasoned pro, you can learn by doing and advance your career in cloud with ACG.


Check out more NoSQL Database Reads

About the Author

Mike Vanbuskirk is a Lead DevOps engineer and technical content creator. He’s worked with some of the largest cloud, e-commerce, and CDN platforms in the world. His current focus is cloud-first architecture and serverless infrastructure.