articles/cosmos-db/migrate-containers-partitioned-to-nonpartitioned.md — 2 additions & 2 deletions

@@ -39,7 +39,7 @@ To support the migration, Azure Cosmos DB provides a system defined partition ke
 
 After the container is migrated, you can create documents by populating the `_partitionKey` property along with the other properties of the document. The `_partitionKey` property represents the partition key of your documents.
 
-Choosing the right partition key is important to utilize the provisioned throughput optimally. For more information, see [how to choose a partition key](partitioning-overview.md) article.
+Choosing the right partition key is important to utilize the provisioned throughput optimally. For more information, see [how to choose a partition key](partitioning.md) article.
 
 > [!NOTE]
 > You can take advantage of system defined partition key only if you are using the latest/V3 version of SDKs in all the languages.
@@ -179,7 +179,7 @@ If new items are inserted with different values for the partition key, querying
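As background for the `_partitionKey` lines in the diff above, here is a minimal sketch of the document shape for a migrated container. The property name `_partitionKey` comes from the article; the document fields and values are hypothetical, for illustration only.

```python
# A document written to a migrated container: the system-defined partition
# key is supplied in the `_partitionKey` property alongside the document's
# regular properties. Field names and values below are illustrative.
document = {
    "id": "order-1001",
    "customerId": "contoso-42",
    "total": 129.95,
    "_partitionKey": "contoso-42",  # partition key value for this item
}

# Point reads and other per-item operations must supply the same value:
partition_key_value = document["_partitionKey"]
```

Choosing a high-cardinality property (here, a hypothetical `customerId`) as the `_partitionKey` value is what lets throughput spread evenly, per the linked partitioning article.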
articles/cosmos-db/migrate-hbase.md — 2 additions & 2 deletions

@@ -54,7 +54,7 @@ The key differences between the data structure of Azure Cosmos DB and HBase are
 
 * In HBase, data is stored by [RowKey](https://hbase.apache.org/book.html#rowkey.design) and horizontally partitioned into regions by the range of RowKey specified during the table creation.
 
-* Azure Cosmos DB on the other side distributes data into partitions based on the hash value of a specified [Partition key](partitioning-overview.md).
+* Azure Cosmos DB on the other side distributes data into partitions based on the hash value of a specified [Partition key](partitioning.md).
 
 **Column family**
 
@@ -129,7 +129,7 @@ You can also use [autoscaling provisioning throughput](provision-throughput-auto
 HBase sorts data according to RowKey. The data is then partitioned into regions and stored in RegionServers. The automatic partitioning divides regions horizontally according to the partitioning policy. This is controlled by the value assigned to HBase parameter `hbase.hregion.max.filesize` (default value is 10 GB). A row in HBase with a given RowKey always belongs to one region. In addition, the data is separated on disk for each column family. This enables filtering at the time of reading and isolation of I/O on HFile.
 
 **Azure Cosmos DB**
-Azure Cosmos DB uses [partitioning](partitioning-overview.md) to scale individual containers. For comprehensive details about logical partitions, physical partitions, and partition key selection, see the [partitioning overview](partitioning-overview.md).
+Azure Cosmos DB uses [partitioning](partitioning.md) to scale individual containers. For comprehensive details about logical partitions, physical partitions, and partition key selection, see the [partitioning overview](partitioning.md).
articles/cosmos-db/model-partition-example.md — 2 additions & 2 deletions

@@ -15,7 +15,7 @@ appliesto:
 
 # How to model and partition data using a real-world example
 
-This article builds on several Azure Cosmos DB concepts like [data modeling](modeling-data.md), [partitioning](partitioning-overview.md), and [provisioned throughput](request-units.md) to demonstrate how to tackle a real-world data design exercise.
+This article builds on several Azure Cosmos DB concepts like [data modeling](modeling-data.md), [partitioning](partitioning.md), and [provisioned throughput](request-units.md) to demonstrate how to tackle a real-world data design exercise.
 
 If you usually work with relational databases, you've probably developed habits for designing data models. Because of the specific constraints, but also the unique strengths of Azure Cosmos DB, most of these best practices don't translate well and might drag you into suboptimal solutions. The goal of this article is to guide you through the complete process of modeling a real-world use case on Azure Cosmos DB, from item modeling to entity colocation and container partitioning.
 
@@ -595,5 +595,5 @@ The change feed that we use to distribute updates to other containers store all
 After this introduction to practical data modeling and partitioning, you might want to check the following articles to review the concepts:
 
 - [Databases, containers, and items in Azure Cosmos DB](resource-model.md)
-- [Partitioning and horizontal scaling in Azure Cosmos DB](partitioning-overview.md)
+- [Partitioning and horizontal scaling in Azure Cosmos DB](partitioning.md)
articles/cosmos-db/modeling-data.md — 2 additions & 2 deletions

@@ -403,7 +403,7 @@ The ability to have a model with precalculated fields is made possible because A
 
 ## Distinguish between different item types
 
-In some scenarios, you might want to mix different item types in the same collection; this design choice is usually the case when you want multiple, related documents to sit in the same [partition](partitioning-overview.md). For example, you could put both books and book reviews in the same collection and partition it by `bookId`. In such a situation, you usually want to add a field to your documents that identifies their type to differentiate them.
+In some scenarios, you might want to mix different item types in the same collection; this design choice is usually the case when you want multiple, related documents to sit in the same [partition](partitioning.md). For example, you could put both books and book reviews in the same collection and partition it by `bookId`. In such a situation, you usually want to add a field to your documents that identifies their type to differentiate them.
 
 ```json
 Book documents:
@@ -536,6 +536,6 @@ Just as there's no single way to represent a piece of data on a screen, there's
 
 ## Related content
 
-- [Partitioning data in Azure Cosmos DB](partitioning-overview.md)
+- [Partitioning data in Azure Cosmos DB](partitioning.md)
 - [Data modeling and partitioning - a real-world example](model-partition-example.md)
 - [Model and partition your data in Azure Cosmos DB](/training/modules/model-partition-data-azure-cosmos-db/)
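The "Distinguish between different item types" change above describes mixing books and book reviews in one collection, partitioned by `bookId`, with a type discriminator field. A minimal sketch of that pattern (the `type` field name and the item values are illustrative, following the article's book/review example):

```python
# Books and their reviews share a collection and a partition key (bookId),
# so a book and its reviews sit in the same logical partition.
# A "type" field distinguishes the item kinds at query time.
items = [
    {"id": "b1", "type": "book", "bookId": "b1", "title": "Azure in Action"},
    {"id": "r1", "type": "review", "bookId": "b1", "rating": 5},
    {"id": "r2", "type": "review", "bookId": "b1", "rating": 4},
]

# Filtering on the discriminator recovers one item type:
reviews = [item for item in items if item["type"] == "review"]
```

In an actual container this filter would be a query predicate (e.g. `WHERE c.type = 'review'`), scoped to a single partition because all of a book's items share one `bookId`.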
articles/cosmos-db/optimize-cost-throughput.md — 1 addition & 1 deletion

@@ -28,7 +28,7 @@ You can provision throughput on databases or containers and each strategy can he
 
 * If you provision throughput on a database, all the containers, for example collections/tables/graphs within that database can share the throughput based on the load. Throughput reserved at the database level is shared unevenly, depending on the workload on a specific set of containers.
 
-* If you provision throughput on a container, the throughput is guaranteed for that container, backed by the SLA. The choice of a logical partition key is crucial for even distribution of load across all the logical partitions of a container. See [Partitioning](partitioning-overview.md) and [horizontal scaling](partitioning-overview.md) articles for more details.
+* If you provision throughput on a container, the throughput is guaranteed for that container, backed by the SLA. The choice of a logical partition key is crucial for even distribution of load across all the logical partitions of a container. See [Partitioning](partitioning.md) and [horizontal scaling](partitioning.md) articles for more details.
 
 The following are some guidelines to decide on a provisioned throughput strategy:
articles/cosmos-db/partitioning.md — 4 additions & 4 deletions

@@ -67,19 +67,19 @@ If you assign a throughput of 18,000 request units per second (RU/s), each of th
 
 ## Managing logical partitions
 
-Azure Cosmos DB automatically manages the placement of logical partitions on physical partitions to meet the scalability and performance needs of the container. When the throughput and storage requirements of an application increase, Azure Cosmos DB moves logical partitions to spread the load across more physical partitions. Learn more about [physical partitions](partitioning-overview.md#physical-partitions).
+Azure Cosmos DB automatically manages the placement of logical partitions on physical partitions to meet the scalability and performance needs of the container. When the throughput and storage requirements of an application increase, Azure Cosmos DB moves logical partitions to spread the load across more physical partitions. Learn more about [physical partitions](partitioning.md#physical-partitions).
 
 Azure Cosmos DB uses hash-based partitioning to distribute logical partitions across physical partitions. Azure Cosmos DB hashes the partition key value of an item. The hashed result determines the logical partition. Then, Azure Cosmos DB allocates the key space of partition key hashes evenly across the physical partitions.
 
 Transactions in stored procedures or triggers are allowed only for items in a single logical partition.
 
 ## Replica sets
 
-Each physical partition consists of a set of replicas, also called a [*replica set*](global-dist-under-the-hood.md). Each replica hosts an instance of the database engine. A replica set makes the data store within the physical partition durable, highly available, and consistent. Each replica in the physical partition inherits the partition's storage quota. All replicas of a physical partition collectively support the throughput allocated to that physical partition. Azure Cosmos DB automatically manages replica sets.
+Each physical partition consists of a set of replicas, also called a [*replica set*](global-distribution.md). Each replica hosts an instance of the database engine. A replica set makes the data store within the physical partition durable, highly available, and consistent. Each replica in the physical partition inherits the partition's storage quota. All replicas of a physical partition collectively support the throughput allocated to that physical partition. Azure Cosmos DB automatically manages replica sets.
 
 Smaller containers usually require a single physical partition, but they still have at least four replicas.
 
-This image shows how logical partitions map to physical partitions distributed globally. [Partition set](global-dist-under-the-hood.md#partition-sets) in the image refers to a group of physical partitions that manage the same logical partition keys across multiple regions:
+This image shows how logical partitions map to physical partitions distributed globally. [Partition set](global-distribution.md#partition-sets) in the image refers to a group of physical partitions that manage the same logical partition keys across multiple regions:
 
 :::image type="content" source="./media/partitioning-overview/logical-partitions.png" alt-text="Diagram that shows Azure Cosmos DB partitioning." border="false":::
 
@@ -165,7 +165,7 @@ The *item ID* is a great partition key choice for the following reasons:
 Consider the following caveats when selecting the *item ID* as the partition key:
 
 * If the *item ID* is the partition key, it becomes a unique identifier for your entire container. You can't create items with duplicate *identifiers*.
-* If you have a read-heavy container with many [physical partitions](partitioning-overview.md#physical-partitions), queries are more efficient if they have an equality filter with the *item ID*.
+* If you have a read-heavy container with many [physical partitions](partitioning.md#physical-partitions), queries are more efficient if they have an equality filter with the *item ID*.
 * Stored procedures or triggers can't target multiple logical partitions.
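The partitioning.md context above states that the key space of partition key hashes is allocated evenly across physical partitions. A toy sketch of that idea, splitting a fixed hash space into equal contiguous ranges (the 32-bit space, the range math, and the function names here are illustrative assumptions, not the service's actual internals):

```python
# Illustrative even split of a hash key space across physical partitions:
# each hash value falls into exactly one contiguous range, so each logical
# partition (one hash value) maps to exactly one physical partition.
HASH_SPACE = 2**32  # assumed toy key space, not Cosmos DB's real one

def physical_partition(hash_value: int, physical_count: int) -> int:
    range_size = HASH_SPACE // physical_count
    # min() keeps the final partial range on the last partition.
    return min(hash_value // range_size, physical_count - 1)
```

With four partitions, hash 0 lands on partition 0 and the top of the space on partition 3; when partitions split, the service reassigns ranges, which is why placement is managed automatically rather than by the application.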