From cc64143cb9e691d7668621429a96a167cf022214 Mon Sep 17 00:00:00 2001
From: David Coulter
Date: Wed, 23 Sep 2020 12:53:00 -0700
Subject: [PATCH] Links: .NET - architecture (#20650)

---
 .../app-startup.md | 4 +-
 .../blazor-for-web-forms-developers/index.md | 2 +-
 .../migration.md | 2 +-
 .../security-authentication-authorization.md | 24 ++++-----
 .../cloud-native/application-bundles.md | 4 +-
 .../application-resiliency-patterns.md | 4 +-
 .../authentication-authorization.md | 4 +-
 .../cloud-native/azure-active-directory.md | 2 +-
 .../cloud-native/azure-caching.md | 8 +--
 .../cloud-native/azure-monitor.md | 14 ++---
 .../cloud-native/azure-security.md | 16 +++---
 .../cloud-native/candidate-apps.md | 4 +-
 .../cloud-native/centralized-configuration.md | 16 +++---
 ...ombine-containers-serverless-approaches.md | 4 +-
 .../cloud-native/communication-patterns.md | 2 +-
 docs/architecture/cloud-native/definition.md | 8 +--
 .../cloud-native/deploy-containers-azure.md | 6 +--
 .../deploy-eshoponcontainers-azure.md | 2 +-
 docs/architecture/cloud-native/devops.md | 2 +-
 .../cloud-native/distributed-data.md | 12 ++---
 .../cloud-native/elastic-search-in-azure.md | 10 ++--
 .../cloud-native/feature-flags.md | 4 +-
 .../cloud-native/front-end-communication.md | 10 ++--
 docs/architecture/cloud-native/grpc.md | 8 +--
 .../cloud-native/identity-server.md | 2 +-
 docs/architecture/cloud-native/identity.md | 2 +-
 .../cloud-native/infrastructure-as-code.md | 8 +--
 .../infrastructure-resiliency-azure.md | 10 ++--
 .../leverage-containers-orchestrators.md | 2 +-
 .../leverage-serverless-functions.md | 6 +--
 .../logging-with-elastic-stack.md | 4 +-
 .../map-eshoponcontainers-azure-services.md | 6 +--
 .../monitoring-azure-kubernetes.md | 6 +--
 .../cloud-native/other-deployment-options.md | 30 +++++------
 .../cloud-native/relational-vs-nosql-data.md | 24 ++++-----
 docs/architecture/cloud-native/resiliency.md | 2 +-
 .../cloud-native/resilient-communications.md | 18 +++----
 .../scale-containers-serverless.md | 8 +--
 ...rvice-mesh-communication-infrastructure.md | 16 +++---
 .../service-to-service-communication.md | 32 ++++++------
 .../deploy-azure-kubernetes-service.md | 2 +-
 .../monolithic-applications.md | 2 +-
 ...chestrate-high-scalability-availability.md | 30 +++++------
 .../state-and-data-in-docker-applications.md | 2 +-
 .../containerized-lifecycle/index.md | 2 +-
 .../manage-production-docker-environments.md | 10 ++--
 .../containerized-lifecycle/what-is-docker.md | 2 +-
 .../grpc-for-wcf-developers/appendix.md | 4 +-
 .../application-performance-management.md | 6 +--
 .../grpc-for-wcf-developers/docker.md | 2 +-
 docs/architecture/index.yml | 4 +-
 ...synchronous-message-based-communication.md | 4 +-
 ...munication-in-microservice-architecture.md | 4 +-
 ...nication-versus-the-API-Gateway-pattern.md | 8 +--
 .../distributed-data-management.md | 2 +-
 .../docker-application-state-data.md | 2 +-
 ...silient-high-availability-microservices.md | 2 +-
 ...lti-container-microservice-applications.md | 8 +--
 .../docker-defined.md | 2 +-
 .../docker-app-development-workflow.md | 2 +-
 ...ry-to-implement-resilient-http-requests.md | 2 +-
 .../domain-events-design-implementation.md | 2 +-
 .../implement-value-objects.md | 2 +-
 ...er-implementation-entity-framework-core.md | 2 +-
 ...pplication-layer-implementation-web-api.md | 2 +-
 .../net-core-microservice-domain-model.md | 2 +-
 ...sql-database-persistence-infrastructure.md | 10 ++--
 .../background-tasks-with-ihostedservice.md | 2 +-
 .../data-driven-crud-microservice.md | 6 +--
 ...event-based-microservice-communications.md | 2 +-
 .../microservice-application-design.md | 2 +-
 .../subscribe-events.md | 6 +--
 ...ng-recommendations-for-asp-net-web-apps.md | 6 +--
 .../common-web-application-architectures.md | 2 +-
 .../develop-asp-net-core-mvc-apps.md | 8 +--
 .../development-process-for-azure.md | 10 ++--
 .../test-asp-net-core-mvc-apps.md | 2 +-
 .../work-with-data-in-asp-net-core-apps.md | 4 +-
 .../migrate-modern-applications.md | 2 +-
 .../modernize-with-azure-containers/index.md | 4 +-
 ...lift-and-shift-existing-apps-azure-iaas.md | 2 +-
 ...embrace-transient-failures-in-the-cloud.md | 2 +-
 ...life-cycle-ci-cd-pipelines-devops-tools.md | 4 +-
 ...ologies-in-cloud-optimized-applications.md | 2 +-
 ...your-apps-with-monitoring-and-telemetry.md | 6 +--
 ...throughs-technical-get-started-overview.md | 2 +-
 .../serverless/application-insights.md | 8 +--
 .../serverless/architecture-approaches.md | 4 +-
 .../architecture-deployment-approaches.md | 30 +++++------
 .../serverless/azure-functions.md | 16 +++---
 .../serverless/durable-azure-functions.md | 6 +--
 docs/architecture/serverless/event-grid.md | 44 ++++++++--------
 docs/architecture/serverless/index.md | 8 +--
 docs/architecture/serverless/logic-apps.md | 6 +--
 .../serverless/orchestration-patterns.md | 2 +-
 .../serverless-architecture-considerations.md | 4 +-
 .../serverless/serverless-architecture.md | 4 +-
 .../serverless-business-scenarios.md | 52 +++++++++----------
 .../serverless/serverless-design-examples.md | 26 +++++-----
 99 files changed, 377 insertions(+), 377 deletions(-)

diff --git a/docs/architecture/blazor-for-web-forms-developers/app-startup.md b/docs/architecture/blazor-for-web-forms-developers/app-startup.md
index 21bf7b22e5366..7fb9322fb21a5 100644
--- a/docs/architecture/blazor-for-web-forms-developers/app-startup.md
+++ b/docs/architecture/blazor-for-web-forms-developers/app-startup.md
@@ -73,7 +73,7 @@ public class Startup

 Like the rest of ASP.NET Core, the Startup class is created with dependency injection principles. The `IConfiguration` is provided to the constructor and stashed in a public property for later access during configuration.

-The `ConfigureServices` method introduced in ASP.NET Core allows for the various ASP.NET Core framework services to be configured for the framework's built-in dependency injection container. The various `services.Add*` methods add services that enable features such as authentication, razor pages, MVC controller routing, SignalR, and Blazor Server interactions among many others. This method was not needed in web forms, as the parsing and handling of the ASPX, ASCX, ASHX, and ASMX files was defined by referencing ASP.NET in the web.config configuration file. More information about dependency injection in ASP.NET Core is available in the [online documentation](https://docs.microsoft.com/aspnet/core/fundamentals/dependency-injection).
+The `ConfigureServices` method introduced in ASP.NET Core allows for the various ASP.NET Core framework services to be configured for the framework's built-in dependency injection container. The various `services.Add*` methods add services that enable features such as authentication, razor pages, MVC controller routing, SignalR, and Blazor Server interactions among many others. This method was not needed in web forms, as the parsing and handling of the ASPX, ASCX, ASHX, and ASMX files was defined by referencing ASP.NET in the web.config configuration file. More information about dependency injection in ASP.NET Core is available in the [online documentation](/aspnet/core/fundamentals/dependency-injection).

 The `Configure` method introduces the concept of the HTTP pipeline to ASP.NET Core. In this method, we declare from top to bottom the [Middleware](middleware.md) that will handle every request sent to our application. Most of these features in the default configuration were scattered across the web forms configuration files and are now in one place for ease of reference.

@@ -97,7 +97,7 @@ The Grunt, Gulp, and WebPack command-line tools and their associated configurati

 ```

-More details about both strategies to manage your CSS and JavaScript files are available in the [Bundle and minify static assets in ASP.NET Core](https://docs.microsoft.com/aspnet/core/client-side/bundling-and-minification) documentation.
+More details about both strategies to manage your CSS and JavaScript files are available in the [Bundle and minify static assets in ASP.NET Core](/aspnet/core/client-side/bundling-and-minification) documentation.

 >[!div class="step-by-step"]
 >[Previous](project-structure.md)
diff --git a/docs/architecture/blazor-for-web-forms-developers/index.md b/docs/architecture/blazor-for-web-forms-developers/index.md
index 4320fe48051d9..372f75f89930a 100644
--- a/docs/architecture/blazor-for-web-forms-developers/index.md
+++ b/docs/architecture/blazor-for-web-forms-developers/index.md
@@ -66,7 +66,7 @@ The first part of this book covers what Blazor is and compares it to web app dev

 ## What this book doesn't cover

-This book is an introduction to Blazor, not a comprehensive migration guide. While it does include guidance on how to approach migrating a project from ASP.NET Web Forms to Blazor, it does not attempt to cover every nuance and detail. For more general guidance on migrating from ASP.NET to ASP.NET Core, refer to the [migration guidance](https://docs.microsoft.com/aspnet/core/migration/proper-to-2x/) in the ASP.NET Core documentation.
+This book is an introduction to Blazor, not a comprehensive migration guide. While it does include guidance on how to approach migrating a project from ASP.NET Web Forms to Blazor, it does not attempt to cover every nuance and detail. For more general guidance on migrating from ASP.NET to ASP.NET Core, refer to the [migration guidance](/aspnet/core/migration/proper-to-2x/) in the ASP.NET Core documentation.

 ### Additional resources
diff --git a/docs/architecture/blazor-for-web-forms-developers/migration.md b/docs/architecture/blazor-for-web-forms-developers/migration.md
index 3da1e99980223..1ecb579dc4c6c 100644
--- a/docs/architecture/blazor-for-web-forms-developers/migration.md
+++ b/docs/architecture/blazor-for-web-forms-developers/migration.md
@@ -629,7 +629,7 @@ Because Blazor is built on .NET Core, there are considerations in ensuring suppo
 - Code Access Security (CAS)
 - Security Transparency

-For more information on techniques to identify necessary changes to support running on .NET Core, see [Port your code from .NET Framework to .NET Core](/dotnet/core/porting).
+For more information on techniques to identify necessary changes to support running on .NET Core, see [Port your code from .NET Framework to .NET Core](../../core/porting/index.md).

 ASP.NET Core is a reimagined version of ASP.NET and has some changes that may not initially seem obvious. The main changes are:
diff --git a/docs/architecture/blazor-for-web-forms-developers/security-authentication-authorization.md b/docs/architecture/blazor-for-web-forms-developers/security-authentication-authorization.md
index 76bbc7a7d10ae..c3c4aa599d125 100644
--- a/docs/architecture/blazor-for-web-forms-developers/security-authentication-authorization.md
+++ b/docs/architecture/blazor-for-web-forms-developers/security-authentication-authorization.md
@@ -14,7 +14,7 @@ Migrating from an ASP.NET Web Forms application to Blazor will almost certainly

 Since ASP.NET 2.0, the ASP.NET Web Forms platform has supported a provider model for a variety of features, including membership. The universal membership provider, along with the optional role provider, is very commonly deployed with ASP.NET Web Forms applications. It offers a robust and secure way to manage authentication and authorization that continues to work well today. The most recent offering of these universal providers is available as a NuGet package, [Microsoft.AspNet.Providers](https://www.nuget.org/packages/Microsoft.AspNet.Providers).

-The Universal Providers work with a SQL database schema that includes tables like `aspnet_Applications`, `aspnet_Membership`, `aspnet_Roles`, and `aspnet_Users`. When configured by running the [aspnet_regsql.exe command](https://docs.microsoft.com/previous-versions/ms229862(v=vs.140)), the providers install tables and stored procedures that provide all of the necessary queries and commands necessary to work with the underlying data. The database schema and these stored procedures are not compatible with newer ASP.NET Identity and ASP.NET Core Identity systems, so existing data must be migrated into the new system. Figure 1 shows an example table schema configured for universal providers.
+The Universal Providers work with a SQL database schema that includes tables like `aspnet_Applications`, `aspnet_Membership`, `aspnet_Roles`, and `aspnet_Users`. When configured by running the [aspnet_regsql.exe command](/previous-versions/ms229862(v=vs.140)), the providers install tables and stored procedures that provide all of the necessary queries and commands necessary to work with the underlying data. The database schema and these stored procedures are not compatible with newer ASP.NET Identity and ASP.NET Core Identity systems, so existing data must be migrated into the new system. Figure 1 shows an example table schema configured for universal providers.

 ![universal providers schema](./media/security/membership-tables.png)

@@ -104,7 +104,7 @@ Typically, ASP.NET Web Forms applications configure security within the `web.con

 ## ASP.NET Core Identity

-Although still tasked with authentication and authorization, ASP.NET Core Identity uses a different set of abstractions and assumptions when compared to the universal providers. For example, the new Identity model supports third party authentication, allowing users to authenticate using a social media account or other trusted authentication provider. ASP.NET Core Identity supports UI for commonly needed pages like login, logout, and register. It leverages EF Core for its data access, and uses EF Core migrations to generate the necessary schema required to supports its data model. This [introduction to Identity on ASP.NET Core](https://docs.microsoft.com/aspnet/core/security/authentication/identity) provides a good overview of what is included with ASP.NET Core Identity and how to get started working with it. If you haven't already set up ASP.NET Core Identity in your application and its database, it will help you get started.
+Although still tasked with authentication and authorization, ASP.NET Core Identity uses a different set of abstractions and assumptions when compared to the universal providers. For example, the new Identity model supports third party authentication, allowing users to authenticate using a social media account or other trusted authentication provider. ASP.NET Core Identity supports UI for commonly needed pages like login, logout, and register. It leverages EF Core for its data access, and uses EF Core migrations to generate the necessary schema required to supports its data model. This [introduction to Identity on ASP.NET Core](/aspnet/core/security/authentication/identity) provides a good overview of what is included with ASP.NET Core Identity and how to get started working with it. If you haven't already set up ASP.NET Core Identity in your application and its database, it will help you get started.

 ### Roles, claims, and policies
@@ -123,7 +123,7 @@ services.AddAuthorization(options =>
 });
 ```

-You can [learn more about how to create custom policies in the documentation](https://docs.microsoft.com/aspnet/core/security/authorization/policies).
+You can [learn more about how to create custom policies in the documentation](/aspnet/core/security/authorization/policies).

 Whether you're using policies or roles, you can specify that a particular page in your Blazor application require that role or policy with the `[Authorize]` attribute, applied with the `@attribute` directive.

@@ -139,7 +139,7 @@ Requiring a policy be satisfied:
 @attribute [Authorize(Policy ="CanadiansOnly")]
 ```

-If you need access to a user's authentication state, roles, or claims in your code, there are two primary ways to achieve this. The first is to receive the authentication state as a cascading parameter. The second is to access the state using an injected `AuthenticationStateProvider`. The details of each of these approaches are described in the [Blazor Security documentation](https://docs.microsoft.com/aspnet/core/blazor/security/).
+If you need access to a user's authentication state, roles, or claims in your code, there are two primary ways to achieve this. The first is to receive the authentication state as a cascading parameter. The second is to access the state using an injected `AuthenticationStateProvider`. The details of each of these approaches are described in the [Blazor Security documentation](/aspnet/core/blazor/security/).

 The following code shows how to receive the `AuthenticationState` as a cascading parameter:
@@ -245,7 +245,7 @@ If you would rather run a script to apply the new schema to an existing database
 dotnet ef migrations script -o auth.sql
 ```

-This will produce a SQL script in the output file `auth.sql` which can then be run against whatever database you like. If you have any trouble running `dotnet ef` commands, [make sure you have the EF Core tools installed on your system](https://docs.microsoft.com/ef/core/miscellaneous/cli/dotnet).
+This will produce a SQL script in the output file `auth.sql` which can then be run against whatever database you like. If you have any trouble running `dotnet ef` commands, [make sure you have the EF Core tools installed on your system](/ef/core/miscellaneous/cli/dotnet).

 In the event you have additional columns on your source tables, you will need to identify the best location for these columns in the new schema. Generally, columns found on the `aspnet_Membership` table should be mapped to the `AspNetUsers` table. Columns on `aspnet_Roles` should be mapped to `AspNetRoles`. Any additional columns on the `aspnet_UsersInRoles` table would be added to the `AspNetUserRoles` table.

@@ -253,9 +253,9 @@ It's also worth considering putting any additional columns on separate tables, s

 ### Migrating data from universal providers to ASP.NET Core Identity

-Once you have the destination table schema in place, the next step is to migrate your user and role records to the new schema. A complete list of the schema differences, including which columns map to which new columns, can be found [here](https://docs.microsoft.com/aspnet/core/migration/proper-to-2x/membership-to-core-identity).
+Once you have the destination table schema in place, the next step is to migrate your user and role records to the new schema. A complete list of the schema differences, including which columns map to which new columns, can be found [here](/aspnet/core/migration/proper-to-2x/membership-to-core-identity).

-To migrate your users from membership to the new identity tables, you should [follow the steps described in the documentation](https://docs.microsoft.com/aspnet/core/migration/proper-to-2x/membership-to-core-identity). After following these steps and the script provided, your users will need to change their password the next time they log in.
+To migrate your users from membership to the new identity tables, you should [follow the steps described in the documentation](/aspnet/core/migration/proper-to-2x/membership-to-core-identity). After following these steps and the script provided, your users will need to change their password the next time they log in.

 It is possible to migrate user passwords but the process is much more involved. Requiring users to update their passwords as part of the migration process, and encouraging them to use new, unique passwords, is likely to enhance the overall security of the application.

@@ -334,7 +334,7 @@ If you further had denied access except to those users belonging to a certain ro

 Note that the `[Authorize]` attribute only works on `@page` components that are reached via the Blazor Router. The attribute does not work with child components, which should instead use `AuthorizeView`.

-If you have logic within page markup for determining whether to display some code to a certain user, you can replace this with the `AuthorizeView` component. The [AuthorizeView component](https://docs.microsoft.com/aspnet/core/blazor/security#authorizeview-component) selectively displays UI depending on whether the user is authorized to see it. It also exposes a `context` variable that can be used to access user information.
+If you have logic within page markup for determining whether to display some code to a certain user, you can replace this with the `AuthorizeView` component. The [AuthorizeView component](/aspnet/core/blazor/security#authorizeview-component) selectively displays UI depending on whether the user is authorized to see it. It also exposes a `context` variable that can be used to access user information.

 ```razor

@@ -409,10 +409,10 @@ Blazor uses the same security model as ASP.NET Core, which is ASP.NET Core Ident

 ## References

-- [Introduction to Identity on ASP.NET Core](https://docs.microsoft.com/aspnet/core/security/authentication/identity)
-- [Migrate from ASP.NET Membership authentication to ASP.NET Core 2.0 Identity](https://docs.microsoft.com/aspnet/core/migration/proper-to-2x/membership-to-core-identity)
-- [Migrate Authentication and Identity to ASP.NET Core](https://docs.microsoft.com/aspnet/core/migration/identity)
-- [ASP.NET Core Blazor authentication and authorization](https://docs.microsoft.com/aspnet/core/blazor/security/)
+- [Introduction to Identity on ASP.NET Core](/aspnet/core/security/authentication/identity)
+- [Migrate from ASP.NET Membership authentication to ASP.NET Core 2.0 Identity](/aspnet/core/migration/proper-to-2x/membership-to-core-identity)
+- [Migrate Authentication and Identity to ASP.NET Core](/aspnet/core/migration/identity)
+- [ASP.NET Core Blazor authentication and authorization](/aspnet/core/blazor/security/)

 >[!div class="step-by-step"]
 >[Previous](config.md)
diff --git a/docs/architecture/cloud-native/application-bundles.md b/docs/architecture/cloud-native/application-bundles.md
index ffe725c473b1d..9176b0abf87c7 100644
--- a/docs/architecture/cloud-native/application-bundles.md
+++ b/docs/architecture/cloud-native/application-bundles.md
@@ -76,9 +76,9 @@ There are so many great tools in the DevOps space these days and even more fanta
 ## References

 - [Azure DevOps](https://azure.microsoft.com/services/devops/)
-- [Azure Resource Manager](https://azure.microsoft.com/documentation/articles/resource-group-overview/)
+- [Azure Resource Manager](/azure/azure-resource-manager/management/overview)
 - [Terraform](https://www.terraform.io/)
-- [Azure CLI](https://docs.microsoft.com/cli/azure/)
+- [Azure CLI](/cli/azure/)

 >[!div class="step-by-step"]
 >[Previous](infrastructure-as-code.md)
diff --git a/docs/architecture/cloud-native/application-resiliency-patterns.md b/docs/architecture/cloud-native/application-resiliency-patterns.md
index c1e19b0e8cb85..460474392bb6b 100644
--- a/docs/architecture/cloud-native/application-resiliency-patterns.md
+++ b/docs/architecture/cloud-native/application-resiliency-patterns.md
@@ -41,7 +41,7 @@ Next, let's expand on retry and circuit breaker patterns.

 In a distributed cloud-native environment, calls to services and cloud resources can fail because of transient (short-lived) failures, which typically correct themselves after a brief period of time. Implementing a retry strategy helps a cloud-native service mitigate these scenarios.

-The [Retry pattern](https://docs.microsoft.com/azure/architecture/patterns/retry) enables a service to retry a failed request operation a (configurable) number of times with an exponentially increasing wait time. Figure 6-2 shows a retry in action.
+The [Retry pattern](/azure/architecture/patterns/retry) enables a service to retry a failed request operation a (configurable) number of times with an exponentially increasing wait time. Figure 6-2 shows a retry in action.

 ![Retry pattern in action](./media/retry-pattern.png)

@@ -65,7 +65,7 @@ To make things worse, executing continual retry operations on a non-responsive s

 In these situations, it would be preferable for the operation to fail immediately and only attempt to invoke the service if it's likely to succeed.

-The [Circuit Breaker pattern](https://docs.microsoft.com/azure/architecture/patterns/circuit-breaker) can prevent an application from repeatedly trying to execute an operation that's likely to fail. After a pre-defined number of failed calls, it blocks all traffic to the service. Periodically, it will allow a trial call to determine whether the fault has resolved. Figure 6-3 shows the Circuit Breaker pattern in action.
+The [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker) can prevent an application from repeatedly trying to execute an operation that's likely to fail. After a pre-defined number of failed calls, it blocks all traffic to the service. Periodically, it will allow a trial call to determine whether the fault has resolved. Figure 6-3 shows the Circuit Breaker pattern in action. ![Circuit breaker pattern in action](./media/circuit-breaker-pattern.png) diff --git a/docs/architecture/cloud-native/authentication-authorization.md b/docs/architecture/cloud-native/authentication-authorization.md index 82ee652990d8f..0586a95b370fb 100644 --- a/docs/architecture/cloud-native/authentication-authorization.md +++ b/docs/architecture/cloud-native/authentication-authorization.md @@ -15,8 +15,8 @@ Many organizations still rely on local authentication services like Active Direc ## References -- [Authentication basics](https://docs.microsoft.com/azure/active-directory/develop/authentication-scenarios) -- [Access tokens and claims](https://docs.microsoft.com/azure/active-directory/develop/access-tokens) +- [Authentication basics](/azure/active-directory/develop/authentication-scenarios) +- [Access tokens and claims](/azure/active-directory/develop/access-tokens) - [It may be time to ditch your on premises authentication services](https://oxfordcomputergroup.com/resources/o365-security-native-cloud-authentication/) >[!div class="step-by-step"] diff --git a/docs/architecture/cloud-native/azure-active-directory.md b/docs/architecture/cloud-native/azure-active-directory.md index 736d6f0849e86..a207bb73115cd 100644 --- a/docs/architecture/cloud-native/azure-active-directory.md +++ b/docs/architecture/cloud-native/azure-active-directory.md @@ -14,7 +14,7 @@ Azure AD supports company branded sign-in screens, multi-factory authentication, ## References -- [Microsoft identity platform](https://docs.microsoft.com/azure/active-directory/develop/) +- [Microsoft identity 
platform](/azure/active-directory/develop/) >[!div class="step-by-step"] >[Previous](authentication-authorization.md) diff --git a/docs/architecture/cloud-native/azure-caching.md b/docs/architecture/cloud-native/azure-caching.md index eecf219d7aeac..ababd976e59fc 100644 --- a/docs/architecture/cloud-native/azure-caching.md +++ b/docs/architecture/cloud-native/azure-caching.md @@ -15,7 +15,7 @@ The benefits of caching are well understood. The technique works by temporarily ## Why? -As discussed in the [Microsoft caching guidance](https://docs.microsoft.com/azure/architecture/best-practices/caching), caching can increase performance, scalability, and availability for individual microservices and the system as a whole. It reduces the latency and contention of handling large volumes of concurrent requests to a data store. As data volume and the number of users increase, the greater the benefits of caching become. +As discussed in the [Microsoft caching guidance](/azure/architecture/best-practices/caching), caching can increase performance, scalability, and availability for individual microservices and the system as a whole. It reduces the latency and contention of handling large volumes of concurrent requests to a data store. As data volume and the number of users increase, the greater the benefits of caching become. Caching is most effective when a client repeatedly reads data that is immutable or that changes infrequently. Examples include reference information such as product and pricing information, or shared static resources that are costly to construct. @@ -33,7 +33,7 @@ Cloud native applications typically implement a distributed caching architecture In the previous figure, note how the cache is independent of and shared by the microservices. In this scenario, the cache is invoked by the [API Gateway](./front-end-communication.md). As discussed in chapter 4, the gateway serves as a front end for all incoming requests. 
The distributed cache increases system responsiveness by returning cached data whenever possible. Additionally, separating the cache from the services allows the cache to scale up or out independently to meet increased traffic demands. -The previous figure presents a common caching pattern known as the [cache-aside pattern](https://docs.microsoft.com/azure/architecture/patterns/cache-aside). For an incoming request, you first query the cache (step \#1) for a response. If found, the data is returned immediately. If the data doesn't exist in the cache (known as a [cache miss](https://www.techopedia.com/definition/6308/cache-miss)), it's retrieved from a local database in a downstream service (step \#2). It's then written to the cache for future requests (step \#3), and returned to the caller. Care must be taken to periodically evict cached data so that the system remains timely and consistent. +The previous figure presents a common caching pattern known as the [cache-aside pattern](/azure/architecture/patterns/cache-aside). For an incoming request, you first query the cache (step \#1) for a response. If found, the data is returned immediately. If the data doesn't exist in the cache (known as a [cache miss](https://www.techopedia.com/definition/6308/cache-miss)), it's retrieved from a local database in a downstream service (step \#2). It's then written to the cache for future requests (step \#3), and returned to the caller. Care must be taken to periodically evict cached data so that the system remains timely and consistent. As a shared cache grows, it might prove beneficial to partition its data across multiple nodes. Doing so can help minimize contention and improve scalability. Many Caching services support the ability to dynamically add and remove nodes and rebalance data across partitions. This approach typically involves clustering. Clustering exposes a collection of federated nodes as a seamless, single cache. 
Internally, however, the data is dispersed across the nodes following a predefined distribution strategy that balances the load evenly. @@ -50,9 +50,9 @@ Azure Cache for Redis is more than a simple cache server. It can support a numbe - A message broker - A configuration or discovery server -For advanced scenarios, a copy of the cached data can be [persisted to disk](https://docs.microsoft.com/azure/azure-cache-for-redis/cache-how-to-premium-persistence). If a catastrophic event disables both the primary and replica caches, the cache is reconstructed from the most recent snapshot. +For advanced scenarios, a copy of the cached data can be [persisted to disk](/azure/azure-cache-for-redis/cache-how-to-premium-persistence). If a catastrophic event disables both the primary and replica caches, the cache is reconstructed from the most recent snapshot. -Azure Redis Cache is available across a number of predefined configurations and pricing tiers. The [Premium tier](https://docs.microsoft.com/azure/azure-cache-for-redis/cache-overview#service-tiers) features many enterprise-level features such as clustering, data persistence, geo-replication, and virtual-network isolation. +Azure Redis Cache is available across a number of predefined configurations and pricing tiers. The [Premium tier](/azure/azure-cache-for-redis/cache-overview#service-tiers) features many enterprise-level features such as clustering, data persistence, geo-replication, and virtual-network isolation. >[!div class="step-by-step"] >[Previous](relational-vs-nosql-data.md) diff --git a/docs/architecture/cloud-native/azure-monitor.md b/docs/architecture/cloud-native/azure-monitor.md index 36b39fe3d1283..ccdb0191c0ec7 100644 --- a/docs/architecture/cloud-native/azure-monitor.md +++ b/docs/architecture/cloud-native/azure-monitor.md @@ -15,9 +15,9 @@ No other cloud provider has as mature of a cloud application monitoring solution The first step in any monitoring solution is to gather as much data as possible. 
The more data gathered, the deeper the insights. Instrumenting systems has traditionally been difficult. Simple Network Management Protocol (SNMP) was the gold standard protocol for collecting machine level information, but it required a great deal of knowledge and configuration. Fortunately, much of this hard work has been eliminated as the most common metrics are gathered automatically by Azure Monitor. -Application level metrics and events aren't possible to instrument automatically because they're specific to the application being deployed. In order to gather these metrics, there are [SDKs and APIs available](https://docs.microsoft.com/azure/azure-monitor/app/api-custom-events-metrics) to directly report such information, such as when a customer signs up or completes an order. Exceptions can also be captured and reported back into Azure Monitor via Application Insights. The SDKs support most every language found in Cloud Native Applications including Go, Python, JavaScript, and the .NET languages. +Application-level metrics and events aren't possible to instrument automatically because they're specific to the application being deployed. To gather these metrics, [SDKs and APIs are available](/azure/azure-monitor/app/api-custom-events-metrics) to report this information directly, such as when a customer signs up or completes an order. Exceptions can also be captured and reported back into Azure Monitor via Application Insights. The SDKs support almost every language found in cloud-native applications, including Go, Python, JavaScript, and the .NET languages. -The ultimate goal of gathering information about the state of your application is to ensure that your end users have a good experience. What better way to tell if users are experiencing issues than doing [outside-in web tests](https://docs.microsoft.com/azure/azure-monitor/app/monitor-web-app-availability)?
These tests can be as simple as pinging your website from locations around the world or as involved as having agents log into the site and simulate user actions. +The ultimate goal of gathering information about the state of your application is to ensure that your end users have a good experience. What better way to tell if users are experiencing issues than doing [outside-in web tests](/azure/azure-monitor/app/monitor-web-app-availability)? These tests can be as simple as pinging your website from locations around the world or as involved as having agents log into the site and simulate user actions. ## Reporting data @@ -40,11 +40,11 @@ Figure 7-13 shows the results of this Application Insights Query. ![Application Insights query results](./media/application_insights_example.png) **Figure 7-13**. Application Insights query results. -There is a [playground for experimenting with Kusto](https://dataexplorer.azure.com/clusters/help/databases/Samples) queries. Reading [sample queries](https://docs.microsoft.com/azure/kusto/query/samples) can also be instructive. +There is a [playground for experimenting with Kusto](https://dataexplorer.azure.com/clusters/help/databases/Samples) queries. Reading [sample queries](/azure/kusto/query/samples) can also be instructive. ## Dashboards -There are several different dashboard technologies that may be used to surface the information from Azure Monitor. Perhaps the simplest is to just run queries in Application Insights and [plot the data into a chart](https://docs.microsoft.com/azure/azure-monitor/learn/tutorial-app-dashboards). +There are several different dashboard technologies that may be used to surface the information from Azure Monitor. Perhaps the simplest is to just run queries in Application Insights and [plot the data into a chart](/azure/azure-monitor/learn/tutorial-app-dashboards). ![An example of Application Insights charts embedded in the main Azure Dashboard](./media/azure_dashboard.png) **Figure 7-14**. 
An example of Application Insights charts embedded in the main Azure Dashboard. @@ -57,7 +57,7 @@ These charts can then be embedded in the Azure portal proper through use of the ## Alerts -Sometimes, having data dashboards is insufficient. If nobody is awake to watch the dashboards, then it can still be many hours before a problem is addressed, or even detected. To this end, Azure Monitor also provides a top notch [alerting solution](https://docs.microsoft.com/azure/azure-monitor/platform/alerts-overview). Alerts can be triggered by a wide range of conditions including: +Sometimes, having data dashboards is insufficient. If nobody is awake to watch the dashboards, then it can still be many hours before a problem is addressed, or even detected. To this end, Azure Monitor also provides a top-notch [alerting solution](/azure/azure-monitor/platform/alerts-overview). Alerts can be triggered by a wide range of conditions, including: - Metric values - Log search queries @@ -69,11 +69,11 @@ When triggered, the alerts can perform a wide variety of tasks. On the simple si As common causes of alerts are identified, the alerts can be enhanced with details about the common causes of the alerts and the steps to take to resolve them. Highly mature cloud-native application deployments may opt to kick off self-healing tasks, which perform actions such as removing failing nodes from a scale set or triggering an autoscaling activity. Eventually it may no longer be necessary to wake up on-call personnel at 2AM to resolve a live-site issue as the system will be able to adjust itself to compensate or at least limp along until somebody arrives at work the next morning.
And then, on a given week, suddenly the number of requests hits a highly unusual 20,000 requests per minute. [Smart Detection](https://docs.microsoft.com/azure/azure-monitor/app/proactive-diagnostics) will notice this deviation from the norm and trigger an alert. At the same time, the trend analysis is smart enough to avoid firing false positives when the traffic load is expected. +Azure Monitor automatically leverages machine learning to understand the normal operating parameters of deployed applications. This enables it to detect services that are operating outside of their normal parameters. For instance, the typical weekday traffic on the site might be 10,000 requests per minute. Then, on a given week, the number of requests suddenly hits a highly unusual 20,000 requests per minute. [Smart Detection](/azure/azure-monitor/app/proactive-diagnostics) will notice this deviation from the norm and trigger an alert. At the same time, the trend analysis is smart enough to avoid firing false positives when the traffic load is expected. ## References -- [Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/overview) +- [Azure Monitor](/azure/azure-monitor/overview) >[!div class="step-by-step"] >[Previous](monitoring-azure-kubernetes.md) diff --git a/docs/architecture/cloud-native/azure-security.md b/docs/architecture/cloud-native/azure-security.md index feba5855fa7b3..4d557278dd6bc 100644 --- a/docs/architecture/cloud-native/azure-security.md +++ b/docs/architecture/cloud-native/azure-security.md @@ -19,7 +19,7 @@ No matter if the advantages outweigh the disadvantages of cloud-native applicati - Who should have access to this data? - Are there auditing policies in place around the development and release process? -All these questions are part of a process called [threat modeling](https://docs.microsoft.com/azure/security/azure-security-threat-modeling-tool).
This process tries to answer the question of what threats there are to the system, how likely the threats are, and the potential damage from them. +All these questions are part of a process called [threat modeling](/azure/security/azure-security-threat-modeling-tool). This process tries to answer the question of what threats there are to the system, how likely the threats are, and the potential damage from them. Once the list of threats has been established, you need to decide whether they're worth mitigating. Sometimes a threat is so unlikely and expensive to plan for that it isn't worth spending energy on it. For instance, some state level actor could inject changes into the design of a process that is used by millions of devices. Now, instead of running a certain piece of code in [Ring 3](https://en.wikipedia.org/wiki/Protection_ring), that code is run in Ring 0. This allows an exploit that can bypass the hypervisor and run the attack code on the bare metal machines, allowing attacks on all the virtual machines that are running on that hardware. @@ -89,11 +89,11 @@ With the network established, internal resources like storage accounts can be se The nodes in an Azure Kubernetes cluster can participate in a virtual network just like other resources that are more native to Azure. This functionality is called [Azure Container Networking Interface](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md). In effect, it allocates a subnet within the virtual network on which virtual machines and container images are allocated. -Continuing down the path of illustrating the principle of least privilege, not every resource within a Virtual Network needs to talk to every other resource. For instance, in an application that provides a web API over a storage account and a SQL database, it's unlikely that the database and the storage account need to talk to one another. Any data sharing between them would go through the web application. 
So, a [network security group (NSG)](https://docs.microsoft.com/azure/virtual-network/security-overview) could be used to deny traffic between the two services. +Continuing down the path of illustrating the principle of least privilege, not every resource within a Virtual Network needs to talk to every other resource. For instance, in an application that provides a web API over a storage account and a SQL database, it's unlikely that the database and the storage account need to talk to one another. Any data sharing between them would go through the web application. So, a [network security group (NSG)](/azure/virtual-network/security-overview) could be used to deny traffic between the two services. A policy of denying communication between resources can be annoying to implement, especially coming from a background of using Azure without traffic restrictions. On some other clouds, the concept of network security groups is much more prevalent. For instance, the default policy on AWS is that resources can't communicate among themselves until enabled by rules in an NSG. While slower to develop this, more restrictive environment provides a more secure default. Making use of proper DevOps practices, especially using [Azure Resource Manager or Terraform](infrastructure-as-code.md) to manage permissions can make controlling the rules easier. -Virtual Networks can also be useful when setting up communication between on-premises and cloud resources. A virtual private network can be used to seamlessly attach the two networks together. This allows running a virtual network without any sort of gateway for scenarios where all the users are on-site. There are a number of technologies that can be used to establish this network. The simplest is to use a [site-to-site VPN](https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-about-vpngateways?toc=%2fazure%2fvirtual-network%2ftoc.json#s2smulti) that can be established between many routers and Azure. 
Traffic is encrypted and tunneled over the Internet at the same cost per byte as any other traffic. For scenarios where more bandwidth or more security is desirable, Azure offers a service called [Express Route](https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-about-vpngateways?toc=%2fazure%2fvirtual-network%2ftoc.json#ExpressRoute) that uses a private circuit between an on-premises network and Azure. It's more costly and difficult to establish but also more secure. +Virtual Networks can also be useful when setting up communication between on-premises and cloud resources. A virtual private network can be used to seamlessly attach the two networks together. This allows running a virtual network without any sort of gateway for scenarios where all the users are on-site. There are a number of technologies that can be used to establish this network. The simplest is to use a [site-to-site VPN](/azure/vpn-gateway/vpn-gateway-about-vpngateways?toc=%2fazure%2fvirtual-network%2ftoc.json#s2smulti) that can be established between many routers and Azure. Traffic is encrypted and tunneled over the Internet at the same cost per byte as any other traffic. For scenarios where more bandwidth or more security is desirable, Azure offers a service called [ExpressRoute](/azure/vpn-gateway/vpn-gateway-about-vpngateways?toc=%2fazure%2fvirtual-network%2ftoc.json#ExpressRoute) that uses a private circuit between an on-premises network and Azure. It's more costly and difficult to establish but also more secure. ## Role-based access control for restricting access to Azure resources @@ -124,7 +124,7 @@ A security principal can take on many roles or, using a more sartorial analogy,
A contributor has the same level of access to all resources but they can't assign permissions. A Reader can only view existing Azure resources and a User Account Administrator can manage access to Azure resources. -More granular built-in roles such as [DNS Zone Contributor](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#dns-zone-contributor) have rights limited to a single service. Security principals can take on any number of roles. +More granular built-in roles such as [DNS Zone Contributor](/azure/role-based-access-control/built-in-roles#dns-zone-contributor) have rights limited to a single service. Security principals can take on any number of roles. ## Scopes @@ -142,7 +142,7 @@ Deny rules take precedence over allow rules. Now representing the same "allow al ## Checking access -As you can imagine, having a large number of roles and scopes can make figuring out the effective permission of a service principal quite difficult. Piling deny rules on top of that, only serves to increase the complexity. Fortunately, there's a [permissions calculator](https://docs.microsoft.com/azure/role-based-access-control/check-access) that can show the effective permissions for any service principal. It's typically found under the IAM tab in the portal, as shown in Figure 10-3. +As you can imagine, having a large number of roles and scopes can make figuring out the effective permission of a service principal quite difficult. Piling deny rules on top of that, only serves to increase the complexity. Fortunately, there's a [permissions calculator](/azure/role-based-access-control/check-access) that can show the effective permissions for any service principal. It's typically found under the IAM tab in the portal, as shown in Figure 10-3. ![Figure 9-4 Permission calculator for an app service](./media/check-rbac.png) @@ -226,9 +226,9 @@ In any application, there are a number of places where data rests on disk. 
The a The underpinning of much of Azure is the Azure Storage engine. Virtual machine disks are mounted on top of Azure Storage. Azure Kubernetes Services run on virtual machines that, themselves, are hosted on Azure Storage. Even serverless technologies, such as Azure Functions Apps and Azure Container Instances, run out of disk that is part of Azure Storage. -If Azure Storage is well encrypted, then it provides for a foundation for most everything else to also be encrypted. Azure Storage [is encrypted](https://docs.microsoft.com/azure/storage/common/storage-service-encryption) with [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard). This is a well-regarded encryption technology having been the subject of extensive academic scrutiny over the last 20 or so years. At present, there's no known practical attack that would allow someone without knowledge of the key to read data encrypted by AES. +If Azure Storage is well encrypted, then it provides a foundation for almost everything else to also be encrypted. Azure Storage [is encrypted](/azure/storage/common/storage-service-encryption) with [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard). This is a well-regarded encryption technology having been the subject of extensive academic scrutiny over the last 20 or so years. At present, there's no known practical attack that would allow someone without knowledge of the key to read data encrypted by AES. -By default, the keys used for encrypting Azure Storage are managed by Microsoft. There are extensive protections in place to ensure to prevent malicious access to these keys. However, users with particular encryption requirements can also [provide their own storage keys](https://docs.microsoft.com/azure/storage/common/storage-encryption-keys-powershell) that are managed in Azure Key Vault.
These keys can be revoked at any time, which would effectively render the contents of the Storage account using them inaccessible. +By default, the keys used for encrypting Azure Storage are managed by Microsoft. There are extensive protections in place to prevent malicious access to these keys. However, users with particular encryption requirements can also [provide their own storage keys](/azure/storage/common/storage-encryption-keys-powershell) that are managed in Azure Key Vault. These keys can be revoked at any time, which would effectively render the contents of the Storage account using them inaccessible. Virtual machines use encrypted storage, but it's possible to provide another layer of encryption by using technologies like BitLocker on Windows or DM-Crypt on Linux. These technologies mean that even if the disk image was leaked off of storage, it would remain near impossible to read it. @@ -238,7 +238,7 @@ Databases hosted on Azure SQL use a technology called [Transparent Data Encrypti The encryption parameters are stored in the `master` database and, on startup, are read into memory for the remaining operations. This means that the `master` database must remain unencrypted. The actual key is managed by Microsoft. However, users with exacting security requirements may provide their own key in Key Vault in much the same way as is done for Azure Storage. The Key Vault provides for such services as key rotation and revocation. -The "Transparent" part of TDS comes from the fact that there aren't client changes needed to use an encrypted database. While this approach provides for good security, leaking the database password is enough for users to be able to decrypt the data. There's another approach that encrypts individual columns or tables in a database.
[Always Encrypted](https://docs.microsoft.com/azure/sql-database/sql-database-always-encrypted-azure-key-vault) ensures that at no point the encrypted data appears in plain text inside the database. +The "Transparent" part of TDE comes from the fact that no client changes are needed to use an encrypted database. While this approach provides for good security, leaking the database password is enough for users to be able to decrypt the data. There's another approach that encrypts individual columns or tables in a database. [Always Encrypted](/azure/sql-database/sql-database-always-encrypted-azure-key-vault) ensures that at no point the encrypted data appears in plain text inside the database. Setting up this tier of encryption requires running through a wizard in SQL Server Management Studio to select the sort of encryption and where in Key Vault to store the associated keys. diff --git a/docs/architecture/cloud-native/candidate-apps.md b/docs/architecture/cloud-native/candidate-apps.md index 60a63de879470..750d68dfae1d2 100644 --- a/docs/architecture/cloud-native/candidate-apps.md +++ b/docs/architecture/cloud-native/candidate-apps.md @@ -35,7 +35,7 @@ The free Microsoft e-book [Modernize existing .NET applications with Azure cloud Monolithic apps that are non-critical largely benefit from a quick lift-and-shift ([Cloud Infrastructure-Ready](../modernize-with-azure-containers/lift-and-shift-existing-apps-azure-iaas.md)) migration. Here, the on-premises workload is rehosted to a cloud-based VM, without changes. This approach uses the [IaaS (Infrastructure as a Service) model](https://azure.microsoft.com/overview/what-is-iaas/). Azure includes several tools such as [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/), [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/), and [Azure Database Migration Service](https://azure.microsoft.com/campaigns/database-migration/) to make such a move easier.
While this strategy can yield some cost savings, such applications typically weren't architected to unlock and leverage the benefits of cloud computing. -Monolithic apps that are critical to the business oftentimes benefit from an enhanced lift-and-shift (*Cloud Optimized*) migration. This approach includes deployment optimizations that enable key cloud services - without changing the core architecture of the application. For example, you might [containerize](https://docs.microsoft.com/virtualization/windowscontainers/about/) the application and deploy it to a container orchestrator, like [Azure Kubernetes Services](https://azure.microsoft.com/services/kubernetes-service/), discussed later in this book. Once in the cloud, the application could consume other cloud services such as databases, message queues, monitoring, and distributed caching. +Monolithic apps that are critical to the business often benefit from an enhanced lift-and-shift (*Cloud Optimized*) migration. This approach includes deployment optimizations that enable key cloud services - without changing the core architecture of the application. For example, you might [containerize](/virtualization/windowscontainers/about/) the application and deploy it to a container orchestrator, like [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/), discussed later in this book. Once in the cloud, the application could consume other cloud services such as databases, message queues, monitoring, and distributed caching. Finally, monolithic apps that perform strategic enterprise functions might best benefit from a *Cloud-Native* approach, the subject of this book. This approach provides agility and velocity. But, it comes at a cost of replatforming, rearchitecting, and rewriting code.
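For illustration, the containerization step mentioned in the enhanced lift-and-shift paragraph above often amounts to adding a small multi-stage Dockerfile to the existing project. This is a hedged sketch only: the base-image tags reflect .NET Core 3.1-era images, and `MyApp.dll` is a placeholder assembly name, not something taken from this change.

```dockerfile
# Illustrative multi-stage build for an existing ASP.NET Core app (names are placeholders).
# Stage 1: restore and publish using the full SDK image.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Stage 2: copy only the published output into the smaller runtime image.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

The resulting image can then be pushed to a registry and scheduled on an orchestrator such as Azure Kubernetes Service.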
@@ -75,7 +75,7 @@ With the introduction behind, we now dive into a much more detailed look at clou - [Beyond the Twelve-Factor Application](https://content.pivotal.io/blog/beyond-the-twelve-factor-app) -- [What is Infrastructure as Code](https://docs.microsoft.com/azure/devops/learn/what-is-infrastructure-as-code) +- [What is Infrastructure as Code](/azure/devops/learn/what-is-infrastructure-as-code) - [Uber Engineering's Micro Deploy: Deploying Daily with Confidence](https://eng.uber.com/micro-deploy/) diff --git a/docs/architecture/cloud-native/centralized-configuration.md b/docs/architecture/cloud-native/centralized-configuration.md index b3b223588c079..a9838897c21f8 100644 --- a/docs/architecture/cloud-native/centralized-configuration.md +++ b/docs/architecture/cloud-native/centralized-configuration.md @@ -14,7 +14,7 @@ The Azure cloud presents several great options. ## Azure App Configuration -[Azure App Configuration](https://docs.microsoft.com/azure/azure-app-configuration/overview) is a fully managed Azure service that stores non-secret configuration settings in a secure, centralized location. Stored values can be shared among multiple services and applications. +[Azure App Configuration](/azure/azure-app-configuration/overview) is a fully managed Azure service that stores non-secret configuration settings in a secure, centralized location. Stored values can be shared among multiple services and applications. 
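Conceptually, the service described above is a centralized key-value store whose entries can carry a label (for example, one per environment), so several services read from a single source of truth. The following is a minimal, language-agnostic sketch of that lookup idea only; it is not the Azure App Configuration API, and every name in it is invented for illustration.

```python
# Toy sketch of a centralized configuration store shared by several services.
# Illustrates the key/label lookup idea only; this is NOT the Azure SDK.

class ConfigStore:
    """Central store mapping (key, label) -> value, with fallback to the unlabeled default."""

    def __init__(self):
        self._settings = {}

    def set(self, key, value, label=None):
        self._settings[(key, label)] = value

    def get(self, key, label=None):
        # Prefer the labeled (environment-specific) value, then the shared default.
        if (key, label) in self._settings:
            return self._settings[(key, label)]
        return self._settings[(key, None)]


store = ConfigStore()
store.set("Catalog:PageSize", 10)                # shared default
store.set("Catalog:PageSize", 50, label="prod")  # production override

# Two different services can read from the same central store.
print(store.get("Catalog:PageSize"))                # default value
print(store.get("Catalog:PageSize", label="prod"))  # production value
```

A real client would replace the in-memory dictionary with calls to the App Configuration service through its REST API or one of its SDKs.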
The service is simple to use and provides several benefits: @@ -49,16 +49,16 @@ The eShopOnContainers application includes local application settings files with ## References - [The eShopOnContainers Architecture](https://github.com/dotnet-architecture/eShopOnContainers/wiki/Architecture) -- [Orchestrating microservices and multi-container applications for high scalability and availability](https://docs.microsoft.com/dotnet/architecture/microservices/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications) -- [Azure API Management](https://docs.microsoft.com/azure/api-management/api-management-key-concepts) -- [Azure SQL Database Overview](https://docs.microsoft.com/azure/sql-database/sql-database-technical-overview) +- [Orchestrating microservices and multi-container applications for high scalability and availability](../microservices/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md) +- [Azure API Management](/azure/api-management/api-management-key-concepts) +- [Azure SQL Database Overview](/azure/sql-database/sql-database-technical-overview) - [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) -- [Azure Cosmos DB's API for MongoDB](https://docs.microsoft.com/azure/cosmos-db/mongodb-introduction) -- [Azure Service Bus](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-messaging-overview) -- [Azure Monitor overview](https://docs.microsoft.com/azure/azure-monitor/overview) +- [Azure Cosmos DB's API for MongoDB](/azure/cosmos-db/mongodb-introduction) +- [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) +- [Azure Monitor overview](/azure/azure-monitor/overview) - [eShopOnContainers: Create Kubernetes cluster in AKS](https://github.com/dotnet-architecture/eShopOnContainers/wiki/Deploy-to-Azure-Kubernetes-Service-(AKS)#create-kubernetes-cluster-in-aks) - [eShopOnContainers: 
Azure Dev Spaces](https://github.com/dotnet-architecture/eShopOnContainers/wiki/Azure-Dev-Spaces) -- [Azure Dev Spaces](https://docs.microsoft.com/azure/dev-spaces/about) +- [Azure Dev Spaces](/azure/dev-spaces/about) >[!div class="step-by-step"] >[Previous](deploy-eshoponcontainers-azure.md) diff --git a/docs/architecture/cloud-native/combine-containers-serverless-approaches.md b/docs/architecture/cloud-native/combine-containers-serverless-approaches.md index 8b4f79841afd0..e3bd43199ae66 100644 --- a/docs/architecture/cloud-native/combine-containers-serverless-approaches.md +++ b/docs/architecture/cloud-native/combine-containers-serverless-approaches.md @@ -24,11 +24,11 @@ To wrap an Azure Function in a Docker container, install the [Azure Functions Co func init ProjectName --worker-runtime dotnet --docker ``` -When the project is created, it will include a Dockerfile and the worker runtime configured to `dotnet`. Now, you can create and test your function locally. Build and run it using the `docker build` and `docker run` commands. For detailed steps to get started building Azure Functions with Docker support, see the [Create a function on Linux using a custom image](https://docs.microsoft.com/azure/azure-functions/functions-create-function-linux-custom-image) tutorial. +When the project is created, it will include a Dockerfile and the worker runtime configured to `dotnet`. Now, you can create and test your function locally. Build and run it using the `docker build` and `docker run` commands. For detailed steps to get started building Azure Functions with Docker support, see the [Create a function on Linux using a custom image](/azure/azure-functions/functions-create-function-linux-custom-image) tutorial. ## How to combine serverless and Kubernetes with KEDA -In this chapter, you've seen that the Azure Functions' platform automatically scales out to meet demand. 
When deploying containerized functions to AKS, however, you lose the built-in scaling functionality. To the rescue comes [Kubernetes-based Event Driven (KEDA)](https://docs.microsoft.com/azure/azure-functions/functions-kubernetes-keda). It enables fine-grained autoscaling for `event-driven Kubernetes workloads,` including containerized functions. +In this chapter, you've seen that the Azure Functions platform automatically scales out to meet demand. When deploying containerized functions to AKS, however, you lose the built-in scaling functionality. To the rescue comes [Kubernetes-based Event Driven Autoscaling (KEDA)](/azure/azure-functions/functions-kubernetes-keda). It enables fine-grained autoscaling for event-driven Kubernetes workloads, including containerized functions. KEDA provides event-driven scaling functionality to the Functions' runtime in a Docker container. KEDA can scale from zero instances (when no events are occurring) out to `n instances`, based on load. It enables autoscaling by exposing custom metrics to the Kubernetes autoscaler (Horizontal Pod Autoscaler). Using Functions containers with KEDA makes it possible to replicate serverless function capabilities in any Kubernetes cluster. diff --git a/docs/architecture/cloud-native/communication-patterns.md b/docs/architecture/cloud-native/communication-patterns.md index b0fcc50e12133..08867214ee970 100644 --- a/docs/architecture/cloud-native/communication-patterns.md +++ b/docs/architecture/cloud-native/communication-patterns.md @@ -15,7 +15,7 @@ In a monolithic application, communication is straightforward. The code modules Cloud-native systems implement a microservice-based architecture with many small, independent microservices. Each microservice executes in a separate process and typically runs inside a container that is deployed to a *cluster*. -A cluster groups a pool of virtual machines together to form a highly available environment.
They're managed with an orchestration tool, which is responsible for deploying and managing the containerized microservices. Figure 4-1 shows a [Kubernetes](https://kubernetes.io) cluster deployed into the Azure cloud with the fully managed [Azure Kubernetes Services](https://docs.microsoft.com/azure/aks/intro-kubernetes). +A cluster groups a pool of virtual machines together to form a highly available environment. They're managed with an orchestration tool, which is responsible for deploying and managing the containerized microservices. Figure 4-1 shows a [Kubernetes](https://kubernetes.io) cluster deployed into the Azure cloud with the fully managed [Azure Kubernetes Service](/azure/aks/intro-kubernetes). ![A Kubernetes cluster in Azure](./media/kubernetes-cluster-in-azure.png) diff --git a/docs/architecture/cloud-native/definition.md b/docs/architecture/cloud-native/definition.md index fbf333e631a4d..60da953d6d8d3 100644 --- a/docs/architecture/cloud-native/definition.md +++ b/docs/architecture/cloud-native/definition.md @@ -90,7 +90,7 @@ In the book, [Beyond the Twelve-Factor App](https://content.pivotal.io/blog/bey | :-------- | :-------- | :-------- | | 13 | API First | Make everything a service. Assume your code will be consumed by a front-end client, gateway, or another service. | | 14 | Telemetry | On a workstation, you have deep visibility into your application and its behavior. In the cloud, you don't. Make sure your design includes the collection of monitoring, domain-specific, and health/system data. | -| 15 | Authentication/ Authorization | Implement identity from the start. Consider [RBAC (role-based access control)](https://docs.microsoft.com/azure/role-based-access-control/overview) features available in public clouds. | +| 15 | Authentication/ Authorization | Implement identity from the start. Consider [RBAC (role-based access control)](/azure/role-based-access-control/overview) features available in public clouds.
| We'll refer to many of the 12+ factors in this chapter and throughout the book. @@ -275,17 +275,17 @@ Backing services are discussed in detail in Chapter 5, *Cloud-Native Data Patterns* As you've seen, cloud-native systems embrace microservices, containers, and modern system design to achieve speed and agility. But, that's only part of the story. How do you provision the cloud environments upon which these systems run? How do you rapidly deploy app features and updates? How do you round out the full picture? -Enter the widely accepted practice of [Infrastructure as Code](https://docs.microsoft.com/azure/devops/learn/what-is-infrastructure-as-code), or IaC. +Enter the widely accepted practice of [Infrastructure as Code](/azure/devops/learn/what-is-infrastructure-as-code), or IaC. With IaC, you automate platform provisioning and application deployment. You essentially apply software engineering practices such as testing and versioning to your DevOps practices. Your infrastructure and deployments are automated, consistent, and repeatable. ### Automating infrastructure -Tools like [Azure Resource Manager](https://azure.microsoft.com/documentation/articles/resource-group-overview/), Terraform, and the [Azure CLI](https://docs.microsoft.com/cli/azure/), enable you to declaratively script the cloud infrastructure you require. Resource names, locations, capacities, and secrets are parameterized and dynamic. The script is versioned and checked into source control as an artifact of your project. You invoke the script to provision a consistent and repeatable infrastructure across system environments, such as QA, staging, and production. +Tools like [Azure Resource Manager](/azure/azure-resource-manager/management/overview), Terraform, and the [Azure CLI](/cli/azure/) enable you to declaratively script the cloud infrastructure you require. Resource names, locations, capacities, and secrets are parameterized and dynamic.
The script is versioned and checked into source control as an artifact of your project. You invoke the script to provision a consistent and repeatable infrastructure across system environments, such as QA, staging, and production. Under the hood, IaC is idempotent, meaning that you can run the same script over and over without side effects. If the team needs to make a change, they edit and rerun the script. Only the updated resources are affected. -In the article, [What is Infrastructure as Code](https://docs.microsoft.com/azure/devops/learn/what-is-infrastructure-as-code), Author Sam Guckenheimer describes how, "Teams who implement IaC can deliver stable environments rapidly and at scale. Teams avoid manual configuration of environments and enforce consistency by representing the desired state of their environments via code. Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by configuration drift or missing dependencies. DevOps teams can work together with a unified set of practices and tools to deliver applications and their supporting infrastructure rapidly, reliably, and at scale." +In the article, [What is Infrastructure as Code](/azure/devops/learn/what-is-infrastructure-as-code), Author Sam Guckenheimer describes how, "Teams who implement IaC can deliver stable environments rapidly and at scale. Teams avoid manual configuration of environments and enforce consistency by representing the desired state of their environments via code. Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by configuration drift or missing dependencies. DevOps teams can work together with a unified set of practices and tools to deliver applications and their supporting infrastructure rapidly, reliably, and at scale." 
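The idempotency described above can be illustrated with a tiny sketch. The resource names are hypothetical; real tools such as ARM templates and Terraform diff desired state against actual state in the same spirit:

```python
def apply(desired: dict, current: dict):
    """Idempotent apply: compute only the changes needed to reach the desired state."""
    changes = {name: cfg for name, cfg in desired.items() if current.get(name) != cfg}
    return {**current, **changes}, changes

desired = {"vm": "Standard_D2", "db": "S1"}
state, first_run = apply(desired, {})       # first run provisions everything
state, second_run = apply(desired, state)   # rerunning the same script has no side effects
```

Only drifted or missing resources appear in the change set; an unchanged script applied twice produces an empty second change set.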
### Automating deployments diff --git a/docs/architecture/cloud-native/deploy-containers-azure.md b/docs/architecture/cloud-native/deploy-containers-azure.md index 6066076116599..f990dcff31c52 100644 --- a/docs/architecture/cloud-native/deploy-containers-azure.md +++ b/docs/architecture/cloud-native/deploy-containers-azure.md @@ -18,7 +18,7 @@ When containerizing a microservice, you first a build container "image." The ima Once created, container images are stored in container registries. They enable you to build, store, and manage container images. There are many registries available, both public and private. Azure Container Registry (ACR) is a fully managed container registry service in the Azure cloud. It persists your images inside the Azure network, reducing the time to deploy them to Azure container hosts. You can also secure them using the same security and identity procedures that you use for other Azure resources. -You create an Azure Container Registry using the [Azure portal](https://docs.microsoft.com/azure/container-registry/container-registry-get-started-portal), [Azure CLI](https://docs.microsoft.com/azure/container-registry/container-registry-get-started-azure-cli), or [PowerShell tools](https://docs.microsoft.com/azure/container-registry/container-registry-get-started-powershell). Creating a registry in Azure is simple. It requires an Azure subscription, resource group, and a unique name. Figure 3-11 shows the basic options for creating a registry, which will be hosted at `registryname.azurecr.io`. +You create an Azure Container Registry using the [Azure portal](/azure/container-registry/container-registry-get-started-portal), [Azure CLI](/azure/container-registry/container-registry-get-started-azure-cli), or [PowerShell tools](/azure/container-registry/container-registry-get-started-powershell). Creating a registry in Azure is simple. It requires an Azure subscription, resource group, and a unique name. 
Figure 3-11 shows the basic options for creating a registry, which will be hosted at `registryname.azurecr.io`. ![Create container registry](./media/create-container-registry.png) @@ -52,7 +52,7 @@ As a best practice, developers shouldn't manually push images to a container reg ## ACR Tasks -[ACR Tasks](https://docs.microsoft.com/azure/container-registry/container-registry-tasks-overview) is a set of features available from the Azure Container Registry. It extends your [inner-loop development cycle](https://docs.microsoft.com/dotnet/architecture/containerized-lifecycle/design-develop-containerized-apps/docker-apps-inner-loop-workflow) by building and managing container images in the Azure cloud. Instead of invoking a `docker build` and `docker push` locally on your development machine, they're automatically handled by ACR Tasks in the cloud. +[ACR Tasks](/azure/container-registry/container-registry-tasks-overview) is a set of features available from the Azure Container Registry. It extends your [inner-loop development cycle](../containerized-lifecycle/design-develop-containerized-apps/docker-apps-inner-loop-workflow.md) by building and managing container images in the Azure cloud. Instead of invoking a `docker build` and `docker push` locally on your development machine, they're automatically handled by ACR Tasks in the cloud. The following AZ CLI command both builds a container image and pushes it to ACR: @@ -91,7 +91,7 @@ This information is sufficient to get started. As part of the creation process i - Monitoring - Tags -This [quickstart walks through deploying an AKS cluster using the Azure portal](https://docs.microsoft.com/azure/aks/kubernetes-walkthrough-portal). +This [quickstart walks through deploying an AKS cluster using the Azure portal](/azure/aks/kubernetes-walkthrough-portal). 
## Azure Dev Spaces diff --git a/docs/architecture/cloud-native/deploy-eshoponcontainers-azure.md b/docs/architecture/cloud-native/deploy-eshoponcontainers-azure.md index 815b2d194c2b9..309089fc677df 100644 --- a/docs/architecture/cloud-native/deploy-eshoponcontainers-azure.md +++ b/docs/architecture/cloud-native/deploy-eshoponcontainers-azure.md @@ -73,7 +73,7 @@ At the same time, developer John is customizing the Reservations microservice an ![eShopOnContainers Architecture](./media/azure-devspaces-two.png) **Figure 2-8**. Developer John deploys his own version of the Reservations microservice and tests it without conflicting with other developers. -Using Azure Dev Spaces, teams can work directly with AKS while independently changing, deploying, and testing their changes. This approach reduces the need for separate dedicated hosted environments since every developer effectively has their own AKS environment. Developers can work with Azure Dev Spaces using its CLI or launch their application to Azure Dev Spaces directly from Visual Studio. [Learn more about how Azure Dev Spaces works and is configured.](https://docs.microsoft.com/azure/dev-spaces/how-dev-spaces-works) +Using Azure Dev Spaces, teams can work directly with AKS while independently changing, deploying, and testing their changes. This approach reduces the need for separate dedicated hosted environments since every developer effectively has their own AKS environment. Developers can work with Azure Dev Spaces using its CLI or launch their application to Azure Dev Spaces directly from Visual Studio. 
[Learn more about how Azure Dev Spaces works and is configured.](/azure/dev-spaces/how-dev-spaces-works) ## Azure Functions and Logic Apps (Serverless) diff --git a/docs/architecture/cloud-native/devops.md b/docs/architecture/cloud-native/devops.md index a0908649794e9..9fbf9200b0fd2 100644 --- a/docs/architecture/cloud-native/devops.md +++ b/docs/architecture/cloud-native/devops.md @@ -149,7 +149,7 @@ The stages in the boards aren't the only organizational tool. Depending on the c The description field supports the normal styles you'd expect (bold, italic underscore and strike through) and the ability to insert images. This makes it a powerful tool for use when specifying work or bugs. -Tasks can be rolled up into features, which define a larger unit of work. Features, in turn, can be [rolled up into epics](https://docs.microsoft.com/azure/devops/boards/backlogs/define-features-epics?view=azure-devops). Classifying tasks in this hierarchy makes it much easier to understand how close a large feature is to rolling out. +Tasks can be rolled up into features, which define a larger unit of work. Features, in turn, can be [rolled up into epics](/azure/devops/boards/backlogs/define-features-epics?view=azure-devops). Classifying tasks in this hierarchy makes it much easier to understand how close a large feature is to rolling out. ![Figure 10-6 Work item types configured by default in the Basic process template](./media/board-issue-types.png) diff --git a/docs/architecture/cloud-native/distributed-data.md b/docs/architecture/cloud-native/distributed-data.md index 246fd06d65c4c..56826902b86d4 100644 --- a/docs/architecture/cloud-native/distributed-data.md +++ b/docs/architecture/cloud-native/distributed-data.md @@ -17,7 +17,7 @@ Figure 5-1 contrasts the differences. Experienced developers will easily recognize the architecture on the left-side of figure 5-1. 
In this *monolithic application*, business service components collocate together in a shared services tier, sharing data from a single relational database. -In many ways, a single database keeps data management simple. Querying data across multiple tables is straightforward. Changes to data update together or they all rollback. [ACID transactions](https://docs.microsoft.com/windows/desktop/cossdk/acid-properties) guarantee strong and immediate consistency. +In many ways, a single database keeps data management simple. Querying data across multiple tables is straightforward. Changes to data update together or they all roll back. [ACID transactions](/windows/desktop/cossdk/acid-properties) guarantee strong and immediate consistency. Designing for cloud-native, we take a different approach. On the right-side of Figure 5-1, note how business functionality segregates into small, independent microservices. Each microservice encapsulates a specific business capability and its own data. The monolithic database decomposes into a distributed data model with many smaller databases, each aligning with a microservice. When the smoke clears, we emerge with a design that exposes a *database per microservice*. @@ -63,7 +63,7 @@ One option discussed in Chapter 4 is a [direct HTTP call](service-to-service-com We could also implement a request-reply pattern with separate inbound and outbound queues for each service. However, this pattern is complicated and requires plumbing to correlate request and response messages. While it does decouple the backend microservice calls, the calling service must still synchronously wait for the call to complete. Network congestion, transient faults, or an overloaded microservice can result in long-running and even failed operations. -Instead, a widely accepted pattern for removing cross-service dependencies is the [Materialized View Pattern](https://docs.microsoft.com/azure/architecture/patterns/materialized-view), shown in Figure 5-4.
+Instead, a widely accepted pattern for removing cross-service dependencies is the [Materialized View Pattern](/azure/architecture/patterns/materialized-view), shown in Figure 5-4. ![Materialized view pattern](./media/materialized-view-pattern.png) @@ -87,7 +87,7 @@ In the preceding figure, five independent microservices participate in a distrib Instead, you must construct this distributed transaction *programmatically*. -A popular pattern for adding distributed transactional support is the Saga pattern. It's implemented by grouping local transactions together programmatically and sequentially invoking each one. If any of the local transactions fail, the Saga aborts the operation and invokes a set of [compensating transactions](https://docs.microsoft.com/azure/architecture/patterns/compensating-transaction). The compensating transactions undo the changes made by the preceding local transactions and restore data consistency. Figure 5-6 shows a failed transaction with the Saga pattern. +A popular pattern for adding distributed transactional support is the Saga pattern. It's implemented by grouping local transactions together programmatically and sequentially invoking each one. If any of the local transactions fail, the Saga aborts the operation and invokes a set of [compensating transactions](/azure/architecture/patterns/compensating-transaction). The compensating transactions undo the changes made by the preceding local transactions and restore data consistency. Figure 5-6 shows a failed transaction with the Saga pattern. ![Roll back in saga pattern](./media/saga-rollback-operation.png) @@ -103,7 +103,7 @@ Large cloud-native applications often support high-volume data requirements. In ### CQRS -[CQRS](https://docs.microsoft.com/azure/architecture/patterns/cqrs), is an architectural pattern that can help maximize performance, scalability, and security. The pattern separates operations that read data from those operations that write data. 
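The compensating rollback flow of the Saga pattern can be sketched as a minimal orchestrator. The step names are hypothetical, and this is an illustration only, not a production saga implementation:

```python
def run_saga(steps):
    """Invoke local transactions in order; on failure, run the compensating
    transactions of the already-completed steps in reverse order."""
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            for undo in reversed(completed):
                undo()  # restore data consistency step by step
            return False
    return True
```

A real orchestrator must also persist saga progress so compensation survives a crash, which this sketch omits.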
+[CQRS](/azure/architecture/patterns/cqrs) is an architectural pattern that can help maximize performance, scalability, and security. The pattern separates operations that read data from those operations that write data. For normal scenarios, the same entity model and data repository object are used for *both* read and write operations. @@ -119,11 +119,11 @@ In the previous figure, separate command and query models are implemented. Each This separation enables reads and writes to scale independently. Read operations use a schema optimized for queries, while the writes use a schema optimized for updates. Read queries go against denormalized data, while complex business logic can be applied to the write model. As well, you might impose tighter security on write operations than those exposing reads. -Implementing CQRS can improve application performance for cloud-native services. However, it does result in a more complex design. Apply this principle carefully and strategically to those sections of your cloud-native application that will benefit from it. For more on CQRS, see the Microsoft book [.NET Microservices: Architecture for Containerized .NET Applications](https://docs.microsoft.com/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/apply-simplified-microservice-cqrs-ddd-patterns). +Implementing CQRS can improve application performance for cloud-native services. However, it does result in a more complex design. Apply this principle carefully and strategically to those sections of your cloud-native application that will benefit from it. For more on CQRS, see the Microsoft book [.NET Microservices: Architecture for Containerized .NET Applications](../microservices/microservice-ddd-cqrs-patterns/apply-simplified-microservice-cqrs-ddd-patterns.md). ### Event sourcing -Another approach to optimizing high volume data scenarios involves [Event Sourcing](https://docs.microsoft.com/azure/architecture/patterns/event-sourcing).
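The command/query split can be sketched in a few lines. The order models here are hypothetical, and a real implementation would typically use separate data stores behind ASP.NET Core services:

```python
class OrdersReadModel:
    """Query side: a denormalized, query-optimized view kept in sync by projections."""
    def __init__(self):
        self.by_customer = {}

    def project(self, order_id, customer, total):
        self.by_customer.setdefault(customer, []).append((order_id, total))

    def orders_for(self, customer):
        return self.by_customer.get(customer, [])


class OrdersWriteModel:
    """Command side: business rules are enforced before any state change."""
    def __init__(self, read_model):
        self.orders = {}
        self.read_model = read_model

    def place_order(self, order_id, customer, total):
        if total <= 0:
            raise ValueError("order total must be positive")
        self.orders[order_id] = {"customer": customer, "total": total}
        self.read_model.project(order_id, customer, total)  # refresh the query side
```

Because the read model is just a projection, it can be rebuilt, denormalized differently, or scaled independently of the write store.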
+Another approach to optimizing high volume data scenarios involves [Event Sourcing](/azure/architecture/patterns/event-sourcing). A system typically stores the current state of a data entity. If a user changes their phone number, for example, the customer record is updated with the new number. We always know the current state of a data entity, but each update overwrites the previous state. diff --git a/docs/architecture/cloud-native/elastic-search-in-azure.md b/docs/architecture/cloud-native/elastic-search-in-azure.md index 106f3ada088b2..8c780ad504b9d 100644 --- a/docs/architecture/cloud-native/elastic-search-in-azure.md +++ b/docs/architecture/cloud-native/elastic-search-in-azure.md @@ -33,23 +33,23 @@ This chapter presented a detailed look at data in cloud-native systems. We start ### References -- [Command and Query Responsibility Segregation (CQRS) pattern](https://docs.microsoft.com/azure/architecture/patterns/cqrs) +- [Command and Query Responsibility Segregation (CQRS) pattern](/azure/architecture/patterns/cqrs) -- [Event Sourcing pattern](https://docs.microsoft.com/azure/architecture/patterns/event-sourcing) +- [Event Sourcing pattern](/azure/architecture/patterns/event-sourcing) - [Why isn't RDBMS Partition Tolerant in CAP Theorem and why is it Available?](https://stackoverflow.com/questions/36404765/why-isnt-rdbms-partition-tolerant-in-cap-theorem-and-why-is-it-available) -- [Materialized View](https://docs.microsoft.com/azure/architecture/patterns/materialized-view) +- [Materialized View](/azure/architecture/patterns/materialized-view) - [All you really need to know about open source databases](https://www.ibm.com/blogs/systems/all-you-really-need-to-know-about-open-source-databases/) -- [Compensating Transaction pattern](https://docs.microsoft.com/azure/architecture/patterns/compensating-transaction) +- [Compensating Transaction pattern](/azure/architecture/patterns/compensating-transaction) - [Saga 
Pattern](https://microservices.io/patterns/data/saga.html) - [Saga Patterns | How to implement business transactions using microservices](https://blog.couchbase.com/saga-pattern-implement-business-transactions-using-microservices-part/) -- [Compensating Transaction pattern](https://docs.microsoft.com/azure/architecture/patterns/compensating-transaction) +- [Compensating Transaction pattern](/azure/architecture/patterns/compensating-transaction) - [Getting Behind the 9-Ball: Cosmos DB Consistency Levels Explained](https://blog.jeremylikness.com/blog/2018-03-23_getting-behind-the-9ball-cosmosdb-consistency-levels/) diff --git a/docs/architecture/cloud-native/feature-flags.md b/docs/architecture/cloud-native/feature-flags.md index b607816d7b757..c8dcd78683bf7 100644 --- a/docs/architecture/cloud-native/feature-flags.md +++ b/docs/architecture/cloud-native/feature-flags.md @@ -38,9 +38,9 @@ Note how this approach separates the decision logic from the feature code. In chapter 1, we discussed the `Twelve-Factor App`. The guidance recommended keeping configuration settings external from application executable code. When needed, settings can be read in from the external source. Feature flag configuration values should also be independent from their codebase. By externalizing flag configuration in a separate repository, you can change flag state without modifying and redeploying the application. -[Azure App Configuration](https://docs.microsoft.com/azure/azure-app-configuration/overview) provides a centralized repository for feature flags. With it, you define different kinds of feature flags and manipulate their states quickly and confidently. You add the App Configuration client libraries to your application to enable feature flag functionality. Various programming language frameworks are supported. +[Azure App Configuration](/azure/azure-app-configuration/overview) provides a centralized repository for feature flags. 
With it, you define different kinds of feature flags and manipulate their states quickly and confidently. You add the App Configuration client libraries to your application to enable feature flag functionality. Various programming language frameworks are supported. -Feature flags can be easily implemented in an [ASP.NET Core service](https://docs.microsoft.com/azure/azure-app-configuration/use-feature-flags-dotnet-core). Installing the .NET Feature Management libraries and App Configuration provider enable you to declaratively add feature flags to your code. They enable `FeatureGate` attributes so that you don't have to manually write if statements across your codebase. +Feature flags can be easily implemented in an [ASP.NET Core service](/azure/azure-app-configuration/use-feature-flags-dotnet-core). Installing the .NET Feature Management libraries and App Configuration provider enables you to declaratively add feature flags to your code. They enable `FeatureGate` attributes so that you don't have to manually write `if` statements across your codebase. Once configured in your Startup class, you can add feature flag functionality at the controller, action, or middleware level. Figure 10-12 presents controller and action implementation: diff --git a/docs/architecture/cloud-native/front-end-communication.md index 2aab778c323ca..d9ba9258b1b7f 100644 --- a/docs/architecture/cloud-native/front-end-communication.md +++ b/docs/architecture/cloud-native/front-end-communication.md @@ -36,7 +36,7 @@ In the previous figure, note how the API Gateway service abstracts the back-end The gateway insulates the client from internal service partitioning and refactoring. If you change a back-end service, you accommodate for it in the gateway without breaking the client. It's also your first line of defense for cross-cutting concerns, such as identity, caching, resiliency, metering, and throttling.
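The idea behind a feature gate can be sketched language-agnostically. This is a toy illustration with a hypothetical in-memory flag store, not the .NET Feature Management API:

```python
class FeatureManager:
    """Toy stand-in for an externalized flag store; a real app would read
    flag state from a service such as Azure App Configuration."""
    def __init__(self, flags):
        self.flags = flags

    def is_enabled(self, name):
        return self.flags.get(name, False)


def feature_gate(manager, name, action, fallback=lambda: "404 Not Found"):
    """Keeps the flag check out of the handler itself, in the spirit of a
    FeatureGate attribute on a controller or action."""
    return action() if manager.is_enabled(name) else fallback()
```

Because the flag state lives outside the handler, toggling a feature requires a configuration change rather than a redeployment.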
Many of these cross-cutting concerns can be off-loaded from the back-end core services to the gateway, simplifying the back-end services. -Care must be taken to keep the API Gateway simple and fast. Typically, business logic is kept out of the gateway. A complex gateway risks becoming a bottleneck and eventually a monolith itself. Larger systems often expose multiple API Gateways segmented by client type (mobile, web, desktop) or back-end functionality. The [Backend for Frontends](https://docs.microsoft.com/azure/architecture/patterns/backends-for-frontends) pattern provides direction for implementing multiple gateways. The pattern is shown in Figure 4-4. +Care must be taken to keep the API Gateway simple and fast. Typically, business logic is kept out of the gateway. A complex gateway risks becoming a bottleneck and eventually a monolith itself. Larger systems often expose multiple API Gateways segmented by client type (mobile, web, desktop) or back-end functionality. The [Backend for Frontends](/azure/architecture/patterns/backends-for-frontends) pattern provides direction for implementing multiple gateways. The pattern is shown in Figure 4-4. ![API Gateway Pattern](./media/backend-for-frontend-pattern.png) @@ -70,7 +70,7 @@ Consider Ocelot for simple cloud-native applications that don't require the rich ## Azure Application Gateway -For simple gateway requirements, you may consider [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview). Available as an Azure [PaaS service](https://azure.microsoft.com/overview/what-is-paas/), it includes basic gateway features such as URL routing, SSL termination, and a Web Application Firewall. The service supports [Layer-7 load balancing](https://www.nginx.com/resources/glossary/layer-7-load-balancing/) capabilities. With Layer 7, you can route requests based on the actual content of an HTTP message, not just low-level TCP network packets. 
+For simple gateway requirements, you may consider [Azure Application Gateway](/azure/application-gateway/overview). Available as an Azure [PaaS service](https://azure.microsoft.com/overview/what-is-paas/), it includes basic gateway features such as URL routing, SSL termination, and a Web Application Firewall. The service supports [Layer-7 load balancing](https://www.nginx.com/resources/glossary/layer-7-load-balancing/) capabilities. With Layer 7, you can route requests based on the actual content of an HTTP message, not just low-level TCP network packets. Throughout this book, we evangelize hosting cloud-native systems in [Kubernetes](https://www.infoworld.com/article/3268073/what-is-kubernetes-your-next-application-platform.html). A container orchestrator, Kubernetes automates the deployment, scaling, and operational concerns of containerized workloads. Azure Application Gateway can be configured as an API gateway for [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/) cluster. @@ -94,7 +94,7 @@ To start, API Management exposes a gateway server that allows controlled access For developers, API Management offers a developer portal that provides access to services, documentation, and sample code for invoking them. Developers can use Swagger/Open API to inspect service endpoints and analyze their usage. The service works across the major development platforms: .NET, Java, Golang, and more. -The publisher portal exposes a management dashboard where administrators expose APIs and manage their behavior. Service access can be granted, service health monitored, and service telemetry gathered. Administrators apply *policies* to each endpoint to affect behavior. [Policies](https://docs.microsoft.com/azure/api-management/api-management-howto-policies) are pre-built statements that execute sequentially for each service call. Policies are configured for an inbound call, outbound call, or invoked upon an error. 
Policies can be applied at different service scopes as to enable deterministic ordering when combining policies. The product ships with a large number of prebuilt [policies](https://docs.microsoft.com/azure/api-management/api-management-policies). +The publisher portal exposes a management dashboard where administrators expose APIs and manage their behavior. Service access can be granted, service health monitored, and service telemetry gathered. Administrators apply *policies* to each endpoint to affect behavior. [Policies](/azure/api-management/api-management-howto-policies) are pre-built statements that execute sequentially for each service call. Policies are configured for an inbound call, outbound call, or invoked upon an error. Policies can be applied at different service scopes to enable deterministic ordering when combining policies. The product ships with a large number of prebuilt [policies](/azure/api-management/api-management-policies). Here are examples of how policies can affect the behavior of your cloud-native services: @@ -115,13 +115,13 @@ Azure API Management is available across [four different tiers](https://azure.mi - Developer - Basic - Standard - Premium -The Developer tier is meant for non-production workloads and evaluation. The other tiers offer progressively more power, features, and higher service level agreements (SLAs). The Premium tier provides [Azure Virtual Network](https://docs.microsoft.com/azure/virtual-network/virtual-networks-overview) and [multi-region support](https://docs.microsoft.com/azure/api-management/api-management-howto-deploy-multi-region). All tiers have a fixed price per hour. +The Developer tier is meant for non-production workloads and evaluation. The other tiers offer progressively more power, features, and higher service level agreements (SLAs).
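The sequential policy evaluation described above can be sketched as a simple pipeline. This is conceptual only; API Management policies are declared in XML, and the example policies here are hypothetical:

```python
def run_gateway(request, inbound, backend, outbound):
    """Inbound policies run in order before the backend call;
    outbound policies transform the response on the way out."""
    for policy in inbound:
        request = policy(request)
    response = backend(request)
    for policy in outbound:
        response = policy(response)
    return response

# Hypothetical policies: mark the caller authenticated, then stamp a cache header.
inbound = [lambda req: {**req, "authenticated": True}]
outbound = [lambda resp: {**resp, "cache-control": "max-age=60"}]
backend = lambda req: {"status": 200, "caller-verified": req["authenticated"]}
```

Because each policy receives the output of the one before it, the order in which policies are combined is deterministic.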
The Premium tier provides [Azure Virtual Network](/azure/virtual-network/virtual-networks-overview) and [multi-region support](/azure/api-management/api-management-howto-deploy-multi-region). All tiers have a fixed price per hour. The Azure cloud also offers a [serverless tier](https://azure.microsoft.com/blog/announcing-azure-api-management-for-serverless-architectures/) for Azure API Management. Referred to as the *consumption pricing tier*, the service is a variant of API Management designed around the serverless computing model. Unlike the "pre-allocated" pricing tiers previously shown, the consumption tier provides instant provisioning and pay-per-action pricing. It enables API Gateway features for the following use cases: -- Microservices implemented using serverless technologies such as [Azure Functions](https://docs.microsoft.com/azure/azure-functions/functions-overview) and [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/). +- Microservices implemented using serverless technologies such as [Azure Functions](/azure/azure-functions/functions-overview) and [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/). - Azure backing service resources such as Service Bus queues and topics, Azure storage, and others. - Microservices where traffic has occasional large spikes but remains low the majority of the time. diff --git a/docs/architecture/cloud-native/grpc.md b/docs/architecture/cloud-native/grpc.md index 7f6424419b798..a79ab288530b1 100644 --- a/docs/architecture/cloud-native/grpc.md +++ b/docs/architecture/cloud-native/grpc.md @@ -8,7 +8,7 @@ ms.date: 05/13/2020 # gRPC -So far in this book, we've focused on [REST-based](https://docs.microsoft.com/azure/architecture/best-practices/api-design) communication. We've seen that REST is a flexible architectural style that defines CRUD-based operations against entity resources. Clients interact with resources across HTTP with a request/response communication model. 
While REST is widely implemented, a newer communication technology, gRPC, has gained tremendous momentum across the cloud-native community. +So far in this book, we've focused on [REST-based](/azure/architecture/best-practices/api-design) communication. We've seen that REST is a flexible architectural style that defines CRUD-based operations against entity resources. Clients interact with resources across HTTP with a request/response communication model. While REST is widely implemented, a newer communication technology, gRPC, has gained tremendous momentum across the cloud-native community. ## What is gRPC? @@ -30,7 +30,7 @@ gRPC uses HTTP/2 for its transport protocol. While compatible with HTTP 1.1, HTT - Built-in streaming enabling requests and responses to asynchronously stream large data sets. - Header compression that reduces network usage. -gRPC is lightweight and highly performant. It can be up to 8x faster than JSON serialization with messages 60-80% smaller. In Microsoft [Windows Communication Foundation (WCF)](https://docs.microsoft.com/dotnet/framework/wcf/whats-wcf) parlance, gRPC performance exceeds the speed and efficiency of the highly optimized [NetTCP bindings](https://docs.microsoft.com/dotnet/api/system.servicemodel.nettcpbinding?view=netframework-4.8). Unlike NetTCP, which favors the Microsoft stack, gRPC is cross-platform. +gRPC is lightweight and highly performant. It can be up to 8x faster than JSON serialization with messages 60-80% smaller. In Microsoft [Windows Communication Foundation (WCF)](../../framework/wcf/whats-wcf.md) parlance, gRPC performance exceeds the speed and efficiency of the highly optimized [NetTCP bindings](/dotnet/api/system.servicemodel.nettcpbinding?view=netframework-4.8). Unlike NetTCP, which favors the Microsoft stack, gRPC is cross-platform. 
## Protocol Buffers @@ -44,7 +44,7 @@ Using the proto file, the Protobuf compiler, `protoc`, generates both client and At runtime, each message is serialized as a standard Protobuf representation and exchanged between the client and remote service. Unlike JSON or XML, Protobuf messages are serialized as compiled binary bytes. -The book, [gRPC for WCF Developers](https://docs.microsoft.com/dotnet/architecture/grpc-for-wcf-developers/), available from the Microsoft Architecture site, provides in-depth coverage of gRPC and Protocol Buffers. +The book, [gRPC for WCF Developers](../grpc-for-wcf-developers/index.md), available from the Microsoft Architecture site, provides in-depth coverage of gRPC and Protocol Buffers. ## gRPC support in .NET @@ -92,7 +92,7 @@ The microservice reference architecture, [eShop on Containers](https://github.co **Figure 4-22**. Backend architecture for eShop on Containers -In the previous figure, note how eShop embraces the [Backend for Frontends pattern](https://docs.microsoft.com/azure/architecture/patterns/backends-for-frontends) (BFF) by exposing multiple API gateways. We discussed the BFF pattern earlier in this chapter. Pay close attention to the Aggregator microservice (in gray) that sits between the Web-Shopping API Gateway and backend Shopping microservices. The Aggregator receives a single request from a client, dispatches it to various microservices, aggregates the results, and sends them back to the requesting client. Such operations typically require synchronous communication as to produce an immediate response. In eShop, backend calls from the Aggregator are performed using gRPC as shown in Figure 4-23. +In the previous figure, note how eShop embraces the [Backend for Frontends pattern](/azure/architecture/patterns/backends-for-frontends) (BFF) by exposing multiple API gateways. We discussed the BFF pattern earlier in this chapter. 
Pay close attention to the Aggregator microservice (in gray) that sits between the Web-Shopping API Gateway and backend Shopping microservices. The Aggregator receives a single request from a client, dispatches it to various microservices, aggregates the results, and sends them back to the requesting client. Such operations typically require synchronous communication so as to produce an immediate response. In eShop, backend calls from the Aggregator are performed using gRPC as shown in Figure 4-23. ![gRPC in eShop on Containers](./media/grpc-implementation.png) diff --git a/docs/architecture/cloud-native/identity-server.md b/docs/architecture/cloud-native/identity-server.md index 43ef5b902aeae..9b60f79a93a11 100644 --- a/docs/architecture/cloud-native/identity-server.md +++ b/docs/architecture/cloud-native/identity-server.md @@ -95,7 +95,7 @@ Many cloud-native applications leverage server-side APIs and rich client single ## References - [IdentityServer documentation](https://docs.identityserver.io/en/latest/) -- [Application types](https://docs.microsoft.com/azure/active-directory/develop/app-types) +- [Application types](/azure/active-directory/develop/app-types) - [JavaScript OIDC client](https://docs.identityserver.io/en/latest/quickstarts/4_javascript_client.html) >[!div class="step-by-step"] diff --git a/docs/architecture/cloud-native/identity.md b/docs/architecture/cloud-native/identity.md index 8becfe77ceaef..3a5d2f09fd2f2 100644 --- a/docs/architecture/cloud-native/identity.md +++ b/docs/architecture/cloud-native/identity.md @@ -22,7 +22,7 @@ Typically, the STS is only responsible for authenticating the principal.
Determi ## References -- [Microsoft identity platform](https://docs.microsoft.com/azure/active-directory/develop/) +- [Microsoft identity platform](/azure/active-directory/develop/) >[!div class="step-by-step"] >[Previous](azure-monitor.md) diff --git a/docs/architecture/cloud-native/infrastructure-as-code.md b/docs/architecture/cloud-native/infrastructure-as-code.md index 8bd773e136821..d147994663162 100644 --- a/docs/architecture/cloud-native/infrastructure-as-code.md +++ b/docs/architecture/cloud-native/infrastructure-as-code.md @@ -8,13 +8,13 @@ ms.date: 05/13/2020 Cloud-native systems embrace microservices, containers, and modern system design to achieve speed and agility. They provide automated build and release stages to ensure consistent and quality code. But, that's only part of the story. How do you provision the cloud environments upon which these systems run? -Modern cloud-native applications embrace the widely accepted practice of [Infrastructure as Code](https://docs.microsoft.com/azure/devops/learn/what-is-infrastructure-as-code), or `IaC`. With IaC, you automate platform provisioning. You essentially apply software engineering practices such as testing and versioning to your DevOps practices. Your infrastructure and deployments are automated, consistent, and repeatable. Just as continuous delivery automated the traditional model of manual deployments, Infrastructure as Code (IaC) is evolving how application environments are managed. +Modern cloud-native applications embrace the widely accepted practice of [Infrastructure as Code](/azure/devops/learn/what-is-infrastructure-as-code), or `IaC`. With IaC, you automate platform provisioning. You essentially apply software engineering practices such as testing and versioning to your DevOps practices. Your infrastructure and deployments are automated, consistent, and repeatable. 
Just as continuous delivery automated the traditional model of manual deployments, Infrastructure as Code (IaC) is evolving how application environments are managed. Tools like Azure Resource Manager (ARM), Terraform, and the Azure Command Line Interface (CLI) enable you to declaratively script the cloud infrastructure you require. ## Azure Resource Manager templates -ARM stands for [Azure Resource Manager](https://azure.microsoft.com/documentation/articles/resource-group-overview/). It's an API provisioning engine that is built into Azure and exposed as an API service. ARM enables you to deploy, update, delete, and manage the resources contained in Azure resource group in a single, coordinated operation. You provide the engine with a JSON-based template that specifies the resources you require and their configuration. ARM automatically orchestrates the deployment in the correct order respecting dependencies. The engine ensures idempotency. If a desired resource already exists with the same configuration, provisioning will be ignored. +ARM stands for [Azure Resource Manager](/azure/azure-resource-manager/management/overview). It's an API provisioning engine that is built into Azure and exposed as an API service. ARM enables you to deploy, update, delete, and manage the resources contained in an Azure resource group in a single, coordinated operation. You provide the engine with a JSON-based template that specifies the resources you require and their configuration. ARM automatically orchestrates the deployment in the correct order, respecting dependencies. The engine ensures idempotency. If a desired resource already exists with the same configuration, provisioning will be skipped. Azure Resource Manager templates are a JSON-based language for defining various resources in Azure. The basic schema looks something like Figure 10-14.
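As a sketch of that basic schema, every ARM template shares the same JSON skeleton; the sections are left empty here to show the shape only, and real templates fill in parameters and resources:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
```

The `resources` array is where each Azure resource and its configuration is declared; ARM reads the whole document and works out the deployment order from the dependencies between entries.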
@@ -97,7 +97,7 @@ Sometimes Terraform and ARM templates output meaningful values, such as a connec ## Azure CLI Scripts and Tasks -Finally, you can leverage [Azure CLI](https://docs.microsoft.com/cli/azure/) to declaratively script your cloud infrastructure. Azure CLI scripts can be created, found, and shared to provision and configure almost any Azure resource. The CLI is simple to use with a gentle learning curve. Scripts are executed within either PowerShell or Bash. They're also straightforward to debug, especially when compared with ARM templates. +Finally, you can leverage [Azure CLI](/cli/azure/) to declaratively script your cloud infrastructure. Azure CLI scripts can be created, found, and shared to provision and configure almost any Azure resource. The CLI is simple to use with a gentle learning curve. Scripts are executed within either PowerShell or Bash. They're also straightforward to debug, especially when compared with ARM templates. Azure CLI scripts work well when you need to tear down and redeploy your infrastructure. Updating an existing environment can be tricky. Many CLI commands aren't idempotent. That means they'll recreate the resource each time they're run, even if the resource already exists. It's always possible to add code that checks for the existence of each resource before creating it. But doing so can make your script bloated and difficult to manage. @@ -119,7 +119,7 @@ Figure 10-17 shows a YAML snippet that lists the version of Azure CLI and the de **Figure 10-17** - Azure CLI script -In the article, [What is Infrastructure as Code](https://docs.microsoft.com/azure/devops/learn/what-is-infrastructure-as-code), Author Sam Guckenheimer describes how, "Teams who implement IaC can deliver stable environments rapidly and at scale. Teams avoid manual configuration of environments and enforce consistency by representing the desired state of their environments via code.
Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by configuration drift or missing dependencies. DevOps teams can work together with a unified set of practices and tools to deliver applications and their supporting infrastructure rapidly, reliably, and at scale." +In the article, [What is Infrastructure as Code](/azure/devops/learn/what-is-infrastructure-as-code), author Sam Guckenheimer writes: "Teams who implement IaC can deliver stable environments rapidly and at scale. Teams avoid manual configuration of environments and enforce consistency by representing the desired state of their environments via code. Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by configuration drift or missing dependencies. DevOps teams can work together with a unified set of practices and tools to deliver applications and their supporting infrastructure rapidly, reliably, and at scale." >[!div class="step-by-step"] >[Previous](feature-flags.md) diff --git a/docs/architecture/cloud-native/infrastructure-resiliency-azure.md b/docs/architecture/cloud-native/infrastructure-resiliency-azure.md index e4225d78f4c9c..bbfa2ed749dc5 100644 --- a/docs/architecture/cloud-native/infrastructure-resiliency-azure.md +++ b/docs/architecture/cloud-native/infrastructure-resiliency-azure.md @@ -35,7 +35,7 @@ your Azure VMs. Failures vary in scope of impact. A hardware failure, such as a failed disk, can affect a single node in a cluster. A failed network switch could affect an entire server rack. Less common failures, such as loss of power, could disrupt a whole datacenter. Rarely, an entire region becomes unavailable. -[Redundancy](https://docs.microsoft.com/azure/architecture/guide/design-principles/redundancy) is one way to provide application resilience. The exact level of redundancy needed depends upon your business requirements and will affect both the cost and complexity of your system.
For example, a multi-region deployment is more expensive and more complex to manage than a single-region deployment. You'll need operational procedures to manage failover and failback. The additional cost and complexity might be justified for some business scenarios, but not others. +[Redundancy](/azure/architecture/guide/design-principles/redundancy) is one way to provide application resilience. The exact level of redundancy needed depends upon your business requirements and will affect both the cost and complexity of your system. For example, a multi-region deployment is more expensive and more complex to manage than a single-region deployment. You'll need operational procedures to manage failover and failback. The additional cost and complexity might be justified for some business scenarios, but not others. To architect redundancy, you need to identify the critical paths in your application, and then determine whether there's redundancy at each point in the path. If a subsystem should fail, will the application fail over to something else? Finally, you need a clear understanding of those features built into the Azure cloud platform that you can leverage to meet your redundancy requirements. Here are recommendations for architecting redundancy: @@ -45,13 +45,13 @@ To architect redundancy, you need to identify the critical paths in your applica - *Plan for multiregion deployment.* If you deploy your application to a single region, and that region becomes unavailable, your application will also become unavailable. This may be unacceptable under the terms of your application's service level agreements. Instead, consider deploying your application and its services across multiple regions. For example, an Azure Kubernetes Service (AKS) cluster is deployed to a single region.
To protect your system from a regional failure, you might deploy your application to multiple AKS clusters across different regions and use the [Paired Regions](https://buildazure.com/2017/01/06/azure-region-pairs-explained/) feature to coordinate platform updates and prioritize recovery efforts. -- *Enable [geo-replication](https://docs.microsoft.com/azure/sql-database/sql-database-active-geo-replication).* Geo-replication for services such as Azure SQL Database and Cosmos DB will create secondary replicas of your data across multiple regions. While both services will automatically replicate data within the same region, geo-replication protects you against a regional outage by enabling you to fail over to a secondary region. Another best practice for geo-replication centers around storing container images. To deploy a service in AKS, you need to store and pull the image from a repository. Azure Container Registry integrates with AKS and can securely store container images. To improve performance and availability, consider geo-replicating your images to a registry in each region where you have an AKS cluster. Each AKS cluster then pulls container images from the local container registry in its region as shown in Figure 6-4: +- *Enable [geo-replication](/azure/sql-database/sql-database-active-geo-replication).* Geo-replication for services such as Azure SQL Database and Cosmos DB will create secondary replicas of your data across multiple regions. While both services will automatically replicate data within the same region, geo-replication protects you against a regional outage by enabling you to fail over to a secondary region. Another best practice for geo-replication centers around storing container images. To deploy a service in AKS, you need to store and pull the image from a repository. Azure Container Registry integrates with AKS and can securely store container images. 
To improve performance and availability, consider geo-replicating your images to a registry in each region where you have an AKS cluster. Each AKS cluster then pulls container images from the local container registry in its region as shown in Figure 6-4: ![Replicated resources across regions](./media/replicated-resources.png) **Figure 6-4**. Replicated resources across regions -- *Implement a DNS traffic load balancer.* [Azure Traffic Manager](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-overview) provides high-availability for critical applications by load-balancing at the DNS level. It can route traffic to different regions based on geography, cluster response time, and even application endpoint health. For example, Azure Traffic Manager can direct customers to the closest AKS cluster and application instance. If you have multiple AKS clusters in different regions, use Traffic Manager to control how traffic flows to the applications that run in each cluster. Figure 6-5 shows this scenario. +- *Implement a DNS traffic load balancer.* [Azure Traffic Manager](/azure/traffic-manager/traffic-manager-overview) provides high-availability for critical applications by load-balancing at the DNS level. It can route traffic to different regions based on geography, cluster response time, and even application endpoint health. For example, Azure Traffic Manager can direct customers to the closest AKS cluster and application instance. If you have multiple AKS clusters in different regions, use Traffic Manager to control how traffic flows to the applications that run in each cluster. Figure 6-5 shows this scenario. ![AKS and Azure Traffic Manager](./media/aks-traffic-manager.png) @@ -75,7 +75,7 @@ The cloud thrives on scaling. The ability to increase/decrease system resources - *Avoid affinity.* A best practice is to ensure a node doesn't require local affinity, often referred to as a *sticky session*. A request should be able to route to any instance. 
If you need to persist state, it should be saved to a distributed cache, such as [Azure Redis cache](https://azure.microsoft.com/services/cache/). -- *Take advantage of platform autoscaling features.* Use built-in autoscaling features whenever possible, rather than custom or third-party mechanisms. Where possible, use scheduled scaling rules to ensure that resources are available without a startup delay, but add reactive autoscaling to the rules as appropriate, to cope with unexpected changes in demand. For more information, see [Autoscaling guidance](https://docs.microsoft.com/azure/architecture/best-practices/auto-scaling). +- *Take advantage of platform autoscaling features.* Use built-in autoscaling features whenever possible, rather than custom or third-party mechanisms. Where possible, use scheduled scaling rules to ensure that resources are available without a startup delay, but add reactive autoscaling to the rules as appropriate, to cope with unexpected changes in demand. For more information, see [Autoscaling guidance](/azure/architecture/best-practices/auto-scaling). - *Scale out aggressively.* A final practice would be to scale out aggressively so that you can quickly meet immediate spikes in traffic without losing business. And then scale in (that is, remove unneeded instances) conservatively to keep the system stable. A simple way to implement this is to set the cool down period, which is the time to wait between scaling operations, to five minutes for adding resources and up to 15 minutes for removing instances. @@ -89,7 +89,7 @@ We encouraged the best practice of implementing programmatic retry operations in - *Azure Service Bus.* The Service Bus client exposes a [RetryPolicy class](xref:Microsoft.ServiceBus.RetryPolicy) that can be configured with a back-off interval, retry count, and an operation time-out, which specifies the maximum time an operation can take. The default policy is nine maximum retry attempts with a 30-second backoff period between attempts.
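The retry-with-back-off behavior these client libraries implement can be sketched generically in Python. This is an illustrative sketch only, not the actual Azure client code; `TransientError` and `retry_with_backoff` are hypothetical names, and the tiny `base_delay` keeps the demo fast:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a transient fault such as a dropped connection."""

def retry_with_backoff(operation, max_attempts=9, base_delay=0.01, max_delay=30.0):
    # Retry transient failures with capped exponential back-off plus jitter.
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # Out of attempts: surface the failure to the caller.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay + random.uniform(0, delay / 10))

# A flaky operation that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError()
    return "ok"

print(retry_with_backoff(flaky))  # prints "ok" after two retried failures
```

The real policies add the same knobs shown here (attempt count, back-off interval, operation time-out), so tuning them is a configuration exercise rather than custom retry code.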
-- *Azure SQL Database.* Retry support is provided when using the [Entity Framework Core](https://docs.microsoft.com/ef/core/miscellaneous/connection-resiliency) library. +- *Azure SQL Database.* Retry support is provided when using the [Entity Framework Core](/ef/core/miscellaneous/connection-resiliency) library. - *Azure Storage.* The storage client library supports retry operations. The strategies vary across Azure storage tables, blobs, and queues. As well, alternate retries switch between primary and secondary storage service locations when the geo-redundancy feature is enabled. diff --git a/docs/architecture/cloud-native/leverage-containers-orchestrators.md b/docs/architecture/cloud-native/leverage-containers-orchestrators.md index 7de2b3af28a60..0638eb4a8acc6 100644 --- a/docs/architecture/cloud-native/leverage-containers-orchestrators.md +++ b/docs/architecture/cloud-native/leverage-containers-orchestrators.md @@ -209,7 +209,7 @@ The default behavior when the app runs is configured to use Docker as well. Figu **Figure 3-7**. Visual Studio Docker Run Options -In addition to local development, [Azure Dev Spaces](https://docs.microsoft.com/azure/dev-spaces/) provides a convenient way for multiple developers to work with their own Kubernetes configurations within Azure. As you can see in Figure 3-7, you can also run the application in Azure Dev Spaces. +In addition to local development, [Azure Dev Spaces](/azure/dev-spaces/) provides a convenient way for multiple developers to work with their own Kubernetes configurations within Azure. As you can see in Figure 3-7, you can also run the application in Azure Dev Spaces. Also, at any time you can add Docker support to an existing ASP.NET Core application. From the Visual Studio Solution Explorer, right-click on the project and **Add** > **Docker Support**, as shown in Figure 3-8.
diff --git a/docs/architecture/cloud-native/leverage-serverless-functions.md b/docs/architecture/cloud-native/leverage-serverless-functions.md index 3eb0311f9f2e4..99e865f10b8f2 100644 --- a/docs/architecture/cloud-native/leverage-serverless-functions.md +++ b/docs/architecture/cloud-native/leverage-serverless-functions.md @@ -35,7 +35,7 @@ Microservices are typically constructed to respond to requests, often from an in Serverless exposes individual short-running functions that are invoked in response to a trigger. This makes them ideal for processing background tasks. -An application might need to send an email as a step in a workflow. Instead of sending the notification as part of a microservice request, place the message details onto a queue. An Azure Function can dequeue the message and asynchronously send the email. Doing so could improve the performance and scalability of the microservice. [Queue-based load leveling](https://docs.microsoft.com/azure/architecture/patterns/queue-based-load-leveling) can be implemented to avoid bottlenecks related to sending the emails. Additionally, this stand-alone service could be reused as a utility across many different applications. +An application might need to send an email as a step in a workflow. Instead of sending the notification as part of a microservice request, place the message details onto a queue. An Azure Function can dequeue the message and asynchronously send the email. Doing so could improve the performance and scalability of the microservice. [Queue-based load leveling](/azure/architecture/patterns/queue-based-load-leveling) can be implemented to avoid bottlenecks related to sending the emails. Additionally, this stand-alone service could be reused as a utility across many different applications. Asynchronous messaging from queues and topics is a common pattern to trigger serverless functions. However, Azure Functions can be triggered by other events, such as changes to Azure Blob Storage. 
A service that supports image uploads could have an Azure Function responsible for optimizing the image size. The function could be triggered directly by inserts into Azure Blob Storage, keeping complexity out of the microservice operations. @@ -50,9 +50,9 @@ Figure 3-10 shows a cold-start pattern. Note the extra steps required when the a ![Cold versus warm start](./media/cold-start-warm-start.png) **Figure 3-10**. Cold start versus warm start. -To avoid cold starts entirely, you might switch from a [consumption plan to a dedicated plan](https://azure.microsoft.com/blog/understanding-serverless-cold-start/). You can also configure one or more [pre-warmed instances](https://docs.microsoft.com/azure/azure-functions/functions-premium-plan#pre-warmed-instances) with the premium plan upgrade. In these cases, when you need to add another instance, it's already up and ready to go. These options can help mitigate the cold start issue associated with serverless computing. +To avoid cold starts entirely, you might switch from a [consumption plan to a dedicated plan](https://azure.microsoft.com/blog/understanding-serverless-cold-start/). You can also configure one or more [pre-warmed instances](/azure/azure-functions/functions-premium-plan#pre-warmed-instances) with the premium plan upgrade. In these cases, when you need to add another instance, it's already up and ready to go. These options can help mitigate the cold start issue associated with serverless computing. -Cloud providers bill for serverless based on compute execution time and consumed memory. Long running operations or high memory consumption workloads aren't always the best candidates for serverless. Serverless functions favor small chunks of work that can complete quickly. Most serverless platforms require individual functions to complete within a few minutes. Azure Functions defaults to a 5-minute time-out duration, which can be configured up to 10 minutes. 
The Azure Functions premium plan can mitigate this issue as well, defaulting time-outs to 30 minutes with an unbounded higher limit that can be configured. Compute time isn't calendar time. More advanced functions using the [Azure Durable Functions framework](https://docs.microsoft.com/azure/azure-functions/durable/durable-functions-overview?tabs=csharp) may pause execution over a course of several days. The billing is based on actual execution time - when the function wakes up and resumes processing. +Cloud providers bill for serverless based on compute execution time and consumed memory. Long running operations or high memory consumption workloads aren't always the best candidates for serverless. Serverless functions favor small chunks of work that can complete quickly. Most serverless platforms require individual functions to complete within a few minutes. Azure Functions defaults to a 5-minute time-out duration, which can be configured up to 10 minutes. The Azure Functions premium plan can mitigate this issue as well, defaulting time-outs to 30 minutes with an unbounded higher limit that can be configured. Compute time isn't calendar time. More advanced functions using the [Azure Durable Functions framework](/azure/azure-functions/durable/durable-functions-overview?tabs=csharp) may pause execution over a course of several days. The billing is based on actual execution time - when the function wakes up and resumes processing. Finally, leveraging Azure Functions for application tasks adds complexity. It's wise to first architect your application with a modular, loosely coupled design. Then, identify if there are benefits serverless would offer that justify the additional complexity. 
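The queue-based load-leveling idea behind the email example earlier can be sketched with a plain in-process queue. This is illustrative only; a production system would use an Azure Storage queue or Service Bus topic with an Azure Function as the consumer, and the `send_email` helper here is a hypothetical stand-in:

```python
import queue
import threading

def send_email(message: str) -> str:
    # Placeholder for the real email-sending work.
    return f"sent: {message}"

def worker(q: "queue.Queue", results: list) -> None:
    # Drain the queue at the worker's own pace, leveling bursts of load.
    while True:
        msg = q.get()
        if msg is None:  # Sentinel value: no more work.
            q.task_done()
            break
        results.append(send_email(msg))
        q.task_done()

# The microservice enqueues notifications instead of sending them inline,
# so a traffic spike fills the queue rather than overwhelming the sender.
work = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(work, results))
t.start()
for msg in ["order-confirmed", "shipping-update"]:
    work.put(msg)
work.put(None)
t.join()
print(results)
```

The queue decouples the request path from the background work, which is exactly the property that lets a serverless consumer scale independently of the microservice that produces the messages.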
diff --git a/docs/architecture/cloud-native/logging-with-elastic-stack.md b/docs/architecture/cloud-native/logging-with-elastic-stack.md index 77946169664d9..0b8ccd05705af 100644 --- a/docs/architecture/cloud-native/logging-with-elastic-stack.md +++ b/docs/architecture/cloud-native/logging-with-elastic-stack.md @@ -100,7 +100,7 @@ The final component of the stack is Kibana. This tool is used to provide interac ## Installing Elastic Stack on Azure -The Elastic stack can be installed on Azure in a number of ways. As always, it's possible to [provision virtual machines and install Elastic Stack on them directly](https://docs.microsoft.com/azure/virtual-machines/linux/tutorial-elasticsearch). This option is preferred by some experienced users as it offers the highest degree of customizability. Deploying on infrastructure as a service introduces significant management overhead forcing those who take that path to take ownership of all the tasks associated with infrastructure as a service such as securing the machines and keeping up-to-date with patches. +The Elastic stack can be installed on Azure in a number of ways. As always, it's possible to [provision virtual machines and install Elastic Stack on them directly](/azure/virtual-machines/linux/tutorial-elasticsearch). This option is preferred by some experienced users as it offers the highest degree of customizability. Deploying on infrastructure as a service introduces significant management overhead, forcing those who take that path to take ownership of all the tasks associated with infrastructure as a service, such as securing the machines and keeping them up-to-date with patches. An option with less overhead is to make use of one of the many Docker containers on which the Elastic Stack has already been configured. These containers can be dropped into an existing Kubernetes cluster and run alongside application code.
The [sebp/elk](https://elk-docker.readthedocs.io/) container is a well-documented and tested Elastic Stack container. @@ -108,7 +108,7 @@ Another option is a [recently announced ELK-as-a-service offering](https://devop ## References -- [Install Elastic Stack on Azure](https://docs.microsoft.com/azure/virtual-machines/linux/tutorial-elasticsearch) +- [Install Elastic Stack on Azure](/azure/virtual-machines/linux/tutorial-elasticsearch) >[!div class="step-by-step"] >[Previous](observability-patterns.md) diff --git a/docs/architecture/cloud-native/map-eshoponcontainers-azure-services.md b/docs/architecture/cloud-native/map-eshoponcontainers-azure-services.md index 15531a99742a4..4005727ecb471 100644 --- a/docs/architecture/cloud-native/map-eshoponcontainers-azure-services.md +++ b/docs/architecture/cloud-native/map-eshoponcontainers-azure-services.md @@ -33,7 +33,7 @@ The Azure portal is where you define the API schema and package different APIs i The developer portal serves as the main resource for developers. It provides developers with API documentation, an interactive test console, and reports on their own usage. Developers also use the portal to create and manage their own accounts, including subscription and API key support. -Using APIM, applications can expose several different groups of services, each providing a back end for a particular front-end client. APIM is recommended for complex scenarios. For simpler needs, the lightweight API Gateway Ocelot can be used. The eShopOnContainers app uses Ocelot because of its simplicity and because it can be deployed into the same application environment as the application itself. 
[Learn more about eShopOnContainers, APIM, and Ocelot.](https://docs.microsoft.com/dotnet/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern#azure-api-management) +Using APIM, applications can expose several different groups of services, each providing a back end for a particular front-end client. APIM is recommended for complex scenarios. For simpler needs, the lightweight API Gateway Ocelot can be used. The eShopOnContainers app uses Ocelot because of its simplicity and because it can be deployed into the same application environment as the application itself. [Learn more about eShopOnContainers, APIM, and Ocelot.](../microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md#azure-api-management) Another option if your application is using AKS is to deploy the Azure Gateway Ingress Controller as a pod within your AKS cluster. This allows your cluster to integrate with an Azure Application Gateway, allowing the gateway to load-balance traffic to the AKS pods. [Learn more about the Azure Gateway Ingress Controller for AKS](https://github.com/Azure/application-gateway-kubernetes-ingress). @@ -45,13 +45,13 @@ For SQL Server database support, Azure has products for everything from single d The eShopOnContainers application stores the user's current shopping basket between requests. This is managed by the Basket microservice that stores the data in a Redis cache. In development, this cache can be deployed in a container, while in production it can utilize Azure Cache for Redis. Azure Cache for Redis is a fully managed service offering high performance and reliability without the need to deploy and manage Redis instances or containers on your own. -The Locations microservice uses a MongoDB NoSQL database for its persistence. 
During development, the database can be deployed in its own container, while in production the service can leverage [Azure Cosmos DB's API for MongoDB](https://docs.microsoft.com/azure/cosmos-db/mongodb-introduction). One of the benefits of Azure Cosmos DB is its ability to leverage multiple different communication protocols, including a SQL API and common NoSQL APIs including MongoDB, Cassandra, Gremlin, and Azure Table Storage. Azure Cosmos DB offers a fully managed and globally distributed database as a service that can scale to meet the needs of the services that use it. +The Locations microservice uses a MongoDB NoSQL database for its persistence. During development, the database can be deployed in its own container, while in production the service can leverage [Azure Cosmos DB's API for MongoDB](/azure/cosmos-db/mongodb-introduction). One of the benefits of Azure Cosmos DB is its ability to leverage multiple different communication protocols, including a SQL API and common NoSQL APIs including MongoDB, Cassandra, Gremlin, and Azure Table Storage. Azure Cosmos DB offers a fully managed and globally distributed database as a service that can scale to meet the needs of the services that use it. Distributed data in cloud-native applications is covered in more detail in [chapter 5](distributed-data.md). ## Event Bus -The application uses events to communicate changes between different services. This functionality can be implemented with a variety of implementations, and locally the eShopOnContainers application uses [RabbitMQ](https://www.rabbitmq.com/). When hosted in Azure, the application would leverage [Azure Service Bus](https://docs.microsoft.com/azure/service-bus/) for its messaging. Azure Service Bus is a fully managed integration message broker that allows applications and services to communicate with one another in a decoupled, reliable, asynchronous manner. 
Azure Service Bus supports individual queues as well as separate *topics* to support publisher-subscriber scenarios. The eShopOnContainers application would leverage topics with Azure Service Bus to support distributing messages from one microservice to any other microservice that needed to react to a given message. +The application uses events to communicate changes between different services. This functionality can be implemented in a variety of ways, and locally the eShopOnContainers application uses [RabbitMQ](https://www.rabbitmq.com/). When hosted in Azure, the application would leverage [Azure Service Bus](/azure/service-bus/) for its messaging. Azure Service Bus is a fully managed integration message broker that allows applications and services to communicate with one another in a decoupled, reliable, asynchronous manner. Azure Service Bus supports individual queues as well as separate *topics* to support publisher-subscriber scenarios. The eShopOnContainers application would leverage topics with Azure Service Bus to support distributing messages from one microservice to any other microservice that needed to react to a given message. ## Resiliency diff --git a/docs/architecture/cloud-native/monitoring-azure-kubernetes.md index a64b7fc0447b6..93d6f6792f96f 100644 --- a/docs/architecture/cloud-native/monitoring-azure-kubernetes.md +++ b/docs/architecture/cloud-native/monitoring-azure-kubernetes.md @@ -10,16 +10,16 @@ The built-in logging in Kubernetes is primitive. However, there are some great o ## Azure Monitor for Containers -[Azure Monitor for Containers](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-overview) supports consuming logs from not just Kubernetes but also from other orchestration engines such as DC/OS, Docker Swarm, and Red Hat OpenShift.
+[Azure Monitor for Containers](/azure/azure-monitor/insights/container-insights-overview) supports consuming logs from not just Kubernetes but also from other orchestration engines such as DC/OS, Docker Swarm, and Red Hat OpenShift. ![Consuming logs from various containers](./media/containers-diagram.png) **Figure 7-10**. Consuming logs from various containers -[Prometheus](https://prometheus.io/) is a popular open source metric monitoring solution. It is part of the Cloud Native Compute Foundation. Typically, using Prometheus requires managing a Prometheus server with its own store. However, [Azure Monitor for Containers provides direct integration with Prometheus metrics endpoints](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-prometheus-integration), so a separate server is not required. +[Prometheus](https://prometheus.io/) is a popular open source metric monitoring solution. It is part of the Cloud Native Compute Foundation. Typically, using Prometheus requires managing a Prometheus server with its own store. However, [Azure Monitor for Containers provides direct integration with Prometheus metrics endpoints](/azure/azure-monitor/insights/container-insights-prometheus-integration), so a separate server is not required. Log and metric information is gathered not just from the containers running in the cluster but also from the cluster hosts themselves. It allows correlating log information from the two making it much easier to track down an error. -Installing the log collectors differs on [Windows](https://docs.microsoft.com/azure/azure-monitor/insights/containers#configure-a-log-analytics-windows-agent-for-kubernetes) and [Linux](https://docs.microsoft.com/azure/azure-monitor/insights/containers#configure-a-log-analytics-linux-agent-for-kubernetes) clusters. 
But in both cases the log collection is implemented as a Kubernetes [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), meaning that the log collector is run as a container on each of the nodes. +Installing the log collectors differs on [Windows](/azure/azure-monitor/insights/containers#configure-a-log-analytics-windows-agent-for-kubernetes) and [Linux](/azure/azure-monitor/insights/containers#configure-a-log-analytics-linux-agent-for-kubernetes) clusters. But in both cases the log collection is implemented as a Kubernetes [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), meaning that the log collector is run as a container on each of the nodes. No matter which orchestrator or operating system is running the Azure Monitor daemon, the log information is forwarded to the same Azure Monitor tools with which users are familiar. This ensures a parallel experience in environments that mix different log sources such as a hybrid Kubernetes/Azure Functions environment. diff --git a/docs/architecture/cloud-native/other-deployment-options.md b/docs/architecture/cloud-native/other-deployment-options.md index 899aa20e67c61..3fce56059e1ae 100644 --- a/docs/architecture/cloud-native/other-deployment-options.md +++ b/docs/architecture/cloud-native/other-deployment-options.md @@ -22,13 +22,13 @@ To deploy to [Azure App Service for Containers](https://azure.microsoft.com/serv ## How to deploy an app to Azure Container Instances -To deploy to [Azure Container Instances (ACI)](https://docs.microsoft.com/azure/container-instances/), you need an Azure Container Registry (ACR) and credentials for accessing it. Once you push your container image to the repository, it's available to pull into ACI. You can work with ACI using the Azure portal or command-line interface. ACR provides tight integration with ACI. Figure 3-14 shows how to push an individual container image to ACR. 
+To deploy to [Azure Container Instances (ACI)](/azure/container-instances/), you need an Azure Container Registry (ACR) and credentials for accessing it. Once you push your container image to the repository, it's available to pull into ACI. You can work with ACI using the Azure portal or command-line interface. ACR provides tight integration with ACI. Figure 3-14 shows how to push an individual container image to ACR. ![Azure Container Registry Run Instance](./media/acr-runinstance-contextmenu.png) **Figure 3-14**. Azure Container Registry Run Instance -Creating an instance in ACI can be done quickly. Specify the image registry, Azure resource group information, the amount of memory to allocate, and the port on which to listen. This [quickstart shows how to deploy a container instance to ACI using the Azure portal](https://docs.microsoft.com/azure/container-instances/container-instances-quickstart-portal). +Creating an instance in ACI can be done quickly. Specify the image registry, Azure resource group information, the amount of memory to allocate, and the port on which to listen. This [quickstart shows how to deploy a container instance to ACI using the Azure portal](/azure/container-instances/container-instances-quickstart-portal). Once the deployment completes, find the newly deployed container's IP address and communicate with it over the port you specified. 
@@ -39,22 +39,22 @@ Azure Container Instances offers the fastest way to run simple container workloa - [What is Kubernetes?](https://blog.newrelic.com/engineering/what-is-kubernetes/) - [Installing Kubernetes with Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/) - [MiniKube vs Docker Desktop](https://medium.com/containers-101/local-kubernetes-for-windows-minikube-vs-docker-desktop-25a1c6d3b766) -- [Visual Studio Tools for Docker](https://docs.microsoft.com/dotnet/standard/containerized-lifecycle-architecture/design-develop-containerized-apps/visual-studio-tools-for-docker) +- [Visual Studio Tools for Docker](/dotnet/standard/containerized-lifecycle-architecture/design-develop-containerized-apps/visual-studio-tools-for-docker) - [Understanding serverless cold start](https://azure.microsoft.com/blog/understanding-serverless-cold-start/) -- [Pre-warmed Azure Functions instances](https://docs.microsoft.com/azure/azure-functions/functions-premium-plan#pre-warmed-instances) -- [Create a function on Linux using a custom image](https://docs.microsoft.com/azure/azure-functions/functions-create-function-linux-custom-image) +- [Pre-warmed Azure Functions instances](/azure/azure-functions/functions-premium-plan#pre-warmed-instances) +- [Create a function on Linux using a custom image](/azure/azure-functions/functions-create-function-linux-custom-image) - [Run Azure Functions in a Docker Container](https://markheath.net/post/azure-functions-docker) -- [Create a function on Linux using a custom image](https://docs.microsoft.com/azure/azure-functions/functions-create-function-linux-custom-image) -- [Azure Functions with Kubernetes Event Driven Autoscaling](https://docs.microsoft.com/azure/azure-functions/functions-kubernetes-keda) +- [Create a function on Linux using a custom image](/azure/azure-functions/functions-create-function-linux-custom-image) +- [Azure Functions with Kubernetes Event Driven 
Autoscaling](/azure/azure-functions/functions-kubernetes-keda) - [Canary Release](https://martinfowler.com/bliki/CanaryRelease.html) -- [Azure Dev Spaces with VS Code](https://docs.microsoft.com/azure/dev-spaces/quickstart-netcore) -- [Azure Dev Spaces with Visual Studio](https://docs.microsoft.com/azure/dev-spaces/quickstart-netcore-visualstudio) -- [AKS Multiple Node Pools](https://docs.microsoft.com/azure/aks/use-multiple-node-pools) -- [AKS Cluster Autoscaler](https://docs.microsoft.com/azure/aks/cluster-autoscaler) -- [Tutorial: Scale applications in AKS](https://docs.microsoft.com/azure/aks/tutorial-kubernetes-scale) -- [Azure Functions scale and hosting](https://docs.microsoft.com/azure/azure-functions/functions-scale) -- [Azure Container Instances Docs](https://docs.microsoft.com/azure/container-instances/) -- [Deploy Container Instance from ACR](https://docs.microsoft.com/azure/container-instances/container-instances-using-azure-container-registry#deploy-with-azure-portal) +- [Azure Dev Spaces with VS Code](/azure/dev-spaces/quickstart-netcore) +- [Azure Dev Spaces with Visual Studio](/azure/dev-spaces/quickstart-netcore-visualstudio) +- [AKS Multiple Node Pools](/azure/aks/use-multiple-node-pools) +- [AKS Cluster Autoscaler](/azure/aks/cluster-autoscaler) +- [Tutorial: Scale applications in AKS](/azure/aks/tutorial-kubernetes-scale) +- [Azure Functions scale and hosting](/azure/azure-functions/functions-scale) +- [Azure Container Instances Docs](/azure/container-instances/) +- [Deploy Container Instance from ACR](/azure/container-instances/container-instances-using-azure-container-registry#deploy-with-azure-portal) >[!div class="step-by-step"] >[Previous](scale-containers-serverless.md) diff --git a/docs/architecture/cloud-native/relational-vs-nosql-data.md b/docs/architecture/cloud-native/relational-vs-nosql-data.md index 0b393f4189756..1f43ae2b4c218 100644 --- a/docs/architecture/cloud-native/relational-vs-nosql-data.md +++ 
b/docs/architecture/cloud-native/relational-vs-nosql-data.md @@ -48,7 +48,7 @@ Relational databases typically provide consistency and availability, but not par Many relational database systems support built-in replication features where copies of the primary database can be made to other secondary server instances. Write operations are made to the primary instance and replicated to each of the secondaries. Upon a failure, the primary instance can fail over to a secondary to provide high availability. Secondaries can also be used to distribute read operations. While write operations always go against the primary replica, read operations can be routed to any of the secondaries to reduce system load. -Data can also be horizontally partitioned across multiple nodes, such as with [sharding](https://docs.microsoft.com/azure/sql-database/sql-database-elastic-scale-introduction). But, sharding dramatically increases operational overhead by spitting data across many pieces that cannot easily communicate. It can be costly and time consuming to manage. It can end up impacting performance, table joins, and referential integrity. +Data can also be horizontally partitioned across multiple nodes, such as with [sharding](/azure/sql-database/sql-database-elastic-scale-introduction). But sharding dramatically increases operational overhead by splitting data across many pieces that cannot easily communicate. It can be costly and time-consuming to manage. It can end up impacting performance, table joins, and referential integrity. If data replicas were to lose network connectivity in a "highly consistent" relational database cluster, you wouldn't be able to write to the database. The system would reject the write operation as it can't replicate that change to the other data replica. Every data replica has to update before the transaction can complete.
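The operational cost of sharding noted above comes from how data is routed: each key maps deterministically to one shard, so single-key operations stay local while joins across shards must cross nodes. This can be sketched with a minimal hash-based router; the shard names and modulo scheme below are illustrative assumptions, not how Azure's elastic database tools actually map data:

```python
import hashlib

# Illustrative only: route each row to one of N shards by hashing its key.
# Real sharding layers use shard maps rather than a bare modulo, but the
# principle is the same: the key alone decides where the data lives.
SHARDS = ["shard-0", "shard-1", "shard-2"]

def shard_for(key: str) -> str:
    """Pick a shard deterministically from the row's partition key."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same key always lands on the same shard, so lookups stay cheap,
# but a query joining keys on different shards now spans nodes.
```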
@@ -105,15 +105,15 @@ You can provision an Azure database in minutes by selecting the amount of proces ## Azure SQL Database Development teams with expertise in Microsoft SQL Server should consider -[Azure SQL Database](https://docs.microsoft.com/azure/sql-database/). It's a fully managed relational database-as-a-service (DBaaS) based on the Microsoft SQL Server Database Engine. The service shares many features found in the on-premises version of SQL Server and runs the latest stable version of the SQL Server Database Engine. +[Azure SQL Database](/azure/sql-database/). It's a fully managed relational database-as-a-service (DBaaS) based on the Microsoft SQL Server Database Engine. The service shares many features found in the on-premises version of SQL Server and runs the latest stable version of the SQL Server Database Engine. For use with a cloud-native microservice, Azure SQL Database is available with three deployment options: -- A Single Database represents a fully managed SQL Database running on an [Azure SQL Database server](https://docs.microsoft.com/azure/sql-database/sql-database-servers) in the Azure cloud. The database is considered [*contained*](https://docs.microsoft.com/sql/relational-databases/databases/contained-databases) as it has no configuration dependencies on the underlying database server. +- A Single Database represents a fully managed SQL Database running on an [Azure SQL Database server](/azure/sql-database/sql-database-servers) in the Azure cloud. The database is considered [*contained*](/sql/relational-databases/databases/contained-databases) as it has no configuration dependencies on the underlying database server. -- A [Managed Instance](https://docs.microsoft.com/azure/sql-database/sql-database-managed-instance) is a fully managed instance of the Microsoft SQL Server Database Engine that provides near-100% compatibility with an on-premises SQL Server. 
This option supports larger databases, up to 35 TB and is placed in an [Azure Virtual Network](https://docs.microsoft.com/azure/virtual-network/virtual-networks-overview) for better isolation. +- A [Managed Instance](/azure/sql-database/sql-database-managed-instance) is a fully managed instance of the Microsoft SQL Server Database Engine that provides near-100% compatibility with an on-premises SQL Server. This option supports larger databases, up to 35 TB, and is placed in an [Azure Virtual Network](/azure/virtual-network/virtual-networks-overview) for better isolation. -- [Azure SQL Database serverless](https://docs.microsoft.com/azure/sql-database/sql-database-serverless) is a compute tier for a single database that automatically scales based on workload demand. It bills only for the amount of compute used per second. The service is well suited for workloads with intermittent, unpredictable usage patterns, interspersed with periods of inactivity. The serverless compute tier also automatically pauses databases during inactive periods so that only storage charges are billed. It automatically resumes when activity returns. +- [Azure SQL Database serverless](/azure/sql-database/sql-database-serverless) is a compute tier for a single database that automatically scales based on workload demand. It bills only for the amount of compute used per second. The service is well suited for workloads with intermittent, unpredictable usage patterns, interspersed with periods of inactivity. The serverless compute tier also automatically pauses databases during inactive periods so that only storage charges are billed. It automatically resumes when activity returns. Beyond the traditional Microsoft SQL Server stack, Azure also features managed versions of three popular open-source databases. @@ -147,7 +147,7 @@ MariaDB has a strong community and is used by many large enterprises.
While Orac Azure Database for PostgreSQL is available with two deployment options: -- The [Single Server](https://docs.microsoft.com/azure/postgresql/concepts-servers) deployment option is a central administrative point for multiple databases to which you can deploy many databases. The pricing is structured per-server based upon cores and storage. +- The [Single Server](/azure/postgresql/concepts-servers) deployment option is a central administrative point for multiple databases to which you can deploy many databases. The pricing is structured per-server based upon cores and storage. - The [Hyperscale (Citus) option](https://azure.microsoft.com/blog/get-high-performance-scaling-for-your-azure-database-workloads-with-hyperscale/) is powered by Citus Data technology. It enables high performance by *horizontally scaling* a single database across hundreds of nodes to deliver fast performance and scale. This option allows the engine to fit more data in memory, parallelize queries across hundreds of nodes, and index data faster. @@ -171,7 +171,7 @@ You can distribute Cosmos databases across regions or around the world, placing Cosmos DB supports [active/active](https://kemptechnologies.com/white-papers/unfog-confusion-active-passive-activeactive-load-balancing/) clustering at the global level, enabling you to configure any of your database regions to support *both writes and reads*. -The [Multi-Master](https://docs.microsoft.com/azure/cosmos-db/multi-master-benefits) protocol is an important feature in Cosmos DB that enables the following functionality: +The [Multi-Master](/azure/cosmos-db/multi-master-benefits) protocol is an important feature in Cosmos DB that enables the following functionality: - Unlimited elastic write and read scalability. @@ -179,7 +179,7 @@ The [Multi-Master](https://docs.microsoft.com/azure/cosmos-db/multi-master-benef - Guaranteed reads and writes served in less than 10 milliseconds at the 99th percentile. 
-With the Cosmos DB [Multi-Homing APIs](https://docs.microsoft.com/azure/cosmos-db/distribute-data-globally), your microservice is automatically aware of the nearest Azure region and sends requests to it. The nearest region is identified by Cosmos DB without any configuration changes. Should a region become unavailable, the Multi-Homing feature will automatically route requests to the next nearest available region. +With the Cosmos DB [Multi-Homing APIs](/azure/cosmos-db/distribute-data-globally), your microservice is automatically aware of the nearest Azure region and sends requests to it. The nearest region is identified by Cosmos DB without any configuration changes. Should a region become unavailable, the Multi-Homing feature will automatically route requests to the next nearest available region. ### Multi-model support @@ -198,7 +198,7 @@ Development teams can migrate existing Mongo, Gremlin, or Cassandra databases in > Internally, Cosmos stores the data in a simple struct format made up of primitive data types. For each request, the database engine translates the primitive data into the model representation you've selected. -In the previous table, note the [Table API](https://docs.microsoft.com/azure/cosmos-db/table-introduction) option. This API is an evolution of Azure Table Storage. Both share the same underlying table model, but the Cosmos DB Table API adds premium enhancements not available in the Azure Storage API. The following table contrasts the features. +In the previous table, note the [Table API](/azure/cosmos-db/table-introduction) option. This API is an evolution of Azure Table Storage. Both share the same underlying table model, but the Cosmos DB Table API adds premium enhancements not available in the Azure Storage API. The following table contrasts the features. | | Azure Table Storage | Azure Cosmos DB | | :-------- | :-------- |:-------- | @@ -216,7 +216,7 @@ Earlier in the *Relational vs. 
NoSQL* section, we discussed the subject of *data Most distributed databases allow developers to choose between two consistency models: strong consistency and eventual consistency. *Strong consistency* is the gold standard of data programmability. It guarantees that a query will always return the most current data - even if the system must incur latency waiting for an update to replicate across all database copies. In contrast, a database configured for *eventual consistency* will return data immediately, even if that data isn't the most current copy. The latter option enables higher availability, greater scale, and increased performance. -Azure Cosmos DB offers five well-defined [consistency models](https://docs.microsoft.com/azure/cosmos-db/consistency-levels) shown in Figure 5-13. +Azure Cosmos DB offers five well-defined [consistency models](/azure/cosmos-db/consistency-levels) shown in Figure 5-13. ![Cosmos DB consistency graph](./media/cosmos-consistency-level-graph.png) @@ -236,7 +236,7 @@ In the article [Getting Behind the 9-Ball: Cosmos DB Consistency Levels Explaine ### Partitioning -Azure Cosmos DB embraces automatic [partitioning](https://docs.microsoft.com/azure/cosmos-db/partitioning-overview) to scale a database to meet the performance needs of your cloud-native services. +Azure Cosmos DB embraces automatic [partitioning](/azure/cosmos-db/partitioning-overview) to scale a database to meet the performance needs of your cloud-native services. You manage data in Cosmos DB by creating databases, containers, and items. @@ -250,7 +250,7 @@ To partition the container, items are divided into distinct subsets called logi Note in the previous figure how each item includes a partition key of either ‘city’ or ‘airport’. The key determines the item’s logical partition. Items with a city code are assigned to the container on the left, and items with an airport code, to the container on the right.
Combining the partition key value with the ID value creates an item's index, which uniquely identifies the item. -Internally, Cosmos DB automatically manages the placement of [logical partitions](https://docs.microsoft.com/azure/cosmos-db/partition-data) on physical partitions to satisfy the scalability and performance needs of the container. As application throughput and storage requirements increase, Azure Cosmos DB redistributes logical partitions across a greater number of servers. Redistribution operations are managed by Cosmos DB and invoked without interruption or downtime. +Internally, Cosmos DB automatically manages the placement of [logical partitions](/azure/cosmos-db/partition-data) on physical partitions to satisfy the scalability and performance needs of the container. As application throughput and storage requirements increase, Azure Cosmos DB redistributes logical partitions across a greater number of servers. Redistribution operations are managed by Cosmos DB and invoked without interruption or downtime. ## NewSQL databases diff --git a/docs/architecture/cloud-native/resiliency.md b/docs/architecture/cloud-native/resiliency.md index 7abf787c41f27..9705c582f2632 100644 --- a/docs/architecture/cloud-native/resiliency.md +++ b/docs/architecture/cloud-native/resiliency.md @@ -21,7 +21,7 @@ Operating in this environment, a service must be sensitive to many different cha - Unexpected network latency - the time for a service request to travel to the receiver and back. -- [Transient faults](https://docs.microsoft.com/azure/architecture/best-practices/transient-faults) - short-lived network connectivity errors. +- [Transient faults](/azure/architecture/best-practices/transient-faults) - short-lived network connectivity errors. - Blockage by a long-running synchronous operation. 
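Transient faults like those listed above are usually absorbed with retries and exponential backoff. The sketch below is a minimal illustration of the pattern only; the helper name and delay values are assumptions, and in .NET this concern is typically delegated to a library such as Polly:

```python
import time

def retry(operation, attempts=3, base_delay=0.1):
    """Retry a callable on failure with exponential backoff.

    Short-lived (transient) errors often succeed on a later attempt;
    persistent errors are re-raised once attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # fault wasn't transient; surface it to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
```

A call such as `retry(lambda: client.get("/basket"))` would then ride out a brief network blip while still failing fast on a persistent outage.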
diff --git a/docs/architecture/cloud-native/resilient-communications.md b/docs/architecture/cloud-native/resilient-communications.md index 30f533f01af85..049064d036dc4 100644 --- a/docs/architecture/cloud-native/resilient-communications.md +++ b/docs/architecture/cloud-native/resilient-communications.md @@ -25,7 +25,7 @@ You can address these concerns with different libraries and frameworks, but the ## Service mesh -A better approach is an evolving technology entitled *Service Mesh*. A [service mesh](https://www.nginx.com/blog/what-is-a-service-mesh/) is a configurable infrastructure layer with built-in capabilities to handle service communication and the other challenges mentioned above. It decouples these concerns by moving them into a service proxy. The proxy is deployed into a separate process (called a [sidecar](https://docs.microsoft.com/azure/architecture/patterns/sidecar)) to provide isolation from business code. However, the sidecar is linked to the service - it's created with it and shares its lifecycle. Figure 6-7 shows this scenario. +A better approach is an evolving technology entitled *Service Mesh*. A [service mesh](https://www.nginx.com/blog/what-is-a-service-mesh/) is a configurable infrastructure layer with built-in capabilities to handle service communication and the other challenges mentioned above. It decouples these concerns by moving them into a service proxy. The proxy is deployed into a separate process (called a [sidecar](/azure/architecture/patterns/sidecar)) to provide isolation from business code. However, the sidecar is linked to the service - it's created with it and shares its lifecycle. Figure 6-7 shows this scenario. ![Service mesh with a side car](./media/service-mesh-with-side-car.png) @@ -69,28 +69,28 @@ As previously discussed, Envoy is deployed as a sidecar to each microservice in The Azure cloud embraces Istio and provides direct support for it within Azure Kubernetes Services. 
The following links can help you get started: -- [Installing Istio in AKS](https://docs.microsoft.com/azure/aks/istio-install) -- [Using AKS and Istio](https://docs.microsoft.com/azure/aks/istio-scenario-routing) +- [Installing Istio in AKS](/azure/aks/istio-install) +- [Using AKS and Istio](/azure/aks/istio-scenario-routing) ### References - [Polly](http://www.thepollyproject.org/) -- [Retry pattern](https://docs.microsoft.com/azure/architecture/patterns/retry) +- [Retry pattern](/azure/architecture/patterns/retry) -- [Circuit Breaker pattern](https://docs.microsoft.com/azure/architecture/patterns/circuit-breaker) +- [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker) - [Resilience in Azure whitepaper](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/Resilience%20in%20Azure.pdf) - [network latency](https://www.techopedia.com/definition/8553/network-latency) -- [Redundancy](https://docs.microsoft.com/azure/architecture/guide/design-principles/redundancy) +- [Redundancy](/azure/architecture/guide/design-principles/redundancy) -- [geo-replication](https://docs.microsoft.com/azure/sql-database/sql-database-active-geo-replication) +- [geo-replication](/azure/sql-database/sql-database-active-geo-replication) -- [Azure Traffic Manager](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-overview) +- [Azure Traffic Manager](/azure/traffic-manager/traffic-manager-overview) -- [Autoscaling guidance](https://docs.microsoft.com/azure/architecture/best-practices/auto-scaling) +- [Autoscaling guidance](/azure/architecture/best-practices/auto-scaling) - [Istio](https://istio.io/docs/concepts/what-is-istio/) diff --git a/docs/architecture/cloud-native/scale-containers-serverless.md b/docs/architecture/cloud-native/scale-containers-serverless.md index 42c7b99e19d72..945ddaed0c14a 100644 --- a/docs/architecture/cloud-native/scale-containers-serverless.md +++ 
b/docs/architecture/cloud-native/scale-containers-serverless.md @@ -12,17 +12,17 @@ There are two ways to scale an application: up or out. The former refers to addi Upgrading an existing host server with increased CPU, memory, disk I/O speed, and network I/O speed is known as *scaling up*. Scaling up a cloud-native application involves choosing more capable resources from the cloud vendor. For example, you can add a new node pool with larger VMs in your Kubernetes cluster. Then, migrate your containerized services to the new pool. -Serverless apps scale up by choosing the [premium Functions plan](https://docs.microsoft.com/azure/azure-functions/functions-scale) or premium instance sizes from a dedicated app service plan. +Serverless apps scale up by choosing the [premium Functions plan](/azure/azure-functions/functions-scale) or premium instance sizes from a dedicated app service plan. ## Scaling out cloud-native apps -Cloud-native applications often experience large fluctuations in demand and require scale on a moment's notice. They favor scaling out. Scaling out is done horizontally by adding additional machines (called nodes) or application instances to an existing cluster. In Kubernetes, you can scale manually by adjusting configuration settings for the app (for example, [scaling a node pool](https://docs.microsoft.com/azure/aks/use-multiple-node-pools#scale-a-node-pool-manually)), or through autoscaling. +Cloud-native applications often experience large fluctuations in demand and require scale on a moment's notice. They favor scaling out. Scaling out is done horizontally by adding additional machines (called nodes) or application instances to an existing cluster. In Kubernetes, you can scale manually by adjusting configuration settings for the app (for example, [scaling a node pool](/azure/aks/use-multiple-node-pools#scale-a-node-pool-manually)), or through autoscaling.
AKS clusters can autoscale in one of two ways: -First, the [Horizontal Pod Autoscaler](https://docs.microsoft.com/azure/aks/tutorial-kubernetes-scale#autoscale-pods) monitors resource demand and automatically scales your POD replicas to meet it. When traffic increases, additional replicas are automatically provisioned to scale out your services. Likewise, when demand decreases, they're removed to scale-in your services. You define the metric on which to scale, for example, CPU usage. You can also specify the minimum and maximum number of replicas to run. AKS monitors that metric and scales accordingly. +First, the [Horizontal Pod Autoscaler](/azure/aks/tutorial-kubernetes-scale#autoscale-pods) monitors resource demand and automatically scales your pod replicas to meet it. When traffic increases, additional replicas are automatically provisioned to scale out your services. Likewise, when demand decreases, they're removed to scale in your services. You define the metric on which to scale, for example, CPU usage. You can also specify the minimum and maximum number of replicas to run. AKS monitors that metric and scales accordingly. -Next, the [AKS Cluster Autoscaler](https://docs.microsoft.com/azure/aks/cluster-autoscaler) feature enables you to automatically scale compute nodes across a Kubernetes cluster to meet demand. With it, you can automatically add new VMs to the underlying Azure Virtual Machine Scale Set whenever more compute capacity of is required. It also removes nodes when no longer required. +Next, the [AKS Cluster Autoscaler](/azure/aks/cluster-autoscaler) feature enables you to automatically scale compute nodes across a Kubernetes cluster to meet demand. With it, you can automatically add new VMs to the underlying Azure Virtual Machine Scale Set whenever more compute capacity is required. It also removes nodes when no longer required. Figure 3-13 shows the relationship between these two scaling services.
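The Horizontal Pod Autoscaler's scaling decision is essentially the proportional rule documented by Kubernetes: desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped to the configured minimum and maximum replica counts. A minimal sketch of that calculation:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Proportional scaling rule used by the Horizontal Pod Autoscaler:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the configured replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# e.g., 4 replicas running at 90% CPU against a 60% target scale out to 6;
# at 30% CPU they scale in to 2 (or to min_replicas, whichever is higher).
```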
diff --git a/docs/architecture/cloud-native/service-mesh-communication-infrastructure.md b/docs/architecture/cloud-native/service-mesh-communication-infrastructure.md index 76b8c31abb75e..6f509b03c60f4 100644 --- a/docs/architecture/cloud-native/service-mesh-communication-infrastructure.md +++ b/docs/architecture/cloud-native/service-mesh-communication-infrastructure.md @@ -13,7 +13,7 @@ We explored different approaches for implementing synchronous HTTP communication A more modern approach to microservice communication centers around a new and rapidly evolving technology called *Service Mesh*. A [service mesh](https://www.nginx.com/blog/what-is-a-service-mesh/) is a configurable infrastructure layer with built-in capabilities to handle service-to-service communication, resiliency, and many cross-cutting concerns. It moves the responsibility for these concerns out of the microservices and into the service mesh layer. Communication is abstracted away from your microservices. -A key component of a service mesh is a proxy. In a cloud-native application, an instance of a proxy is typically colocated with each microservice. While they execute in separate processes, the two are closely linked and share the same lifecycle. This pattern, known as the [Sidecar pattern](https://docs.microsoft.com/azure/architecture/patterns/sidecar), and is shown in Figure 4-24. +A key component of a service mesh is a proxy. In a cloud-native application, an instance of a proxy is typically colocated with each microservice. While they execute in separate processes, the two are closely linked and share the same lifecycle. This pattern, known as the [Sidecar pattern](/azure/architecture/patterns/sidecar), is shown in Figure 4-24. ![Service mesh with a side car](./media/service-mesh-with-side-car.png) @@ -35,12 +35,12 @@ In this chapter, we discussed cloud-native communication patterns.
Special emphasis was placed on managed Azure services that can help implement communication in cloud-native systems: -- [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview) +- [Azure Application Gateway](/azure/application-gateway/overview) - [Azure API Management](https://azure.microsoft.com/services/api-management/) - [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) -- [Azure Storage Queues](https://docs.microsoft.com/azure/storage/queues/storage-queues-introduction) -- [Azure Service Bus](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-messaging-overview) -- [Azure Event Grid](https://docs.microsoft.com/azure/event-grid/overview) +- [Azure Storage Queues](/azure/storage/queues/storage-queues-introduction) +- [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) +- [Azure Event Grid](/azure/event-grid/overview) - [Azure Event Hub](https://azure.microsoft.com/services/event-hubs/) We next move to distributed data in cloud-native systems and the benefits and challenges that it presents.
@@ -49,7 +49,7 @@ We next move to distributed data in cloud-native systems and the benefits and ch - [.NET Microservices: Architecture for Containerized .NET applications](https://dotnet.microsoft.com/download/thank-you/microservices-architecture-ebook) -- [Designing Interservice Communication for Microservices](https://docs.microsoft.com/azure/architecture/microservices/design/interservice-communication) +- [Designing Interservice Communication for Microservices](/azure/architecture/microservices/design/interservice-communication) - [Azure SignalR Service, a fully managed service to add real-time functionality](https://azure.microsoft.com/blog/azure-signalr-service-a-fully-managed-service-to-add-real-time-functionality/) @@ -59,9 +59,9 @@ We next move to distributed data in cloud-native systems and the benefits and ch - [gRPC Documentation](https://grpc.io/docs/guides/) -- [gRPC for WCF Developers](https://docs.microsoft.com/dotnet/architecture/grpc-for-wcf-developers/) +- [gRPC for WCF Developers](../grpc-for-wcf-developers/index.md) -- [Comparing gRPC Services with HTTP APIs](https://docs.microsoft.com/aspnet/core/grpc/comparison?view=aspnetcore-3.0) +- [Comparing gRPC Services with HTTP APIs](/aspnet/core/grpc/comparison?view=aspnetcore-3.0) - [Building gRPC Services with .NET video](https://channel9.msdn.com/Shows/The-Cloud-Native-Show/Building-Microservices-with-gRPC-and-NET) diff --git a/docs/architecture/cloud-native/service-to-service-communication.md b/docs/architecture/cloud-native/service-to-service-communication.md index d05cabab0082e..c91c3561a3110 100644 --- a/docs/architecture/cloud-native/service-to-service-communication.md +++ b/docs/architecture/cloud-native/service-to-service-communication.md @@ -49,7 +49,7 @@ The large degree of coupling in the previous image suggests the services weren't ### Materialized View pattern -A popular option for removing microservice coupling is the [Materialized View 
pattern](https://docs.microsoft.com/azure/architecture/patterns/materialized-view). With this pattern, a microservice stores its own local, denormalized copy of data that's owned by other services. Instead of the Shopping Basket microservice querying the Product Catalog and Pricing microservices, it maintains its own local copy of that data. This pattern eliminates unnecessary coupling and improves reliability and response time. The entire operation executes inside a single process. We explore this pattern and other data concerns in Chapter 5. +A popular option for removing microservice coupling is the [Materialized View pattern](/azure/architecture/patterns/materialized-view). With this pattern, a microservice stores its own local, denormalized copy of data that's owned by other services. Instead of the Shopping Basket microservice querying the Product Catalog and Pricing microservices, it maintains its own local copy of that data. This pattern eliminates unnecessary coupling and improves reliability and response time. The entire operation executes inside a single process. We explore this pattern and other data concerns in Chapter 5. ### Service Aggregator Pattern @@ -89,7 +89,7 @@ In chapter 1, we talked about *backing services*. Backing services are ancillary Azure storage queues offer a simple queueing infrastructure that is fast, affordable, and backed by Azure storage accounts. -[Azure Storage Queues](https://docs.microsoft.com/azure/storage/queues/storage-queues-introduction) feature a REST-based queuing mechanism with reliable and persistent messaging. They provide a minimal feature set, but are inexpensive and store millions of messages. Their capacity ranges up to 500 TB. A single message can be up to 64 KB in size. +[Azure Storage Queues](/azure/storage/queues/storage-queues-introduction) feature a REST-based queuing mechanism with reliable and persistent messaging. They provide a minimal feature set, but are inexpensive and store millions of messages. 
Their capacity ranges up to 500 TB. A single message can be up to 64 KB in size. You can access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. Storage queues can scale out to large numbers of concurrent clients to handle traffic spikes. @@ -117,13 +117,13 @@ Azure Storage queues are an economical option to implement command messaging in For more complex messaging requirements, consider Azure Service Bus queues. -Sitting atop a robust message infrastructure, [Azure Service Bus](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-messaging-overview) supports a *brokered messaging model*. Messages are reliably stored in a broker (the queue) until received by the consumer. The queue guarantees First-In/First-Out (FIFO) message delivery, respecting the order in which messages were added to the queue. +Sitting atop a robust message infrastructure, [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) supports a *brokered messaging model*. Messages are reliably stored in a broker (the queue) until received by the consumer. The queue guarantees First-In/First-Out (FIFO) message delivery, respecting the order in which messages were added to the queue. -The size of a message can be much larger, up to 256 KB. Messages are persisted in the queue for an unlimited period of time. Service Bus supports not only HTTP-based calls, but also provides full support for the [AMQP protocol](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-amqp-overview). AMQP is an open-standard across vendors that supports a binary protocol and higher degrees of reliability. +The size of a message can be much larger, up to 256 KB. Messages are persisted in the queue for an unlimited period of time. Service Bus supports not only HTTP-based calls, but also provides full support for the [AMQP protocol](/azure/service-bus-messaging/service-bus-amqp-overview). 
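The brokered, FIFO queue semantics described above can be illustrated with a small in-process sketch. This is not the Azure Service Bus or Storage Queues client library; it's a hypothetical toy broker showing that messages wait in the broker until a consumer receives them, in arrival order:

```python
from collections import deque

class BrokeredQueue:
    """Toy broker: stores messages until received, First-In/First-Out."""

    def __init__(self):
        self._messages = deque()

    def send(self, message):
        # The broker holds the message until a consumer asks for it.
        self._messages.append(message)

    def receive(self):
        # Oldest message first; None when the queue is empty.
        return self._messages.popleft() if self._messages else None

q = BrokeredQueue()
q.send("create-order")
q.send("charge-payment")
first = q.receive()   # "create-order"
second = q.receive()  # "charge-payment"
```

A real queue adds durability, visibility timeouts, and delivery guarantees on top of this basic shape, but producers and consumers interact with it in essentially this way.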
AMQP is an open-standard across vendors that supports a binary protocol and higher degrees of reliability. -Service Bus provides a rich set of features, including [transaction support](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-transactions) and a [duplicate detection feature](https://docs.microsoft.com/azure/service-bus-messaging/duplicate-detection). The queue guarantees "at most once delivery" per message. It automatically discards a message that has already been sent. If a producer is in doubt, it can resend the same message, and Service Bus guarantees that only one copy will be processed. Duplicate detection frees you from having to build additional infrastructure plumbing. +Service Bus provides a rich set of features, including [transaction support](/azure/service-bus-messaging/service-bus-transactions) and a [duplicate detection feature](/azure/service-bus-messaging/duplicate-detection). The queue guarantees "at most once delivery" per message. It automatically discards a message that has already been sent. If a producer is in doubt, it can resend the same message, and Service Bus guarantees that only one copy will be processed. Duplicate detection frees you from having to build additional infrastructure plumbing. -Two more enterprise features are partitioning and sessions. A conventional Service Bus queue is handled by a single message broker and stored in a single message store. But, [Service Bus Partitioning](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-partitioning) spreads the queue across multiple message brokers and message stores. The overall throughput is no longer limited by the performance of a single message broker or messaging store. A temporary outage of a messaging store doesn't render a partitioned queue unavailable. +Two more enterprise features are partitioning and sessions. A conventional Service Bus queue is handled by a single message broker and stored in a single message store. 
But, [Service Bus Partitioning](/azure/service-bus-messaging/service-bus-partitioning) spreads the queue across multiple message brokers and message stores. The overall throughput is no longer limited by the performance of a single message broker or messaging store. A temporary outage of a messaging store doesn't render a partitioned queue unavailable. [Service Bus Sessions](https://codingcanvas.com/azure-service-bus-sessions/) provide a way to group related messages. Imagine a workflow scenario where messages must be processed together and the operation completed at the end. To take advantage, sessions must be explicitly enabled for the queue and each related message must contain the same session ID. @@ -143,7 +143,7 @@ Message queuing is an effective way to implement communication where a producer To address this scenario, we move to the third type of message interaction, the *event*. One microservice announces that an action has occurred. Other microservices, if interested, react to the action, or event. -Eventing is a two-step process. For a given state change, a microservice publishes an event to a message broker, making it available to any other interested microservice. The interested microservice is notified by subscribing to the event in the message broker. You use the [Publish/Subscribe](https://docs.microsoft.com/azure/architecture/patterns/publisher-subscriber) pattern to implement [event-based communication](https://docs.microsoft.com/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/integration-event-based-microservice-communications). +Eventing is a two-step process. For a given state change, a microservice publishes an event to a message broker, making it available to any other interested microservice. The interested microservice is notified by subscribing to the event in the message broker.
You use the [Publish/Subscribe](/azure/architecture/patterns/publisher-subscriber) pattern to implement [event-based communication](/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/integration-event-based-microservice-communications). Figure 4-15 shows a shopping basket microservice publishing an event with two other microservices subscribing to it. @@ -153,7 +153,7 @@ Figure 4-15 shows a shopping basket microservice publishing an event with two ot Note the *event bus* component that sits in the middle of the communication channel. It's a custom class that encapsulates the message broker and decouples it from the underlying application. The ordering and inventory microservices independently consume the event with no knowledge of each other, nor the shopping basket microservice. When the registered event is published to the event bus, they act upon it. -With eventing, we move from queuing technology to *topics*. A [topic](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions) is similar to a queue, but supports a one-to-many messaging pattern. One microservice publishes a message. Multiple subscribing microservices can choose to receive and act upon that message. Figure 4-16 shows a topic architecture. +With eventing, we move from queuing technology to *topics*. A [topic](/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions) is similar to a queue, but supports a one-to-many messaging pattern. One microservice publishes a message. Multiple subscribing microservices can choose to receive and act upon that message. Figure 4-16 shows a topic architecture.
![Topic architecture](./media/topic-architecture.png) @@ -165,17 +165,17 @@ The Azure cloud supports two different topic services: Azure Service Bus Topics ### Azure Service Bus Topics -Sitting on top of the same robust brokered message model of Azure Service Bus queues are [Azure Service Bus Topics](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions). A topic can receive messages from multiple independent publishers and send messages to up to 2,000 subscribers. Subscriptions can be dynamically added or removed at runtime without stopping the system or recreating the topic. +Sitting on top of the same robust brokered message model of Azure Service Bus queues are [Azure Service Bus Topics](/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions). A topic can receive messages from multiple independent publishers and send messages to up to 2,000 subscribers. Subscriptions can be dynamically added or removed at runtime without stopping the system or recreating the topic. -Many advanced features from Azure Service Bus queues are also available for topics, including [Duplicate Detection](https://docs.microsoft.com/azure/service-bus-messaging/duplicate-detection) and [Transaction support](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-transactions). By default, Service Bus topics are handled by a single message broker and stored in a single message store. But, [Service Bus Partitioning](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-partitioning) scales a topic by spreading it across many message brokers and message stores. +Many advanced features from Azure Service Bus queues are also available for topics, including [Duplicate Detection](/azure/service-bus-messaging/duplicate-detection) and [Transaction support](/azure/service-bus-messaging/service-bus-transactions). 
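The one-to-many behavior of a topic can be illustrated with a small in-process sketch. This is not the Service Bus SDK; it's a hypothetical event bus class showing a single publish fanning out to every registered subscriber:

```python
from collections import defaultdict

class TopicBus:
    """Minimal in-process stand-in for a topic: one publish, many subscribers."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic name -> handler list

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Every registered subscriber receives its own copy of the message.
        for handler in self._subscribers[topic]:
            handler(message)

# Two independent microservices subscribe to the same topic.
received = []
bus = TopicBus()
bus.subscribe("order-placed", lambda msg: received.append(("ordering", msg)))
bus.subscribe("order-placed", lambda msg: received.append(("inventory", msg)))
bus.publish("order-placed", {"orderId": 42})
```

With a queue, only one consumer would have received the `order-placed` message; with a topic, both the ordering and inventory handlers see it.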
By default, Service Bus topics are handled by a single message broker and stored in a single message store. But, [Service Bus Partitioning](/azure/service-bus-messaging/service-bus-partitioning) scales a topic by spreading it across many message brokers and message stores. -[Scheduled Message Delivery](https://docs.microsoft.com/azure/service-bus-messaging/message-sequencing) tags a message with a specific time for processing. The message won't appear in the topic before that time. [Message Deferral](https://docs.microsoft.com/azure/service-bus-messaging/message-deferral) enables you to defer a retrieval of a message to a later time. Both are commonly used in workflow processing scenarios where operations are processed in a particular order. You can postpone processing of received messages until prior work has been completed. +[Scheduled Message Delivery](/azure/service-bus-messaging/message-sequencing) tags a message with a specific time for processing. The message won't appear in the topic before that time. [Message Deferral](/azure/service-bus-messaging/message-deferral) enables you to defer retrieval of a message to a later time. Both are commonly used in workflow processing scenarios where operations are processed in a particular order. You can postpone processing of received messages until prior work has been completed. Service Bus topics are a robust and proven technology for enabling publish/subscribe communication in your cloud-native systems. ### Azure Event Grid -While Azure Service Bus is a battle-tested messaging broker with a full set of enterprise features, [Azure Event Grid](https://docs.microsoft.com/azure/event-grid/overview) is the new kid on the block. +While Azure Service Bus is a battle-tested messaging broker with a full set of enterprise features, [Azure Event Grid](/azure/event-grid/overview) is the new kid on the block. At first glance, Event Grid may look like just another topic-based messaging system.
However, it's different in many ways. Focused on event-driven workloads, it enables real-time event processing, deep Azure integration, and an open platform, all on serverless infrastructure. It's designed for contemporary cloud-native and serverless applications. @@ -201,9 +201,9 @@ Event Grid is a fully managed serverless cloud service. It dynamically scales ba ### Streaming messages in the Azure cloud -Azure Service Bus and Event Grid provide great support for applications that expose single, discrete events like a new document has been inserted into a Cosmos DB. But, what if your cloud-native system needs to process a *stream of related events*? [Event streams](https://docs.microsoft.com/archive/msdn-magazine/2015/february/microsoft-azure-the-rise-of-event-stream-oriented-systems) are more complex. They're typically time-ordered, interrelated, and must be processed as a group. +Azure Service Bus and Event Grid provide great support for applications that expose single, discrete events, such as a new document being inserted into a Cosmos DB database. But, what if your cloud-native system needs to process a *stream of related events*? [Event streams](/archive/msdn-magazine/2015/february/microsoft-azure-the-rise-of-event-stream-oriented-systems) are more complex. They're typically time-ordered, interrelated, and must be processed as a group. -[Azure Event Hub](https://azure.microsoft.com/services/event-hubs/) is a data streaming platform and event ingestion service that collects, transforms, and stores events. It's fine-tuned to capture streaming data, such as continuous event notifications emitted from a telemetry context. The service is highly scalable and can store and [process millions of events per second](https://docs.microsoft.com/azure/event-hubs/event-hubs-about). Shown in Figure 4-18, it's often a front door for an event pipeline, decoupling ingest stream from event consumption.
+[Azure Event Hub](https://azure.microsoft.com/services/event-hubs/) is a data streaming platform and event ingestion service that collects, transforms, and stores events. It's fine-tuned to capture streaming data, such as continuous event notifications emitted from a telemetry context. The service is highly scalable and can store and [process millions of events per second](/azure/event-hubs/event-hubs-about). Shown in Figure 4-18, it's often a front door for an event pipeline, decoupling the ingest stream from event consumption. ![Azure Event Hub](./media/azure-event-hub.png) @@ -211,9 +211,9 @@ Azure Service Bus and Event Grid provide great support for applications that exp Event Hub supports low latency and configurable time retention. Unlike queues and topics, Event Hubs keep event data after it's been read by a consumer. This feature enables other data analytic services, both internal and external, to replay the data for further analysis. Events stored in event hub are only deleted upon expiration of the retention period, which is one day by default, but configurable. -Event Hub supports common event publishing protocols including HTTPS and AMQP. It also supports Kafka 1.0. [Existing Kafka applications can communicate with Event Hub](https://docs.microsoft.com/azure/event-hubs/event-hubs-for-kafka-ecosystem-overview) using the Kafka protocol providing an alternative to managing large Kafka clusters. Many open-source cloud-native systems embrace Kafka. +Event Hub supports common event publishing protocols including HTTPS and AMQP. It also supports Kafka 1.0. [Existing Kafka applications can communicate with Event Hub](/azure/event-hubs/event-hubs-for-kafka-ecosystem-overview) using the Kafka protocol, providing an alternative to managing large Kafka clusters. Many open-source cloud-native systems embrace Kafka.
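The partitioned consumer model that Event Hubs shares with Kafka can be sketched in a few lines of Python. This is an illustration only, not the Event Hubs SDK: a stable hash of a partition key routes related events to the same ordered partition, so one consumer can read them in sequence:

```python
import hashlib

NUM_PARTITIONS = 4

def assign_partition(partition_key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Stable hash routing: the same key always lands in the same partition,
    preserving per-key event ordering."""
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Events for one device always go to one partition, so a single consumer
# reads them in arrival order; different devices may spread across partitions.
partitions = [[] for _ in range(NUM_PARTITIONS)]
for event in ["device-1:t0", "device-1:t1", "device-2:t0"]:
    key = event.split(":")[0]
    partitions[assign_partition(key)].append(event)
```

Each consumer then owns one or more partitions and processes them independently, which is what gives the model its horizontal scale.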
-Event Hubs implements message streaming through a [partitioned consumer model](https://docs.microsoft.com/azure/event-hubs/event-hubs-features) in which each consumer only reads a specific subset, or partition, of the message stream. This pattern enables tremendous horizontal scale for event processing and provides other stream-focused features that are unavailable in queues and topics. A partition is an ordered sequence of events that is held in an event hub. As newer events arrive, they're added to the end of this sequence. Figure 4-19 shows partitioning in an Event Hub. +Event Hubs implements message streaming through a [partitioned consumer model](/azure/event-hubs/event-hubs-features) in which each consumer only reads a specific subset, or partition, of the message stream. This pattern enables tremendous horizontal scale for event processing and provides other stream-focused features that are unavailable in queues and topics. A partition is an ordered sequence of events that is held in an event hub. As newer events arrive, they're added to the end of this sequence. Figure 4-19 shows partitioning in an Event Hub. ![Event Hub partitioning](./media/event-hub-partitioning.png) diff --git a/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/deploy-azure-kubernetes-service.md b/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/deploy-azure-kubernetes-service.md index 3d7ef1e26a669..b68cd8903dd8c 100644 --- a/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/deploy-azure-kubernetes-service.md +++ b/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/deploy-azure-kubernetes-service.md @@ -6,7 +6,7 @@ ms.date: 08/06/2020 # Deploy to Azure Kubernetes Service (AKS) -You can interact with AKS using your preferred client operating system (Windows, macOS, or Linux) with Azure command-line interface(Azure CLI) installed. 
For more details, refer [Azure CLI documentation](https://docs.microsoft.com/cli/azure/?view=azure-cli-latest) and [Installation guide](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) for the available environments. +You can interact with AKS using your preferred client operating system (Windows, macOS, or Linux) with the Azure command-line interface (Azure CLI) installed. For more details, refer to the [Azure CLI documentation](/cli/azure/?view=azure-cli-latest) and the [installation guide](/cli/azure/install-azure-cli?view=azure-cli-latest) for the available environments. ## Create the AKS environment in Azure diff --git a/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/monolithic-applications.md b/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/monolithic-applications.md index d6265c6bddd72..a7b38fbd25757 100644 --- a/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/monolithic-applications.md +++ b/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/monolithic-applications.md @@ -31,7 +31,7 @@ From an infrastructure perspective, each server can run many applications within Finally, from an availability perspective, monolithic applications must be deployed as a whole; that means that if you must *stop and start*, all functionality and all users will be affected during the deployment window. In certain situations, the use of Azure and containers can minimize these situations and reduce the probability of downtime of your application, as you can see in Figure 4-3. -You can deploy monolithic applications in Azure by using dedicated VMs for each instance.
Using [Azure VM Scale Sets](/azure/virtual-machine-scale-sets/), you can scale the VMs easily. You can also use [Azure App Services](https://azure.microsoft.com/services/app-service/) to run monolithic applications and easily scale instances without having to manage the VMs. Azure App Services can run single instances of Docker containers, as well, simplifying the deployment. diff --git a/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/orchestrate-high-scalability-availability.md b/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/orchestrate-high-scalability-availability.md index caf885ff4d2b3..e3caf0f74d0ef 100644 --- a/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/orchestrate-high-scalability-availability.md +++ b/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/orchestrate-high-scalability-availability.md @@ -33,8 +33,8 @@ The concepts of a cluster and a scheduler are closely related, so the products p |:---:|:---| | **Kubernetes**
![An image of the Kubernetes logo.](./media/orchestrate-high-scalability-availability/kubernetes-container-orchestration-system-logo.png) | [*Kubernetes*](https://kubernetes.io/) is an open-source product that provides functionality that ranges from cluster infrastructure and container scheduling to orchestrating capabilities. It lets you automate deployment, scaling, and operations of application containers across clusters of hosts.

*Kubernetes* provides a container-centric infrastructure that groups application containers into logical units for easy management and discovery.

*Kubernetes* is mature on Linux, less mature on Windows. | | **Azure Kubernetes Service (AKS)**
![An image of the Azure Kubernetes Service logo.](./media/orchestrate-high-scalability-availability/azure-kubernetes-service-logo.png) | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) is a managed Kubernetes container orchestration service in Azure that simplifies Kubernetes cluster management, deployment, and operations. | -| **Azure Service Fabric**
![An image of the Azure Service Fabric logo.](./media/orchestrate-high-scalability-availability/azure-service-fabric-logo.png) | [Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-overview) is a Microsoft microservices platform for building applications. It's an [orchestrator](https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-resource-manager-introduction) of services and creates clusters of machines. Service Fabric can deploy services as containers or as plain processes. It can even mix services in processes with services in containers within the same application and cluster.

*Service Fabric* clusters can be deployed in Azure, on-premises or in any cloud. However, deployment in Azure is simplified with a managed approach.

*Service Fabric* provides additional and optional prescriptive [Service Fabric programming models](https://azure.microsoft.com/documentation/articles/service-fabric-choose-framework/) like [stateful services](https://azure.microsoft.com/documentation/articles/service-fabric-reliable-services-introduction/) and [Reliable Actors](https://azure.microsoft.com/documentation/articles/service-fabric-reliable-actors-introduction/).

*Service Fabric* is mature in Windows (years evolving in Windows), less mature in Linux.

Both Linux and Windows containers are supported in Service Fabric since 2017. | -| **Azure Service Fabric Mesh**
![An image of the Azure Service Fabric Mesh logo.](./media/orchestrate-high-scalability-availability/azure-service-fabric-mesh-logo.png) | [*Azure Service Fabric Mesh*](https://docs.microsoft.com/azure/service-fabric-mesh/service-fabric-mesh-overview) offers the same reliability, mission-critical performance and scale as Service Fabric, but also offers a fully managed and serverless platform. You don't need to manage a cluster, VMs, storage or networking configuration. You just focus on your application's development.

*Service Fabric Mesh* supports both Windows and Linux containers, allowing you to develop with any programming language and framework of your choice. +| **Azure Service Fabric**
![An image of the Azure Service Fabric logo.](./media/orchestrate-high-scalability-availability/azure-service-fabric-logo.png) | [Service Fabric](/azure/service-fabric/service-fabric-overview) is a Microsoft microservices platform for building applications. It's an [orchestrator](/azure/service-fabric/service-fabric-cluster-resource-manager-introduction) of services and creates clusters of machines. Service Fabric can deploy services as containers or as plain processes. It can even mix services in processes with services in containers within the same application and cluster.

*Service Fabric* clusters can be deployed in Azure, on-premises or in any cloud. However, deployment in Azure is simplified with a managed approach.

*Service Fabric* provides additional and optional prescriptive [Service Fabric programming models](/azure/service-fabric/service-fabric-choose-framework) like [stateful services](/azure/service-fabric/service-fabric-reliable-services-introduction) and [Reliable Actors](/azure/service-fabric/service-fabric-reliable-actors-introduction).

*Service Fabric* is mature on Windows (years evolving on Windows), less mature on Linux.

Both Linux and Windows containers are supported in Service Fabric since 2017. | +| **Azure Service Fabric Mesh**
![An image of the Azure Service Fabric Mesh logo.](./media/orchestrate-high-scalability-availability/azure-service-fabric-mesh-logo.png) | [*Azure Service Fabric Mesh*](/azure/service-fabric-mesh/service-fabric-mesh-overview) offers the same reliability, mission-critical performance and scale as Service Fabric, but also offers a fully managed and serverless platform. You don't need to manage a cluster, VMs, storage or networking configuration. You just focus on your application's development.

*Service Fabric Mesh* supports both Windows and Linux containers, allowing you to develop with any programming language and framework of your choice. ## Using container-based orchestrators in Azure @@ -64,7 +64,7 @@ In the development environment that [Docker announced in July 2018](https://blog ## Get started with Azure Kubernetes Service (AKS) -To begin using AKS, you deploy an AKS cluster from the Azure portal or by using the CLI. For more information on deploying a Kubernetes cluster to Azure, see [Deploy an Azure Kubernetes Service (AKS) cluster](https://docs.microsoft.com/azure/aks/kubernetes-walkthrough-portal). +To begin using AKS, you deploy an AKS cluster from the Azure portal or by using the CLI. For more information on deploying a Kubernetes cluster to Azure, see [Deploy an Azure Kubernetes Service (AKS) cluster](/azure/aks/kubernetes-walkthrough-portal). There are no fees for any of the software installed by default as part of AKS. All default options are implemented with open-source software. AKS is available for multiple virtual machines in Azure. You're charged only for the compute instances you choose, as well as the other underlying infrastructure resources consumed, such as storage and networking. There are no incremental charges for AKS itself. @@ -76,7 +76,7 @@ When deploying an application to a Kubernetes cluster, you can use the original Helm Charts helps you define, version, install, share, upgrade, or rollback even the most complex Kubernetes application. -Going further, Helm usage is also recommended because additional Kubernetes environments in Azure, such as [Azure Dev Spaces](https://docs.microsoft.com/azure/dev-spaces/azure-dev-spaces) are also based on Helm charts. +Going further, Helm usage is also recommended because additional Kubernetes environments in Azure, such as [Azure Dev Spaces](/azure/dev-spaces/azure-dev-spaces) are also based on Helm charts. 
Helm is maintained by the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/) in collaboration with Microsoft, Google, Bitnami, and the Helm contributor community. @@ -84,7 +84,7 @@ For further implementation information on Helm charts and Kubernetes, see the se ## Use Azure Dev Spaces for your Kubernetes application lifecycle -[Azure Dev Spaces](https://docs.microsoft.com/azure/dev-spaces/azure-dev-spaces) provides a rapid, iterative Kubernetes development experience for teams. With minimal dev machine setup, you can iteratively run and debug containers directly in Azure Kubernetes Service (AKS). You can develop on Windows, Mac, or Linux using familiar tools like Visual Studio, Visual Studio Code, or the command line. +[Azure Dev Spaces](/azure/dev-spaces/azure-dev-spaces) provides a rapid, iterative Kubernetes development experience for teams. With minimal dev machine setup, you can iteratively run and debug containers directly in Azure Kubernetes Service (AKS). You can develop on Windows, Mac, or Linux using familiar tools like Visual Studio, Visual Studio Code, or the command line. As mentioned, Azure Dev Spaces uses Helm charts when deploying container-based applications. @@ -102,7 +102,7 @@ Azure Dev Spaces provides the concept of a space, which allows you to work in is For a concrete example, see the [eShopOnContainers wiki page on Azure Dev Spaces](https://github.com/dotnet-architecture/eShopOnContainers/wiki/Azure-Dev-Spaces). -For further information, see [Team Development with Azure Dev Spaces](https://docs.microsoft.com/azure/dev-spaces/team-development-netcore). +For further information, see [Team Development with Azure Dev Spaces](/azure/dev-spaces/team-development-netcore).
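As a concrete sketch of the Helm workflow described above, the following commands lint a chart and then install it (or upgrade an existing release). The chart path, release name, and namespace are illustrative assumptions, not values taken from the eShopOnContainers repo:

```shell
# Hypothetical chart location and names, for illustration only.
CHART_DIR="./charts/basket-api"
RELEASE="basket-api"
NAMESPACE="eshop"
RELEASE_ID="${NAMESPACE}/${RELEASE}"

if command -v helm >/dev/null 2>&1; then
  # Validate the chart, then install it (or upgrade an existing release).
  helm lint "${CHART_DIR}"
  helm upgrade --install "${RELEASE}" "${CHART_DIR}" \
    --namespace "${NAMESPACE}" --create-namespace
else
  echo "helm CLI not found; skipping deployment of ${RELEASE_ID}"
fi
```

Because `helm upgrade --install` is idempotent, the same command works for the first deployment and for subsequent upgrades, and a misbehaving release can be reverted with `helm rollback`.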
## Additional resources @@ -125,7 +125,7 @@ Service Fabric provides two broad areas to help you build applications that use - A platform that provides system services to deploy, scale, upgrade, detect, and restart failed services, discover service location, manage state, and monitor health. These system services in effect enable many of the characteristics of microservices described previously. -- Programming APIs, or frameworks, to help you build applications as microservices: [reliable actors and reliable services](https://docs.microsoft.com/azure/service-fabric/service-fabric-choose-framework). You can choose any code to build your microservice, but these APIs make the job more straightforward, and they integrate with the platform at a deeper level. This way you can get health and diagnostics information, or you can take advantage of reliable state management. +- Programming APIs, or frameworks, to help you build applications as microservices: [reliable actors and reliable services](/azure/service-fabric/service-fabric-choose-framework). You can choose any code to build your microservice, but these APIs make the job more straightforward, and they integrate with the platform at a deeper level. This way you can get health and diagnostics information, or you can take advantage of reliable state management. Service Fabric is agnostic with respect to how you build your service, and you can use any technology. However, it provides built-in programming APIs that make it easier to build microservices. @@ -137,11 +137,11 @@ As shown in Figure 4-10, you can create and run microservices in Service Fabric In the first image, you see microservices as processes, where each node runs one process for each microservice. In the second image, you see microservices as containers, where each node runs Docker with several containers, one container per microservice. 
Service Fabric clusters based on Linux and Windows hosts can run Docker Linux containers and Windows Containers, respectively. -For up-to-date information about containers support in Azure Service Fabric, see [Service Fabric and containers](https://docs.microsoft.com/azure/service-fabric/service-fabric-containers-overview). +For up-to-date information about container support in Azure Service Fabric, see [Service Fabric and containers](/azure/service-fabric/service-fabric-containers-overview). -Service Fabric is a good example of a platform where you can define a different logical architecture (business microservices or Bounded Contexts) than the physical implementation. For example, if you implement [Stateful Reliable Services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction) in [Azure Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-overview), which are introduced in the next section, "[Stateless versus stateful microservices](#stateless-versus-stateful-microservices)," you have a business microservice concept with multiple physical services. +Service Fabric is a good example of a platform where you can define a different logical architecture (business microservices or Bounded Contexts) than the physical implementation. For example, if you implement [Stateful Reliable Services](/azure/service-fabric/service-fabric-reliable-services-introduction) in [Azure Service Fabric](/azure/service-fabric/service-fabric-overview), which are introduced in the next section, "[Stateless versus stateful microservices](#stateless-versus-stateful-microservices)," you have a business microservice concept with multiple physical services. -As shown in Figure 4-10, and thinking from a logical/business microservice perspective, when implementing a Service Fabric Stateful Reliable Service, you usually will need to implement two tiers of services.
The first is the back-end stateful reliable service, which handles multiple partitions (each partition is a stateful service). The second is the front-end service, or Gateway service, in charge of routing and data aggregation across multiple partitions or stateful service instances. That Gateway service also handles client-side communication with retry loops accessing the back-end service. It's called a Gateway service if you implement your custom service, or alternatively you can also use the out-of-the-box Service Fabric [reverse proxy](https://docs.microsoft.com/azure/service-fabric/service-fabric-reverseproxy). +As shown in Figure 4-10, and thinking from a logical/business microservice perspective, when implementing a Service Fabric Stateful Reliable Service, you usually will need to implement two tiers of services. The first is the back-end stateful reliable service, which handles multiple partitions (each partition is a stateful service). The second is the front-end service, or Gateway service, in charge of routing and data aggregation across multiple partitions or stateful service instances. That Gateway service also handles client-side communication with retry loops accessing the back-end service. It's called a Gateway service if you implement your custom service, or alternatively you can also use the out-of-the-box Service Fabric [reverse proxy](/azure/service-fabric/service-fabric-reverseproxy). ![Diagram showing several stateful services in containers.](./media/orchestrate-high-scalability-availability/service-fabric-stateful-business-microservice.png) @@ -149,7 +149,7 @@ As shown in Figure 4-10, and thinking from a logical/business microservice persp In any case, when you use Service Fabric Stateful Reliable Services, you also have a logical or business microservice (Bounded Context) that's composed of multiple physical services. 
Both the Gateway service and the Partition service could be implemented as ASP.NET Web API services, as shown in Figure 4-11. Service Fabric provides prescriptive support for running several stateful reliable services in containers. -In Service Fabric, you can group and deploy groups of services as a [Service Fabric Application](https://docs.microsoft.com/azure/service-fabric/service-fabric-application-model), which is the unit of packaging and deployment for the orchestrator or cluster. Therefore, the Service Fabric Application could be mapped to this autonomous business and logical microservice boundary or Bounded Context, as well, so you could deploy these services autonomously. +In Service Fabric, you can group and deploy groups of services as a [Service Fabric Application](/azure/service-fabric/service-fabric-application-model), which is the unit of packaging and deployment for the orchestrator or cluster. Therefore, the Service Fabric Application could be mapped to this autonomous business and logical microservice boundary or Bounded Context, as well, so you could deploy these services autonomously. ### Service Fabric and containers @@ -167,7 +167,7 @@ Note that you can mix services in processes, and services in containers, in the **Figure 4-13**. Business microservice mapped to a Service Fabric application with containers and stateful services -For more information about container support in Azure Service Fabric, see [Service Fabric and containers](https://docs.microsoft.com/azure/service-fabric/service-fabric-containers-overview). +For more information about container support in Azure Service Fabric, see [Service Fabric and containers](/azure/service-fabric/service-fabric-containers-overview). ## Stateless versus stateful microservices @@ -181,11 +181,11 @@ But the services themselves can also be stateful in Service Fabric, which means In stateless services, the state (persistence, database) is kept out of the microservice.
In stateful services, state is kept inside the microservice. A stateless approach is perfectly valid and is easier to implement than stateful microservices, since the approach is similar to traditional and well-known patterns. But stateless microservices impose latency between the process and data sources. They also involve more moving pieces when you're trying to improve performance with additional cache and queues. The result is that you can end up with complex architectures that have too many tiers. -In contrast, [stateful microservices](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction#when-to-use-reliable-services-apis) can excel in advanced scenarios, because there's no latency between the domain logic and data. Heavy data processing, gaming back ends, databases as a service, and other low-latency scenarios all benefit from stateful services, which enable local state for faster access. +In contrast, [stateful microservices](/azure/service-fabric/service-fabric-reliable-services-introduction#when-to-use-reliable-services-apis) can excel in advanced scenarios, because there's no latency between the domain logic and data. Heavy data processing, gaming back ends, databases as a service, and other low-latency scenarios all benefit from stateful services, which enable local state for faster access. Stateless and stateful services are complementary. For instance, as you can see in the right diagram in Figure 4-14, a stateful service can be split into multiple partitions. To access those partitions, you might need a stateless service acting as a gateway service that knows how to address each partition based on partition keys. -Stateful services do have drawbacks. They impose a high complexity level to be scaled out. Functionality that would usually be implemented by external database systems must be addressed for tasks such as data replication across stateful microservices and data partitioning. 
However, this is one of the areas where an orchestrator like [Azure Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-platform-architecture) with its [stateful reliable services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction#when-to-use-reliable-services-apis) can help the most—by simplifying the development and lifecycle of stateful microservices using the [Reliable Services API](https://docs.microsoft.com/azure/service-fabric/service-fabric-work-with-reliable-collections) and [Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction). +Stateful services do have drawbacks. They impose a high complexity level to be scaled out. Functionality that would usually be implemented by external database systems must be addressed for tasks such as data replication across stateful microservices and data partitioning. However, this is one of the areas where an orchestrator like [Azure Service Fabric](/azure/service-fabric/service-fabric-reliable-services-platform-architecture) with its [stateful reliable services](/azure/service-fabric/service-fabric-reliable-services-introduction#when-to-use-reliable-services-apis) can help the most—by simplifying the development and lifecycle of stateful microservices using the [Reliable Services API](/azure/service-fabric/service-fabric-work-with-reliable-collections) and [Reliable Actors](/azure/service-fabric/service-fabric-reliable-actors-introduction). Other microservice frameworks that allow stateful services, support the Actor pattern, and improve fault tolerance and latency between business logic and data are Microsoft [Orleans](https://github.com/dotnet/orleans), from Microsoft Research, and [Akka.NET](https://getakka.net/). Both frameworks are currently improving their support for Docker. 
@@ -203,7 +203,7 @@ As shown in figure 4-15, applications hosted on Service Fabric Mesh run and scal Under the covers, Service Fabric Mesh consists of clusters of thousands of machines. All cluster operations are hidden from the developer. You simply need to upload your containers and specify resources you need, availability requirements, and resource limits. Service Fabric Mesh automatically allocates the infrastructure requested by your application deployment and also handles infrastructure failures, making sure your applications are highly available. You only need to care about the health and responsiveness of your application, not the infrastructure. -For further information, see the [Service Fabric Mesh documentation](https://docs.microsoft.com/azure/service-fabric-mesh/). +For further information, see the [Service Fabric Mesh documentation](/azure/service-fabric-mesh/). ## Choosing orchestrators in Azure diff --git a/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/state-and-data-in-docker-applications.md b/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/state-and-data-in-docker-applications.md index ea4bf12fa8daa..174aa2cb95d0f 100644 --- a/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/state-and-data-in-docker-applications.md +++ b/docs/architecture/containerized-lifecycle/design-develop-containerized-apps/state-and-data-in-docker-applications.md @@ -21,7 +21,7 @@ From remote storage: - [Azure Storage](https://azure.microsoft.com/documentation/services/storage/) provides geo-distributable storage, providing a good long-term persistence solution for containers. -- Remote relational databases like [Azure SQL Database](https://azure.microsoft.com/services/sql-database/), NoSQL databases like [Azure Cosmos DB](https://docs.microsoft.com/azure/cosmos-db/introduction), or cache services like [Redis](https://redis.io/). 
+- Remote relational databases like [Azure SQL Database](https://azure.microsoft.com/services/sql-database/), NoSQL databases like [Azure Cosmos DB](/azure/cosmos-db/introduction), or cache services like [Redis](https://redis.io/). From the Docker container: diff --git a/docs/architecture/containerized-lifecycle/index.md b/docs/architecture/containerized-lifecycle/index.md index 506fe8c98d859..de131877efcbe 100644 --- a/docs/architecture/containerized-lifecycle/index.md +++ b/docs/architecture/containerized-lifecycle/index.md @@ -11,7 +11,7 @@ ms.date: 07/30/2020 This guide is a general overview for developing and deploying containerized ASP.NET Core applications with Docker, using the Microsoft platform and tools. The guide includes a high-level introduction to Azure DevOps, for implementing CI/CD pipelines, as well as Azure Container Registry (ACR), and Azure Kubernetes Services AKS for deployment. -For low-level, development-related details you can see the [.NET Microservices: Architecture for Containerized .NET Applications](https://docs.microsoft.com/dotnet/architecture/microservices/) guide and it related reference application [eShopOnContainers](https://github.com/dotnet-architecture/eShopOnContainers). +For low-level, development-related details you can see the [.NET Microservices: Architecture for Containerized .NET Applications](../microservices/index.md) guide and its related reference application [eShopOnContainers](https://github.com/dotnet-architecture/eShopOnContainers).
diff --git a/docs/architecture/containerized-lifecycle/run-manage-monitor-docker-environments/manage-production-docker-environments.md b/docs/architecture/containerized-lifecycle/run-manage-monitor-docker-environments/manage-production-docker-environments.md index db68477bdd5fb..d3f2cf4ad33ed 100644 --- a/docs/architecture/containerized-lifecycle/run-manage-monitor-docker-environments/manage-production-docker-environments.md +++ b/docs/architecture/containerized-lifecycle/run-manage-monitor-docker-environments/manage-production-docker-environments.md @@ -19,10 +19,10 @@ Table 6-1 lists common management tools related to their orchestrators, schedule | Management tools | Description | Related orchestrators | |------------------|-------------|-----------------------| -| [Azure Monitor for Containers](https://docs.microsoft.com/azure/monitoring/monitoring-container-insights-overview) | Azure dedicated Kubernetes management tool | Azure Kubernetes Services (AKS) | +| [Azure Monitor for Containers](/azure/monitoring/monitoring-container-insights-overview) | Azure dedicated Kubernetes management tool | Azure Kubernetes Services (AKS) | | [Kubernetes Web UI (dashboard)](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) | Kubernetes management tool, can monitor and manage local Kubernetes cluster | Azure Kubernetes Service (AKS)
Local Kubernetes | -| [Azure portal for Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-creation-via-portal)
[Azure Service Fabric Explorer](https://docs.microsoft.com/azure/service-fabric/service-fabric-visualizing-your-cluster) | Online and desktop version for managing Service Fabric clusters, on Azure, on premises, local development, and other clouds | Azure Service Fabric | -| [Container Monitoring (Azure Monitor)](https://docs.microsoft.com/azure/azure-monitor/insights/containers) | General container management y monitoring solution. Can manage Kubernetes clusters through [Azure Monitor for Containers](https://docs.microsoft.com/azure/monitoring/monitoring-container-insights-overview). | Azure Service Fabric
Azure Kubernetes Service (AKS)
Mesosphere DC/OS and others. | +| [Azure portal for Service Fabric](/azure/service-fabric/service-fabric-cluster-creation-via-portal)
[Azure Service Fabric Explorer](/azure/service-fabric/service-fabric-visualizing-your-cluster) | Online and desktop version for managing Service Fabric clusters, on Azure, on premises, local development, and other clouds | Azure Service Fabric | +| [Container Monitoring (Azure Monitor)](/azure/azure-monitor/insights/containers) | General container management and monitoring solution. Can manage Kubernetes clusters through [Azure Monitor for Containers](/azure/monitoring/monitoring-container-insights-overview). | Azure Service Fabric
Azure Kubernetes Service (AKS)
Mesosphere DC/OS and others. | ## Azure Service Fabric @@ -30,9 +30,9 @@ Another choice for cluster-deployment and management is Azure Service Fabric. [S The following are Service Fabric management tools: -- [Azure portal for Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-creation-via-portal) cluster-related operations (create/update/delete) a cluster or configure its infrastructure (VMs, load balancer, networking, etc.) +- [Azure portal for Service Fabric](/azure/service-fabric/service-fabric-cluster-creation-via-portal) cluster-related operations (create/update/delete) a cluster or configure its infrastructure (VMs, load balancer, networking, etc.) -- [Azure Service Fabric Explorer](https://docs.microsoft.com/azure/service-fabric/service-fabric-visualizing-your-cluster) is a specialized web UI and desktop multi-platform tool that provides insights and certain operations on the Service Fabric cluster, from the nodes/VMs point of view and from the application and services point of view. +- [Azure Service Fabric Explorer](/azure/service-fabric/service-fabric-visualizing-your-cluster) is a specialized web UI and desktop multi-platform tool that provides insights and certain operations on the Service Fabric cluster, from the nodes/VMs point of view and from the application and services point of view. 
>[!div class="step-by-step"] >[Previous](run-microservices-based-applications-in-production.md) diff --git a/docs/architecture/containerized-lifecycle/what-is-docker.md b/docs/architecture/containerized-lifecycle/what-is-docker.md index c061cb09ae62e..0a3c20bd6b110 100644 --- a/docs/architecture/containerized-lifecycle/what-is-docker.md +++ b/docs/architecture/containerized-lifecycle/what-is-docker.md @@ -23,7 +23,7 @@ To run [Windows Containers](/virtualization/windowscontainers/about/), there are - **Hyper-V Containers** expand on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host isn't shared with the Hyper-V Containers, providing better isolation. -The images for these containers are created and work just the same way. The difference is in how the container is created from the image—running a Hyper-V Container requires an extra parameter. For details, see [Hyper-V Containers](https://docs.microsoft.com/virtualization/windowscontainers/manage-containers/hyperv-container). +The images for these containers are created and work just the same way. The difference is in how the container is created from the image—running a Hyper-V Container requires an extra parameter. For details, see [Hyper-V Containers](/virtualization/windowscontainers/manage-containers/hyperv-container). ## Comparing Docker containers with virtual machines diff --git a/docs/architecture/grpc-for-wcf-developers/appendix.md b/docs/architecture/grpc-for-wcf-developers/appendix.md index 8c6bc38001376..22e288d33bdc8 100644 --- a/docs/architecture/grpc-for-wcf-developers/appendix.md +++ b/docs/architecture/grpc-for-wcf-developers/appendix.md @@ -6,7 +6,7 @@ ms.date: 09/02/2019 # Appendix A - Transactions -Windows Communication Foundation (WCF) supports distributed transactions, allowing you to perform atomic operations across multiple services. 
This functionality is based on the [Microsoft Distributed Transaction Coordinator](https://docs.microsoft.com/previous-versions/windows/desktop/ms684146(v=vs.85)). +Windows Communication Foundation (WCF) supports distributed transactions, allowing you to perform atomic operations across multiple services. This functionality is based on the [Microsoft Distributed Transaction Coordinator](/previous-versions/windows/desktop/ms684146(v=vs.85)). In the newer microservices landscape, this type of automated distributed transaction processing isn't possible. There are too many different technologies involved, including relational databases, NoSQL data stores, and messaging systems. There might also be a mix of operating systems, programming languages, and frameworks in use in a single environment. @@ -16,7 +16,7 @@ If possible, it's best to avoid distributed transactions altogether. If two item If that isn't possible, then one alternative is to use the [Saga pattern](https://microservices.io/patterns/data/saga.html). In a saga, updates are processed sequentially; as each update succeeds, the next one is triggered. These triggers can be propagated from service to service, or managed by a saga coordinator or orchestrator. If an update fails at any point during the process, the services that have already completed their updates apply specific logic to reverse them. -Another option is to use Domain Driven Design (DDD) and Command/Query Responsibility Segregation (CQRS), as described in the [.NET Microservices e-book](https://docs.microsoft.com/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/). In particular, using domain events or [event sourcing](https://martinfowler.com/eaaDev/EventSourcing.html) can help to ensure that updates are consistently, if not immediately, applied. 
+Another option is to use Domain Driven Design (DDD) and Command/Query Responsibility Segregation (CQRS), as described in the [.NET Microservices e-book](../microservices/microservice-ddd-cqrs-patterns/index.md). In particular, using domain events or [event sourcing](https://martinfowler.com/eaaDev/EventSourcing.html) can help to ensure that updates are consistently, if not immediately, applied. >[!div class="step-by-step"] >[Previous](application-performance-management.md) diff --git a/docs/architecture/grpc-for-wcf-developers/application-performance-management.md b/docs/architecture/grpc-for-wcf-developers/application-performance-management.md index 66842596ef840..10693fb753047 100644 --- a/docs/architecture/grpc-for-wcf-developers/application-performance-management.md +++ b/docs/architecture/grpc-for-wcf-developers/application-performance-management.md @@ -20,7 +20,7 @@ In production environments like Kubernetes, it's important to monitor applicatio ## Logging in ASP.NET Core gRPC -ASP.NET Core provides built-in support for logging, in the form of the [Microsoft.Extensions.Logging](https://www.nuget.org/packages/Microsoft.Extensions.Logging) NuGet package. The core parts of this library are included with the Web SDK, so there's no need to install it manually. By default, log messages are written to the standard output (the "console") and to any attached debugger. To write logs to persistent external data stores, you might need to import [optional logging sink packages](https://docs.microsoft.com/aspnet/core/fundamentals/logging/?view=aspnetcore-3.0#third-party-logging-providers). +ASP.NET Core provides built-in support for logging, in the form of the [Microsoft.Extensions.Logging](https://www.nuget.org/packages/Microsoft.Extensions.Logging) NuGet package. The core parts of this library are included with the Web SDK, so there's no need to install it manually. By default, log messages are written to the standard output (the "console") and to any attached debugger. 
To write logs to persistent external data stores, you might need to import [optional logging sink packages](/aspnet/core/fundamentals/logging/?view=aspnetcore-3.0#third-party-logging-providers). The ASP.NET Core gRPC framework writes detailed diagnostic logging messages to this logging framework, so they can be processed and stored along with your application's own messages. @@ -48,7 +48,7 @@ For more information about writing log messages and available logging sinks and The .NET Core runtime provides a set of components for emitting and observing metrics. These include APIs such as the and classes. These APIs can emit basic numeric data that can be consumed by external processes, like the [dotnet-counters global tool](../../core/diagnostics/dotnet-counters.md), or Event Tracing for Windows. For more information about using `EventCounter` in your own code, see [EventCounter introduction](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.Tracing/documentation/EventCounterTutorial.md). -For more advanced metrics and for writing metric data to a wider range of data stores, you might try an open-source project called [App Metrics](https://www.app-metrics.io). This suite of libraries provides an extensive set of types to instrument your code. It also offers packages to write metrics to different kinds of targets that include time-series databases, such as Prometheus and InfluxDB, and [Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/app-insights-overview). The [App.Metrics.AspNetCore.Mvc](https://www.nuget.org/packages/App.Metrics.AspNetCore.Mvc/) NuGet package even adds a comprehensive set of basic metrics that are automatically generated via integration with the ASP.NET Core framework. The project website provides [templates](https://www.app-metrics.io/samples/grafana/) for displaying those metrics with the [Grafana](https://grafana.com/) visualization platform. 
+For more advanced metrics and for writing metric data to a wider range of data stores, you might try an open-source project called [App Metrics](https://www.app-metrics.io). This suite of libraries provides an extensive set of types to instrument your code. It also offers packages to write metrics to different kinds of targets that include time-series databases, such as Prometheus and InfluxDB, and [Application Insights](/azure/azure-monitor/app/app-insights-overview). The [App.Metrics.AspNetCore.Mvc](https://www.nuget.org/packages/App.Metrics.AspNetCore.Mvc/) NuGet package even adds a comprehensive set of basic metrics that are automatically generated via integration with the ASP.NET Core framework. The project website provides [templates](https://www.app-metrics.io/samples/grafana/) for displaying those metrics with the [Grafana](https://grafana.com/) visualization platform. ### Produce metrics @@ -93,7 +93,7 @@ public class StockData : Stocks.StocksBase ### Store and visualize metrics data -The best way to store metrics data is in a *time-series database*, a specialized data store designed to record numerical data series marked with timestamps. The most popular of these databases are [Prometheus](https://prometheus.io/) and [InfluxDB](https://www.influxdata.com/products/influxdb-overview/). Microsoft Azure also provides dedicated metrics storage through the [Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/overview) service. +The best way to store metrics data is in a *time-series database*, a specialized data store designed to record numerical data series marked with timestamps. The most popular of these databases are [Prometheus](https://prometheus.io/) and [InfluxDB](https://www.influxdata.com/products/influxdb-overview/). Microsoft Azure also provides dedicated metrics storage through the [Azure Monitor](/azure/azure-monitor/overview) service. 
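As a small illustration of the time-series storage option mentioned above, a minimal Prometheus scrape configuration might look like the following sketch. The job name, target host, and port are assumptions for a service exposing Prometheus-format metrics; they aren't taken from the StockData sample:

```yaml
# prometheus.yml (illustrative sketch, not from the sample)
global:
  scrape_interval: 15s             # how often Prometheus polls each target

scrape_configs:
  - job_name: "stockdata"          # hypothetical job name
    metrics_path: /metrics         # assumed metrics endpoint path
    static_configs:
      - targets: ["stockdata:5000"]  # assumed host:port of the service
```

Once Prometheus is scraping the endpoint, the stored series can be queried directly or used as a Grafana data source.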
The current go-to solution for visualizing metrics data is [Grafana](https://grafana.com), which works with a wide range of storage providers. The following image shows an example Grafana dashboard that displays metrics from the Linkerd service mesh running the StockData sample: diff --git a/docs/architecture/grpc-for-wcf-developers/docker.md b/docs/architecture/grpc-for-wcf-developers/docker.md index 35b6c201dc7ec..4bcead83eb573 100644 --- a/docs/architecture/grpc-for-wcf-developers/docker.md +++ b/docs/architecture/grpc-for-wcf-developers/docker.md @@ -132,7 +132,7 @@ The `-ti` flag connects your current terminal to the container's terminal, and r ## Push the image to a registry -After you've verified that the image works, push it to a Docker registry to make it available on other systems. Internal networks will need to provision a Docker registry. This can be as simple as running [Docker's own `registry` image](https://docs.docker.com/registry/deploying/) (the Docker registry runs in a Docker container), but there are various more comprehensive solutions available. For external sharing and cloud use, there are various managed registries available, such as [Azure Container Registry](https://docs.microsoft.com/azure/container-registry/) or [Docker Hub](https://docs.docker.com/docker-hub/repos/). +After you've verified that the image works, push it to a Docker registry to make it available on other systems. Internal networks will need to provision a Docker registry. This can be as simple as running [Docker's own `registry` image](https://docs.docker.com/registry/deploying/) (the Docker registry runs in a Docker container), but there are various more comprehensive solutions available. For external sharing and cloud use, there are various managed registries available, such as [Azure Container Registry](/azure/container-registry/) or [Docker Hub](https://docs.docker.com/docker-hub/repos/). 
To push to Docker Hub, prefix the image name with your user or organization name. diff --git a/docs/architecture/index.yml b/docs/architecture/index.yml index b6c7c588343c2..7df07cee879d1 100644 --- a/docs/architecture/index.yml +++ b/docs/architecture/index.yml @@ -40,7 +40,7 @@ landingContent: - text: Hello World Microservice tutorial url: https://dotnet.microsoft.com/learn/aspnet/microservice-tutorial/intro - text: Create and deploy a cloud-native ASP.NET Core microservice - url: https://docs.microsoft.com/learn/modules/microservices-aspnet-core + url: /learn/modules/microservices-aspnet-core # Card - title: Migrate .NET apps to Azure @@ -68,4 +68,4 @@ landingContent: - linkListType: concept links: - text: Modernizing desktop apps on Windows 10 with .NET Core 3.1 - url: modernize-desktop/index.md + url: modernize-desktop/index.md \ No newline at end of file diff --git a/docs/architecture/microservices/architect-microservice-container-applications/asynchronous-message-based-communication.md b/docs/architecture/microservices/architect-microservice-container-applications/asynchronous-message-based-communication.md index dcd6c8b0add63..a64219ed02f92 100644 --- a/docs/architecture/microservices/architect-microservice-container-applications/asynchronous-message-based-communication.md +++ b/docs/architecture/microservices/architect-microservice-container-applications/asynchronous-message-based-communication.md @@ -54,7 +54,7 @@ An important point is that you might want to communicate to multiple microservic In asynchronous event-driven communication, one microservice publishes events to an event bus and many microservices can subscribe to it, to get notified and act on it. Your implementation will determine what protocol to use for event-driven, message-based communications. [AMQP](https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol) can help achieve reliable queued communication. 
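The tag-then-push flow described above can be sketched as follows. The registry login server and image names are hypothetical, and the `docker` commands are composed as strings rather than executed, since pushing requires a reachable registry and credentials:

```shell
# Sketch: publish a locally built image to a registry (names are hypothetical).
REGISTRY="myregistry.azurecr.io"   # an ACR login server, or a Docker Hub user/org name
IMAGE="myservice"
TAG="1.0.0"
REMOTE="$REGISTRY/$IMAGE:$TAG"

# Step 1: re-tag the local image with the registry prefix.
# Step 2: push; Docker resolves the target registry from the name prefix.
echo "docker tag $IMAGE:latest $REMOTE"
echo "docker push $REMOTE"
```

For Docker Hub, the prefix is simply the user or organization name, so the remote name would look like `myorg/myservice:1.0.0` instead.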
-When you use an event bus, you might want to use an abstraction level (like an event bus interface) based on a related implementation in classes with code using the API from a message broker like [RabbitMQ](https://www.rabbitmq.com/) or a service bus like [Azure Service Bus with Topics](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions). Alternatively, you might want to use a higher-level service bus like NServiceBus, MassTransit, or Brighter to articulate your event bus and publish/subscribe system. +When you use an event bus, you might want to use an abstraction level (like an event bus interface) based on a related implementation in classes with code using the API from a message broker like [RabbitMQ](https://www.rabbitmq.com/) or a service bus like [Azure Service Bus with Topics](/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions). Alternatively, you might want to use a higher-level service bus like NServiceBus, MassTransit, or Brighter to articulate your event bus and publish/subscribe system. ## A note about messaging technologies for production systems @@ -70,7 +70,7 @@ A challenge when implementing an event-driven architecture across multiple micro - Using [transaction log mining](https://www.scoop.it/t/sql-server-transaction-log-mining). -- Using full [Event Sourcing](https://docs.microsoft.com/azure/architecture/patterns/event-sourcing) pattern. +- Using full [Event Sourcing](/azure/architecture/patterns/event-sourcing) pattern. - Using the [Outbox pattern](https://www.kamilgrzybek.com/design/the-outbox-pattern/): a transactional database table as a message queue that will be the base for an event-creator component that would create the event and publish it. 
diff --git a/docs/architecture/microservices/architect-microservice-container-applications/communication-in-microservice-architecture.md b/docs/architecture/microservices/architect-microservice-container-applications/communication-in-microservice-architecture.md index 899f325300772..ca31b025a7bc0 100644 --- a/docs/architecture/microservices/architect-microservice-container-applications/communication-in-microservice-architecture.md +++ b/docs/architecture/microservices/architect-microservice-container-applications/communication-in-microservice-architecture.md @@ -29,7 +29,7 @@ The second axis defines if the communication has a single receiver or multiple r - Single receiver. Each request must be processed by exactly one receiver or service. An example of this communication is the [Command pattern](https://en.wikipedia.org/wiki/Command_pattern). -- Multiple receivers. Each request can be processed by zero to multiple receivers. This type of communication must be asynchronous. An example is the [publish/subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) mechanism used in patterns like [Event-driven architecture](https://microservices.io/patterns/data/event-driven-architecture.html). This is based on an event-bus interface or message broker when propagating data updates between multiple microservices through events; it's usually implemented through a service bus or similar artifact like [Azure Service Bus](https://azure.microsoft.com/services/service-bus/) by using [topics and subscriptions](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions). +- Multiple receivers. Each request can be processed by zero to multiple receivers. This type of communication must be asynchronous. 
An example is the [publish/subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) mechanism used in patterns like [Event-driven architecture](https://microservices.io/patterns/data/event-driven-architecture.html). This is based on an event-bus interface or message broker when propagating data updates between multiple microservices through events; it's usually implemented through a service bus or similar artifact like [Azure Service Bus](https://azure.microsoft.com/services/service-bus/) by using [topics and subscriptions](/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions). A microservice-based application will often use a combination of these communication styles. The most common type is single-receiver communication with a synchronous protocol like HTTP/HTTPS when invoking a regular Web API HTTP service. Microservices also typically use messaging protocols for asynchronous communication between microservices. @@ -75,7 +75,7 @@ When a client uses request/response communication, it sends a request to a servi **Figure 4-16**. Using HTTP request/response communication (synchronous or asynchronous) -When a client uses request/response communication, it assumes that the response will arrive in a short time, typically less than a second, or a few seconds at most. For delayed responses, you need to implement asynchronous communication based on [messaging patterns](https://docs.microsoft.com/azure/architecture/patterns/category/messaging) and [messaging technologies](https://en.wikipedia.org/wiki/Message-oriented_middleware), which is a different approach that we explain in the next section. +When a client uses request/response communication, it assumes that the response will arrive in a short time, typically less than a second, or a few seconds at most. 
For delayed responses, you need to implement asynchronous communication based on [messaging patterns](/azure/architecture/patterns/category/messaging) and [messaging technologies](https://en.wikipedia.org/wiki/Message-oriented_middleware), which is a different approach that we explain in the next section. A popular architectural style for request/response communication is [REST](https://en.wikipedia.org/wiki/Representational_state_transfer). This approach is based on, and tightly coupled to, the [HTTP](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol) protocol, embracing HTTP verbs like GET, POST, and PUT. REST is the most commonly used architectural communication approach when creating services. You can implement REST services when you develop ASP.NET Core Web API services. diff --git a/docs/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern.md b/docs/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern.md index 56b2150a605c7..d33985b7dc2b0 100644 --- a/docs/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern.md +++ b/docs/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern.md @@ -19,7 +19,7 @@ In this approach, each microservice has a public endpoint, sometimes with a diff `http://eshoponcontainers.westus.cloudapp.azure.com:88/` -In a production environment based on a cluster, that URL would map to the load balancer used in the cluster, which in turn distributes the requests across the microservices. 
In production environments, you could have an Application Delivery Controller (ADC) like [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/application-gateway-introduction) between your microservices and the Internet. This acts as a transparent tier that not only performs load balancing, but secures your services by offering SSL termination. This improves the load of your hosts by offloading CPU-intensive SSL termination and other routing duties to the Azure Application Gateway. In any case, a load balancer and ADC are transparent from a logical application architecture point of view. +In a production environment based on a cluster, that URL would map to the load balancer used in the cluster, which in turn distributes the requests across the microservices. In production environments, you could have an Application Delivery Controller (ADC) like [Azure Application Gateway](/azure/application-gateway/application-gateway-introduction) between your microservices and the Internet. This acts as a transparent tier that not only performs load balancing, but secures your services by offering SSL termination. This reduces the load on your hosts by offloading CPU-intensive SSL termination and other routing duties to the Azure Application Gateway. In any case, a load balancer and ADC are transparent from a logical application architecture point of view.
Depending on the product it might of **Reverse proxy or gateway routing.** The API Gateway offers a reverse proxy to redirect or route requests (layer 7 routing, usually HTTP requests) to the endpoints of the internal microservices. The gateway provides a single endpoint or URL for the client apps and then internally maps the requests to a group of internal microservices. This routing feature helps to decouple the client apps from the microservices but it's also convenient when modernizing a monolithic API by sitting the API Gateway in between the monolithic API and the client apps, then you can add new APIs as new microservices while still using the legacy monolithic API until it's split into many microservices in the future. Because of the API Gateway, the client apps won't notice if the APIs being used are implemented as internal microservices or a monolithic API and more importantly, when evolving and refactoring the monolithic API into microservices, thanks to the API Gateway routing, client apps won't be impacted with any URI change. -For more information, see [Gateway routing pattern](https://docs.microsoft.com/azure/architecture/patterns/gateway-routing). +For more information, see [Gateway routing pattern](/azure/architecture/patterns/gateway-routing). **Requests aggregation.** As part of the gateway pattern you can aggregate multiple client requests (usually HTTP requests) targeting multiple internal microservices into a single client request. This pattern is especially convenient when a client page/screen needs information from several microservices. With this approach, the client app sends a single request to the API Gateway that dispatches several requests to the internal microservices and then aggregates the results and sends everything back to the client app. 
The main benefit and goal of this design pattern is to reduce chattiness between the client apps and the backend API, which is especially important for remote apps out of the datacenter where the microservices live, like mobile apps or requests coming from SPA apps that come from JavaScript in client remote browsers. For regular web apps performing the requests in the server environment (like an ASP.NET Core MVC web app), this pattern is not so important as the latency is very much smaller than for remote client apps. Depending on the API Gateway product you use, it might be able to perform this aggregation. However, in many cases it's more flexible to create aggregation microservices under the scope of the API Gateway, so you define the aggregation in code (that is, C# code): -For more information, see [Gateway aggregation pattern](https://docs.microsoft.com/azure/architecture/patterns/gateway-aggregation). +For more information, see [Gateway aggregation pattern](/azure/architecture/patterns/gateway-aggregation). **Cross-cutting concerns or gateway offloading.** Depending on the features offered by each API Gateway product, you can offload functionality from individual microservices to the gateway, which simplifies the implementation of each microservice by consolidating cross-cutting concerns into one tier. This is especially convenient for specialized features that can be complex to implement properly in every internal microservice, such as the following functionality: @@ -109,7 +109,7 @@ For more information, see [Gateway aggregation pattern](https://docs.microsoft.c - Headers, query strings, and claims transformation - IP whitelisting -For more information, see [Gateway offloading pattern](https://docs.microsoft.com/azure/architecture/patterns/gateway-offloading). +For more information, see [Gateway offloading pattern](/azure/architecture/patterns/gateway-offloading). 
## Using products with API Gateway features diff --git a/docs/architecture/microservices/architect-microservice-container-applications/distributed-data-management.md b/docs/architecture/microservices/architect-microservice-container-applications/distributed-data-management.md index 2460cfc6001e3..58d92e9576f5f 100644 --- a/docs/architecture/microservices/architect-microservice-container-applications/distributed-data-management.md +++ b/docs/architecture/microservices/architect-microservice-container-applications/distributed-data-management.md @@ -19,7 +19,7 @@ A second challenge is how to implement queries that retrieve data from several m **API Gateway.** For simple data aggregation from multiple microservices that own different databases, the recommended approach is an aggregation microservice referred to as an API Gateway. However, you need to be careful about implementing this pattern, because it can be a choke point in your system, and it can violate the principle of microservice autonomy. To mitigate this possibility, you can have multiple fined-grained API Gateways each one focusing on a vertical "slice" or business area of the system. The API Gateway pattern is explained in more detail in the [API Gateway section](direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md#why-consider-api-gateways-instead-of-direct-client-to-microservice-communication) later. -**CQRS with query/reads tables.** Another solution for aggregating data from multiple microservices is the [Materialized View pattern](https://docs.microsoft.com/azure/architecture/patterns/materialized-view). In this approach, you generate, in advance (prepare denormalized data before the actual queries happen), a read-only table with the data that's owned by multiple microservices. The table has a format suited to the client app's needs. 
+**CQRS with query/reads tables.** Another solution for aggregating data from multiple microservices is the [Materialized View pattern](/azure/architecture/patterns/materialized-view). In this approach, you generate, in advance (prepare denormalized data before the actual queries happen), a read-only table with the data that's owned by multiple microservices. The table has a format suited to the client app's needs. Consider something like the screen for a mobile app. If you have a single database, you might pull together the data for that screen using a SQL query that performs a complex join involving multiple tables. However, when you have multiple databases, and each database is owned by a different microservice, you cannot query those databases and create a SQL join. Your complex query becomes a challenge. You can address the requirement using a CQRS approach—you create a denormalized table in a different database that's used just for queries. The table can be designed specifically for the data you need for the complex query, with a one-to-one relationship between fields needed by your application's screen and the columns in the query table. It could also serve for reporting purposes. diff --git a/docs/architecture/microservices/architect-microservice-container-applications/docker-application-state-data.md b/docs/architecture/microservices/architect-microservice-container-applications/docker-application-state-data.md index 8051493bab138..c73ef8002e018 100644 --- a/docs/architecture/microservices/architect-microservice-container-applications/docker-application-state-data.md +++ b/docs/architecture/microservices/architect-microservice-container-applications/docker-application-state-data.md @@ -21,7 +21,7 @@ From remote storage: - [Azure Storage](https://azure.microsoft.com/documentation/services/storage/), which provides geo-distributable storage, providing a good long-term persistence solution for containers. 
-- Remote relational databases like [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) or NoSQL databases like [Azure Cosmos DB](https://docs.microsoft.com/azure/cosmos-db/introduction), or cache services like [Redis](https://redis.io/). +- Remote relational databases like [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) or NoSQL databases like [Azure Cosmos DB](/azure/cosmos-db/introduction), or cache services like [Redis](https://redis.io/). From the Docker container: diff --git a/docs/architecture/microservices/architect-microservice-container-applications/resilient-high-availability-microservices.md b/docs/architecture/microservices/architect-microservice-container-applications/resilient-high-availability-microservices.md index e77c0987596d5..84551eb58cbd3 100644 --- a/docs/architecture/microservices/architect-microservice-container-applications/resilient-high-availability-microservices.md +++ b/docs/architecture/microservices/architect-microservice-container-applications/resilient-high-availability-microservices.md @@ -34,7 +34,7 @@ Logs provide information about how an application or service is running, includi In monolithic server-based applications, you can simply write logs to a file on disk (a logfile) and then analyze it with any tool. Since application execution is limited to a fixed server or VM, it generally isn't too complex to analyze the flow of events. However, in a distributed application where multiple services are executed across many nodes in an orchestrator cluster, being able to correlate distributed events is a challenge. -A microservice-based application should not try to store the output stream of events or logfiles by itself, and not even try to manage the routing of the events to a central place. 
It should be transparent, meaning that each process should just write its event stream to a standard output that underneath will be collected by the execution environment infrastructure where it's running. An example of these event stream routers is [Microsoft.Diagnostic.EventFlow](https://github.com/Azure/diagnostics-eventflow), which collects event streams from multiple sources and publishes it to output systems. These can include simple standard output for a development environment or cloud systems like [Azure Monitor](https://azure.microsoft.com/services/monitor//) and [Azure Diagnostics](https://docs.microsoft.com/azure/azure-monitor/platform/diagnostics-extension-overview). There are also good third-party log analysis platforms and tools that can search, alert, report, and monitor logs, even in real time, like [Splunk](https://www.splunk.com/goto/Splunk_Log_Management?ac=ga_usa_log_analysis_phrase_Mar17&_kk=logs%20analysis&gclid=CNzkzIrex9MCFYGHfgodW5YOtA). +A microservice-based application should not try to store the output stream of events or logfiles by itself, and not even try to manage the routing of the events to a central place. It should be transparent, meaning that each process should just write its event stream to a standard output that underneath will be collected by the execution environment infrastructure where it's running. An example of these event stream routers is [Microsoft.Diagnostic.EventFlow](https://github.com/Azure/diagnostics-eventflow), which collects event streams from multiple sources and publishes it to output systems. These can include simple standard output for a development environment or cloud systems like [Azure Monitor](https://azure.microsoft.com/services/monitor//) and [Azure Diagnostics](/azure/azure-monitor/platform/diagnostics-extension-overview). 
There are also good third-party log analysis platforms and tools that can search, alert, report, and monitor logs, even in real time, like [Splunk](https://www.splunk.com/goto/Splunk_Log_Management?ac=ga_usa_log_analysis_phrase_Mar17&_kk=logs%20analysis&gclid=CNzkzIrex9MCFYGHfgodW5YOtA). ### Orchestrators managing health and diagnostics information diff --git a/docs/architecture/microservices/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md b/docs/architecture/microservices/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md index 73705caa457a2..e2ddb0211ec69 100644 --- a/docs/architecture/microservices/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md +++ b/docs/architecture/microservices/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md @@ -62,7 +62,7 @@ In the development environment, [Docker announced in July 2018](https://blog.doc ## Getting started with Azure Kubernetes Service (AKS) -To begin using AKS, you deploy an AKS cluster from the Azure portal or by using the CLI. For more information on deploying a Kubernetes cluster in Azure, see [Deploy an Azure Kubernetes Service (AKS) cluster](https://docs.microsoft.com/azure/aks/kubernetes-walkthrough-portal). +To begin using AKS, you deploy an AKS cluster from the Azure portal or by using the CLI. For more information on deploying a Kubernetes cluster in Azure, see [Deploy an Azure Kubernetes Service (AKS) cluster](/azure/aks/kubernetes-walkthrough-portal). There are no fees for any of the software installed by default as part of AKS. All default options are implemented with open-source software. AKS is available for multiple virtual machines in Azure. 
You're charged only for the compute instances you choose, as well as the other underlying infrastructure resources consumed, such as storage and networking. There are no incremental charges for AKS itself. @@ -74,7 +74,7 @@ When deploying an application to a Kubernetes cluster, you can use the original Helm Charts helps you define, version, install, share, upgrade or rollback even the most complex Kubernetes application. -Going further, Helm usage is also recommended because additional Kubernetes environments in Azure, such as [Azure Dev Spaces](https://docs.microsoft.com/azure/dev-spaces/azure-dev-spaces) are also based on Helm charts. +Going further, Helm usage is also recommended because additional Kubernetes environments in Azure, such as [Azure Dev Spaces](/azure/dev-spaces/azure-dev-spaces) are also based on Helm charts. Helm is maintained by the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/) - in collaboration with Microsoft, Google, Bitnami and the Helm contributor community. @@ -82,7 +82,7 @@ For more implementation information on Helm charts and Kubernetes, see the [Usin ## Use Azure Dev Spaces for your Kubernetes application lifecycle -[Azure Dev Spaces](https://docs.microsoft.com/azure/dev-spaces/azure-dev-spaces) provides a rapid, iterative Kubernetes development experience for teams. With minimal dev machine setup, you can iteratively run and debug containers directly in Azure Kubernetes Service (AKS). Develop on Windows, Mac, or Linux using familiar tools like Visual Studio, Visual Studio Code, or the command line. +[Azure Dev Spaces](/azure/dev-spaces/azure-dev-spaces) provides a rapid, iterative Kubernetes development experience for teams. With minimal dev machine setup, you can iteratively run and debug containers directly in Azure Kubernetes Service (AKS). Develop on Windows, Mac, or Linux using familiar tools like Visual Studio, Visual Studio Code, or the command line. 
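The Helm chart lifecycle mentioned above (define, install, upgrade, rollback) can be sketched with hypothetical release and chart names. Helm 3 syntax is assumed, and the commands are composed as strings rather than executed, since they need a live Kubernetes cluster:

```shell
# Sketch: the Helm release lifecycle for a chart (names are hypothetical).
RELEASE="eshop-web"
CHART="./charts/eshop-web"

INSTALL_CMD="helm install $RELEASE $CHART"   # first deployment of the chart
UPGRADE_CMD="helm upgrade $RELEASE $CHART"   # roll out a new revision
ROLLBACK_CMD="helm rollback $RELEASE 1"      # return to revision 1 if needed

echo "$INSTALL_CMD"
echo "$UPGRADE_CMD"
echo "$ROLLBACK_CMD"
```

Each `upgrade` creates a new numbered revision of the release, which is what makes the `rollback` step possible.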
As mentioned, Azure Dev Spaces uses Helm charts when deploying the container-based applications. @@ -102,7 +102,7 @@ This feature is based on URL prefixes, so when using any dev space prefix in the To get a practical view on a concrete example, see the [eShopOnContainers wiki page on Azure Dev Spaces](https://github.com/dotnet-architecture/eShopOnContainers/wiki/Azure-Dev-Spaces). -For further information check the article on [Team Development with Azure Dev Spaces](https://docs.microsoft.com/azure/dev-spaces/team-development-netcore). +For further information check the article on [Team Development with Azure Dev Spaces](/azure/dev-spaces/team-development-netcore). ## Additional resources diff --git a/docs/architecture/microservices/container-docker-introduction/docker-defined.md b/docs/architecture/microservices/container-docker-introduction/docker-defined.md index a3d9ecb8b9874..4a6c4af7f9b4c 100644 --- a/docs/architecture/microservices/container-docker-introduction/docker-defined.md +++ b/docs/architecture/microservices/container-docker-introduction/docker-defined.md @@ -23,7 +23,7 @@ To run [Windows Containers](/virtualization/windowscontainers/about/), there are - Hyper-V Containers expand on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host isn't shared with the Hyper-V Containers, providing better isolation. -The images for these containers are created the same way and function the same. The difference is in how the container is created from the image running a Hyper-V Container requires an extra parameter. For details, see [Hyper-V Containers](https://docs.microsoft.com/virtualization/windowscontainers/manage-containers/hyperv-container). +The images for these containers are created the same way and function the same. 
The difference is in how the container is created from the image; running a Hyper-V Container requires an extra parameter. For details, see [Hyper-V Containers](/virtualization/windowscontainers/manage-containers/hyperv-container). ## Comparing Docker containers with virtual machines diff --git a/docs/architecture/microservices/docker-application-development-process/docker-app-development-workflow.md b/docs/architecture/microservices/docker-application-development-process/docker-app-development-workflow.md index e098fe7078d09..31c1cdf26fcf2 100644 --- a/docs/architecture/microservices/docker-application-development-process/docker-app-development-workflow.md +++ b/docs/architecture/microservices/docker-application-development-process/docker-app-development-workflow.md @@ -551,7 +551,7 @@ In addition, you need to perform step 2 (adding Docker support to your projects) ## Using PowerShell commands in a Dockerfile to set up Windows Containers -[Windows Containers](https://docs.microsoft.com/virtualization/windowscontainers/about/index) allow you to convert your existing Windows applications into Docker images and deploy them with the same tools as the rest of the Docker ecosystem. To use Windows Containers, you run PowerShell commands in the Dockerfile, as shown in the following example: +[Windows Containers](/virtualization/windowscontainers/about/index) allow you to convert your existing Windows applications into Docker images and deploy them with the same tools as the rest of the Docker ecosystem.
To use Windows Containers, you run PowerShell commands in the Dockerfile, as shown in the following example: ```dockerfile FROM mcr.microsoft.com/windows/servercore diff --git a/docs/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests.md b/docs/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests.md index 77c752862b191..23cde27228f05 100644 --- a/docs/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests.md +++ b/docs/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests.md @@ -17,7 +17,7 @@ Therefore, `HttpClient` is intended to be instantiated once and reused throughou Another issue that developers run into is when using a shared instance of `HttpClient` in long-running processes. In a situation where the HttpClient is instantiated as a singleton or a static object, it fails to handle the DNS changes as described in this [issue](https://github.com/dotnet/runtime/issues/18348) of the dotnet/runtime GitHub repository. -However, the issue isn't really with `HttpClient` per se, but with the [default constructor for HttpClient](https://docs.microsoft.com/dotnet/api/system.net.http.httpclient.-ctor?view=netcore-3.1#System_Net_Http_HttpClient__ctor), because it creates a new concrete instance of <xref:System.Net.Http.HttpMessageHandler>, which is the one that has *sockets exhaustion* and DNS changes issues mentioned above. +However, the issue isn't really with `HttpClient` per se, but with the [default constructor for HttpClient](/dotnet/api/system.net.http.httpclient.-ctor?view=netcore-3.1#System_Net_Http_HttpClient__ctor), because it creates a new concrete instance of <xref:System.Net.Http.HttpMessageHandler>, which is the one that has *sockets exhaustion* and DNS changes issues mentioned above.
To address the issues mentioned above and to make `HttpClient` instances manageable, .NET Core 2.1 introduced the `IHttpClientFactory` interface, which can be used to configure and create `HttpClient` instances in an app through Dependency Injection (DI). It also provides extensions for Polly-based middleware to take advantage of delegating handlers in HttpClient. diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-events-design-implementation.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-events-design-implementation.md index 7576363ef3a25..4c142f81c7016 100644 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-events-design-implementation.md +++ b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/domain-events-design-implementation.md @@ -336,7 +336,7 @@ Finally, it's important to mention that you might sometimes want to propagate ev As stated, use domain events to explicitly implement side effects of changes within your domain. To use DDD terminology, use domain events to explicitly implement side effects across one or multiple aggregates. Additionally, and for better scalability and less impact on database locks, use eventual consistency between aggregates within the same domain. -The reference app uses [MediatR](https://github.com/jbogard/MediatR) to propagate domain events synchronously across aggregates, within a single transaction. +The reference app uses [MediatR](https://github.com/jbogard/MediatR) to propagate domain events synchronously across aggregates, within a single transaction. 
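As a rough sketch of the MediatR approach, a domain event is an `INotification` and its side effects live in `INotificationHandler<T>` implementations. The event and handler names below are invented for illustration; they are not the eShopOnContainers types:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Hypothetical domain event raised by an aggregate.
public class OrderShippedDomainEvent : INotification
{
    public int OrderId { get; }
    public OrderShippedDomainEvent(int orderId) => OrderId = orderId;
}

// Handler that implements the side effect on another aggregate.
// When published through MediatR, it runs synchronously, within
// the same transaction as the originating change.
public class NotifyBuyerWhenOrderShippedHandler
    : INotificationHandler<OrderShippedDomainEvent>
{
    public Task Handle(OrderShippedDomainEvent domainEvent, CancellationToken cancellationToken)
    {
        // Update or notify the other aggregate here.
        return Task.CompletedTask;
    }
}
```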
However, you could also use some AMQP implementation like [RabbitMQ](https://www.rabbitmq.com/) or [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) to propagate domain events asynchronously, using eventual consistency but, as mentioned above, you have to consider the need for compensatory actions in case of failures. ## Additional resources diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/implement-value-objects.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/implement-value-objects.md index a5a84ba7f7c66..d20228054cc47 100644 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/implement-value-objects.md +++ b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/implement-value-objects.md @@ -121,7 +121,7 @@ You just saw how to define a value object in your domain model. But how can you ### Background and older approaches using EF Core 1.1 -As background, a limitation when using EF Core 1.0 and 1.1 was that you could not use [complex types](xref:System.ComponentModel.DataAnnotations.Schema.ComplexTypeAttribute) as defined in EF 6.x in the traditional .NET Framework. Therefore, if using EF Core 1.0 or 1.1, you needed to store your value object as an EF entity with an ID field. Then, so it looked more like a value object with no identity, you could hide its ID so you make clear that the identity of a value object is not important in the domain model. You could hide that ID by using the ID as a [shadow property](https://docs.microsoft.com/ef/core/modeling/shadow-properties ). Since that configuration for hiding the ID in the model is set up in the EF infrastructure level, it would be kind of transparent for your domain model. +As background, a limitation when using EF Core 1.0 and 1.1 was that you could not use [complex types](xref:System.ComponentModel.DataAnnotations.Schema.ComplexTypeAttribute) as defined in EF 6.x in the traditional .NET Framework. 
Therefore, if using EF Core 1.0 or 1.1, you needed to store your value object as an EF entity with an ID field. Then, to make it look more like a value object with no identity, you could hide its ID, making clear that the identity of a value object is not important in the domain model. You could hide that ID by using it as a [shadow property](/ef/core/modeling/shadow-properties). Since the configuration that hides the ID is set up at the EF infrastructure level, it is largely transparent to your domain model. In the initial version of eShopOnContainers (.NET Core 1.1), the hidden ID needed by EF Core infrastructure was implemented in the following way at the DbContext level, using Fluent API at the infrastructure project. Therefore, the ID was hidden from the domain model point of view, but still present in the infrastructure. diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-implementation-entity-framework-core.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-implementation-entity-framework-core.md index 88b35dd8c739c..4a00c5b4f1c8c 100644 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-implementation-entity-framework-core.md +++ b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-implementation-entity-framework-core.md @@ -362,7 +362,7 @@ You can do this with single fields or also with collections, like a `List<>` fie ### Use shadow properties in EF Core, hidden at the infrastructure level -Shadow properties in EF Core are properties that do not exist in your entity class model. The values and states of these properties are maintained purely in the [ChangeTracker](https://docs.microsoft.com/ef/core/api/microsoft.entityframeworkcore.changetracking.changetracker) class at the infrastructure level. 
+Shadow properties in EF Core are properties that do not exist in your entity class model. The values and states of these properties are maintained purely in the [ChangeTracker](/ef/core/api/microsoft.entityframeworkcore.changetracking.changetracker) class at the infrastructure level. ## Implement the Query Specification pattern diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md index 29bebc38d5f96..4910b27bffa6e 100644 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md +++ b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md @@ -18,7 +18,7 @@ The Solution Explorer view of the Ordering.API microservice, showing the subfold **Figure 7-23**. The application layer in the Ordering.API ASP.NET Core Web API project -ASP.NET Core includes a simple [built-in IoC container](https://docs.microsoft.com/aspnet/core/fundamentals/dependency-injection) (represented by the IServiceProvider interface) that supports constructor injection by default, and ASP.NET makes certain services available through DI. ASP.NET Core uses the term *service* for any of the types you register that will be injected through DI. You configure the built-in container's services in the ConfigureServices method in your application's Startup class. Your dependencies are implemented in the services that a type needs and that you register in the IoC container. +ASP.NET Core includes a simple [built-in IoC container](/aspnet/core/fundamentals/dependency-injection) (represented by the IServiceProvider interface) that supports constructor injection by default, and ASP.NET makes certain services available through DI. 
ASP.NET Core uses the term *service* for any of the types you register that will be injected through DI. You configure the built-in container's services in the ConfigureServices method in your application's Startup class. Your dependencies are implemented in the services that a type needs and that you register in the IoC container. Typically, you want to inject dependencies that implement infrastructure objects. A typical dependency to inject is a repository. But you could inject any other infrastructure dependency that you may have. For simpler implementations, you could directly inject your Unit of Work pattern object (the EF DbContext object), because the DBContext is also the implementation of your infrastructure persistence objects. diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/net-core-microservice-domain-model.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/net-core-microservice-domain-model.md index 58bbba1e8264f..fc1ba0c922d15 100644 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/net-core-microservice-domain-model.md +++ b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/net-core-microservice-domain-model.md @@ -147,7 +147,7 @@ In this snippet, most of the validations or logic related to the creation of an In addition, the new OrderItem(params) operation will also be controlled and performed by the AddOrderItem method from the Order aggregate root. Therefore, most of the logic or validations related to that operation (especially anything that impacts the consistency between other child entities) will be in a single place within the aggregate root. That is the ultimate purpose of the aggregate root pattern. -When you use Entity Framework Core 1.1 or later, a DDD entity can be better expressed because it allows [mapping to fields](https://docs.microsoft.com/ef/core/modeling/backing-field) in addition to properties. 
This is useful when protecting collections of child entities or value objects. With this enhancement, you can use simple private fields instead of properties and you can implement any update to the field collection in public methods and provide read-only access through the AsReadOnly method. +When you use Entity Framework Core 1.1 or later, a DDD entity can be better expressed because it allows [mapping to fields](/ef/core/modeling/backing-field) in addition to properties. This is useful when protecting collections of child entities or value objects. With this enhancement, you can use simple private fields instead of properties and you can implement any update to the field collection in public methods and provide read-only access through the AsReadOnly method. In DDD, you want to update the entity only through methods in the entity (or the constructor) in order to control any invariant and the consistency of the data, so properties are defined only with a get accessor. The properties are backed by private fields. Private members can only be accessed from within the class. However, there is one exception: EF Core needs to set these fields as well (so it can return the object with the proper values). 
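The pattern described above can be condensed into a sketch like the following, simplified relative to the actual eShopOnContainers aggregates and assuming an `OrderItem` entity defined elsewhere:

```csharp
using System;
using System.Collections.Generic;

public class Order
{
    // Private backing field; EF Core 1.1+ can map this field directly.
    private readonly List<OrderItem> _orderItems = new List<OrderItem>();

    // Read-only view for consumers; all mutations go through AddOrderItem.
    public IReadOnlyCollection<OrderItem> OrderItems => _orderItems.AsReadOnly();

    public void AddOrderItem(int productId, decimal unitPrice, int units)
    {
        // Invariants that affect consistency are enforced in the aggregate root.
        if (units <= 0)
            throw new ArgumentException("Units must be positive.", nameof(units));

        _orderItems.Add(new OrderItem(productId, unitPrice, units));
    }
}
```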
diff --git a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md index f0234dd6c2411..45c5cba04384c 100644 --- a/docs/architecture/microservices/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md +++ b/docs/architecture/microservices/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md @@ -46,7 +46,7 @@ For instance, the following JSON code is a sample implementation of an order agg ## Introduction to Azure Cosmos DB and the native Cosmos DB API -[Azure Cosmos DB](https://docs.microsoft.com/azure/cosmos-db/introduction) is Microsoft's globally distributed database service for mission-critical applications. Azure Cosmos DB provides [turn-key global distribution](https://docs.microsoft.com/azure/cosmos-db/distribute-data-globally), [elastic scaling of throughput and storage](https://docs.microsoft.com/azure/cosmos-db/partition-data) worldwide, single-digit millisecond latencies at the 99th percentile, [five well-defined consistency levels](https://docs.microsoft.com/azure/cosmos-db/consistency-levels), and guaranteed high availability, all backed by [industry-leading SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/). Azure Cosmos DB [automatically indexes data](https://www.vldb.org/pvldb/vol8/p1668-shukla.pdf) without requiring you to deal with schema and index management. It is multi-model and supports document, key-value, graph, and columnar data models. +[Azure Cosmos DB](/azure/cosmos-db/introduction) is Microsoft's globally distributed database service for mission-critical applications. 
Azure Cosmos DB provides [turn-key global distribution](/azure/cosmos-db/distribute-data-globally), [elastic scaling of throughput and storage](/azure/cosmos-db/partition-data) worldwide, single-digit millisecond latencies at the 99th percentile, [five well-defined consistency levels](/azure/cosmos-db/consistency-levels), and guaranteed high availability, all backed by [industry-leading SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/). Azure Cosmos DB [automatically indexes data](https://www.vldb.org/pvldb/vol8/p1668-shukla.pdf) without requiring you to deal with schema and index management. It is multi-model and supports document, key-value, graph, and columnar data models. ![Diagram showing the Azure Cosmos DB global distribution.](./media/nosql-database-persistence-infrastructure/azure-cosmos-db-global-distribution.png) @@ -116,7 +116,7 @@ However, when you persist your model into the NoSQL database, the code and API c You can access Azure Cosmos DB databases from .NET code running in containers, like from any other .NET application. For instance, the Locations.API and Marketing.API microservices in eShopOnContainers are implemented so they can consume Azure Cosmos DB databases. -However, there’s a limitation in Azure Cosmos DB from a Docker development environment point of view. Even though there’s an on-premises [Azure Cosmos DB Emulator](https://docs.microsoft.com/azure/cosmos-db/local-emulator) that can run in a local development machine, it only supports Windows. Linux and macOS aren't supported. +However, there’s a limitation in Azure Cosmos DB from a Docker development environment point of view. Even though there’s an on-premises [Azure Cosmos DB Emulator](/azure/cosmos-db/local-emulator) that can run in a local development machine, it only supports Windows. Linux and macOS aren't supported. There's also the possibility to run this emulator on Docker, but just on Windows Containers, not with Linux Containers. 
That's an initial handicap for the development environment if your application is deployed as Linux containers, since, currently, you can't deploy Linux and Windows Containers on Docker for Windows at the same time. Either all containers being deployed have to be for Linux or for Windows. @@ -132,7 +132,7 @@ Cosmos DB databases support MongoDB API for .NET as well as the native MongoDB w This is a very convenient approach for proof of concepts in Docker environments with Linux containers because the [MongoDB Docker image](https://hub.docker.com/r/_/mongo/) is a multi-arch image that supports Docker Linux containers and Docker Windows containers. -As shown in the following image, by using the MongoDB API, eShopOnContainers supports MongoDB Linux and Windows containers for the local development environment but then, you can move to a scalable, PaaS cloud solution as Azure Cosmos DB by simply [changing the MongoDB connection string to point to Azure Cosmos DB](https://docs.microsoft.com/azure/cosmos-db/connect-mongodb-account). +As shown in the following image, by using the MongoDB API, eShopOnContainers supports MongoDB Linux and Windows containers for the local development environment but then, you can move to a scalable, PaaS cloud solution as Azure Cosmos DB by simply [changing the MongoDB connection string to point to Azure Cosmos DB](/azure/cosmos-db/connect-mongodb-account). ![Diagram showing that the Location microservice in eShopOnContainers can use either Cosmos DB or Mongo DB.](./media/nosql-database-persistence-infrastructure/eshoponcontainers-mongodb-containers.png) @@ -144,7 +144,7 @@ Your custom .NET Core containers can run on a local development Docker host (tha A clear benefit of using the MongoDB API is that your solution could run in both database engines, MongoDB or Azure Cosmos DB, so migrations to different environments should be easy. 
However, sometimes it is worthwhile to use a native API (that is the native Cosmos DB API) in order to take full advantage of the capabilities of a specific database engine. -For further comparison between simply using MongoDB versus Cosmos DB in the cloud, see the [Benefits of using Azure Cosmos DB in this page](https://docs.microsoft.com/azure/cosmos-db/mongodb-introduction). +For further comparison between simply using MongoDB versus Cosmos DB in the cloud, see the [Benefits of using Azure Cosmos DB in this page](/azure/cosmos-db/mongodb-introduction). ### Analyze your approach for production applications: MongoDB API vs. Cosmos DB API @@ -293,7 +293,7 @@ ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP= #ESHOP_AZURE_SERVICE_BUS= ``` -Uncomment the ESHOP_AZURE_COSMOSDB line and update it with your Azure Cosmos DB connection string obtained from the Azure portal as explained in [Connect a MongoDB application to Azure Cosmos DB](https://docs.microsoft.com/azure/cosmos-db/connect-mongodb-account). +Uncomment the ESHOP_AZURE_COSMOSDB line and update it with your Azure Cosmos DB connection string obtained from the Azure portal as explained in [Connect a MongoDB application to Azure Cosmos DB](/azure/cosmos-db/connect-mongodb-account). If the `ESHOP_AZURE_COSMOSDB` global variable is empty, meaning it's commented out in the `.env` file, then the container uses a default MongoDB connection string. 
This connection string points to the local MongoDB container deployed in eShopOnContainers that is named `nosqldata` and was defined in the docker-compose file, as shown in the following .yml code: diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md b/docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md index 00f57f2d25318..8995263892498 100644 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md +++ b/docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md @@ -62,7 +62,7 @@ Without using `IHostedService`, you could always start a background thread to ru ## The IHostedService interface -When you register an `IHostedService`, .NET Core will call the `StartAsync()` and `StopAsync()` methods of your `IHostedService` type during application start and stop respectively. For more details, refer [IHostedService interface](https://docs.microsoft.com/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-3.1&tabs=visual-studio#ihostedservice-interface) +When you register an `IHostedService`, .NET Core will call the `StartAsync()` and `StopAsync()` methods of your `IHostedService` type during application start and stop, respectively. For more details, see the [IHostedService interface](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio&view=aspnetcore-3.1#ihostedservice-interface). As you can imagine, you can create multiple implementations of IHostedService and register them in the `ConfigureServices()` method in the DI container, as shown previously. All those hosted services will be started and stopped along with the application/microservice. 
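A minimal sketch of the `IHostedService` contract follows; the service name and the timer-based work are illustrative only:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class SampleHostedService : IHostedService, IDisposable
{
    private Timer _timer;

    // Called by the host when the application starts.
    public Task StartAsync(CancellationToken cancellationToken)
    {
        _timer = new Timer(_ => DoWork(), null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
        return Task.CompletedTask;
    }

    // Called by the host during graceful shutdown.
    public Task StopAsync(CancellationToken cancellationToken)
    {
        _timer?.Change(Timeout.Infinite, 0);
        return Task.CompletedTask;
    }

    private void DoWork() { /* periodic background processing */ }

    public void Dispose() => _timer?.Dispose();
}

// Registration in ConfigureServices:
// services.AddHostedService<SampleHostedService>();
```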
diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/data-driven-crud-microservice.md b/docs/architecture/microservices/multi-container-microservice-net-applications/data-driven-crud-microservice.md index 3eb7c0a6447c9..277b8e2c6813f 100644 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/data-driven-crud-microservice.md +++ b/docs/architecture/microservices/multi-container-microservice-net-applications/data-driven-crud-microservice.md @@ -22,7 +22,7 @@ An example of this kind of simple data-drive service is the catalog microservice **Figure 6-5**. Simple data-driven/CRUD microservice design -The previous diagram shows the logical Catalog microservice, that includes its Catalog database, which can be or not in the same Docker host. Having the database in the same Docker host might be good for development, but not for production. When you are developing this kind of service, you only need [ASP.NET Core](https://docs.microsoft.com/aspnet/core/) and a data-access API or ORM like [Entity Framework Core](https://docs.microsoft.com/ef/core/index). You could also generate [Swagger](https://swagger.io/) metadata automatically through [Swashbuckle](https://github.com/domaindrivendev/Swashbuckle.AspNetCore) to provide a description of what your service offers, as explained in the next section. +The previous diagram shows the logical Catalog microservice, which includes its Catalog database; that database may or may not be in the same Docker host. Having the database in the same Docker host might be good for development, but not for production. When you are developing this kind of service, you only need [ASP.NET Core](/aspnet/core/) and a data-access API or ORM like [Entity Framework Core](/ef/core/index). 
You could also generate [Swagger](https://swagger.io/) metadata automatically through [Swashbuckle](https://github.com/domaindrivendev/Swashbuckle.AspNetCore) to provide a description of what your service offers, as explained in the next section. Note that running a database server like SQL Server within a Docker container is great for development environments, because you can have all your dependencies up and running without needing to provision a database in the cloud or on-premises. This is very convenient when running integration tests. However, for production environments, running a database server in a container is not recommended, because you usually do not get high availability with that approach. For a production environment in Azure, it is recommended that you use Azure SQL DB or any other database technology that can provide high availability and high scalability. For example, for a NoSQL approach, you might choose CosmosDB. @@ -296,7 +296,7 @@ public class CatalogController : ControllerBase // Implementation ... ``` -This versioning mechanism is simple and depends on the server routing the request to the appropriate endpoint. However, for a more sophisticated versioning and the best method when using REST, you should use hypermedia and implement [HATEOAS (Hypertext as the Engine of Application State)](https://docs.microsoft.com/azure/architecture/best-practices/api-design#use-hateoas-to-enable-navigation-to-related-resources). +This versioning mechanism is simple and depends on the server routing the request to the appropriate endpoint. However, for a more sophisticated versioning and the best method when using REST, you should use hypermedia and implement [HATEOAS (Hypertext as the Engine of Application State)](/azure/architecture/best-practices/api-design#use-hateoas-to-enable-navigation-to-related-resources). ### Additional resources @@ -331,7 +331,7 @@ The main reasons to generate Swagger metadata for your APIs are the following. 
- [Microsoft PowerApps](https://powerapps.microsoft.com/). You can automatically consume your API from [PowerApps mobile apps](https://powerapps.microsoft.com/blog/register-and-use-custom-apis-in-powerapps/) built with [PowerApps Studio](https://powerapps.microsoft.com/build-powerapps/), with no programming skills required. -- [Azure App Service Logic Apps](https://docs.microsoft.com/azure/app-service-logic/app-service-logic-what-are-logic-apps). You can automatically [use and integrate your API into an Azure App Service Logic App](https://docs.microsoft.com/azure/app-service-logic/app-service-logic-custom-hosted-api), with no programming skills required. +- [Azure App Service Logic Apps](/azure/app-service-logic/app-service-logic-what-are-logic-apps). You can automatically [use and integrate your API into an Azure App Service Logic App](/azure/app-service-logic/app-service-logic-custom-hosted-api), with no programming skills required. **Ability to automatically generate API documentation**. When you create large-scale RESTful APIs, such as complex microservice-based applications, you need to handle many endpoints with different data models used in the request and response payloads. Having proper documentation and having a solid API explorer, as you get with Swagger, is key for the success of your API and adoption by developers. 
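A minimal Swashbuckle registration looks roughly like the following; the title, version, and description values are placeholders, not the reference application's metadata:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        // Generates the Swagger/OpenAPI document consumed by the
        // Swagger UI, API explorers, and code-generation tools.
        services.AddSwaggerGen(options =>
        {
            options.SwaggerDoc("v1", new OpenApiInfo
            {
                Title = "Catalog API",
                Version = "v1",
                Description = "Placeholder metadata for the generated document"
            });
        });
    }
}
```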
diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/integration-event-based-microservice-communications.md b/docs/architecture/microservices/multi-container-microservice-net-applications/integration-event-based-microservice-communications.md index cc8430f04e89d..f43a7d1eedb1a 100644 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/integration-event-based-microservice-communications.md +++ b/docs/architecture/microservices/multi-container-microservice-net-applications/integration-event-based-microservice-communications.md @@ -71,7 +71,7 @@ In the [Observer pattern](https://en.wikipedia.org/wiki/Observer_pattern), your ### Publish/Subscribe (Pub/Sub) pattern -The purpose of the [Publish/Subscribe pattern](https://docs.microsoft.com/previous-versions/msp-n-p/ff649664(v=pandp.10)) is the same as the Observer pattern: you want to notify other services when certain events take place. But there is an important difference between the Observer and Pub/Sub patterns. In the observer pattern, the broadcast is performed directly from the observable to the observers, so they "know" each other. But when using a Pub/Sub pattern, there is a third component, called broker or message broker or event bus, which is known by both the publisher and subscriber. Therefore, when using the Pub/Sub pattern the publisher and the subscribers are precisely decoupled thanks to the mentioned event bus or message broker. +The purpose of the [Publish/Subscribe pattern](/previous-versions/msp-n-p/ff649664(v=pandp.10)) is the same as the Observer pattern: you want to notify other services when certain events take place. But there is an important difference between the Observer and Pub/Sub patterns. In the observer pattern, the broadcast is performed directly from the observable to the observers, so they "know" each other. 
But when using a Pub/Sub pattern, there is a third component, called the broker, message broker, or event bus, which is known by both the publisher and subscriber. Therefore, when using the Pub/Sub pattern, the publisher and the subscribers are fully decoupled through that event bus or message broker. ### The middleman or event bus diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/microservice-application-design.md b/docs/architecture/microservices/multi-container-microservice-net-applications/microservice-application-design.md index 412648dd2729a..665b01b89e4cf 100644 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/microservice-application-design.md +++ b/docs/architecture/microservices/multi-container-microservice-net-applications/microservice-application-design.md @@ -153,7 +153,7 @@ There are many architectural patterns used by software architects and developers - Simple CRUD, single-tier, single-layer. -- [Traditional N-Layered](https://docs.microsoft.com/previous-versions/msp-n-p/ee658109(v=pandp.10)). +- [Traditional N-Layered](/previous-versions/msp-n-p/ee658109(v=pandp.10)). - [Domain-Driven Design N-layered](https://devblogs.microsoft.com/cesardelatorre/published-first-alpha-version-of-domain-oriented-n-layered-architecture-v2-0/). 
diff --git a/docs/architecture/microservices/multi-container-microservice-net-applications/subscribe-events.md b/docs/architecture/microservices/multi-container-microservice-net-applications/subscribe-events.md index 70268afa208d0..e5d7f4d53a4b0 100644 --- a/docs/architecture/microservices/multi-container-microservice-net-applications/subscribe-events.md +++ b/docs/architecture/microservices/multi-container-microservice-net-applications/subscribe-events.md @@ -86,17 +86,17 @@ In more advanced microservices, like when using CQRS approaches, it can be imple ### Designing atomicity and resiliency when publishing to the event bus -When you publish integration events through a distributed messaging system like your event bus, you have the problem of atomically updating the original database and publishing an event (that is, either both operations complete or none of them). For instance, in the simplified example shown earlier, the code commits data to the database when the product price is changed and then publishes a ProductPriceChangedIntegrationEvent message. Initially, it might look essential that these two operations be performed atomically. However, if you are using a distributed transaction involving the database and the message broker, as you do in older systems like [Microsoft Message Queuing (MSMQ)](https://msdn.microsoft.com/library/windows/desktop/ms711472(v=vs.85).aspx), this is not recommended for the reasons described by the [CAP theorem](https://www.quora.com/What-Is-CAP-Theorem-1). +When you publish integration events through a distributed messaging system like your event bus, you have the problem of atomically updating the original database and publishing an event (that is, either both operations complete or none of them). For instance, in the simplified example shown earlier, the code commits data to the database when the product price is changed and then publishes a ProductPriceChangedIntegrationEvent message. 
Initially, it might look essential that these two operations be performed atomically. However, if you are using a distributed transaction involving the database and the message broker, as you do in older systems like [Microsoft Message Queuing (MSMQ)](/previous-versions/windows/desktop/legacy/ms711472(v=vs.85)), this is not recommended for the reasons described by the [CAP theorem](https://www.quora.com/What-Is-CAP-Theorem-1). Basically, you use microservices to build scalable and highly available systems. Simplifying somewhat, the CAP theorem says that you cannot build a (distributed) database (or a microservice that owns its model) that's continually available, strongly consistent, *and* tolerant to any partition. You must choose two of these three properties. -In microservices-based architectures, you should choose availability and tolerance, and you should de-emphasize strong consistency. Therefore, in most modern microservice-based applications, you usually do not want to use distributed transactions in messaging, as you do when you implement [distributed transactions](https://docs.microsoft.com/previous-versions/windows/desktop/ms681205(v=vs.85)) based on the Windows Distributed Transaction Coordinator (DTC) with [MSMQ](https://msdn.microsoft.com/library/windows/desktop/ms711472(v=vs.85).aspx). +In microservices-based architectures, you should choose availability and tolerance, and you should de-emphasize strong consistency. Therefore, in most modern microservice-based applications, you usually do not want to use distributed transactions in messaging, as you do when you implement [distributed transactions](/previous-versions/windows/desktop/ms681205(v=vs.85)) based on the Windows Distributed Transaction Coordinator (DTC) with [MSMQ](/previous-versions/windows/desktop/legacy/ms711472(v=vs.85)). Let's go back to the initial issue and its example. 
If the service crashes after the database is updated (in this case, right after the line of code with `_context.SaveChangesAsync()`), but before the integration event is published, the overall system could become inconsistent. This might be business critical, depending on the specific business operation you are dealing with. As mentioned earlier in the architecture section, you can have several approaches for dealing with this issue: -- Using the full [Event Sourcing pattern](https://docs.microsoft.com/azure/architecture/patterns/event-sourcing). +- Using the full [Event Sourcing pattern](/azure/architecture/patterns/event-sourcing). - Using [transaction log mining](https://www.scoop.it/t/sql-server-transaction-log-mining). diff --git a/docs/architecture/modern-web-apps-azure/azure-hosting-recommendations-for-asp-net-web-apps.md b/docs/architecture/modern-web-apps-azure/azure-hosting-recommendations-for-asp-net-web-apps.md index dea13fca933f6..1034d7b3e27a2 100644 --- a/docs/architecture/modern-web-apps-azure/azure-hosting-recommendations-for-asp-net-web-apps.md +++ b/docs/architecture/modern-web-apps-azure/azure-hosting-recommendations-for-asp-net-web-apps.md @@ -49,7 +49,7 @@ In addition to apps that are not optimized for the cloud, Azure App Service Web ![Basic Azure architecture](./media/image1-5.png) -A small number of resources in a single resource group is typically sufficient to manage such an app. Apps that are typically deployed as a single unit, rather than those apps that are made up of many separate processes, are good candidates for this [basic architectural approach](https://docs.microsoft.com/azure/architecture/reference-architectures/app-service-web-app/basic-web-app). Though architecturally simple, this approach still allows the hosted app to scale both up (more resources per node) and out (more hosted nodes) to meet any increase in demand. 
With autoscale, the app can be configured to automatically adjust the number of nodes hosting the app based on demand and average load across nodes. +A small number of resources in a single resource group is typically sufficient to manage such an app. Apps that are typically deployed as a single unit, rather than those apps that are made up of many separate processes, are good candidates for this [basic architectural approach](/azure/architecture/reference-architectures/app-service-web-app/basic-web-app). Though architecturally simple, this approach still allows the hosted app to scale both up (more resources per node) and out (more hosted nodes) to meet any increase in demand. With autoscale, the app can be configured to automatically adjust the number of nodes hosting the app based on demand and average load across nodes. ### App Service Web Apps for Containers @@ -69,7 +69,7 @@ As portions of larger applications are broken up into their own smaller, indepen ![Microservices sample architecture with several common design patterns noted.](./media/image1-10.png) -[Learn more about design patterns to consider when building microservice-based systems.](https://docs.microsoft.com/azure/architecture/microservices/design/patterns) +[Learn more about design patterns to consider when building microservice-based systems.](/azure/architecture/microservices/design/patterns) ### Azure Kubernetes Service @@ -95,7 +95,7 @@ Azure Dev Spaces: - Reduce number of integration environments required by team - Remove need to mock certain services in distributed system when developing/testing -[Learn more about Azure Dev Spaces](https://docs.microsoft.com/azure/dev-spaces/about) +[Learn more about Azure Dev Spaces](/azure/dev-spaces/about) ### Azure Virtual Machines diff --git a/docs/architecture/modern-web-apps-azure/common-web-application-architectures.md b/docs/architecture/modern-web-apps-azure/common-web-application-architectures.md index 7c2d87f541096..35b5966f589fb 100644 --- 
a/docs/architecture/modern-web-apps-azure/common-web-application-architectures.md +++ b/docs/architecture/modern-web-apps-azure/common-web-application-architectures.md @@ -199,7 +199,7 @@ The monolithic approach is common, and many organizations are developing with th ![Figure 5-14](./media/image5-14.png) -Deploying monolithic applications in Microsoft Azure can be achieved using dedicated VMs for each instance. Using [Azure Virtual Machine Scale Sets](https://docs.microsoft.com/azure/virtual-machine-scale-sets/), you can easily scale the VMs. [Azure App Services](https://azure.microsoft.com/services/app-service/) can run monolithic applications and easily scale instances without having to manage the VMs. Azure App Services can run single instances of Docker containers as well, simplifying the deployment. Using Docker, you can deploy a single VM as a Docker host, and run multiple instances. Using the Azure balancer, as shown in the Figure 5-14, you can manage scaling. +Deploying monolithic applications in Microsoft Azure can be achieved using dedicated VMs for each instance. Using [Azure Virtual Machine Scale Sets](/azure/virtual-machine-scale-sets/), you can easily scale the VMs. [Azure App Services](https://azure.microsoft.com/services/app-service/) can run monolithic applications and easily scale instances without having to manage the VMs. Azure App Services can run single instances of Docker containers as well, simplifying the deployment. Using Docker, you can deploy a single VM as a Docker host, and run multiple instances. Using the Azure load balancer, as shown in Figure 5-14, you can manage scaling. The deployment to the various hosts can be managed with traditional deployment techniques. The Docker hosts can be managed with commands like **docker run** performed manually, or through automation such as Continuous Delivery (CD) pipelines.
diff --git a/docs/architecture/modern-web-apps-azure/develop-asp-net-core-mvc-apps.md b/docs/architecture/modern-web-apps-azure/develop-asp-net-core-mvc-apps.md index ad05cc3552c7d..27503a8d571f8 100644 --- a/docs/architecture/modern-web-apps-azure/develop-asp-net-core-mvc-apps.md +++ b/docs/architecture/modern-web-apps-azure/develop-asp-net-core-mvc-apps.md @@ -233,7 +233,7 @@ Another approach to decoupling the application from implementation details is to ### Feature organization -By default, ASP.NET Core applications organize their folder structure to include Controllers and Views, and frequently ViewModels. Client-side code to support these server-side structures is typically stored separately in the wwwroot folder. However, large applications may encounter problems with this organization, since working on any given feature often requires jumping between these folders. This gets more and more difficult as the number of files and subfolders in each folder grows, resulting in a great deal of scrolling through Solution Explorer. One solution to this problem is to organize application code by _feature_ instead of by file type. This organizational style is typically referred to as feature folders or [feature slices](https://docs.microsoft.com/archive/msdn-magazine/2016/september/asp-net-core-feature-slices-for-asp-net-core-mvc) (see also: [Vertical Slices](https://deviq.com/vertical-slices/)). +By default, ASP.NET Core applications organize their folder structure to include Controllers and Views, and frequently ViewModels. Client-side code to support these server-side structures is typically stored separately in the wwwroot folder. However, large applications may encounter problems with this organization, since working on any given feature often requires jumping between these folders. This gets more and more difficult as the number of files and subfolders in each folder grows, resulting in a great deal of scrolling through Solution Explorer. 
One solution to this problem is to organize application code by _feature_ instead of by file type. This organizational style is typically referred to as feature folders or [feature slices](/archive/msdn-magazine/2016/september/asp-net-core-feature-slices-for-asp-net-core-mvc) (see also: [Vertical Slices](https://deviq.com/vertical-slices/)). ASP.NET Core MVC supports Areas for this purpose. Using areas, you can create separate sets of Controllers and Views folders (as well as any associated models) in each Area folder. Figure 7-1 shows an example folder structure, using Areas. @@ -293,7 +293,7 @@ You then specify this convention as an option when you add support for MVC to yo services.AddMvc(o => o.Conventions.Add(new FeatureConvention())); ``` -ASP.NET Core MVC also uses a convention to locate views. You can override it with a custom convention so that views will be located in your feature folders (using the feature name provided by the FeatureConvention, above). You can learn more about this approach and download a working sample from the MSDN Magazine article, [Feature Slices for ASP.NET Core MVC](https://docs.microsoft.com/archive/msdn-magazine/2016/september/asp-net-core-feature-slices-for-asp-net-core-mvc). +ASP.NET Core MVC also uses a convention to locate views. You can override it with a custom convention so that views will be located in your feature folders (using the feature name provided by the FeatureConvention, above). You can learn more about this approach and download a working sample from the MSDN Magazine article, [Feature Slices for ASP.NET Core MVC](/archive/msdn-magazine/2016/september/asp-net-core-feature-slices-for-asp-net-core-mvc). 
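One possible shape for such a view-location convention is a custom `IViewLocationExpander` (a hedged sketch, not the article's exact code; it assumes the `FeatureConvention` above stored the feature name in the action descriptor's `Properties` under the key `"feature"`):

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc.Controllers;
using Microsoft.AspNetCore.Mvc.Razor;

// Probes /Features/{feature}/{viewName}.cshtml before the default locations.
public class FeatureViewLocationExpander : IViewLocationExpander
{
    public void PopulateValues(ViewLocationExpanderContext context)
    {
        // Copy the feature name (set by the convention) into the cache key.
        var descriptor = context.ActionContext.ActionDescriptor
            as ControllerActionDescriptor;
        if (descriptor != null &&
            descriptor.Properties.TryGetValue("feature", out var feature))
        {
            context.Values["feature"] = feature as string;
        }
    }

    public IEnumerable<string> ExpandViewLocations(
        ViewLocationExpanderContext context,
        IEnumerable<string> viewLocations)
    {
        if (context.Values.TryGetValue("feature", out var feature))
        {
            // {0} is the view name placeholder Razor substitutes at runtime.
            yield return $"/Features/{feature}/{{0}}.cshtml";
        }
        foreach (var location in viewLocations)
        {
            yield return location;
        }
    }
}
```

The expander is registered when configuring services, for example via `services.Configure<RazorViewEngineOptions>(o => o.ViewLocationExpanders.Add(new FeatureViewLocationExpander()));`.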
### APIs and Blazor applications @@ -305,7 +305,7 @@ The addition of a Blazor WebAssembly admin interface to eShopOnWeb required addi One might ask, why add a separate `BlazorShared` project when there is already a common `ApplicationCore` project that could be used to share any types required by both `PublicApi` and `BlazorAdmin`? The answer is that this project includes all of the application's business logic and is thus much larger than necessary and also much more likely to need to be kept secure on the server. Remember that any library referenced by `BlazorAdmin` will be downloaded to users' browsers when they load the Blazor application. -Depending on whether one is using the [Backends-For-Frontends (BFF) pattern](https://docs.microsoft.com/azure/architecture/patterns/backends-for-frontends), the APIs consumed by the Blazor WebAssembly app may not share their types 100% with Blazor. In particular, a public API that's meant to be consumed by many different clients may define its own request and result types, rather than sharing them in a client-specific shared project. In the eShopOnWeb sample, the assumption is being made that the `PublicApi` project is, in fact, hosting a public API, so not all of its request and response types come from the `BlazorShared` project. +Depending on whether one is using the [Backends-For-Frontends (BFF) pattern](/azure/architecture/patterns/backends-for-frontends), the APIs consumed by the Blazor WebAssembly app may not share their types 100% with Blazor. In particular, a public API that's meant to be consumed by many different clients may define its own request and result types, rather than sharing them in a client-specific shared project. In the eShopOnWeb sample, the assumption is being made that the `PublicApi` project is, in fact, hosting a public API, so not all of its request and response types come from the `BlazorShared` project. 
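As an illustration of that split (all names here are hypothetical, not necessarily the actual eShopOnWeb types): a DTO needed by both the API and the Blazor client lives in the shared project, while a response shape used only by the public API does not:

```csharp
using System.Collections.Generic;

// In the BlazorShared project -- this assembly is compiled into the
// WebAssembly payload that browsers download, so it should stay small
// and contain no server-side business logic.
public record CatalogItemDto(int Id, string Name, decimal Price);

// In the PublicApi project only -- clients consume it as JSON over HTTP,
// so the Blazor app never needs to reference this assembly.
public class PagedCatalogItemResponse
{
    public List<CatalogItemDto> CatalogItems { get; set; } = new();
    public int PageCount { get; set; }
}
```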
### Cross-cutting concerns @@ -379,7 +379,7 @@ public async Task Put(int id, [FromBody]Author author) } ``` -You can read more about implementing filters and download a working sample from the MSDN Magazine article, [Real-World ASP.NET Core MVC Filters](https://docs.microsoft.com/archive/msdn-magazine/2016/august/asp-net-core-real-world-asp-net-core-mvc-filters). +You can read more about implementing filters and download a working sample from the MSDN Magazine article, [Real-World ASP.NET Core MVC Filters](/archive/msdn-magazine/2016/august/asp-net-core-real-world-asp-net-core-mvc-filters). > ### References – Structuring applications > diff --git a/docs/architecture/modern-web-apps-azure/development-process-for-azure.md b/docs/architecture/modern-web-apps-azure/development-process-for-azure.md index 93c5dbdfc968d..b412db9721db2 100644 --- a/docs/architecture/modern-web-apps-azure/development-process-for-azure.md +++ b/docs/architecture/modern-web-apps-azure/development-process-for-azure.md @@ -40,9 +40,9 @@ To get started with developing an ASP.NET Core application using CI/CD, you can To create a release pipeline for your app, you need to have your application code in source control. Set up a local repository and connect it to a remote repository in a team project. Follow these instructions: -- [Share your code with Git and Visual Studio](https://docs.microsoft.com/azure/devops/git/share-your-code-in-git-vs) or +- [Share your code with Git and Visual Studio](/azure/devops/git/share-your-code-in-git-vs) or -- [Share your code with TFVC and Visual Studio](https://docs.microsoft.com/azure/devops/tfvc/share-your-code-in-tfvc-vs) +- [Share your code with TFVC and Visual Studio](/azure/devops/tfvc/share-your-code-in-tfvc-vs) Create an Azure App Service where you'll deploy your application. Create a Web App by going to the App Services blade on the Azure portal. Click +Add, select the Web App template, click Create, and provide a name and other details. 
The web app will be accessible from {name}.azurewebsites.net. @@ -52,13 +52,13 @@ Create an Azure App Service where you'll deploy your application. Create a Web A Your CI build process will perform an automated build whenever new code is committed to the project's source control repository. This gives you immediate feedback that the code builds (and, ideally, passes automated tests) and can potentially be deployed. This CI build will produce a web deploy package artifact and publish it for consumption by your CD process. -[Define your CI build process](https://docs.microsoft.com/azure/devops/pipelines/ecosystems/dotnet-core) +[Define your CI build process](/azure/devops/pipelines/ecosystems/dotnet-core) Be sure to enable continuous integration so the system will queue a build whenever someone on your team commits new code. Test the build and verify that it is producing a web deploy package as one of its artifacts. When a build succeeds, your CD process will deploy the results of your CI build to your Azure web app. To configure this, you create and configure a *Release*, which will deploy to your Azure App Service. -[Deploy an Azure web app](https://docs.microsoft.com/azure/devops/pipelines/targets/webapp) +[Deploy an Azure web app](/azure/devops/pipelines/targets/webapp) Once your CI/CD pipeline is configured, you can simply make updates to your web app and commit them to source control to have them deployed. @@ -76,7 +76,7 @@ Developing your ASP.NET Core application for deployment to Azure is no different #### Step 2. Application code repository -Whenever you're ready to share your code with your team, you should push your changes from your local source repository to your team's shared source repository. If you've been working in a custom branch, this step usually involves merging your code into a shared branch (perhaps by means of a [pull request](https://docs.microsoft.com/azure/devops/git/pull-requests)). 
+Whenever you're ready to share your code with your team, you should push your changes from your local source repository to your team's shared source repository. If you've been working in a custom branch, this step usually involves merging your code into a shared branch (perhaps by means of a [pull request](/azure/devops/git/pull-requests)). #### Step 3. Build Server: Continuous integration. build, test, package diff --git a/docs/architecture/modern-web-apps-azure/test-asp-net-core-mvc-apps.md b/docs/architecture/modern-web-apps-azure/test-asp-net-core-mvc-apps.md index cc6fe2b0fc149..5e43e159d3537 100644 --- a/docs/architecture/modern-web-apps-azure/test-asp-net-core-mvc-apps.md +++ b/docs/architecture/modern-web-apps-azure/test-asp-net-core-mvc-apps.md @@ -144,7 +144,7 @@ In most cases, you'll want to use global exception handlers in your controllers, ## Integration testing ASP.NET Core apps -Most of the integration tests in your ASP.NET Core apps should be testing services and other implementation types defined in your Infrastructure project. For example, you could [test that EF Core was successfully updating and retrieving the data that you expect](https://docs.microsoft.com/ef/core/miscellaneous/testing/) from your data access classes residing in the Infrastructure project. The best way to test that your ASP.NET Core MVC project is behaving correctly is with functional tests that run against your app running in a test host. +Most of the integration tests in your ASP.NET Core apps should be testing services and other implementation types defined in your Infrastructure project. For example, you could [test that EF Core was successfully updating and retrieving the data that you expect](/ef/core/miscellaneous/testing/) from your data access classes residing in the Infrastructure project. The best way to test that your ASP.NET Core MVC project is behaving correctly is with functional tests that run against your app running in a test host. 
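One common way to write such an integration test is against SQLite in-memory mode, one of the approaches covered in the linked EF Core testing docs (a sketch under assumptions: `CatalogContext` and `CatalogItem` stand in for your Infrastructure types, and the context is assumed to expose a constructor taking `DbContextOptions`):

```csharp
using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class CatalogContextTests
{
    [Fact]
    public async Task SavesAndRetrievesItem()
    {
        // The in-memory database lives only while this connection is open.
        using var connection = new SqliteConnection("DataSource=:memory:");
        connection.Open();

        var options = new DbContextOptionsBuilder<CatalogContext>()
            .UseSqlite(connection)
            .Options;

        // Write through one context instance...
        using (var context = new CatalogContext(options))
        {
            context.Database.EnsureCreated();
            context.CatalogItems.Add(new CatalogItem { Name = "Mug" });
            await context.SaveChangesAsync();
        }

        // ...and verify through a fresh one, so nothing is read from cache.
        using (var context = new CatalogContext(options))
        {
            Assert.Equal(1, await context.CatalogItems.CountAsync());
        }
    }
}
```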
## Functional testing ASP.NET Core apps diff --git a/docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md b/docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md index 76956772bee5a..f511ad42dc4a5 100644 --- a/docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md +++ b/docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md @@ -201,7 +201,7 @@ Learn more about owned [entity support in EF Core](/ef/core/modeling/owned-entit ### Resilient connections -External resources like SQL databases may occasionally be unavailable. In cases of temporary unavailability, applications can use retry logic to avoid raising an exception. This technique is commonly referred to as _connection resiliency_. You can implement your [own retry with exponential backoff](https://docs.microsoft.com/azure/architecture/patterns/retry) technique by attempting to retry with an exponentially increasing wait time, until a maximum retry count has been reached. This technique embraces the fact that cloud resources might intermittently be unavailable for short periods of time, resulting in failure of some requests. +External resources like SQL databases may occasionally be unavailable. In cases of temporary unavailability, applications can use retry logic to avoid raising an exception. This technique is commonly referred to as _connection resiliency_. You can implement your [own retry with exponential backoff](/azure/architecture/patterns/retry) technique by attempting to retry with an exponentially increasing wait time, until a maximum retry count has been reached. This technique embraces the fact that cloud resources might intermittently be unavailable for short periods of time, resulting in failure of some requests. For Azure SQL DB, Entity Framework Core already provides internal database connection resiliency and retry logic. 
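The retry-with-exponential-backoff idea can be sketched in a few lines (illustrative only; in practice prefer EF Core's built-in execution strategy, described next, or a library such as Polly, rather than hand-rolling retries):

```csharp
using System;
using System.Threading.Tasks;

public static class Retry
{
    // Retries a failing operation with exponentially increasing waits:
    // baseDelayMs, 2x, 4x, ... until maxRetries is reached, then rethrows.
    public static async Task<T> WithBackoffAsync<T>(
        Func<Task<T>> operation,
        int maxRetries = 5,
        int baseDelayMs = 200)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception) when (attempt < maxRetries)
            {
                var delay = baseDelayMs * (int)Math.Pow(2, attempt);
                await Task.Delay(delay);
            }
        }
    }
}
```

A real implementation would also restrict the `catch` to exceptions known to be transient, since retrying a non-transient failure only delays the error.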
But you need to enable the Entity Framework execution strategy for each DbContext connection if you want to have resilient EF Core connections. @@ -277,7 +277,7 @@ The first DbContext is the \_catalogContext and the second DbContext is within t While EF Core is a great choice for managing persistence, and for the most part encapsulates database details from application developers, it isn't the only choice. Another popular open-source alternative is [Dapper](https://github.com/StackExchange/Dapper), a so-called micro-ORM. A micro-ORM is a lightweight, less full-featured tool for mapping objects to data structures. In the case of Dapper, its design goals focus on performance, rather than fully encapsulating the underlying queries it uses to retrieve and update data. Because it doesn't abstract SQL from the developer, Dapper is "closer to the metal" and lets developers write the exact queries they want to use for a given data access operation. -EF Core has two significant features it provides which separate it from Dapper but also add to its performance overhead. The first is translation from LINQ expressions into SQL. These translations are cached, but even so there is overhead in performing them the first time. The second is change tracking on entities (so that efficient update statements can be generated). This behavior can be turned off for specific queries by using the AsNotTracking extension. EF Core also generates SQL queries that usually are very efficient and in any case perfectly acceptable from a performance standpoint, but if you need fine control over the precise query to be executed, you can pass in custom SQL (or execute a stored procedure) using EF Core, too. In this case, Dapper still outperforms EF Core, but only slightly. 
Julie Lerman presents some performance data in her May 2016 MSDN article [Dapper, Entity Framework, and Hybrid Apps](https://docs.microsoft.com/archive/msdn-magazine/2016/may/data-points-dapper-entity-framework-and-hybrid-apps). Additional performance benchmark data for a variety of data access methods can be found on [the Dapper site](https://github.com/StackExchange/Dapper). +EF Core provides two significant features that separate it from Dapper but also add to its performance overhead. The first is translation from LINQ expressions into SQL. These translations are cached, but even so there is overhead in performing them the first time. The second is change tracking on entities (so that efficient update statements can be generated). This behavior can be turned off for specific queries by using the AsNoTracking extension method. EF Core also generates SQL queries that usually are very efficient and in any case perfectly acceptable from a performance standpoint, but if you need fine control over the precise query to be executed, you can pass in custom SQL (or execute a stored procedure) using EF Core, too. In this case, Dapper still outperforms EF Core, but only slightly. Julie Lerman presents some performance data in her May 2016 MSDN article [Dapper, Entity Framework, and Hybrid Apps](/archive/msdn-magazine/2016/may/data-points-dapper-entity-framework-and-hybrid-apps). Additional performance benchmark data for a variety of data access methods can be found on [the Dapper site](https://github.com/StackExchange/Dapper).
To see how the syntax for Dapper varies from EF Core, consider these two versions of the same method for retrieving a list of items: diff --git a/docs/architecture/modernize-desktop/migrate-modern-applications.md b/docs/architecture/modernize-desktop/migrate-modern-applications.md index af0b6248f27ac..565513dc8f5ad 100644 --- a/docs/architecture/modernize-desktop/migrate-modern-applications.md +++ b/docs/architecture/modernize-desktop/migrate-modern-applications.md @@ -109,7 +109,7 @@ You can continue to use ODBC on .NET Core since Microsoft is providing the `Syst ### OLE DB -[OLE DB](https://docs.microsoft.com/previous-versions/windows/desktop/ms722784(v=vs.85)) has been a great way to access various data sources in a uniform manner. But it was based on COM, which is a Windows-only technology, and as such wasn't the best fit for a cross-platform technology such as .NET Core. It's also unsupported in SQL Server versions 2014 and later. For those reasons, OLE DB won't be supported by .NET Core. +[OLE DB](/previous-versions/windows/desktop/ms722784(v=vs.85)) has been a great way to access various data sources in a uniform manner. But it was based on COM, which is a Windows-only technology, and as such wasn't the best fit for a cross-platform technology such as .NET Core. It's also unsupported in SQL Server versions 2014 and later. For those reasons, OLE DB won't be supported by .NET Core. ### ADO.NET diff --git a/docs/architecture/modernize-with-azure-containers/index.md b/docs/architecture/modernize-with-azure-containers/index.md index 7b59ac44562d9..1ba4982e113d1 100644 --- a/docs/architecture/modernize-with-azure-containers/index.md +++ b/docs/architecture/modernize-with-azure-containers/index.md @@ -113,10 +113,10 @@ Each maturity level in the modernization process is associated with the followin - **Cloud Infrastructure-Ready** (rehost or basic lift & shift): As a first step, many organizations want only to quickly execute a cloud-migration strategy. 
In this case, applications are rehosted. Most rehosting can be automated by using [Azure Migrate](https://aka.ms/azuremigrate), a service that provides the guidance, insights, and mechanisms needed to assist you in migrating to Azure based on cloud tools like [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) and [Azure Database Migration Service](https://azure.microsoft.com/campaigns/database-migration/). You can also set up rehosting manually, so that you can learn infrastructure details about your assets when you move legacy apps to the cloud. For example, you can move your applications to VMs in Azure with little modification-probably with only minor configuration changes. The networking in this case is similar to an on-premises environment, especially if you create virtual networks in Azure. -- **Cloud-Optimized** (Managed Services and Windows Containers): This model is about making a few important deployment optimizations to gain some significant benefits from the cloud, without changing the core architecture of the application. The fundamental step here is to add [Windows Containers](https://docs.microsoft.com/virtualization/windowscontainers/about/) support to your existing .NET Framework applications. This important step (containerization) doesn't require touching the code, so the overall lift and shift effort is light. You can use tools like [Image2Docker](https://github.com/docker/communitytools-image2docker-win) or Visual Studio, with its tools for [Docker](https://www.docker.com/). Visual Studio automatically chooses smart defaults for ASP.NET applications and Windows Containers images. These tools offer both a rapid inner loop, and a fast path to get the containers to Azure. Your agility is improved when you deploy to multiple environments. 
+- **Cloud-Optimized** (Managed Services and Windows Containers): This model is about making a few important deployment optimizations to gain some significant benefits from the cloud, without changing the core architecture of the application. The fundamental step here is to add [Windows Containers](/virtualization/windowscontainers/about/) support to your existing .NET Framework applications. This important step (containerization) doesn't require touching the code, so the overall lift and shift effort is light. You can use tools like [Image2Docker](https://github.com/docker/communitytools-image2docker-win) or Visual Studio, with its tools for [Docker](https://www.docker.com/). Visual Studio automatically chooses smart defaults for ASP.NET applications and Windows Containers images. These tools offer both a rapid inner loop, and a fast path to get the containers to Azure. Your agility is improved when you deploy to multiple environments. Then, moving to production, you can deploy your Windows Containers to [Azure Web App for Containers](https://azure.microsoft.com/services/app-service/containers/), [Azure Container Instances (ACI)](https://azure.microsoft.com/services/container-instances/), and Azure VMs with Windows Server 2016 and containers if you prefer an IaaS approach. For more complex multi-container applications, consider using orchestrators like [Azure Kubernetes Service (AKS/ACS)](https://azure.microsoft.com/services/container-service/). -During this initial modernization, you can also add assets from the cloud, such as monitoring with tools like [Azure Application Insights](https://docs.microsoft.com/azure/application-insights/app-insights-overview); CI/CD pipelines for your app lifecycles with [Azure DevOps Services](https://azure.microsoft.com/services/devops/); and many more data resource services that are available in Azure. 
For instance, you can modify a monolithic web app that was originally developed by using traditional [ASP.NET Web Forms](https://www.asp.net/web-forms) or [ASP.NET MVC](https://www.asp.net/mvc), but now you deploy it by using Windows Containers. When you use Windows Containers, you should also migrate your data to a database in [Azure SQL Database Managed Instance](https://docs.microsoft.com/azure/sql-database/), all without changing the core architecture of your application. +During this initial modernization, you can also add assets from the cloud, such as monitoring with tools like [Azure Application Insights](/azure/application-insights/app-insights-overview); CI/CD pipelines for your app lifecycles with [Azure DevOps Services](https://azure.microsoft.com/services/devops/); and many more data resource services that are available in Azure. For instance, you can modify a monolithic web app that was originally developed by using traditional [ASP.NET Web Forms](https://www.asp.net/web-forms) or [ASP.NET MVC](https://www.asp.net/mvc), but now you deploy it by using Windows Containers. When you use Windows Containers, you should also migrate your data to a database in [Azure SQL Database Managed Instance](/azure/sql-database/), all without changing the core architecture of your application. - **Cloud-Native**: As introduced, you should think about architecting [cloud-native](https://www.gartner.com/doc/3181919/architect-design-cloudnative-applications) applications when you are targeting large and complex applications with multiple independent development teams working on different microservices that can be developed and deployed autonomously. Also, due to granularized and independent scalability per microservice. 
These architectural approaches face very important challenges and complexities but can be greatly simplified by using cloud PaaS and orchestrators like [Azure Kubernetes Service (AKS/ACS)](https://azure.microsoft.com/services/container-service/) (managed Kubernetes), and [Azure Functions](https://azure.microsoft.com/services/functions/) for a serverless approach. All these approaches (like microservices and Serverless) typically require you to architect for the cloud and write new code—code that is adapted to specific PaaS platforms, or code that aligns with specific architectures, like microservices. diff --git a/docs/architecture/modernize-with-azure-containers/lift-and-shift-existing-apps-azure-iaas.md b/docs/architecture/modernize-with-azure-containers/lift-and-shift-existing-apps-azure-iaas.md index 123021dd739f7..65a66ef67bbbe 100644 --- a/docs/architecture/modernize-with-azure-containers/lift-and-shift-existing-apps-azure-iaas.md +++ b/docs/architecture/modernize-with-azure-containers/lift-and-shift-existing-apps-azure-iaas.md @@ -63,7 +63,7 @@ Figure 2-2 shows you the built-in dependency mapping for all server and applicat ## Use Azure Site Recovery to migrate your existing VMs to Azure VMs -As part of the end-to-end [Azure Migrate](https://aka.ms/azuremigrate), [Azure Site Recovery](https://docs.microsoft.com/azure/site-recovery/site-recovery-overview) is a tool that you can use to easily migrate your web apps to VMs in Azure. You can use Site Recovery to replicate on-premises VMs and physical servers to Azure, or to replicate them to a secondary on-premises location. You can even replicate a workload that's running on a supported Azure VM, on an on-premises *Hyper-V* VM, on a *VMware* VM, or on a Windows or Linux physical server. Replication to Azure eliminates the cost and complexity of maintaining a secondary datacenter. 
+As part of the end-to-end [Azure Migrate](https://aka.ms/azuremigrate), [Azure Site Recovery](/azure/site-recovery/site-recovery-overview) is a tool that you can use to easily migrate your web apps to VMs in Azure. You can use Site Recovery to replicate on-premises VMs and physical servers to Azure, or to replicate them to a secondary on-premises location. You can even replicate a workload that's running on a supported Azure VM, on an on-premises *Hyper-V* VM, on a *VMware* VM, or on a Windows or Linux physical server. Replication to Azure eliminates the cost and complexity of maintaining a secondary datacenter. Site Recovery is also made specifically for hybrid environments that are partly on-premises and partly on Azure. Site Recovery helps ensure business continuity by keeping your apps that are running on VMs and on-premises physical servers available if a site goes down. It replicates workloads that are running on VMs and physical servers so that they remain available in a secondary location if the primary site isn't available. It recovers workloads to the primary site when it's up and running again. 
diff --git a/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/build-resilient-services-ready-for-the-cloud-embrace-transient-failures-in-the-cloud.md b/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/build-resilient-services-ready-for-the-cloud-embrace-transient-failures-in-the-cloud.md index cbba48db9a56c..84ceb55eafc59 100644 --- a/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/build-resilient-services-ready-for-the-cloud-embrace-transient-failures-in-the-cloud.md +++ b/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/build-resilient-services-ready-for-the-cloud-embrace-transient-failures-in-the-cloud.md @@ -27,7 +27,7 @@ A resilient application like the one shown in Figure 4-9 should implement techni You can use these techniques both in HTTP resources and in database resources. In Figure 4-9, the application is based on a 3-tier architecture, so you need these techniques at the services level (HTTP) and at the data tier level (TCP). In a monolithic application that uses only a single app tier in addition to the database (no additional services or microservices), handling transient failures at the database connection level might be enough. In that scenario, just a particular configuration of the database connection is required. -When implementing resilient communications that access the database, depending on the version of .NET you are using, it can be straightforward (for example, [with Entity Framework 6 or later](/ef/ef6/fundamentals/connection-resiliency/retry-logic). It's just a matter of configuring the database connection). Or, you might need to use additional libraries like the [Transient Fault Handling Application Block](https://docs.microsoft.com/previous-versions/msp-n-p/hh680934(v=pandp.50)) (for earlier versions of .NET), or even implement your own library. 
+When implementing resilient communications that access the database, the work can be straightforward, depending on the version of .NET you are using. For example, with [Entity Framework 6 or later](/ef/ef6/fundamentals/connection-resiliency/retry-logic), it's just a matter of configuring the database connection. For earlier versions of .NET, you might need to use additional libraries like the [Transient Fault Handling Application Block](/previous-versions/msp-n-p/hh680934(v=pandp.50)), or even implement your own library. When implementing HTTP retries and circuit breakers, the recommendation for .NET is to use the [Polly](https://github.com/App-vNext/Polly) library, which targets .NET Framework 4.0, .NET Framework 4.5, and .NET Standard 1.1, which includes .NET Core support. diff --git a/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/life-cycle-ci-cd-pipelines-devops-tools.md b/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/life-cycle-ci-cd-pipelines-devops-tools.md index d185c08dfa5c6..5d57bf2dd4075 100644 --- a/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/life-cycle-ci-cd-pipelines-devops-tools.md +++ b/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/life-cycle-ci-cd-pipelines-devops-tools.md @@ -11,9 +11,9 @@ Although continuous integration and deployment practices are well established, t Azure DevOps Services supports continuous integration and deployment of multi-container applications to a variety of environments through the official Azure DevOps Services deployment tasks: -- [Deploy to an Azure Web App for Containers](https://docs.microsoft.com/azure/devops/pipelines/apps/cd/deploy-docker-webapp?tabs=dotnet-core) +- [Deploy to an Azure Web App for Containers](/azure/devops/pipelines/apps/cd/deploy-docker-webapp?tabs=dotnet-core) -- [Deploy to Azure Kubernetes
Service](https://docs.microsoft.com/azure/devops/pipelines/apps/cd/deploy-aks?tabs=dotnet-core) +- [Deploy to Azure Kubernetes Service](/azure/devops/pipelines/apps/cd/deploy-aks?tabs=dotnet-core) But you can also deploy to [Docker Swarm](https://blog.jcorioland.io/archives/2016/11/29/full-ci-cd-pipeline-to-deploy-multi-containers-application-on-azure-container-service-docker-swarm-using-visual-studio-team-services.html) or DC/OS by using Azure DevOps Services script-based tasks. diff --git a/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/microsoft-technologies-in-cloud-optimized-applications.md b/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/microsoft-technologies-in-cloud-optimized-applications.md index 90c05258ce4b7..cab15c5c62835 100644 --- a/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/microsoft-technologies-in-cloud-optimized-applications.md +++ b/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/microsoft-technologies-in-cloud-optimized-applications.md @@ -9,7 +9,7 @@ The following list describes the tools, technologies, and solutions that are rec - **Cloud infrastructure**: The infrastructure that provides the compute platform, operating system, network, and storage. Microsoft Azure is positioned at this level. -- **Runtime**: This layer provides the environment for the application to run. If you are using containers, this layer usually is based on [Docker Engine](https://docs.docker.com/engine/), running either on Linux hosts or on Windows hosts. ([Windows Containers](https://docs.microsoft.com/virtualization/windowscontainers/about/) are supported beginning with Windows Server 2016. Windows Containers is the best choice for existing .NET Framework applications that run on Windows.) +- **Runtime**: This layer provides the environment for the application to run. 
If you are using containers, this layer usually is based on [Docker Engine](https://docs.docker.com/engine/), running either on Linux hosts or on Windows hosts. ([Windows Containers](/virtualization/windowscontainers/about/) are supported beginning with Windows Server 2016. Windows Containers is the best choice for existing .NET Framework applications that run on Windows.) - **Managed cloud**: When you choose a managed cloud option, you can avoid the expense and complexity of managing and supporting the underlying infrastructure, VMs, OS patches, and networking configuration. If you choose to migrate by using IaaS, you are responsible for all of these tasks, and for associated costs. In a managed cloud option, you manage only the applications and services that you develop. The cloud service provider typically manages everything else. Examples of managed cloud services in Azure include [Azure SQL Database](https://azure.microsoft.com/services/sql-database), [Azure Redis Cache](https://azure.microsoft.com/services/cache/), [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/), [Azure Storage](https://azure.microsoft.com/services/storage/), [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/), [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/), [Azure Active Directory](https://azure.microsoft.com/services/active-directory/), and managed compute services like [VM scale sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Azure App Service](https://azure.microsoft.com/services/app-service/), and [Azure Kubernetes Service](https://azure.microsoft.com/services/container-service/). 
diff --git a/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/modernize-your-apps-with-monitoring-and-telemetry.md b/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/modernize-your-apps-with-monitoring-and-telemetry.md index cb0d0d1ce4aca..57dfd01f29da0 100644 --- a/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/modernize-your-apps-with-monitoring-and-telemetry.md +++ b/docs/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/modernize-your-apps-with-monitoring-and-telemetry.md @@ -19,13 +19,13 @@ Figure 4-10 shows an example of how Application Insights monitors your applicati ## Monitor your Docker infrastructure with Log Analytics and its Container Monitoring solution -[Azure Log Analytics](https://docs.microsoft.com/azure/log-analytics/log-analytics-overview) is part of the [Microsoft Azure overall monitoring solution](https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitoring-overview). It's also a service in [Operations Management Suite (OMS)](https://docs.microsoft.com/azure/operations-management-suite/operations-management-suite-overview). Log Analytics monitors cloud and on-premises environments (OMS for on-premises) to help maintain availability and performance. It collects data generated by resources in your cloud and on-premises environments and from other monitoring tools to provide analysis across multiple sources. +[Azure Log Analytics](/azure/log-analytics/log-analytics-overview) is part of the [Microsoft Azure overall monitoring solution](/azure/monitoring-and-diagnostics/monitoring-overview). It's also a service in [Operations Management Suite (OMS)](/azure/operations-management-suite/operations-management-suite-overview). Log Analytics monitors cloud and on-premises environments (OMS for on-premises) to help maintain availability and performance. 
It collects data generated by resources in your cloud and on-premises environments and from other monitoring tools to provide analysis across multiple sources. -In relation to Azure infrastructure logs, Log Analytics, as an Azure service, ingests log and metric data from other Azure services (via [Azure Monitor](https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitoring-overview-azure-monitor)), Azure VMs, Docker containers, and on-premises or other cloud infrastructures. Log Analytics offers flexible log search and out-of-the box analytics on top of this data. It provides rich tools that you can use to analyze data across sources, it allows complex queries across all logs, and it can proactively alert based on specified conditions. You can even collect custom data in the central Log Analytics repository, where you can query and visualize it. You can also take advantage of the Log Analytics built-in solutions to immediately gain insights into the security and functionality of your infrastructure. +In relation to Azure infrastructure logs, Log Analytics, as an Azure service, ingests log and metric data from other Azure services (via [Azure Monitor](/azure/monitoring-and-diagnostics/monitoring-overview-azure-monitor)), Azure VMs, Docker containers, and on-premises or other cloud infrastructures. Log Analytics offers flexible log search and out-of-the-box analytics on top of this data. It provides rich tools that you can use to analyze data across sources, allows complex queries across all logs, and can proactively alert based on specified conditions. You can even collect custom data in the central Log Analytics repository, where you can query and visualize it. You can also take advantage of the Log Analytics built-in solutions to immediately gain insights into the security and functionality of your infrastructure.
You can access Log Analytics through the OMS portal or the Azure portal, which run in any browser, and provide you with access to configuration settings and multiple tools to analyze and act on collected data. -The [Container Monitoring solution](https://docs.microsoft.com/azure/log-analytics/log-analytics-containers) in Log Analytics helps you view and manage your Docker and Windows Container hosts in a single location. The solution shows which containers are running, what container image they're running, and where containers are running. You can view detailed audit information, including commands that are being used with containers. You can also troubleshoot containers by viewing and searching centralized logs, without needing to remotely view Docker or Windows hosts. You can find containers that might be noisy and consuming excess resources on a host. Additionally, you can view centralized CPU, memory, storage, and network usage, and performance information, for containers. On computers running Windows, you can centralize and compare logs from Windows Server, Hyper-V, and Docker containers. The solution supports the following container orchestrators: +The [Container Monitoring solution](/azure/log-analytics/log-analytics-containers) in Log Analytics helps you view and manage your Docker and Windows Container hosts in a single location. The solution shows which containers are running, what container image they're running, and where containers are running. You can view detailed audit information, including commands that are being used with containers. You can also troubleshoot containers by viewing and searching centralized logs, without needing to remotely view Docker or Windows hosts. You can find containers that might be noisy and consuming excess resources on a host. Additionally, you can view centralized CPU, memory, storage, and network usage, and performance information, for containers. 
On computers running Windows, you can centralize and compare logs from Windows Server, Hyper-V, and Docker containers. The solution supports the following container orchestrators: - Docker Swarm diff --git a/docs/architecture/modernize-with-azure-containers/walkthroughs-technical-get-started-overview.md b/docs/architecture/modernize-with-azure-containers/walkthroughs-technical-get-started-overview.md index 8672b328af499..76dcd7537fa3a 100644 --- a/docs/architecture/modernize-with-azure-containers/walkthroughs-technical-get-started-overview.md +++ b/docs/architecture/modernize-with-azure-containers/walkthroughs-technical-get-started-overview.md @@ -186,7 +186,7 @@ The full technical walkthrough is available in the eShopModernizing GitHub repo ### Overview -[Azure Container Instances (ACI)](https://docs.microsoft.com/azure/container-instances/) is the quickest way to have a Containers dev/test/staging environment where you can deploy single instances of containers. +[Azure Container Instances (ACI)](/azure/container-instances/) is the quickest way to have a Containers dev/test/staging environment where you can deploy single instances of containers. ### Goals diff --git a/docs/architecture/serverless/application-insights.md b/docs/architecture/serverless/application-insights.md index f20fe63b8aa83..4a8eef2924079 100644 --- a/docs/architecture/serverless/application-insights.md +++ b/docs/architecture/serverless/application-insights.md @@ -7,7 +7,7 @@ ms.date: 06/26/2018 --- # Telemetry with Application Insights -[Application Insights](https://docs.microsoft.com/azure/application-insights) is a serverless diagnostics platform that enables developers to detect, triage, and diagnose issues in web apps, mobile apps, desktop apps, and microservices. You can turn on Application Insights for function apps simply by flipping a switch in the portal. Application Insights provides all of these capabilities without you having to configure a server or set up your own database. 
All of Application Insights' capabilities are provided as a service that automatically integrates with your apps. +[Application Insights](/azure/application-insights) is a serverless diagnostics platform that enables developers to detect, triage, and diagnose issues in web apps, mobile apps, desktop apps, and microservices. You can turn on Application Insights for function apps simply by flipping a switch in the portal. Application Insights provides all of these capabilities without you having to configure a server or set up your own database. All of Application Insights' capabilities are provided as a service that automatically integrates with your apps. ![Application Insights logo](./media/application-insights-logo.png) @@ -18,7 +18,7 @@ Adding Application Insights to existing apps is as easy as adding an instrumenta - Drill into performance by operation and measure the time it takes to call third-party dependencies - Monitor CPU usage, memory, and rates across all servers that host your function apps - View a live stream of metrics including request count and latency for your function apps -- Use [Analytics](https://docs.microsoft.com/azure/application-insights/app-insights-analytics) to search, query, and create custom charts over your function data +- Use [Analytics](/azure/application-insights/app-insights-analytics) to search, query, and create custom charts over your function data ![Metrics explorer](./media/metrics-explorer.png) @@ -31,7 +31,7 @@ public static TelemetryClient telemetry = new TelemetryClient() }; ``` -The following code measures how long it takes to insert a new row into an [Azure Table Storage](https://docs.microsoft.com/azure/cosmos-db/table-storage-overview) instance: +The following code measures how long it takes to insert a new row into an [Azure Table Storage](/azure/cosmos-db/table-storage-overview) instance: ```csharp var operation = TableOperation.Insert(entry); @@ -49,7 +49,7 @@ The custom telemetry reveals the average time to 
insert a new row is 32.6 millis Application Insights provides a powerful, convenient way to log detailed telemetry about your serverless applications. You have full control over the level of tracing and logging that is provided. You can track custom statistics such as events, dependencies, and page view. Finally, the powerful analytics enable you to write queries that ask important questions and generate charts and advanced insights. -For more information, see [Monitor Azure Functions](https://docs.microsoft.com/azure/azure-functions/functions-monitoring). +For more information, see [Monitor Azure Functions](/azure/azure-functions/functions-monitoring). >[!div class="step-by-step"] >[Previous](azure-functions.md) diff --git a/docs/architecture/serverless/architecture-approaches.md b/docs/architecture/serverless/architecture-approaches.md index aa9b11b947b2c..2531e3c7ae352 100644 --- a/docs/architecture/serverless/architecture-approaches.md +++ b/docs/architecture/serverless/architecture-approaches.md @@ -15,7 +15,7 @@ This chapter provides an overview of both logical and physical architecture patt Modern business applications follow a variety of architecture patterns. This section represents a survey of common patterns. The patterns listed here aren't necessarily all best practices, but illustrate different approaches. -For more information, see [Azure application architecture guide](https://docs.microsoft.com/azure/architecture/guide/). +For more information, see [Azure application architecture guide](/azure/architecture/guide/). ## Monoliths @@ -59,7 +59,7 @@ Serverless may be used to implement one or more layers. 
## Microservices -**[Microservices](https://docs.microsoft.com/azure/architecture/guide/architecture-styles/microservices)** architectures contain common characteristics that include: +**[Microservices](/azure/architecture/guide/architecture-styles/microservices)** architectures contain common characteristics that include: - Applications are composed of several small services. - Each service runs in its own process. diff --git a/docs/architecture/serverless/architecture-deployment-approaches.md b/docs/architecture/serverless/architecture-deployment-approaches.md index f8575effd2967..039c5a8c5fe68 100644 --- a/docs/architecture/serverless/architecture-deployment-approaches.md +++ b/docs/architecture/serverless/architecture-deployment-approaches.md @@ -11,7 +11,7 @@ Regardless of the architecture approach used to design a business application, t ## N-Tier applications -The [N-Tier architecture pattern](https://docs.microsoft.com/azure/architecture/guide/architecture-styles/n-tier) is a mature architecture and simply refers to applications that separate various logical layers into separate physical tiers. N-Tier architecture is a physical implementation of N-Layer architecture. The most common implementation of this architecture includes: +The [N-Tier architecture pattern](/azure/architecture/guide/architecture-styles/n-tier) is a mature architecture and simply refers to applications that separate various logical layers into separate physical tiers. N-Tier architecture is a physical implementation of N-Layer architecture. The most common implementation of this architecture includes: - A presentation tier, for example a web app. - An API or data access tier, such as a REST API. @@ -50,11 +50,11 @@ The traditional approach to hosting applications requires buying hardware and ma Virtualization of hardware, via "virtual machines" enables Infrastructure as a Service (IaaS). 
Host machines are effectively partitioned to provide resources to instances with allocations for their own memory, CPU, and storage. The team provisions the necessary VMs and configures the associated networks and access to storage. -For more information, see [virtual machine N-tier reference architecture](https://docs.microsoft.com/azure/architecture/reference-architectures/virtual-machines-windows/n-tier). +For more information, see [virtual machine N-tier reference architecture](/azure/architecture/reference-architectures/virtual-machines-windows/n-tier). Although virtualization and Infrastructure as a Service (IaaS) address many concerns, it still leaves much responsibility in the hands of the infrastructure team. The team maintains operating system versions, applies security patches, and installs third-party dependencies on the target machines. Apps often behave differently on production machines compared to the test environment. Issues arise due to different dependency versions and/or OS SKU levels. Although many organizations deploy N-Tier applications to these targets, many companies benefit from deploying to a more cloud native model such as [Platform as a Service](#platform-as-a-service-paas). Architectures with microservices are more challenging because of the requirements to scale out for elasticity and resiliency. -For more information, see [virtual machines](https://docs.microsoft.com/azure/virtual-machines/). +For more information, see [virtual machines](/azure/virtual-machines/). ## Platform as a Service (PaaS) @@ -75,7 +75,7 @@ The main disadvantage of PaaS traditionally has been vendor lock-in. For example Software as a Service or SaaS is centrally hosted and available without local installation or provisioning. SaaS often is hosted on top of PaaS as a platform for deploying software. SaaS provides services to run and connect with existing software. SaaS is often industry and vertical specific. 
SaaS is often licensed and typically provides a client/server model. Most modern SaaS offerings use web-based apps for the client. Companies typically consider SaaS as a business solution to license offerings. It isn't often implemented as architecture consideration for scalability and maintainability of an application. Indeed, most SaaS solutions are built on IaaS, PaaS, and/or serverless back ends. -Learn more about SaaS through a [sample application](https://docs.microsoft.com/azure/sql-database/saas-tenancy-welcome-wingtip-tickets-app). +Learn more about SaaS through a [sample application](/azure/sql-database/saas-tenancy-welcome-wingtip-tickets-app). ## Containers and Functions as a Service (FaaS) @@ -99,7 +99,7 @@ The following image illustrates an example Kubernetes installation. Nodes in the ![Kubernetes](./media/kubernetes-example.png) -For more information about orchestration, see [Kubernetes on Azure](https://docs.microsoft.com/azure/aks/intro-kubernetes). +For more information about orchestration, see [Kubernetes on Azure](/azure/aks/intro-kubernetes). Functions as a Service (FaaS) is a specialized container service that is similar to serverless. A specific implementation of FaaS, called [OpenFaaS](https://github.com/openfaas/faas), sits on top of containers to provide serverless capabilities. OpenFaaS provides templates that package all of the container dependencies necessary to run a piece of code. Using templates simplifies the process of deploying code as a functional unit. OpenFaaS targets architectures that already include containers and orchestrators because it can use the existing infrastructure. Although it provides serverless functionality, it specifically requires you to use Docker and an orchestrator. @@ -124,7 +124,7 @@ The advantages of serverless include: - **Instant scale.** Serverless can scale to match workloads automatically and quickly. 
- **Faster time to market.** Developers focus on code and deploy directly to the serverless platform. Components can be released independently of each other. -Serverless is most often discussed in the context of compute, but can also apply to data. For example, [Azure SQL](https://docs.microsoft.com/azure/sql-database) and [Cosmos DB](https://docs.microsoft.com/azure/cosmos-db) both provide cloud databases that don't require you to configure host machines or clusters. This book focuses on serverless compute. +Serverless is most often discussed in the context of compute, but can also apply to data. For example, [Azure SQL](/azure/sql-database) and [Cosmos DB](/azure/cosmos-db) both provide cloud databases that don't require you to configure host machines or clusters. This book focuses on serverless compute. ## Summary @@ -148,16 +148,16 @@ The next chapter will focus on serverless architecture, use cases, and design pa ## Recommended resources -- [Azure application architecture guide](https://docs.microsoft.com/azure/architecture/guide/) -- [Azure Cosmos DB](https://docs.microsoft.com/azure/cosmos-db) -- [Azure SQL](https://docs.microsoft.com/azure/sql-database) -- [N-Tier architecture pattern](https://docs.microsoft.com/azure/architecture/guide/architecture-styles/n-tier) -- [Kubernetes on Azure](https://docs.microsoft.com/azure/aks/intro-kubernetes) -- [Microservices](https://docs.microsoft.com/azure/architecture/guide/architecture-styles/microservices) -- [Virtual machine N-tier reference architecture](https://docs.microsoft.com/azure/architecture/reference-architectures/virtual-machines-windows/n-tier) -- [Virtual machines](https://docs.microsoft.com/azure/virtual-machines/) +- [Azure application architecture guide](/azure/architecture/guide/) +- [Azure Cosmos DB](/azure/cosmos-db) +- [Azure SQL](/azure/sql-database) +- [N-Tier architecture pattern](/azure/architecture/guide/architecture-styles/n-tier) +- [Kubernetes on Azure](/azure/aks/intro-kubernetes) +- 
[Microservices](/azure/architecture/guide/architecture-styles/microservices) +- [Virtual machine N-tier reference architecture](/azure/architecture/reference-architectures/virtual-machines-windows/n-tier) +- [Virtual machines](/azure/virtual-machines/) - [What is Docker?](../microservices/container-docker-introduction/docker-defined.md) -- [Wingtip Tickets SaaS application](https://docs.microsoft.com/azure/sql-database/saas-tenancy-welcome-wingtip-tickets-app) +- [Wingtip Tickets SaaS application](/azure/sql-database/saas-tenancy-welcome-wingtip-tickets-app) >[!div class="step-by-step"] >[Previous](architecture-approaches.md) diff --git a/docs/architecture/serverless/azure-functions.md b/docs/architecture/serverless/azure-functions.md index 25af51f990d4d..9c9944ba96f2f 100644 --- a/docs/architecture/serverless/azure-functions.md +++ b/docs/architecture/serverless/azure-functions.md @@ -13,7 +13,7 @@ Azure functions provide a serverless compute experience. A function is invoked b The current runtime version 3.0 supports cross-platform .NET Core 3.1 applications. Additional languages besides C# such as JavaScript, F#, and Java are supported. Functions created in the portal provide a rich scripting syntax. Functions created as standalone projects can be deployed with full platform support and capabilities. -For more information, see [Azure Functions documentation](https://docs.microsoft.com/azure/azure-functions). +For more information, see [Azure Functions documentation](/azure/azure-functions). ## Programming language support @@ -29,15 +29,15 @@ The following languages are all supported in general availability (GA). |**TypeScript**|Node 10 & 12 (via JavaScript)| |**PowerShell**|PowerShell Core 6| -For more information, see [Supported languages](https://docs.microsoft.com/azure/azure-functions/supported-languages). +For more information, see [Supported languages](/azure/azure-functions/supported-languages). 
## App service plans Functions are backed by an *app service plan*. The plan defines the resources used by the functions app. You can assign plans to a region, determine the size and number of virtual machines that will be used, and pick a pricing tier. For a true serverless approach, function apps may use the **consumption** plan. The consumption plan will scale the back end automatically based on load. -Another hosting option for function apps is the [Premium plan](https://docs.microsoft.com/azure/azure-functions/functions-premium-plan). This plan provides an "always on" instance to avoid cold start, supports advanced features like VNet connectivity, and runs on premium hardware. +Another hosting option for function apps is the [Premium plan](/azure/azure-functions/functions-premium-plan). This plan provides an "always on" instance to avoid cold start, supports advanced features like VNet connectivity, and runs on premium hardware. -For more information, see [App service plans](https://docs.microsoft.com/azure/app-service/azure-web-sites-web-hosting-plans-in-depth-overview). +For more information, see [App service plans](/azure/app-service/azure-web-sites-web-hosting-plans-in-depth-overview). ## Create your first function @@ -47,11 +47,11 @@ There are three common ways you can create function apps. - Create the necessary resources using the Azure CLI. - Build functions locally using your favorite IDE and publish them to Azure. -For more information on creating a scripted function in the portal, see [Create your first function in the Azure portal](https://docs.microsoft.com/azure/azure-functions/functions-create-first-azure-function). +For more information on creating a scripted function in the portal, see [Create your first function in the Azure portal](/azure/azure-functions/functions-create-first-azure-function). 
-To build from the Azure CLI, see [Create your first function using the Azure CLI](https://docs.microsoft.com/azure/azure-functions/functions-create-first-azure-function-azure-cli). +To build from the Azure CLI, see [Create your first function using the Azure CLI](/azure/azure-functions/functions-create-first-azure-function-azure-cli). -To create a function from Visual Studio, see [Create your first function using Visual Studio](https://docs.microsoft.com/azure/azure-functions/functions-create-your-first-function-visual-studio). +To create a function from Visual Studio, see [Create your first function using Visual Studio](/azure/azure-functions/functions-create-your-first-function-visual-studio). ## Understand triggers and bindings @@ -108,7 +108,7 @@ public static string Run(Stream myBlob, string name, TraceWriter log) The example is a simple function that takes the name of the file that was modified or uploaded to blob storage, and places it on a queue for later processing. -For a full list of triggers and bindings, see [Azure Functions triggers and bindings concepts](https://docs.microsoft.com/azure/azure-functions/functions-triggers-bindings). +For a full list of triggers and bindings, see [Azure Functions triggers and bindings concepts](/azure/azure-functions/functions-triggers-bindings). 
>[!div class="step-by-step"] >[Previous](azure-serverless-platform.md) diff --git a/docs/architecture/serverless/durable-azure-functions.md b/docs/architecture/serverless/durable-azure-functions.md index 85d486c2d7a7c..7672903e66525 100644 --- a/docs/architecture/serverless/durable-azure-functions.md +++ b/docs/architecture/serverless/durable-azure-functions.md @@ -90,9 +90,9 @@ public static bool CheckAndReserveInventory([ActivityTrigger] DurableActivityCon ## Recommended resources -- [Durable Functions](https://docs.microsoft.com/azure/azure-functions/durable-functions-overview) -- [Bindings for Durable Functions](https://docs.microsoft.com/azure/azure-functions/durable-functions-bindings) -- [Manage instances in Durable Functions](https://docs.microsoft.com/azure/azure-functions/durable-functions-instance-management) +- [Durable Functions](/azure/azure-functions/durable-functions-overview) +- [Bindings for Durable Functions](/azure/azure-functions/durable-functions-bindings) +- [Manage instances in Durable Functions](/azure/azure-functions/durable-functions-instance-management) >[!div class="step-by-step"] >[Previous](event-grid.md) diff --git a/docs/architecture/serverless/event-grid.md b/docs/architecture/serverless/event-grid.md index f6f874701a37d..94fbfd5d6d72b 100644 --- a/docs/architecture/serverless/event-grid.md +++ b/docs/architecture/serverless/event-grid.md @@ -25,7 +25,7 @@ Event Grid addresses several different scenarios. This section covers three of t ![Ops automation](./media/ops-automation.png) -Event Grid can help speed automation and simplify policy enforcement by notifying [Azure Automation](https://docs.microsoft.com/azure/automation) when infrastructure is provisioned. +Event Grid can help speed automation and simplify policy enforcement by notifying [Azure Automation](/azure/automation) when infrastructure is provisioned. 
### Application integration @@ -41,11 +41,11 @@ Event Grid can trigger Azure Functions, Logic Apps, or your own custom code. A m ## Event Grid vs. other Azure messaging services -Azure provides several messaging services, including [Event Hubs](https://docs.microsoft.com/azure/event-hubs) and [Service Bus](https://docs.microsoft.com/azure/service-bus-messaging). Each is designed to address a specific set of use cases. The following diagram provides a high-level overview of the differences between the services. +Azure provides several messaging services, including [Event Hubs](/azure/event-hubs) and [Service Bus](/azure/service-bus-messaging). Each is designed to address a specific set of use cases. The following diagram provides a high-level overview of the differences between the services. ![Azure messaging comparison](./media/azure-messaging-services.png) -For a more in-depth comparison, see [Compare messaging services](https://docs.microsoft.com/azure/event-grid/compare-messaging-services). +For a more in-depth comparison, see [Compare messaging services](/azure/event-grid/compare-messaging-services). ## Performance targets @@ -104,34 +104,34 @@ A major benefit of using Event Grid is the automatic messages produced by Azure. | | Microsoft.Resources.ResourceDeleteFailure | Raised when a resource delete operation fails. | | | Microsoft.Resources.ResourceDeleteCancel | Raised when a resource delete operation is canceled. This event happens when a template deployment is canceled. | -For more information, see [Azure Event Grid event schema](https://docs.microsoft.com/azure/event-grid/event-schema). +For more information, see [Azure Event Grid event schema](/azure/event-grid/event-schema). You can access Event Grid from any type of application, even one that runs on-premises. ## Conclusion -In this chapter you learned about the Azure serverless platform that is composed of Azure Functions, Logic Apps, and Event Grid. 
You can use these resources to build an entirely serverless app architecture, or create a hybrid solution that interacts with other cloud resources and on-premises servers. Combined with a serverless data platform such as [Azure SQL](https://docs.microsoft.com/azure/sql-database) or [CosmosDB](https://docs.microsoft.com/azure/cosmos-db/introduction), you can build fully managed cloud native applications. +In this chapter you learned about the Azure serverless platform that is composed of Azure Functions, Logic Apps, and Event Grid. You can use these resources to build an entirely serverless app architecture, or create a hybrid solution that interacts with other cloud resources and on-premises servers. Combined with a serverless data platform such as [Azure SQL](/azure/sql-database) or [CosmosDB](/azure/cosmos-db/introduction), you can build fully managed cloud native applications. ## Recommended resources -- [App service plans](https://docs.microsoft.com/azure/app-service/azure-web-sites-web-hosting-plans-in-depth-overview) -- [Application Insights](https://docs.microsoft.com/azure/application-insights) -- [Application Insights Analytics](https://docs.microsoft.com/azure/application-insights/app-insights-analytics) +- [App service plans](/azure/app-service/azure-web-sites-web-hosting-plans-in-depth-overview) +- [Application Insights](/azure/application-insights) +- [Application Insights Analytics](/azure/application-insights/app-insights-analytics) - [Azure: Bring your app to the cloud with serverless Azure Functions](https://channel9.msdn.com/events/Connect/2017/E102) -- [Azure Event Grid](https://docs.microsoft.com/azure/event-grid/overview) -- [Azure Event Grid event schema](https://docs.microsoft.com/azure/event-grid/event-schema) -- [Azure Event Hubs](https://docs.microsoft.com/azure/event-hubs) -- [Azure Functions documentation](https://docs.microsoft.com/azure/azure-functions) -- [Azure Functions triggers and bindings 
concepts](https://docs.microsoft.com/azure/azure-functions/functions-triggers-bindings) -- [Azure Logic Apps](https://docs.microsoft.com/azure/logic-apps) -- [Azure Service Bus](https://docs.microsoft.com/azure/service-bus-messaging) -- [Azure Table Storage](https://docs.microsoft.com/azure/cosmos-db/table-storage-overview) -- [Connecting to on-premises data sources with Azure On-premises Data Gateway](https://docs.microsoft.com/azure/analysis-services/analysis-services-gateway) -- [Create your first function in the Azure portal](https://docs.microsoft.com/azure/azure-functions/functions-create-first-azure-function) -- [Create your first function using the Azure CLI](https://docs.microsoft.com/azure/azure-functions/functions-create-first-azure-function-azure-cli) -- [Create your first function using Visual Studio](https://docs.microsoft.com/azure/azure-functions/functions-create-your-first-function-visual-studio) -- [Functions supported languages](https://docs.microsoft.com/azure/azure-functions/supported-languages) -- [Monitor Azure Functions](https://docs.microsoft.com/azure/azure-functions/functions-monitoring) +- [Azure Event Grid](/azure/event-grid/overview) +- [Azure Event Grid event schema](/azure/event-grid/event-schema) +- [Azure Event Hubs](/azure/event-hubs) +- [Azure Functions documentation](/azure/azure-functions) +- [Azure Functions triggers and bindings concepts](/azure/azure-functions/functions-triggers-bindings) +- [Azure Logic Apps](/azure/logic-apps) +- [Azure Service Bus](/azure/service-bus-messaging) +- [Azure Table Storage](/azure/cosmos-db/table-storage-overview) +- [Connecting to on-premises data sources with Azure On-premises Data Gateway](/azure/analysis-services/analysis-services-gateway) +- [Create your first function in the Azure portal](/azure/azure-functions/functions-create-first-azure-function) +- [Create your first function using the Azure CLI](/azure/azure-functions/functions-create-first-azure-function-azure-cli) +- [Create your 
first function using Visual Studio](/azure/azure-functions/functions-create-your-first-function-visual-studio) +- [Functions supported languages](/azure/azure-functions/supported-languages) +- [Monitor Azure Functions](/azure/azure-functions/functions-monitoring) >[!div class="step-by-step"] >[Previous](logic-apps.md) diff --git a/docs/architecture/serverless/index.md b/docs/architecture/serverless/index.md index 4511bd77af6f5..7e01a5364fd31 100644 --- a/docs/architecture/serverless/index.md +++ b/docs/architecture/serverless/index.md @@ -69,7 +69,7 @@ Participants and reviewers: This guide focuses on cloud native development of applications that use serverless. The book highlights the benefits and exposes the potential drawbacks of developing serverless apps and provides a survey of serverless architectures. Many examples of how serverless can be used are illustrated along with various serverless design patterns. -This guide explains the components of the Azure serverless platform and focuses specifically on implementation of serverless using [Azure Functions](https://docs.microsoft.com/azure/azure-functions/functions-overview). You'll learn about triggers and bindings as well as how to implement serverless apps that rely on state using durable functions. Finally, business examples and case studies will help provide context and a frame of reference to determine whether serverless is the right approach for your projects. +This guide explains the components of the Azure serverless platform and focuses specifically on implementation of serverless using [Azure Functions](/azure/azure-functions/functions-overview). You'll learn about triggers and bindings as well as how to implement serverless apps that rely on state using durable functions. Finally, business examples and case studies will help provide context and a frame of reference to determine whether serverless is the right approach for your projects. 
## Evolution of cloud platforms @@ -111,12 +111,12 @@ Another feature of serverless is micro-billing. It's common for web applications ## What this guide doesn't cover -This guide specifically emphasizes architecture approaches and design patterns and isn't a deep dive into the implementation details of Azure Functions, [Logic Apps](https://docs.microsoft.com/azure/logic-apps/logic-apps-what-are-logic-apps), or other serverless platforms. This guide doesn't cover, for example, advanced workflows with Logic Apps or features of Azure Functions such as configuring Cross-Origin Resource Sharing (CORS), applying custom domains, or uploading SSL certificates. These details are available through the online [Azure Functions documentation](https://docs.microsoft.com/azure/azure-functions/functions-reference). +This guide specifically emphasizes architecture approaches and design patterns and isn't a deep dive into the implementation details of Azure Functions, [Logic Apps](/azure/logic-apps/logic-apps-what-are-logic-apps), or other serverless platforms. This guide doesn't cover, for example, advanced workflows with Logic Apps or features of Azure Functions such as configuring Cross-Origin Resource Sharing (CORS), applying custom domains, or uploading SSL certificates. These details are available through the online [Azure Functions documentation](/azure/azure-functions/functions-reference). 
### Additional resources -- [Azure Architecture center](https://docs.microsoft.com/azure/architecture/) -- [Best practices for cloud applications](https://docs.microsoft.com/azure/architecture/best-practices/api-design) +- [Azure Architecture Center](/azure/architecture/) +- [Best practices for cloud applications](/azure/architecture/best-practices/api-design) ## Who should use the guide diff --git a/docs/architecture/serverless/logic-apps.md b/docs/architecture/serverless/logic-apps.md index e260d8cedcf0a..cc02fb099c01f 100644 --- a/docs/architecture/serverless/logic-apps.md +++ b/docs/architecture/serverless/logic-apps.md @@ -7,11 +7,11 @@ ms.date: 06/26/2018 --- # Azure Logic Apps -[Azure Logic Apps](https://docs.microsoft.com/azure/logic-apps) provides a serverless engine to build automated workflows to integrate apps and data between cloud services and on-premises systems. You build workflows using a visual designer. You can trigger workflows based on events or timers and leverage connectors to integration applications and facilitate business-to-business (B2B) communication. Logic Apps integrates seamlessly with Azure Functions. +[Azure Logic Apps](/azure/logic-apps) provides a serverless engine to build automated workflows to integrate apps and data between cloud services and on-premises systems. You build workflows using a visual designer. You can trigger workflows based on events or timers and leverage connectors to integrate applications and facilitate business-to-business (B2B) communication. Logic Apps integrates seamlessly with Azure Functions. ![Azure Logic Apps logo](./media/logic-apps-logo.png) -Logic Apps can do more than just connect your cloud services (like functions) with cloud resources (like queues and databases). You can also orchestrate on-premises workflows with the on-premises gateway.
For example, you can use the Logic App to trigger an on-premises SQL stored procedure in response to a cloud-based event or conditional logic in your workflow. Learn more about [Connecting to on-premises data sources with Azure On-premises Data Gateway](https://docs.microsoft.com/azure/analysis-services/analysis-services-gateway). +Logic Apps can do more than just connect your cloud services (like functions) with cloud resources (like queues and databases). You can also orchestrate on-premises workflows with the on-premises gateway. For example, you can use the Logic App to trigger an on-premises SQL stored procedure in response to a cloud-based event or conditional logic in your workflow. Learn more about [Connecting to on-premises data sources with Azure On-premises Data Gateway](/azure/analysis-services/analysis-services-gateway). ![Logic Apps architecture](./media/logic-apps-architecture.png) @@ -25,7 +25,7 @@ Once the app is triggered, you can use the visual designer to build out steps, l The Logic Apps dashboard shows the history of running your workflows and whether each run completed successfully or not. You can navigate into any given run and inspect the data used by each step for troubleshooting. Logic Apps also provides existing templates you can edit and are well suited for complex enterprise workflows. -To learn more, see [Azure Logic Apps](https://docs.microsoft.com/azure/logic-apps). +To learn more, see [Azure Logic Apps](/azure/logic-apps). 
>[!div class="step-by-step"] >[Previous](application-insights.md) diff --git a/docs/architecture/serverless/orchestration-patterns.md b/docs/architecture/serverless/orchestration-patterns.md index c0b875ccb90fa..f260e91e6edac 100644 --- a/docs/architecture/serverless/orchestration-patterns.md +++ b/docs/architecture/serverless/orchestration-patterns.md @@ -147,7 +147,7 @@ public static async Task CheckStockPrice([OrchestrationTrigger] DurableOrchestra ## Recommended resources -- [Azure Durable Functions](https://docs.microsoft.com/azure/azure-functions/durable-functions-overview) +- [Azure Durable Functions](/azure/azure-functions/durable-functions-overview) - [Unit Testing in .NET Core and .NET Standard](../../core/testing/index.md) >[!div class="step-by-step"] diff --git a/docs/architecture/serverless/serverless-architecture-considerations.md b/docs/architecture/serverless/serverless-architecture-considerations.md index 11ac12984256c..73c7ccbcdf98e 100644 --- a/docs/architecture/serverless/serverless-architecture-considerations.md +++ b/docs/architecture/serverless/serverless-architecture-considerations.md @@ -17,7 +17,7 @@ There are several solutions to adopt state without compromising the benefits of - Use a temporary data store or distributed cache, like Redis - Store state in a database, like SQL or CosmosDB -Handle state through a workflow engine like [durable functions](https://docs.microsoft.com/azure/azure-functions/durable/durable-functions-overview) +Handle state through a workflow engine like [durable functions](/azure/azure-functions/durable/durable-functions-overview) The bottom line is that you should be aware of the need for any state management within processes you're considering implementing with serverless. @@ -57,7 +57,7 @@ Rules often specify how to scale-up (increase the host resources) and scale-out ## Monitoring, tracing, and logging -An often overlooked aspect of DevOps is monitoring applications once deployed.
It's important to have a strategy for monitoring serverless functions. The biggest challenge is often correlation, or recognizing when a user calls multiple functions as part of the same interaction. Most serverless platforms allow console logging that can be imported into third-party tools. There are also options to automate collection of telemetry, generate and track correlation IDs, and monitor specific actions to provide detailed insights. Azure provides the advanced [Application Insights platform](https://docs.microsoft.com/azure/azure-functions/functions-monitoring) for monitoring and analytics. +An often overlooked aspect of DevOps is monitoring applications once deployed. It's important to have a strategy for monitoring serverless functions. The biggest challenge is often correlation, or recognizing when a user calls multiple functions as part of the same interaction. Most serverless platforms allow console logging that can be imported into third-party tools. There are also options to automate collection of telemetry, generate and track correlation IDs, and monitor specific actions to provide detailed insights. Azure provides the advanced [Application Insights platform](/azure/azure-functions/functions-monitoring) for monitoring and analytics. ## Inter-service dependencies diff --git a/docs/architecture/serverless/serverless-architecture.md b/docs/architecture/serverless/serverless-architecture.md index ef733c5bad7b3..c519f96a0fe20 100644 --- a/docs/architecture/serverless/serverless-architecture.md +++ b/docs/architecture/serverless/serverless-architecture.md @@ -9,7 +9,7 @@ ms.date: 06/26/2018 There are many approaches to using [serverless](https://azure.com/serverless) architectures. This chapter explores examples of common architectures that integrate serverless. It also covers concerns that may pose additional challenges or require extra consideration when implementing serverless. 
Finally, several design examples are provided that illustrate various serverless use cases. -Serverless hosts often use an existing container-based or PaaS layer to manage the serverless instances. For example, Azure Functions is based on [Azure App Service](https://docs.microsoft.com/azure/app-service/). The App Service is used to scale out instances and manage the runtime that executes Azure Functions code. For Windows-based functions, the host runs as PaaS and scales out the .NET runtime. For Linux-based functions, the host leverages containers. +Serverless hosts often use an existing container-based or PaaS layer to manage the serverless instances. For example, Azure Functions is based on [Azure App Service](/azure/app-service/). The App Service is used to scale out instances and manage the runtime that executes Azure Functions code. For Windows-based functions, the host runs as PaaS and scales out the .NET runtime. For Linux-based functions, the host leverages containers. ![Azure Functions architecture](./media/azure-functions-architecture.png) @@ -65,7 +65,7 @@ The sheer volume of devices and information often dictates an event-driven archi - Facilitates independent versioning so developers can update the business logic for a specific device without having to deploy the entire system. - Resiliency and less downtime. -The pervasiveness of IoT has resulted in several serverless products that focus specifically on IoT concerns, such as [Azure IoT Hub](https://docs.microsoft.com/azure/iot-hub). Serverless automates tasks such as device registration, policy enforcement, tracking, and even deployment of code to devices at *the edge*. The edge refers to devices like sensors and actuators that are connected to, but not an active part of, the Internet. +The pervasiveness of IoT has resulted in several serverless products that focus specifically on IoT concerns, such as [Azure IoT Hub](/azure/iot-hub). 
Serverless automates tasks such as device registration, policy enforcement, tracking, and even deployment of code to devices at *the edge*. The edge refers to devices like sensors and actuators that are connected to, but not an active part of, the Internet. >[!div class="step-by-step"] >[Previous](architecture-approaches.md) diff --git a/docs/architecture/serverless/serverless-business-scenarios.md b/docs/architecture/serverless/serverless-business-scenarios.md index 4270e2a2f5c04..a0c8f11bc4be5 100644 --- a/docs/architecture/serverless/serverless-business-scenarios.md +++ b/docs/architecture/serverless/serverless-business-scenarios.md @@ -11,11 +11,11 @@ There are many use cases and scenarios for serverless applications. This chapter ## Big data processing -![Map/reduce diagram](https://docs.microsoft.com/samples/azure-samples/durablefunctions-mapreduce-dotnet/big-data-processing-serverless-mapreduce-on-azure/media/mapreducearchitecture.png) +![Map/reduce diagram](/samples/azure-samples/durablefunctions-mapreduce-dotnet/big-data-processing-serverless-mapreduce-on-azure/media/mapreducearchitecture.png) This example uses serverless to do a map/reduce operation on a big data set. It determines the average speed of New York Yellow taxi trips per day in 2017. 
-[Big Data Processing: Serverless MapReduce on Azure](https://docs.microsoft.com/samples/azure-samples/durablefunctions-mapreduce-dotnet/big-data-processing-serverless-mapreduce-on-azure/) +[Big Data Processing: Serverless MapReduce on Azure](/samples/azure-samples/durablefunctions-mapreduce-dotnet/big-data-processing-serverless-mapreduce-on-azure/) ## Create serverless applications: hands-on lab @@ -29,33 +29,33 @@ Learn how to use functions to execute server-side logic and build serverless arc - Monitoring - Development, testing, and deployment -[Create serverless applications](https://docs.microsoft.com/learn/paths/create-serverless-applications/) +[Create serverless applications](/learn/paths/create-serverless-applications/) ## Customer reviews This sample showcases the new Azure Functions tooling for C# Class Libraries in Visual Studio. Create a website where customers submit product reviews that are stored in Azure storage blobs and CosmosDB. Add an Azure Function to perform automated moderation of the customer reviews using Azure Cognitive Services. Use an Azure storage queue to decouple the website from the function. -[Customer Reviews App with Cognitive Services](https://docs.microsoft.com/samples/azure-samples/functions-customer-reviews/customer-reviews-cognitive-services/) +[Customer Reviews App with Cognitive Services](/samples/azure-samples/functions-customer-reviews/customer-reviews-cognitive-services/) ## Docker Linux image support This sample demonstrates how to create a `Dockerfile` to build and run Azure Functions on a Linux Docker container. 
-[Azure Functions on Linux](https://docs.microsoft.com/samples/azure-samples/functions-linux-custom-image/azure-functions-on-linux-custom-image-tutorial-sample-project/) +[Azure Functions on Linux](/samples/azure-samples/functions-linux-custom-image/azure-functions-on-linux-custom-image-tutorial-sample-project/) ## File processing and validation This example parses a set of CSV files from hypothetical customers. It ensures that all files required for a customer "batch" are ready, then validates the structure of each file. Different solutions are presented using Azure Functions, Logic Apps, and Durable Functions. -[File processing and validation using Azure Functions, Logic Apps, and Durable Functions](https://docs.microsoft.com/samples/azure-samples/serverless-file-validation/file-processing-and-validation-using-azure-functions-logic-apps-and-durable-functions/) +[File processing and validation using Azure Functions, Logic Apps, and Durable Functions](/samples/azure-samples/serverless-file-validation/file-processing-and-validation-using-azure-functions-logic-apps-and-durable-functions/) ## Game data visualization -![Game telemetry](https://docs.microsoft.com/samples/azure-samples/gaming-in-editor-telemetry/in-editor-telemetry-visualization/media/points.png) +![Game telemetry](/samples/azure-samples/gaming-in-editor-telemetry/in-editor-telemetry-visualization/media/points.png) An example of how a developer could implement an in-editor data visualization solution for their game. In fact, an Unreal Engine 4 Plugin and Unity Plugin were developed using this sample as their backend. The service component is game engine agnostic.
-[In-editor game telemetry visualization](https://docs.microsoft.com/samples/azure-samples/gaming-in-editor-telemetry/in-editor-telemetry-visualization/) +[In-editor game telemetry visualization](/samples/azure-samples/gaming-in-editor-telemetry/in-editor-telemetry-visualization/) ## GraphQL @@ -65,52 +65,52 @@ Create a serverless function that exposes a GraphQL API. ## Internet of Things (IoT) reliable edge relay -![IoT Architecture](https://docs.microsoft.com/samples/azure-samples/iot-reliable-edge-relay/iot-reliable-edge-relay/media/architecture.png) +![IoT Architecture](/samples/azure-samples/iot-reliable-edge-relay/iot-reliable-edge-relay/media/architecture.png) This sample implements a new communication protocol to enable reliable upstream communication from IoT devices. It automates data gap detection and backfill. -[IoT Reliable Edge Relay](https://docs.microsoft.com/samples/azure-samples/iot-reliable-edge-relay/iot-reliable-edge-relay/) +[IoT Reliable Edge Relay](/samples/azure-samples/iot-reliable-edge-relay/iot-reliable-edge-relay/) ## Microservices reference architecture -![Reference architecture](https://docs.microsoft.com/samples/azure-samples/serverless-microservices-reference-architecture/serverless-microservices-reference-architecture/media/macro-architecture.png) +![Reference architecture](/samples/azure-samples/serverless-microservices-reference-architecture/serverless-microservices-reference-architecture/media/macro-architecture.png) A reference architecture that walks you through the decision-making process involved in designing, developing, and delivering the Rideshare by Relecloud application (a fictitious company). It includes hands-on instructions for configuring and deploying all of the architecture's components. 
-[Serverless Microservices reference architecture](https://docs.microsoft.com/samples/azure-samples/serverless-microservices-reference-architecture/serverless-microservices-reference-architecture/) +[Serverless Microservices reference architecture](/samples/azure-samples/serverless-microservices-reference-architecture/serverless-microservices-reference-architecture/) ## Migrate console apps to serverless This sample is a generic function (`.csx` file) that can be used to convert any console application to an HTTP web service in Azure Functions. All you have to do is edit a configuration file and specify what input parameters will be passed as arguments to the `.exe`. -[Run Console Apps on Azure Functions](https://docs.microsoft.com/samples/azure-samples/functions-dotnet-migrating-console-apps/run-console-apps-on-azure-functions/) +[Run Console Apps on Azure Functions](/samples/azure-samples/functions-dotnet-migrating-console-apps/run-console-apps-on-azure-functions/) ## Serverless for mobile Azure Functions are easy to implement and maintain, and are accessible through HTTP. They are a great way to implement an API for a mobile application. Microsoft offers great cross-platform tools for iOS, Android, and Windows with Xamarin. As such, Xamarin and Azure Functions work well together. This article shows how to implement an Azure Function first in the Azure portal or in Visual Studio, and then build a cross-platform client with Xamarin.Forms running on Android, iOS, and Windows.
-[Implementing a simple Azure Function with a Xamarin.Forms client](https://docs.microsoft.com/samples/azure-samples/functions-xamarin-getting-started/implementing-a-simple-azure-function-with-a-xamarinforms-client/) +[Implementing a simple Azure Function with a Xamarin.Forms client](/samples/azure-samples/functions-xamarin-getting-started/implementing-a-simple-azure-function-with-a-xamarinforms-client/) ## Serverless messaging This sample shows how to utilize Durable Functions' fan-out pattern to load an arbitrary number of messages across any number of sessions/partitions. It targets Service Bus, Event Hubs, or Storage Queues. The sample also adds the ability to consume those messages with another Azure Function and load the resulting timing data into another Event Hub. The data is then ingested into analytics services like Azure Data Explorer. -[Produce and Consume messages through Service Bus, Event Hubs, and Storage Queues with Azure Functions](https://docs.microsoft.com/samples/azure-samples/durable-functions-producer-consumer/product-consume-messages-az-functions/) +[Produce and Consume messages through Service Bus, Event Hubs, and Storage Queues with Azure Functions](/samples/azure-samples/durable-functions-producer-consumer/product-consume-messages-az-functions/) ## Recommended resources -- [Azure Functions on Linux](https://docs.microsoft.com/samples/azure-samples/functions-linux-custom-image/azure-functions-on-linux-custom-image-tutorial-sample-project/) -- [Big Data Processing: Serverless MapReduce on Azure](https://docs.microsoft.com/samples/azure-samples/durablefunctions-mapreduce-dotnet/big-data-processing-serverless-mapreduce-on-azure/) -- [Create serverless applications](https://docs.microsoft.com/learn/paths/create-serverless-applications/) -- [Customer Reviews App with Cognitive Services](https://docs.microsoft.com/samples/azure-samples/functions-customer-reviews/customer-reviews-cognitive-services/) -- [File processing and validation using
Azure Functions, Logic Apps, and Durable Functions](https://docs.microsoft.com/samples/azure-samples/serverless-file-validation/file-processing-and-validation-using-azure-functions-logic-apps-and-durable-functions/)
-- [Implementing a simple Azure Function with a Xamarin.Forms client](https://docs.microsoft.com/samples/azure-samples/functions-xamarin-getting-started/implementing-a-simple-azure-function-with-a-xamarinforms-client/)
-- [In-editor game telemetry visualization](https://docs.microsoft.com/samples/azure-samples/gaming-in-editor-telemetry/in-editor-telemetry-visualization/)
-- [IoT Reliable Edge Relay](https://docs.microsoft.com/samples/azure-samples/iot-reliable-edge-relay/iot-reliable-edge-relay/)
-- [Produce and Consume messages through Service Bus, Event Hubs, and Storage Queues with Azure Functions](https://docs.microsoft.com/samples/azure-samples/durable-functions-producer-consumer/product-consume-messages-az-functions/)
-- [Run Console Apps on Azure Functions](https://docs.microsoft.com/samples/azure-samples/functions-dotnet-migrating-console-apps/run-console-apps-on-azure-functions/)
+- [Azure Functions on Linux](/samples/azure-samples/functions-linux-custom-image/azure-functions-on-linux-custom-image-tutorial-sample-project/)
+- [Big Data Processing: Serverless MapReduce on Azure](/samples/azure-samples/durablefunctions-mapreduce-dotnet/big-data-processing-serverless-mapreduce-on-azure/)
+- [Create serverless applications](/learn/paths/create-serverless-applications/)
+- [Customer Reviews App with Cognitive Services](/samples/azure-samples/functions-customer-reviews/customer-reviews-cognitive-services/)
+- [File processing and validation using Azure Functions, Logic Apps, and Durable Functions](/samples/azure-samples/serverless-file-validation/file-processing-and-validation-using-azure-functions-logic-apps-and-durable-functions/)
+- [Implementing a simple Azure Function with a Xamarin.Forms client](/samples/azure-samples/functions-xamarin-getting-started/implementing-a-simple-azure-function-with-a-xamarinforms-client/)
+- [In-editor game telemetry visualization](/samples/azure-samples/gaming-in-editor-telemetry/in-editor-telemetry-visualization/)
+- [IoT Reliable Edge Relay](/samples/azure-samples/iot-reliable-edge-relay/iot-reliable-edge-relay/)
+- [Produce and Consume messages through Service Bus, Event Hubs, and Storage Queues with Azure Functions](/samples/azure-samples/durable-functions-producer-consumer/product-consume-messages-az-functions/)
+- [Run Console Apps on Azure Functions](/samples/azure-samples/functions-dotnet-migrating-console-apps/run-console-apps-on-azure-functions/)
 - [Serverless functions for GraphQL](https://github.com/softchris/graphql-workshop-dotnet/blob/master/docs/workshop/4.md)
-- [Serverless Microservices reference architecture](https://docs.microsoft.com/samples/azure-samples/serverless-microservices-reference-architecture/serverless-microservices-reference-architecture/)
+- [Serverless Microservices reference architecture](/samples/azure-samples/serverless-microservices-reference-architecture/serverless-microservices-reference-architecture/)
 
 >[!div class="step-by-step"]
 >[Previous](orchestration-patterns.md)
diff --git a/docs/architecture/serverless/serverless-design-examples.md b/docs/architecture/serverless/serverless-design-examples.md
index a9398f5832a74..f0a9d60fa55af 100644
--- a/docs/architecture/serverless/serverless-design-examples.md
+++ b/docs/architecture/serverless/serverless-design-examples.md
@@ -23,13 +23,13 @@ Using CQRS, a read might involve a special "flattened" entity that models data t
 
 ![CQRS example](./media/cqrs-example.png)
 
-Serverless can accommodate the CQRS pattern by providing the segregated endpoints. One serverless function accommodates queries or reads, and a different serverless function or set of functions handles update operations. A serverless function may also be responsible for keeping the read model up-to-date, and can be triggered by the database's [change feed](https://docs.microsoft.com/azure/cosmos-db/change-feed). Front-end development is simplified to connecting to the necessary endpoints. Processing of events is handled on the back end. This model also scales well for large projects because different teams may work on different operations.
+Serverless can accommodate the CQRS pattern by providing the segregated endpoints. One serverless function accommodates queries or reads, and a different serverless function or set of functions handles update operations. A serverless function may also be responsible for keeping the read model up-to-date, and can be triggered by the database's [change feed](/azure/cosmos-db/change-feed). Front-end development is simplified to connecting to the necessary endpoints. Processing of events is handled on the back end. This model also scales well for large projects because different teams may work on different operations.
 
 ## Event-based processing
 
-In message-based systems, events are often collected in queues or publisher/subscriber topics to be acted upon. These events can trigger serverless functions to execute a piece of business logic. An example of event-based processing is event-sourced systems. An "event" is raised to mark a task as complete. A serverless function triggered by the event updates the appropriate database document. A second serverless function may use the event to update the read model for the system. [Azure Event Grid](https://docs.microsoft.com/azure/event-grid/overview) provides a way to integrate events with functions as subscribers.
+In message-based systems, events are often collected in queues or publisher/subscriber topics to be acted upon. These events can trigger serverless functions to execute a piece of business logic. An example of event-based processing is event-sourced systems. An "event" is raised to mark a task as complete. A serverless function triggered by the event updates the appropriate database document. A second serverless function may use the event to update the read model for the system. [Azure Event Grid](/azure/event-grid/overview) provides a way to integrate events with functions as subscribers.
 
-> Events are informational messages. For more information, see [Event Sourcing pattern](https://docs.microsoft.com/azure/architecture/patterns/event-sourcing).
+> Events are informational messages. For more information, see [Event Sourcing pattern](/azure/architecture/patterns/event-sourcing).
 
 ## File triggers and transformations
 
@@ -37,7 +37,7 @@ Extract, Transform, and Load (ETL) is a common business function. Serverless is
 
 ![Serverless file triggers and transformations](./media/serverless-file-triggers.png)
 
-In the diagram, "cool storage" provides data that is parsed in [Azure Stream Analytics](https://docs.microsoft.com/azure/stream-analytics). Any issues encountered in the data stream trigger an Azure Function to address the anomaly.
+In the diagram, "cool storage" provides data that is parsed in [Azure Stream Analytics](/azure/stream-analytics). Any issues encountered in the data stream trigger an Azure Function to address the anomaly.
 
 ## Asynchronous background processing and messaging
 
@@ -59,7 +59,7 @@ Serverless functions can be used to facilitate a data pipeline. In this example,
 
 ## Stream processing
 
-Devices and sensors often generate streams of data that must be processed in real time. There are a number of technologies that can capture messages and streams from [Event Hubs](https://docs.microsoft.com/azure/event-hubs/event-hubs-what-is-event-hubs) and [IoT Hub](https://docs.microsoft.com/azure/iot-hub) to [Service Bus](https://docs.microsoft.com/azure/service-bus). Regardless of transport, serverless is an ideal mechanism for processing the messages and streams of data as they come in. Serverless can scale quickly to meet the demand of large volumes of data. The serverless code can apply business logic to parse the data and output in a structured format for action and analytics.
+Devices and sensors often generate streams of data that must be processed in real time. There are a number of technologies that can capture messages and streams from [Event Hubs](/azure/event-hubs/event-hubs-what-is-event-hubs) and [IoT Hub](/azure/iot-hub) to [Service Bus](/azure/service-bus). Regardless of transport, serverless is an ideal mechanism for processing the messages and streams of data as they come in. Serverless can scale quickly to meet the demand of large volumes of data. The serverless code can apply business logic to parse the data and output in a structured format for action and analytics.
 
 ![Serverless stream processing](./media/serverless-stream-processing.png)
 
@@ -71,16 +71,16 @@ An API gateway provides a single point of entry for clients and then intelligent
 
 ## Recommended resources
 
-- [Azure Event Grid](https://docs.microsoft.com/azure/event-grid/overview)
-- [Azure IoT Hub](https://docs.microsoft.com/azure/iot-hub)
+- [Azure Event Grid](/azure/event-grid/overview)
+- [Azure IoT Hub](/azure/iot-hub)
 - [Challenges and solutions for distributed data management](../microservices/architect-microservice-container-applications/distributed-data-management.md)
-- [Designing microservices: identifying microservice boundaries](https://docs.microsoft.com/azure/architecture/microservices/microservice-boundaries)
-- [Event Hubs](https://docs.microsoft.com/azure/event-hubs/event-hubs-what-is-event-hubs)
-- [Event Sourcing pattern](https://docs.microsoft.com/azure/architecture/patterns/event-sourcing)
+- [Designing microservices: identifying microservice boundaries](/azure/architecture/microservices/microservice-boundaries)
+- [Event Hubs](/azure/event-hubs/event-hubs-what-is-event-hubs)
+- [Event Sourcing pattern](/azure/architecture/patterns/event-sourcing)
 - [Implementing the Circuit Breaker pattern](../microservices/implement-resilient-applications/implement-circuit-breaker-pattern.md)
-- [IoT Hub](https://docs.microsoft.com/azure/iot-hub)
-- [Service Bus](https://docs.microsoft.com/azure/service-bus)
-- [Working with the change feed support in Azure Cosmos DB](https://docs.microsoft.com/azure/cosmos-db/change-feed)
+- [IoT Hub](/azure/iot-hub)
+- [Service Bus](/azure/service-bus)
+- [Working with the change feed support in Azure Cosmos DB](/azure/cosmos-db/change-feed)
 
 >[!div class="step-by-step"]
 >[Previous](serverless-architecture-considerations.md)
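The serverless-design-examples article touched by this patch describes CQRS as segregated endpoints: one function serves reads from a flattened view, another handles writes, and a third, triggered by the database's change feed, keeps the read model current. A minimal, framework-free sketch of that flow (every name here is illustrative; a real deployment would use Azure Functions trigger bindings and the Cosmos DB change feed rather than in-process lists):

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class TaskCompleted:
    """An informational event, per the Event Sourcing pattern."""
    task_id: str

change_feed: list[TaskCompleted] = []  # stand-in for the database change feed
read_model: dict[str, str] = {}        # flattened view served to queries

def command_handler(task_id: str) -> None:
    """Write side: records the event (would be an HTTP-triggered function)."""
    change_feed.append(TaskCompleted(task_id))

def projection_handler(event: TaskCompleted) -> None:
    """Read-model updater: invoked once per change-feed item."""
    read_model[event.task_id] = "complete"

def query_handler(task_id: str) -> str | None:
    """Read side: serves the pre-flattened view, no joins at query time."""
    return read_model.get(task_id)

command_handler("task-42")
for event in change_feed:        # simulates the change-feed trigger
    projection_handler(event)
print(query_handler("task-42"))  # -> complete
```

Because each handler is independent, the read and write paths can scale (and be owned by different teams) separately, which is the property the article attributes to this pattern.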