Overview
Request batching is a useful way of minimizing the number of messages that are passed between the client and the server. This reduces network traffic and provides a smoother, less chatty user interface. This feature will enable Web API users to batch multiple HTTP requests and send them as a single HTTP request.
Scenarios
To enable batching, we provide custom message handlers (e.g. DefaultHttpBatchHandler and DefaultODataBatchHandler) that you can register per route to handle batch requests.
Web API Batching
Registering HTTP batch endpoint
You can use MapHttpBatchRoute, which is an HttpRouteCollection extension method, to create a batch endpoint. For example, the following will create a batch endpoint at “api/$batch”.
using System.Web.Http;
using System.Web.Http.Batch;

namespace BatchSample
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.Routes.MapHttpBatchRoute(
                routeName: "WebApiBatch",
                routeTemplate: "api/$batch",
                batchHandler: new DefaultHttpBatchHandler(GlobalConfiguration.DefaultServer));

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}
That’s all you need to do on the server side. Now, on the client side you can use the existing Web API client library to submit a batch request.
using System.Net.Http;
using System.Net.Http.Formatting;

namespace BatchClientSample
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            string baseAddress = "http://localhost:8080";
            HttpClient client = new HttpClient();

            HttpRequestMessage batchRequest = new HttpRequestMessage(HttpMethod.Post, baseAddress + "/api/$batch")
            {
                Content = new MultipartContent("mixed")
                {
                    // POST http://localhost:8080/api/values
                    new HttpMessageContent(new HttpRequestMessage(HttpMethod.Post, baseAddress + "/api/values")
                    {
                        Content = new ObjectContent<string>("my value", new JsonMediaTypeFormatter())
                    }),

                    // GET http://localhost:8080/api/values
                    new HttpMessageContent(new HttpRequestMessage(HttpMethod.Get, baseAddress + "/api/values"))
                }
            };

            HttpResponseMessage batchResponse = client.SendAsync(batchRequest).Result;
            MultipartStreamProvider streamProvider = batchResponse.Content.ReadAsMultipartAsync().Result;
            foreach (var content in streamProvider.Contents)
            {
                HttpResponseMessage response = content.ReadAsHttpResponseMessageAsync().Result;
                // Do something with the response messages
            }
        }
    }
}
Changing the ExecutionOrder of the DefaultHttpBatchHandler
By default, the individual requests in a batch are executed sequentially, meaning the second request in the batch won’t start until the first one has completed. If the order of execution is not important and you want the requests to execute concurrently, you can set the ExecutionOrder property on the DefaultHttpBatchHandler to BatchExecutionOrder.NonSequential.
using System.Web.Http;
using System.Web.Http.Batch;

namespace BatchSample
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            HttpBatchHandler batchHandler = new DefaultHttpBatchHandler(GlobalConfiguration.DefaultServer)
            {
                ExecutionOrder = BatchExecutionOrder.NonSequential
            };

            config.Routes.MapHttpBatchRoute(
                routeName: "WebApiBatch",
                routeTemplate: "api/$batch",
                batchHandler: batchHandler);

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}
OData Batching
Registering OData batch endpoint
You can simply pass an ODataBatchHandler to MapODataRoute to enable batching. The batch endpoint will be available at routePrefix/$batch. For instance, if you have the following OData route, the batch endpoint will be exposed at “odata/$batch”.
using System.Web.Http;
using System.Web.Http.OData.Batch;
using System.Web.Http.OData.Builder;
using BatchODataSample.Controllers;
using Microsoft.Data.Edm;

namespace BatchODataSample
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.Routes.MapODataRoute(
                routeName: "defaultOdata",
                routePrefix: "odata",
                model: GetModel(),
                batchHandler: new DefaultODataBatchHandler(GlobalConfiguration.DefaultServer));
        }

        private static IEdmModel GetModel()
        {
            ODataConventionModelBuilder builder = new ODataConventionModelBuilder();
            builder.Namespace = "BatchODataSample.Controllers";
            builder.EntitySet<Customer>("Customers");
            builder.EntitySet<Order>("Orders");
            return builder.GetEdmModel();
        }
    }
}
That’s it. Now, on the client, you can use the WCF Data Services client library:
using System;
using System.Data.Services.Client;
using BatchClientSample.ServiceReference1;

namespace BatchClientSample
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            string baseAddress = "http://localhost:8080/odata";
            Container container = new Container(new Uri(baseAddress));

            int id = new Random().Next();
            var customer = new Customer { ID = id, Name = "User" + id };
            var order = new Order { ID = id, Amount = id + 10 };

            // Batch operation.
            container.AddToCustomers(customer);
            container.AddToOrders(order);
            container.AddLink(customer, "Orders", order);

            var batchResponse = container.SaveChanges(SaveChangesOptions.Batch);
            foreach (var response in batchResponse)
            {
                Console.WriteLine(response.StatusCode);
                Console.WriteLine(response.Headers);
            }
        }
    }
}
Or datajs (or any JavaScript library that supports sending OData batch requests):
OData.request({
    requestUri: "/odata/$batch",
    method: "POST",
    data: {
        __batchRequests: [
            {
                __changeRequests: [
                    { requestUri: "Customers", method: "POST", data: customer }
                ]
            },
            { requestUri: "Customers", method: "GET" }
        ]
    }
}, function (data, response) {
    // success handler
}, function () {
    alert("request failed");
}, OData.batchHandler);
Setting up multiple OData routes and the batch endpoints
You can have multiple OData routes, each with its own batch endpoint. For example, the following will set up batch endpoints at “catalog/$batch” and “commerce/$batch”.
public static void Register(HttpConfiguration config)
{
    config.Routes.MapODataRoute(
        routeName: "odata1",
        routePrefix: "catalog",
        model: GetModel(),
        batchHandler: new DefaultODataBatchHandler(GlobalConfiguration.DefaultServer));

    config.Routes.MapODataRoute(
        routeName: "odata2",
        routePrefix: "commerce",
        model: GetModel2(),
        batchHandler: new DefaultODataBatchHandler(GlobalConfiguration.DefaultServer));
}
Note that an ODataBatchHandler instance cannot be shared across routes if you want to support relative URIs in OData batch requests. For example, the following request constructed using datajs submits a batch request to “/commerce/$batch”, but the requestUri of each sub-request is simply “Customers”. In this case the ODataBatchHandler needs to know which OData route it is registered on in order to figure out the right route prefix (in this case “commerce”) for the requestUri.
OData.request({
    requestUri: "/commerce/$batch",
    method: "POST",
    data: {
        __batchRequests: [
            {
                __changeRequests: [
                    { requestUri: "Customers", method: "POST", data: customer }
                ]
            },
            { requestUri: "Customers", method: "GET" }
        ]
    }
}, function (data, response) {
    // success handler
}, function () {
    alert("request failed");
}, OData.batchHandler);
Setting Batch Quotas
You can throttle batch requests by setting the MessageQuotas on the ODataBatchHandler. For instance, the following configuration allows a maximum of 10 parts per batch and 10 operations per ChangeSet.
using System.Web.Http;
using System.Web.Http.OData.Batch;
using System.Web.Http.OData.Builder;
using BatchODataSample.Controllers;
using Microsoft.Data.Edm;

namespace BatchODataSample
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            ODataBatchHandler odataBatchHandler = new DefaultODataBatchHandler(GlobalConfiguration.DefaultServer);
            odataBatchHandler.MessageQuotas.MaxOperationsPerChangeset = 10;
            odataBatchHandler.MessageQuotas.MaxPartsPerBatch = 10;

            config.Routes.MapODataRoute(
                routeName: "defaultOdata",
                routePrefix: "odata",
                model: GetModel(),
                batchHandler: odataBatchHandler);
        }

        private static IEdmModel GetModel()
        {
            ODataConventionModelBuilder builder = new ODataConventionModelBuilder();
            builder.Namespace = "BatchODataSample.Controllers";
            builder.EntitySet<Customer>("Customers");
            builder.EntitySet<Order>("Orders");
            return builder.GetEdmModel();
        }
    }
}
Custom Batching
You can derive from either HttpBatchHandler or DefaultHttpBatchHandler to support custom batch formats. For instance, instead of using MIME multipart, you can use JSON as the batch request format, similar to Facebook’s batch requests.
Here is a naïve implementation of HttpBatchHandler that encodes the batch requests/responses as JSON. It simply derives from DefaultHttpBatchHandler, overrides the ParseBatchRequestsAsync/CreateResponseMessageAsync methods, and lets the base class handle the rest.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Batch;

namespace BatchSample
{
    public class JsonBatchHandler : DefaultHttpBatchHandler
    {
        public JsonBatchHandler(HttpServer server)
            : base(server)
        {
            SupportedContentTypes.Add("text/json");
            SupportedContentTypes.Add("application/json");
        }

        public override async Task<IList<HttpRequestMessage>> ParseBatchRequestsAsync(HttpRequestMessage request)
        {
            var jsonSubRequests = await request.Content.ReadAsAsync<JsonRequestMessage[]>();

            // Creating simple requests, no headers nor bodies
            var subRequests = jsonSubRequests.Select(r =>
            {
                Uri subRequestUri = new Uri(request.RequestUri, "/" + r.relative_url);
                return new HttpRequestMessage(new HttpMethod(r.method), subRequestUri);
            });

            return subRequests.ToList();
        }

        public override async Task<HttpResponseMessage> CreateResponseMessageAsync(IList<HttpResponseMessage> responses, HttpRequestMessage request)
        {
            List<JsonResponseMessage> jsonResponses = new List<JsonResponseMessage>();
            foreach (var subResponse in responses)
            {
                var jsonResponse = new JsonResponseMessage { code = (int)subResponse.StatusCode };
                foreach (var header in subResponse.Headers)
                {
                    jsonResponse.headers.Add(header.Key, String.Join(",", header.Value));
                }

                if (subResponse.Content != null)
                {
                    jsonResponse.body = await subResponse.Content.ReadAsStringAsync();
                    foreach (var header in subResponse.Content.Headers)
                    {
                        jsonResponse.headers.Add(header.Key, String.Join(",", header.Value));
                    }
                }

                jsonResponses.Add(jsonResponse);
            }

            return request.CreateResponse<List<JsonResponseMessage>>(HttpStatusCode.OK, jsonResponses);
        }
    }

    public class JsonResponseMessage
    {
        public JsonResponseMessage()
        {
            headers = new Dictionary<string, string>();
        }

        public int code { get; set; }
        public Dictionary<string, string> headers { get; set; }
        public string body { get; set; }
    }

    public class JsonRequestMessage
    {
        public string method { get; set; }
        public string relative_url { get; set; }
    }
}
Just like with any other HttpBatchHandler, you can register the batch endpoint using MapHttpBatchRoute.
using System.Web.Http;
using System.Web.Http.Batch;
using BatchSample;

namespace BatchRequestSample
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.Routes.MapHttpBatchRoute(
                routeName: "WebApiBatch",
                routeTemplate: "api/$batch",
                batchHandler: new DefaultHttpBatchHandler(GlobalConfiguration.DefaultServer));

            config.Routes.MapHttpBatchRoute(
                routeName: "WebApiBatchJson",
                routeTemplate: "api/$batchJson",
                batchHandler: new JsonBatchHandler(GlobalConfiguration.DefaultServer));

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}
Design
HttpBatchHandler
This is a custom HttpMessageHandler that handles batch requests. HttpBatchHandler takes an HttpServer in its constructor and uses it to dispatch the sub-requests.
HttpBatchHandler is an abstract class, and an implementation of HttpBatchHandler will typically do the following (see the sketch after this list):
- Parse the incoming request into sub-requests
- Execute the batch requests
- Build the batch response
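Purely as an illustration (not the shipped implementation), the skeleton below shows how those three steps might fit together in a custom handler. It assumes a ProcessBatchAsync override point and an Invoker that dispatches sub-requests to the HttpServer passed to the constructor; the exact member names and signatures may differ, and ParseAsync/CreateBatchResponse are hypothetical, format-specific helpers.

using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Batch;

namespace BatchSample
{
    // Illustrative skeleton of a custom batch handler. ProcessBatchAsync/Invoker are
    // assumptions about the base class; ParseAsync and CreateBatchResponse are invented
    // placeholders for the format-specific parsing and response-building steps.
    public class SkeletonBatchHandler : HttpBatchHandler
    {
        public SkeletonBatchHandler(HttpServer server)
            : base(server)
        {
        }

        public override async Task<HttpResponseMessage> ProcessBatchAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            // 1. Parse the incoming request into sub-requests (format-specific).
            IList<HttpRequestMessage> subRequests = await ParseAsync(request);

            // 2. Execute the sub-requests by dispatching them to the HttpServer.
            var subResponses = new List<HttpResponseMessage>();
            foreach (HttpRequestMessage subRequest in subRequests)
            {
                subResponses.Add(await Invoker.SendAsync(subRequest, cancellationToken));
            }

            // 3. Build the batch response (format-specific).
            return CreateBatchResponse(request, subResponses);
        }

        private Task<IList<HttpRequestMessage>> ParseAsync(HttpRequestMessage request)
        {
            // Hypothetical helper: split the batch payload into individual requests.
            throw new System.NotImplementedException();
        }

        private HttpResponseMessage CreateBatchResponse(
            HttpRequestMessage request, IList<HttpResponseMessage> subResponses)
        {
            // Hypothetical helper: encode the sub-responses into a single batch response.
            throw new System.NotImplementedException();
        }
    }
}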
At a high level, HttpBatchHandler interacts with the other handlers/dispatchers in the Web API pipeline as follows: the batch request is routed to the batch handler, which parses it and then dispatches each sub-request back through the HttpServer so that it flows through the regular pipeline. Note that the HttpBatchHandler is registered as a per-route handler, as you’ve seen in the sample code above.
HttpBatchHandler Implementations
Out of the box, we provide several HttpBatchHandler implementations to support simple Web API batching as well as OData batching. DefaultHttpBatchHandler derives directly from HttpBatchHandler, while the OData handlers (DefaultODataBatchHandler and UnbufferedODataBatchHandler) derive from the ODataBatchHandler base class.
DefaultHttpBatchHandler
This is a simple batch handler that encodes the HTTP request/response messages as MIME multipart. By default, it buffers the HTTP request messages in memory during parsing. DefaultHttpBatchHandler exposes several virtual methods that you can override to extend and customize its behavior.
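For instance, a handler could reuse the default MIME multipart parsing and simply adjust the sub-requests before they are executed. The sketch below is illustrative only: it uses the same ParseBatchRequestsAsync override point shown in the custom batching sample above, and the “X-From-Batch” header name is made up for this example.

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Batch;

namespace BatchSample
{
    // Illustrative sketch: reuse the default multipart parsing, then tag each sub-request
    // so downstream filters/controllers can tell it came from a batch. The header name
    // "X-From-Batch" is hypothetical.
    public class TaggingBatchHandler : DefaultHttpBatchHandler
    {
        public TaggingBatchHandler(HttpServer server)
            : base(server)
        {
        }

        public override async Task<IList<HttpRequestMessage>> ParseBatchRequestsAsync(HttpRequestMessage request)
        {
            IList<HttpRequestMessage> subRequests = await base.ParseBatchRequestsAsync(request);
            foreach (HttpRequestMessage subRequest in subRequests)
            {
                subRequest.Headers.Add("X-From-Batch", "true");
            }

            return subRequests;
        }
    }
}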
ODataBatchHandler
For OData batching we provide two implementations:
- DefaultODataBatchHandler – supports the OData batch format; the sub-requests are dispatched only after all of them have been read, and their content streams are buffered.
- UnbufferedODataBatchHandler – supports the OData batch format; each sub-request is dispatched as soon as it is read, and the content streams of the sub-requests are not buffered (see the registration sketch below).
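As a sketch (mirroring the earlier registration code, and assuming UnbufferedODataBatchHandler, like the default handler, takes the HttpServer in its constructor), switching to the unbuffered handler is just a matter of passing it as the batchHandler when mapping the OData route:

using System.Web.Http;
using System.Web.Http.OData.Batch;
using System.Web.Http.OData.Builder;
using BatchODataSample.Controllers;
using Microsoft.Data.Edm;

namespace BatchODataSample
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Same registration as in the earlier OData sample, but with the
            // unbuffered batch handler instead of the default one.
            config.Routes.MapODataRoute(
                routeName: "defaultOdata",
                routePrefix: "odata",
                model: GetModel(),
                batchHandler: new UnbufferedODataBatchHandler(GlobalConfiguration.DefaultServer));
        }

        private static IEdmModel GetModel()
        {
            ODataConventionModelBuilder builder = new ODataConventionModelBuilder();
            builder.Namespace = "BatchODataSample.Controllers";
            builder.EntitySet<Customer>("Customers");
            builder.EntitySet<Order>("Orders");
            return builder.GetEdmModel();
        }
    }
}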
Why can’t I set ExecutionOrder on the ODataBatchHandler like I can on the DefaultHttpBatchHandler?
The OData spec defines the execution order, and changing it could be problematic because clients may make assumptions based on the spec. By definition, the operations/ChangeSets within a batch request are executed in order. The operations within a ChangeSet may be executed in any order, but our implementation executes them sequentially for simplicity (it makes Content-ID references easier to handle).
Content-ID references
We support the Content-ID header, which is a mechanism for referencing requests within a ChangeSet. Here is the description from the OData spec: “If a MIME part representing an Insert request within a ChangeSet includes a Content-ID header, then the new entity may be referenced by subsequent requests within the same ChangeSet by referring to the Content-ID value prefixed with a "$" character. When used in this way, $<contentIdValue> acts as an alias for the Resource Path that identifies the new entity.”
We implement this by building a Content-ID-to-Location dictionary while processing the ChangeSet. After sending each request in the ChangeSet, we add an entry mapping the Content-ID header of the request to the Location header of the response. Before sending each subsequent request in the ChangeSet, we use the dictionary to replace any $<contentIdValue> in the request URI with the corresponding Location.
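Purely as an illustration (this is not the shipped implementation; the ContentIdResolver type and its methods are invented for this sketch), the idea looks roughly like this:

using System;
using System.Collections.Generic;
using System.Net.Http;

namespace BatchODataSample
{
    // Illustrative sketch of Content-ID reference resolution within a ChangeSet.
    // The type and method names are hypothetical; they only demonstrate the idea.
    public static class ContentIdResolver
    {
        // Before sending a request, replace "$<contentIdValue>" in its URI with the
        // Location recorded for that Content-ID.
        public static Uri ResolveContentIdReferences(Uri requestUri, IDictionary<string, string> contentIdToLocation)
        {
            string resolved = requestUri.OriginalString;
            foreach (KeyValuePair<string, string> entry in contentIdToLocation)
            {
                resolved = resolved.Replace("$" + entry.Key, entry.Value);
            }

            return new Uri(resolved, UriKind.RelativeOrAbsolute);
        }

        // After a request in the ChangeSet has been executed, record the mapping from
        // its Content-ID header to the Location header of the response.
        public static void RecordLocation(HttpRequestMessage request, HttpResponseMessage response, IDictionary<string, string> contentIdToLocation)
        {
            IEnumerable<string> contentIds;
            if (request.Headers.TryGetValues("Content-ID", out contentIds) && response.Headers.Location != null)
            {
                foreach (string contentId in contentIds)
                {
                    contentIdToLocation[contentId] = response.Headers.Location.AbsoluteUri;
                }
            }
        }
    }
}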
Raw Batch Formats
DefaultHttpBatchHandler
Sample Request
POST http://localhost:8080/api/$batch HTTP/1.1
Content-Type: multipart/mixed; boundary="91731eeb-d443-4aa6-9816-560a8aca66b1"
Host: localhost:8080
Content-Length: 390
Expect: 100-continue
Connection: Keep-Alive

--91731eeb-d443-4aa6-9816-560a8aca66b1
Content-Type: application/http; msgtype=request

POST /api/values HTTP/1.1
Host: localhost:8080
Content-Type: application/json; charset=utf-8

"my value"
--91731eeb-d443-4aa6-9816-560a8aca66b1
Content-Type: application/http; msgtype=request

GET /api/values HTTP/1.1
Host: localhost:8080

--91731eeb-d443-4aa6-9816-560a8aca66b1--
Sample Response
HTTP/1.1 200 OK
Content-Length: 333
Content-Type: multipart/mixed; boundary="5b2a806d-4040-43f0-8f04-7d4c86793fa7"
Server: Microsoft-HTTPAPI/2.0
Date: Mon, 08 Apr 2013 19:00:26 GMT

--5b2a806d-4040-43f0-8f04-7d4c86793fa7
Content-Type: application/http; msgtype=response

HTTP/1.1 202 Accepted

--5b2a806d-4040-43f0-8f04-7d4c86793fa7
Content-Type: application/http; msgtype=response

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

["my value"]
--5b2a806d-4040-43f0-8f04-7d4c86793fa7--
DefaultODataBatchHandler/UnbufferedODataBatchHandler
Sample Request
POST /service/$batch HTTP/1.1
Host: host
Content-Type: multipart/mixed; boundary=batch_36522ad7-fc75-4b56-8c71-56071383e77b

--batch_36522ad7-fc75-4b56-8c71-56071383e77b
Content-Type: multipart/mixed; boundary=changeset_77162fcd-b8da-41ac-a9f8-9357efbbd621
Content-Length: ###

--changeset_77162fcd-b8da-41ac-a9f8-9357efbbd621
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 1

POST /service/Customers HTTP/1.1
Host: host
Content-Type: application/atom+xml;type=entry
Content-Length: ###

<AtomPub representation of a new Customer>
--changeset_77162fcd-b8da-41ac-a9f8-9357efbbd621
Content-Type: application/http
Content-Transfer-Encoding: binary

POST $1/Orders HTTP/1.1
Host: host
Content-Type: application/atom+xml;type=entry
Content-Length: ###

<AtomPub representation of a new Order>
--changeset_77162fcd-b8da-41ac-a9f8-9357efbbd621--
--batch_36522ad7-fc75-4b56-8c71-56071383e77b--
Sample Response
HTTP/1.1 202 Accepted
DataServiceVersion: 1.0
Content-Length: ####
Content-Type: multipart/mixed; boundary=batch(36522ad7-fc75-4b56-8c71-56071383e77b)

--batch(36522ad7-fc75-4b56-8c71-56071383e77b)
Content-Type: application/http
Content-Transfer-Encoding: binary

HTTP/1.1 200 OK
Content-Type: application/atom+xml;type=entry
Content-Length: ###

<AtomPub representation of the Customer entity with EntityKey ALFKI>
--batch(36522ad7-fc75-4b56-8c71-56071383e77b)
Content-Type: multipart/mixed; boundary=changeset(77162fcd-b8da-41ac-a9f8-9357efbbd621)
Content-Length: ###

--changeset(77162fcd-b8da-41ac-a9f8-9357efbbd621)
Content-Type: application/http
Content-Transfer-Encoding: binary

HTTP/1.1 201 Created
Content-Type: application/atom+xml;type=entry
Location: http://host/service.svc/Customer('POIUY')
Content-Length: ###

<AtomPub representation of a new Customer entity>
--changeset(77162fcd-b8da-41ac-a9f8-9357efbbd621)
Content-Type: application/http
Content-Transfer-Encoding: binary

HTTP/1.1 204 No Content
Host: host

--changeset(77162fcd-b8da-41ac-a9f8-9357efbbd621)--
--batch(36522ad7-fc75-4b56-8c71-56071383e77b)
Content-Type: application/http
Content-Transfer-Encoding: binary

HTTP/1.1 404 Not Found
Content-Type: application/xml
Content-Length: ###

<Error message>
--batch(36522ad7-fc75-4b56-8c71-56071383e77b)--