Hi,
Context:
We recently ran into a performance issue with a CRM plugin. What it does is add a large number of existing contacts (over 2,500) to an existing marketing list. The plugin is registered as a synchronous pre-operation step. We updated the plugin code to use ExecuteMultipleRequest, but received an "SQL timeout expired" error (message included below).
We tested the new code outside of the plugin in a test console app, and it worked perfectly.
To guard against an infinite plugin loop, I also added the following line to the plugin:
if (context.Depth > 1) return;
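For reference, here is roughly where that guard sits in the plugin. This is only a trimmed-down sketch with the standard IPlugin boilerplate; the class name and the commented-out call into the batching helper (shown further down) are placeholders for illustration, not the actual plugin code:

using System;
using Microsoft.Xrm.Sdk;

public class AddMembersToListPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Bail out if the plugin was triggered by its own (or another plugin's) nested writes.
        if (context.Depth > 1) return;

        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // In the real plugin, the target list and the related contacts are read from the
        // execution context here and then passed to the batching helper shown below:
        // AddLargeMembersToMarketListMultiple(service, listEntity, relatedEntities);
    }
}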
I have attached the error details below, with an elapsed time measured by a Stopwatch in C#. It shows the request took longer than 30 seconds; since the default SQL timeout is 30 seconds, the error itself makes sense.
During my test, I only updated one or two records, so why does it take longer than 30 seconds? Is this the right approach to solve my problem, or is there an alternative?
Old code (worked fine for up to 750 records, but errored out at 2,500):
foreach (var relatedEntity in relatedEntities)
{
    AddMemberListRequest addMemberListRequest = new AddMemberListRequest();
    addMemberListRequest.EntityId = relatedEntity.Id;
    addMemberListRequest.ListId = entity.Id;
    AddMemberListResponse addMemberListResponse = service.Execute(addMemberListRequest) as AddMemberListResponse;
}
New code (not working even with only one record being passed):
internal static void TestAddMembers()
{
    IOrganizationService service = GetOrganizationService();

    Entity listEntity = new Entity("list");
    listEntity.Id = new Guid("7804cb7a-ed71-e511-80dd-3863bb343ba0");

    EntityReferenceCollection relatedEntities = new EntityReferenceCollection();
    relatedEntities.Add(new EntityReference("contact", new Guid("9076447F-809F-DF11-9DD4-00155D0A4002")));
    relatedEntities.Add(new EntityReference("contact", new Guid("6029a586-859f-df11-9dd4-00155d0a4002")));

    AddLargeMembersToMarketListMultiple(service, listEntity, relatedEntities);
}

private static void AddLargeMembersToMarketListMultiple(IOrganizationService service, Entity listEntity, EntityReferenceCollection relatedEntities)
{
    var allowedBatchSize = 1000;
    var remainingCreates = relatedEntities.Count;
    var idxCount = 0;

    // Stopwatch added here so the elapsed-time value used in the exception below compiles;
    // in the original snippet the stopwatch was declared elsewhere.
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();

    ExecuteMultipleRequest requestWithResults = null;
    try
    {
        requestWithResults = new ExecuteMultipleRequest()
        {
            Settings = new ExecuteMultipleSettings()
            {
                ContinueOnError = true,
                ReturnResponses = false
            },
            Requests = new OrganizationRequestCollection()
        };

        foreach (var relatedEntity in relatedEntities)
        {
            idxCount++;

            AddMemberListRequest addMemberListRequest = new AddMemberListRequest();
            addMemberListRequest.EntityId = relatedEntity.Id;
            addMemberListRequest.ListId = listEntity.Id;
            requestWithResults.Requests.Add(addMemberListRequest);

            if (idxCount == allowedBatchSize || idxCount == remainingCreates)
            {
                remainingCreates -= idxCount;
                idxCount = 0;

                ExecuteMultipleResponse responseWithResults = (ExecuteMultipleResponse)service.Execute(requestWithResults);

                // Display the results returned in the responses.
                foreach (var responseItem in responseWithResults.Responses)
                {
                    // A valid response.
                    if (responseItem.Response != null)
                        Debug.WriteLine("Succeeded!" + responseItem.Response.Results["id"].ToString());
                    // An error has occurred.
                    else if (responseItem.Fault != null)
                    {
                        var elapsedTime = stopwatch.Elapsed;
                        throw new InvalidPluginExecutionException(responseItem.Fault.Message + "- Time Elapsed: " + elapsedTime);
                    }
                }

                requestWithResults.Requests.Clear();
            }
        }
    }
    catch (FaultException<OrganizationServiceFault> fault)
    {
        // Check whether the maximum batch size has been exceeded. The maximum batch size is only
        // included in the fault if the input request collection count exceeds the maximum batch size.
        if (fault.Detail.ErrorDetails.Contains("MaxBatchSize"))
        {
            int maxBatchSize = Convert.ToInt32(fault.Detail.ErrorDetails["MaxBatchSize"]);
            if (maxBatchSize < requestWithResults.Requests.Count)
            {
                // Here you could reduce the size of your request collection and re-submit the ExecuteMultiple request.
                // For this sample, which only issues a few requests per batch, we just print out some info. However,
                // this code will never be executed because the default max batch size is 1000.
                Console.WriteLine("The input request collection contains {0} requests, which exceeds the maximum allowed ({1})",
                    requestWithResults.Requests.Count, maxBatchSize);
            }
        }

        // Re-throw so Main() can process the fault.
        throw;
    }
}
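For completeness, GetOrganizationService() in the console test app is just a plain connection helper, something along these lines (illustrative only; the URL and credentials are placeholders, not our real values):

// Requires Microsoft.Xrm.Sdk.Client (OrganizationServiceProxy) and
// System.ServiceModel.Description (ClientCredentials).
private static IOrganizationService GetOrganizationService()
{
    // Placeholder organization service URL and credentials.
    var serviceUri = new Uri("https://yourorg.crm.dynamics.com/XRMServices/2011/Organization.svc");
    var credentials = new ClientCredentials();
    credentials.UserName.UserName = "user@yourorg.onmicrosoft.com";
    credentials.UserName.Password = "password";

    var proxy = new OrganizationServiceProxy(serviceUri, null, credentials, null);
    proxy.EnableProxyTypes(); // only needed if early-bound types are used
    return proxy;
}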
Here's the error message from the plugin:
Unhandled Exception: System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]: SQL timeout expired.- Time Elapsed: 00:00:31.28context.Depth: 1Detail:
<OrganizationServiceFault xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">
<ErrorCode>-2147220891</ErrorCode>
<ErrorDetails xmlns:d2p1="http://schemas.datacontract.org/2004/07/System.Collections.Generic">
<KeyValuePairOfstringanyType>
<d2p1:key>OperationStatus</d2p1:key>
<d2p1:value xmlns:d4p1="http://www.w3.org/2001/XMLSchema" i:type="d4p1:string">0</d2p1:value>
</KeyValuePairOfstringanyType>
<KeyValuePairOfstringanyType>
<d2p1:key>SubErrorCode</d2p1:key>
<d2p1:value xmlns:d4p1="http://www.w3.org/2001/XMLSchema" i:type="d4p1:string">-2146233088</d2p1:value>
</KeyValuePairOfstringanyType>
</ErrorDetails>
<Message>SQL timeout expired.- Time Elapsed: 00:00:31.28context.Depth: 1</Message>
<Timestamp>2015-10-21T15:12:02.0470921Z</Timestamp>
<InnerFault i:nil="true" />
<TraceText>
Thanks,
H