When importing customizations to CRM 4.0, the import fails with the message "generic SQL error". Digging a bit deeper, the underlying error is really that a timeout has occurred. The same error occurs when trying to create a new entity.
I increased the timeout as suggested in the link below, but the timeout occurred anyway - it just took longer to happen.
Increasing the timeout:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM\OLEDBTimeout
This value does not exist by default; if it does not exist, the default is 30 seconds. To change it, add the registry value (of type DWORD) and put in the value you want, measured in seconds. Then you have to recycle the CrmAppPool application pool for the change to take effect.
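For example, to set a 300-second timeout from an elevated command prompt (the value 300 is illustrative, and iisreset is the blunt instrument - a targeted recycle of just the CrmAppPool pool also works):

reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM" /v OLEDBTimeout /t REG_DWORD /d 300 /f
iisreset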
https://community.dynamics.com/product/crm/crmtechnical/b/crmdavidjennaway/archive/2008/09/04/sql-timeouts-in-crm-generic-sql-error.aspx
SQL Profiler displays a set of inserts and updates related to the metadata in CRM, followed by a call to the stored procedure exec p_RecreateIndexes.
This call is apparently the culprit and never completes in a timely fashion (30+ minutes now and still not done). This is an existing test instance of CRM that is quite extensively used and filled with lots of data. Creating new entities has never taken this long before. Just in case, I have run the AsyncOperation cleanup scripts from Microsoft; they had no visible effect.
Is there any way to find the reason for the delay in this procedure, or some other solution I can try?
Try splitting up your import into chunks. For example, import the first 20 entities, then the second 20 entities, and so on until you've imported all of them. Then publish. Then go back and try importing the entire customizations file at the same time and republish. Following this method exactly has been the only way we've found to import some customization files in particularly stubborn environments.
It sounds like the re-index operation is taking some time - which would be expected if the data size is large and the fragmentation is high. It also depends on exactly what that stored procedure does, how many cores/CPUs you have, and how many SQL Server is allowed to use.
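As a rough check of the fragmentation, something like this works (a sketch - OrgName_MSCRM stands in for whatever your CRM organization database is called):

USE OrgName_MSCRM;
-- list indexes by average fragmentation, worst first
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;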
Does the app allow you to defer that operation? You'd be able to run it manually yourself through Management Studio - if that doesn't break the application.
You could be cheeky: rename that procedure and replace it with one of your own that does nothing... and then rename it back afterwards and run it manually. Again, it might break something.
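If you go that route, the sketch would be something like this (entirely untested against CRM; the _original suffix is illustrative):

-- move the original out of the way
EXEC sp_rename 'p_RecreateIndexes', 'p_RecreateIndexes_original';
GO
-- temporary stand-in that does nothing
CREATE PROCEDURE p_RecreateIndexes AS RETURN 0;
GO
-- afterwards: drop the stub and restore the original
-- DROP PROCEDURE p_RecreateIndexes;
-- EXEC sp_rename 'p_RecreateIndexes_original', 'p_RecreateIndexes';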
Or just keep increasing the timeout until you get past this issue. Some of my re-index jobs on databases generally take hours...
Or contact the vendor if you have support?
If you ran that procedure through Management Studio, it would complete... doing that would give you the approximate time to put (temporarily) into the timeout setting.
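Something along these lines would capture the elapsed time (a sketch; SET STATISTICS TIME reports timings to the Messages tab):

SET STATISTICS TIME ON;
EXEC p_RecreateIndexes;
SET STATISTICS TIME OFF;
-- use the total elapsed time as a guide for a temporary OLEDBTimeout value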
I just experienced the exact same problem and I was only importing 2 entities.
I found your question when I was googling for p_RecreateIndexes after seeing it in the trace files.
I ended up running exec p_RecreateIndexes in SQL Server. After it completed - about 2 minutes - I reran the Entity import and it worked.
I downloaded the desktop version of Neo4j on my Mac last week (it's version 1.2.4).
Neo4j Browser version: 4.0.3
Neo4j Server version: 3.5.14 (enterprise)
Last week I was using the USING PERIODIC COMMIT command to load in a CSV, as seen below, and it set up my relationships fine. However, as of a couple of days ago, running the exact same command gives me an error: Executing queries that use periodic commit in an open transaction is not possible. Can someone please explain what has gone wrong?
query:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM
"file:/Volumes/Twitter_Dataset/tweets.csv" AS csvLine
MATCH (tweet:Tweet {tweetID: csvLine.tweetID})
MATCH (user:User {username: csvLine.username})
MERGE (user)-[:POSTS]->(tweet);
The short answer:
Prefix your USING PERIODIC COMMIT queries with :auto
Changes were pushed out to provide more context here, so the error message now includes a link for more info about what's going on, as well as the :auto workaround above.
The long answer:
This is related to a recent feature improvement in the Neo4j Browser, which has a side effect on USING PERIODIC COMMIT operations. There is a way around it, and a browser update has already been pushed to provide more context and a clear workaround.
The latest round of Neo4j Browser updates includes this change, which uses transactional functions instead of auto-committed transactions, giving queries run through the browser automatic retry capability and a better ability to cope with membership changes when hitting a causal cluster.
The problem is that USING PERIODIC COMMIT needs to be run in an auto-committed transaction. This requires a means to switch whether we're using an auto-committed transaction or not.
You said you're using browser version 4.0.3. I believe that went out yesterday, and it included changes that provide details about what's going on and how to get this working as normal. When you encounter that error, you should now see a link with info on the :auto command, mentioning auto-committing transactions. Following the link should show an info card with:
The :auto command will send the Cypher query following it, in an auto committing transaction. In general this is not recommended because of the lack of support for auto retrying on leader switch errors in clusters.
Some query types do however need to be sent in auto-committing transactions, USING PERIODIC COMMIT is the most notable one.
An example is provided on the card for prefixing a USING PERIODIC COMMIT query with :auto to let it execute in an auto-committing transaction.
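Applied to the query from the question, that looks like this:

:auto USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM
"file:/Volumes/Twitter_Dataset/tweets.csv" AS csvLine
MATCH (tweet:Tweet {tweetID: csvLine.tweetID})
MATCH (user:User {username: csvLine.username})
MERGE (user)-[:POSTS]->(tweet);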
I am using CDC in SQL Server 2008 and triggering it via SSIS (in the form of stored procedures and calls to commands like table-diff in EST). Recently I came across a different issue, and I simply don't understand what the reason might be.
What happened is that CDC suddenly hung while executing. It stayed in a running state for almost two days, after which we had to manually kill the CDC process in order to run the next batch. When I ran the same CDC job the next time, it ran properly without any interruption (it completed in a few minutes; no changes were made to the package or anywhere else before the next execution).
While the process was stuck for those two days I checked the changed records; except for 2-3 new inserts/updates, nothing was there. Going through the log file details, I found that my table-diff command took around 5 hours to execute, whereas it normally takes around 5 minutes for my package every day. I just want to know what the possible reason for such behaviour might be.
PS: Such behaviour has occurred a few times in the past as well.
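Next time it hangs I plan to check whether the session is actually blocked or just waiting on something, with a query along these lines (a sketch; the session_id filter simply skips system sessions):

-- show active requests, their wait types, and any blocking session
SELECT r.session_id, r.status, r.command, r.wait_type,
       r.wait_time, r.blocking_session_id, r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;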
Some colleagues were using an Excel file to keep track of some issues, and they have decided to switch to a better system, asking me to set up a Jira project for them and to import all the tickets. One way or another I have done everything, but the resolution date is now wrong, because it reflects when I ran the script that imported the tickets into Jira. They would like to have the original one, so that they can know when an issue was really fixed. Unfortunately there's no way to change it from Jira's interface, so I have to access the DB directly. The command, for the record, is like:
update jiraissue
set RESOLUTIONDATE = '2015-02-16 14:48:40'
where pkey = 'OV001-1';
Now, low-level writes to a database are dangerous in general, and I am wondering whether there are any risks. Our test server is not available right now, so I'd have to work directly on the production one. One thing I had seen on our test server is that this seemed to work, except that JQL queries such as
resolved < 2015-03-20
are wrong because they still use the old resolution date. Clearly I have to reindex, but I'm wondering whether that is safe. Does Jira perform some consistency checks, like verifying that a ticket is resolved after it is created? In my case, since I have modified the resolution date but not the creation date, there is a clear inconsistency. Will Jira complain about this? Is there a risk of corrupting the DB? And if I also modify the creation date, do I have to watch out for anything else?
We are using Jira 5.2.11.
I have access to the test server again, and I have tried it. I have modified all the RESOLUTIONDATE fields I had to fix, and when I reloaded the page the new date was there. Jira didn't complain about anything. I reindexed the server, so that queries yield correct results, and I saw no issues. Then I even ran the integrity checks (Administration -> System -> Integrity Checker), and no error was found.
Finally I did the same on the production server, and again everything is running fine.
I can therefore conclude that the operation is not dangerous at all, and it can be done safely.
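That said, wrapping the write in a transaction with a quick sanity check first costs nothing. A sketch, assuming a MySQL-style database (CREATED and RESOLUTIONDATE are columns of jiraissue):

START TRANSACTION;
-- sanity check: the new resolution date should not precede creation
SELECT pkey, CREATED, RESOLUTIONDATE
FROM jiraissue
WHERE pkey = 'OV001-1';
UPDATE jiraissue
SET RESOLUTIONDATE = '2015-02-16 14:48:40'
WHERE pkey = 'OV001-1';
COMMIT;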
Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time we attempt to back up to the Blobstore, the process never finishes. We see the backup in our Pending Backups list, but it never actually completes. We only have a total of 43MB of data right now, so we don't see it as a data transfer problem. Looking at our default task queue, we have two pending tasks: one is a call to /_ah/mapreduce/controller_callback and the other is a call to /_ah/mapreduce/worker_callback
The worker_callback racks up its retry count, and the only error clue we have is on the Previous Run tab, which shows the last HTTP response code to be 500. There is no error message and nothing shows up in our error logs; it just keeps trying over and over again.
We've been able to narrow the backup problems down to a specific entity kind in a particular namespace, but we can't figure out why that entity kind is failing whereas the others are not. The major difference is that this entity kind has a large number of embedded entities, but if App Engine is able to read/put those entities, we can't understand why it has problems backing them up. The particular namespace where the error occurs has the largest amount of data stored for that entity kind compared to the other namespaces we have set up.
We think if we can see what error is occurring in the worker_callback we may be able to figure out why the backup is failing, or what is wrong with our data that's preventing the backup. Is there something we need to setup / enable through settings / configuration files to give us more detailed information on the backup? Or is there some other avenue we should explore to figure out how to investigate/fix this problem?
I should mention we are using the Java SDK as well as Objectify V3 to work with the data store. We are also backing up data to the Blobstore.
Thank you.
Well, with the App Engine team's help we figured out what the problem was and worked around the issue. I want to give details in case anyone else runs into this problem.
In issue 8363 the App Engine team indicated that, from their logs, they could see that the MapReduce failed because of the large number of properties our entity kind had. The specific entity kind causing the failure had a large number of variable properties, which generated errors when MapReduce tried to write out a schema. They indicated that the solution on their end was to skip entities like this in the backup so that the backup would succeed.
What we did to work around the issue and make the backup work was to change how we told Objectify to store our data. The large number of properties was being created by our use of the @Embedded annotation on a HashMap class member field. Since @Embedded breaks classes down into individual components, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to migrate to the new serialized property. This made backup/restore work again.
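In code, the change amounted to something like this (a simplified sketch - the class and field names are illustrative, not our real model):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.Id;
import com.googlecode.objectify.annotation.Serialized;

public class TrackedItem {
    @Id Long id;

    // Before: @Embedded flattened every map entry into its own datastore
    // property, so a big map meant thousands of properties on the entity,
    // which is what broke the backup's schema generation.
    //
    // @Embedded
    // private Map<String, String> attributes = new HashMap<String, String>();

    // After: @Serialized stores the whole map as a single blob property.
    @Serialized
    private Map<String, String> attributes = new HashMap<String, String>();
}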
You can read more about the differences between embedded and serialized on Objectify's website.
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your application ID so we can further debug this specific scenario.
Thanks!
I've noticed that whenever I add tables / stored procs / functions / whatever to a SQL Server database, it takes a while for the code completion to pick up that they are now part of the database.
This is really annoying, since the code completion and syntax highlighting become totally broken in the workflow where you create a table and then start writing queries that deal with this new object.
Does anyone know how to get the code completion / syntax highlighting engine to update its view of what is in the database, to get rid of all these spurious "invalid object name" errors?
I understand that it's too late to answer the question, but maybe it will help someone.
You can refresh the IntelliSense cache with Ctrl+Shift+R (or Edit > IntelliSense > Refresh Local Cache) and wait 5-10 seconds.
A guess: close and reopen SSMS? Lame and ineffective, and I hope there's a better way.