I've been developing a pipeline in ADF with one simple copy activity that takes data from an on-premises SQL Server up to an Azure SQL Database, and yesterday I came across an issue (pipeline image below).
My pipeline kept failing in debug runs at the same place with the same error:
{
"errorCode": "BadRequest",
"message": "The integration runtime 'Integration-Runtime-Name' under data factory 'Data-Factory-Name' does not exist. ",
"failureType": "UserError",
"target": "PiplineActivity"
}
The day before it had worked without any issues, and I realised that although the debug run failed, if I kicked off a trigger run the pipeline would succeed.
I tested this out with a couple of different runtime environments and a completely new pipeline, but got the same result. I even stripped the single copy activity down to a simple test table with one column and one row.
Can anyone else verify whether they are seeing the same or different behaviour, or point out if I'm doing anything wrong? I would like to emphasise that it was working fine the day before yesterday.
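For anyone who wants to double-check the same thing on their side, the runtime the linked service points at can be confirmed to exist and be online with Azure PowerShell. This is only a sketch; the resource group name below is a placeholder, and the factory and runtime names are the anonymised ones from the error:

    Get-AzDataFactoryV2IntegrationRuntime -ResourceGroupName "my-resource-group" -DataFactoryName "Data-Factory-Name" -Name "Integration-Runtime-Name" -Status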
Many thanks
The issue has been fixed. Please try again. Thanks.
We have a process that generates around 20,000 PDFs using SSRS as the backend.
The mechanism used to generate these is the ReportExecution2005 API endpoint.
We have just performed a run where, occasionally, a PDF would be generated with some data missing, or with the header, body, or footer missing.
The breakdown of that run was roughly 12,000 generated correctly, 1,000 blank, and 7,000 with incorrect data.
There doesn't seem to be any rhyme or reason to this, and no pattern in the data or timings. In the logs we can see the following:
We log all parameters used to execute these reports in a DB, so using this we went and re-ran a failing report with the same parameters, and it ran fine. We then took 100 of the randomly errored ones, ran them manually, and they all worked fine.
My only remaining straw to grasp at is that there is some caching issue or deeper problem within SSRS causing this.
If anyone has seen something similar or has an opinion on where to check next, it would be much appreciated.
We've been using Snowflake+DBT with key pair authentication for a long time now, and we've never had any issues.
Recently, we started getting random connection errors on some models:
250001 (08001): Failed to connect to DB: account.region.snowflakecomputing.com:443. JWT token is invalid.
Most of the models will work, but some of them might fail. This can happen to any model, and it's never the same one -- sometimes it's the very first one, sometimes it's the very last one, and sometimes it's a bunch of them. It happens in runs with lots of models, or in runs with a single model.
It's very inconsistent, and there doesn't seem to be any kind of pattern to it. Sometimes it works, and sometimes it doesn't.
We've also tried with 1, 4, or 8 threads, and it happens regardless.
Obviously, there's nothing wrong with the credentials or configurations — otherwise, nothing would run at all. So I assume there must be something wrong with how DBT is handling the connection(s).
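For reference, the connection is configured with key pair authentication roughly like the profile below. This is only a sketch, and every identifier and path in it is a placeholder rather than our real value:

    my_project:
      target: dev
      outputs:
        dev:
          type: snowflake
          account: account.region                 # placeholder
          user: DBT_SERVICE_USER                  # placeholder
          role: TRANSFORMER                       # placeholder
          warehouse: TRANSFORMING                 # placeholder
          database: ANALYTICS                     # placeholder
          schema: PUBLIC                          # placeholder
          private_key_path: /keys/rsa_key.p8      # placeholder path to the key file
          private_key_passphrase: "{{ env_var('SNOWFLAKE_PRIVATE_KEY_PASSPHRASE') }}"
          threads: 4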
Interestingly, the errors only happen locally (so far). We haven't seen them in DBT Cloud runs.
The DBT version is 0.20.2 in both cases. We tried other versions (0.21.0, 0.20.0 & 0.19.1), and the issue persists. I don't know why we're only encountering this now, as we've used those other versions previously without any issues.
It's similar to this question, except in our case it doesn't happen consistently at all. We tried connecting "without a region" (using Snowflake Organizations), but it doesn't make any difference:
250001 (08001): Failed to connect to DB: organization-account.snowflakecomputing.com:443. JWT token is invalid.
Is there anything we can do to resolve this?
EDIT: When the error happens, the model hangs for 60 seconds until the error appears.
EDIT 2: I think the error might have started happening when we started using the DBT-provided Docker images. Not sure exactly what might be wrong with them, but we'll try going back to our own custom images and see if that works.
This has since stopped happening. 🤷‍♂️
Possibly because of this change from Snowflake:
"An improvement has been added to Snowflake's Cloud Services layer to relax the validity restrictions."
Also, we are now using different Docker containers, so that might have something to do with it as well:
First, we switched to xemuliam/dbt, from Docker Hub.
But since that is no longer maintained, we are now using the official DBT Docker images from GitHub, which were not available at the time the question was posted (example pull shown after the list):
dbt-core
dbt-postgres
dbt-snowflake
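If it helps anyone, pulling and sanity-checking the Snowflake adapter image looks roughly like this; the registry path is as published on GitHub, the tag is a placeholder for whichever release you need, and this assumes the image's entrypoint is the dbt CLI:

    docker pull ghcr.io/dbt-labs/dbt-snowflake:1.0.0
    docker run --rm ghcr.io/dbt-labs/dbt-snowflake:1.0.0 --version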
This is one of the most bizarre problems I've come across since I started using OData for my mobile apps. The OData server I've developed is backed by SQL Express 2008, and this combination has been installed on 50 different servers and/or PCs over the last 15 months. All 50 installs have been running stably and consistently with large amounts of data.
A couple of days ago one of my clients contacted me to say that my client app (running on iOS 7) was hitting an odd error when POSTing data to their server. The error has an HTTP code of 400 and the error text is "The operation couldn't be completed. (Timeout error 400.)". My first question is: why is a timeout error coming back with a 400 code? Generally when I get timeouts (due to a firewall, etc.) they're in the 100x range. There is no indication in the event logs on the server of ANY problems occurring. My own logs (stored in the SQL database) show no error, which is odd because I'm using the generic exception-catching method in my OData service to log any problems. I haven't yet got to the step of logging all requests.
The error is only raised when posting one particular set of data; all other posts from the device work perfectly. I got the client to re-install the app (deleting all data) and then download the data set that was causing the error. The download worked fine. We then began making incremental changes to the data to replicate what it looked like when the error occurred, posting each change to the server and observing the result. Most of the incremental changes work fine, but certain combinations cause the error. One of the increments involves a large volume of changes and posts fine, yet subsequent alteration of any of the objects (sometimes changing as little as 6 characters in a text field) causes the error. And in some circumstances, altering objects that have already been posted to the server works without a problem.
I wiped the service components from the server and did a fresh install. I shifted TCP ports in case something else was listening on 443 and causing problems. I restarted the server. None of these changed the behaviour.
My last-ditch solution is to completely re-install IIS and the .NET Framework, but I'd obviously like to avoid this as it's not my server. The server is overseas from my current location, so debugging isn't really an option. I'm hoping someone has an idea of what I can do diagnostically to try and determine the source of this bizarre 'gremlin'.
Have you tried a more thorough traffic analysis using a tool like Fiddler? The "timeout" error does indeed seem odd, and what stood out from your post was that your server is "overseas". Could there be something going on with the "times" being used/generated, e.g. server time vs. local time?
Just to confirm, the "same" exact set of data always fails? Can you replicate this via a remote debugger or via localhost? If so, can you turn on "verbose errors"?
We have built a website based on the design of the Kigg project on CodePlex:
http://kigg.codeplex.com/releases/view/28200
Basically, the code uses the repository pattern, with a repository implementation based on Linq-To-Sql. Full source code can be found at the link above.
The site has been running for some time now and just about a year ago we started to get errors like:
There is already an open DataReader associated with this Command which must be closed first.
ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.
These are the closest examples of the errors I can find from memory. They started to occur when the site traffic started to pick up. After banging my head against the wall, I assumed that the problem was inherent in Linq-To-Sql and in how we were using the same connection to call multiple commands in a single web request.
Eventually, I discovered MARS (Multiple Active Result Sets), added it to the data context's connection string, and like magic, all of my errors went away.
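For anyone wondering what that change looks like, enabling MARS is a single connection string setting; the server and database names below are placeholders:

    Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=True;MultipleActiveResultSets=True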
Now, fast forward about 1 year and the site traffic has increased tremendously. Every week or so, I will get an error in SQL Server that reads:
A severe error occurred on the current command. The results, if any, should be discarded
Immediately after this error, I receive hundreds to thousands of InvalidCastException errors in the error logs. Basically, this error shows up for each and every call to the Linq-To-Sql data context. Only after I restart the web server do these errors clear up.
I read a post on the Microsoft Support site that described my problem (minus the InvalidCastException errors) and stated that if I'm going to use MARS, I should also use Asynchronous Processing=True in the connection string. I tried this, but it did not solve my problem either.
Not really sure where to go from here. Hopefully someone here has seen and solved this problem before.
I have the same issue. Once the errors start, I have to restart the IIS Application Pool to fix.
I have not been able to reproduce the bug in dev despite trying many different scenarios involving multi-threading, leaving connections open, etc etc.
One possible lead I do have is that amongst the errors in the server Event Log is an OutOfMemoryException for the Application Pool. Perhaps this is the underlying cause of the spurious SQL Datareader errors (a memory leak elsewhere). Although again I haven't been able to reproduce this in dev.
Obviously if you are using a 64 bit OS then this is probably not the cause in your case.
So after much refactoring and re-architecting, we figured out that the problem all along was MARS (Multiple Active Result Sets) itself. I'm not sure why or what happens exactly, but MARS somehow gets result sets mixed up and doesn't recover until the web app is restarted.
We removed MARS and the errors stopped.
If I remember correctly, we added MARS to solve a problem where a connection/command was already closed in LinqToSql and we tried to access an object graph that hadn't been loaded. Without MARS, we'd get an error; with MARS, it seemed not to care. This is really a great example of us not understanding what the heck we were doing, and we learned some valuable (and expensive) lessons from it.
Hope this helps others who have experienced this.
Thanks to all who have contributed their comments and answers.
I understand you figured out the solution.
The following is not a direct solution to the problem, but it may be useful for others to take a look at:
What does "A severe error occurred on the current command. The results, if any, should be discarded." SQL Azure error mean?
http://social.msdn.microsoft.com/Forums/en-US/bbe589f8-e0eb-402e-b374-dbc74a089afc/severe-error-in-current-command-during-datareaderread
Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time, we attempt to back up to the Blobstore and the process never finishes. We see the backup in our Pending Backups list, but it never actually completes. We only have a total of 43 MB of data right now, so we don't see it as a data-transfer problem. Looking at our default task queue, we have two pending tasks: one is a call to /_ah/mapreduce/controller_callback and the other is a call to /_ah/mapreduce/worker_callback.
The worker_callback racks up its retry count, and the only error clue we have is that the Previous Run tab shows the last HTTP response code as 500. There is no error message and nothing shows up in our error logs; it just keeps retrying over and over again.
We've been able to narrow the backup problems down to a specific entity kind in a particular namespace, but we can't figure out why that entity kind is failing when the others are not. The major difference is that this entity kind has a large number of embedded entities, but if App Engine is able to read/put those entities, we can't understand why it has problems backing them up. The namespace in which the error occurs has the largest amount of data stored for that entity kind compared to the other namespaces we have set up.
We think that if we could see what error is occurring in the worker_callback, we might be able to figure out why the backup is failing, or what is wrong with our data that is preventing the backup. Is there something we need to set up or enable through settings / configuration files to get more detailed information on the backup? Or is there some other avenue we should explore to investigate/fix this problem?
I should mention we are using the Java SDK as well as Objectify V3 to work with the data store. We are also backing up data to the Blobstore.
Thank you.
Well, with the App Engine team's help, we figured out what the problem was and worked around the issue. I want to give the details in case anyone else runs into this problem.
In issue 8363 the App Engine team indicated that, from their logs, they could see the MapReduce job failed because of the large number of properties our entity kind had. The specific entity kind causing the failure had a large number of variable properties, which generated errors when MapReduce tried to write out a schema. They indicated that the solution on their end was to ignore entities like this during the backup so that the backup could complete successfully.
What we did to work around the issue and make the backup work was change how we told Objectify to store our data. The large number of properties was being created by our use of the @Embedded annotation on a HashMap class member field. Since embedding breaks classes down into individual components, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to migrate it to the new serialized property. This made backup/restore work again.
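As a rough illustration of the change (the entity class and field names below are made up, and the exact annotation packages depend on the Objectify 3 version in use):

    import java.util.HashMap;
    import java.util.Map;

    import javax.persistence.Id;

    import com.googlecode.objectify.annotation.Serialized;

    public class SampleEntity {
        @Id Long id;

        // Before: the map was annotated @Embedded, so every key became its own
        // datastore property and the schema the backup tried to write out exploded.
        // @Embedded Map<String, String> attributes = new HashMap<String, String>();

        // After: @Serialized stores the whole map as a single opaque blob property,
        // which keeps the property count small enough for the backup to complete.
        @Serialized Map<String, String> attributes = new HashMap<String, String>();
    }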
You can read more about the differences between embedded and serialized properties on Objectify's website.
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your application ID so we can further debug this specific scenario.
Thanks!