I am unable to export my REDCap data and am receiving an error notification. The notification says too much data although it is a tiny project. Solutions? - export

The problem: I am trying to export my REDCap data to a CSV file and am unable to do so. I am receiving an error notification that says there is too much data, although it is a tiny project. Help will be much appreciated.
The full error text: " We are sorry, but apparently the data export is not able to complete successfully. It may simply be that there is too much data trying to be exported at once, in which it is causing REDCap to crash. If this error occurs again, it is recommended that you attempt to export a smaller data set (fewer fields and/or perhaps fewer records) so that this error does not occur. Our apologies for this inconvenience."
What I have tried:
I have made sure I have the necessary user rights.
I have tried through a colleague's REDCap user (who has the necessary user rights).
I have tried exporting only one instrument (no success).
I have created a test project with only two questions; I receive the same notification in the new test project as well.
I could not export data in either development mode or production.
Any ideas?
Many Thanks!

The institution had blocked the file upload option for security reasons. Apparently REDCap's export system uses the upload mechanism, so it ended up being disabled as well.

The local storage folder location was pointing to a folder that was missing on the server. Simply creating the folder fixed the upload and the subsequent export problem.

Related

I am not able to add data in Vespa

I am not able to add data to the database even though the database is up.
I have checked it using
vespa-get-cluster-state
The error message that I got is in the image below.
Please let me know what to do to resolve this issue.
You need to ensure that all your nodes agree on the current time, by running NTP.
You'll probably be better off deploying on https://cloud.vespa.ai so you don't need to deal with this yourself.

Getting error: Pipe Notifications bind failure "Bucket already bound from another Snowflake account"

I am getting the error: Pipe Notifications bind failure "Bucket already bound from another Snowflake account"
We have 2 accounts, so I removed the bucket references from one account, but I am still getting this error. I have an S3 Integration setup.
Do I need to re-do the integration to get this working properly? I am unable to create additional / new transforms on this pipe.
Does this require Snowflake hands to fix?
Thanks!
Someone (bstora) answered this yesterday with a correct answer; I'm not sure why it has been deleted, but it was 100% correct.
So, to restate what they wrote: if this happens to you, you will need to reach out to Snowflake Support to have them determine the correct course of action. There is currently no action you can take as a user to correct or fix this.
This page will show you how to create a support ticket, if you haven't created one in the past.
https://community.snowflake.com/s/article/How-To-Submit-a-Support-Case-in-Snowflake-Lodge

BigQuery Throwing Import Error, No Information Provided

I am trying to import a CSV file into my BigQuery Table. This import has worked in the past, but now I am getting the following error message:
{"message":"Too many errors encountered. Limit is: 0.","reason":"invalid"}
All other fields are empty when I run the debugger.
This is... not helpful. I am unaware of any issues with the data itself, as the export/import data has not changed. Curiously, when trying to use a previous Job Template and run through the web console, the web console itself hangs and the dialog never goes away once I hit the blue "Submit" button.
Job Id: job_e0faf560d3df424ea74519e1b24a23f7
I am generating a CSV and exporting it to Google Cloud Storage. I am using App Engine and have switched to the new Google Cloud Storage Client Library. I had uploaded the file using GcsFileOptions.getDefaultInstance() as well as by constructing my own GSFileOptions, setting the content type to CSV.
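For context, the upload path with the new client library looks roughly like the sketch below. This is only an illustration of what I described above, not the actual code; the class, bucket, and object names are placeholders.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.Charset;

import com.google.appengine.tools.cloudstorage.GcsFileOptions;
import com.google.appengine.tools.cloudstorage.GcsFilename;
import com.google.appengine.tools.cloudstorage.GcsOutputChannel;
import com.google.appengine.tools.cloudstorage.GcsService;
import com.google.appengine.tools.cloudstorage.GcsServiceFactory;

public class CsvUploader {
    // Writes csvContent to Cloud Storage with an explicit CSV content type,
    // mirroring the "constructing my own options" variant mentioned above.
    public static void writeCsv(String bucket, String object, String csvContent) throws IOException {
        GcsService gcsService = GcsServiceFactory.createGcsService();
        GcsFilename filename = new GcsFilename(bucket, object);  // placeholder names
        GcsFileOptions options = new GcsFileOptions.Builder()
                .mimeType("text/csv")                            // instead of getDefaultInstance()
                .build();
        GcsOutputChannel channel = gcsService.createOrReplace(filename, options);
        try {
            channel.write(ByteBuffer.wrap(csvContent.getBytes(Charset.forName("UTF-8"))));
        } finally {
            channel.close();
        }
    }
}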
After the failure, I downloaded the file from Google Cloud Storage, changed the encoding (tried ASCII and UTF-8), and still got the same result.
I am using AppEngine 1.8.1.1 and the BigQuery Library (google-api-services-bigquery-v2-rev89-1.15.0-rc). This was working as expected previously, so I'm not sure what has happened. Any suggestions are welcome. Thank you!
There are two error fields on the BigQuery job. The first is the error result, which tells you whether (and why) the job failed. The error result in your case is that the job failed due to encountering too many input errors during the import.
The second field is the error stream, which tells you about errors encountered during the job. If you had set the maxBadRecords field, for example, you could have errors in the error stream, but the actual job might succeed.
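For reference, both fields can be read back through the Java client you're using (google-api-services-bigquery). The following is just a minimal sketch, assuming an already-authorized Bigquery instance; the project and job ids are placeholders.

import java.io.IOException;
import java.util.List;

import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.ErrorProto;
import com.google.api.services.bigquery.model.Job;
import com.google.api.services.bigquery.model.JobStatus;

public class JobErrorPrinter {
    // Prints the error result (whole-job failure) and the error stream (per-record errors).
    public static void printErrors(Bigquery bigquery, String projectId, String jobId) throws IOException {
        Job job = bigquery.jobs().get(projectId, jobId).execute();
        JobStatus status = job.getStatus();

        ErrorProto result = status.getErrorResult();
        if (result != null) {
            // Set only when the job as a whole failed, e.g. "Too many errors encountered."
            System.out.println("Job failed: " + result.getReason() + " - " + result.getMessage());
        }

        List<ErrorProto> stream = status.getErrors();
        if (stream != null) {
            // Individual problems hit during the load; these can appear even on successful
            // jobs if maxBadRecords on the load configuration allows some bad rows through.
            for (ErrorProto e : stream) {
                System.out.println(e.getLocation() + ": " + e.getMessage());
            }
        }
    }
}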
I looked up your job in the BigQuery logs, and was able to find that the error stream indicates an error on line 6253: "Too few columns: expected 80 column(s) but got 1 column(s). For additional help: http://goo.gl/RWuPQ"
Can you verify that line 6253 is correct?
-- Jordan Tigani / BigQuery Engineer
Today there is a general problem with App Engine:
"We are still investigating the issue with Google App Engine, primarily (but not restricted to) Datastore latency.
We will provide another status update in the next two hours."
https://groups.google.com/forum/#!topic/google-appengine-downtime-notify/1pJZnl4EMKk

App Engine backup never finishes, only clue is failure in mapreduce worker_callback

Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time we attempt to back up to the Blobstore, the process never finishes. We see the backup in our Pending Backups list, but it never actually completes. We only have a total of 43MB of data right now, so we don't see it as a data transfer problem. Looking at our default task queue, we have two pending tasks: one is a call to /_ah/mapreduce/controller_callback and the other is a call to /_ah/mapreduce/worker_callback.
The worker_callback racks up its retry count, and the only error clue we have is on the Previous Run tab, which shows the last HTTP response code to be 500. There is no error message, nothing shows up in our error logs, it just keeps trying over and over again.
We've been able to narrow the backup problems down to a specific entity kind for a particular namespace, but we can't figure out why that entity kind is failing whereas the others are not. The major difference is that the entity kind has a large number of embedded entities, but if App Engine is able to read/put those entities, we can't understand why it seems to have problems backing them up. The particular namespace in which the error occurs has the largest amount of data stored for that entity kind compared to the other namespaces we have set up.
We think that if we can see what error is occurring in the worker_callback, we may be able to figure out why the backup is failing, or what is wrong with our data that's preventing the backup. Is there something we need to set up or enable through settings or configuration files to give us more detailed information on the backup? Or is there some other avenue we should explore to investigate and fix this problem?
I should mention we are using the Java SDK as well as Objectify V3 to work with the datastore. We are also backing up the data to the Blobstore.
Thank you.
Well, with the App Engine team's help, we figured out what the problem was and worked around the issue. I want to give the details in case anyone else runs into this problem.
In issue 8363 the App Engine team indicated that from their logs they could see the mapreduce failed because of the large number of properties our entity kind had. The specific entity kind causing the failure had a large number of variable properties, which generated errors when the mapreduce tried to write out a schema. They indicated that the solution on their end was to ignore entities like this in the backup so that the backup would complete successfully.
What we did to work around the issue and make the backup work was change how we told Objectify to store our data. The large number of properties was being created by our use of the @Embedded annotation on a HashMap class member field. Since @Embedded breaks classes down into individual components, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to make it use the new serialized property. This made the backup/restore work again.
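A rough before/after sketch of the change, in Objectify V3 style; the entity and field names here are made up for illustration, and the exact annotation packages may differ between Objectify releases.

import java.util.HashMap;
import java.util.Map;

import javax.persistence.Id;

import com.googlecode.objectify.annotation.Serialized;

public class ReportEntry {  // hypothetical entity kind
    @Id Long id;

    // Before: @Embedded exploded every map key into its own datastore property,
    // which is what the backup's schema writer tripped over.
    // @Embedded private Map<String, String> attributes = new HashMap<String, String>();

    // After: @Serialized stores the whole map as a single blob property, so the
    // entity keeps a small, fixed set of properties. Existing records still need
    // the conversion pass described above.
    @Serialized private Map<String, String> attributes = new HashMap<String, String>();
}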
You can read more about the differences between embedded and serialized on Objectify's website.
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your Application ID so we can further debug this specific scenario.
Thanks!

Sudden error accessing custom settings in Salesforce

We use custom settings in a Salesforce app. We access them like so:
MySettings__c settings = MySettings__c.getOrgDefaults();
This was working fine, but today the app completely crashed. By that I mean the page doesn't load at all; I just get a white screen telling me an internal error occurred. We traced it down to this line of code: when it is commented out, the page loads as well as it can without those settings (but at least it loads).
Running that single line of code in the System Log (using the Execute functionality) also causes a report of Internal System Error. The only thing the system log reports is "FATAL_ERROR Internal Salesforce.com Error." The Apex code modal reports "Internal System Error: 1018505045-332 (-920440070)"
The setting has values for the organization. We've also tried deleting the settings and recreating them, to no effect. So far Salesforce has been no help beyond telling us to ask on their website.
This is very frustrating as it was working fine on Friday and today it was broken before anyone touched anything.
What you have there is a platform error. Whenever you get those you should report them to SFDC support and they will be able to see further internal logging to sort it out.
There is nothing anyone out here can do to help, I'm afraid.
Paul
Try setting the apiVersion of the affected code back to version 21.0. We had the same issue, and making this change provided an effective workaround.
This was a bug in Salesforce's infrastructure, which has been reported resolved. If you're still seeing this error with API version 22.0, you should create a case with salesforce support.
