Getting error "Pipe Notifications bind failure: Bucket already bound from another Snowflake account"

I am getting the error: Pipe Notifications bind failure "Bucket already bound from another Snowflake account".
We have two accounts, so I removed the bucket references from one account, but I am still getting this error. I have an S3 storage integration set up.
Do I need to redo the integration to get this working properly? I am unable to create additional/new transforms on this pipe.
Does this require intervention from Snowflake to fix?
Thanks!

Someone (bstora) answered this yesterday with a correct answer; I'm not sure why it was deleted, but it was 100% correct.
So, to restate what they wrote: if this happens to you, you will need to reach out to Snowflake Support to have them determine the correct course of action. There is currently no action you can take as a user to correct or fix this.
This page will show you how to create a support ticket, if you haven't created one in the past.
https://community.snowflake.com/s/article/How-To-Submit-a-Support-Case-in-Snowflake-Lodge

Related

I am not able to add data in Vespa

I am not able to add data to the database even though the database is up.
I have checked it using
vespa-get-cluster-state
The error message I got is in a screenshot (not reproduced here).
Please let me know what to do to resolve this issue.
You need to ensure that all your nodes agree on the current time, by running NTP.
You'll probably be better off deploying on https://cloud.vespa.ai so you don't need to deal with this yourself.
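If you want to spot-check how far a node's clock is off before (or after) setting up NTP, here is a minimal Java sketch. It assumes the Apache Commons Net library is available (my assumption, not something Vespa ships); run it on each node and compare the offsets:

    import java.net.InetAddress;
    import org.apache.commons.net.ntp.NTPUDPClient;
    import org.apache.commons.net.ntp.TimeInfo;

    // Queries an NTP server and prints this node's clock offset.
    // "pool.ntp.org" is an example server; use whatever your hosts sync to.
    public class ClockSkewCheck {
        public static void main(String[] args) throws Exception {
            NTPUDPClient client = new NTPUDPClient();
            client.setDefaultTimeout(3000); // give up after 3 seconds
            try {
                TimeInfo info = client.getTime(InetAddress.getByName("pool.ntp.org"));
                info.computeDetails(); // fills in offset and round-trip delay
                System.out.println("Clock offset: " + info.getOffset() + " ms");
            } finally {
                client.close();
            }
        }
    }

A large spread between nodes is exactly the kind of disagreement NTP fixes.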

Create does not work in GAE Datastore viewer

When I try to create some entities, I don't see the option to input fields; I just see the Save Entity button.
However, I can view all the existing entities.
What is very strange is that there is another entity kind, called VideoEntity, for which create did not work yesterday but works today.
Can somebody help me with this seemingly unpredictable tool?
Regards,
Sathya
I think the console knows what properties each entity has based on existing data, rather than your models, and that data is only updated periodically. When did you upload your app? Maybe waiting a few hours will give the console time to update.
Alternatively, you could use the Remote API to add your entities, or write and upload a small snippet such as:
VideoStatsEntity(app='home', ip='116.89.52.67', params='tag=20130210').put()
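If you are on the Java SDK instead, the Remote API gives you the same escape hatch. A minimal sketch, assuming the appengine-remote-api library; the hostname, credentials, entity kind, and property values are placeholders borrowed from the Python example above:

    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;
    import com.google.appengine.tools.remoteapi.RemoteApiInstaller;
    import com.google.appengine.tools.remoteapi.RemoteApiOptions;

    // Connects to the live datastore from a local JVM and inserts one entity.
    public class AddEntity {
        public static void main(String[] args) throws Exception {
            RemoteApiOptions options = new RemoteApiOptions()
                    .server("your-app-id.appspot.com", 443)        // placeholder host
                    .credentials("admin@example.com", "password"); // placeholder login
            RemoteApiInstaller installer = new RemoteApiInstaller();
            installer.install(options);
            try {
                DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
                Entity stats = new Entity("VideoStatsEntity"); // hypothetical kind
                stats.setProperty("app", "home");
                stats.setProperty("ip", "116.89.52.67");
                stats.setProperty("params", "tag=20130210");
                ds.put(stats);
            } finally {
                installer.uninstall();
            }
        }
    }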
Writing a simple interface to the datastore to allow you to edit/create models is probably the best thing to do in this case; a rough sketch of such an endpoint follows below. You know what the models contain, so you can adjust your interface accordingly, rather than waiting for the admin interface to "catch up", as Gwyn notes.
I believe there are some property types that are impossible to add via the admin interface you are using, so you'll probably reach the point of creating a custom interface sooner rather than later.
The admin datastore view is good for quickly checking the contents of the datastore, but have you ever tried paging through hundreds of entries? Not fun.
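To make the "simple interface" idea concrete, here is a rough sketch of an admin-only servlet that creates an entity of any kind from query parameters. Everything here (the class name, the URL scheme, storing every value as a string) is illustrative, not a production design:

    import java.io.IOException;
    import java.util.Enumeration;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;

    // GET /quick-create?kind=VideoEntity&name=foo&tag=bar
    // creates one VideoEntity with string properties name and tag.
    // Lock this down to admins in web.xml before deploying.
    public class QuickCreateServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            Entity entity = new Entity(req.getParameter("kind"));
            Enumeration<?> names = req.getParameterNames();
            while (names.hasMoreElements()) {
                String name = (String) names.nextElement();
                if (!"kind".equals(name)) {
                    entity.setProperty(name, req.getParameter(name));
                }
            }
            DatastoreServiceFactory.getDatastoreService().put(entity);
            resp.getWriter().println("Created " + entity.getKey());
        }
    }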

App Engine backup never finishes; the only clue is a failure in the map reduce worker_callback

Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time, we attempt to back up to the Blobstore, and the process never finishes. We see the backup in our Pending Backups list, but it never actually completes. We only have a total of 43 MB of data right now, so we don't see it as a data-transfer problem. Looking at our default task queue, we have two pending tasks: one is a call to /_ah/mapreduce/controller_callback and the other is a call to /_ah/mapreduce/worker_callback.
The worker_callback racks up its retry count, and the only error clue we have is that the Previous Run tab shows the last HTTP response code to be 500. There is no error message and nothing shows up in our error logs; it just keeps retrying over and over again.
We've been able to narrow the backup problems down to a specific entity kind in a particular namespace, but we can't figure out why that entity kind is failing while the others are not. The major difference is that this entity kind has a large number of embedded entities, but if App Engine is able to read/put those entities, we can't understand why it seems to have problems backing them up. The namespace in which the error occurs has the largest amount of data stored for that entity kind compared to the other namespaces we have set up.
We think that if we could see what error is occurring in the worker_callback, we might be able to figure out why the backup is failing, or what is wrong with our data that's preventing the backup. Is there something we need to set up or enable through settings/configuration files to get more detailed information on the backup? Or is there some other avenue we should explore to investigate/fix this problem?
I should mention we are using the Java SDK, along with Objectify V3, to work with the datastore. We are also backing up the data to the Blobstore.
Thank you.
Well, with the App Engine team's help we figured out what the problem was and worked around the issue. I want to give details in case anyone else runs into this problem.
In issue 8363, the App Engine team indicated that, from their logs, they could see the map reduce failed because of the large number of properties our entity kind had. The specific entity kind causing the failure had a large number of variable properties, which generated errors when map reduce tried to write out a schema. They indicated that the solution on their end was to ignore entities like this in the backup so that the backup would complete successfully.
What we did to work around the issue and make the backup work was to change how we told Objectify to store our data. The large number of properties was being created by our use of the @Embedded annotation on a HashMap member field. Since @Embedded breaks classes down into individual components, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to migrate the data to the new serialized property. This made backup/restore work again.
You can read more about the differences between embedded and serialized on Objectify's website.
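To make the workaround concrete, here is a minimal before/after sketch in Objectify v3 terms. The class and field names are hypothetical; in v3 the relevant annotations are @Embedded (from javax.persistence) and @Serialized (from Objectify):

    import java.io.Serializable;
    import java.util.HashMap;
    import javax.persistence.Embedded; // used by the commented-out "before" field
    import javax.persistence.Id;
    import com.googlecode.objectify.annotation.Serialized;

    // Hypothetical stand-in for the entity kind that broke the backup.
    public class StatsRecord {
        @Id Long id;

        // BEFORE: @Embedded flattens every map entry into its own datastore
        // property, so a large map yields a huge number of properties, which
        // is what made the backup's schema writer fail.
        // @Embedded
        // HashMap<String, CounterValue> counters;

        // AFTER: @Serialized stores the whole map as a single opaque blob
        // property; the property count drops to one and the backup completes.
        @Serialized
        HashMap<String, CounterValue> counters;

        // The value type must implement Serializable for @Serialized to work.
        public static class CounterValue implements Serializable {
            long count;
        }
    }

The trade-off is that a @Serialized field is not indexed, so you can no longer filter queries on anything inside the map; it only suits data you never query by.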
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your Application ID so we can further debug this specific scenario.
Thanks!

Debugging Silverlight RIA Services SubmitChanges

I'm having huge difficulty debugging database operations from Silverlight RIA. This is understandable, I suppose, since the database operations are abstracted behind about three layers of services and ORM and such, but is there any way I can see what the database is telling me about the issue? I'm able to find an EntityConflict object, but it doesn't offer much information.
The only field that seems to indicate any problem is the "IsDeleted" property being equal to true; however, this property is not well documented on MSDN, and I cannot even be sure that its being true is a problem.
I've attempted to use Fiddler, but no errors are reaching it. I've attached to the application's unhandled-exception event, but that points me back to the EntityConflict, which I am recovering by handling the SubmittedChanges event and accessing the args. I've even enabled WCF tracing to try to recover some information but, of course, there is nothing there either.
Did you inspect SubmitOperation.Error after the submit operation failed? It should contain the error information you seek. More information on error handling can be found here: Link
The issue was a known one, at least a few years ago: an INSTEAD OF INSERT trigger doesn't return SCOPE_IDENTITY() for a column inserted by this method. EF uses this value to verify that an insert completed; when that fails, you get a deletion conflict. This is, apparently, a known issue with the SQL Server EF provider, but I have been unable to confirm that it has been resolved.

MOSS 2007 SSP Error "Search application '{0}' is not ready."

I'm trying to fix a broken SSP on a MOSS 2007 site. The problem I am running into manifests itself as follows...
In the SSP "Search Settings" page I get this message:
The search service is currently offline. Visit the Services on Server page in SharePoint Central Administration to verify whether the service is enabled. This might also be because an indexer move is in progress.
In the SSP "User Profiles and Properties" page I get this in red at the top:
An error has occurred while accessing the SQL Server database or the Office SharePoint Server Search service. If this is the first time you have seen this message, try again later. If this problem persists, contact your administrator.
I have contacted my administrator, but that is currently me and it turns out I don't know any more than I do about the problem.
In the Event Log I get the following message:
The Execute method of job definition Microsoft.Office.Server.Search.Administration.IndexingScheduleJobDefinition (ID 8714973c-0514-4e1a-be01-e1fe8bc01a18) threw an exception. More information is included below.
Search application '{0}' is not ready.
The Event ID is 6398, which isn't as useful as I had hoped, but I do find the message interesting in that it looks like a String.Format call where the substituted value is missing. Unfortunately, it is not interesting in the sense of telling me how to fix the problem.
SharePoint's own log offers this:
UserProfileConfigManager.GetImportStatus() failed to obtain crawl status: System.InvalidOperationException: Search application '{0}' is not ready.
at Microsoft.Office.Server.Search.Administration.SearchApi..ctor(WellKnownSearchCatalogs catalog, SearchSharedApplication application)
at Microsoft.Office.Server.Search.Administration.SearchSharedApplication.get_SearchApi()
at Microsoft.Office.Server.UserProfiles.UserProfileConfigManager.c__DisplayClass3.b__0()
at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock)
I have tried stopping and starting the search service, removing and re-adding it from the administration panel, and pretty much every other thing I could find to do with SharePoint's own administrative tools, which leads me to believe the problem here may be database or permissions related.
There was a second SSP set up on the same server, which I think may have been part of the original cause of the problem, but removing it has made no difference.
Maybe you can make sense of this; I'm new to SharePoint, so it makes little sense to me. It is a rough machine translation from Spanish (which, among other things, rendered the verb "tenía" as "tapeworm"):
"Shared Services: after searching for a solution for a long time, I found a forum where someone had the same problem. After reading endless comments, what I did to solve the problem was to create a new Shared Service, then assign the other applications to it and set it as the default. It starts the profile import and, later, the audiences. Naturally, I first did it on a test site in case something went wrong; then I deleted the first Shared Service, and finally the error was resolved. 'The snapshot of the application configuration registry has been stored correctly in the database. Context: application `SharedServices2`.'"
You didn't mention anything about tapeworms, so maybe you're running a newer version.
Translation of:
http://tecnologiainformaticait.wordpress.com/2008/11/21/error-sharepoint-search-application-0-is-not-ready/
Personally, I'd try the msdn forums.
So it seems that the problem was a corrupted Shared Service Provider (no idea how it came about, but there you go) and the only working solution I could find was to delete it and start again.
I suspect there may have been a more elegant fix by changing something in the database somewhere, but I don't know the Sharepoint Database model well enough to find it in the time available.
As an additional warning to this: if you do delete your SSP, you may find that it doesn't delete cleanly, so you get a bunch of SQL Server tasks that still try to run against an empty database, which can cause problems if you have anything else running on the same database server.
Same problem. My DBA correctly deleted the search database, and it still doesn't work.
I'll post the solution on my blog when I find something.
For the moment, we have opened a call with Microsoft.
1- Create a new SSP
2- In Central Admin, click on Shared Services Administration
3- Click on "Change Associations" and move all the web apps to the new SSP
4- Choose a new search DB and select the correct server to do the indexing if you are in a farm
Problems created by this operation:
We noticed that we lose statistics information for our sites.
If you try this solution, give us your feedback too.
Thanks.
http://dejacquelot.blogspot.com/
