Salesforce Adobe Merge Mapping

I am getting this error while performing merge mapping on Salesforce:
Error:
Error saving the merge mapping: Upsert failed. First exception on row 18; first error: STORAGE_LIMIT_EXCEEDED, storage limit exceeded: []
Can someone help me fix this?

I would say that the error message is quite clear. It seems that the storage limit in your Salesforce org has been reached and the document is too big.
Go to Setup -> Data Management -> Storage Usage and check your available storage.
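If you'd rather check this programmatically, below is a minimal sketch using the Salesforce REST API limits resource; the instance URL, API version, and access token are placeholders, and the DataStorageMB/FileStorageMB keys should be verified against your API version.

import requests

# Placeholders: substitute your own instance URL and a valid OAuth access token.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
ACCESS_TOKEN = "your_oauth_access_token"

# The REST "limits" resource reports org limits, including data and file storage.
resp = requests.get(
    f"{INSTANCE_URL}/services/data/v58.0/limits",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
limits = resp.json()

# DataStorageMB / FileStorageMB carry Max and Remaining values (in MB), if present.
for key in ("DataStorageMB", "FileStorageMB"):
    if key in limits:
        print(key, "max:", limits[key]["Max"], "remaining:", limits[key]["Remaining"])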

Related

Getting 'Data Not Available' error while importing BCFKS certificate into Salesforce

Getting this error while importing the BCFKS file into the Salesforce system.
Data Not Available
The data you were trying to access could not be found. It may be due to another user deleting the data or a system error. If you know the data is not deleted but cannot access it, please look at our support page.

delete_model() error when cleaning up AWS SageMaker

I followed the tutorial on https://aws.amazon.com/getting-started/hands-on/build-train-deploy-machine-learning-model-sagemaker/
I got an error when trying to clean up with the following code.
xgb_predictor.delete_endpoint()
xgb_predictor.delete_model()
ClientError: An error occurred (ValidationException) when calling the DescribeEndpointConfig operation: Could not find the endpoint configuration.
Does it mean I need to delete the model first instead?
I checked on the console and deleted the model manually.
No, you don't need to delete the model prior to deleting the endpoint. From the error logs, it looks like it's not able to find the endpoint configuration. Can you verify whether you are setting delete_endpoint_config to True?
xgb_predictor.delete_endpoint(delete_endpoint_config=True)
Additionally, you can verify whether the endpoint configuration is still available on the AWS console.
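If the predictor object no longer knows about the endpoint configuration, you can also clean up explicitly with boto3. A minimal sketch, assuming a placeholder endpoint name (substitute the one shown in your SageMaker console):

import boto3

sm = boto3.client("sagemaker")
endpoint_name = "your-xgboost-endpoint"  # placeholder: use the name from the SageMaker console

# Look up the endpoint config and model(s) behind the endpoint before deleting anything.
endpoint = sm.describe_endpoint(EndpointName=endpoint_name)
config_name = endpoint["EndpointConfigName"]
config = sm.describe_endpoint_config(EndpointConfigName=config_name)
model_names = [variant["ModelName"] for variant in config["ProductionVariants"]]

# Delete in order: endpoint, endpoint config, then the model(s).
sm.delete_endpoint(EndpointName=endpoint_name)
sm.delete_endpoint_config(EndpointConfigName=config_name)
for model_name in model_names:
    sm.delete_model(ModelName=model_name)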

Snowflake: error creating a notification integration on an Azure storage queue

I was trying to create a notification integration for Azure Storage and created the storage queue. The Snowflake subnet is included and the Snowflake service principal has access to the storage, and everything works fine with the storage integration. Now I am trying to set up the notification integration and am getting the following error:
SQL execution internal error: Processing aborted due to error 370001:1831050371;
create notification integration my_azure_int
enabled = true
type = queue
notification_provider = azure_storage_queue
azure_storage_queue_primary_uri = 'https://accountname.queue.core.windows.net/queuename'
azure_tenant_id = '123456-abcdef-abc-123-98765432';
The error is not at all descriptive. Please suggest some ideas.
Can you verify how many notification integrations you have created in your account by executing the below command?
show notification integrations;
This could be because you exceeded the maximum number of integrations/queues that can be created (10 in total).
If it's not the case, I'd suggest trying again later, or open a support ticket.
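If you prefer to check this from code rather than a worksheet, here is a minimal sketch using snowflake-connector-python (the account identifier and credentials are placeholders) that counts the existing notification integrations against the limit of 10 mentioned above:

import snowflake.connector

# Placeholders: substitute your own account identifier and credentials.
conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password"
)
try:
    cur = conn.cursor()
    cur.execute("SHOW NOTIFICATION INTEGRATIONS")
    integrations = cur.fetchall()
    # Compare the count against the limit of 10 mentioned above.
    print(f"{len(integrations)} notification integration(s) found")
    for row in integrations:
        print(row[0])  # the first column of SHOW output should be the integration name
finally:
    conn.close()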
It was an issue on Snowflake's side; for some reason the notification integration was not allowed. But if you look at the error, it was created with incident 370001. Snowflake monitors those incidents and makes changes as needed.
They enabled the notification integration after a day, and then it worked fine.

How to configure ActiveMQ DB persistence to Oracle?

I need steps to configure ActiveMQ persistence to an Oracle database. I was reading some blogs but couldn't find a solid solution. Can someone please guide me, as I am new to configuring ActiveMQ?
Thanks in advance.
Edit 1 -
I followed the steps mentioned in the official documentation, but ended up getting the error below.
URL: https://activemq.apache.org/how-to-configure-a-new-database
ERROR: java.lang.RuntimeException: Failed to execute start task. Reason: org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 92 in XML document from class path resource [activemq.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 92; columnNumber: 48; cvc-complex-type.2.4.a: Invalid content was found starting with element 'jdbcPersistence'. One of '{"http://activemq.apache.org/schema/core":jdbcPersistenceAdapter, "http://activemq.apache.org/schema/core":journalPersistenceAdapter, "http://activemq.apache.org/schema/core":kahaDB, "http://activemq.apache.org/schema/core":levelDB, "http://activemq.apache.org/schema/core":mKahaDB, "http://activemq.apache.org/schema/core":memoryPersistenceAdapter, "http://activemq.apache.org/schema/core":replicatedLevelDB, WC[##other:"http://activemq.apache.org/schema/core"]}' is expected.
Edit 2 -
I was able to achieve this. Now the issue is that messages are getting stored in the DB in BLOB format, but I want them to be stored as plain text. Can someone please help?
Messages have to be stored as a BLOB, since ActiveMQ supports many non-text message types: bytes, map, object and stream.
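For reference, the schema error in Edit 1 says that the jdbcPersistence element is invalid; per the list in the error message itself, the ActiveMQ core schema expects jdbcPersistenceAdapter. A minimal activemq.xml sketch, where the datasource bean id, driver class, JDBC URL and credentials are placeholders to adapt to your Oracle environment:

<!-- Inside the <broker> element: reference the Oracle datasource bean by id. -->
<persistenceAdapter>
  <jdbcPersistenceAdapter dataSource="#oracle-ds"/>
</persistenceAdapter>

<!-- Outside the <broker> element: a Spring bean for the Oracle datasource (all values are placeholders). -->
<bean id="oracle-ds" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
  <property name="url" value="jdbc:oracle:thin:@localhost:1521:ORCL"/>
  <property name="username" value="activemq"/>
  <property name="password" value="activemq"/>
</bean>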

Realtime document permanently unable to be loaded due to server error

Earlier today we started to see instances of server errors popping up on an old realtime document. This is a persistent error and the end result appears to be that the document is completely inaccessible using the gapi.drive.realtime.load endpoint. Not great.
However, the same document is accessible through the gapi.client.drive.realtime.get endpoint, which is great for data recovery, but not so great for actually using the document. It's possible I can 'fix' the document by doing a 'drive.realtime.update', but I haven't tried, as hopefully the doc can be used to track down the bug.
Document ID: 0B9I5WUIeAEJ1Y3NLQnpqQWVlX1U
App ID: 597847337936
500 Error Message: "Document was not successfully migrated to new UserKey format"
Anyone else seeing this issue? Can I provide any additional information?
