I'm using the Cosmos DB SDK for JavaScript to find and delete an existing item in an Azure Function. Initialising the Cosmos client seems to work, but actually finding an item doesn't, for some reason.
Code to find/delete the item:
await container.item(id).delete();
The error says the entity with the given ID doesn't exist. But I know the item exists with that ID, since it's present in Cosmos DB: the ID the error reports as not existing is the same ID I see in the DB, so it's there.
Does anyone know what I'm doing wrong here?
EDIT:
I added the partition key value to the function:
await container.item(req.params.searchId, "/id").delete();
I believe /id is the partition key value, but I'm still getting the same error back.
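For what it's worth, in the JavaScript SDK the second argument to container.item() is the partition key value, not the partition key path. If the container is partitioned on /id, the partition key value of an item is the item's own id, so the call would look something like this (a minimal sketch, reusing req.params.searchId from the question above):

// The second argument is the partition key VALUE of this item, not the path "/id".
// With a partition key path of /id, that value is the item's id itself.
await container.item(req.params.searchId, req.params.searchId).delete();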
When I browse to the deployed web app on Azure and want to save e.g. a comment or a post in this blog project, I get an error message saying I can't save into the Azure database. Yet I have transformed the connection string in my Web.Release.config to point to the right database on Azure, and I can see data from that database too: when I search for a blog or a post on the web app, data from the Azure database is displayed.
But again, I cannot save any data to that database from the web app. When I want to create a new comment or blog, I can't, and I get something like:
Cannot insert the value NULL into column 'Id', table 'Blog.dbo.Comments'; column does not allow nulls. INSERT fails.
Yet, I can create a blog or comment locally and it works just fine, but that is to the local db.
Probably you don't set the Id in your code, and on your localhost you have some kind of auto-increment on the Id field in your DB (see Auto increment primary key in SQL Server Management Studio 2012). You don't have that in your Azure SQL database, so it won't generate the value for you automatically; it will be NULL, and since the column is presumably set to non-nullable, you get the error message. If you change it as shown in the link above, it should work.
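If you want the Azure database to generate the value on the existing table, one option is a sequence plus a default constraint, since an existing column can't simply be altered into an IDENTITY column. A sketch, with made-up sequence and constraint names (the table and column come from the error message above):

-- Hypothetical names; start the sequence above the current MAX(Id) to avoid collisions.
CREATE SEQUENCE dbo.CommentIdSeq AS INT START WITH 1 INCREMENT BY 1;
ALTER TABLE dbo.Comments
    ADD CONSTRAINT DF_Comments_Id DEFAULT (NEXT VALUE FOR dbo.CommentIdSeq) FOR Id;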
I am currently setting up our second Azure Search service. I am making it identical to our existing one, just in a different region.
I'm using the portal Import Data function to set up my index. For the Data Source, I have configured it to connect to my Azure SQL Database and table, which definitely has Integrated Change Tracking turned on. Further, it's the exact same database and table that I'm connected to and indexing from in my existing Azure Search service.
The issue is that when I get to the "Create an Indexer" step, I get the message that says "Consider enabling integrated change tracking on your database..." In other words, it doesn't think I have change tracking on this database. I definitely do, and our other Azure Search Service recognizes this just fine on the exact same database.
Any idea what's going on here? How can I get this data source to be recognized as having change tracking turned on, and why isn't that happening when everything works as expected in our existing Search service with an identical setup?
We will investigate. In the meantime, please try creating your datasource and indexer programmatically using the REST API or .NET SDK.
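For anyone trying the REST API route, a sketch of what the data source creation call can look like. The service name, admin key, connection string, and table name are all placeholders, and the api-version may differ for you:

// Sketch using fetch (Node 18+ or a fetch polyfill); all credentials are placeholders.
const res = await fetch(
  "https://<service-name>.search.windows.net/datasources?api-version=2017-11-11",
  {
    method: "POST",
    headers: { "Content-Type": "application/json", "api-key": "<admin-key>" },
    body: JSON.stringify({
      name: "sql-datasource",
      type: "azuresql",
      credentials: { connectionString: "<azure-sql-connection-string>" },
      container: { name: "<table-name>" },
      // This is what integrated change tracking maps to in the REST API:
      dataChangeDetectionPolicy: {
        "@odata.type": "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
      }
    })
  }
);
if (!res.ok) console.error(await res.text());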
When I was experiencing this problem, I tried creating the search service via "Add Azure Search" in Azure portal > SQL database.
Using that wizard I was able to create the search data source, index & indexer.
Update: I opened a ticket with Azure support, and when trying to get more information to provide to them, I tried to reproduce the problem (create a data source via REST API), but the expected failure did not happen ("Change tracking not enabled for table..." despite it being enabled). This makes me think there was something wrong with internal Azure code that was fixed in the meantime.
I'm having a problem whenever I try to create a new Cosmos DB database through Azure Portal. I'm using a free subscription so I do not have access to CosmosDB support.
Basically, all values seem to be valid but after creation everything fails. I'm doing the following:
Input a unique ID with no spaces, uppercase characters, or symbols.
Choose "Azure Table" as the API type.
Use my "Free Trial" subscription.
Create a new resource group (again with no spaces, uppercase characters, or symbols).
Choose a server in either "South UK" or "North Europe" (tried both on different attempts).
Whenever I click finish, after some seconds, I get the following message:
Invalid capability EnableTable. ActivityId: ...
Microsoft.Azure.Documents.Common/1.10.106.1 (Code: BadRequest)
Error Message:
{ "code": "BadRequest", "message": "Invalid capability
EnableTable.\r\nActivityId: 9cb0e2eb-3b62-4bda-a0f9-e3945eb8148b,
Microsoft.Azure.Documents.Common/1.19.106.1" }
I also tried Edge and Chrome, and neither works. I find it funny that Microsoft says we can try Azure Cosmos DB for free when in fact we can't, because creation fails and they offer no support on free subscriptions.
You need to use the URL below and select the required service to try Cosmos DB for free.
I just created a Cosmos SQL DB for free this way.
https://azure.microsoft.com/en-us/try/cosmosdb/
Problem Solved
Not really sure if this can be considered an answer, but my problem somehow solved itself. Apparently the solution is to keep trying multiple times until it works.
If it helps, the only different things I did this time were:
First create an Azure Cosmos MongoDB, including creating a new resource group.
Then create the Azure Table DB using the existing resource group from the Mongo DB.
It worked.
Not sure if it was this, an Azure error, or a subscription issue; since I created my account today, it might not have been properly configured yet.
I am looking for an alternative to symmetric key encryption for safely storing sensitive data in a Microsoft SQL database. The reason: a few days ago, during the night (at 3 am), the database responded to my status call, which is used for health checks of the backend, with this error:
A severe error occurred on the current command. The results, if any, should be discarded.
(The call I am using for the health check only hits my REST API, goes through the web service to the database, does a SELECT COUNT(*) FROM Member, and returns the count.)
After that error, every API call that used sensitive data from the database returned:
Please create a master key in the database or open the master key in the session before performing this operation.
My monitoring service said the backend was automatically up again after 2 minutes, but the master key was not working anymore. I fixed it the morning after with the following commands:
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'password';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
But in the meantime the backend was not working correctly, so the failover didn't really do its job (I had to do something manually to get everything working again).
So I am trying to store sensitive data in the database in a simple way (it must be decryptable again), with a failover that works without any manual intervention.
Thanks for your input!
What I think I'm reading is that you have some sort of HA technology in play (e.g. availability groups). If that's the case, care needs to be taken to ensure that both sides of the topology can open the database master key.
Fortunately, it's fairly easy to do that. You can back up and restore service master keys (SMK). So you'll back up the SMK from the secondary node and restore it to the primary node; SQL Server will decrypt anything currently encrypted with the old key and re-encrypt it with the new one.
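The T-SQL for that is roughly the following (a sketch; the file path and password are placeholders, and both nodes need access to the backup file):

-- On the node whose service master key you want to copy:
BACKUP SERVICE MASTER KEY TO FILE = '\\share\keys\smk.bak'
    ENCRYPTION BY PASSWORD = '<strong password>';
-- On the other node:
RESTORE SERVICE MASTER KEY FROM FILE = '\\share\keys\smk.bak'
    DECRYPTION BY PASSWORD = '<strong password>';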
I have a working MVC4 application. We recently decided to give LINQPad a try for testing and scripting stuff. While I can get it to access our databases directly, when I try to get it to connect using our backend EfDbContext, it reads the DLL correctly and shows all of the POCOs, but every query results in:
Introducing FOREIGN KEY constraint 'FK_dbo.Seekers_dbo.Companies_CompanyID' on table 'Seekers' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.
Could not create constraint. See previous errors.
I can see that a UserQuery database is created each time I try to get the top x rows of any table. I have LINQPad pointing to the Web.config that holds the connection string for the DB connection. When I put the connection string in the backend's App.config and point to that instead, I get the same error.
Using the Profiler, I can see that when I set up a connection and test it, LINQPad queries the database it is supposed to. It's only when I run something like X.Take(100) that I get issues.
The problem was that the original POCOs weren't set up correctly, and testing hadn't caught the issues. After assigning the correct foreign key annotations, it appears to be working.
UPDATE:
The fixes that made it "work" with LINQPad actually broke a lot of things, so I'm reverting to how we had it and assuming there's an issue with LINQPad not querying the correct database as expected.