Azure Cosmos DB Implementation Failure

I'm having a problem whenever I try to create a new Cosmos DB database through the Azure Portal. I'm using a free subscription, so I don't have access to Cosmos DB support.
Basically, all the values seem to be valid, but creation fails every time. I'm doing the following:
Enter a unique ID with no spaces, uppercase letters, or symbols.
Choose "Azure Table" as the API type.
Use my "Free Trial" subscription.
Create a new resource group (again with no spaces, uppercase letters, or symbols).
Choose a region, either "UK South" or "North Europe" (I tried both on different attempts).
Whenever I click Finish, after a few seconds, I get the following message:
Invalid capability EnableTable. ActivityId: ...
Microsoft.Azure.Documents.Common/1.10.106.1 (Code: BadRequest)
Error Message:
{ "code": "BadRequest", "message": "Invalid capability
EnableTable.\r\nActivityId: 9cb0e2eb-3b62-4bda-a0f9-e3945eb8148b,
Microsoft.Azure.Documents.Common/1.19.106.1" }
I also tried Edge and Chrome, and neither works. I find it funny that Microsoft says we can try Azure Cosmos DB for free, when in fact we can't, because creation fails and there is no support for free subscriptions.

To try Cosmos DB for free, use the URL below and select the service you need.
I just created a Cosmos SQL DB for free this way.
https://azure.microsoft.com/en-us/try/cosmosdb/

Problem Solved
Not really sure if this can be considered an answer, but my problem somehow solved itself. Apparently the solution is to keep trying until it works.
If it helps, the only thing I did differently this time was:
First create an Azure Cosmos DB account with the MongoDB API, including a new resource group.
Then create the Azure Table API account using the existing resource group from the MongoDB account.
It worked.
I'm not sure whether it was this, an Azure error, or a subscription issue; I created my account today, so it may not have been fully provisioned yet.
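As a side note, the portal can be taken out of the loop entirely. Below is a minimal sketch using the azure-mgmt-cosmosdb Python SDK; the subscription, resource group, and account names are placeholders, and the call shapes should be verified against the current SDK:

# Hypothetical sketch: request a Cosmos DB account with the Table API
# capability programmatically. All names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

# "EnableTable" is the capability the portal was rejecting; here it is
# stated explicitly in the account definition.
poller = client.database_accounts.begin_create_or_update(
    "my-resource-group",
    "my-table-account",
    {
        "location": "northeurope",
        "locations": [{"location_name": "northeurope", "failover_priority": 0}],
        "database_account_offer_type": "Standard",
        "capabilities": [{"name": "EnableTable"}],
    },
)
account = poller.result()  # blocks until provisioning completes or fails
print(account.document_endpoint)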

Related

Azure SQL Database - change user permissions on a read-only database for cross-database queries

We use Azure SQL Database and therefore had to jump through some hoops to get cross-database queries set up. We achieved this by following this great article: https://techcommunity.microsoft.com/t5/azure-database-support-blog/cross-database-query-in-azure-sql-database/ba-p/369126 Things are working great for most of our databases.
The problem comes in for one of our databases, which is read-only. It's read-only because its content is synced from another Azure SQL server via the Geo-Replication feature of Azure SQL Database. When attempting to run the query GRANT SELECT ON [RemoteTable] TO RemoteLogger as shown in the linked article, I of course get the error "Failed to update because the database is read-only."
I have been trying to come up with a workaround for this. It appears user permissions are one of the things that do NOT sync as part of geo-replication: I've created this user and granted the SELECT permission on the origin database, but it doesn't carry over.
Has anyone run into this or something similar and found a workaround or solution? Is it safe and feasible to temporarily set the database to read/write, update the permission, then set it back to read-only? I don't know if this is even possible; a colleague told me they think it will throw an error along the lines of "this database can't be set to read/write because it's syncing from another database."
I figured out a workaround: create the remote connection against the database on the ORIGIN server. So simple, yet it escaped me until now. Everything is working great now; a sketch of what this looks like follows below.
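To make the workaround concrete, here is a hedged sketch of the elastic query setup from the linked article, with LOCATION pointed at the origin (writable) server instead of the replica. All server, database, table, and credential names are placeholders:

# Hypothetical sketch: define the external data source against the ORIGIN
# server, where the GRANT from the article can actually be issued.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=local-server.database.windows.net;"
    "DATABASE=LocalDb;UID=sqladmin;PWD=<password>"
)
conn.autocommit = True  # run the DDL statements outside a transaction
cur = conn.cursor()

# Assumes a database master key already exists (CREATE MASTER KEY ...).
cur.execute("""
CREATE DATABASE SCOPED CREDENTIAL RemoteCred
WITH IDENTITY = 'RemoteLogger', SECRET = '<remote-password>';
""")
cur.execute("""
CREATE EXTERNAL DATA SOURCE OriginSource WITH (
    TYPE = RDBMS,
    LOCATION = 'origin-server.database.windows.net',  -- origin, not the replica
    DATABASE_NAME = 'RemoteDb',
    CREDENTIAL = RemoteCred
);
""")
# Placeholder columns; they must mirror the remote table's schema.
cur.execute("""
CREATE EXTERNAL TABLE dbo.RemoteTable (Id INT, Payload NVARCHAR(100))
WITH (DATA_SOURCE = OriginSource);
""")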

Error 404 when exporting SQL database in Azure

When I try to export my SQL database through the Azure Portal, I get a 404 error (Entity not found to invoke export).
The weird thing is that a month ago it worked perfectly. I even wrote a short manual on how to do it while I was exporting it. A coworker found the issue when trying to follow it herself.
I've seen somewhere that "the database name is case sensitive when using az sql db export". It's strange, because we haven't changed anything, but I've looked at the activity log of the DB and compared the entry for the last successful export with the failing ones, and the references to the DB in the JSON of the activity log do have different casing (the final "B" of the database name).
I can also see that the database name appears with different casing in different places. If I go to the database itself, the last "B" is uppercase, but if I go to the SQL Server, it's lowercase. If I connect to the database from SSMS, it's lowercase too. I guess its correct name uses a lowercase "b".
Anyway, I'm pretty sure we haven't changed it. In fact, the screenshots in the "manual" I wrote a month ago show the same case mismatch.
Does anyone know how to fix this issue?
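For reference, the CLI route hits the same case sensitivity. Here is a sketch of the export call (all resource names and secrets are placeholders), where --name has to match the casing the server reports:

# Hypothetical sketch: run the export with the database name cased exactly
# as it appears under the SQL Server resource (lowercase "b" in our case).
import subprocess

subprocess.run(
    [
        "az", "sql", "db", "export",
        "--resource-group", "my-rg",
        "--server", "my-sql-server",
        "--name", "MyDatabasedb",  # casing must match the server's listing
        "--admin-user", "sqladmin",
        "--admin-password", "<password>",
        "--storage-key-type", "StorageAccessKey",
        "--storage-key", "<storage-account-key>",
        "--storage-uri", "https://mystorage.blob.core.windows.net/backups/mydb.bacpac",
    ],
    check=True,  # raise if the export request is rejected
)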
I think the error happened in the Azure backend, because you did nothing to the database.
We can't fix this error for you; only Azure support can. In my experience, Azure still has some bugs.
You can ask Azure support through the portal; you can follow my example:
New support request:
Basics: fill in the basic details of the issue.
Azure will suggest some solutions you can reference; then click Next: Details and provide more details or an error screenshot for Azure.
Create the request, and wait for an Azure support engineer to contact you by email or phone.
Hope this helps.

Azure Search not recognizing Integrated Change Tracking on SQL Server Database

I am currently setting up our second Azure Search service. I am making it identical to our existing one, just in a different region.
I'm using the portal Import Data function to set up my index. For the Data Source, I have configured it to connect to my Azure SQL Database and table, which definitely has Integrated Change Tracking turned on. Further, it's the exact same database and table that I'm connected to and indexing from in my existing Azure Search service.
The issue is that when I get to the "Create an Indexer" step, I get the message that says "Consider enabling integrated change tracking on your database..." In other words, it doesn't think I have change tracking on this database. I definitely do, and our other Azure Search Service recognizes this just fine on the exact same database.
Any idea what's going on here? How can I get this data source recognized as having change tracking turned on, and why isn't it recognized when everything works as expected in our existing Search service with an identical setup?
We will investigate. In the meantime, please try creating your datasource and indexer programmatically using the REST API or .NET SDK.
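For example, here is a sketch of the data source creation via the REST API, with the SQL integrated change tracking policy set explicitly; the service name, admin key, connection string, and table name are placeholders:

# Hedged sketch: create the Azure Search data source with the integrated
# change tracking policy spelled out, bypassing the portal wizard.
import requests

endpoint = "https://<service>.search.windows.net/datasources?api-version=2019-05-06"
headers = {"Content-Type": "application/json", "api-key": "<admin-key>"}
body = {
    "name": "my-sql-datasource",
    "type": "azuresql",
    "credentials": {"connectionString": "<azure-sql-connection-string>"},
    "container": {"name": "MyTable"},
    "dataChangeDetectionPolicy": {
        "@odata.type": "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
    },
}
resp = requests.post(endpoint, headers=headers, json=body)
resp.raise_for_status()  # surfaces the 400 if the policy is rejected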
When I was experiencing this problem, I tried creating the search service via "Add Azure Search" in Azure portal > SQL database.
Using that wizard I was able to create the search data source, index & indexer.
Update: I opened a ticket with Azure support, and while gathering more information for them, I tried to reproduce the problem (creating a data source via the REST API), but the expected failure ("Change tracking not enabled for table..." despite it being enabled) did not occur. This makes me think something was wrong in internal Azure code that was fixed in the meantime.

What SQL user is used by TFS to send alerts?

We are running into a few issues with our TFS installation (TFS 2013 Update 4, SQL 2014 Standard) as a result of email alerts. Most notably, Work Items cannot be created, because this triggers an email.
Any time a process or user attempts to create a Work Item, the error
TF30040: The database is not correctly configured. Contact your Team Foundation Server administrator.
is received. Further, when I check the Event Viewer on the server, I can see the error and it reports that the inner exception is:
Exception Message: The EXECUTE permission was denied on the object 'sp_send_dbmail', database 'msdb', schema 'dbo'. (type SqlException)
I have worked with the DBA, and we have enabled email alerts on the server. We have verified that the alerts work in general by using the test button in the administration console. I can also set up a check-in alert through the web interface and receive those alerts without issue. This seems to specifically affect Work Item creation alerts (which apparently are automatically and irrevocably enabled).
Presumably, we could correct this by granting the appropriate permissions on that stored procedure. To do so, we need to know which user to grant them to. So far we have tried granting execute permissions to my AD user, the service account used by the build service, and the Network Service account (which appears to be the TFS service account).
There is no indication in any error message of which user is executing that procedure. So, my question: what SQL user is used to send alerts when creating Work Items?
Edit:
For the record, this started working of its own accord. On Monday we decided to call Microsoft to get it fixed. Before that happened, failed builds magically created some work items (on Tuesday, a full day after we had given up), and we are now able to create work items. Everyone involved says they didn't change anything. We are baffled, but in a good way.
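For completeness, the grant we were attempting against each candidate account looked like the sketch below. It targets msdb, not the TFS databases, and the login is a placeholder, since identifying the right principal was exactly the open question:

# Hypothetical sketch: grant EXECUTE on msdb.dbo.sp_send_dbmail to a
# candidate service account. Server and login names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tfs-sql-server;DATABASE=msdb;Trusted_Connection=yes"
)
conn.autocommit = True
conn.cursor().execute(
    "GRANT EXECUTE ON dbo.sp_send_dbmail TO [DOMAIN\\TfsService];"
)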
I'm going to advise that a DBA should not be making changes to the TFS databases. I suggest opening a ticket with Microsoft and getting assistance from the product support group.

How can I resolve error when trying to launch instance on Amazon RDS?

I'm using the AWS Toolkit in Visual Studio 2013 to attempt to launch a new instance on Amazon RDS. I get through the wizard for creating the new instance and after clicking finish, there is a delay, and then a message appears saying:
Error launching DB instance: DB Security Groups can only be associated with VPC DB Instances using API version 2012-01-15 through 2012-09-17.
Launching different types of instances (SQL Server SE vs MySQL) doesn't seem to help, nor does selecting different versions of the platforms (SQL Server 2008 vs 2012). The only thing that gets it to go through is unchecking the box for "default" in the DB Security Groups area. However, I feel like something is going on here that shouldn't be happening.
Can anyone explain why this is happening and how I can resolve it other than by not setting a default security group? Thank you.
If you created your AWS account recently, you will be using a VPC by default.
It sounds like the API the plugin is trying to use hasn't been updated. The latest version is 1.5.6, and looking at the history it seems like some of these features were added in 1.5.0.
I finally solved it! Since I couldn't use the API that the VS 2013 plugin relies on, I had to manually add my IP to the security group created for my Elastic Beanstalk environment (a programmatic equivalent is sketched after the steps below):
Go to the console, to EC2's Security Groups configuration.
Find the one whose description matches your Beanstalk environment (e.g., "Security Group created for Beanstalk Environment to give access to RDS instances").
Hit Inbound, then Edit, and add a new rule for All Traffic (HTTP alone should probably be enough, but just in case).
In Source, select My IP and Save.
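A rough boto3 equivalent of those console steps, where the region, group ID, and IP address are placeholders:

# Hedged sketch: authorize an inbound rule on the security group that the
# Elastic Beanstalk environment created for RDS access.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "-1",  # all traffic, matching the console steps
        "IpRanges": [{
            "CidrIp": "203.0.113.5/32",  # your IP
            "Description": "My IP",
        }],
    }],
)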
