I am currently setting up our second Azure Search service. I am making it identical to our existing one, just in a different region.
I'm using the portal Import Data function to set up my index. For the Data Source, I have configured it to connect to my Azure SQL Database and table, which definitely has Integrated Change Tracking turned on. Further, it's the exact same database and table that I'm connected to and indexing from in my existing Azure Search service.
The issue is that when I get to the "Create an Indexer" step, I get the message that says "Consider enabling integrated change tracking on your database..." In other words, it doesn't think I have change tracking on this database. I definitely do, and our other Azure Search Service recognizes this just fine on the exact same database.
Any idea what's going on here? How can I get this Data Source to be recognized as having Change Tracking turned on, and why isn't it being recognized when everything works as expected in our existing Search service with an identical setup?
We will investigate. In the meantime, please try creating your datasource and indexer programmatically using the REST API or .NET SDK.
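For reference, a minimal sketch of what that REST call might look like (the service name, admin key, connection string, table name, and api-version below are all placeholders to substitute with your own):

POST https://[service-name].search.windows.net/datasources?api-version=2017-11-11
Content-Type: application/json
api-key: [admin-key]

{
  "name": "my-sql-datasource",
  "type": "azuresql",
  "credentials": { "connectionString": "Server=tcp:myserver.database.windows.net,1433;Database=MyDatabase;User ID=myuser;Password=[password]" },
  "container": { "name": "MyTable" },
  "dataChangeDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
  }
}

Specifying SqlIntegratedChangeTrackingPolicy explicitly tells the service to use the database's integrated change tracking rather than relying on the portal wizard to detect it.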
When I was experiencing this problem, I tried creating the search service via "Add Azure Search" in Azure portal > SQL database.
Using that wizard I was able to create the search data source, index & indexer.
Update: I opened a ticket with Azure support, and while gathering more information for them I tried to reproduce the problem (creating a data source via the REST API), but the expected failure ("Change tracking not enabled for table...", despite it being enabled) did not happen. This makes me think something was wrong in internal Azure code and has since been fixed.
We use Azure SQL Database, and therefore had to jump through some hoops to get cross-database queries set up. We achieved this following this great article: https://techcommunity.microsoft.com/t5/azure-database-support-blog/cross-database-query-in-azure-sql-database/ba-p/369126 Things are working great for most of our databases.
The problem comes in for one of our databases, which is read-only. It's read-only because its content is synced from another Azure SQL server via the Geo-Replication feature in Azure SQL Database. When attempting to run the statement GRANT SELECT ON [RemoteTable] TO RemoteLogger as seen in the linked article, I of course get the error "Failed to update because the database is read-only."
I have been trying to come up with a workaround for this. It appears user permissions are one of the things that do NOT sync as part of the geo-replication, as I've created this user and granted the SELECT permission on the origin database, but it doesn't carry over.
Has anyone run into this or something similar and found a workaround/solution? Is it safe/feasible to temporarily set the database to read/write, update the permission, then put it back to read-only? I don't know if this is even possible; a colleague told me they think it will throw an error along the lines of "this database can't be set to read/write because it's syncing from another database..."
I figured out a work-around: Create a remote connection to the database on the ORIGIN server. So simple, yet it escaped me until now. Everything working great now.
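For anyone else hitting this, a rough sketch of that setup with elastic query (the credential, data source, and column names below are made up and must be adapted); the key point is that LOCATION targets the origin (primary) server rather than the read-only secondary:

-- Run these in the database that needs to query the remote table.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL RemoteCredential
WITH IDENTITY = 'RemoteLogger', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE OriginServerSource
WITH (
    TYPE = RDBMS,
    LOCATION = 'origin-server.database.windows.net', -- the ORIGIN server, not the geo-replica
    DATABASE_NAME = 'OriginDatabase',
    CREDENTIAL = RemoteCredential
);

-- Column definitions must match the remote table.
CREATE EXTERNAL TABLE [RemoteTable] (
    Id INT NOT NULL,
    Message NVARCHAR(MAX)
)
WITH ( DATA_SOURCE = OriginServerSource );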
I have been using SQL Server Reporting Services on many servers but on this particular one when I try to open the url to create subscriptions I get the following error:
HTTP Error 500.24 - Internal Server Error An ASP.NET setting has been detected that does not apply in Integrated managed pipeline mode
Any advice would be greatly appreciated; please advise if more information is required.
One option would be to simply switch your Managed Pipeline mode from Integrated to Classic in IIS.
The steps would be as follows.
Start the Internet Information Services (IIS) Manager from the Start menu on the Windows machine.
Click on Application Pools in the left-hand menu and select the application pool your site uses from the middle pane.
Right-click it and choose "Basic Settings..." (or "Advanced Settings...").
Change Managed Pipeline Mode from "Integrated" to "Classic" and press OK.
You will then see the Managed Pipeline column for that pool change from Integrated to Classic.
This should at least get rid of the error message, without diagnosing the cause (which would require a lot more information).
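If you prefer to script the same change, appcmd can do it from an elevated command prompt (substitute your actual application pool name; "ReportServer" here is just a guess):

%windir%\system32\inetsrv\appcmd.exe set apppool "ReportServer" /managedPipelineMode:Classic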
Issue resolved: I had to create the virtual directory for the Report Manager URL. Once I added the virtual directory name and clicked Apply, Report Manager opened in IE.
I'm having a problem whenever I try to create a new Cosmos DB database through Azure Portal. I'm using a free subscription so I do not have access to CosmosDB support.
Basically, all values seem to be valid but after creation everything fails. I'm doing the following:
Input a unique ID with no spaces, uppercase letters, or symbols.
Choose "Azure Table" as the API type.
Use my "Free Trial" subscription.
Create a new resource group (again with no spaces, uppercase letters, or symbols).
Choose a server in either "UK South" or "North Europe" (tried both on different tries).
Whenever I click finish, after some seconds, I get the following message:
Invalid capability EnableTable. ActivityId: ...
Microsoft.Azure.Documents.Common/1.10.106.1 (Code: BadRequest)
Error Message:
{ "code": "BadRequest", "message": "Invalid capability
EnableTable.\r\nActivityId: 9cb0e2eb-3b62-4bda-a0f9-e3945eb8148b,
Microsoft.Azure.Documents.Common/1.19.106.1" }
I also tried Edge and Chrome, and neither works. I find it funny that Microsoft says we can try Azure Cosmos DB for free, but in fact we can't, because creation fails and they offer no support for free accounts.
You need to use the URL below and select the required service to try Cosmos DB for free.
I just created one Cosmos SQL DB for free.
https://azure.microsoft.com/en-us/try/cosmosdb/
Problem Solved
Not really sure if this can be considered an answer, but my problem has somehow solved itself. Apparently the solution is to keep trying multiple times until it works.
If it helps, the only thing I did differently this time was:
First create an Azure Cosmos DB account with the MongoDB API, including creating a new resource group.
Then create the Azure Table API account using the existing resource group from the MongoDB account.
It worked.
Not sure if it was this, an Azure error, or a subscription issue; since I created my account today, it may not have been properly configured yet.
I'm using the AWS Toolkit in Visual Studio 2013 to attempt to launch a new instance on Amazon RDS. I get through the wizard for creating the new instance and after clicking finish, there is a delay, and then a message appears saying:
Error launching DB instance: DB Security Groups can only be associated with VPC DB Instances using API version 2012-01-15 through 2012-09-17.
Launching different types of instances (SQL Server SE vs MySQL) doesn't seem to help, nor does selecting different versions of the platforms (SQL Server 2008 vs 2012). The only thing that gets it to go through is unchecking the box for "default" in the DB Security Groups area. However, I feel like something is going on here that shouldn't be happening.
Can anyone explain why this is happening and how I can resolve it other than by not setting a default security group? Thank you.
If you created your AWS account recently, you will be using a VPC by default.
It sounds like the API the plugin is trying to use hasn't been updated. The latest version is 1.5.6, and looking at the history it seems like some of these features were added in 1.5.0.
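As a workaround while the toolkit catches up, you could create the instance with the up-to-date CLI, passing VPC security groups instead of DB security groups. A sketch, where every identifier and credential is a placeholder:

aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --db-instance-class db.t2.micro ^
    --engine sqlserver-ex ^
    --allocated-storage 20 ^
    --master-username admin ^
    --master-user-password <password> ^
    --vpc-security-group-ids sg-0123456789abcdef0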
I finally solved it! Since I couldn't use the API version that the VS 2013 plugin uses, I had to manually add my IP to the Security Group created for my Elastic Beanstalk (a CLI equivalent is sketched after these steps).
Go to the EC2 console's Security Groups section.
Find the one whose description matches your Beanstalk environment (e.g.: Security Group created for Beanstalk Environment to give access to RDS instances).
Hit Inbound, then Edit, and add a new rule for All Traffic (I guess HTTP should be enough, but just in case).
In Source, select My IP and Save.
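The same rule can be added from the CLI; a sketch with placeholder values (the group ID and IP are made up, and I've narrowed it to the SQL Server port 1433 rather than All Traffic):

aws ec2 authorize-security-group-ingress ^
    --group-id sg-0123456789abcdef0 ^
    --protocol tcp ^
    --port 1433 ^
    --cidr 203.0.113.25/32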
I'm using SQL Server 2008 (without an application server or anything).
The number of users can be up to 1,000. Windows Authentication is used.
The question is:
How should I handle modes, so that:
- some users will be allowed to work in read-only mode
- some users won't have access to the DB for some time
My ideas so far:
1. Use a table with a mode ID for every group of users that will work the same way. On Form Load, the application will query that table for the mode ID.
2. Use triggers on the tables that must work according to that mode. The trigger will query the mode value and reject the operation if access is closed or the mode is read-only.
I know these are not the best solutions, that's why I'm asking for your advice.
There's one more point.
If the mode is changed to "access-is-closed" for a group of users, that group must not be able to query the DB from that moment on.
With the first solution I described, that won't work, because a user can already be in the application at that moment, so no Form Load event will fire. How can I do this?
Is there any optimal solution?
Thank you. Any help would be appreciated.
It depends somewhat on how your app interacts with the server, but for number 1, why not just use the built-in role/user permissions system in SQL Server?
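For example, a minimal sketch of that approach with a database role (role, schema, and login names below are placeholders; on SQL Server 2008, sp_addrolemember is the way to add members):

-- A role whose members may read but not modify data in the dbo schema.
CREATE ROLE ReadOnlyUsers;
GRANT SELECT ON SCHEMA::dbo TO ReadOnlyUsers;
DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO ReadOnlyUsers;

-- Map a Windows login into the database and add it to the role.
CREATE USER [DOMAIN\SomeUser] FOR LOGIN [DOMAIN\SomeUser];
EXEC sp_addrolemember 'ReadOnlyUsers', 'DOMAIN\SomeUser';

Because DENY overrides GRANT, members can't modify data even if another role grants it.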
For number 2, as you're using Windows Authentication, you can restrict logon hours in Active Directory on a per-account/OU basis; this should prevent them from logging on to SQL Server.
You could also do it via logon triggers, which, unlike the AD approach, would not also block access to other domain resources.
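A sketch of such a logon trigger, assuming a hypothetical UserModes table keyed by login name. Note that a logon trigger only blocks new connections; existing sessions would still have to be killed, and a buggy trigger can lock everyone out, so test carefully:

USE master;
GO
-- EXECUTE AS gives the trigger rights to read the mode table.
CREATE TRIGGER DenyClosedLogins
ON ALL SERVER
WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
    IF EXISTS (SELECT 1
               FROM MyAppDb.dbo.UserModes
               WHERE LoginName = ORIGINAL_LOGIN()
                 AND Mode = 'Closed')
        ROLLBACK; -- cancels the connection attempt
END;
GO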