Azure Cognitive Search - Connect to SQL

I have an Azure Cognitive Search service and am trying to connect it to an Azure SQL table.
The Azure SQL server has public access disabled and a private link created.
I have also created a private link for the Azure Cognitive Search service itself.
The search service has a managed identity enabled, and that identity has been granted access to the SQL database and server.
Now, when I try to create a data source, the error returned is
Failed to create data source "search-ds", error: "Failed to fetch"
What am I doing wrong?
Thanks in advance.

This documentation topic gives the most comprehensive walkthrough of using private links with Azure Cognitive Search indexers: https://learn.microsoft.com/en-us/azure/search/search-indexer-howto-access-private?tabs=portal-create%2Cportal-status
It has a troubleshooting section near the end.
One thing it mentions that seems easy to miss is configuring the indexer to run in the private execution environment:
{
  "name": "indexer",
  "dataSourceName": "your-datasource",
  "targetIndexName": "index",
  "parameters": {
    "configuration": {
      "executionEnvironment": "private"
    }
  },
  "fieldMappings": []
}
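Another easy-to-miss step from that walkthrough: the private endpoint the indexer uses must be a shared private link resource created on the search service itself; a private endpoint created directly on the SQL server is not enough. As a rough sketch of the management API call (the subscription, resource group, and server names are placeholders; the groupId for Azure SQL is sqlServer):
PUT https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Search/searchServices/<search-service>/sharedPrivateLinkResources/sql-spl?api-version=2020-08-01

{
  "properties": {
    "privateLinkResourceId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<sql-server>",
    "groupId": "sqlServer",
    "requestMessage": "Please approve this connection for the search indexer."
  }
}
The connection then has to be approved on the SQL server's private endpoint connections page before the indexer can use it.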
Besides the private link aspect, the regular advice also applies; a number of things can prevent an indexer from connecting to its data source. Here's another topic that covers other possible causes: https://learn.microsoft.com/en-us/azure/search/search-indexer-troubleshooting
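One more tip: the portal's generic "Failed to fetch" error often hides a more specific message, and creating the data source directly against the REST API usually surfaces it. A minimal sketch, assuming a system-assigned managed identity (the service, database, server, and table names are placeholders; the ResourceId-style connection string is what tells the indexer to authenticate with the managed identity):
PUT https://<search-service>.search.windows.net/datasources/search-ds?api-version=2020-06-30
Content-Type: application/json
api-key: <admin-api-key>

{
  "name": "search-ds",
  "type": "azuresql",
  "credentials": {
    "connectionString": "Database=<db-name>;ResourceId=/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<sql-server>;Connection Timeout=30;"
  },
  "container": { "name": "<table-name>" }
}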

Related

Debezium SQL Server Connector - "Couldn't obtain database name"

I'm trying to set up a Debezium SQL Server Connector against a SQL Server instance that is controlled by DBAs at my workplace. I've been able to start up Zookeeper and Kafka Server without issue, and Kafka Connect itself works with sample Connectors, but when attempting to start a Debezium SQL Server Connector instance I've been getting the error "Couldn't obtain database name".
[2022-07-12 16:36:04,269] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:117)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
Unable to connect. Check this and other connection properties. Error: Couldn't obtain database name
Here is my Debezium config:
name=Dbz-SqlServer-connector
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
database.hostname=MyDbHost
database.port=1433
database.user=MyUsername
database.password=MyPassword
database.dbname=MyDatabase
database.server.name=MyDbHost
table.include.list=dbo.CdcTest
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=dbhistory.CdcTest
I've tried this in a .properties file passed to a standalone Connect instance, and as a JSON POST to a distributed Connect instance. I have tried all of the same steps on both my local Windows machine and a Linux VM, with the same results.
Confluent and Docker are not options for me in this situation.
For the SQL Server login credentials, I am using a local account on the SQL Server instance that does have access to the database in question. I found the source code for Debezium's connectors on their GitHub and was able to locate that specific error message in the code:
private static final String GET_DATABASE_NAME = "SELECT name FROM sys.databases WHERE name = ?";
...
public String retrieveRealDatabaseName(String databaseName) {
    try {
        return prepareQueryAndMap(GET_DATABASE_NAME,
                ps -> ps.setString(1, databaseName),
                singleResultMapper(rs -> rs.getString(1), "Could not retrieve exactly one database name"));
    }
    catch (SQLException e) {
        throw new RuntimeException("Couldn't obtain database name", e);
    }
}
I'm not completely familiar with Java, but it appears that something is going wrong when the connector runs "SELECT name FROM sys.databases WHERE name = 'MyDatabase'". When I run this query against the database myself, logged in with the same account, it works just fine, so I'm really not sure where to go from here. It's fair to say that since I'm not in full control of the SQL Server environment I'm using, there may be permissions issues I'm not aware of, but from what I'm able to test it seems like it should be working.
I would greatly appreciate any help at all, whether just suggestions on settings/configs to check or a full-blown solution.
Thank you!
Update: I've built a simple console app to run that sys.databases query against MyDbHost's master database as the relevant account, and it works just fine, so I feel that confirms both my connection info and the account permissions are correct. It seems like this is an issue within the Debezium connector.
It turned out that my problem was a mistake in the connector's configuration. I misunderstood which specific pieces of data to put into database.hostname and database.server.name, and once I corrected those fields the connector worked.
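For anyone hitting the same confusion: database.hostname must be the actual network address of the SQL Server instance, while database.server.name is only a logical name Debezium uses as a prefix for Kafka topic names; it does not need to resolve to anything. A minimal sketch of the corrected configuration as a JSON POST to a distributed Connect instance (the host, logical name, and Connect URL below are placeholders):
POST http://localhost:8083/connectors
Content-Type: application/json

{
  "name": "Dbz-SqlServer-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "actual-sql-host.mycompany.com",
    "database.port": "1433",
    "database.user": "MyUsername",
    "database.password": "MyPassword",
    "database.dbname": "MyDatabase",
    "database.server.name": "MyLogicalServerName",
    "table.include.list": "dbo.CdcTest",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "dbhistory.CdcTest"
  }
}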

Can't connect Azure Search index to Snowflake database via Power Query

Error detecting index schema from datasource: "Data source type 'powerquery' is not supported"
Power Query connector support is currently in a gated public preview, so make sure you have requested and been granted access to the Power Query connectors before connecting. Once admitted, you will be given instructions on how to use them from the portal.
For instructions, please visit:
https://learn.microsoft.com/azure/search/search-how-to-index-power-query-data-sources

Azure Search not recognizing Integrated Change Tracking on SQL Server Database

I am currently setting up our second Azure Search service. I am making it identical to our existing one, just in a different region.
I'm using the portal's Import Data function to set up my index. For the data source, I have configured it to connect to my Azure SQL database and table, which definitely has integrated change tracking turned on. Further, it's the exact same database and table that I'm connected to and indexing from in my existing Azure Search service.
The issue is that when I get to the "Create an Indexer" step, I get the message that says "Consider enabling integrated change tracking on your database..." In other words, it doesn't think I have change tracking on this database. I definitely do, and our other Azure Search Service recognizes this just fine on the exact same database.
Any idea what's going on here? How can I get this Data Source to be recognized as having Change Tracking turned on, and why isn't it doing so when all is working as expected in our existing Search service with identical set up?
We will investigate. In the meantime, please try creating your datasource and indexer programmatically using the REST API or .NET SDK.
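For reference, a minimal sketch of what that data source creation could look like against the REST API, with integrated change tracking declared explicitly on the data source (the service name, connection string, and table name are placeholders, and the api-version is just one that supports this policy):
PUT https://<search-service>.search.windows.net/datasources/sql-datasource?api-version=2020-06-30
Content-Type: application/json
api-key: <admin-api-key>

{
  "name": "sql-datasource",
  "type": "azuresql",
  "credentials": { "connectionString": "<azure-sql-connection-string>" },
  "container": { "name": "<table-name>" },
  "dataChangeDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
  }
}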
When I was experiencing this problem, I tried creating the search service via "Add Azure Search" in Azure portal > SQL database.
Using that wizard I was able to create the search data source, index & indexer.
Update: I opened a ticket with Azure support, and while gathering more information for them I tried to reproduce the problem (creating a data source via the REST API), but the expected failure ("Change tracking not enabled for table..." despite it being enabled) did not happen. This makes me think something was wrong in internal Azure code that has since been fixed.

Azure Cosmos DB Implementation Failure

I'm having a problem whenever I try to create a new Cosmos DB database through the Azure portal. I'm on a free subscription, so I do not have access to Cosmos DB support.
Basically, all values seem to be valid, but creation fails every time. I'm doing the following:
Enter a unique ID with no spaces, uppercase letters, or symbols.
Choose "Azure Table" as the API type.
Use my "Free Trial" subscription.
Create a new resource group (again with no spaces, uppercase letters, or symbols).
Choose a server region, either "South UK" or "North Europe" (tried both on separate attempts).
Whenever I click finish, after a few seconds I get the following message:
Invalid capability EnableTable. ActivityId: ...
Microsoft.Azure.Documents.Common/1.10.106.1 (Code: BadRequest)
Error Message:
{ "code": "BadRequest", "message": "Invalid capability
EnableTable.\r\nActivityId: 9cb0e2eb-3b62-4bda-a0f9-e3945eb8148b,
Microsoft.Azure.Documents.Common/1.19.106.1" }
I also tried Edge and Chrome, and neither works. I find it funny that Microsoft says we can try Azure Cosmos DB for free, but in fact we can't, because creation fails and they offer no support for free accounts.
You need to use the URL below and select the required service to try Cosmos DB for free.
I just created a Cosmos SQL DB for free this way.
https://azure.microsoft.com/en-us/try/cosmosdb/
Problem Solved
Not really sure if this can be considered an answer, but my problem somehow solved itself. Apparently the solution is to keep trying multiple times until it works.
If it helps, the only thing I did differently this time was:
First create an Azure Cosmos MongoDB account, including a new resource group.
Then create the Azure Table DB using the existing resource group from the MongoDB account.
It worked.
Not sure if it was this, an Azure error, or a subscription issue; I created my account today, so it may not have been properly configured yet.
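For context on the error itself: "EnableTable" is the account capability the portal requests when you pick the Azure Table API, so the message suggests the resource provider rejected that capability for the subscription at that moment. A rough sketch of the underlying ARM request the portal issues (the subscription, resource group, account name, and api-version are all placeholders/assumptions):
PUT https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>?api-version=2021-04-15

{
  "location": "northeurope",
  "kind": "GlobalDocumentDB",
  "properties": {
    "databaseAccountOfferType": "Standard",
    "locations": [ { "locationName": "northeurope", "failoverPriority": 0 } ],
    "capabilities": [ { "name": "EnableTable" } ]
  }
}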

Can I use SignalR with a SQL Server backplane on an existing Entity Framework code-first database?

According to Scaleout with SQL Server, you can use SignalR.SqlServer to keep SignalR synced in a load-balanced setup. I have an existing MVC 5 website with its own database created using Entity Framework code-first. The article seems to use a dedicated database with Service Broker enabled and says not to modify the database.
So do I need a separate database for this? I know Entity Framework can be picky if the database schema doesn't match, and I worry that if I use the SignalR.SqlServer package with the existing database, the tables it creates will cause a context-changed error.
Also, can someone provide more information about using the Microsoft.AspNet.SignalR.SqlServer package? The article I linked doesn't give a ton of detail, and I don't know if I need to change anything in my hubs and groups or if it is all handled automatically.
You should be able to, though you'd likely want to keep your Entity Framework definitions separate from SignalR's tables. You can either put SignalR in a separate database or give the two separate schemas.
In terms of configuration, you'll need to make an addition to the Startup class of your web project:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Requires the Microsoft.AspNet.SignalR.SqlServer package.
        var sqlConnectionString = "connection string here";

        // Register the SQL Server backplane before mapping SignalR.
        GlobalHost.DependencyResolver.UseSqlServer(sqlConnectionString);

        this.ConfigureAuth(app);
        app.MapSignalR();
    }
}
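As for hubs and groups: the backplane sits underneath SignalR's message bus, so no changes to your hub or group code should be needed; once UseSqlServer is registered, messages are distributed across servers automatically.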
