Elasticsearch in Magento 2 Database

Without too much backstory:
I am trying to configure the Elasticsearch port inside the database.
However, I simply cannot find the table that holds the Elasticsearch properties.
Can anyone help me out here?
Going through the backend is not going to be a solution; it has to go through the database.

You can easily find the Catalog Search settings in the backend.
Stores->Configuration->Catalog->Catalog Search
Under the Catalog Search, you'll see the below settings.
Search Engine
Elasticsearch Server Hostname
Elasticsearch Server Port
and some other parameters too.
But if you wish to update the values from the database you can run the below SQL query in your DB tool or MySQL console.
SELECT * FROM `core_config_data` WHERE path LIKE '%catalog/search/%';
Then update the fields with the appropriate values.
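The update step above can be sketched end to end. This is a minimal illustration using SQLite as a stand-in for Magento's MySQL database; the config paths shown (`catalog/search/elasticsearch7_server_hostname` / `..._port`) depend on your Magento version and chosen engine, and the host/port values are examples. In a real shop you would run the same `UPDATE` against MySQL, then flush the config cache (`bin/magento cache:flush`) so the change takes effect.

```python
import sqlite3

# Stand-in for Magento's MySQL database: a minimal core_config_data table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE core_config_data (path TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO core_config_data (path, value) VALUES (?, ?)",
    [
        ("catalog/search/elasticsearch7_server_hostname", "localhost"),
        ("catalog/search/elasticsearch7_server_port", "9200"),
    ],
)

# Point the search engine at a different port (value is an example).
conn.execute(
    "UPDATE core_config_data SET value = ? "
    "WHERE path = 'catalog/search/elasticsearch7_server_port'",
    ("9201",),
)

# Same SELECT as above, to verify the change.
rows = dict(conn.execute(
    "SELECT path, value FROM core_config_data "
    "WHERE path LIKE 'catalog/search/%'"
))
print(rows["catalog/search/elasticsearch7_server_port"])  # -> 9201
```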
Happy Coding!

Related

Impact of integrating Elasticsearch with SQL Server

I want to use Elasticsearch for full-text searching in my SQL Server.
As I read, I can index my SQL Server database with Elasticsearch.
My question is about resource usage.
As I understand it, by indexing I must duplicate my entire SQL Server database,
and I must keep Elasticsearch and SQL Server in sync.
Am I wrong?
Thanks
Yes, you are right. I also maintain a SQL Server + Elasticsearch environment.
You need a separate instance of Elasticsearch running.
You will have to index / insert all your existing data from SQL Server into Elasticsearch. There are libraries for almost any language to easily insert data.
You will have to sync the two in order to keep the data up to date.
In theory you only need one instance / node of Elasticsearch running, but a full cluster is recommended for failover, etc.
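The initial "index all existing data" step above is mostly a data-transformation loop. Here is a minimal Python sketch of it: `fetch_rows()` is a hypothetical stand-in for a real SQL Server query (e.g. via `pyodbc`), and the action dictionaries use the format accepted by `elasticsearch.helpers.bulk()`; the table and field names are made up for illustration.

```python
def rows_to_bulk_actions(rows, index_name):
    """Turn (id, column-dict) rows into Elasticsearch bulk-index actions."""
    for row_id, doc in rows:
        yield {
            "_op_type": "index",   # index = insert or overwrite
            "_index": index_name,
            "_id": str(row_id),    # reuse the SQL primary key as the doc id
            "_source": doc,
        }

def fetch_rows():
    # Hypothetical stand-in for: SELECT id, title, body FROM articles
    return [
        (1, {"title": "hello", "body": "first row"}),
        (2, {"title": "world", "body": "second row"}),
    ]

actions = list(rows_to_bulk_actions(fetch_rows(), "articles"))
print(actions[0]["_id"], actions[0]["_source"]["title"])  # -> 1 hello
```

With a live cluster you would pass the generator straight to `elasticsearch.helpers.bulk(client, actions)` instead of materializing the list.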

Is it possible to use SQL Server SESSION CONTEXT with Azure elastic queries

I want to know if it's possible to share SQL Server SESSION CONTEXT variables between different Azure SQL databases using Elastic Query.
I searched the official documentation but couldn't find any information about whether this feature is available.
SESSION CONTEXT exists locally to a single server instance in SQL Server. (It's tied to a session). SQL Azure is built using SQL Server but there are some parts of the mapping that are opaque to customers (they can change based on circumstances such as what Edition you use or what version of the internal software we are using to deliver the service).
Elastic Queries is a feature to let you query from one database (source) to one or more other databases (target(s)). In such a model, you have a SQL Server session to the source database, and the elastic query has a separate connection/session to each other database being touched.
I think the question you are asking is "can I set the session context on the source connection/session and have it flow through to all the target connections when running queries there?" (That's my best guess - let me know if it is different). The answer today is "no" - the session variables do not flow from source to target as part of the elastic query. Also, since today elastic query is read-only, you can't use elastic query to set the session context individually on each target database connection/session as part of the operation.
In the future, we'll consider whether there is something like this we can do, but right now we don't have a committed timeline for something like this.
I hope this explains a bit how things work under the covers.
Sincerely,
Conor Cunningham
Architect, SQL

Azure Search from existing database

I have an existing SQL Server database that uses Full Text Search and Semantic search for the UI's primary searching capability. The tables used in the search contain around 1 million rows of data.
I'm looking at using Azure Search to replace this; however, my database relies upon the Full Text enabled tables for its core functionality. I'd like to use Azure Search for the "searching" but still have my current table structure in place to be able to edit records and display the detail record when something has been found.
My thoughts to implement this is to:
Create the Azure indexes
Push all of the searchable data from the Full Text enabled table in SQL Server to Azure Search
Azure Search to return the IDs of documents that match the search criteria
Query the existing database to fetch the rows that contain those IDs to display on the front end
When some data in the existing database changes, schedule an update in Azure Search to ensure the data stays in sync
Is this a good approach? How do hybrid implementations work where your existing data is in an on-prem database but you want to take advantage of Azure Search?
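One subtlety in steps 3-4 of the flow above: the database lookup must preserve the search engine's ranking, since SQL's `WHERE id IN (...)` gives no ordering guarantee. A minimal sketch, with hypothetical data standing in for the search response and the database result set:

```python
def merge_in_search_order(search_ids, db_rows_by_id):
    """Return full DB rows in the order the search engine ranked them.

    IDs with no matching row (e.g. a record deleted since the last
    index sync) are silently skipped.
    """
    return [db_rows_by_id[i] for i in search_ids if i in db_rows_by_id]

# Hypothetical data: search ranked document 7 above document 3.
search_ids = [7, 3]
db_rows = {
    3: {"id": 3, "title": "B"},
    7: {"id": 7, "title": "A"},
}
print([r["title"] for r in merge_in_search_order(search_ids, db_rows)])
# -> ['A', 'B']
```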
Overall, your approach seems reasonable. A couple of pointers that might be useful:
Azure SQL now has support for Full Text Search, so if moving to Azure SQL is an option for you and you still want to use Azure Search, you can use Azure SQL indexer. Or you can run SQL Server on IaaS VMs and configure the indexer using the instructions here.
With on-prem SQL Server, you might be able to use Azure Data Factory sink for Azure Search to sync data.
I actually just went through this process, almost exactly. Instead of SQL Server, we are using a different backend data store.
Foremost, we wrote an application to sync all existing data. Pretty simple.
For new documents being added, we made the choice to sync to Azure Search synchronously rather than async. We made this choice because we measured excellent performance when adding to and updating the index. 50-200 ms response time and no failures over hundreds of thousands of records. We couldn't justify the additional cost of building and maintaining workers, durable queues, etc. Caveat: Our web service is located in the same Azure region as the Azure Search instance. If your SQL Server is on-prem, you could experience longer latencies.
We ended up storing about 80% of each record in Azure Search. Obviously, the more you store in Azure Search, the less likely you'll have to perform a worst-case serial "double query."

Querying Azure Search from IIS or SQL Server?

It seems easy to apply an Azure Search index to a SQL Azure database. I understand that you query the search index using REST APIs and that the index then needs to be maintained/updated. Now, consider a web server running IIS, with an underlying SQL Server database.
What is considered best practice; querying and updating the index from the web server or from SQL Server, e.g. from within a CLR stored procedure? Are there specific design considerations here?
I work on Azure Search team and will try to help.
Querying and updating the index are two different use cases. Presumably, you want to query the index in response to user input in your Web app. (It is also possible that you have a SQL stored procedure with some complex logic that needs full text search, but that seems less likely.)
Updating the index can be done in multiple ways. If you can tolerate your index being updated at most every 5 minutes, use the Azure Search SQL indexer to automatically update the index for you - see http://azure.microsoft.com/en-us/documentation/articles/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers-2015-02-28/ for details on how to do it. That article describes creating indexers using the REST API, but we now have support for that in the .NET SDK as well.
OTOH, if you need hard real-time updates, you can update the search index at the same time you insert / update data in your SQL database.
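The hard real-time option above is a dual-write: the application writes to the database and the search index in the same operation. A minimal sketch, using in-memory dicts as hypothetical stand-ins for the SQL database and the Azure Search client:

```python
class DualWriter:
    """Write each record to the database and the search index together."""

    def __init__(self, db, index):
        self.db = db          # stands in for the SQL database
        self.index = index    # stands in for the Azure Search index

    def upsert(self, doc_id, doc):
        # In production, ordering and failure handling matter: if the
        # index write fails after the DB commit, you need a retry or
        # repair path so the two stores don't drift apart.
        self.db[doc_id] = doc
        # Index only the searchable subset of each record.
        self.index[doc_id] = {"id": doc_id, "title": doc["title"]}

db, index = {}, {}
writer = DualWriter(db, index)
writer.upsert(42, {"title": "hello", "body": "full record body"})
print(index[42]["title"])  # -> hello
```

Whether this beats an asynchronous queue depends on your latency budget and failure tolerance, as discussed in the earlier thread.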
Let me know if you have any follow up questions!
HTH,
Eugene

How to crawl links that are stored in a database through FAST Search Server 2010 for SharePoint

I am crawling a database table through FAST Search Server 2010 for SharePoint that has a column called "URLS". Every record in this column holds the URL of one web page, so there are many URLs in the table.
I want the crawler, while crawling the database, to also hit the links that are in the table. Is there any way to do so?
Thanks in advance
The possibility I see is:
Create a custom BDC connector in a Visual Studio project to crawl your database.
When you get the URLs from the database, do whatever you can in C# (in your connector) to get the information from the site behind each URL.
Expose the information you get from the URL's site through properties in your connector.
The connector properties can then be mapped to managed properties to access the information in your search.
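The fetch-and-expose steps above boil down to extracting fields from each page and surfacing them as crawled properties. A real connector would be C# in the BDC project; this sketch shows only the extraction logic (here, pulling the `<title>` out of a page), run on a fixed HTML string instead of a live HTTP request.

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collect the text inside the <title> element of an HTML page."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Fixed sample page; in the connector this would be the body fetched
# from each URL stored in the "URLS" column.
html = "<html><head><title>Example Page</title></head><body>...</body></html>"
parser = TitleParser()
parser.feed(html)
print(parser.title)  # -> Example Page
```

The extracted value is what you would expose as a connector property and then map to a managed property.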
Don't hesitate to ask if I'm not clear enough, or if you need examples.