I have two data sources, one in MS SQL and one in Azure Table storage. Both of them hold user data, just different information.
Are there any risks or possible issues if I create an indexer for each data source and sync them into a single index, so I can do one search instead of two?
I do not see an option that allows me to create an indexer only, so I wonder whether Microsoft is trying to prevent that.
When you have more than one indexer updating the same index, races between updates to the same document are possible since Azure Search does not yet support document versioning.
If each indexer updates different document fields, the order doesn't matter; if multiple indexers update the same field(s), the last writer wins (and you can't control which indexer will be the last writer).
If you use a deletion detection policy to remove documents from the index, then, to avoid possible races, a document should be marked deleted in both SQL and Azure Table.
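As a minimal illustration of the soft-delete approach on the SQL side (the table and column names here are hypothetical; the Azure Table rows would need an equivalent flag, and the data source's deletion detection policy would be pointed at that column):

    -- Hypothetical soft-delete flag watched by the deletion detection policy.
    ALTER TABLE dbo.Users ADD IsDeleted BIT NOT NULL DEFAULT 0;

    -- Instead of deleting the row, mark it deleted; the indexer can then
    -- remove the corresponding document from the index on its next run.
    UPDATE dbo.Users SET IsDeleted = 1 WHERE UserId = 42;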
I do not see an option that allows me to create an indexer only, so I wonder whether Microsoft is trying to prevent that.
I don't understand the above question.
We have a multi-tenant system where each tenant has their own database. Tenants also have the option to create their own data structures, which become their own tables in the database.
This causes an issue: when we run the Visual Studio schema compare, it always flags these tables as differences and we have to unselect them. This becomes a big problem because the schema compare has major performance issues when unselecting multiple differences.
These user-defined tables all follow a certain naming pattern, e.g. UserTable1, UserTable2, so what we really need is a way to perform the schema comparison while ignoring tables whose names contain a substring (in this example, UserTable). Is this possible, or is there a suitable alternative to the Visual Studio comparison tool?
For those coming here from Google looking for a solution to this:
All you have to do is right-click on the section and, ta-da, you can include or exclude all objects depending on their existing state.
In this case, section means the Delete, Change, and Add parent folders in the schema compare window.
I'm building an Assets search engine.
The data I need indexed for each asset is scattered across multiple tables in the SQL database.
Also, there are many events in the application that trigger updates to the asset's indexed fields (Draft, Rejected, Produced, ...).
I'm considering creating a new denormalized table in the SQL database that would exist solely for the Azure Search Index.
It would be an exact copy of the Azure Search Index fields.
The application would be responsible for filling and updating the SQL table, through various event handlers.
I could then use a scheduled Azure SQL indexer to automatically import the data into the Azure Search index.
PROS:
We are used to dealing with SQL table operations, so the application code remains standard; there is no need to learn the Azure Search API.
Both the transactional and the search model are updated in the same SQL transaction (atomic). The indexer then updates the index in an eventually consistent manner and handles the retry logic.
Built-in support for change detection with the SQL Integrated Change Tracking Policy (see the T-SQL sketch at the end of this post)
CONS:
Indexed data space usage is duplicated in the SQL database
Delayed Index update (minimum 5 minutes)
Do you see any other pros and cons?
[EDIT 2017-01-26]
There is another big PRO for our usage.
During development, we regularly add/rename/remove fields from the Azure index. In its current state, some schema modifications to an Azure Search index are not possible; we have to drop and re-create the index.
With a dedicated table containing the data, we simply issue a Reset to our indexer endpoint and the new index gets automatically re-populated.
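For reference, a minimal T-SQL sketch of what the denormalized search table, its integrated change tracking, and an atomic update from an event handler could look like (all database, table, and column names are hypothetical; the real columns would mirror the Azure Search index fields):

    -- Denormalized copy of the index fields, maintained by the application's event handlers.
    CREATE TABLE dbo.AssetSearch
    (
        AssetId     INT           NOT NULL PRIMARY KEY,
        Name        NVARCHAR(200) NOT NULL,
        Status      NVARCHAR(50)  NOT NULL,  -- Draft, Rejected, Produced, ...
        Description NVARCHAR(MAX) NULL
    );

    -- Integrated change tracking, which the Azure Search SQL indexer can use
    -- to pick up only the rows that changed since its last run.
    ALTER DATABASE MyAssetsDb SET CHANGE_TRACKING = ON
        (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
    ALTER TABLE dbo.AssetSearch ENABLE CHANGE_TRACKING;

    -- An event handler updates the transactional model and the search copy
    -- in the same transaction, so the two stay consistent.
    BEGIN TRANSACTION;
        UPDATE dbo.Assets      SET Status = 'Produced' WHERE AssetId = 42;
        UPDATE dbo.AssetSearch SET Status = 'Produced' WHERE AssetId = 42;
    COMMIT TRANSACTION;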
I have available to me a report that is generated in Microsoft SharePoint, and it holds the quantities for certain items. The reports can be exported as Excel documents, but if possible I would like to avoid that.
In my Access database I have all the same items but with additional data concerning special requests and item identification in the item's respective documentation folders.
I am looking for a way to have the select few columns that represent the quantities and some other factors automatically updated in my database.
How can I go about this? Is there specific terminology for what I am attempting to do? I am unable to find it on Google.
So to clarify ... you have item data exported from SharePoint and item data in Access and ideally you'd like to merge both and store the results in Access.
Or, to put it another way, you would like to complement the data in Access with the data from SharePoint.
If the database that powered the SharePoint report ran in Access as well, the word you are looking for is replication. You want to automatically replicate the data from one server/database to another.
Unfortunately I don't know of any software that replicates data to Access.
Your best bet would be to write a program that scheduled the running of the SharePoint report and then imported that data into Access.
I'm happy to give you the terminology of what to Google for. Just don't make me use SharePoint and Access. :)
If you have the same items in a report in SharePoint and in Access, hopefully there is a field that uniquely identifies each item and is present in each table (a unique key). If these items (typically we would say 'records' or 'tuples' in database circles) are inventory, then SKUs or product numbers would be examples of potential unique keys. If you're taking the information in two tables and merging it together using a unique key, we call that a join (a 'natural join' when the key columns share the same name). I know Access and SharePoint both support SQL, and in SQL this would be done using a SELECT statement.
I would try googling: natural join tables in SharePoint and Access
Or: SQL SELECT between SharePoint and Access
Hope this helps.
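For illustration, assuming the SharePoint quantities have been imported (or linked) into Access as their own table and both tables share an ItemNumber key (all table and column names here are made up), the query would look something like:

    SELECT i.ItemNumber,
           i.Description,
           i.DocumentationFolder,
           q.Quantity
    FROM   LocalItems AS i
           INNER JOIN SharePointQuantities AS q
               ON i.ItemNumber = q.ItemNumber;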
If you choose tables linked to SharePoint (as opposed to importing them locally), then you will always have a live copy of the data; in fact, this is a replicated model in Access 2010. A query could then be used that joins in the additional table columns with quantity etc. Replication needs caution, since any changes to the local Access table would go back up to SharePoint, and that may not be desired or even allowed.
In this case I would thus simply import the SharePoint tables locally and again use a join based on a PK to the local tables with quantity etc. Note that the local copy + cache runs very fast in Access 2010; prior to Access 2010 + SharePoint 2010, the speed of such a setup was not so good.
If you are using an older version of Access + SharePoint, then I would suggest you continue your approach of importing the SharePoint tables (as opposed to linking to the live tables on SharePoint). You then again simply use a query that joins in the additional columns you wish to display in your reports.
Such a results query would not only be of use for reports; you could also export it to Excel or Word.
Best regards.
We have a file upload system and would like to use the new semantic search feature in SQL Server 2012. Is that possible without using FileTables?
This is our schema:
I think there are two questions here.
Can you use Semantic Search without using filetable?
Yes, you can. It can be used on any table with Full-Text indexing turned on.
Here is the list of prerequisites: link.
Basically you can use it on the data, which is loaded into the database.
The second question is whether your schema benefits from Semantic Search, and to what extent.
Looking at your schema, I understand that your database holds only paths to the documents and their "descriptions". Therefore, you can enable Semantic Search on the columns in your database. That will allow you to use Semantic Search on FileName and Description, but not on the documents' contents.
In order to use Semantic Search on the contents of these documents, you'll need to store the documents themselves in the SQL database. The FileTable structure helps with this task, although you can choose another way of storing whole documents in your database.
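As a rough sketch, assuming a dbo.Documents table with a PK_Documents primary key (all names are hypothetical, and the semantic language statistics database from the prerequisites must already be installed), enabling and querying Semantic Search on those columns could look like this:

    -- Full-text catalog plus a full-text index with statistical semantics
    -- on the columns that actually contain searchable text.
    CREATE FULLTEXT CATALOG DocumentsCatalog;

    CREATE FULLTEXT INDEX ON dbo.Documents
    (
        FileName    LANGUAGE 1033 STATISTICAL_SEMANTICS,
        Description LANGUAGE 1033 STATISTICAL_SEMANTICS
    )
    KEY INDEX PK_Documents ON DocumentsCatalog
    WITH CHANGE_TRACKING AUTO;

    -- Example: the top key phrases detected in the Description column.
    SELECT TOP (10) k.keyphrase, k.score
    FROM   SEMANTICKEYPHRASETABLE(dbo.Documents, Description) AS k
    ORDER BY k.score DESC;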
So let's say I have two databases, one for production purposes and another one for development purposes.
When we copied the development database, the full-text catalog did not get copied properly, so we decided to create the catalog ourselves. We matched all the tables and indexes and created the catalog, and the search feature seems to be working okay too (but it hasn't been entirely tested yet).
However, the former catalog had a lot more files in its folder than the one we manually created. Is that fine? I thought they would have the exact same number of files (though the sizes might vary).
First... when using full-text search, I would suggest that you don't manually try to create what the wizard does for you. I have to wonder about missing more than just some data. Why not just recreate the indexes?
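For example, dropping whatever was hand-built and letting SQL Server recreate the full-text objects is usually only a few statements (the object names here are hypothetical):

    -- Drop the manually created objects and let SQL Server rebuild everything.
    DROP FULLTEXT INDEX ON dbo.Articles;
    DROP FULLTEXT CATALOG ArticlesCatalog;

    CREATE FULLTEXT CATALOG ArticlesCatalog AS DEFAULT;
    CREATE FULLTEXT INDEX ON dbo.Articles (Title, Body)
        KEY INDEX PK_Articles ON ArticlesCatalog
        WITH CHANGE_TRACKING AUTO;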
Second... I suggest that you don't use the FREETEXT feature of SQL Server unless you have no other choice. I used to be a big believer in it, but I was shown a comparison between creating and searching a Lucene(.net) index and creating and searching an index in SQL Server. Creating a SQL Server full-text index is considerably slower and harder to maintain than creating a Lucene index. Searching a SQL Server index gives considerably less accurate (poorer) results than Lucene. Lucene is like having your own personal Google for searching your data.
How? Index your data (only the data you need to search) in Lucene and include the primary key of the data you are indexing for use later. Then search the index using your language of choice and the Lucene(.net) API (many articles have been written on this topic). In your search results, make sure you return the PK. Once you have identified the records you are interested in, you can then fetch the rest of the data and/or any related data based on the PK that was returned.
Gotchas? Updating the index is also much quicker and easier. However, you have to roll your own code for creating, updating, and searching the index. SUPER EASY to do... but still, there are no wizards or one-handed coding here! Also, the index lives on the file system. If the file is open and being searched and you try to open it again for another search, you will obviously have some issues... so some form of infrastructure around opening and reading these indexes needs to be built.
How does this help in SQL Server? You can easily wrap your Lucene search in a CLR function or proc, install it in the database, and then use it as though it were native to your T-SQL queries.
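For instance, if the Lucene search were exposed as a CLR table-valued function (the function name and result shape below are purely hypothetical), it could be joined back to the source table on the stored PK like this:

    -- dbo.SearchLucene is a hypothetical CLR table-valued function that runs
    -- the Lucene query and returns the primary keys and scores of the matches.
    SELECT d.DocumentId,
           d.Title,
           hits.Score
    FROM   dbo.SearchLucene(N'fire AND extinguisher') AS hits
           INNER JOIN dbo.Documents AS d
               ON d.DocumentId = hits.DocumentId
    ORDER BY hits.Score DESC;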