Salesforce Composite Connector Query Limitation - salesforce

Suppose I'm using the Salesforce Composite Connector in Mule to query 3 different objects in Salesforce. What are the limits on the number of records returned by each query?

The connector doesn't impose limits. The limits are the ones defined by Salesforce for queries: https://developer.salesforce.com/docs/atlas.en-us.salesforce_app_limits_cheatsheet.meta/salesforce_app_limits_cheatsheet/salesforce_app_limits_platform_soslsoql.htm
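For context, here is a minimal sketch (Python with requests) of the kind of Composite REST call the connector wraps, querying three objects in one round trip. The instance URL, API version, access token, and SOQL queries are placeholders, not part of the original question. Each sub-query result follows the normal SOQL batching behaviour, so when "done" is false you page through "nextRecordsUrl".

```python
# Hypothetical sketch: the raw Composite REST call that a connector would issue.
# Instance URL, API version, token, and queries are placeholders.
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"   # placeholder
API = "v52.0"                                         # placeholder version
headers = {"Authorization": "Bearer <access_token>",
           "Content-Type": "application/json"}

body = {
    "allOrNone": False,
    "compositeRequest": [
        {"method": "GET", "referenceId": "accounts",
         "url": f"/services/data/{API}/query?q=SELECT+Id,Name+FROM+Account"},
        {"method": "GET", "referenceId": "contacts",
         "url": f"/services/data/{API}/query?q=SELECT+Id,Name+FROM+Contact"},
        {"method": "GET", "referenceId": "opps",
         "url": f"/services/data/{API}/query?q=SELECT+Id,Name+FROM+Opportunity"},
    ],
}

resp = requests.post(f"{INSTANCE}/services/data/{API}/composite",
                     json=body, headers=headers)
for sub in resp.json()["compositeResponse"]:
    result = sub["body"]
    # Query results come back in batches; when "done" is false,
    # follow "nextRecordsUrl" to page through the remaining records.
    print(sub["referenceId"], result["totalSize"], result.get("done"))
```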

Related

Azure Search - Creating a dedicated search table in SQL, for using with an Indexer

I'm building an Assets search engine.
The data that needs to be indexed for each asset is scattered across multiple tables in the SQL database.
Also, there are many events in the application that will trigger updates to the asset's indexed fields (Draft, Rejected, Produced, ...).
I'm considering creating a new denormalized table in the SQL database that would exist solely for the Azure Search Index.
It would be an exact copy of the Azure Search Index fields.
The application would be responsible for filling and updating the SQL table through various event handlers.
I could then use a scheduled Azure SQL indexer to automatically import the data into the Azure Search index.
PROS:
We are used to dealing with SQL table operations, so the application code remains standard and there is no need to learn the Azure Search API.
Both the transactional and the search models are updated in the same SQL transaction (atomically). The indexer then updates the index in an eventually consistent manner and handles the retry logic.
Built-in support for change detection with SQL Integrated Change Tracking Policy
CONS:
Indexed data space usage is duplicated in the SQL database
Delayed Index update (minimum 5 minutes)
Do you see any other pros or cons?
[EDIT 2017-01-26]
There is another big PRO for our usage.
During development, we regularly add/rename/remove fields from the Azure index. In its current state, some schema modifications to an Azure Search index are not possible; we have to drop and re-create the index.
With a dedicated table containing the data, we simply issue a Reset to our indexer endpoint and the new index gets automatically re-populated.
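As an illustration of that workflow, here is a rough sketch (Python + requests) of the Azure Search REST calls involved: creating a data source over the dedicated table with the integrated change tracking policy, then resetting and re-running the indexer after an index re-creation. The service name, keys, table, index, and indexer names are placeholders, and the api-version is just one recent stable value.

```python
# Rough sketch of the REST calls; names, keys, and connection string are placeholders.
import requests

SERVICE = "https://<your-service>.search.windows.net"   # placeholder
HEADERS = {"api-key": "<admin-key>", "Content-Type": "application/json"}
API = "api-version=2020-06-30"   # assumption: any recent stable version works

# 1. Data source over the dedicated denormalized table, with integrated change tracking.
datasource = {
    "name": "assets-datasource",
    "type": "azuresql",
    "credentials": {"connectionString": "<sql-connection-string>"},
    "container": {"name": "AssetSearch"},   # the dedicated SQL table
    "dataChangeDetectionPolicy": {
        "@odata.type": "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
    },
}
requests.put(f"{SERVICE}/datasources/assets-datasource?{API}",
             json=datasource, headers=HEADERS)

# 2. After dropping and re-creating the index, reset and re-run the indexer so it
#    re-reads the whole table and re-populates the index.
requests.post(f"{SERVICE}/indexers/assets-indexer/reset?{API}", headers=HEADERS)
requests.post(f"{SERVICE}/indexers/assets-indexer/run?{API}", headers=HEADERS)
```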

How to store large text on Azure platform (SQL database, Table storage, Blobs...)

I need to build a chat application. I also need to store all chat transcripts into some storage - SQL database, Table storage or other Azure mechanism.
It should store about 50 megabytes of characters per day. Each block of text should be attached to a specific customer.
My question:
What is the best way to store such amount of text in Azure?
Thanks.
I would store them in Azure Tables, using conversationId as the partition key and messageID as the row key. That way, you can easily aggregate your statistics based on those two and quickly retrieve conversations.
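For illustration, a minimal sketch with the Python azure-data-tables package, using the conversationId/messageID scheme suggested above; the table name, key values, and connection string are made up.

```python
# Minimal sketch, assuming a hypothetical "ChatTranscripts" table and placeholder keys.
from datetime import datetime, timezone
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<storage-connection-string>")
table = service.create_table_if_not_exists("ChatTranscripts")

entity = {
    "PartitionKey": "conversation-42",   # conversationId
    "RowKey": "000123",                  # messageID (zero-padded keeps ordering)
    "CustomerId": "customer-7",
    "SentAt": datetime.now(timezone.utc).isoformat(),
    "Text": "Hello, how can I help you?",
}
table.create_entity(entity)

# Retrieving a whole conversation is a single partition scan.
messages = table.query_entities("PartitionKey eq 'conversation-42'")
for m in messages:
    print(m["RowKey"], m["Text"])
```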

Load data into Elasticsearch from SQL Server

I'm new to Elasticsearch and I have a basic question.
I want to load data from a database and search it using Elasticsearch in an MVC.NET project, but because of the data in my database tables I can't convert all of it to JSON and search it with Elasticsearch directly. How should I populate Elasticsearch from the database in an MVC.NET project? I don't want the whole solution, because that would be impossible; just a general and brief explanation. Thank you very much.
First of all, you need to model how your data maps from SQL to Elasticsearch, since Elasticsearch is a NoSQL, document-oriented database/search engine.
You need an indexer to index the SQL data into Elasticsearch:
From your SQL database, get all the columns associated with each record that you want to search in Elasticsearch (use joins if the data is spread across multiple tables).
Use a dedicated stored procedure to fetch only the data you need, construct a document class from it, serialize it to JSON, and index it into your Elasticsearch cluster.
Use the Elasticsearch.Net client, as it very neatly exposes the bulk index APIs.
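To make the flow concrete, here is the same pipeline sketched in Python (pyodbc plus the official elasticsearch client); in .NET the Elasticsearch.Net/NEST client exposes equivalent bulk calls. The stored procedure, table, and column names are hypothetical.

```python
# Sketch only: stored procedure, index name, and columns are hypothetical.
import pyodbc
from elasticsearch import Elasticsearch, helpers

sql = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                     "SERVER=.;DATABASE=Shop;Trusted_Connection=yes;")
es = Elasticsearch("http://localhost:9200")

cursor = sql.cursor()
cursor.execute("EXEC dbo.GetProductsForSearch")   # proc that joins/flattens the data
columns = [c[0] for c in cursor.description]

def documents():
    # Turn each flattened row into an Elasticsearch bulk action.
    for row in cursor:
        doc = dict(zip(columns, row))
        yield {"_index": "products", "_id": doc["ProductId"], "_source": doc}

# Bulk-index all documents into the "products" index.
helpers.bulk(es, documents())
```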
Hope this will get you started. Have fun

Add user-defined module into Database Engine to pre-processing T-SQL queries

I'm writing a module that translates one SQL query into another query. When users send SQL queries to the database engine, the engine should first forward those queries to my module before processing the SQL syntax.
How can I integrate my module into the SQL Server database engine?
You can redirect queries for certain data to different tables using a partitioned view:
http://technet.microsoft.com/en-US/library/ms188299(v=SQL.105).aspx
In a nutshell, you tell the server some rules about which values reside in which tables (based on primary or foreign key ranges, for example). When you query using the partition field, the database can direct your query to the correct remote table. But you can still run queries over all the tables just as if they were held locally (only more slowly).
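As a hedged illustration of the idea, here is a hypothetical partitioned view created from Python via pyodbc; the table names, the OrderYear partitioning column, and the CHECK constraints are all made up for the example.

```python
# Hypothetical example of the rule a partitioned view encodes; names and ranges are invented.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=.;DATABASE=Sales;Trusted_Connection=yes;")
conn.execute("""
CREATE VIEW dbo.Orders AS
    SELECT * FROM dbo.Orders_2023   -- member table with CHECK (OrderYear = 2023)
    UNION ALL
    SELECT * FROM dbo.Orders_2024   -- member table with CHECK (OrderYear = 2024)
""")
conn.commit()

# With the CHECK constraints in place, a query that filters on the partitioning
# column only touches the matching member table.
row = conn.execute(
    "SELECT COUNT(*) FROM dbo.Orders WHERE OrderYear = 2024").fetchone()
print(row[0])
```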

CouchDB: finding data without views

I want to know whether there is any way in the CouchDB HTTP API to query the database without views. We can GET all documents, or a document with a specific ID, but what if we want to query the database by a key other than the ID, without using views?
You cannot query a CouchDB database by anything other than the primary key (ID) without using views. In the CouchDB world, views are queries.
You can get documents from CouchDB without views by using CouchDB's HTTP API.
http://wiki.apache.org/couchdb/HTTP_Document_API
Documents stored in CouchDB have a DocID. DocIDs are case-sensitive string identifiers that uniquely identify a document. Two documents in the same database cannot have the same identifier; they would be considered the same document.
http://localhost:5984/test/some_doc_id
http://localhost:5984/test/another_doc_id
http://localhost:5984/test/BA1F48C5418E4E68E5183D5BD1F06476
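For example, a quick Python sketch of those document API calls against a local test database, using the placeholder document IDs shown above:

```python
# Minimal sketch of the CouchDB document API; database and doc IDs are the placeholders above.
import requests

BASE = "http://localhost:5984/test"

# Fetch a single document by its DocID.
doc = requests.get(f"{BASE}/some_doc_id").json()
print(doc["_id"], doc["_rev"])

# List every document in the database (IDs and revisions only,
# unless include_docs=true is passed).
all_docs = requests.get(f"{BASE}/_all_docs", params={"include_docs": "true"}).json()
for row in all_docs["rows"]:
    print(row["id"], row["doc"])
```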
