Save over 10,000 records in cache - asp.net mvc - sql-server

I have some JSON-formatted records (over 10,000) fetched from the database, in this format:
{"ID":7701,"Lat":36.78332170249675,"Lng":45.729067325592041}
In ASP.NET MVC I want to keep the data in RAM or in a cache to avoid fetching it again.
But I could not save it in the cache; I think it's because of a cache limit.
What do you suggest so that I don't have to re-read the data from the SQL Server database?
=====================
Below is my code for saving to the cache:
ObjectCache cache = MemoryCache.Default;
CacheItemPolicy policy = new CacheItemPolicy()
{
    Priority = CacheItemPriority.Default,
    AbsoluteExpiration = System.DateTimeOffset.UtcNow.AddMilliseconds(RefreshInterval)
};
cache.Set(id, JsonConvert.SerializeObject(listOfData, new JsonSerializerSettings { ReferenceLoopHandling = ReferenceLoopHandling.Ignore }), policy);
For reading I use this code:
foreach (var item in MemoryCache.Default)
{
    string key = item.Key;
    if (key == id)
    {
        // item.Value is an object, so it must be cast back to the JSON string
        var w = JsonConvert.DeserializeObject((string)item.Value, typeof(List<SOME>)) as List<SOME>;
    }
}
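As an aside, there is no need to enumerate the whole cache to find one key; MemoryCache supports direct lookup, and storing the deserialized list itself avoids re-serializing 10,000+ records on every access. A minimal sketch, assuming a hypothetical LoadFromDatabase() helper that runs the SQL Server query:

ObjectCache cache = MemoryCache.Default;
var data = cache.Get(id) as List<SOME>;
if (data == null)
{
    data = LoadFromDatabase(); // hypothetical helper that queries SQL Server
    var policy = new CacheItemPolicy
    {
        AbsoluteExpiration = DateTimeOffset.UtcNow.AddMilliseconds(RefreshInterval)
    };
    cache.Set(id, data, policy); // cache the list itself, not its JSON string
}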

Related

How to avoid adding duplicate data from a CSV file to a SQL Server database, using CsvHelper and C# Blazor

I have my database table named 'JobInfos' in SQL Server which contains many columns.
JobID - (int) auto populates incrementing value when data added
OrgCode - (string)
OrderNumber - (int)
WorkOrder - (int)
Customer - (string)
BaseModelItem - (string)
OrdQty - (int)
PromiseDate - (string)
LineType - (string)
This table gets written to many times a day using a Blazor application with Entity Framework and CSVHelper. This works perfectly. All rows from the CSV file are added to the database.
if (fileExist)
{
    using (var reader = new StreamReader(path))
    using (var csv = new CsvReader(reader, config))
    {
        var records = csv.GetRecords<CsvRow>().Select(row => new JobInfo()
        {
            OrgCode = row.OrgCode,
            OrderNumber = row.OrderNumber,
            WorkOrder = row.WorkOrder,
            Customer = row.Customer,
            BaseModelItem = row.BaseModelItem,
            OrdQty = row.OrdQty,
            PromiseDate = row.PromiseDate,
            LineType = row.LineType,
        });
        using (var db = new ApplicationDbContext())
        {
            while (!reader.EndOfStream)
            {
                if (lineNumber != 0)
                {
                    db.AddRange(records.ToList());
                    db.SaveChanges();
                }
                lineNumber++;
            }
            NavigationManager.NavigateTo("/", true);
        }
    }
}
As these multiple CSV files can contain rows that may already be in the database table, I am getting duplicate records when the table is read, which forces the users to manually delete the newer duplicate rows to keep only the original entry.
I have no control over the CSV files or their creation. I am trying to add only rows that contain new data, based on the WorkOrder number, which cannot be the same as any other.
I found another post here on StackOverflow which helps, but I am stuck on a remaining error I can't figure out.
The Helpful post
I changed my code here...
if (lineNumber != 0)
{
    var recordworkorder = records.Select(x => x.WorkOrder).ToList();
    var workordersindb = db.JobInfos.Where(x => recordworkorder.Contains(x.WorkOrder)).ToList();
    var workordersNotindb = records.Where(x => !workordersindb.Contains(x.WorkOrder));
    db.AddRange(records.ToList(workordersNotindb));
    db.SaveChanges();
}
but this line...
var workordersNotindb = records.Where(x => !workordersindb.Contains(x.WorkOrder));
throws an error at the end (x.WorkOrder) - CS1503 Argument 1: cannot convert from 'int' to 'DepotQ4.Data.JobInfo'
WorkOrder is an int
JobID is the Primary Key and an int
Every record in the table must have a unique WorkOrder
I am not sure what I am not seeing. Could use some help here please?
Your variable workordersindb is a List<JobInfo>. So in records.Where(x => !workordersindb.Contains(x.WorkOrder)) you are asking whether a list of JobInfo contains the int x.WorkOrder. workordersindb needs to be a List<int> for Contains to work here. records would have had the same issue, but you solved it by creating the variable recordworkorder with records.Select(x => x.WorkOrder) to get a List<int>.
if (lineNumber != 0)
{
    var recordworkorder = records.Select(x => x.WorkOrder).ToList();
    var workordersindb = db.JobInfos.Where(x => recordworkorder.Contains(x.WorkOrder)).Select(x => x.WorkOrder).ToList();
    var workordersNotindb = records.Where(x => !workordersindb.Contains(x.WorkOrder));
    db.JobInfos.AddRange(workordersNotindb);
    db.SaveChanges();
}
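As a side note: Contains on a List<int> is a linear scan, so for large CSV batches a HashSet<int> keeps the duplicate check fast. A small, optional variation on the fix above:

if (lineNumber != 0)
{
    var recordworkorder = records.Select(x => x.WorkOrder).ToList();
    // HashSet gives O(1) lookups instead of scanning a list for every record.
    var workordersindb = new HashSet<int>(
        db.JobInfos.Where(x => recordworkorder.Contains(x.WorkOrder))
                   .Select(x => x.WorkOrder));
    var workordersNotindb = records.Where(x => !workordersindb.Contains(x.WorkOrder));
    db.JobInfos.AddRange(workordersNotindb);
    db.SaveChanges();
}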

Store two query responses in one variable

I am trying to add in-memory caching to my .NET Core project that uses EF. I have two queries and want both query responses stored in the cache so I don't have to query every time.
var settingscheck = "SELECT TOP 1 [EndTime],[StartTime],[OrderDay] " +
    "FROM [dbo].[Settings] " +
    "WHERE SUBSTRING(DATENAME(weekday, getdate() AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time'), 0, 4) = OrderDay";
var holidaycheck = "SELECT Count(*) FROM [dbo].[HolidayWeeks] WHERE FORMAT(getdate(), 'yyyy-MM-dd') = [HolidateDate]";
I have already implemented caching to store one of the query responses, like below:
public async Task<IActionResult> Index(string sortOrder, string searchString,
    int? pageNumber, string currentFilter)
{
    int holidaycheck;
    var timeUtc = DateTime.UtcNow;
    var easternZone = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
    var todayDt = TimeZoneInfo.ConvertTimeFromUtc(timeUtc, easternZone);
    bool isExist = memoryCache.TryGetValue("HolidayWk", out holidaycheck);
    if (!isExist)
    {
        holidaycheck = (from hc in _context.HolidayWeeks
                        where hc.HolidateDate.Date == todayDt.Date
                        select hc).Count();
        var cacheEntryOptions = new MemoryCacheEntryOptions()
            .SetSlidingExpiration(TimeSpan.FromHours(2));
        memoryCache.Set("HolidayWk", holidaycheck, cacheEntryOptions);
    }
    if (holidaycheck != 0)
    {
        return View("/Views/Customers/AppNotAvailable.cshtml");
    }
    else
    {
But now I am trying to add one more query. I was thinking I could create a JSON object, add the responses from both queries to it, and cache that in memory so it can be used by the application. I cannot do a JOIN because these two queries don't have anything in common, and I am not sure how to store the responses from both queries in one JSON object. Or suggest another option for doing this. Any help is greatly appreciated.
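One minimal sketch of that idea: instead of building a JSON object, cache a small holder class containing both results under one key. The DailyStatus name and the _context.Settings DbSet are assumptions here, mirroring the raw SQL above:

public class DailyStatus
{
    public int HolidayCount { get; set; }
    public Settings TodaySettings { get; set; } // assumes a Settings entity mapped to [dbo].[Settings]
}

// Inside the action: one cache entry holds both query results.
DailyStatus status;
if (!memoryCache.TryGetValue("DailyStatus", out status))
{
    var dayAbbrev = todayDt.DayOfWeek.ToString().Substring(0, 3); // e.g. "Mon", matching the SUBSTRING in the SQL
    status = new DailyStatus
    {
        HolidayCount = _context.HolidayWeeks.Count(hc => hc.HolidateDate.Date == todayDt.Date),
        TodaySettings = _context.Settings.FirstOrDefault(s => s.OrderDay == dayAbbrev)
    };
    memoryCache.Set("DailyStatus", status,
        new MemoryCacheEntryOptions().SetSlidingExpiration(TimeSpan.FromHours(2)));
}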

Limitation on document count for azure search S3 HD pricing tier

When I read the article I see the following:
"For S3 High Density services created after late 2017, the 200 million document per partition limit has been removed, but the 1 million document per index limit remains."
I wanted to confirm: does the 1 million document limit still exist for an S3 HD index, or has this limit also been removed recently?
I faced the same issue, so I convert the data into batches.
Please modify the code as per your requirements.
for (int i = 0; i < result.Items.Count; i = i + 31500)
{
    searchItems = result.Items.Skip(i).Take(31500);
    actionList = new List<IndexAction<AzureSearchItem>>();
    foreach (var item in searchItems)
    {
        actionList.Add(IndexAction.MergeOrUpload(AzureHelper.FormatSearchItem(item)));
    }
    PostBulkAssortmentDocuments(actionList.AsEnumerable());
}
public virtual void PostBulkAssortmentDocuments(IEnumerable<IndexAction<AzureSearchItem>> actions)
{
    if (actions.Count() == 0)
        return;
    var batch = IndexBatch.New(actions);
    try
    {
        var data = GetIndexClient(IndexName).Documents.Index(batch);
        var passResultCount = data.Results.Where(x => x.Succeeded).Count();
        var failResultCount = data.Results.Where(x => x.Succeeded == false).Count();
        var MessageResult = data.Results.Where(x => !string.IsNullOrEmpty(x.ErrorMessage));
        var keyResult = data.Results.Where(x => !string.IsNullOrEmpty(x.Key)).Select(x => x.Key).ToList();
        var unikKey = keyResult.Distinct().ToList();
        string json = Newtonsoft.Json.JsonConvert.SerializeObject(data);
    }
    catch (IndexBatchException e)
    {
        // Sometimes when your Search service is under load, indexing will fail for some of the documents in
        // the batch. Depending on your application, you can take compensating actions like delaying and
        // retrying. For this simple demo, we just log the failed document keys and continue.
        Console.WriteLine(
            "Failed to index some of the documents: {0}",
            String.Join(", ", e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key)));
        this.WriteToFile("Error - PostBulkAssortmentDocuments -" + e.Message);
    }
}
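If failures under load do occur, the keys in the IndexBatchException are enough to drive a simple one-shot retry, along the lines of the comment above. A rough sketch; the keyOf selector and the fixed delay are assumptions, not SDK requirements:

public void IndexWithRetry(List<IndexAction<AzureSearchItem>> actions,
    Func<IndexAction<AzureSearchItem>, string> keyOf)
{
    try
    {
        GetIndexClient(IndexName).Documents.Index(IndexBatch.New(actions));
    }
    catch (IndexBatchException e)
    {
        // Re-send only the actions whose keys failed, after a short pause.
        var failedKeys = new HashSet<string>(
            e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key));
        var retryActions = actions.Where(a => failedKeys.Contains(keyOf(a))).ToList();
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5)); // crude backoff
        if (retryActions.Count > 0)
            GetIndexClient(IndexName).Documents.Index(IndexBatch.New(retryActions));
    }
}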
The content of the article remains valid, so the 1 million document per index upper limit for the S3 HD service tier still applies.

Audit of what records a given user can see in SalesForce.com

I am trying to determine a way to audit which records a given user can see, by:
Object Type
Record Type
Count of records
Ideally I would also be able to see which fields for each object/record type the user can see.
We will need to repeat this often, for different users and in different orgs, so we would like to avoid determining this manually.
My first thought was to create an app using the partner WSDL, but would like to ask if there are any easier approaches or perhaps existing solutions.
Thanks all
I think you can follow the documentation to solve this, using a query similar to this one:
SELECT RecordId
FROM UserRecordAccess
WHERE UserId = [single ID]
AND RecordId = [single ID] // or RecordId IN [list of IDs]
AND HasReadAccess = true
The query returns the records to which the queried user has read access.
In addition, you could add LIMIT 1 and get the object type, record type, and so on from the record metadata.
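For instance, with a Partner-WSDL session (like the service object in the utility below), a single-record check might look like this; the IDs are placeholders:

// Does the user have read access to this specific record?
var access = service.query(
    "SELECT RecordId FROM UserRecordAccess " +
    "WHERE UserId = '005xxxxxxxxxxxxxxx' " +
    "AND RecordId = '001xxxxxxxxxxxxxxx' " +
    "AND HasReadAccess = true LIMIT 1");
bool canRead = access != null && access.size > 0;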
I ended up using the below (C#, using the Partner WSDL) to get an idea of what kinds of objects the user had visibility into.
Just a quick'n'dirty utility for my own use (read: not prod code):
var service = new SforceService();
var result = service.login("UserName", "Password");
service.Url = result.serverUrl;
service.SessionHeaderValue = new SessionHeader { sessionId = result.sessionId };
var queryResult = service.describeGlobal();
int total = queryResult.sobjects.Count();
int batchSize = 100;
var batches = Math.Ceiling(total / (double)batchSize);
using (var output = new StreamWriter(@"C:\test\sfdcAccess.txt", false))
{
    for (int batch = 0; batch < batches; batch++)
    {
        var toQuery =
            queryResult.sobjects.Skip(batch * batchSize).Take(batchSize).Select(x => x.name).ToArray();
        var batchResult = service.describeSObjects(toQuery);
        foreach (var x in batchResult)
        {
            if (!x.queryable)
            {
                Console.WriteLine("{0} is not queryable", x.name);
                continue;
            }
            var test = service.query(string.Format("SELECT Id FROM {0} limit 100", x.name));
            if (test == null || test.records == null)
            {
                Console.WriteLine("{0}: null records", x.name);
                continue;
            }
            foreach (var record in test.records)
            {
                output.WriteLine("{0}\t{1}", x.name, record.Id);
            }
            Console.WriteLine("{0}:\t{1} records", x.name, test.size);
        }
    }
    output.Flush();
}

Performing bulk data transactions with Salesforce using .NET C#

I am new to Salesforce (3 months).
Thus far I have been able to create an application in C# that I can use to perform inserts and updates to the Salesforce database. These transactions are done one at a time.
Now I have the need to perform large-scale transactions, for example updating thousands of records at a time. Doing them one by one would quickly put us over our allotted API calls per 24-hour period.
I want to utilize the available bulk transaction process to cut down on the number of API calls. Thus far I have not had much luck coding this, nor have I found any such documentation.
If anyone could either provide some generic examples or steer me to reliable documentation on the subject, I would greatly appreciate it.
FYI, the data I need to use for the updates and inserts comes from an IBM Unidata database sitting on an AIX machine, so direct web-service communication is not really possible. Getting the data out of Unidata has been my headache; I have that worked out. Now the Bulk API to Salesforce is my new headache.
Thanks in advance.
Jeff
You don't mention which API you're currently using, but with the SOAP Partner or Enterprise APIs you can write records to Salesforce 200 at a time (the create/update/upsert calls all take an array of SObjects).
Using the Bulk API you can send data in chunks of thousands of rows at a time.
You can find the documentation for both sets of APIs here
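As a hedged sketch of the 200-at-a-time SOAP route (binding being a logged-in SforceService and allRecords an sObject[] you have already built):

// Chunk the records into groups of 200, the per-call limit for SOAP create().
const int chunkSize = 200;
for (int offset = 0; offset < allRecords.Length; offset += chunkSize)
{
    sObject[] chunk = allRecords.Skip(offset).Take(chunkSize).ToArray();
    SaveResult[] results = binding.create(chunk);
    for (int i = 0; i < results.Length; i++)
    {
        if (!results[i].success)
            Console.WriteLine("Record {0} failed: {1}",
                offset + i, results[i].errors[0].message);
    }
}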
The answers already given are a good start; however, are you sure you need to write a custom app that uses the Bulk API? The Salesforce Data Loader is a pretty robust tool, includes a command-line interface, and can use either the "normal" or the bulk data API. Unless you need to do fancy logic as part of your inserts/updates, or some sort of more real-time / on-demand loading, the Data Loader is going to be a better option than a custom app.
(This is the SOAP code though, not the Salesforce "Bulk API"; careful not to confuse the two.)
Maybe the code below provides a clear insight into how to do a bulk insertion.
/// Demonstrates how to create one or more Account records via the API
public void CreateAccountSample()
{
    Account account1 = new Account();
    Account account2 = new Account();

    // Set some fields on the account1 object. Name field is not set
    // so this record should fail as it is a required field.
    account1.BillingCity = "Wichita";
    account1.BillingCountry = "US";
    account1.BillingState = "KA";
    account1.BillingStreet = "4322 Haystack Boulevard";
    account1.BillingPostalCode = "87901";

    // Set some fields on the account2 object
    account2.Name = "Golden Straw";
    account2.BillingCity = "Oakland";
    account2.BillingCountry = "US";
    account2.BillingState = "CA";
    account2.BillingStreet = "666 Raiders Boulevard";
    account2.BillingPostalCode = "97502";

    // Create an array of SObjects to hold the accounts
    sObject[] accounts = new sObject[2];
    // Add the accounts to the SObject array
    accounts[0] = account1;
    accounts[1] = account2;

    // Invoke the create() call
    try
    {
        SaveResult[] saveResults = binding.create(accounts);

        // Handle the results
        for (int i = 0; i < saveResults.Length; i++)
        {
            // Determine whether create() succeeded or had errors
            if (saveResults[i].success)
            {
                // No errors, so retrieve the Id created for this record
                Console.WriteLine("An Account was created with Id: {0}",
                    saveResults[i].id);
            }
            else
            {
                Console.WriteLine("Item {0} had an error updating", i);
                // Handle the errors
                foreach (Error error in saveResults[i].errors)
                {
                    Console.WriteLine("Error code is: {0}",
                        error.statusCode.ToString());
                    Console.WriteLine("Error message: {0}", error.message);
                }
            }
        }
    }
    catch (SoapException e)
    {
        Console.WriteLine(e.Code);
        Console.WriteLine(e.Message);
    }
}
Please find below a small piece of code which may help you insert data into Salesforce objects using C# and the WSDL APIs. I struggled a lot to write this in C#. I assign values by direct index after splitting; you can use your own approach.
I split the columns using | (the pipe sign). You may change this, and also the <br>, \n, etc. used for row and column breaks.
This means you can enter N rows from your HTML/text file. I wrote the program to add orders placed by my designers, who put orders on another website; it fetches the data from the e-commerce website, which has no Salesforce interface for adding/viewing the order records. I created one custom object for this and added the following columns to it.
Your suggestions are welcome.
private SforceService binding; // declare the Salesforce service using your access credentials

try
{
    string stroppid = "111111111111111111";
    System.Net.HttpWebRequest fr;
    Uri targetUri = new Uri("http://abc.xyz.com/test.html");
    fr = (System.Net.HttpWebRequest)System.Net.HttpWebRequest.Create(targetUri);
    if ((fr.GetResponse().ContentLength > 0))
    {
        System.IO.StreamReader str = new System.IO.StreamReader(fr.GetResponse().GetResponseStream());
        string allrow = str.ReadToEnd();
        string stringSeparators = "<br>";
        string[] row1 = Regex.Split(allrow, stringSeparators);
        // row1[0] (anything before the first <br>) and the trailing chunk are skipped,
        // so there are row1.Length - 2 data rows; the array must hold no null elements.
        CDI_Order_Data__c[] cord = new CDI_Order_Data__c[row1.Length - 2];
        for (int i = 1; i < row1.Length - 1; i++)
        {
            string colstr = row1[i];
            string[] allcols = Regex.Split(colstr, "\\|");
            var rec = new CDI_Order_Data__c(); // very important to create the object
            rec.Opportunity_Job_Order__c = stroppid;
            rec.jobid__c = stroppid;
            rec.order__c = allcols[0];
            rec.firstname__c = allcols[1];
            rec.name__c = allcols[2];
            DateTime dtDate = Convert.ToDateTime(allcols[3]);
            rec.Date__c = new DateTime(dtDate.Year, dtDate.Month, dtDate.Day, 0, 0, 0);
            rec.clientpo__c = allcols[4];
            rec.billaddr1__c = allcols[5];
            rec.billaddr2__c = allcols[6];
            rec.billcity__c = allcols[7];
            rec.billstate__c = allcols[8];
            rec.billzip__c = allcols[9];
            rec.phone__c = allcols[10];
            rec.fax__c = allcols[11];
            rec.email__c = allcols[12];
            rec.contact__c = allcols[13];
            rec.lastname__c = allcols[15];
            rec.Rep__c = allcols[16];
            rec.sidemark__c = allcols[17];
            rec.account__c = allcols[18];
            rec.item__c = allcols[19];
            rec.kmatid__c = allcols[20];
            rec.qty__c = Convert.ToDouble(allcols[21]);
            rec.Description__c = allcols[22];
            rec.price__c = Convert.ToDouble(allcols[23]);
            rec.installation__c = allcols[24];
            rec.freight__c = allcols[25];
            rec.discount__c = Convert.ToDouble(allcols[26]);
            rec.salestax__c = Convert.ToDouble(allcols[27]);
            rec.taxcode__c = allcols[28];
            cord[i - 1] = rec;
        }
        try
        {
            SaveResult[] saveResults = binding.create(cord);
        }
        catch (Exception ce)
        {
            Response.Write("Bulk order update error: " + ce.Message);
            Response.End();
        }
        if (str != null) str.Close();
    }
}
catch (Exception ex)
{
    Response.Write("Error reading order data: " + ex.Message);
}
