Audit of what records a given user can see in Salesforce.com

I am trying to determine a way to audit which records a given user can see, by:
Object Type
Record Type
Count of records
Ideally, I would also be able to see which fields for each object/record type the user can see.
We will need to repeat this often, for different users and in different orgs, so I would like to avoid determining this manually.
My first thought was to create an app using the partner WSDL, but I would like to ask whether there are any easier approaches or perhaps existing solutions.
Thanks all

I think you can follow the documentation to solve this, using a query similar to this one:
SELECT RecordId
FROM UserRecordAccess
WHERE UserId = [single ID]
AND RecordId = [single ID] // or RecordId IN [list of IDs]
AND HasReadAccess = true
This query returns the records to which the queried user has read access.
In addition, you can add LIMIT 1 and retrieve the object type, record type, and so on from the record's metadata.
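For example, a single-record check through the Partner API might look like the sketch below (not an official sample; it assumes the same SforceService proxy used in the next answer, and the user/record IDs are placeholders):
var service = new SforceService();
var login = service.login("UserName", "Password");
service.Url = login.serverUrl;
service.SessionHeaderValue = new SessionHeader { sessionId = login.sessionId };

string userId = "005xxxxxxxxxxxx";   // placeholder: ID of the user being audited
string recordId = "001xxxxxxxxxxxx"; // placeholder: ID of the record to check

// UserRecordAccess requires filtering on both UserId and RecordId
var access = service.query(string.Format(
    "SELECT RecordId, MaxAccessLevel FROM UserRecordAccess " +
    "WHERE UserId = '{0}' AND RecordId = '{1}' AND HasReadAccess = true LIMIT 1",
    userId, recordId));
bool canRead = access != null && access.size > 0;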

I ended up using the code below (C#, using the Partner WSDL) to get an idea of what kinds of objects the user had visibility into.
Just a quick'n'dirty utility for my own use (read: not prod code):
var service = new SforceService();
var result = service.login("UserName", "Password");
service.Url = result.serverUrl;
service.SessionHeaderValue = new SessionHeader { sessionId = result.sessionId };

// Describe every object in the org, then probe them in batches of 100.
var queryResult = service.describeGlobal();
int total = queryResult.sobjects.Count();
int batchSize = 100;
var batches = Math.Ceiling(total / (double)batchSize);

using (var output = new StreamWriter(@"C:\test\sfdcAccess.txt", false))
{
    for (int batch = 0; batch < batches; batch++)
    {
        var toQuery =
            queryResult.sobjects.Skip(batch * batchSize).Take(batchSize).Select(x => x.name).ToArray();
        var batchResult = service.describeSObjects(toQuery);
        foreach (var x in batchResult)
        {
            if (!x.queryable)
            {
                Console.WriteLine("{0} is not queryable", x.name);
                continue;
            }

            // Sample up to 100 records the user can actually see for this object.
            var test = service.query(string.Format("SELECT Id FROM {0} LIMIT 100", x.name));
            if (test == null || test.records == null)
            {
                Console.WriteLine("{0}: null records", x.name);
                continue;
            }
            foreach (var record in test.records)
            {
                output.WriteLine("{0}\t{1}", x.name, record.Id);
            }
            Console.WriteLine("{0}:\t{1} records", x.name, test.size);
        }
    }
    output.Flush();
}
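Since the question also asked about fields and record types, the describe results already carry that information. A possible extension of the loop body above (a sketch; it assumes the describe output is returned in the context of the logged-in user, so fields hidden by field-level security simply don't appear):
// Inside the foreach over batchResult, after the queryable check:
foreach (var field in x.fields)
{
    output.WriteLine("{0}\tFIELD\t{1}\t{2}", x.name, field.name, field.type);
}
if (x.recordTypeInfos != null)
{
    foreach (var rt in x.recordTypeInfos)
    {
        if (rt.available)
        {
            output.WriteLine("{0}\tRECORDTYPE\t{1}", x.name, rt.name);
        }
    }
}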

Related

How to avoid adding duplicate data from a CSV file to a SQL Server database, using CSVHelper and C# Blazor

I have my database table named 'JobInfos' in SQL Server which contains many columns.
JobID - (int) auto-populated incrementing value when data is added
OrgCode - (string)
OrderNumber - (int)
WorkOrder - (int)
Customer - (string)
BaseModelItem - (string)
OrdQty - (int)
PromiseDate - (string)
LineType - (string)
This table gets written to many times a day using a Blazor application with Entity Framework and CSVHelper. This works perfectly. All rows from the CSV file are added to the database.
if (fileExist)
{
    using (var reader = new StreamReader(@path))
    using (var csv = new CsvReader(reader, config))
    {
        var records = csv.GetRecords<CsvRow>().Select(row => new JobInfo()
        {
            OrgCode = row.OrgCode,
            OrderNumber = row.OrderNumber,
            WorkOrder = row.WorkOrder,
            Customer = row.Customer,
            BaseModelItem = row.BaseModelItem,
            OrdQty = row.OrdQty,
            PromiseDate = row.PromiseDate,
            LineType = row.LineType,
        });
        using (var db = new ApplicationDbContext())
        {
            while (!reader.EndOfStream)
            {
                if (lineNumber != 0)
                {
                    db.AddRange(records.ToList());
                    db.SaveChanges();
                }
                lineNumber++;
            }
            NavigationManager.NavigateTo("/", true);
        }
    }
}
As these CSV files can contain rows that are already in the database table, I am getting duplicate records, which forces the users to manually delete the newer duplicate rows to keep only the original entry.
I have no control over the CSV files or their creation. I am trying to add only rows that contain new data, based on the WorkOrder number, which cannot be the same as any other.
I found another post here on Stack Overflow which helps, but I am stuck on a remaining error I can't figure out.
The Helpful post
I changed my code here...
if (lineNumber != 0)
{
    var recordworkorder = records.Select(x => x.WorkOrder).ToList();
    var workordersindb = db.JobInfos.Where(x => recordworkorder.Contains(x.WorkOrder)).ToList();
    var workordersNotindb = records.Where(x => !workordersindb.Contains(x.WorkOrder));
    db.AddRange(records.ToList(workordersNotindb));
    db.SaveChanges();
}
but this line...
var workordersNotindb = records.Where(x => !workordersindb.Contains(x.WorkOrder));
throws an error at the end (x.WorkOrder) - CS1503 Argument 1: cannot convert from 'int' to 'DepotQ4.Data.JobInfo'
WorkOrder is an int
JobID is the Primary Key and an int
Every record in the table must have a unique WorkOrder
I am not sure what I am not seeing. Could use some help here please?
Your variable workordersindb is a List<JobInfo>. So in records.Where(x => !workordersindb.Contains(x.WorkOrder)) you are asking whether a list of JobInfo contains the int x.WorkOrder, which is exactly what the compiler is complaining about. workordersindb needs to be a List<int> in order to be used with Contains here. records would have had the same issue, but you already solved that by creating the variable recordworkorder with records.Select(x => x.WorkOrder), which gives you a List<int>.
if (lineNumber != 0)
{
    var recordworkorder = records.Select(x => x.WorkOrder).ToList();
    var workordersindb = db.JobInfos.Where(x => recordworkorder.Contains(x.WorkOrder)).Select(x => x.WorkOrder).ToList();
    var workordersNotindb = records.Where(x => !workordersindb.Contains(x.WorkOrder));
    db.JobInfos.AddRange(workordersNotindb);
    db.SaveChanges();
}
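As a side note, if the CSV batches grow large, a HashSet<int> makes the Contains check cheaper than scanning a List<int> for every row. A sketch under the same assumptions as the code above:
if (lineNumber != 0)
{
    var csvWorkOrders = records.Select(x => x.WorkOrder).ToList();

    // WorkOrder values already in the table, kept in a HashSet for O(1) lookups.
    var existingWorkOrders = new HashSet<int>(
        db.JobInfos.Where(x => csvWorkOrders.Contains(x.WorkOrder))
                   .Select(x => x.WorkOrder));

    var newRows = records.Where(x => !existingWorkOrders.Contains(x.WorkOrder)).ToList();
    db.JobInfos.AddRange(newRows);
    db.SaveChanges();
}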

Why am I getting a DML error in a Salesforce update trigger?

I'm trying to update BillingCountry in Salesforce via the Bulk API from a CSV; there are 1025 entries in the CSV file. The Account IDs in the CSV have a BillingCountry different from what is in Salesforce, but I get the following error:
CANNOT_INSERT_UPDATE_ACTIVATE_ENTITY:test_tr_U: System.LimitException: Too many DML rows: 10001:--
Here is my Trigger:
Trigger test_tr_U on Account (after update)
{
    if (checkRecursion.runOnce())
    {
        set<ID> ids = Trigger.newMap.keyset();
        List<TestCust__c> list1 = new List<TestCust__c>();
        for (ID id : ids)
        {
            for (Account c : Trigger.new)
            {
                Account oldObject = Trigger.oldMap.get(c.ID);
                if (c.Billing_Country__c != oldObject.Billing_Country__c ||
                    c.BillingCountry != oldObject.BillingCountry)
                {
                    TestCust__c change = new TestCust__c();
                    change.Field1__c = 'update';
                    change.Field2__c = id;
                    change.Field3__c = false;
                    change.Field4__c = 'TESTCHANGE';
                    list1.add(change);
                }
            }
        }
        Database.DMLOptions dmo = new Database.DMLOptions();
        dmo.assignmentRuleHeader.useDefaultRule = true;
        Database.insert(list1, dmo);
    }
}
Looping over every ID and then over every updated account results in the creation of #ids * #accounts records. Since those are the same value, you'll create #accounts^2 (1,050,625) TestCust__c records.
Salesforce splits the 1025 records into chunks of 200 and, since a Bulk API request caused the trigger to fire multiple times, governor limits are reset between these trigger invocations for the same HTTP request.
Even so, on each trigger run you're creating 200 * 200 = 40,000 TestCust__c records, which is far above the limit of 10,000 DML rows per transaction, so the system raises a LimitException.
You should remove the outer loop: it's simply wrong.
Trigger test_tr_U on Account (after update)
{
    if (checkRecursion.runOnce())
    {
        List<TestCust__c> list1 = new List<TestCust__c>();
        for (Account c : Trigger.new)
        {
            Account oldObject = Trigger.oldMap.get(c.Id);
            if (c.Billing_Country__c != oldObject.Billing_Country__c ||
                c.BillingCountry != oldObject.BillingCountry)
            {
                TestCust__c change = new TestCust__c();
                change.Field1__c = 'update';
                change.Field2__c = c.Id;
                change.Field3__c = false;
                change.Field4__c = 'TESTCHANGE';
                list1.add(change);
            }
        }
        Database.DMLOptions dmo = new Database.DMLOptions();
        dmo.assignmentRuleHeader.useDefaultRule = true;
        Database.insert(list1, dmo);
    }
}

Problem when inserting two consecutive records into the database

I have this function and it is working perfectly
public DemandeConge Creat(DemandeConge DemandeConge)
{
    try
    {
        var _db = Context;
        int numero = 0;
        //??CompanyStatique
        var session = _httpContextAccessor.HttpContext.User.Claims.ToList();
        int currentCompanyId = int.Parse(session[2].Value);
        numero = _db.DemandeConge.AsEnumerable()
                    .Where(t => t.companyID == currentCompanyId)
                    .Select(p => Convert.ToInt32(p.NumeroDemande))
                    .DefaultIfEmpty(0)
                    .Max();
        numero++;
        DemandeConge.NumeroDemande = numero.ToString();
        //_db.Entry(DemandeConge).State = EntityState.Added;
        _db.DemandeConge.Add(DemandeConge);
        _db.SaveChanges();
        return DemandeConge;
    }
    catch (Exception e)
    {
        return null;
    }
}
But when I try to insert another leave request directly after inserting one (without waiting or refreshing the page), an error appears saying that this new demand's id already exists.
I think I need to add a refresh after saving changes?
Any help is appreciated, thanks.
Code like this:
numero = _db.DemandeConge.AsEnumerable()
.Where(t => t.companyID == currentCompanyId)
.Select(p => Convert.ToInt32(p.NumeroDemande))
.DefaultIfEmpty(0)
.Max();
numero++;
is a very poor pattern. You should leave the generation of your "numero" (ID) up to the database via an identity column. Set this up in your DB (if DB-first) and set up your mapping for this column as DatabaseGenerated.Identity.
However, your code raises other questions: why is NumeroDemande a string instead of an int? That will be a bugbear for using an identity column.
The reason you want to avoid code like this is that each request has to query the database for the "max" ID. As soon as two requests run roughly simultaneously, both will see that the max ID is, say, 100 before either can reserve and insert 101, so both try to insert 101. With identity columns the database receives the two inserts and hands out IDs first-come-first-served. EF can automatically associate FKs with these new IDs for you when you set up navigation properties for the relations (rather than trying to set FKs manually, which is the typical reason developers try to fetch a new ID app-side).
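For example, with data annotations the identity mapping might look like this (a minimal sketch; the entity shape and key name are assumptions, not your actual schema):
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class DemandeConge
{
    // Hypothetical surrogate key: the database assigns the value on insert,
    // so the application never has to compute Max() + 1 itself.
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int DemandeCongeId { get; set; }

    // Kept only if the business number must remain; it is no longer the key.
    public string NumeroDemande { get; set; }

    public int companyID { get; set; }
}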
If you're stuck using an existing schema where the PK is a combination of company ID and this Numero column as a string then about all you can do is implement a retry strategy to account for duplicates:
const int MAXRETRIES = 5;
var session = _httpContextAccessor.HttpContext.User.Claims.ToList();
int currentCompanyId = int.Parse(session[2].Value);
int numero = 0;
int insertAttemptCount = 0;
while (insertAttemptCount < MAXRETRIES)
{
    try
    {
        numero = Context.DemandeConge
            .Where(t => t.companyID == currentCompanyId)
            .Select(p => Convert.ToInt32(p.NumeroDemande))
            .DefaultIfEmpty(0)
            .Max() + 1;
        DemandeConge.NumeroDemande = numero.ToString();
        Context.DemandeConge.Add(DemandeConge);
        Context.SaveChanges();
        break;
    }
    catch (UpdateException)
    {
        insertAttemptCount++;
        if (insertAttemptCount >= MAXRETRIES)
            throw; // Could not insert; throw and handle the exception rather than return null.
    }
}
return DemandeConge;
Even this won't be foolproof and can result in failures under load, and it is a lot of code to work around a poor DB design, so my first recommendation would be to fix the schema; code like this is brittle and prone to errors.

MVC Model is using values for old table entries, but new entries return NULL

I have an interesting little problem. My controller is assigning values to the properties in my model using two tables. In one of the tables, I have some entries that I made a while ago, and also some that I've just added recently. The old entries are assigned values correctly, but the new entries come back NULL even though they're in the same table and were created in the same fashion.
Controller
[HttpPost]
[Authorize]
public ActionResult VerifyReservationInfo(RoomDataView model)
{
    string loginName = User.Identity.Name;
    UserManager UM = new UserManager();
    UserProfileView UPV = UM.GetUserProfile(UM.GetUserID(loginName));
    RoomAndReservationModel RoomResModel = new RoomAndReservationModel();
    List<RoomProfileView> RoomsSelectedList = new List<RoomProfileView>();
    GetSelectedRooms(model, RoomsSelectedList);
    RoomResModel.RoomResRmProfile = RoomsSelectedList;
    RoomResModel.GuestId = UPV.SYSUserID;
    RoomResModel.FirstName = UPV.FirstName;
    RoomResModel.LastName = UPV.LastName;
    RoomResModel.PhoneNumber = UPV.PhoneNumber;
    return View(RoomResModel);
}
GetUserProfile from the manager
public UserProfileView GetUserProfile(int userID)
{
    UserProfileView UPV = new UserProfileView();
    ResortDBEntities db = new ResortDBEntities();
    {
        var user = db.SYSUsers.Find(userID);
        if (user != null)
        {
            UPV.SYSUserID = user.SYSUserID;
            UPV.LoginName = user.LoginName;
            UPV.Password = user.PasswordEncryptedText;

            var SUP = db.SYSUserProfiles.Find(userID);
            if (SUP != null)
            {
                UPV.FirstName = SUP.FirstName;
                UPV.LastName = SUP.LastName;
                UPV.PhoneNumber = SUP.PhoneNumber;
                UPV.Gender = SUP.Gender;
            }

            var SUR = db.SYSUserRoles.Find(userID);
            if (SUR != null)
            {
                UPV.LOOKUPRoleID = SUR.LOOKUPRoleID;
                UPV.RoleName = SUR.LOOKUPRole.RoleName;
                UPV.IsRoleActive = SUR.IsActive;
            }
        }
    }
    return UPV;
}
The issue I see is that this database has a somewhat poor design, and that particular record fell into the trap of that poor design. Consider that you have two IDs on that table:
SYSUserProfileID
SYSUserID
That's usually an indication of a bad design (though I'm not sure you can change it); if you can, you should merge anything that uses SYSUserID to use SYSUserProfileID.
This is bad because that last row has two different IDs. When you use db.Find(someId), Entity Framework looks up the primary key (SYSUserProfileID in this case), which is 19 for that row. But by the sounds of it, you also need to find it by the SYSUserID, which is 28 for that row.
Personally, I'd ditch SYSUserID if at all possible. Otherwise, you need to correct the code so that it looks for the right ID column at the right times (which will be a massive PITA in the future), or correct that record so that SYSUserID and SYSUserProfileID match. Either of these should fix this problem, but changing that record may break other things.
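If SYSUserID has to stay, another option is to stop relying on Find() (which always matches on the primary key) and query by the column you actually mean. A sketch against the posted GetUserProfile, assuming both tables carry a SYSUserID foreign key:
// Look the related rows up by SYSUserID explicitly instead of Find(userID),
// so new rows whose primary key no longer equals SYSUserID are still found.
var SUP = db.SYSUserProfiles.FirstOrDefault(p => p.SYSUserID == userID);
var SUR = db.SYSUserRoles.FirstOrDefault(r => r.SYSUserID == userID);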

Composite C1: How would I rewrite this SQL update statement to work in C#?

I have a piece of code in a sortable image grid which sends back a string of integers representing the user's new sort order for a given 'propid':
{ 'imgid': '4,2,3,5,6,7,8,9,1','propid':'391' }
The above shows 9 images on the screen. The db image table has both an image id (imgid) field and a sort sequence field (orderseq). I am using a custom namespace datatype:
connection.Get<ALocal.propimage>()
like all datatype connections in C1.
In direct SQL I would write this:
string[] q = imgid.Split(',');
string qry = "";
for (int i = 0; i < q.Length; i++)
{
    qry += "update ALocal_propimage set propimage_orderseq=" + (i + 1) + " where prop_id=" + propid + " and propimage_id=" + q[i] + " ;";
}
sqlHelper obj = new sqlHelper();
obj.ExecuteNonQuery(qry);
return "Record Updated";
How does this convert to C# using Composite C1 CMS's 'Updating Multiple Data' method? I keep failing at it.
The C1 site 'Updating Multiple Data' method rudimentary example is:
using (DataConnection connection = new DataConnection())
{
    var myUsers = connection.Get<Demo.Users>().Where(d => d.Number < 10).ToList();
    foreach (Demo.Users myUser in myUsers)
    {
        myUser.Number += 10;
    }
    connection.Update<Demo.Users>(myUsers);
}
Any help would be really appreciated.
You would need to split your update code into a get and an update, to let C1 know exactly which entities you would like to update. So something like this:
using (DataConnection connection = new DataConnection())
{
    for (int i = 0; i < q.Length; i++)
    {
        // Use == for comparison; if PropId/PropImageId are stored as numbers
        // rather than strings, parse propid and q[i] first.
        var propimages = connection.Get<ALocal.propimage>()
            .Where(o => o.PropId == propid && o.PropImageId == q[i])
            .ToList();
        foreach (var o in propimages)
        {
            o.OrderSeq = i + 1;
        }
        connection.Update(propimages);
    }
}
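If the number of images per property grows, you could also fetch all of the property's images once and update the sequence in a single pass, for example (a sketch; it assumes PropId and PropImageId are stored as strings, matching the code above):
using (DataConnection connection = new DataConnection())
{
    // One query for the whole property instead of one query per image.
    var images = connection.Get<ALocal.propimage>()
        .Where(o => o.PropId == propid)
        .ToList();

    for (int i = 0; i < q.Length; i++)
    {
        var image = images.FirstOrDefault(o => o.PropImageId == q[i]);
        if (image != null)
        {
            image.OrderSeq = i + 1;
        }
    }

    connection.Update<ALocal.propimage>(images);
}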
