One time user account - database

I have a billing system project with a user accounts database whose tables store user debt, transactions, statistics, etc.
Besides the normal user accounts, I need another type of account that is temporary and only required while the user is using the provided services, until he is billed and the account is closed.
My first thought was to create a new user for each new service sale/use, but it seems I would end up with thousands of accounts pretty soon.
The second approach would be to have a pool of temporary accounts. The system would generate a new account when no free temporary account exists, or assign one from the pool when required.
So basically these temporary accounts would identify an actual person and his transactions for a limited time.
Any ideas for the best practices in my situation?

I think you are going to want to have a new account for each person simply so that you have a paper trail. I'm assuming there is going to be some kind of charge against this account. What happens when the charge is contested and you've deleted the information about the user? You will, at the very least, need to archive these records some place.
However, if you insist on having temporary accounts, you could use a "real" account record, mark it as temporary, and then run a cleanup routine periodically that deletes (or archives) temporary accounts that are no longer in use.
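For what it's worth, here is a minimal sketch of such a cleanup routine in Python with sqlite3. The accounts/accounts_archive tables and the is_temporary/closed_at columns are assumed names, not your actual schema:

    import sqlite3
    from datetime import datetime, timedelta

    # Keep closed temporary accounts around long enough to survive
    # contested charges before they are archived and removed.
    RETENTION = timedelta(days=90)

    def cleanup_temporary_accounts(conn: sqlite3.Connection) -> int:
        """Archive, then delete, temporary accounts that were closed
        before the retention cutoff. Returns the number removed."""
        cutoff = (datetime.utcnow() - RETENTION).isoformat()
        cur = conn.cursor()
        # Copy to an archive table first so the paper trail survives.
        cur.execute(
            "INSERT INTO accounts_archive "
            "SELECT * FROM accounts "
            "WHERE is_temporary = 1 AND closed_at < ?", (cutoff,))
        cur.execute(
            "DELETE FROM accounts "
            "WHERE is_temporary = 1 AND closed_at < ?", (cutoff,))
        conn.commit()
        return cur.rowcount

Run it from a nightly scheduled job; accounts that are still open (closed_at is NULL) fail the comparison and are left untouched.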

Restrict Access for users on Leave

How do I restrict access to the Salesforce application for users while they are on leave, based on the leave start and end dates?
Freeze the user on the leave start date and then unfreeze on the leave end date?
Any other automation approach?
What exactly do you want to achieve? A nightly batch job that does either of these should be enough. I'm not sure if you can build time-based workflows/processes on users.
You can (de)activate them, although it's a bit of a nuclear option. Other users might be impacted when they work with deactivated users' data and hit the "operation was performed with inactive user" error.
You can (un)freeze manually or by modifying the UserLogin table; each user will have exactly one record in it. https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_objects_userlogin.htm
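If you'd rather script the (un)freezing than click through Setup, here's a rough sketch using the simple_salesforce Python client. The credentials and the nightly scheduling around it are assumptions; the IsFrozen flag on UserLogin is the documented piece:

    from simple_salesforce import Salesforce

    # Assumed credentials; in a real batch job these come from config.
    sf = Salesforce(username="admin@example.com", password="...",
                    security_token="...")

    def set_frozen(user_id: str, frozen: bool) -> None:
        """Freeze or unfreeze a user by flipping IsFrozen on their
        UserLogin record (each user has exactly one)."""
        result = sf.query(
            f"SELECT Id FROM UserLogin WHERE UserId = '{user_id}'")
        login_id = result["records"][0]["Id"]
        sf.UserLogin.update(login_id, {"IsFrozen": frozen})

A nightly job could then query users whose leave starts or ends that day and call set_frozen accordingly.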
If you want them to be able to log in but only do a limited set of things, you could look at your sharing rules and temporarily change their role/group/whatever. Or change their profile to read-only. Or unassign a permission set.
Then there are more sophisticated options, like disabling their SSO, having a login flow that checks something on the user record, or checking IP addresses (for example, allow login only from the office network, not from home). You could look into "high assurance sessions" and multi-factor authentication (take their work phone or RSA device when they go on holiday?), or there are interesting Trailhead modules about detecting/preventing suspicious activity: https://trailhead.salesforce.com/content/learn/modules/enhanced_transaction_security and https://trailhead.salesforce.com/en/content/learn/modules/event_monitoring

Conceptual issue: Verifying that two users are present

I'm a programmer who is about to release an intranet site where apprentices can rate their educators and vice versa. Currently the system is working as planned; however, HR wants some way to verify that the users are OK with their ratings. If not, they should be able to unlock their ratings so that the other person has to redo the rating.
Unfortunately, HR also wants to reduce the number of logins these users have to endure. In the worst case, users have to:
Log in to rate the educator/apprentice
Log in to unlock the rating
Log in to rate the educator/apprentice again
And so on...
The user who fills in the rating has to be user A, while the user who unlocks or confirms the rating has to be user B. User A can also unlock the rating if they have a correction.
This process has to be done twice - once for the educator, once for the apprentice.
There is usually only one workstation present (factory environment).
Possible solution:
My suggestion is a kind of meeting workflow. One user logs in, clicks a button in the appraisal and the workflow starts. The other user is prompted to log in. This starts a kind of "double session" with both users logged in at the same time. This is a way to verify that both of them are present in a meeting.
This process could be used for multiple ratings at the same time, guiding the users through the process one by one.
HR wants both of them to meet and discuss their ratings.
Are there any security and/or best practice concerns that I should be aware of? The system has to be ready on the first of August, so I'm really hoping to solve this issue as easily as possible. Are there better ways to do this?
It turns out that my idea of logging in two users at the same time wasn't so bad. First, a regular user logs in and launches the meeting mode. To verify that the other user is present, this second user logs in as well. Both users' data is now stored in the session, and the meeting workflow launches, guiding both of them through their appraisals. When the last appraisal is finished, the second user is logged out.
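In case it helps someone, here is a minimal sketch of the "double session" assuming a Flask-style server-side session; verify_credentials and the routes are placeholders for the real intranet code:

    from flask import Flask, request, session

    app = Flask(__name__)
    app.secret_key = "change-me"  # use a real secret in production

    def verify_credentials(username: str, password: str):
        """Hypothetical stand-in for the site's real authentication:
        look the user up, check the password hash, return a user id."""
        return None

    @app.route("/meeting/start", methods=["POST"])
    def start_meeting():
        # User A is already authenticated; user B proves presence by
        # logging in on the same workstation.
        user_b = verify_credentials(request.form["username"],
                                    request.form["password"])
        if user_b is None or user_b == session.get("user_a"):
            return "Second login failed", 403
        session["user_b"] = user_b       # both identities in one session
        session["meeting_active"] = True
        return "Meeting mode started"

    @app.route("/meeting/finish", methods=["POST"])
    def finish_meeting():
        # Log user B out the moment the last appraisal is confirmed, so
        # their authority can't leak into later requests made by user A.
        session.pop("user_b", None)
        session["meeting_active"] = False
        return "Meeting mode ended"

The important security point is the last handler: the second identity must be dropped as soon as the workflow ends, not left in the session.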
This question is solved.

Handling user activity on web portal with performance

The users on my website perform operations like login, logout, profile updates, password changes, etc. I am trying to come up with something that can store these user activities and also return the matching records when some system asks for them by userId.
From what I can tell, the workload is write-intensive (users log into the website very often and I have to record each login). Only once in a while (say, when a user clicks on their history, or some reporting team needs it) are the records read and returned.
So my application is write-intensive.
There are various approaches I can think of.
Approach 1: The system that receives those user activities keeps writing them into a queue, and another component keeps fetching from that queue (periodically, or when it fills up) and writes them into the database (which has been sharded, say by a hash of userId).
The problem with this approach is that if my activity manager runs on multiple nodes, it has to send those records to various shards, which means a lot of data movement over the network.
Approach 2: The system that receives those user activities keeps writing them into a queue, and another component keeps fetching from that queue (periodically, or when it fills up) and writes them into a read-through/write-through cache, which would take care of writing into the database.
The problem with this approach is that I do not know whether I can control where those records get written (i.e. to which shard). Basically, I do not know how a write-through cache behaves here (does it map to a local DB, or can it manage to send data to the shards?).
Approach 3: The login operation is the most common user activity in my system. I can have a separate queue just for logins, which must be periodically flushed to disk.
Approach 4: Use some cloud-based storage that acts as an in-memory queue where data coming from all nodes is stored. This would be a reliable cache that guarantees no data loss. Periodically read from this cache and store the data into the database shards.
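To make Approach 1 concrete, here is a minimal Python sketch of the queue plus batch writer with stable hash-based shard routing; bulk_insert, the shard count, and the batch size are placeholders:

    import queue
    import threading
    import zlib

    NUM_SHARDS = 4
    BATCH_SIZE = 100
    activity_queue: "queue.Queue[dict]" = queue.Queue()

    def shard_for(user_id: str) -> int:
        # crc32 is stable across processes (unlike Python's built-in
        # hash()), so every node routes a given user to the same shard.
        return zlib.crc32(user_id.encode()) % NUM_SHARDS

    def record_activity(user_id: str, action: str) -> None:
        """Called on the request path: enqueue and return immediately."""
        activity_queue.put({"user_id": user_id, "action": action})

    def bulk_insert(shard_id: int, records: list) -> None:
        """Hypothetical stand-in for one multi-row INSERT on a shard."""

    def writer_loop() -> None:
        """Background consumer: drain the queue in batches, group the
        records per shard, and issue one bulk write per shard."""
        while True:
            batch = [activity_queue.get()]  # block until work arrives
            while len(batch) < BATCH_SIZE and not activity_queue.empty():
                batch.append(activity_queue.get())
            by_shard: dict = {}
            for rec in batch:
                by_shard.setdefault(shard_for(rec["user_id"]), []).append(rec)
            for shard_id, records in by_shard.items():
                bulk_insert(shard_id, records)

    threading.Thread(target=writer_loop, daemon=True).start()

Note the queue here is in-process, so a crash loses whatever hasn't been flushed; that is exactly why a durable, replicated queue matters for the no-data-loss requirement below.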
There are many problems to solve:
1. Ensuring I do not lose data (what kind of replication to use, i.e. a queue that ensures reliability)
2. Ensuring my frequent writes do not degrade performance
3. Avoiding a single point of failure
4. Achieving infinite scale
I need suggestions on the above, ideally based on existing solutions.

Delphi Solution for data replication between two remote sites loosely connected

I'm using Delphi XE4 Architect (Delphi XE3 is OK as well).
I need to find a smart solution to the following problem,
and I would like to use one of these frameworks: kbmMW, RemObjects SDK / DataAbstract, or RealThinClient.
Currently I have an application using a very simple MSSQL database at site A that is used by users at site B through remote desktop.
The application sometimes needs to show some pictures and also view some PDFs, but it is mostly text data entry.
There is no particular reason for me to use MSSQL,
but it is a database that I found already active and populated, and I have not built it myself.
And now it would be complicated to change it.
(The database is not important; I am not using specific features, stored procedures, or triggers.)
Users at site B are connected to site A via a very slow network connection,
and occasionally the connection is unavailable for a few hours, up to a day (this is the major problem).
The connection situation, unfortunately, cannot be improved for various reasons.
The database is quite simple and has many tables that hardly ever change;
about ten, however, undergo daily updates and may potentially be subject to concurrent changes.
Mainly, records in these tables are locked for update
by a single user, who edits some fields and then saves, releasing the lock.
I would like to set up something very different to optimize performance.
Users at site A have higher priority; they are more important, because site A is the headquarters.
I would like to have a copy of the site A database at site B,
so that users at site B can work locally, much faster, without using remote desktop to connect to site A.
The RDP protocol is not very efficient, and in any case, if the connection is absent, users cannot work.
Synchronizing record locks between the two databases may not be a big problem.
Basically, when a user at site B acquires an edit lock on a record in database B,
obviously a user at site A should not be able to modify the same record in the site A database.
This should also work in the opposite direction, of course.
My big problem is figuring out how best to handle the situation that occurs
when the connection between B and A is unavailable for some hours (while transactions/events keep accumulating at site B).
Events at site A generally have priority (on collision) over events at site B.
Users at site B must be able to continue working.
When the connection becomes active again, the changes should be sent to the database at site A.
Obviously this can result in conflicts, but the changes made
by site B users can be discarded, or a selective merge can be done
under the supervision of a site B user, with record-by-record approval.
Well, I hope the scenario is explained clearly enough.
Additional info:
The DB schema is very simple: only tables, no triggers or stored procedures. So I can build one as an example, but imagine 10 tables that can be updated concurrently.
The DB is used by the sales department's desktop app, so it contains highly confidential data.
The remote connection is typically 512 Kbit at most, but the main problem is that the connection may sometimes be down,
and users at the remote site must be able to work anyway. This is the main focus.
The total volume of daily updates would be at most 10 MB, compressed, for the DB connections alone. There is some other data synchronized
over the same connection, but it is not part of this job.
I don't want to use MSSQL-specific tools or services (replication and so on), because the DB could change in the future.
Thanks
We do almost exactly this using a Delphi client app, a kbmMW-based Delphi server app, and an MSSQL database (though it used to work quite happily on a DBISAM database too).
We have some tables that only the head office site users are allowed to modify. The smaller tables are transferred in their entirety each time there is a "merge". The larger tables and the transaction-type tables all have a date-added and/or date-modified field, and only those records that have been changed or added in the last 3 weeks or so (configurable) are transferred. This means sites can still update to the latest data even if they have been disconnected for quite some time - we used to have clients in remote places on dubious dial-up lines!
We only run the merge routines once or twice a day but it would work equally well on an hourly basis or other time schedule.
At given times of day each site (including head office) "exports" its changed/new records to files (e.g. client dataset tables or similar). These are then zipped up by the application and placed in an "outgoing" folder. The zip file is named based on the location ID, date, time, etc. The files are transferred by some external means, e.g. via FTP, file share, or email. Each branch office sends its data files to head office and head office transfers its data to each branch. The files arrive, by whatever means, in an "incoming" folder.
On a regular basis (eg hourly) each location does a check on the incoming folder to see if there is anything new for it to import. If so it adds all the new records, branch locations overwrite the head-office data tables with the new ones and edited records are merged in "somehow". This is the tricky bit. The easiest policy is "head office wins" so all edits are accepted unless there is a conflict in which case the head office version wins. Alternatively you could use "last edited wins" - but then you need to make sure clocks are in sync across locations. The other option is to add conflicting records to some form of "suspense" status and let an end user decide at some point in the future. We do this on one data set. Whichever conflict method you choose you need to record each decision in some form of log table and prompt an administrative level user to check occasionally.
When the head office imports data or when data is added at the head office then a field is set to indicate the data is part of the master data. When branches add data this field is empty to indicate it has yet to reach the master set. This helps when branches export their data as they can include all data that doesn't have this field set.
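Roughly, the import/merge step looks like this as seen from a branch office importing the head-office file. This is a simplified Python sketch, not our production Delphi code; the record shape and log_conflict are stand-ins:

    # Each table is keyed by GUID; every record carries a modified_at
    # timestamp and the is_master flag described above.

    def log_conflict(guid, local_rec, incoming_rec):
        """Stand-in: park the pair in a 'suspense' table for an admin."""

    def merge_incoming(local: dict, incoming: dict,
                       head_office_wins: bool) -> dict:
        """Merge an imported batch into the local table. New records are
        added; conflicting edits follow the chosen policy."""
        merged = dict(local)
        for guid, record in incoming.items():
            if guid not in merged:
                merged[guid] = record        # brand-new record: just add
            elif record["modified_at"] != merged[guid]["modified_at"]:
                if head_office_wins:
                    merged[guid] = record    # policy: head office wins
                else:
                    log_conflict(guid, merged[guid], record)
        return merged

    def export_changed(table: dict) -> dict:
        """Branch-side export: everything not yet in the master set."""
        return {g: r for g, r in table.items() if not r["is_master"]}

Whatever the conflict policy, the key design choice is that the merge is a pure function of the two datasets plus the policy, so it can run unattended and be audited afterwards.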
We have found that you can't run the merge interactively as you'll end up never getting any work done and you won't be able to run the merge at night etc. It needs to be fully automated with the ability for an admin user to make adjustments at some point after the fact.
We've been running this approach for several years now on multi-site operations, and once it settled down it has worked pretty much flawlessly. With 2 export/import schedules per day we have found the branch offices run perfectly well and are only ever missing less than a day's worth of transactions. It works well in our scenario, where we don't often have conflicts. Exported data is in the region of 5-10 MB, which zips up plenty small enough.
Primary keys are vital! We use a GUID and it hasn't let us down yet.
The choice of database server and n-tier framework are, actually, irrelevant. It's the process that matters here.
Basically, when a user at site B acquires an edit lock on a record in database B, obviously a user at site A should not be able to modify the same record in the site A database. This should also work in the opposite direction, of course.
I can't see how you're ever going to make this bit work reliably if both sites have their own copy of the database and you're allowing for dropped/non-existent inter-site connections on occasion.

Centralized data access or variables

I'm trying to find a way to access a centralized database for both retrieval and updates.
The following is what I'm looking for:
Server 1 holds this variable, for example:
int counter;
Server 2 will be interacting with the user and will increase the counter whenever the user uses the service, until a certain threshold is reached. When this threshold is reached, server 2 will start rejecting the user's access.
Also, the user will be able to use multiple servers (like server 2) from multiple locations, and each time the user accesses any server, the counter will be increased.
I tried Google, but it's hard to search for something without a name.
One approach to designing this is sharding by user, i.e. splitting the users between your servers based on the user's ID. That is, if you have 10 servers, then users with IDs ending in 2 would have all of their data stored on server 2, and so on. This assumes that user IDs are distributed uniformly.
One other approach is to shard the users by location - if you have servers in Asia vs Europe, for example. You'd need a property in the User record that tells you where the user is located; based on that, you'll know which server to route them to.
Ultimately, all of these design options have a concept of "where does the master record for a user reside?" Each of these approaches attempts to definitively answer this question.
A different category of approaches has to do with multi-master replication, which is supported by some database vendors; this approach does not scale as well (i.e. it's hard to get it to scale to 20 servers), but you might want to look into it, too.
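As an illustration of the shard-by-user idea, here is a minimal Python sketch of the routing plus the counter check. The server names, threshold, and in-memory dict are placeholders for each shard's real database:

    import zlib

    SHARDS = ["counter-db-0", "counter-db-1", "counter-db-2"]  # placeholders
    THRESHOLD = 100

    def home_shard(user_id: str) -> str:
        # The master copy of a user's counter always lives on the same
        # shard, so concurrent increments never race across servers.
        return SHARDS[zlib.crc32(user_id.encode()) % len(SHARDS)]

    # Placeholder for the shard's datastore; in reality each entry
    # would be a row on the server that home_shard() points at.
    counters: dict = {}

    def try_use_service(user_id: str) -> bool:
        """Increment the user's counter on their home shard; reject the
        request once the threshold has been reached."""
        shard = home_shard(user_id)  # every front-end routes here first
        count = counters.get((shard, user_id), 0) + 1
        counters[(shard, user_id)] = count
        return count <= THRESHOLD

The point of the routing function is that every server (like server 2) computes the same home shard for a given user, so they all increment the one master counter instead of keeping divergent local copies.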
