Active Directory lookup failure due to replication delay

We have a third-party tool that we use to create AD objects (users and groups). This tool uses ADSI to create the objects, and we do not and cannot specify which DC it writes to. As such, it might write to DC1 today and DC2 tomorrow. Everything replicates around eventually, so no worries there.
Our process for creating groups looks like this:
1) Issue the group create to the third-party tool.
2) On success, look up the group object in AD via LDAP calls (this is a Java app) to get the SID, since the third-party tool doesn't return it.
The problem is that the Java LDAP calls do specify a DC when performing a lookup. Say Java is set to read from DC1: if the third-party tool writes to DC2, the Java lookup against DC1 fails to find the group.
The AD replication delay is small, so if we add a 15-second delay between the create and the lookup it works, but it's a little ugly.
I also tried querying all DCs from Java. That works for the example above, but the same basic trouble remains when we update an attribute on a user or group and immediately try to read it back. A delay seems to be the only approach that works, but it feels like there should be something better.

Third-party tools should not be used this way to update a directory. The eventual-consistency model prevents the results from being predictable in any meaningful way. The correct procedure is to perform the update (add/modify/delete) in the application code using an ADD, MODIFY, DELETE, or MODIFY DN request with the post-read request control (RFC 4527) attached. This control is defined by the standards process and guarantees a predictable result if the update succeeded. Please carefully study the information at "LDAP: Programming Practices" and its accompanying article.
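For illustration, a minimal sketch of that approach, assuming the UnboundID LDAP SDK (any client library that supports RFC 4527 controls would do; the host, credentials, and DNs below are placeholders):

    import com.unboundid.ldap.sdk.AddRequest;
    import com.unboundid.ldap.sdk.Attribute;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPException;
    import com.unboundid.ldap.sdk.LDAPResult;
    import com.unboundid.ldap.sdk.controls.PostReadRequestControl;
    import com.unboundid.ldap.sdk.controls.PostReadResponseControl;

    public class CreateGroupWithPostRead {
        public static void main(String[] args) throws LDAPException {
            // Placeholder host and credentials; adjust for your environment.
            LDAPConnection conn = new LDAPConnection("dc1.example.com", 389,
                    "CN=svc-account,CN=Users,DC=example,DC=com", "password");
            try {
                AddRequest add = new AddRequest(
                        "CN=NewGroup,OU=Groups,DC=example,DC=com",
                        new Attribute("objectClass", "top", "group"),
                        new Attribute("sAMAccountName", "NewGroup"));

                // Ask the DC that performs the write to return the entry's
                // attributes in the add response itself. There is no second
                // lookup, so replication delay cannot bite.
                add.addControl(new PostReadRequestControl("objectSid"));

                LDAPResult result = conn.add(add);
                PostReadResponseControl postRead =
                        PostReadResponseControl.get(result);
                if (postRead != null) {
                    byte[] sid = postRead.getEntry()
                            .getAttributeValueBytes("objectSid");
                    // Decode the binary SID as needed.
                }
            } finally {
                conn.close();
            }
        }
    }

The same pattern applies to modifies: attach the control to the MODIFY request and read the attribute back from the response, instead of issuing a separate search that may land on a different DC or race replication.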

Related

Creating a platform for automating supply chain functions with Memgraph

I would like to know whether Memgraph is a good option for building a platform that automates supply chain functions, something like Procure-to-Pay, since all the documents within the process hold highly connected data, e.g. a purchase order number that ties into all the other documents.
The use case seems very suited to be represented and handled as a graph. All the connections between documents, stakeholders and supply chain links can be easily modeled as a graph. It would also be very easy and computationally efficient to find connections to other documents in the network.
You can also use some built-in features to help automate the process. For example, triggers and query modules:
• Triggers can be used to define events on which a procedure will fire. For example, if you need to notify a person about a document once it's created and added to the database, you can create a trigger that fires every time a specific document is created and sends a notification (email, push notification, etc.) to the person responsible for it, or to all users connected to the document in the database (a sketch of such a trigger follows below).
• Query modules can be used to create custom procedures, and you can implement them in Python, C/C++, or Rust. For example, you could implement the aforementioned notification procedure as a query module: a simple Python script that sends an email to the specified address.
Also, depending on the architecture of your system, if you are using a message broker like Apache Kafka, you can connect it directly to Memgraph and ingest the data, instead of implementing an intermediary service that connects to the database.
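As a small illustration of the trigger idea, here is a sketch that registers such a trigger from Java over Bolt (Memgraph is Bolt-compatible, so the Neo4j driver can be used). The label and property names are hypothetical, and the trigger merely records a Notification node for a separate worker or query module to turn into an actual email or push notification:

    import org.neo4j.driver.AuthTokens;
    import org.neo4j.driver.Driver;
    import org.neo4j.driver.GraphDatabase;
    import org.neo4j.driver.Session;

    public class RegisterNotificationTrigger {
        public static void main(String[] args) {
            try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                    AuthTokens.none());
                 Session session = driver.session()) {
                // After every commit that creates a PurchaseOrder node,
                // record a Notification node describing what to send.
                session.run(
                    "CREATE TRIGGER notify_new_po ON () CREATE AFTER COMMIT "
                    + "EXECUTE UNWIND createdVertices AS v "
                    + "WITH v WHERE 'PurchaseOrder' IN labels(v) "
                    + "CREATE (:Notification {orderNumber: v.orderNumber})");
            }
        }
    }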
I hope this info is of help to you. Feel free to ping me if you have any further questions or need more info.

Proper change-logging impossible with Entity Framework?

I'd like to log all changes made to a SQL Azure database using Entity Framework 4.
However, I have failed to find a proper solution so far.
So far I can track the entities themselves by overriding SaveChanges() and using the ObjectStateManager to retrieve all added, modified, and deleted entities. This works fine. Unfortunately, I don't seem to be able to retrieve any useful information from RelationshipEntries. Our database model has some many-to-many relationships where I want to log new/modified/deleted entries too.
I want to store all changes in an Azure Storage, to be able to follow changes made by a user and perhaps roll back to a previous version of an entity.
Is there any good way to accomplish this?
Edit:
Our scenario is that we're hosting a RESTful web service that contains all the business logic and stores the data in the Azure SQL database. A client must be authenticated as a user with the web service, and I need to store which user changed the data.
See FrameLog, an Entity Framework logging library that I wrote for this purpose. It is open-source, including for commercial use.
Even if you don't want to use the library, you can look at the code to see one way of handling logging relationships. It handles all relationship multiplicities.
In particular, see the code for the private methods logRelationshipChange and logForeignKeyChange in the ChangeLogger class.
You can do it with a tracing provider.
You may want to consider just using a database trigger for this. Whenever a value in a table is changed, copy the row to another Archive table. It has worked pretty well for me.

Is there a way to prevent users from doing bulk entries in a PostgreSQL database?

I have four new data-entry users who use a particular GUI to create/update/delete entries in our main database. The GUI client lets them see database records on a map and make modifications there, which is fine and the preferred way of doing it.
But lately a lot of them have been accessing the database directly using pgAdmin and running bulk queries (update, insert, delete, etc.), which introduces a lot of problems, like people updating many records without realizing it or making mistakes while setting values. It also affects our logging procedures, as we calculate averages and timestamps for reporting purposes, and those are quite crucial to us.
So is there a way to prevent users from using pgAdmin (remember, a lot of them work from home and we have no access to their machines) and running SQL queries directly against the database?
We still have to give them access to certain tables and allow them to execute SQL as long as it comes through a certain client, but deny access to the same user when he or she tries to run a query directly against the database.
The only sane way to control access to your database is to convert your DB access to a three-tier structure. Build a middleware (maybe a REST API or something similar) and use that API from your app. The database should be hidden behind this middleware, so no direct access is possible. From the DB's point of view, there is no way to tell whether a connection comes from your app or from some other tool (pgAdmin, plain psql, or a custom-built client). Your database should be accessible only from trusted hosts, and clients should not have access to those hosts.
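As a toy illustration of the three-tier idea (JAX-RS is an assumption here, as are all the names): the middleware exposes only narrow, validated operations and holds the database credentials itself, so bulk SQL is simply not part of the surface users can reach.

    import jakarta.ws.rs.BadRequestException;
    import jakarta.ws.rs.POST;
    import jakarta.ws.rs.Path;
    import jakarta.ws.rs.PathParam;
    import jakarta.ws.rs.QueryParam;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    @Path("/records")
    public class RecordResource {

        // The only write operation exposed: update a single record's value.
        // Bulk updates and arbitrary SQL are not part of the API at all.
        @POST
        @Path("/{id}/value")
        public void updateValue(@PathParam("id") int id,
                                @QueryParam("value") String value) throws Exception {
            if (value == null || value.length() > 100) {
                throw new BadRequestException("invalid value");
            }
            // Credentials live here on the server; end users never get them.
            try (Connection db = DriverManager.getConnection(
                     "jdbc:postgresql://db-host:5432/maindb", "app_role", "secret");
                 PreparedStatement ps = db.prepareStatement(
                     "UPDATE records SET value = ? WHERE id = ?")) {
                ps.setString(1, value);
                ps.setInt(2, id);
                ps.executeUpdate();
            }
        }
    }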
This is only possible with a trick (which might get exploited too, but maybe your users are not smart enough).
In your client app, set some harmless parameter, e.g. geqo_pool_size=1001 (if it is normally 1000).
Then write a trigger that checks whether this parameter is set and raises "No access through pgAdmin" if it is not set the way your app sets it (and the username is not your admin username).
Alternatively: create a temporary table and check for its existence.
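A sketch of the trigger half of the trick, with the DDL issued over JDBC (geqo_pool_size is a real planner setting, as above; the table, function, and role names are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class InstallClientCheckTrigger {
        public static void main(String[] args) throws Exception {
            try (Connection db = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/maindb", "admin", "secret");
                 Statement st = db.createStatement()) {
                // Reject writes unless the session carries the marker value
                // that only the official client sets after connecting.
                st.execute(
                    "CREATE OR REPLACE FUNCTION require_official_client() "
                    + "RETURNS trigger AS $$ "
                    + "BEGIN "
                    + "  IF current_setting('geqo_pool_size') <> '1001' "
                    + "     AND current_user <> 'admin' THEN "
                    + "    RAISE EXCEPTION 'No access through pgAdmin'; "
                    + "  END IF; "
                    + "  IF TG_OP = 'DELETE' THEN RETURN OLD; END IF; "
                    + "  RETURN NEW; "
                    + "END $$ LANGUAGE plpgsql");
                st.execute(
                    "CREATE TRIGGER official_client_only "
                    + "BEFORE INSERT OR UPDATE OR DELETE ON records "
                    + "FOR EACH ROW EXECUTE FUNCTION require_official_client()");
                // The official client would run, right after connecting:
                //   SET geqo_pool_size = 1001;
            }
        }
    }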
I believe you should block direct access to the database and set up an application to which your clients (human and software alike) can connect.
Let this application filter and pass through only allowed commands.
Great care should be taken with the filtering; I would think carefully about whether raw SQL should be allowed at all. Personally, I would design a simplified API, which would assure me that a hypothetical client-attacker (in God we trust, all others we monitor) could not find a way to sneak in some dangerous modification.
From a security standpoint, I believe your current approach is very unsafe.
You should study the advanced pg_hba.conf settings.
This file is the key point for user authorization. The basic settings cover only simple authentication methods like passwords and IP lists, but more advanced options are available:
• GSSAPI
• Kerberos
• SSPI
• RADIUS server
• any PAM method
So your official client can use a more advanced method, perhaps with a third-tier API or some genuinely complex authentication mechanism. Then, without the application, it at least becomes difficult to perform these tasks, if, for example, the Kerberos key is encrypted into your client.
What you want to do is REVOKE your users' write access, then create a new role with write access, then as this role CREATE FUNCTION defined as SECURITY DEFINER, which updates the table in an allowed way with integrity checks, and then GRANT EXECUTE on this function to your users.
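A minimal sketch of that setup, with the SQL issued over JDBC (the role, table, and function names are hypothetical; the integrity check is just an example):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class LockDownWrites {
        public static void main(String[] args) throws Exception {
            try (Connection db = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/maindb", "admin", "secret");
                 Statement st = db.createStatement()) {
                // 1) Take direct write access away from the data-entry role.
                st.execute("REVOKE INSERT, UPDATE, DELETE ON records FROM data_entry");
                // 2) A SECURITY DEFINER function, owned by a role that does
                //    have write access, performing only the allowed update
                //    with whatever integrity checks you need.
                st.execute(
                    "CREATE OR REPLACE FUNCTION update_record(p_id int, p_value text) "
                    + "RETURNS void AS $$ "
                    + "BEGIN "
                    + "  IF p_value IS NULL OR length(p_value) > 100 THEN "
                    + "    RAISE EXCEPTION 'invalid value'; "
                    + "  END IF; "
                    + "  UPDATE records SET value = p_value WHERE id = p_id; "
                    + "END $$ LANGUAGE plpgsql SECURITY DEFINER");
                // 3) Users may write only through the function.
                st.execute("GRANT EXECUTE ON FUNCTION update_record(int, text) "
                    + "TO data_entry");
            }
        }
    }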
There is an answer on this topic on ServerFault which references a blog entry with a detailed description.
I believe that using middleware, as other answers suggest, is unnecessary overkill in your situation. The solution above does not require users to change how they access the database; it just restricts their right to modify data to the predefined server-side methods.

Current status of DifferentDatabaseScope implementation

My current project requires connecting to two different databases (same schema, different database engines) using Castle ActiveRecord. The connection string for the first database is the same all the time, but the second one changes dynamically based on user input. I decided that the best solution is to use DifferentDatabaseScope for the second database.
The Castle project documentation states that DifferentDatabaseScope is "still very experimental and it's not bullet proof for all situations". I want to know which situations make DifferentDatabaseScope fail or not behave as intended.

How to restrict or filter database access according to application user attributes

I've thought about this too much now with no obviously correct solution emerging. It might be a real wood-for-the-trees situation, so I need Stack Overflow's help.
I'm trying to enforce database filtering on a regional basis. My system has various users and each one is assigned to a regional office. I only want users to be able to see data that is associated with their regional office.
Put simply, my application is: Java app -> JPA (Hibernate) -> MySQL.
The database contains objects from all regions, but I only want the users to be able to manipulate objects from their own region. I've thought about the following ways of doing it:
1) Modify all database queries so they read something like select * from tablex where region='myregion'. This is nasty. It doesn't work too well with JPA, e.g. the EntityManager.find() method only accepts a primary key. Of course I can go native, but I only have to miss one select statement and my security is shot. (A sketch of one mitigation follows this question.)
2) Use a MySQL proxy to filter results. Kind of funky, but the proxy just sees the raw call and doesn't really know how it should filter (i.e. which region the requesting user belongs to). OK, I could start a proxy per region, but it starts getting a little messy.
3) Use separate schemas for each region. Simple, and since I'm using Spring I could use a RoutingDataSource to route requests through the correct data source (one data source per schema). Of course, the problem is that somewhere down the line I'm going to want to filter by region and some other category. Oops.
4) ACLs. I'm not really sure about this one. If I did a select * from tablex, would it quietly filter out objects I don't have access to, or would a load of access exceptions be thrown?
But am I thinking too much about this? It seems like a really common problem; there must be some easy solution I'm just too dumb to see. I'm sure it'll be something close to, or in, the database, since you want to filter as near to the source as possible, but what?
Not looking to be spoon-fed; any links, keywords, ideas, or commercial/open-source product suggestions would be really appreciated! Thanks.
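Regarding option 1, an ORM-level filter can remove the need to hand-write the WHERE clause in every query. A minimal sketch, assuming Hibernate 6's @Filter feature (entity and property names are illustrative); note that filters apply to queries but not to EntityManager.find(), so primary-key access still needs its own check:

    import jakarta.persistence.Entity;
    import jakarta.persistence.Id;
    import org.hibernate.Session;
    import org.hibernate.annotations.Filter;
    import org.hibernate.annotations.FilterDef;
    import org.hibernate.annotations.ParamDef;

    @Entity
    @FilterDef(name = "regionFilter",
               parameters = @ParamDef(name = "region", type = String.class))
    @Filter(name = "regionFilter", condition = "region = :region")
    public class WorkOrder {
        @Id
        private Long id;
        private String region;
        // ... other fields, getters, setters
    }

    // At the start of each unit of work, scope the session to the user's
    // region; every query against WorkOrder is then filtered automatically.
    class RegionScope {
        static void apply(jakarta.persistence.EntityManager em, String userRegion) {
            Session session = em.unwrap(Session.class);
            session.enableFilter("regionFilter")
                   .setParameter("region", userRegion);
        }
    }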
I've just been implementing something similar (REALbasic talking to MySQL) over the last couple of weeks for a hierarchical multi-company extension to an accounting package.
There's a large body of existing code which composes SQL statements, so we had to live with that and just do a lot of auditing to ensure the restrictions were included for each table as appropriate. One gotcha was related lookups: lookup tables were normally only used in combination with a primary table, but some maintenance GUIs would load the lookup table itself, directly.
There's a danger of giving away implied information such as revealing that Acme Pornstars are a client of some division of the company ;-)
The only solution for that part was very careful construction of DB diagrams to show all implied relationships and lots of auditing and grepping source code, with careful commenting to indicate areas which had been OK'd as not needing additional restrictions.
The one pattern I've come up with to make this more generalised in future is, rather than explicit region=currentRegionVar searches, to use an arbitrary entityID supplied by a global CurrentEntityForRole("blah") function.
This abstraction allows for sharing of some data as well as implementing pseudo-entities which represent other restriction boundaries.
I don't know enough about Java and Spring to be able to tell, but is there a way you could use views to provide a single-key lookup, where the views are restricted by the region filter?
The desire to provide aggregations and possible data sharing was why we didn't go down the separate database route.
Good question.
Seems like #1 is the best since it's the most flexible.
Region happens to be what you're filtering on today, but it could be region + department + colour of hair tomorrow.
If you start carving up the data too much it seems like you'll be stuck working harder than necessary to glue them all back together for reporting.
I am having the same problem. It is hard to believe that such a common task (filtering a list of model entities based on the user profile) does not have a 'standard' way, pattern, or best practice.
I've found pgacl, a PostgreSQL module. Basically, you do your query like you normally would, and then you tack on an acl_access() predicate to work as a filter.
Maybe there is something similar for MySQL.
I suggest you use ACLs; they are more flexible than the other choices. Use Spring Security (you can use it without using the rest of the Spring Framework) and read its ACL tutorial.
