I have been assigned a task to export the AD attributes and then find out what systems are using these attributes. I have not had much luck finding a script or a tool that can do just that. Is this feasible, and if so, how? I have already exported the attributes; I just need to find out what systems are using them.
This isn't possible with any reasonable accuracy, especially if "using" isn't defined for you.
The event logs on the domain controllers will tell you where login events are coming from, but only by IP address. That doesn't tell you which application is authenticating. You would have to monitor that computer and see which application is making the connection. But then the logs would be cluttered with connections made by Windows itself, or by Exchange (if you use Exchange for email). It would be very difficult to identify what is coming from a third-party application rather than from Windows itself.
Also, applications can request more information than they need. It's very easy when programming with LDAP to request every attribute for an object, even if you only intend to use one. For example, take this C# code:
var de = new DirectoryEntry("LDAP://example.com");
Console.WriteLine(de.Properties["name"].Value);
That only "uses" the name attribute. But because of the way LDAP works, it actually requests every non-constructed attribute that has a value. (there is a way to specifically ask for only one attribute, but you have to know that and use that)
So even if you could find logs saying that "this IP requested all of these attributes", and then figure out which application made that request, that doesn't mean it "used" all of those attributes.
I am currently trying to find out AD users' password expiry dates.
The methods described on numerous pages (e.g. here) work fine until a user or group in AD uses a fine-grained password policy that does not follow the domain's password policy.
I found a property called msDS-UserPasswordExpiryTimeComputed that figures all of that out without my having to do any calculations.
This works well until we use a global catalog, since this property is not replicated by default. When I attempt to replicate the msDS-UserPasswordExpiryTimeComputed property to my global catalog, I get the following error:
Is there any way to replicate this property, or is something wrong with my setup that is preventing me from replicating it? Is there a better way to calculate a user's password expiry that takes the fine-grained password policy into account?
I suspect you can't. I can't find any authoritative documentation saying it is not possible, but here are the reasons I think it's not possible:
The attribute is constructed, meaning it's not stored; it's calculated at the time you ask for it.
The date depends on the password policy of the user's domain, so the server returning the data needs to know that domain's policy.
Since a GC may not be in the domain of the user you found, it may not have the information needed to calculate the value.
As a workaround, you can just rebind to a DC to get the value. You didn't say which language you're working with, but usually you can take the path of the object you found, which will start with "GC://", replace that prefix with "LDAP://", and then read the msDS-UserPasswordExpiryTimeComputed value from there.
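For example, in C# with System.DirectoryServices that rebind might look roughly like this (the GC path below is a placeholder for whatever object you found):
using System;
using System.DirectoryServices;
// Path of the object as found through the global catalog (placeholder).
string gcPath = "GC://CN=Some User,OU=Users,DC=example,DC=com";
// Rebind to a DC in the user's domain by swapping the provider prefix.
string ldapPath = "LDAP://" + gcPath.Substring("GC://".Length);
var user = new DirectoryEntry(ldapPath);
// Constructed attributes are only returned when explicitly requested.
var searcher = new DirectorySearcher(user, "(objectClass=*)", new[] { "msDS-UserPasswordExpiryTimeComputed" }) { SearchScope = SearchScope.Base };
SearchResult result = searcher.FindOne();
long fileTime = (long)result.Properties["msDS-UserPasswordExpiryTimeComputed"][0];
// long.MaxValue means the password never expires.
Console.WriteLine(fileTime == long.MaxValue ? "Never expires" : DateTime.FromFileTimeUtc(fileTime).ToString());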
I use a self-hosted service within a WPF application for certain tasks. The service host is started at runtime and its base address is http://localhost:whatever-port-is-free-at-runtime. This works fine when the user has admin rights, but problems arise when the application is run by a restricted user.
I found some suggestions on the web about reserving the URL using netsh/httpcfg, which works fine for admin users but fails for restricted users, presumably because they do not have the rights to use these tools to reserve a URL. As the port number is not known until runtime, the URL reservation command can logically only be run at runtime, which means the process will be initiated by a restricted user without the privilege required to execute the command. Am I correct in thinking this?
What I would like to know is whether there is a suitable workaround. I would also like to know whether a restricted user can open a locally hosted WCF service at all, since solving the aforementioned problem will be pointless if a restricted user can't do that.
This question perfectly describes my first issue with URL reservation.
In WCF, the HTTP and HTTPS bindings use HTTP.sys under the covers to reserve the required URL for a specific WCF service, which is the same path IIS itself follows when it sets up bindings for the websites it manages. This explains why the process performing the HTTP/HTTPS binding is required to run in elevated mode.
That being said, I would solve your issue in two different ways:
Option 1: use a different kind of binding. NetTcpBinding and NetNamedPipeBinding, for example, do not generally require administrative privileges; this is by far the easiest way to go (there's a sketch after these options).
Option 2: set up the required namespace reservation at installation time. This way you can ask your users to perform the installation in elevated mode and later allow restricted accounts to run the application. While performing the initial installation/reservation you can also find an available port to use (and perhaps save it in a configuration file for later reuse).
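To illustrate option 1, here's a rough sketch of a named-pipe self-host; the contract and addresses (IWorkerService, net.pipe://localhost/MyWpfApp) are made up for the example. No HTTP.sys URL reservation is involved, so a restricted user can open it:
using System;
using System.ServiceModel;

[ServiceContract]
public interface IWorkerService
{
    [OperationContract]
    string Ping(string message);
}

public class WorkerService : IWorkerService
{
    public string Ping(string message)
    {
        return "pong: " + message;
    }
}

class Program
{
    static void Main()
    {
        // Named pipes are local-only and don't go through HTTP.sys,
        // so no elevated URL reservation is needed.
        var host = new ServiceHost(typeof(WorkerService), new Uri("net.pipe://localhost/MyWpfApp"));
        host.AddServiceEndpoint(typeof(IWorkerService), new NetNamedPipeBinding(), "worker");
        host.Open();
        Console.WriteLine("Host open. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
For option 2, the reservation itself is usually done during the elevated install with something along the lines of netsh http add urlacl, granting the restricted account the right to listen on that URL prefix; the port you pick then goes into the configuration file for later use.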
In a domain with AD Sites and Services configured is it possible to get the Site of a computer from LDAP? Is it stored as an attribute?
Unless this has changed over the last couple of years without my knowledge, there is not. Historically this was never done because AD site membership was considered ephemeral: the assumption was that computers move around, so storing where they are is silly. Plus there was no global need for the knowledge.
You could of course add this yourself. By that I mean you could extend the schema with a new attribute and set a start-up script on your domain-joined machines to write their site to the directory (if it has changed since they last wrote it). Obviously you'll want to test this well to ensure it doesn't create more problems than it solves...
From the Win32 point of view, you've got the DsAddressToSiteNamesEx API. I don't know how to find it using pure LDAP.
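Not a pure-LDAP answer either, but if you're in .NET, the managed wrapper over the same site-lookup machinery may be enough when you only need the site of the machine the code runs on (unlike DsAddressToSiteNamesEx, it won't map arbitrary addresses):
using System;
using System.DirectoryServices.ActiveDirectory;
// Asks the DC locator for the site of the local computer; nothing extra is stored in the directory.
ActiveDirectorySite site = ActiveDirectorySite.GetComputerSite();
Console.WriteLine(site.Name);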
I have a GWT app, which is deployed on the app engine. The application is basically an exam simulator. All the exam questions and answers are stored in an XML file on the server. I use JAXB parser to parse the XML file and send a list of objects to the client through GWT RPC.
I noticed that during transit (server -> client), the entire data set is visible as plain text in Firebug. Since the data (exam questions and answers) is my intellectual property (IP) and something I give a lot of value to, I'm concerned that it's very easy to steal. Therefore, I'm trying to find ways to do some basic encryption and obfuscate the content when it's being sent from the server to the client.
After Googling, I came across gwt-crypto project, and within a few minutes, I was able to achieve the exact result that I wanted. The server would encrypt the data, and the client would decrypt it. In Firebug, it would show the data in encrypted format, and not as plain text.
However, I ran into an issue. After implementing encryption/decryption, I noticed that my application would not load inside my company's network, which is obviously protected by a firewall. The application works perfectly from home and even over a 3G network on my phone. Another version of the application, which does not use encryption/decryption, works perfectly from within my company's network. I confirmed this by creating two otherwise identical versions of the app, with the only difference being a boolean flag that determines whether encryption/decryption is enabled or disabled.
I have the following questions here:
What is the best way to achieve the result that I want to achieve? Is gwt-crypto a good solution for that? I'm fine with any simple approach to obfuscate the data during transit. It doesn't have to be a sophisticated algorithm.
What could be the possible reason for a GWT app, with encryption/decryption enabled, not working inside a firewall? I'm really clueless on this.
I'd appreciate any help on this issue.
Using SSL is the right way to go.
In your case, given App Engine's SSL limitations, you should load your HTML normally from the non-SSL domain and use cross-site RPC to load your data via the SSL domain.
Update:
What is the best way to achieve the result that I want to achieve?
If you want to secure the data in transit then the only secure option is SSL/HTTPS. Usually it's also the simplest one, as it does not require you to change the application code, just the server configuration. In your particular case (App Engine with a private domain), it takes more work, as described above.
Is gwt-crypto a good solution for that?
No. gwt-crypto uses a key to encrypt/decrypt the data. You also need a secure way to distribute this key.
I'm fine with any simple approach to obfuscate the data during transit.
Security through obscurity is not security. It's a false sense of security, which is even more dangerous than no security. It only takes one technically capable student to crack this, and soon everybody will be doing it.
Possible attack would go like this:
Snoop the network, get username/password of user.
Login as that user, have browser load exam data, which is now unencrypted in memory.
Dump the DOM and inspect it for exam questions.
What could be the possible reason for a GWT app, with encryption/decryption enabled, not working inside a firewall?
Use Firebug to make sure the network connections are identical, except for the encrypted content. Firewalls shouldn't be inspecting that deeply. Talk to your sysadmin about it.
Has anybody here actually implemented a logging strategy for an application running as an XBAP? Any suggestions (as code) on how to implement a simple strategy based on your experience?
In desktop mode my app logs to a rolling log file using the integrated asop log4net implementation, but as an XBAP I can't log because it stores the file in the cache (an app2.0 folder or something). So I check whether the app is browser-hosted and don't log in that case, since I don't even know whether it ever logs (it's the same codebase). If there were a way to push this log to a service, like a web service, or to post errors to some endpoint...
My XBAP runs in full-trust intranet mode.
I would log to isolated storage and provide a way for users to submit the log back to the server using either a simple PUT/POST with HttpWebRequest or, if you're feeling frisky, via a WCF service.
Keep in mind an XBAP only gets 512 KB of isolated storage, so you may actually want to push those event logs back to the server automatically. Also remember that the XBAP can only talk back to its origin server, so the service that accepts the log files must run under the same domain.
Here's some quick sample code that shows how to set up a TextWriterTraceListener on top of an IsolatedStorageFileStream, at which point you can just use the standard Trace.Write[XXX] methods to do your logging.
using System.Diagnostics;
using System.IO;
using System.IO.IsolatedStorage;
IsolatedStorageFileStream traceFileStream = new IsolatedStorageFileStream("Trace.log", FileMode.OpenOrCreate, FileAccess.Write);
TraceListener traceListener = new TextWriterTraceListener(traceFileStream);
Trace.Listeners.Add(traceListener);
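And when you want to ship the log back to the origin server, a bare-bones POST with HttpWebRequest might look like this; the /LogUpload address is a placeholder for whatever endpoint you expose, and you'd flush or close the trace listener before reading the file:
using System.IO;
using System.IO.IsolatedStorage;
using System.Net;
using System.Text;

// Read the trace file back out of isolated storage.
string logText;
using (var stream = new IsolatedStorageFileStream("Trace.log", FileMode.Open, FileAccess.Read))
using (var reader = new StreamReader(stream))
{
    logText = reader.ReadToEnd();
}

// POST it to a service on the XBAP's origin server (placeholder URL).
byte[] logBytes = Encoding.UTF8.GetBytes(logText);
var request = (HttpWebRequest)WebRequest.Create("http://my-origin-server/LogUpload");
request.Method = "POST";
request.ContentType = "text/plain";
request.ContentLength = logBytes.Length;
using (Stream body = request.GetRequestStream())
{
    body.Write(logBytes, 0, logBytes.Length);
}
request.GetResponse().Close();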
UPDATE
Here is a revised answer based on the additional details you've added to your question.
Since you mention you're using log4net in your desktop app, we can build on a dependency you're already comfortable with, as it is entirely possible to continue using log4net in the XBAP version as well. log4net does not come with an implementation that solves this problem out of the box, but it is possible to write a log4net IAppender implementation that communicates over WCF.
I took a look at the implementation by Joachim Kerschbaumer that the other answerer linked to (all credit due), and it looks like a solid implementation. My first concern was that a sample like this might log back to the service on every event, and perhaps synchronously, but the implementation actually supports queuing up a certain number of events and sending them back to the server in batches. Also, when it does send to the service, it does so using an async invocation of an Action delegate, which means it executes on a thread-pool thread and doesn't block the UI. So I would say that implementation is quite solid.
Here are the steps I would take from here:
Download Joachim's WCF appender implementation
Add his projects to your solution.
Reference the WCFAppender project from your XBAP
Configure log4net to use the WCF appender. There are several settings for this appender, so I suggest checking out his sample app's config. The most important ones, however, are QueueSize and FlushLevel. Set QueueSize high enough that, given how much you actually log, you won't be chattering with the WCF service too much. If you're only logging warnings/errors, you can probably set it to something low. If you're logging informational messages as well, you'll want to set it a little higher. As for FlushLevel, you should probably just set it to ERROR, which guarantees that no matter how big the queue is when an error occurs, everything is flushed the moment the error is logged.
The sample appears to use LINQ to SQL to log to a custom DB inside the WCF service. You will need to replace this implementation to log to whatever data source best suits your needs.
Now, Joachim's sample is written to be very easy for someone to download, run, and understand quickly. I would definitely change a couple of things about it if I were putting it into a production solution:
Separate the WCF contracts into a separate library which you can share between the client and the server. This would allow you to stop using a Visual Studio service reference in the WCFAppender library and just reference the same contract library for the data types. Likewise, since the contracts would no longer be in the service itself, you would reference the contract library from the service.
I don't know that wsHttpBinding is really necessary here. It comes with a couple more knobs and switches than one probably needs for something this simple. I would probably go with the simpler basicHttpBinding, and if you wanted to make sure the log data is encrypted over the wire, I would just make sure to use HTTPS (see the sketch below).
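For what it's worth, here's a minimal sketch of that last point; the ILogService contract and the address are placeholders, not part of Joachim's sample. basicHttpBinding with transport security is just plain SOAP over HTTPS:
using System;
using System.ServiceModel;

// Hypothetical shared logging contract (placeholder).
[ServiceContract]
public interface ILogService
{
    [OperationContract(IsOneWay = true)]
    void WriteEntries(string[] entries);
}

public class LogService : ILogService
{
    public void WriteEntries(string[] entries)
    {
        // Persist the entries to whatever store suits you (file, database, ...).
    }
}

class HostProgram
{
    static void Main()
    {
        // Transport security means the log data is encrypted on the wire
        // without the extra configuration wsHttpBinding brings along.
        var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
        var host = new ServiceHost(typeof(LogService), new Uri("https://logs.example.com/LogService"));
        host.AddServiceEndpoint(typeof(ILogService), binding, "");
        host.Open();
        Console.WriteLine("Log service listening. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
You'd still need an HTTPS certificate bound on the server, but the client side is just the matching binding in config.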
My approach has been to log to a remote service, keyed by a unique user ID or GUID. The overhead isn't very high with the usual async calls.
You can cache messages locally, too, either in RAM or in isolated storage -- perhaps as a backup in case the network isn't accessible.
Be sure to watch for duplicate events within a certain time window. You don't want to log 1,000 copies of the same Exception over a period of a few seconds.
Also, I like to log more than just errors. You can log performance data too, such as how long certain functions take to execute (particularly out-of-process calls), or more detailed data in response to the user explicitly entering a "debug and report" mode. Checking for calls that take longer than a certain threshold is also useful for catching regressions and preempting user complaints (see the sketch below).
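As a concrete (made-up) example of the timing idea, a thin wrapper like this can flag slow out-of-process calls as they happen:
using System;
using System.Diagnostics;

static class Timed
{
    // Runs the call and logs a warning if it takes longer than the threshold.
    public static T Run<T>(string name, TimeSpan threshold, Func<T> call)
    {
        var sw = Stopwatch.StartNew();
        try
        {
            return call();
        }
        finally
        {
            sw.Stop();
            if (sw.Elapsed > threshold)
                Trace.TraceWarning("{0} took {1} ms", name, sw.ElapsedMilliseconds);
        }
    }
}
You would call it as something like Timed.Run("LoadReport", TimeSpan.FromSeconds(2), () => proxy.LoadReport()), where proxy is whichever remote call you want to watch.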
If you are running your XBAP under partial trust, you are only allowed to write to isolated storage on the client machine. And it's just 512 KB, which you would probably rather use for something more valuable than logging, such as storing the user's preferences.
You are also not allowed to do any Remoting under partial trust, so you can't use log4net's RemotingAppender.
Finally, under partial trust an XBAP only has WebPermission to talk to its server of origin. I would recommend using a WCF service, as described in this article. We use a similar configuration in my current project and it works fine.
Then, basically, on the WCF server side you can log to any place appropriate: file, database, etc. You may also want to keep your log4net logging code and try one of the WCF log appenders available on the internet (this or this).