Creating a PrincipalSearcher takes a very long time - active-directory

Creating an instance of a PrincipalSearcher for accessing a local Active Directory takes about 11 - 22 seconds. Interestingly, the time is always 11 or 22 seconds, give or take a few milliseconds. The OU I'm trying to read consists of no more than 30 users and 20 groups. If I understand it correctly, the AD is read completely during the creation of the PrincipalSearcher. But because my AD is tiny, I would not expect it to take that long. Am I doing something wrong, or is this expected behavior?
This is the test code I use to reproduce the described behavior.
const string name = "localhost:389";
const string container = "OU=OUNAME,DC=DCNAME,DC=local";
var principalContext = new PrincipalContext(ContextType.ApplicationDirectory, name, container);
var userPrincipal = new UserPrincipal(principalContext);
// The next call takes about 22 - 44 seconds
var principalSearcher = new PrincipalSearcher(userPrincipal);
var result = principalSearcher.FindAll();
When I do performance profiling I get the following result: [profiler screenshot not reproduced here]
UPDATE #1:
The local AD is implemented with AD LDS, and I found this question that seems to describe a similar problem. But I have both the AD and the application running on the same machine, and I'm using localhost or 127.0.0.1, so this did not help in my case.
UPDATE #2:
If I'm no longer connected to the company's domain, the same code takes less than 100 ms. What could cause that different behavior? And more importantly, how can I overcome it while connected to the domain?

Try using the options parameter of the PrincipalContext constructor and see if some combination solves the problem for you.
The default (if you don't specify anything) is:
ContextOptions.Negotiate | ContextOptions.Signing | ContextOptions.Sealing
But see what combination of ContextOptions makes a difference, if any.
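For example, a minimal sketch reusing the server and container from the question (ContextOptions.SimpleBind is just one combination to try; passing nothing is equivalent to Negotiate | Signing | Sealing):

var principalContext = new PrincipalContext(
    ContextType.ApplicationDirectory,
    "localhost:389",
    "OU=OUNAME,DC=DCNAME,DC=local",
    ContextOptions.SimpleBind); // try other combinations here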

Related

How to display XML-RPC.Net Server instance data in the UI?

We've recently been trying to use the XML-RPC.Net library on our project.
Both the server (.NET Remoting) and the client were built according to the instructions found on http://xml-rpc.net/.
The connection works: we obtain data from the server and so on.
As the title states, we'd now like to know how to make our XML-RPC server instance, which is created after the first client call, give feedback to a WPF UI.
What we'd like to accomplish is to register an event on a server property so the call can arrive on the UI thread.
We are open to any suggestions in this regard.
Here is the code that registers the channel on server side:
IDictionary props = new Hashtable();
props["name"] = "SubsetHttpChannel";
props["port"] = 5678;

channel = new System.Runtime.Remoting.Channels.Http.HttpChannel(
    props,
    null,
    new XmlRpcServerFormatterSinkProvider());

ChannelServices.RegisterChannel(channel, false);
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(SubsetServer), "subsetserver.rem", WellKnownObjectMode.Singleton);
This is the code that shows how we'd like to set the property after the Server instance is created in the UI:
Server = new SubsetServer();
Server.Machine.OnChangeState += delegate(State actual, State next, Event pEvent)
{
uiWindowInstance.PostMessage(string.Format("Subset Server: {0} -> {1}", actual.Name, next.Name));
};
Technologies used: VS2012, WPF 4.5 and XML-RPC.NET 2.5.0
Thanks in Advance
Thanks to everyone who took the time to read this and try to answer.
I found a solution that works for me for the moment. I'd like to share it in the hope that someone can hint at whether it may cause problems in the future.
After analyzing, I found that both server instances run in the same process, so I created a singleton as a property inside my Server.
I put whatever I need inside the singleton, so for the delegate from my question, the code is now:
Server = new SubsetServer();
Server.singleton.Machine.OnChangeState += delegate(State actual, State next, Event pEvent)
{
    uiWindowInstance.PostMessage(string.Format("Subset Server: {0} -> {1}", actual.Name, next.Name));
};
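For reference, a minimal sketch of what that singleton property can look like (ServerState and StateMachine are hypothetical names standing in for the real types):

public class SubsetServer : MarshalByRefObject
{
    // One shared instance per process: the remoting-activated server and
    // the UI-created server both see the same state
    private static readonly ServerState sharedState = new ServerState();

    // Exposed as a property so the UI can write:
    // Server.singleton.Machine.OnChangeState += ...;
    public ServerState singleton
    {
        get { return sharedState; }
    }
}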
Hope this helps anyone else.
Please comment if you find any flaws.

Is there any way to trace/log the SQL using Dapper?

Is there a way to dump the generated SQL to the Debug log or something? I'm using it in a WinForms solution, so the mini-profiler idea won't work for me.
I had the same issue and, after searching and finding nothing ready to use, implemented some code myself. There is a package on NuGet, MiniProfiler.Integrations, that I would like to share.
Update V2: it supports other database servers; for MySQL it requires MiniProfiler.Integrations.MySql.
Below are the steps to work with SQL Server:
1. Instantiate the connection:
var factory = new SqlServerDbConnectionFactory(_connectionString);
using (var connection = ProfiledDbConnectionFactory.New(factory, CustomDbProfiler.Current))
{
    // your code
}
2. After all the work is done, write all commands to a file if you want:
File.WriteAllText("SqlScripts.txt", CustomDbProfiler.Current.ProfilerContext.BuildCommands());
Dapper does not currently have an instrumentation point here. This is perhaps due, as you note, to the fact that we (as the authors) use mini-profiler to handle this. However, if it helps, the core parts of mini-profiler are actually designed to be architecture neutral, and I know of other people using it with WinForms, WPF, WCF, etc. - which would give you access to the profiling/tracing connection wrapper.
In theory, it would be perfectly possible to add some blanket capture point, but I'm concerned about two things:
(primarily) security: since Dapper doesn't have a concept of a context, it would be really easy for malign code to attach quietly and sniff all SQL traffic that goes via Dapper; I really don't like the sound of that (this isn't an issue with the "decorator" approach, as the caller owns the connection, hence the logging context)
(secondary) performance: but... in truth, it is hard to say that a simple delegate check (which would presumably be null in most cases) would have much impact
Of course, the other thing you could do is: steal the connection wrapper code from mini-profiler, and replace the profiler-context stuff with just: Debug.WriteLine etc.
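If wrapping the connection is more than you need, a lighter sketch of the same idea is an extension method that logs before delegating to Dapper (QueryLogged is a hypothetical helper, not part of Dapper):

using System.Collections.Generic;
using System.Data;
using System.Diagnostics;
using Dapper;

public static class LoggedDapper
{
    // Writes the SQL to the debug output, then runs the query as normal
    public static IEnumerable<T> QueryLogged<T>(
        this IDbConnection connection, string sql, object param = null)
    {
        Debug.WriteLine("Dapper SQL: " + sql);
        return connection.Query<T>(sql, param);
    }
}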
You could consider using SQL Server Profiler, found in SQL Server Management Studio under Tools → SQL Server Profiler (no Dapper extensions needed; this may also work with other RDBMSs, if they ship a profiler tool).
Then, start a new session.
You'll get something like this for example (you see all parameters and the complete SQL string):
exec sp_executesql N'SELECT * FROM Updates WHERE CAST(Product_ID as VARCHAR(50)) = @appId AND (Blocked IS NULL OR Blocked = 0)
AND (Beta IS NULL OR Beta = 0 OR @includeBeta = 1) AND (LangCode IS NULL OR LangCode IN (SELECT * FROM STRING_SPLIT(@langCode, '','')))',N'@appId nvarchar(4000),@includeBeta bit,@langCode nvarchar(4000)',@appId=N'fea5b0a7-1da6-4394-b8c8-05e7cb979161',@includeBeta=0,@langCode=N'de'
Try Dapper.Logging.
You can get it from NuGet. The way it works is you pass your code that creates your actual database connection into a factory that creates wrapped connections. Whenever a wrapped connection is opened or closed or you run a query against it, it will be logged. You can configure the logging message templates and other settings like whether SQL parameters are saved. Elapsed time is also saved.
In my opinion, the only downside is that the documentation is sparse, but I think that's just because it's a new project (as of this writing). I had to dig through the repo for a bit to understand it and to get it configured to my liking, but now it's working great.
From the documentation:
The tool consists of simple decorators for the DbConnection and DbCommand which track the execution time and write messages to the ILogger<T>. The ILogger<T> can be handled by any logging framework (e.g. Serilog). The result is similar to the default EF Core logging behavior.
The lib declares a helper method for registering the IDbConnectionFactory in the IoC container. The connection factory is SQL-provider agnostic. That's why you have to specify the real factory method:
services.AddDbConnectionFactory(prv => new SqlConnection(conStr));
After registration, the IDbConnectionFactory can be injected into classes that need a SQL connection.
private readonly IDbConnectionFactory _connectionFactory;

public GetProductsHandler(IDbConnectionFactory connectionFactory)
{
    _connectionFactory = connectionFactory;
}
The IDbConnectionFactory.CreateConnection will return a decorated version that logs the activity.
using (DbConnection db = _connectionFactory.CreateConnection())
{
    //...
}
This is not exhaustive and is essentially a bit of a hack, but if you have your SQL and want to inspect your parameters, it's useful for basic debugging. Set up this extension method, then call it anywhere as desired.
public static class DapperExtensions
{
    public static string ArgsAsSql(this DynamicParameters args)
    {
        if (args is null) throw new ArgumentNullException(nameof(args));
        var sb = new StringBuilder();
        foreach (var name in args.ParameterNames)
        {
            var pValue = args.Get<dynamic>(name);
            var type = pValue.GetType();
            if (type == typeof(DateTime))
                sb.AppendFormat("DECLARE @{0} DATETIME ='{1}'\n", name, pValue.ToString("yyyy-MM-dd HH:mm:ss.fff"));
            else if (type == typeof(bool))
                sb.AppendFormat("DECLARE @{0} BIT = {1}\n", name, (bool)pValue ? 1 : 0);
            else if (type == typeof(int))
                sb.AppendFormat("DECLARE @{0} INT = {1}\n", name, pValue);
            else if (type == typeof(List<int>))
                sb.AppendFormat("-- REPLACE @{0} IN SQL: ({1})\n", name, string.Join(",", (List<int>)pValue));
            else
                sb.AppendFormat("DECLARE @{0} NVARCHAR(MAX) = '{1}'\n", name, pValue.ToString());
        }
        return sb.ToString();
    }
}
You can then just use this in the immediate or watch windows to grab the SQL.
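For example, with hypothetical parameters:

var args = new DynamicParameters();
args.Add("appId", "fea5b0a7-1da6-4394-b8c8-05e7cb979161");
args.Add("includeBeta", false);

// Prints DECLARE lines you can paste above the raw SQL in SSMS
Debug.WriteLine(args.ArgsAsSql());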
Just to add an update here since I see this question still gets quite a few hits: these days I use either Glimpse (it seems to be dead now) or Stackify Prefix, which both have SQL command tracing capabilities.
It's not exactly what I was looking for when I asked the original question, but they solve the same problem.

Datastore write limit tests - trying to break App Engine, but it won't break ;)

We're trying to test the write limit exceptions, mentioned to be about 1 write/second, to prepare our code for them (https://developers.google.com/appengine/docs/python/datastore/exceptions -> Timeout).
So I'm creating an item and updating it with the loop count, 10k times via tasks and 10k times via a loop... It doesn't seem to trigger an exception, although the writes per second should be high enough (I remember something like more than one write per second gets critical).
Always the same: things don't break when you want them to ;).
class Message(ndb.Model):
    text = ndb.StringProperty()
    count = ndb.IntegerProperty()

# defined in a separate file
class DeferredClass(object):
    def put(self, id, x):
        msg = Message.get_by_id(id)
        msg.count = x
        try:
            msg.put()
        except:
            logging.error("error putting the Message")
            logging.error(sys.exc_info()[0])

msg = Message(text="TestGreeting", count=0)
key = msg.put()
id = key.id()
test = DeferredClass()

for x in range(10000):
    deferred.defer(test.put, id, x)

for x in range(10000):
    msg.count = x
    try:
        msg.put()
    except:
        logging.error("error putting the Message")
        logging.error(sys.exc_info()[0])

self.response.out.write("done")
PS: We're aware that the docs are for db and the code is ndb... the basic limitations should still exist... Also: docs on ndb exceptions would be great! Anyone?
Using a non-default task queue with an increased rate limit of 350 tasks/sec led to 20 instances being fired up and plenty of Timeout exceptions... Thanks, Mr. Steinrücken!
The exception is google.appengine.api.datastore_errors.Timeout, which is the same as documented for the db package - so no ndb extras there.
PS: Our idea is to catch the exception in our cache-handling class as a sign of datastore overload and automatically set up sharding for that item... monitoring the requests for a minute and disabling sharding again when no longer needed...

Error 503 Will Not Perform when Adding a Group Entry using UnboundID

I'm trying to add a group to my Active Directory service using the UnboundID LDAP SDK, and keep getting error 503: Will Not Perform.
I have verified that I'm using an SSL connection and that I'm connecting with a user that belongs to the Administrators group, which - unless I'm mistaken - gives him the right to create new entries.
I have also raised the logging level of the LDAP Interface Events all the way to 5, and the event viewer registers a number of events, none of which are useful in explaining why the service is unwilling to perform my create entry operation.
Any ideas on what can be causing this problem?
Below is a sample of the Scala code I'm using:
val connection = connect("MyAdminUser", "MyAdminPass")
val addGroupResult = connection.add("CN=TestGroup2,OU=Groups,OU=mydomain,DC=mydomain,DC=local",
  new Attribute("objectClass", "top", "group"),
  new Attribute("name", "TestGroup2"),
  new Attribute("sAMAccountName", "TestGroup2"),
  new Attribute("sAMAccountType", "268435456"),
  new Attribute("objectCategory", "CN=Group,CN=Schema,CN=Configuration,DC=mydomain,DC=local"),
  new Attribute("cn", "TestGroup2"),
  new Attribute("distinguishedName", "CN=TestGroup2,OU=Groups,OU=mydomain,DC=mydomain,DC=local"),
  new Attribute("instanceType", "4"),
  new Attribute("groupType", "-2147483646")
)
private def connect(user: String, pass: String) = {
  val options = new LDAPConnectionOptions()
  options.setFollowReferrals(true)
  val sslUtil = new SSLUtil(new TrustAllTrustManager())
  val socketFactory = sslUtil.createSSLSocketFactory()
  new LDAPConnection(socketFactory, options, host, securePort, DN(user), pass)
}
And here's the error message I'm getting:
Exception in thread "main" LDAPException(resultCode=53 (unwilling to perform), errorMessage='0000209A: SvcErr: DSID-031A104A, problem 5003 (WILL_NOT_PERFORM), data 0', diagnosticMessage='0000209A: SvcErr: DSID-031A104A, problem 5003 (WILL_NOT_PERFORM), data 0')
My error was including too many attributes in the Add operation, some of which are not supposed to be set manually but rather by the SAM (Security Account Manager).
The correct code is as follows:
val addGroupResult = connection.add("CN=TestGroup2,OU=Groups,OU=simpleBI,DC=domain,DC=local",
  new Attribute("objectClass", "top", "group"),
  new Attribute("name", "TestGroup2"),
  new Attribute("sAMAccountName", "TestGroup2"),
  new Attribute("objectCategory", "CN=Group,CN=Schema,CN=Configuration,DC=domain,DC=local")
)
Note that I've removed a few attributes, including sAMAccountType, which were rejected by AD. I've also removed some redundant ones. I believe what I have is the minimal attribute set that fulfills my needs.
The connection code was unchanged.

Connections with Entity Framework and Transient Fault Handling Block?

We're migrating SQL to Azure. Our DAL is Entity Framework 4.x based. We want to use the Transient Fault Handling Block to add retry logic for SQL Azure.
Overall, we're looking for the best 80/20 rule (or maybe more of a 95/5, but you get the point): we're not looking to spend weeks refactoring/rewriting code (there's a LOT of it). I'm fine re-implementing our DAL's framework, but not all the code written and generated against it, any more than we have to, since this is only here to address a minority case. Mitigation >>> elimination of this edge case for us.
Looking at the possible options explained here at MSDN, it seems Case #3 there is the "quickest" to implement, but only at first glance. Pondering this solution a bit, it struck me that we might have problems with connection management, since it circumvents Entity Framework's built-in processes for managing connections (i.e. always closing them). It seems to me that the "solution" is to make sure 100% of the Contexts we instantiate use using blocks, but with our architecture this would be difficult.
So my question: Going with Case #3 from that link, are hanging connections a problem or is there some magic somewhere that's going on that I don't know about?
I've done some experimenting and it turns out that this brings us back to the old "managing connections" situation we're used to from the past, only this time the connections are abstracted away from us a bit and we must now "manage Contexts" similarly.
Let's say we have the following OnContextCreated implementation:
private void OnContextCreated()
{
    const int maxRetries = 4;
    const int initialDelayInMilliseconds = 100;
    const int maxDelayInMilliseconds = 5000;
    const int deltaBackoffInMilliseconds = initialDelayInMilliseconds;

    var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(maxRetries,
        TimeSpan.FromMilliseconds(initialDelayInMilliseconds),
        TimeSpan.FromMilliseconds(maxDelayInMilliseconds),
        TimeSpan.FromMilliseconds(deltaBackoffInMilliseconds));

    policy.ExecuteAction(() =>
    {
        try
        {
            Connection.Open();
            var storeConnection = (SqlConnection)((EntityConnection)Connection).StoreConnection;
            new SqlCommand("declare @i int", storeConnection).ExecuteNonQuery();
            //Connection.Close();
            // throw new ApplicationException("Test only");
        }
        catch (Exception e)
        {
            Connection.Close();
            Trace.TraceWarning("Attempted to open connection but failed: " + e.Message);
            throw;
        }
    });
}
In this scenario, we forcibly open the Connection (which was the goal here). Because of this, the Context keeps it open across many calls. Because of that, we must tell the Context when to close the connection. Our primary mechanism for doing that is calling the Dispose method on the Context. So if we just allow garbage collection to clean up our contexts, then we allow connections to remain hanging open.
I tested this by toggling the comments on the Connection.Close() in the try block and running a bunch of unit tests against our database. Without calling Close, we jumped up to ~275-300 active connections (from SQL Server's perspective). By calling Close, that number hovered at ~12. I then reproduced with a small number of unit tests both with and without a using block for the Context and reproduced the same result (different numbers - I forget what they were).
I was using the following query to count my connections:
SELECT s.session_id, s.login_name, e.connection_id,
s.last_request_end_time, s.cpu_time,
e.connect_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS e
ON s.session_id = e.session_id
WHERE login_name='myuser'
ORDER BY s.login_name
Conclusion: If you call Connection.Open() with this work-around to enable the Transient Fault Handling Block, then you MUST use using blocks for all contexts you work with, otherwise you will have problems (which, with SQL Azure, will cause your database to be "throttled" and ultimately taken offline for hours!).
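To illustrate, the safe pattern is simply (MyEntities standing in for your context type):

using (var context = new MyEntities())
{
    // work with the context as usual; the connection opened in
    // OnContextCreated is closed deterministically when Dispose runs
}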
The problem with this approach is it only takes care of connection retries and not command retries.
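A sketch of what command-level retries would look like with the same block, reusing the policy shape from OnContextCreated above (the connection string and SQL are placeholders):

var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(4,
    TimeSpan.FromMilliseconds(100),
    TimeSpan.FromMilliseconds(5000),
    TimeSpan.FromMilliseconds(100));

policy.ExecuteAction(() =>
{
    // The open and the command execute are retried together as one unit
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("UPDATE ... /* your SQL */", connection))
    {
        connection.Open();
        command.ExecuteNonQuery();
    }
});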
If you use Entity Framework 6 (currently in alpha) then there is some new in-built support for transient retries with Azure SQL Database (with a little bit of configuration): http://entityframework.codeplex.com/wikipage?title=Connection%20Resiliency%20Spec
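For the record, this feature shipped in the final EF6 as Connection Resiliency; a minimal sketch of the configuration (picked up automatically when the class lives in the same assembly as your DbContext):

using System.Data.Entity;
using System.Data.Entity.SqlServer;

public class AppDbConfiguration : DbConfiguration
{
    public AppDbConfiguration()
    {
        // Retries transient SQL Azure failures with exponential backoff
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy());
    }
}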
I've created a library which allows you to configure Entity Framework to retry using the Fault Handling block without needing to change every database call - generally you will only need to change your config file and possibly one or two lines of code.
This allows you to use it for Entity Framework or Linq To Sql.
https://github.com/robdmoore/ReliableDbProvider
