In my Yesod web app, code that had worked perfectly before suddenly became impossible to launch.
The error message was this:
Database migration: manual intervention required. The following actions are considered unsafe: DROP TABLE "config_d_b";
The database consists of this code:
share [mkPersist sqlSettings, mkMigrate "migrateAll"]
    [persistLowerCase|
ConfigDB
    numberOfParticipants Int
    setEndOfRegDate Bool Maybe
    endOfRegistration Day Maybe
    stopRegistration Bool
    groupName Text
    deriving Show
|]
I'm working on FP Complete, and now, after logging out and leaving it alone for ten minutes, it works fine.
I still don't want to run the risk of this happening again (my presentation is due in four days).
So, what is going on?
From this related question:
Haskell Persistent out of sync
I got the impression that it had something to do with
endOfRegistration Day Maybe
but deleting all the related code yielded no different result.
Thanks in advance, Sophia
I am new to the GCP world. I need to verify whether my batch settings for publishing messages to Pub/Sub actually work. This is the batch setting:
private BatchingSettings getBatchingSettings() {
    long requestBytesThreshold = 10000L;
    long messageCountBatchSize = 100L;
    Duration publishDelayThreshold = Duration.ofMillis(2000);
    BatchingSettings batchingSettings = BatchingSettings.newBuilder()
            .setElementCountThreshold(messageCountBatchSize)
            .setRequestByteThreshold(requestBytesThreshold)
            .setDelayThreshold(publishDelayThreshold)
            .build();
    return batchingSettings;
}
I need to check whether Pub/Sub publishes the messages in batches of 100.
Is there any way to check how many messages are actually published per batch?
As explained in the documentation, you can monitor Pub/Sub in Cloud Monitoring. Following the link takes you to Cloud Monitoring for your project.
In Metrics Explorer it's possible to create a metric with the following configuration:
Resource type: Cloud Pub/Sub Topic
Metric: Publish message operations
Group by: topic_id
Aggregator: sum
Minimum alignment period: 1 minute
In "SHOW ADVANCED OPTIONS" set:
Aligner: sum
If you put such a chart on a dashboard, you can check the count of published messages there. Now just submit a separate test batch and wait for a peak on the chart. When you hover over the chart line you will see the number of messages in a particular time period. Sometimes it will be divided into several parts, but with a batch as small as 100 there should be no more than two, so it's enough to add the two numbers.
Of course you can create more sophisticated metrics; this is just an example.
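For intuition about when a batch should flush, the three thresholds in the BatchingSettings above (element count, request bytes, delay) can be simulated locally. This is only an illustrative sketch of the semantics, not the real Pub/Sub client's internals; all names here are made up.

```python
import time

class BatchSimulator:
    """Illustrates when a publisher-style batcher would flush.

    Mirrors the three thresholds in BatchingSettings: element count,
    request bytes, and delay. A local sketch, not the real client.
    """

    def __init__(self, max_messages=100, max_bytes=10000, max_delay_s=2.0):
        self.max_messages = max_messages
        self.max_bytes = max_bytes
        self.max_delay_s = max_delay_s
        self.batch = []
        self.batch_bytes = 0
        self.first_message_at = None
        self.flushed = []               # sizes of each flushed batch

    def publish(self, payload: bytes):
        if self.first_message_at is None:
            self.first_message_at = time.monotonic()
        self.batch.append(payload)
        self.batch_bytes += len(payload)
        # Flush as soon as ANY of the three thresholds is reached.
        if (len(self.batch) >= self.max_messages
                or self.batch_bytes >= self.max_bytes
                or time.monotonic() - self.first_message_at >= self.max_delay_s):
            self.flush()

    def flush(self):
        if self.batch:
            self.flushed.append(len(self.batch))
            self.batch, self.batch_bytes = [], 0
            self.first_message_at = None

sim = BatchSimulator(max_messages=100, max_bytes=10000, max_delay_s=2.0)
for _ in range(250):
    sim.publish(b"x" * 10)    # small messages, so the count threshold wins
sim.flush()                   # flush the trailing partial batch
print(sim.flushed)            # -> [100, 100, 50]
```

With 250 ten-byte messages, the count threshold fires twice and the trailing 50 go out on the final flush, which matches the kind of peak pattern you would look for on the monitoring chart.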
I have an application that doesn't appear to be responding to SikuliX's (v 1.1.2) .focus(). It is one of three custom WinForms applications I have running. I found this link pertaining to my exact situation, but the suggestions did not help.
I have been able to get the code to work for Chrome, SQL Server, and other random applications I have running at the time. The problem seems to come in when I have more than one type of application running. If my applications are named "Version Launcher", "Device 1", and "Alternative", I am able to switch to "Version Launcher", but "Device 1" and "Alternative" aren't found.
class myDevice:
    def startApp(self):
        #my_app = App("Chrome")           # works
        #my_app = App("Visual Studio")    # works
        #my_app = App("Version Selector") # works
        #my_app = App("Device 1")         # does NOT work
        my_app = App("Alternative")       # does NOT work
        my_app.focus(); wait(1)

my_device = myDevice()
my_device.startApp()
In order to rule out a bad name (perhaps, on some level, the application is not really named "Device 1"), I'd like to build a list of every application that SikuliX can detect at runtime. Has anyone ever tried such a thing? I've searched all over the documentation and cannot find any feature that allows this sort of querying.
Raimund Hocke, the maintainer of the SikuliX project, answered my question over on launchpad.
https://answers.launchpad.net/sikuli/+question/664004
In short, SikuliX uses the 'tasklist' command in Windows to grab the list of available applications.
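Since SikuliX shells out to `tasklist`, you can run the same command yourself (`tasklist /v /fo csv` on Windows) to see exactly which image names and window titles are available to match against. A sketch of parsing that output, using a canned sample string so it runs anywhere (the process names and titles below are hypothetical):

```python
import csv
import io

# Sample of what `tasklist /v /fo csv` prints on Windows; in a real
# check you would capture this with subprocess.check_output.
SAMPLE = '''"Image Name","PID","Session Name","Session#","Mem Usage","Status","User Name","CPU Time","Window Title"
"chrome.exe","4120","Console","1","210,000 K","Running","BOX\\sophia","0:01:02","Stack Overflow - Google Chrome"
"Device1.exe","5512","Console","1","48,000 K","Running","BOX\\sophia","0:00:10","Device 1"
"Alternative.exe","6010","Console","1","51,000 K","Running","BOX\\sophia","0:00:03","N/A"
'''

def visible_apps(tasklist_csv: str):
    """Return (image name, window title) pairs from tasklist CSV output."""
    reader = csv.DictReader(io.StringIO(tasklist_csv))
    return [(row["Image Name"], row["Window Title"]) for row in reader]

for name, title in visible_apps(SAMPLE):
    print(name, "->", title)
```

One thing this kind of listing can reveal: a process whose Window Title shows as "N/A" (like the hypothetical "Alternative.exe" above) has no title for App("Alternative") to match, which is one plausible reason focus() would fail for some apps but not others.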
Error message: form1.execValidate is not a function. But this has worked fine for years!
Last week the client (large bank) rolled out a new version of Adobe Reader XI 11.0.21. Perhaps registry keys were changed as well - don't know.
So now all livecycle forms are crashing. Below is one error message seen on the console followed by the crash.
The code being used has been executed 10K+ times over ~5 years, over roughly 5 different forms over many versions.
form1.FirstPage.sfBody.sfSectionB.sfEnder.SendReferral::click - (JavaScript, client)
var res = form1.execValidate(); // does form validation; returns true if all good
if (res) {
    cLookFeel.fMailTo(event.target);
}
(Code is attached to the click method on a button, cLookFeel is the name of my code block.)
And strangely, Reader then often seems to crash afterwards. Go figure.
Okay, turns out it's a known bug by Adobe on 11.0.21. They've issued a fix.
https://helpx.adobe.com/acrobat/release-note/acrobat-dc-august-11-2017.html
When I attempt to run the line:
MyDBContext.Database.Log = Console.Write
The compiler smiles and tells me I don't know what I am doing...
The app won't compile because of the line and the error on that line is:
Overload resolution failed because no accessible Write accepts this number of arguments.
That makes sense. 'Console.Write' returns nothing and I am setting it equal to a System.Action(Of String)
This just seems kind of half baked.
I tried numerous ways to fix it including delegates, and some of the other 'new possibilities' moving this off the Context is supposed to offer but still no dice.
What am I missing? Is it something that was changed at the last minute?
I have two large edmx files (one connects to SQL Server and the other to Oracle) in the solution and all of that is working great.
Here are my version numbers if that can help.
EntityFramework 6.0.0.0 (folder is ...\EntityFramework.6.1.3\lib\net45\EntityFramework.dll)
EntityFramework.SqlServer 6.0.0.0 (folder is ...\EntityFramework.6.1.3\lib\net45\EntityFramework.dll)
Oracle.ManagedDataAccess.EntityFramework 6.121.2.0
I have a tool I created that lets me paste the output of the L2S 'mycontext.log' into it and it then parses it and creates SSMS ready SQL with variables... it has been incredibly useful. This has been one of my favorite features of L2S.
Please help me understand why this isn't working.
Thanks in advance.
This technique works for me:
public override int SaveChanges()
{
    SetIStateInfo();
#if DEBUG
    Database.Log = s => Debug.WriteLine(s);
#endif
    return base.SaveChanges();
}
http://blogs.msdn.com/b/mpeder/archive/2014/06/16/how-to-see-the-actual-sql-query-generated-by-entity-framework.aspx
Well, the answer was to research the Action(T) delegate, which showed me how to do it.
#If DEBUG Then
    myctx.Database.Log = AddressOf Console.Write
#End If
Just needed the AddressOf and I was back in business.
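The underlying idea is that Database.Log is just an Action(Of String): a callback that takes one string, and AddressOf passes a reference to Console.Write rather than calling it. The same shape can be sketched in Python (all names here are illustrative stand-ins, not EF's real internals):

```python
import sys

class Database:
    """Tiny stand-in for EF's Database property: Log is simply a
    callback taking one string, i.e. an Action(Of String)."""

    def __init__(self):
        self.log = None                  # no logging by default

    def execute(self, sql: str) -> None:
        if self.log is not None:
            self.log(sql + "\n")         # invoke the callback, as EF calls Log(s)

db = Database()
db.log = sys.stdout.write                # like `Database.Log = AddressOf Console.Write`
db.execute("SELECT 1")                   # the statement is echoed to the console

messages = []
db.log = messages.append                 # any Action(Of String)-shaped callable works
db.execute("SELECT 2")
print(messages)                          # -> ['SELECT 2\n']
```

The compile error in the question came from *calling* Console.Write (which returns nothing) instead of passing a reference to it; AddressOf is VB's way of handing over the function itself.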
We're migrating SQL to Azure. Our DAL is Entity Framework 4.x based. We're wanting to use the Transient Fault Handling Block to add retry logic for SQL Azure.
Overall, we're looking for the best 80/20 solution (or maybe more of a 95/5, but you get the point): we're not looking to spend weeks refactoring or rewriting code (there's a LOT of it). I'm fine re-implementing our DAL's framework, but not all of the code written and generated against it, any more than we have to, since this exists only to address a minority case. For us, mitigating this edge case matters far more than eliminating it.
Looking at the possible options explained here at MSDN, it seems Case #3 there is the "quickest" to implement, but only at first glance. Upon pondering this solution a bit, it struck me that we might have problems with connection management, since this circumvents Entity Framework's built-in processes for managing connections (i.e. always closing them). It seems to me that the "solution" is to make sure 100% of the Contexts we instantiate use Using blocks, but with our architecture this would be difficult.
So my question: Going with Case #3 from that link, are hanging connections a problem or is there some magic somewhere that's going on that I don't know about?
I've done some experimenting and it turns out that this brings us back to the old "managing connections" situation we're used to from the past, only this time the connections are abstracted away from us a bit and we must now "manage Contexts" similarly.
Let's say we have the following OnContextCreated implementation:
private void OnContextCreated()
{
    const int maxRetries = 4;
    const int initialDelayInMilliseconds = 100;
    const int maxDelayInMilliseconds = 5000;
    const int deltaBackoffInMilliseconds = initialDelayInMilliseconds;

    var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(maxRetries,
        TimeSpan.FromMilliseconds(initialDelayInMilliseconds),
        TimeSpan.FromMilliseconds(maxDelayInMilliseconds),
        TimeSpan.FromMilliseconds(deltaBackoffInMilliseconds));

    policy.ExecuteAction(() =>
    {
        try
        {
            Connection.Open();
            var storeConnection = (SqlConnection)((EntityConnection)Connection).StoreConnection;
            new SqlCommand("declare @i int", storeConnection).ExecuteNonQuery();
            //Connection.Close();
            // throw new ApplicationException("Test only");
        }
        catch (Exception e)
        {
            Connection.Close();
            Trace.TraceWarning("Attempted to open connection but failed: " + e.Message);
            throw;
        }
    });
}
In this scenario, we forcibly open the Connection (which was the goal here). Because of this, the Context keeps it open across many calls. Because of that, we must tell the Context when to close the connection. Our primary mechanism for doing that is calling the Dispose method on the Context. So if we just allow garbage collection to clean up our contexts, then we allow connections to remain hanging open.
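The RetryPolicy above retries a transiently failing action up to maxRetries times, doubling the delay between attempts up to a cap. That shape can be sketched in a few lines of Python (a sketch of the pattern only, not the actual Transient Fault Handling Block; all names are hypothetical):

```python
import time

def execute_with_retry(action, is_transient,
                       max_retries=4,
                       initial_delay=0.1, max_delay=5.0,
                       sleep=time.sleep):
    """Retry `action` on transient errors with capped exponential backoff."""
    delay = initial_delay
    for attempt in range(max_retries + 1):
        try:
            return action()
        except Exception as e:
            if attempt == max_retries or not is_transient(e):
                raise                    # out of retries, or a permanent error
            sleep(delay)
            delay = min(delay * 2, max_delay)

# Simulated transient failure: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "opened"

result = execute_with_retry(flaky, lambda e: isinstance(e, TimeoutError),
                            sleep=lambda s: None)   # skip real sleeping in the demo
print(result)       # -> opened, on the third attempt
```

A permanent error (one `is_transient` rejects) is re-raised immediately, which matches why the catch block above closes the connection and rethrows rather than swallowing the exception.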
I tested this by toggling the comments on the Connection.Close() in the try block and running a bunch of unit tests against our database. Without calling Close, we jumped up to ~275-300 active connections (from SQL Server's perspective). By calling Close, that number hovered at ~12. I then reproduced with a small number of unit tests both with and without a using block for the Context and reproduced the same result (different numbers - I forget what they were).
I was using the following query to count my connections:
SELECT s.session_id, s.login_name, e.connection_id,
       s.last_request_end_time, s.cpu_time, e.connect_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS e
    ON s.session_id = e.session_id
WHERE s.login_name = 'myuser'
ORDER BY s.login_name
Conclusion: If you call Connection.Open() with this work-around to enable the Transient Fault Handling Block, then you MUST use using blocks for all contexts you work with, otherwise you will have problems (that with SQL Azure, will cause your database to be "throttled" and ultimately taken offline for hours!).
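The "always use a using block" discipline is really about deterministic disposal: once the connection is opened eagerly, closing must happen at scope exit, not whenever garbage collection gets around to it. Python's context-manager protocol expresses the same idea; a minimal sketch with hypothetical stand-in classes:

```python
class FakeConnection:
    """Stand-in for a database connection; tracks open/closed state."""
    def __init__(self):
        self.is_open = False
    def open(self):
        self.is_open = True
    def close(self):
        self.is_open = False

class Context:
    """Sketch of a DbContext-like object: the connection is opened
    eagerly (as in the OnContextCreated work-around), so close must
    run deterministically when the scope ends."""
    def __init__(self):
        self.connection = FakeConnection()
        self.connection.open()          # eager open, as in the work-around
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        self.connection.close()         # the `using` block guarantees this

with Context() as ctx:                  # C#: using (var ctx = new MyContext())
    print(ctx.connection.is_open)      # -> True while inside the block
print(ctx.connection.is_open)          # -> False: closed deterministically on exit
```

Without the `with` (or C# `using`), the close would only happen at finalization, which is exactly how the ~275-300 hanging connections in the experiment above accumulate.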
The problem with this approach is it only takes care of connection retries and not command retries.
If you use Entity Framework 6 (currently in alpha) then there is some new in-built support for transient retries with Azure SQL Database (with a little bit of configuration): http://entityframework.codeplex.com/wikipage?title=Connection%20Resiliency%20Spec
I've created a library which allows you to configure Entity Framework to retry using the Fault Handling block without needing to change every database call - generally you will only need to change your config file and possibly one or two lines of code.
This allows you to use it for Entity Framework or Linq To Sql.
https://github.com/robdmoore/ReliableDbProvider