Using DateTime?.Value.TimeOfDay in LINQ Query - sql-server

I'm trying to do a query with LINQ in ASP.NET MVC 3.
I have a model, let's call it Event. This Event object has a Date property of type DateTime?. What I want is to fetch the Events whose time of day falls between two TimeSpans.
Right now my code looks like the following:
TimeSpan From = new TimeSpan(8,0,0);
TimeSpan Until = new TimeSpan(22,0,0);
var events =
from e in db.Events
where e.Date.Value.TimeOfDay >= From
   && e.Date.Value.TimeOfDay <= Until
select e;
An exception is thrown, telling me that "The specified type member 'TimeOfDay' is not supported in LINQ to Entities."
I can't find a way around this problem, and I have been trying all day. Please help me, I'm so frustrated. :(
EDIT:
I forgot to write the "TimeOfDay" after e.Date.Value here; it is present in my actual code.
I can't use DateTime because I have to filter Events that occur between certain times of the day, regardless of the date of the event.

Use the Date and Time Canonical Functions for LINQ-to-Entities. Specifically, look at
CreateTime(hour, minute, second)
If you need help calling a canonical function, look at How To: Call Canonical Functions.
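Untested, but here is a sketch of what that could look like using the EntityFunctions wrapper from System.Data.Objects (EF 4.x; later versions expose similar helpers on DbFunctions). The db.Events context and Date property come from the question; everything else is illustrative:
TimeSpan from = new TimeSpan(8, 0, 0);
TimeSpan until = new TimeSpan(22, 0, 0);
var events =
    from e in db.Events
    where EntityFunctions.CreateTime(e.Date.Value.Hour,
                                     e.Date.Value.Minute,
                                     e.Date.Value.Second) >= from
       && EntityFunctions.CreateTime(e.Date.Value.Hour,
                                     e.Date.Value.Minute,
                                     e.Date.Value.Second) <= until
    select e;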

Related

Flink drops late records even though I specified the side output

I use Flink to process DynamoDB stream data.
Watermark strategy: periodic; extract an approximate timestamp from the stream events and use it under withTimestampAssigner.
Idleness: 10s (may not be useful at all, as we only use a parallelism of 1).
The data work flow looks like this:
inputStream.assignTimestampsAndWatermarks().keyBy().window(TumblingEventTimeWindows.of(1 min)).sideOutputLateData().reduce().map()
Then I call getSideOutput() and process the late events using almost exactly the workflow above, with small changes such as no timestamp/watermark assignment and no late output.
My logs show that everything works perfectly when the DDB stream data has the right timestamp: the corresponding window closes without issue and I can see the output after the window is closed.
However, after I introduced late events, the late-record processing logic is never triggered. I am sure that the window corresponding to the late record's timestamp has closed. I put a log statement after the getSideOutput() call, and it is never reached. I used a debugger and confirmed that the getSideOutput() code path is not triggered either.
Can someone help to check this issue? Thank you.
I tried to use a different watermark strategy for the late-records logic. This doesn't work either. I want to understand why the late records are not collected into the late stream.
Without seeing more details of your implementation it is difficult to give an accurate diagnosis, but based on your description, I wouldn't expect this to work:
Then I call getSideOutput() and process the late events using almost exactly the workflow above, with small changes such as no timestamp/watermark assignment and no late output.
If you are trying to apply event time windowing to the stream of late events, that's not going to work unless you adjust the allowed lateness for those windows enough to accommodate them.
As a starting point, have you tried printing the stream of late events?
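One thing worth double-checking (an assumption on my part, since the full code isn't shown): getSideOutput() must be called on the SingleOutputStreamOperator returned by the windowed reduce() itself, not on the stream produced by a later map(); otherwise the late stream will be empty. A minimal sketch with illustrative names (Event, MyReduceFunction, MyMapFunction, and the tag are placeholders):
final OutputTag<Event> lateTag = new OutputTag<Event>("late-events") {};

SingleOutputStreamOperator<Event> windowed = input
        .assignTimestampsAndWatermarks(watermarkStrategy)
        .keyBy(Event::getKey)
        .window(TumblingEventTimeWindows.of(Time.minutes(1)))
        .sideOutputLateData(lateTag)   // must be this exact tag instance
        .reduce(new MyReduceFunction());

windowed.map(new MyMapFunction());     // main output continues here

// Retrieve the side output from the window operator itself, then print
// it to confirm late records are actually landing there.
windowed.getSideOutput(lateTag).print();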

How to get today's date in BPMN timer

So I need a timer in BPMN that will be of type date, but instead of putting an exact date (like 2022-08-04T08:30:00) I want to use today's date and the current hour. Is it possible to do this in Camunda Modeler?
Thanks.
You can use expressions in the modeler that will be executed at runtime. There are internal context functions that might be interesting for you.
Untested, but entering this as the timer value should work:
${dateTime().plusHours(1)}
If the built-in functions are not enough for you, you could just use the value of an existing process variable (${variableName}) and set the variable value to any date you like (via an execution listener, service task, ...).
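In the underlying BPMN XML that would look roughly like this (a hedged sketch of a timer catch event; double-check the element layout against what your Camunda Modeler version generates):
<bpmn:intermediateCatchEvent id="Timer_WaitOneHour">
  <bpmn:timerEventDefinition>
    <bpmn:timeDate xsi:type="bpmn:tFormalExpression">${dateTime().plusHours(1)}</bpmn:timeDate>
  </bpmn:timerEventDefinition>
</bpmn:intermediateCatchEvent>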

What's the workaround for not being able to pass heap objects to a future method?

This seriously is one of the biggest thorns in my side. SFDC does not allow you to use complex objects or collections of objects as parameters to a future call. What is the best workaround for this?
Currently what I have done is pass in multiple parallel arrays of primitives which form a complete object based on the index. Meaning if I need to pass a collection of users, I may pass 3 string arrays, say Name[], Id[], and Role[]. Name[0], Id[0], and Role[0] are the first user, etc. This means I have to build all these arrays and have the future method reconstruct the relevant objects on the other end as well.
Is there a better way to do this?
As to why: once an Apex "transaction" is complete, the VM is destroyed, and generally speaking, Salesforce will not serialize your object graph for resuming at a future time.
There may be a better way to get this task done. Can the future method query for the objects it needs to act on? Perhaps you can pass a List of Ids and the future method can use it in a WHERE clause. If it's a large number of objects, Batch Apex may be useful to avoid governor limits.
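For instance, a minimal sketch of the Id-passing approach (the class name and query are illustrative, not from your org):
public class UserSync {
    @future
    public static void syncUsers(List<Id> userIds) {
        // Collections of primitives (including Id) are allowed as
        // @future parameters, so re-query the full records here.
        List<User> users = [SELECT Id, Name, UserRole.Name
                            FROM User WHERE Id IN :userIds];
        // ... act on the users ...
    }
}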
I would suggest creating a new custom object specifically for storing the information required in your custom apex class. You can then insert these into the database and then query for the records in the #future method before using them for the callout.
Then, once the callout has completed successfully you can then delete those records from the database to keep things nice and tidy.
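A hedged sketch of that staging pattern; Callout_Queue__c and its fields are assumed names, not an existing schema:
public class CalloutQueueWorker {
    public static void enqueue(List<Contact> contacts) {
        List<Callout_Queue__c> rows = new List<Callout_Queue__c>();
        for (Contact c : contacts) {
            rows.add(new Callout_Queue__c(Contact__c = c.Id, Payload__c = c.Email));
        }
        insert rows;
        processQueue();
    }

    @future(callout=true)
    public static void processQueue() {
        List<Callout_Queue__c> rows =
            [SELECT Id, Contact__c, Payload__c FROM Callout_Queue__c];
        // ... perform the callout using the queued data ...
        delete rows; // tidy up once the callout succeeds
    }
}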
My answer is essentially the same. What I do is prepare a custom queue object with all relevant Ids (User/Contact/Lead/etc.) along with my custom data, which then gets handled from the @future call. This helps with governor limits, since you can pull from the queue only what your callout and future limitations will permit you to handle in a single thread. For Facebook, for example, you can batch up 20 profile updates per single callout. Each @future allows 10 callouts and each thread permits 10 @future calls, which equals 2000 individual Facebook profile updates - IF you're handling your batches properly and IF you have enough Salesforce seats to permit this number of @future calls. It's 200 @future calls per user per 24 hours, last I checked.
The road gets narrow when you're performing triggered callouts, which is what I assume you're trying to do based on your need to call out from an @future method in the first place. If you're not in a trigger, then you may be able to handle your callouts as long as you do them before performing any DML. In other words, postpone any data saves in any particular thread until you're done calling out.
But since it sounds like you need to call out from a trigger, batching it up in sObjects is really the way to go. It's a bit of work, but essentially serializing your existing heap data is the road to travel here. Also consider doing this from an hourly scheduled Batch Apex call, since with the queue approach you'll be able to process all of your callouts eventually. If you run into governor limits (or rather, to avoid hitting them) in a particular thread, it will wake up an hour later and finish the work left in your queue. Launching that process looks something like this:
String jobId = System.schedule('YourScheduleName', '0 0 0-23 * * ?', new ScheduleableClass());
This will run ScheduleableClass once an hour, pulling the work from your queue object and processing the maximum number of callouts.
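The scheduled class itself just needs to implement the Schedulable interface (a sketch with assumed contents):
global class ScheduleableClass implements Schedulable {
    global void execute(SchedulableContext sc) {
        // Pull pending rows from the queue object and process as many
        // callouts as this thread's limits allow; leave the rest for
        // the next hourly run.
    }
}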
Good luck and sorry for the frustration.
Just wanted to give my answer on how I do this very easily in case anyone else stumbles across this question. Apex has functions to easily serialize and de-serialize objects to and from JSON encoding. Let's say I have a list of cases that I need to do something with in a future call:
String jsonCaseList = '';
List<Case> caseList = [SELECT Id, Other fields FROM Case WHERE some conditions];
//Serialize your list
jsonCaseList = JSON.serialize(caseList);
//Pass jsonCaseList as a string parameter to your future call
futureCaseActivity(jsonCaseList);

@future
public static void futureCaseActivity(String jsonCases){
    //De-serialize the string back into a list of cases
    List<Case> futureCaseList = (List<Case>) JSON.deserialize(jsonCases, List<Case>.class);
    //Do whatever you want with your cases
    for(Case c : futureCaseList){
        //Stuff
    }
    update futureCaseList;
}
Anyway, this seems like a much better option than adding database clutter with a new custom object, and it avoids needing to query the database again for info you already have, which just makes me hurt inside.
Almost forgot to add the link: https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_json_json.htm

How to create a java.util.Calendar instance at epoch time?

Is there any existing way to get a Calendar populated with the time at epoch using the Calendar APIs, other than explicitly setting the fields to the epoch? All I was able to do was get the current time.
There is no predefined constructor or factory method to do this, but it is fairly simple:
Calendar c = Calendar.getInstance();
c.setTimeInMillis(0);
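One caveat worth noting (not part of the original answer): getInstance() uses the default time zone, so although the instant is exactly the epoch (getTimeInMillis() returns 0), calendar fields such as HOUR_OF_DAY will reflect your local zone. To make the fields read 1970-01-01T00:00:00, pin the calendar to UTC:
Calendar c = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
c.setTimeInMillis(0); // fields now read 1970-01-01T00:00:00 UTC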

When does the deferred execution occur?

I've got a situation which I want to fetch data from a database, and assign it to the tooltips of each row in a ListView control in WPF. (I'm using C# 4.0.) Since I've not done this sort of thing before, I've started a smaller, simpler app to get the ideas down before I attempt to use them in my main WPF app.
One of my concerns is the amount of data that could potentially come down. For that reason I thought I would use LINQ to SQL, which uses deferred execution. I thought that would help by not pulling down the data until the user passes their mouse over the relevant row. To do this, I'm going to use a separate function to assign the values to the tooltip from the database, based upon the parameters I need to pass to the relevant stored procedures. I'm doing 2 queries using LINQ to SQL, using 2 different stored procedures, and assigning the results to 2 different DataGrids.
Even though I know that LINQ to SQL uses deferred execution, I'm beginning to wonder if some of the code I'm writing may defeat my whole intent in using it. For example, in testing my simpler app, I am choosing several different values to see how it works. One selection of values brought back no data, as there was no data for the given parameters. I thought this could potentially confuse the user, so I thought I would check the Count property of the list that I assign from running the DBML-associated method (related to the stored procedure). Thinking about it, I would think LINQ would have to run the query in order to give me a result for the Count property. Am I not correct?
If I eliminate the call to the list's Count property, I'm still wondering if I might have a problem: might LINQ still be invoked because I'm associating the tooltip with the control via a function call?
You are correct: when you call the Count property, it iterates over the result set. I'm not clear on your last question, but the LINQ query probably gets executed at the point where you populate your DataGrids, way after the tooltip comes into play.
EDIT: however, this does not mean there is anything wrong with deferred execution or your use of it; it executes at the latest possible stage, right when you need the data. If you still want to check the Count ahead of actually fetching all the data, you could have a simple LINQ to SQL function that checks for Any() rows. (Actually, Any() is probably what you want more than Count > 0.)
You should use Any(), not Count(), but even Any() will cause the query to be executed - after all, it can't determine whether or not there are any rows in the result set without executing the query. But there's executing the query, and there's fetching the result set. Any() will fetch one row, Count() will fetch them all.
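To illustrate the difference (a hypothetical query; the property and predicate are placeholders):
// Any() translates to an EXISTS check and fetches at most one row;
// Count() forces the full count to be computed.
bool hasRows = db.Events.Any(e => e.CategoryId == categoryId);
int total = db.Events.Count(e => e.CategoryId == categoryId);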
That said, I think that having a non-instantaneous operation that occurs on mouseover is just a bad idea. There was a build of Outlook, once, that displayed a helpful tooltip when you moused over the Print button. Less helpfully, it got the data for that tooltip by calling the system function that finds out what printers are available. So you'd be reaching for a menu, and the button would grab the mouse pointer and the UI would freeze for two seconds while it went out and figured out how to display a tooltip that you weren't even asking for. I still hate this program today. Don't be this guy.
A better approach would be to get your tooltip data asynchronously after populating the visible data on the screen. It's easy enough to create a BackgroundWorker that fetches the data into a DataTable, and then make the DataTable available to the view models in the RunWorkerCompleted event handler. (Do it there so that you don't update UI-bound data from a background thread.) You can implement a ToolTip property in your view model that returns a default value (probably null, but maybe something like "Fetching data...") if the DataTable containing tooltip data is null, and that calculates the value if it's not. That should work admirably. You can even implement property-change notification so that the ToolTip will still get updated if the user keeps the mouse pointer over it while you're fetching the data.
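A rough sketch of that shape; the connection string, query, and view-model property are placeholders, not a drop-in implementation:
var worker = new BackgroundWorker();
worker.DoWork += (s, e) =>
{
    // Runs on a thread-pool thread: fetch the tooltip data here.
    var table = new DataTable();
    using (var conn = new SqlConnection(connectionString))
    using (var adapter = new SqlDataAdapter("SELECT Id, Summary FROM EventDetails", conn))
    {
        adapter.Fill(table);
    }
    e.Result = table;
};
worker.RunWorkerCompleted += (s, e) =>
{
    // Raised on the UI thread, so it's safe to hand the data
    // to the UI-bound view models here.
    viewModel.ToolTipData = (DataTable)e.Result;
};
worker.RunWorkerAsync();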
Alex is correct that calling Count() or Any() will enumerate the LINQ expression, causing the query to execute. I would recommend rethinking your design, as you probably don't want a query against the database executed every time the user moves his/her mouse. There is also the issue of the delay in querying the database. What might be instantaneous on your dev box with a local database might have a multi-second delay on a heavily loaded server. I would recommend creating a DisplayTooltip() function that takes a lazily evaluated LINQ expression. You can then cache the results or apply other heuristics to decide whether you should actually query the database or not.
