How to loop Informatica sessions

I want to load data into one table for one month, say 1 April to 30 April, in successive order.
That is, after loading the data for 1 April, the date should automatically increment to 2 April, that day's data should load, and so on until 30 April.
Also, the data for 2 April depends on the data for 1 April, so I cannot just give a date range and let it load in arbitrary order.
How can I do it?
It would be preferable to get the loads done in a single session run instead of running the session several times.

Sort the source data by date and use a Transaction Control transformation to enforce a commit every time the date changes.
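For example -- a sketch only, assuming the source is sorted by a date column here called LOAD_DATE -- an upstream Expression transformation can flag the first row of each new date using variable ports (a variable port still holds the previous row's value when the ports above it are evaluated):

v_new_date_flag = IIF(LOAD_DATE = v_prev_date, 0, 1)
v_prev_date     = LOAD_DATE
o_new_date_flag = v_new_date_flag

The Transaction Control transformation's condition can then commit each completed date before the rows for the next date begin:

IIF(o_new_date_flag = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)

TC_COMMIT_BEFORE commits the rows received so far and starts a new transaction with the current row, so each date is written as its own unit within a single session run.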


Google Analytics (GA4) transaction data not accurate

1. Background
Using GA4 without GA360
2. Description
When I search the records at the transaction level (record by datetime), which shows the detailed records one by one, the event count varies depending on the filtered date range.
3. Hope to Achieve
I would like a normal report that shows every event count as 1, regardless of whether I filter on a month or longer.
4. Weird Cases
Event count of every record is 1 when filtering half a month, e.g. 1 Oct to 16 Oct.
Event count of every record is 2 when filtering one month, e.g. 1 Oct to 30 Oct.
Event count of every record is 3 when filtering 1.5 months, e.g. 15 Sep to 30 Oct.
5. Supporting
This is the sampling notice from the report:
Heavily Sampled exploration
This report is based on 9.52% of available data. A smaller sample size means that the data in this report is less accurate. Learn More

Inserting deleted data back into a dataset using a temp table - QlikView

I am having trouble working out the logic for this little scenario. Basically I have a data set that is stored by week of the year, and each week the previous week's data is deleted from the data set. What I need to do is copy the previous week's data before it's removed from the data set and then add it back in after it's removed. So, for example, if today is week 33, I need to save that week's data and then add it back in next week. Then next week I need to take week 34 and save that, to add back in during week 35. A picture explains better than a thousand words, so here it is.
As you can see, I need the minimum week from the data set before I add the previous week's data back in. The real issue I'm finding is that the dataset can be rerun more than once each week, so I would need to keep the temp data set until the next week while still extracting the minimum week's data.
It's more the logic I'm after here... Hope it makes sense, and thanks in advance.
QVDs are the way forward! Although maybe not quite in the way another (very good) answer states.
// Load of data from the source system
Test:
LOAD *,
     Today() AS RunDate
FROM SourceData;

// Concatenate the history already held in the QVD (skipped on the very first run, when no QVD exists yet)
IF Not IsNull(FileSize('Test.qvd')) THEN
    Concatenate (Test)
    LOAD *
    FROM Test.qvd (qvd);
END IF

// Store the combined table back into the QVD
STORE Test INTO Test.qvd (qvd);
This way you only have one QVD of data that continually expands.
Some warnings
Bear in mind that the report runs multiple times a week, so you will need to cater for duplication in the data load.
QVD files aren't encrypted, so put your data somewhere safe.
When loading from a QVD and then overwriting it, if something goes wrong (the load fails) you will need to recover your QVD, so make sure your backup solution is up to the task.
I also added the RunDate field so that it is easier for you to take the data apart when reviewing it; this gives you the same split as storing in separate QVDs would.
Sounds like you should store the data out into weekly QVD files as part of an Extract process and then load the resulting files in.
The logic would be something like the below...
First run (week 34 for week 33 data):
Get data for previous week
Store into file correctly dated - e.g. 2016-33 for week 33 of 2016
Drop this table
Load all QVDs (in this case just 1)
Next week run (week 35 for week 33 & 34 data):
Get data for previous week
Store into file correctly dated - e.g. 2016-34 for week 34 of 2016
Drop this table
Load all QVDs (in this case 2)
Repeat run later in the same week (week 35 again, for week 33 & 34 data):
Get data for previous week
Store into file correctly dated - e.g. 2016-34 for week 34 of 2016 (this time overwriting it)
Drop this table
Load all QVDs (in this case 2)
Sensible file naming solves the problem, but if you really need to inspect the data to check the week number, you would need to first load all existing QVDs, query the minimum week number, and take it from there.
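A rough sketch of that extract step in load script terms -- assuming a source called SourceData whose selection is already limited to the previous week, and following the 2016-33 naming pattern above:

// 1. Pull the previous week's data from the source
PrevWeek:
LOAD *
FROM SourceData;    // limit this to the previous week in your source selection

// 2. Store it into a QVD named for that week, e.g. 2016-33.qvd (a rerun simply overwrites it)
LET vWeekFile = Year(Today()) & '-' & Num(Week(Today()) - 1, '00');    // year-boundary handling omitted
STORE PrevWeek INTO [$(vWeekFile).qvd] (qvd);
DROP TABLE PrevWeek;

// 3. Load every weekly QVD back in to build the full data set
AllWeeks:
LOAD *
FROM [*.qvd] (qvd);

Because a rerun in the same week just overwrites that week's file, duplication never builds up in the stored history.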

Manufacturing process cycle time database design

I want to create a database to store process cycle time data. For example:
Say a particular process for a certain product, say welding, theoretically takes about 10 seconds to complete (the process cycle time). Due to various issues, the machine's actual cycle time varies throughout the day. I would like to store the machine's actual cycle time throughout the day and analyze it over time (days, weeks, months). How would I go about designing the database for this?
I considered using a time series database, but I figured it isn't suitable - cycle time data has a start time and an end time - basically I'm measuring time performance over time, if that even makes sense. At the same time, I was also worried that using a relational database to store and then display/analyze time-related data would be inefficient.
Any thoughts on a good database structure would be greatly appreciated. Let me know if any more info is needed and I will gladly edit this question.
You are tracking the occurrence of an event. The event (weld) starts at some time and ends at some time. It might be tempting to model the event entity like so:
StationID StartTime StopTime
with each welding station having a unique identifier. Some sample data might look like this:
17 08:00:00 09:00:00
17 09:00:00 10:00:00
For simplicity, I've set the times to large values (1 hour each) and removed the date values. This tells you that welding station #17 started a weld at 8am and finished at 9am, at which time the second weld started which finished at 10am.
This seems simple enough. Notice, however, that the StopTime of the first entry matches the StartTime of the second entry. Of course it does, the end of one weld signals the start of the next weld. That's how the system was designed.
But this sets up what I call the Row Spanning Dependency antipattern: where the value of one field of a row must be synchronized with the value of another field in a different row.
This can create any number of problems. For example, what if the StartTime of the second entry showed '09:15:00'? Now we have a 15-minute gap between the end of the first weld and the start of the next. The system does not allow for gaps -- the end of each weld also starts the next weld. How should we interpret this gap? Is the StopTime of the first row wrong? Is the StartTime of the second row wrong? Are both wrong? Or was there another row between them that was somehow deleted? There is no way to tell which is the correct interpretation.
What if the StartTime of the second entry showed '08:45'? This is an overlap, where the second cycle supposedly started before the first cycle ended. Again, we can't know which row contains the erroneous data.
A row spanning dependency allows for gaps and overlaps, neither of which is allowed in the data. A large amount of database and application code would be required to prevent such a situation from ever occurring, and when it does occur (as it assuredly will) there is no way to determine which data is correct and which is wrong -- not from within the database, that is.
An easy solution is to do away with the StopTime field altogether:
StationID StartTime
17 08:00:00
17 09:00:00
Each entry signals the start of a weld. The end of the weld is indicated by the start of the next weld. This simplifies the data model, makes it impossible to have a gap or overlap, and more precisely matches the system we are modeling.
But we need the data from two rows to determine the length of a weld.
select w1.StartTime, w2.StartTime as StopTime
from Welds w1
join Welds w2
  on w2.StationID = w1.StationID
 and w2.StartTime = (
       select Min( StartTime )
       from Welds
       where StationID = w1.StationID
         and StartTime > w1.StartTime );
This may seem like a more complicated query than if the start and stop times were in the same row -- and, well, it is -- but think of all that checking code that no longer has to be written and executed at every DML operation. And since the combination of StationID and StartTime would be the obvious PK, the query would use only indexed data.
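For concreteness, a minimal sketch of such a table (names taken from the sample data above), with that composite key:

create table Welds (
    StationID int      not null,
    StartTime datetime not null,
    constraint PK_Welds primary key (StationID, StartTime)
);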
There is one addition to suggest. What about the first weld of the day or after a break (like lunch), and the last weld of the day or before a break? We must make an effort not to include the break time as cycle time. We could build the intelligence to detect such situations into the query, but that would increase the complexity even more.
Another way would be to include a status value in the record.
StationID StartTime Status
17 08:00:00 C
17 09:00:00 C
17 10:00:00 C
17 11:00:00 C
17 12:00:00 B
17 13:00:00 C
17 14:00:00 C
17 15:00:00 C
17 16:00:00 C
17 17:00:00 B
So the first few entries represent the start of a cycle, whereas the entries for noon and 5pm represent the start of a break. Now we just need to append the line
where w1.Status = 'C'
to the end of the query above. Thus the 'B' entries supply the end times of the previous cycle but do not start another cycle.
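As an aside, on engines that support window functions the same pairing can be written with LEAD(); this is only a sketch against the same table, with the derived table keeping the break rows visible to LEAD before they are filtered out:

select StartTime, StopTime
from (
    select StationID,
           StartTime,
           Status,
           lead(StartTime) over (partition by StationID
                                 order by StartTime) as StopTime
    from Welds
) as NextStart
where Status = 'C';

The last weld per station has no following row, so its StopTime comes back null here; the two-row join above simply drops that row instead.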

Calculate Facebook likes, comments, and shares for different time zones from saved UTC

I've been struggling with this for a while and hope someone can give me an idea of how to tackle it.
We have a service that goes out and collects Facebook likes, comments, and shares for each status update multiple times a day. The table that stores this data is something like this:
PostId  EngagementTypeId  Value  CollectedDate
100     1 (likes)         10     1/1/2013 1:00
100     2 (comments)      2      1/1/2013 1:00
100     3 (shares)        0      1/1/2013 1:00
100     1                 12     1/1/2013 3:00
100     2                 3      1/1/2013 3:00
100     3                 5      1/1/2013 3:00
Value holds the total for each engagement type at the time of collection.
I got a requirement to create a report that shows new value per day at different time zones.
Currently, I'm doing the calculation in a stored procedure that takes in a time zone offset, and based on that I calculate the delta for each day. For someone in California, the report will show 12 likes, 3 comments, and 5 shares for 12/31/2012. But for someone with a time zone offset of -1, it will show 10 likes on 12/31/2012 and 2 likes on 1/1/2013.
The problem I'm having is that doing the calculation on the fly can be slow if we have a lot of data and a big date range. We're talking about having the delta pre-calculated for each day and stored in a table so I can just query from that (we're considering SSAS, but that's for the next phase). But doing this, I would need to store the data for each day for 24 time zones. Am I correct (and if so, this is not ideal), or is there a better way to approach this?
I'm using SQL 2012.
Thank you!
You need to convert the UTC datetime stored in your column to a date based on the user's UTC offset. This way you don't have to worry about any table that has to be populated with pre-calculated data. To get the user's local date from your UTC column you would use something like this:
SELECT CONVERT(DATE,(DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), '01/29/2014 04:00')))
AS MyLocalDate
The SELECT statement above figures out the local date based on the difference between UTC time and local time. You will need to replace GETDATE() with the user's local datetime that is passed in to your procedure, and replace '01/29/2014 04:00' with your column. This way, when you select any date from your table, it will be according to what that date was in the user's local time. Then you can calculate the other fields accordingly.
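If you keep the offset parameter your stored procedure already receives, the per-day deltas can also be computed directly in SQL Server 2012 with LAG(). A sketch only -- the table name FacebookEngagement and the parameter @TimeZoneOffsetMinutes are assumed, and Value is treated as the running total the question describes:

DECLARE @TimeZoneOffsetMinutes int = -480;   -- e.g. California (UTC-8, ignoring DST)

WITH DailyTotals AS (
    SELECT PostId,
           EngagementTypeId,
           CONVERT(date, DATEADD(MINUTE, @TimeZoneOffsetMinutes, CollectedDate)) AS LocalDate,
           MAX(Value) AS EndOfDayValue   -- running total, so the last (largest) reading of the local day
    FROM FacebookEngagement
    GROUP BY PostId, EngagementTypeId,
             CONVERT(date, DATEADD(MINUTE, @TimeZoneOffsetMinutes, CollectedDate))
)
SELECT PostId,
       EngagementTypeId,
       LocalDate,
       EndOfDayValue
           - LAG(EndOfDayValue, 1, 0) OVER (PARTITION BY PostId, EngagementTypeId
                                            ORDER BY LocalDate) AS NewForThatDay
FROM DailyTotals
ORDER BY PostId, EngagementTypeId, LocalDate;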

Storing and searching opening/closing times for stores

I'm writing an application that indexes data for our stores, some of which are open late (8 am - 2 am). We need to be able to search this database quickly -- basically, to run a query to find which stores are open at a given point in time (now, Sunday at 1 am, whatever).
In addition, the open/close times can vary day-by-day -- some stores are closed on Sundays, for example.
The obvious solution to me would be to make a table where I have a row with the store ID, day, open time, and close time. For something like Monday, 8 am - 2 am, that would actually be two rows, one for Monday 0800 - 2400, and one for Tuesday 0000 - 0200.
We have a lot of stores, so the search has to perform well (basically, the data has to be index-friendly), but I'll also have to display this data back out in a human-readable format. With my current solution, that'd look something like this:
Monday: 8:00 - Midnight
Tuesday: Midnight - 2:00 am; 8:00 am - Midnight
I'm just wondering if anybody else has alternative solutions before I jump right to an implementation. Thanks!
When PBS (the US Public Broadcasting Service) faced this same problem a couple of years ago, they invented the idea of the "30-hour day" -- where 00:00 is midnight at the start of the day, 24:00 is midnight at the end of the day, 25:00 is 1 am the next day, and 30:00 is 6 am the next day. That way a Monday closing time of 26:00 is 2 am Tuesday morning.
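A minimal sketch of how that could look in SQL -- the table StoreHours(StoreID, DayOfWeek, OpenMinute, CloseMinute) is assumed, with DayOfWeek running 1 = Monday to 7 = Sunday and CloseMinute allowed past 1440, so Monday 8 am - 2 am is the single row (StoreID, 1, 480, 1560):

DECLARE @DayOfWeek   tinyint  = 2,    -- Tuesday...
        @MinuteOfDay smallint = 60;   -- ...at 1:00 am local time

SELECT StoreID
FROM StoreHours
WHERE (DayOfWeek = @DayOfWeek                    -- open during today's own hours
       AND OpenMinute <= @MinuteOfDay
       AND @MinuteOfDay < CloseMinute)
   OR (DayOfWeek = CASE WHEN @DayOfWeek = 1 THEN 7 ELSE @DayOfWeek - 1 END
       AND OpenMinute <= @MinuteOfDay + 1440     -- still open from yesterday's late shift
       AND @MinuteOfDay + 1440 < CloseMinute);

Each day stays on one row, and the query only has to look at today's row and yesterday's.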
Rather than two records representing a single store's times for a day, it may be more object oriented to think of the "store day" as the object. That way 1 record = 1 store's times for a day. If you want to store the two sets of open/close times, just use four fields in the record instead of two--and adjust your queries appropriately.
Remember that your queries should use a library/api that you write and publish. The library will then deal with the data store and its data layout. No one but your library should be looking at the db directly.
Time zones are very important in this sort of app too. (Hopefully) at some point, the store chain will expand to cover more than one time zone. You'll then need to determine the local time of the query, which may not be the same as the time zone of the server handling the queries.
Further thoughts--
I now see that you're standardizing to GMT. Good. You could also use datetime values (vs time values) and standardize to a given week in time. E.g. an open period is Sun Jan 1, 1995 10 am - Mon Jan 2, 1995 2 am (using Jan 1, 1995 as a base since it was a Sunday).
Then rationalize your "current time and date" to match the same point in the week of Jan 1, 1995. Then query to find open store days.
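A sketch of that base-week idea, assuming a table OpeningPeriods(StoreID, OpenAt, CloseAt) holding datetimes inside the week of Sun Jan 1, 1995 (CloseAt may spill into Jan 8 for Saturday-night hours that end Sunday morning):

DECLARE @Now datetime = GETDATE();   -- in a real query, the caller's local time

-- Map "now" onto the same point within the base week
DECLARE @NowInBaseWeek datetime =
    DATEADD(MINUTE, DATEDIFF(MINUTE, '19950101', @Now) % (7 * 24 * 60), '19950101');

SELECT StoreID
FROM OpeningPeriods
WHERE (OpenAt <= @NowInBaseWeek AND @NowInBaseWeek < CloseAt)
   -- also catch periods that started late Saturday and run past the week boundary
   OR (OpenAt <= DATEADD(DAY, 7, @NowInBaseWeek)
       AND DATEADD(DAY, 7, @NowInBaseWeek) < CloseAt);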
HTH,
Larry
