I need some help with the following:
I am setting up a booking system (similar to hotel booking) and I have inserted a check-in date and a check-out date into the database. How do I check whether a room is already booked?
I have no clue how to manage the already-booked days in the database. Can anyone give me a clue how this works, and maybe tell me where I can find more information?
I didn't entirely understand your question, but my suggestion is to add a state field in which you track the current state of the booked item, something like:
Available
Under Maintenance
Occupied
Or whatever bunch of states that work for you.
[EDIT]
The analysis I usually do for that case is as follows. Suppose your room is currently booked for this date range:
Init Date: Feb 8
End Date: Feb 14
Successful booking examples:
Init Date: Feb 2
End Date: Feb 6
Init Date: Feb 15
End Date: Feb 24
You should check that the booking attempt satisfies these conditions:
Neither the booking's init date nor its end date may fall inside an already-booked range.
Example:
Init Date: Feb 2
End Date: Feb 10 (Inside the current range (Feb 8 to 14))
Init Date: Feb 12 (Inside the current range (Feb 8 to 14))
End Date: Feb 27
if "Booking Init Date" is less than current init date, "Booking End Date" should also be less than current init date
Init Date: Feb 2.
End Date: Feb 27 (Init date before, but end date later)
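The conditions above collapse into the standard interval-overlap test: a requested stay conflicts with an existing booking exactly when it starts before the existing booking ends and ends after the existing booking starts. A minimal sketch in Python (the dates and the exclusive check-out convention are illustrative):

```python
from datetime import date

def overlaps(new_start, new_end, booked_start, booked_end):
    """True when the requested range conflicts with an existing booking.

    Treats the check-out day as exclusive, so a booking ending Feb 14
    does not conflict with one starting Feb 14."""
    return new_start < booked_end and new_end > booked_start

booked = (date(2024, 2, 8), date(2024, 2, 14))

# Fits entirely before the existing booking: no conflict
assert not overlaps(date(2024, 2, 2), date(2024, 2, 6), *booked)
# Starts before but ends inside the booked range: conflict
assert overlaps(date(2024, 2, 2), date(2024, 2, 10), *booked)
# Surrounds the booked range completely: conflict
assert overlaps(date(2024, 2, 2), date(2024, 2, 27), *booked)
```

In SQL this becomes a single WHERE clause over the bookings table, which covers all three of the cases listed above without enumerating them.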
This is an interesting question, not least because I don't believe there is a single ideal answer: it depends to some extent on the nature of the "hotel", on the way people are placed in rooms, and further on the scale of the operation.
At the most basic level you have two ways that you can track occupancy of rooms:
You can have a table with one row per room per day that defines the state of that room on that date (and possibly the related booking if occupied)
For each room you maintain a list of "bookings" (which as already suggested will have to include states for when a room is unavailable for maintenance).
The reason it's an interesting question is that both approaches immediately present certain challenges when maintaining the data and searching for occupancy. In the former you're holding a lot of data that may never be needed; in the latter, finding gaps in the occupancy for new bookings is the harder part.
You then (in both cases) have a further problem of resource allocation: if your bookings tend to be for two days and your system leaves one-day gaps between bookings, you're going to need to re-arrange the occupancy to optimise usage... but then you have to be careful with bookings that need to be explicitly tied to specific rooms.
Ideally you would defer assigning a booking to a room for as long as possible (which is why it depends on the hotel: fine for 400 modular rooms, rather less so for a dozen unique ones). So long as there are sufficient rooms of the necessary standard available during the target period (or, inverting the test, so long as fewer rooms are booked than physically exist), you can take the booking. Of course you've still got to represent the state of the individual rooms, so this is in addition to the data you already have to manage.
All of which is what makes it interesting: there is considerable depth to the problem, and you need a fairly decent understanding of the specific situation to produce an appropriate solution.
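The inverted availability test ("fewer rooms booked than real rooms") can be sketched as a sweep over booking start/end events within the requested period. Day numbers and the check-out-exclusive convention here are illustrative assumptions:

```python
def can_accept(bookings, room_count, start, end):
    """Accept a new booking for [start, end) if at least one of the
    room_count interchangeable rooms is free on every day of that range."""
    events = []
    for s, e in bookings:
        s, e = max(s, start), min(e, end)   # clip to the requested range
        if s < e:
            events.append((s, +1))          # booking starts occupying a room
            events.append((e, -1))          # ...and releases it at check-out
    events.sort()                           # (day, -1) sorts before (day, +1)
    occupied = 0
    for _, delta in events:
        occupied += delta
        if occupied >= room_count:
            return False                    # no free room on this day
    return True

# Two rooms, two existing bookings (day numbers, check-out exclusive)
existing = [(1, 5), (3, 7)]
assert not can_accept(existing, 2, 4, 6)   # day 4: both rooms taken
assert can_accept(existing, 2, 5, 6)       # day 5: first booking has ended
```

Because the check never names a specific room, the actual room assignment can be deferred until check-in, as suggested above.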
I came to the booking problem from the perspective of avoiding a highly populated table, given that the inventory is thousands rather than hundreds of resources. I considered three designs:
Solution one: store bookings as intervals
Solution two: populate slots only when they are occupied, using the smallest unit (one day)
Solution three: generate slots in advance for each resource and manage their status
Solution one has the smallest footprint, but since you cannot tell whether a searched range intersects an existing interval without looking, you have to read and compare the whole table.
Solution two solves this problem: you can search a specific time frame only, but the table contains more rows. However, since empty slots are not written anywhere, high vacancy keeps the table small.
Another advantage is that old bookings can be moved off to a separate archive table.
Solution three increases the table to its maximum size (one row per resource per smallest slot over the whole time range), with the rows generated in advance. The only advantage I can think of is the low cost of finding empty slots with a simple select.
However, generating the slots in advance looks like a terrible idea to me.
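Solution two reduces to a keyed lookup: a record exists only for occupied days, so both the conflict check and the write touch just the days in the requested range. A sketch (resource names and integer day units are illustrative):

```python
def book(slots, resource, start_day, end_day):
    """Solution two: one record per occupied day, written only when booked.
    `slots` maps (resource, day) -> booking marker; empty days cost nothing."""
    days = range(start_day, end_day)        # end_day exclusive (check-out)
    if any((resource, d) in slots for d in days):
        return False                        # at least one day already taken
    for d in days:
        slots[(resource, d)] = True
    return True

slots = {}
assert book(slots, "room-101", 8, 14)       # Feb 8-14 example from above
assert not book(slots, "room-101", 12, 16)  # overlaps days 12-13
assert book(slots, "room-101", 14, 16)      # starts where the first ends
```

In a database, `(resource, day)` becomes the primary key, so the same check is an indexed range lookup rather than a full-table scan.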
Related
In Salesforce, how do I create a formula that calculates the highest figure for last month? For example, I have an object that keeps records created in September, and I would now like to calculate the max value from last month (August); in this case it should be 20, recorded on 3/8/2019. If we're in July, then it needs to calculate for June. How do I construct the right formula expression? Thanks very much!
Date        Value
1/9/2019    10
1/8/2019    14
2/8/2019    15
3/8/2019    20
....
30/8/2019   15
You can't do this with normal formulas on records, because they "see" only the current record (and some related via lookup), not other rows in the same table.
You could make another object called "accounting periods" or something like that. Link all these entries to the periods (months) in a master-detail relationship. You'll then be able to use a rollup summary with MAX(). Still not great, because you need a lookup to the previous month to pull it, but it should give you an idea.
You could build a report that achieves something like that; PREVGROUPVAL will let you do some amazing & scary stuff: https://trailhead.salesforce.com/en/content/learn/projects/rd-summary-formulas/rd-compare-groups Then, if all you need is a report, great. If you really need it saved somewhere, you could look into reporting snapshots and save the results in a helper object...
If you want to do it without data model changes like that master-detail or helper object, you could also write some code: a nightly batch job (running daily? only on the 1st day of the month?) should be pretty simple.
Without code, in a pinch you could make a Flow that queries records from the previous month. It's a bit expensive to run such a thing for August every time you add a September record, but if you've discarded the other options...
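Whichever mechanism runs it, the calculation itself is just "max Value among records created in the previous calendar month". A plain-Python sketch of the logic the batch job or Flow would implement (in Apex this would be a SOQL MAX() aggregate; the record shape follows the sample table above):

```python
from datetime import date

def previous_month_max(records, today):
    """Max value among records created in the month before `today`.
    `records` is a list of (created_date, value); returns None if empty."""
    if today.month == 1:
        prev_year, prev_month = today.year - 1, 12
    else:
        prev_year, prev_month = today.year, today.month - 1
    values = [v for d, v in records
              if d.year == prev_year and d.month == prev_month]
    return max(values) if values else None

records = [(date(2019, 8, 1), 14), (date(2019, 8, 2), 15),
           (date(2019, 8, 3), 20), (date(2019, 8, 30), 15),
           (date(2019, 9, 1), 10)]
# Running in September picks the August max
assert previous_month_max(records, date(2019, 9, 15)) == 20
```

The January branch matters: "last month" from January must roll back to December of the previous year.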
I have data generated on a daily basis. Let me explain through an example: on the world market, the price of gold changes at intervals of seconds, and I want to store that price in a Redis DBMS.
Gold    22 JAN 11.02PM  X-price
        22 JAN 11.03PM  Y-price
        ...
        24 DEC 11.04PM  X1-price
Silver  22 JAN 11.02PM  M-price
        22 JAN 11.03PM  N-price
I want to store this data on a daily basis and apply ML (machine learning) to the last 52 weeks of data. Is this possible? As far as my knowledge goes, Redis works on key-value pairs.
If this is possible, can I get data for a specific date (04 July) and for a date range (01 Feb to 31 Mar)?
In Redis, a Sorted Set is appropriate for time-series data. If you score each entry with the timestamp of the price quote, you can quickly access a whole day or a group of days using the ZRANGEBYSCORE command (or ZSCAN if you need paging).
The quote data can be stored right in the sorted set. If you do this, make sure each entry is unique: adding a record to a sorted set that is identical to an existing one just updates the existing record's score (timestamp). That moves the old record to the present and erases it from the past, which is not what you want.
I would recommend storing only a unique key/ID for each quote in the sorted set, and keeping the data in its own key or hash field. This allows you to create additional indexes for the data as needed and to access specific records more easily if necessary.
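The recommended layout (timestamp-scored IDs in a sorted set, quote payloads in a separate hash) can be sketched as follows. This is plain Python standing in for the Redis commands (the comments name the equivalent command: ZADD, ZRANGEBYSCORE, HSET), so the data flow is testable without a server; the key names are illustrative:

```python
import bisect

class QuoteIndex:
    """Mimics the suggested Redis layout: a sorted set scored by
    timestamp holding only unique quote IDs, plus a hash for the data."""

    def __init__(self):
        self.zset = []   # sorted (timestamp, quote_id) pairs  ~ ZADD
        self.data = {}   # quote_id -> payload                 ~ HSET

    def add(self, quote_id, timestamp, payload):
        bisect.insort(self.zset, (timestamp, quote_id))
        self.data[quote_id] = payload

    def range_by_score(self, lo, hi):
        """~ ZRANGEBYSCORE lo hi, then fetch each payload from the hash."""
        start = bisect.bisect_left(self.zset, (lo,))
        end = bisect.bisect_right(self.zset, (hi, chr(0x10FFFF)))
        return [self.data[qid] for _, qid in self.zset[start:end]]

idx = QuoteIndex()
idx.add("gold:1", 100, ("Gold", 1500.0))
idx.add("gold:2", 160, ("Gold", 1502.5))
idx.add("silver:1", 100, ("Silver", 17.1))
# A date-range query is just a score-range query on the timestamps
assert idx.range_by_score(100, 150) == [("Gold", 1500.0), ("Silver", 17.1)]
```

With real Unix timestamps as scores, "a specific date" and "a date range" both become a single score-range query, which answers the question above.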
I am having trouble working out the logic for this little scenario. Basically I have a data set stored by week of the year, and each week the previous week's data is deleted from it. What I need to do is copy the previous week's data before it's removed and then add it back afterwards. So, for example, if today is in week 33, I need to save week 33's data and then add it back in next week. Then next week I need to take week 34 and save that, to add in during week 35. A picture explains better than a thousand words, so here it is.
As you can see, I need the minimum week from the data set before I add the previous week's data. The real issue I'm finding is that the data set can be rerun more than once each week, so I would need to keep the temp data set until the next week while extracting the minimum week's data.
It's more the logic I'm after here... Hope it makes sense, and thanks in advance.
QVDs are the way forward! Although maybe not as another (very good) answer states.
// Load of data from the source system
Test:
LOAD *,
     Today() AS RunDate
FROM SourceData;

// Load of historical data from the QVD
// (same field list, so it concatenates into Test)
Concatenate (Test)
LOAD *
FROM Test.qvd (qvd);

// Store the combined load back into the QVD
STORE Test INTO Test.qvd (qvd);
This way you only have one QVD of data that continually expands.
Some warnings:
Bear in mind that the report runs multiple times a week, so you will need to cater for duplication in the data load.
QVD files aren't encrypted, so store them somewhere safe.
When loading from a QVD and then overwriting it, if something goes wrong (the load fails) you will need to recover your QVD, so make sure your backup solution is up to the task.
I also added the RunDate field so that the data is easier to take apart when reviewing; it gives you the same split as storing separate QVDs would.
Sounds like you should store the data out into weekly QVD files as part of an Extract process and then load the resulting files in.
The logic would be something like the below...
First run (week 34 for week 33 data):
Get data for previous week
Store into file correctly dated - e.g. 2016-33 for week 33 of 2016
Drop this table
Load all QVDs (in this case just 1)
Next week run (week 35 for week 33 & 34 data):
Get data for previous week
Store into file correctly dated - e.g. 2016-34 for week 34 of 2016
Drop this table
Load all QVDs (in this case 2)
Repeat run in the same week (week 35 again, for week 33 & 34 data):
Get data for previous week
Store into file correctly dated - e.g. 2016-34 for week 34 of 2016 (this time overwrite it)
Drop this table
Load all QVDs (in this case 2)
Sensible file naming solves the problem, but if you really need to inspect the data to check the week number, you would first have to load all existing QVDs, query the minimum week number, and take it from there.
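The run/rerun flow above can be sketched with a dict standing in for the QVD folder; rerunning in the same week simply overwrites that week's file, which is what makes multiple runs per week safe (the 2016-NN names follow the example, the fetch callback is illustrative):

```python
def run_extract(qvd_store, current_week, fetch_previous_week):
    """One scheduled run: store last week's data under a dated name,
    overwriting any earlier run from the same week, then reload all."""
    week = current_week - 1
    qvd_store[f"2016-{week}"] = fetch_previous_week()  # e.g. 2016-33.qvd
    # "Load all QVDs": concatenate every stored week back into the model
    return [row for name in sorted(qvd_store) for row in qvd_store[name]]

store = {}
# Week 34 run captures week 33
assert run_extract(store, 34, lambda: ["w33-row"]) == ["w33-row"]
# Week 35 run adds week 34
assert run_extract(store, 35, lambda: ["w34-row"]) == ["w33-row", "w34-row"]
# A rerun in week 35 overwrites 2016-34, so nothing is duplicated
assert run_extract(store, 35, lambda: ["w34-row"]) == ["w33-row", "w34-row"]
```

This is the property the answer relies on: the file name, not the data, identifies the week, so reruns are idempotent.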
I am currently working on a requirement as follows and would appreciate some help figuring out a way to configure the aggregation of my measure.
I have a fact table with the following columns: Item ID, DateID, StoreID, ReceivedComments. The way ReceivedComments works is that on a daily basis a new record is created that adds to the running total of received comments (for example, if Item 5 in Store 5 on 1 Jan had 23 received comments and it received 5 more the following day, the row for 2 Jan would be Item 5, Store 5, Jan 2, 28).
We created a measure using MAX and it works fine whenever Item ID is used in the query. When we move to a higher level, the MAX produces wrong results. Our requirement is to set up the measure as follows:
If the member selected is at the Item level, take the MAX; at any other level (Date or Store), the measure should aggregate the MAX of all items under that date or store.
Due to the business rules and the structure of the database, Store and Item are different dimensions, so I cannot include them in one hierarchy.
We have been playing around with custom rollups but so far haven't been able to get them to work.
Thanks
I would solve this with a more traditional approach to your fact table: instead of keeping a cumulative count in the ReceivedComments column, keep only the number of comments received that day.
That way, instead of using MAX, you can create your measure using SUM, and it will automatically rollup when you go to higher levels.
The only disadvantage I can see to this approach is that you will need to use a range of dates, instead of only the most recent date, to get the full total of comments for a given item/store/date. But that's a very small change to your MDX.
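The switch from a running total to per-day counts is a one-pass transform on the fact rows: subtract each (item, store) pair's previous total from its current one. A sketch (column order follows the question's example rows):

```python
def to_daily(cumulative):
    """Convert running totals per (item, store) into per-day deltas so a
    SUM measure rolls up correctly at any level of the cube."""
    daily = []
    prev = {}
    for item, store, day, total in cumulative:  # assumed sorted by day
        key = (item, store)
        daily.append((item, store, day, total - prev.get(key, 0)))
        prev[key] = total
    return daily

rows = [(5, 5, "Jan 1", 23), (5, 5, "Jan 2", 28)]
# The Jan 2 row becomes the 5 comments actually received that day
assert to_daily(rows) == [(5, 5, "Jan 1", 23), (5, 5, "Jan 2", 5)]
# SUM of the deltas over the date range reproduces the running total
assert sum(d for *_, d in to_daily(rows)) == 28
```

This is exactly why SUM then aggregates correctly across Store and Date: additive facts roll up for free, whereas a cumulative column does not.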
Someone suggested using IsLeaf to determine the level. Instead of IsLeaf I went with CASE WHEN [Item].[ItemID].CURRENTMEMBER.LEVEL IS [Item].[ItemID].[(All)], so I don't have to account for other dimensions such as Date and Store, as I have several other dimensions that all behave the same way.
I then used this formula to take the sum of the max of the items in a particular store:
SUM({[Item].[Item ID].Children}, [Measures].[ReceivedComments])
I expect some performance issues with this measure, but we are currently running tests to see whether it will be reliable on actual data.
I'm writing an application that indexes data for our stores, some of which are open late (8 am - 2 am). We need to be able to search this database quickly -- basically, to run a query to find which stores are open at a given point in time (now, Sunday at 1 am, whatever).
In addition, the open/close times can vary day-by-day -- some stores are closed on Sundays, for example.
The obvious solution to me would be a table with a row per store ID, day, open time, and close time. For something like Monday, 8 am - 2 am, that would actually be two rows: one for Monday 0800-2400 and one for Tuesday 0000-0200.
We have a lot of stores, so the search has to perform well (basically, the data has to be index-friendly), but I'll also have to display this data in a human-readable format. With my current solution, that would look something like this:
Monday: 8:00 - Midnight
Tuesday: Midnight - 2:00 am; 8:00 am - Midnight
I'm just wondering if anybody else has alternative solutions before I jump right to an implementation. Thanks!
When PBS (the US Public Broadcasting Service) faced this same problem a couple of years ago, they invented the idea of the "30-hour day", where 00:00 is midnight at the start of the day, 24:00 is midnight at the end of the day, 25:00 is 1 am the next day, and 30:00 is 6 am the next day. That way a Monday closing time of 26:00 is 2 am Tuesday morning.
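Converting a 30-hour-day time back to a conventional weekday and clock time is a small arithmetic step. This sketch uses HHMM integers and day indexes 0-6 with Monday = 0, both of which are assumed conventions for illustration:

```python
def from_30h(day_index, hhmm):
    """Convert a '30-hour day' time such as 26:00 on Monday (day 0)
    into a conventional (day, hour, minute) tuple."""
    h, m = divmod(hhmm, 100)          # 2600 -> hour 26, minute 0
    return ((day_index + h // 24) % 7, h % 24, m)

assert from_30h(0, 2600) == (1, 2, 0)   # Mon 26:00 -> Tue 02:00
assert from_30h(0, 800) == (0, 8, 0)    # Mon 08:00 stays Monday
assert from_30h(6, 2500) == (0, 1, 0)   # Sun 25:00 wraps to Mon 01:00
```

The payoff is on the query side: "open at 1 am Tuesday" can be asked as "Monday 25:00", so each store day stays a single row with open <= t < close and no midnight split.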
Rather than two records representing a single store's times for a day, it may be more object-oriented to think of the "store day" as the object. That way one record = one store's times for one day. If you need to store two sets of open/close times, just use four fields in the record instead of two, and adjust your queries appropriately.
Remember that your queries should use a library/api that you write and publish. The library will then deal with the data store and its data layout. No one but your library should be looking at the db directly.
Time zones are very important in this sort of app, too. (Hopefully) at some point the store chain will expand to cover more than one time zone, and you'll then need to determine the local time of the query, which may not be the same as the time zone of the server handling the queries.
Further thoughts--
I now see that you're standardizing on GMT. Good. You could also use datetime values (rather than time values) and standardize on a given week in time, e.g. an open time of Sun Jan 1, 1995 10 am - Mon Jan 2, 1995 2 am (using Jan 1, 1995 as a base since it was a Sunday).
Then rationalize your "current time and date" to the matching point in the week of Jan 1, 1995 and query to find the open store days.
HTH,
Larry