I'm working on creating a database in Access for input of budget requests. What I want to do is create a form to allow users to input monthly budget forecast amounts that would be formatted like:
projectname ---month1amount --- month2amount --- month3amount ... for 12 months, then possibly yearly after that
The problem is that I don't know how to do this with my current table structure for the monthly information. It seems like a bad idea to create a table with hundreds of fields, one per period, but that is the only way I can think of to enter the data horizontally.
The main table looks like
tbl_Project
project_id
description
budget_group
phys_location
expected_start <- Date
expected_end <- Date
The monthly table looks like
tbl_monthly
project_id
monthly_id
period(yyyymm)
budget_amount
With that design, you won't be able to use data binding in Access. Leave your form unbound, and in the OnLoad event query tbl_monthly for all the records that should be displayed on the form. Loop through them and set the values of the controls. When it is time to save the changes, you will have to run multiple UPDATE statements. You will also need to implement your own controls to select which project you are editing.
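To show that the normalized tbl_monthly design still supports a horizontal view, here is a minimal sketch using SQLite from Python (Access itself would use a crosstab query or VBA; the table and column names come from the question, but the sample data and the pivot query are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_monthly (project_id INTEGER, monthly_id INTEGER,
                          period TEXT, budget_amount REAL);
INSERT INTO tbl_monthly VALUES
    (1, 1, '202401', 1000.0),
    (1, 2, '202402', 1500.0),
    (1, 3, '202403',  750.0);
""")

# Pivot the normalized rows into one horizontal row per project --
# the shape the unbound form would display, one column per period.
periods = ['2024%02d' % m for m in range(1, 13)]
cols = ", ".join(
    "SUM(CASE WHEN period = '%s' THEN budget_amount END) AS p%s" % (p, p)
    for p in periods)
row = conn.execute(
    "SELECT project_id, %s FROM tbl_monthly GROUP BY project_id" % cols
).fetchone()
print(row[:4])  # (1, 1000.0, 1500.0, 750.0)
```

Periods with no row come back as NULL, so the form can show them as blank; saving the form then reverses the mapping, one UPDATE or INSERT per filled-in month.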
I am creating an attendance management system. I am stuck at the point where I need to display an attendance report for all employees in a single table, with columns for name, days of the month, and total hours for the month. I have tried several approaches for creating the day columns but have not found a proper way to build this table, so please help me.
I want the final output to look like this:
I'm working with a "slowly changing fact table" that keeps track of changes to a reservation by stamping the effective start and effective end dates for which a change is effective. The latest change has an effective end date of 12/31/9999 23:59:59.999 to keep it effective until a new change comes in, if ever. I have a second table, an "As-Of" date table, that has every single date from 8/1/2008 to the current date with a timestamp of 23:59:59.999, which I can join back to the slowly changing fact where the as-of dates fall between the effective start and end dates in the fact. The purpose of this is to be able to go back to any specific date in time and see what the reservation looked like on that date. As you can imagine, each new as-of date multiplies the amount of data.
I've been tasked with creating an SSAS tabular model that has every "as-of" date available in a drop down for end users to select and be able to see data for that particular date. I'm concerned about storage and performance issues of having all as-of dates joined back to the fact table to provide end users the freedom to select any as-of date they want at any given time.
If I create a view that has the fact and as-of date table joined together, is it possible to pass the as-of dates they select in the drop down in SSAS back to a dynamic where clause in the view so that I am only joining the fact and as-of date tables dynamically for the as-of dates they actually need to see? Is it possible to create some type of "live" connection that only joins the fact and as-of date tables on the fly so I don't have to blow out the underlying data for each as-of date?
It seems as though the two tables will have to be joined in a SQL view before I bring it into SSAS since it doesn't seem possible to join two tables on more than one column in SSAS.
Can someone please tell me if this is something that is even technically possible? Or if you have any other ideas on the best way to represent this data in SSAS?
For some tables, you can use DirectQuery (live query) instead of Import.
DirectQuery Documentation
You can use a virtual relationship in your measure (where you can specify more than one column).
VirtualRelationship
example:
CALCULATE (
    <target_measure>,
    TREATAS (
        SUMMARIZE (
            <lookup_table>,
            <lookup_granularity_column_1>,
            <lookup_granularity_column_2>
        ),
        <target_granularity_column_1>,
        <target_granularity_column_2>
    )
)
There is a table which keeps the login information of users:
UserID LoginTime MacAddress IPAdress
1 2017-02-05 20:02:40 -- 192.168.10.3
This table has billions of records. We need to get the last login time of each user under various filters, for example within the last 6 months. The table also has to be joined with the Users table to retrieve user information, and filters on the Users table may be requested as well, for example:
WHERE UserName = 'xxxx' AND the last login time is within the last 6 months, plus any other filters.
I know that there are approaches like ROW_NUMBER() and queries like this:
SELECT MAX(LoginTime) AS [Last Login Time], UserID
FROM UsersLoginHistory
GROUP BY UserID;
But these approaches take a long time.
Can anyone suggest a better query (preferably one that supports OFFSET for paging) for this problem?
With the current data model you will need to read through the whole table anyway to retrieve all users and their last logins. To make this kind of report fast, you should pre-calculate it.
I can suggest one of the following approaches:
Store the last login time in the Users table. Your back-end should update this column in the same transaction that inserts into UsersLoginHistory.
Create an index on (UserID, LoginTime).
You can replicate the logic of #1 in the database itself (with an AFTER INSERT trigger, for example), but I do not recommend doing this, because business logic will eventually bloat your database.
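As a sketch of approach #1, here is a minimal example using SQLite from Python. The table and column names follow the question, but the helper function, sample data, and the added LastLoginTime column are hypothetical, and it assumes logins arrive in chronological order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Users (UserID INTEGER PRIMARY KEY, UserName TEXT,
                    LastLoginTime TEXT);
CREATE TABLE UsersLoginHistory (UserID INTEGER, LoginTime TEXT,
                                MacAddress TEXT, IPAddress TEXT);
CREATE INDEX ix_history ON UsersLoginHistory (UserID, LoginTime);
INSERT INTO Users (UserID, UserName) VALUES (1, 'alice'), (2, 'bob');
""")

def record_login(user_id, login_time, mac, ip):
    # Insert the history row and refresh the denormalized column in the
    # same transaction, so the two can never disagree.
    with conn:
        conn.execute(
            "INSERT INTO UsersLoginHistory VALUES (?, ?, ?, ?)",
            (user_id, login_time, mac, ip))
        conn.execute(
            "UPDATE Users SET LastLoginTime = ? WHERE UserID = ?",
            (login_time, user_id))

record_login(1, "2017-02-05 20:02:40", "--", "192.168.10.3")
record_login(1, "2017-03-01 09:15:00", "--", "192.168.10.3")
record_login(2, "2017-02-20 11:00:00", "--", "192.168.10.7")

# The report becomes a cheap point lookup instead of a scan + GROUP BY.
row = conn.execute(
    "SELECT LastLoginTime FROM Users WHERE UserName = 'alice'").fetchone()
print(row[0])  # 2017-03-01 09:15:00
```

If logins can arrive out of order, the UPDATE should compare against the stored value first; the (UserID, LoginTime) index from approach #2 still helps ad-hoc history queries either way.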
I'm working on an SSIS/SSAS project to build a BI solution.
One of my data sources contains information about a service desk.
A user can create a new request related to a service catalog (for example, because his laptop crashed).
This generates a new row in the Request table (creation date, comment, tracking number, etc.).
To resolve the request, a few actions will be performed, and these actions are recorded in the Action table (there is a one-to-many relationship between the Request and Action tables).
An action can be: "try to format computer", "change hard drive", etc.
In the production environment a request contains approximately 10 to 100 actions.
I'm facing a design problem because many columns of my fact table cannot be aggregated.
In fact there are many date columns, a tracking number (string), boolean values, and only a few SUM-able attributes.
Here is an extract of the dw model :
FactRequest :
ID (DW primary key)
Business Key (original PK)
Request number (string)
Begin date (datetime)
End date (datetime)
Max resolution date (datetime)
Time to solve request
Comment (string)
Delay (int)
...
FactAction :
ID (DW primary key)
Business Key (original PK)
Begin date (datetime)
End date (datetime)
Name (string)
Time to solve action
...
I know adding non-aggregatable data to a fact table is not the best solution.
In my SSAS project, I created a new cube based on my FactRequest table.
It works fine except for string attributes such as the request number.
Should I use an SSAS "fact dimension" to create a "Request" dimension based on my FactRequest table?
Any idea?
Thanks so much,
Sounds like you are lacking specific requirements (which is very common in BI projects). Is the textual data required to be displayed in the report at all? If yes: is it required only in some detail view?
Columns like ID, Business Key, and Request number typically have little value in your cube. This data is only interesting for detailed reports (e.g. listing all actions taken for a certain request ID), and such lists often require no aggregates. You do not need a cube for lists like that; you can query the database directly with SQL.
Only if you require a summary report (e.g. the average time taken to solve a request per weekday) could the cube make sense - and even then it may not be worth the effort of an SSAS database if you can get almost the same query response time with direct SQL queries.
I am working on a multi-property booking system and am struggling with the best-practice schema design. Assume the site hosts, for example, 5,000 properties, each maintained by one user. Each property has a booking calendar. My current implementation is a two-table design, with one table for the available dates and the other for the unavailable dates, at a granularity of one day each.
property_dates_available (property_id, date);
property_dates_booked (property_id, date);
However, I feel unsure whether this is a good solution. In another question I read about a single-table solution with both states represented, but I wonder if it is a good idea to mix them. Also, should the booking calendar map a full year of 365 days per property into the table, or would it be better to map only the days a property is available for booking? I'm worried about the number of rows increasing dramatically every year, and about later searching the database for available properties: scanning 5,000 * 365 rows seems much worse than, say, 5,000 * an average of 100 rows.
What would you generally recommend? Is this aspect negligible? What is the best-practice implementation?
I don't see why you need a separate table for available dates. If you have a table for booked dates (property_id, date), then you can easily query this table to find out which properties are available on a given date:
select properties.property_name
from properties
where not exists
    (select 1 from property_dates_booked
     where properties.property_id = property_dates_booked.property_id
       and property_dates_booked.date = :date)
:date being a parameter to the query.
Only enter actual bookings into the property_dates_booked table (it would be clearer to rename the table 'bookings'). If a property is not available on certain dates because of maintenance, then enter a booking for those dates with a 'special' customer (perhaps one with a negative id).
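To show the NOT EXISTS query working end to end, here is a minimal runnable sketch using SQLite from Python; the property names and dates are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE properties (property_id INTEGER PRIMARY KEY,
                         property_name TEXT);
CREATE TABLE property_dates_booked (property_id INTEGER, date TEXT);
INSERT INTO properties VALUES (1, 'Beach House'), (2, 'City Flat');
INSERT INTO property_dates_booked VALUES (1, '2024-07-01');
""")

# Properties with no booked row for the requested date are available.
available = conn.execute("""
    SELECT p.property_name
    FROM properties AS p
    WHERE NOT EXISTS (
        SELECT 1 FROM property_dates_booked AS b
        WHERE b.property_id = p.property_id
          AND b.date = :date)
""", {"date": "2024-07-01"}).fetchall()

print(available)  # [('City Flat',)]
```

Because only booked days are stored, the table grows with actual bookings rather than with 365 rows per property per year, which addresses the row-count concern directly.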