How to merge two Excel sheets - sql-server

I have an Excel document with 10,000 rows of data in two sheets. One of the sheets has the product costs, and the other has the category and other information. Both are imported automatically from SQL Server, so I don't want to move this to Access, but I still want to link the product codes so that when I merge the product tables into one table with product name and cost, I can be sure I'm getting the right information.
For example:
Code | Name     | Category
-----+----------+---------
1    | mouse    | OEM
4    | keyboard | OEM
2    | monitor  | screen

Code | Cost
-----+------
1    | 123
4    | 1234
2    | 1232
7    | 587
Let's say my two sheets have tables like these. As you can see, the second one has an entry that doesn't exist in the first; I put it there because in reality one sheet has a few more rows, so there is no perfect match. That means I can't just sort both tables A-Z and copy the costs across: there are more than 10,000 products in that database, and those extra entries would shift the costs and ruin the whole table.
So what would be a good way to pull the entry from the other sheet and insert it into the right row when merging? Linking the two tables by a field name? Checking a field and trying to match it against the other sheet? Anything at all.
Note: in Access I would create relationships, and when I ran a query it would match them automatically. I was wondering if there's a way to do that in Excel too.
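For reference, on the SQL Server side the merge I'm after would just be a join like this (a sketch; the table names Products and Costs are placeholders for my two exports):

SELECT p.Code, p.Name, p.Category, c.Cost
FROM Products AS p           -- placeholder: the name/category export
LEFT JOIN Costs AS c         -- placeholder: the cost export
    ON c.Code = p.Code;

A LEFT JOIN keeps every product even when the cost table has no matching code, so a missing match simply leaves the cost empty.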

Why not use a VLOOKUP? If there is a match, it will list the cost. Assuming the top table is on Sheet1 and the other on Sheet2, and both start in cell A1, you just need this in cell D2:
=VLOOKUP(A2,Sheet2!A:B,2,0)
You can then drag it down. The easiest way to fill all 10,000 rows is to hover over the bottom right corner of the cell; the cursor will turn from a white plus sign into a thin black one. Then simply double-click.

Just use VLOOKUP - you can add a column to your first sheet and fill in the cost by looking up the code in the other sheet.

Related

Is it possible to create a repeating table in an SSRS report based on data from a SQL database?

I have created a PowerApp which is used to audit schools, and the data saves to my SQL database. I have designed a report in SSRS to display the findings of the audit. The SQL table, shown below, stores the items in each room that were audited (desks, PCs, shelves, etc.) plus the name of the room and whether any actions need to take place. I need my report to display one table per room, with the items down the left-hand side and the name of the room as a title. This should be repeated for each room. There may be a different number of rooms in each report, so this will vary. I've included a screenshot of what the table needs to look like. When I create the table, I can only get the room names down the left-hand side in one table and the items across the top. Please help.
Too long for a comment so I'll have to reply here.
Your data is not in a format that is particularly suited to this. I can't see how you can determine 'Compliant' from the data you have shown in your screenshots, although it may be that you have not shown everything you have available.
However, I would start by looking into the T-SQL UNPIVOT operator to get your data into a more normalised format. Using UNPIVOT you could turn your data into something like this:
AuditID | Room | Item | Present
------------------------------------------
3019 | Reception | PC | True
3019 | Reception | Desks | True
3019 | Class 1 | PC | False
3019 | Class 1 | Desks | True
You can obviously extend this to include all pertinent data.
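As a sketch only (the source table dbo.AuditResults and the item columns PC and Desks are stand-ins; substitute your real names), the UNPIVOT could look like this:

SELECT AuditID, Room, Item, Present
FROM (
    SELECT AuditID, Room, PC, Desks      -- one column per audited item
    FROM dbo.AuditResults                -- stand-in for your audit table
) AS src
UNPIVOT (
    Present FOR Item IN (PC, Desks)      -- folds the item columns into rows
) AS unpvt;

Note that UNPIVOT requires the item columns to share the same data type.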
Once you have your data in this format, create a tablix with the 'Item' and 'Present' columns only. You will have a 'Details' row group at this point. Right-click the row group, add a parent group, and set this group to be grouped by Room.
This will give you the basic layout, from there you can add some padding or blank rows to the room group or even page breaks.
If you cannot get past the UNPIVOT step, then I suggest you post a new question specifically on that topic and return here once you have the data in the correct format.

Load a big table into the web browser using React with on-demand instantiation of table rows

I'm building an Excel-like table in the web browser with React.js, using only <div> elements rather than <table>.
There are about 90 columns and about 24,000 rows.
As we know, it is impossible to load all of that data into the HTML of a single page for performance reasons.
So I decided to show the user a partial view of the data via scrolling.
The main concept is simple: build HTML only near the user's viewport.
Say the user is viewing roughly the 1800th to 1900th rows in the viewport. I will load only about the 1750th ~ 1950th rows into HTML. If the user scrolls up, I'll load HTML for the 1700th ~ 1750th rows and remove the 1900th ~ 1950th ones.
I think I need to track the scroll offset manually to work out where the user is. If each row is 40px high and the viewport is 1000px high, the user sees 25 rows at a time, so I need to load about 25 (before) + 25 (currently visible) + 25 (after) rows; as the user scrolls up or down, I'll load additional rows and remove rows that are far away from the viewport.
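For the data fetch itself, assuming the REST API can pass the window down to the database and the rows have a stable sort order, I imagine retrieving a slice like 1750 ~ 1950 with something along these lines (a sketch; dbo.ProductRows and the ORDER BY columns are placeholders):

SELECT *
FROM dbo.ProductRows                 -- placeholder: flattened view of the grid rows
ORDER BY GroupNumber, Size           -- must be a stable, deterministic order
OFFSET 1750 ROWS
FETCH NEXT 200 ROWS ONLY;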
However, I found that my table's requirements don't match this situation. Here it is.
First, the rows are not all the same height. Basically my table shows a group of rows as a single logical row. What I mean is, a single table row can look like this:
| Photo| ProductName | Size Pool | Stock |
.... // Below are single row
+------+---------------+-------------------+------------+
| | Boots | 110-120 | 24 | // Row header (Shows Summary of child row)
+ +---------------+-------------------+------------+
| | Boots | 110 | 16 | // Row's row #1
+ +---------------+-------------------+------------+
| | Boots | 120 | 8 | // Row's row #2
+------+------------------------------------------------+
...
+------+---------------+-------------------+------------+
| |Leather Shoe | 120 | 8 | // Row can come with no header row, only single
+------+---------------+-------------------+------------+
...
As above, if a product has two or more options, they are merged into the sub-rows of a single logical row and shown with a summary header. If it is not an option product, only its own row is shown. And if the content inside a row is large, the row stretches to fit it.
All data comes from a remote database and is retrieved via a REST API.
The database schema looks like this, with two tables as an example:
Table #1 ProductInfo
+--------------+------------+------------+-----------+
| GroupNumber |ProductName | Size | Stock |
+--------------+------------+------------+-----------+
| 1 | Boots | 110 | 16 |
+--------------+------------+------------+-----------+
| 1 | Boots | 120 | 8 |
+--------------+------------+------------+-----------+
| 2 |Leather Shoe| 120 | 8 |
+--------------+------------+------------+-----------+
Table #2 GroupInfo
+-----------+------------+--------------+
|GroupNumber| SizePool | ImageURL |
+-----------+------------+--------------+
| 1 | 110-120 | https://abc |
+-----------+------------+--------------+
| 2 | 120 | https://def |
+-----------+------------+--------------+
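For reference, the flat rows behind the grouped display come from joining these two tables on GroupNumber, roughly like this (a sketch; syntax may need adjusting for my database):

SELECT g.GroupNumber, g.ImageURL, g.SizePool,
       p.ProductName, p.Size, p.Stock
FROM GroupInfo AS g
JOIN ProductInfo AS p
    ON p.GroupNumber = g.GroupNumber
ORDER BY g.GroupNumber, p.Size;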
The further requirements are below (most of them are already implemented):
Sort by each column, with multi-level sorting by sub-row OR by logical row (handled via SQL)
Filter data by expression (handled by the client)
Hiding, resizing, and reordering columns (handled by the client)
Interactive components inside cells such as a DatePicker, pop-ups, etc. (handled by the client)
I succeeded in building such a table with a page-based approach, but I need a scrolling-viewport table.
The table contains lots of derived columns such as sums and averages, which are not stored in the DB except for special reasons (like performance); most of them are handled by DB views or procedures, including sorting and other calculations. So overall performance is really important.
I have a few questions and candidate approaches. Can you take a look and give me some advice?
Q1. How can I decide when data should be loaded and removed, and how much?
Row heights are not consistent, so I don't think I can use the scroll offset or the row index as the measurement criterion. (Is there a predictable way to do it?)
Is it possible to achieve this by accessing DOM elements? I'm new to web dev, sorry.
Q2. I can get the data from the DB in two different ways:
Getting ProductInfo and GroupInfo separately: [<ProductInfo>, ...] and [<GroupInfo>, ...]
Getting a single group as an object like this: { group: <GroupInfo>, values: [<ProductInfo>, ...] }
Which is better for performance in this case, in typical situations?
Q3. If I get data like { group: <GroupInfo>, values: [<ProductInfo>, ...] }, are there any performance problems?
For example, query overhead: I need a query with six joins and SELECTs nested up to six levels deep, with 30 calculated columns, for a single data retrieval. A pre-calculated view or table can be problematic because many users read and update it frequently, so I need to worry about mutual exclusion, at least on updates.
I'm sure the above query's performance is sufficient for windowed retrieval if I fetch data like [<ProductInfo>, ...] and [<GroupInfo>, ...], but I think the latter (nested) shape is better, so I'd like to change the interface if possible.
Q4. If I fetch the whole dataset from the DB and structure it up front, and then only add and remove data in the DOM, would that be a good approach?
Of course, Q1 is my primary concern, but this also seems reasonable except for data sync with the DB (because another user can update values while the client holds outdated data).
I considered infinite scrolling, but it doesn't fit my case: I need to load and remove data at the same time, and infinite scrolling doesn't seem to support removing data from the viewport. Inconsistent row heights may also be a problem.
I found react-virtualized and it works.
It also supports dynamic row heights, which helped greatly.

Another way to build database structure

I have to optimize my little-big database because it's too slow; maybe we'll find another solution together.
First of all, let's talk about the data stored in the database. There are two kinds of objects: users and, let's say, messages.
Users
There is something like that:
+----+---------+-------+-----+
| id | user_id | login | etc |
+----+---------+-------+-----+
| 1 | 100001 | A | ....|
| 2 | 100002 | B | ....|
| 3 | 100003 | C | ....|
|... | ...... | ... | ....|
+----+---------+-------+-----+
There is no problem with this table. (Don't worry about id and user_id; user_id is used by another application, so it has to be here.)
Messages
The second table is where the problem is. Each user has messages, for example like this:
+----+---------+------+----+
| id | user_id | from | to |
+----+---------+------+----+
| 1 | 1 | aab | bbc|
| 2 | 2 | vfd | gfg|
| 3 | 1 | aab | bbc|
| 4 | 1 | fge | gfg|
| 5 | 3 | aab | gdf|
|... | ...... | ... | ...|
+----+---------+------+----+
There is no need to edit individual messages, but it should be possible to update the list of messages for a user. For example, an external service sends all of a user's messages to the DB, and the list has to be updated.
And the most important thing: there are about 30 million users, and the average user has 500+ messages. Another problem is that I have to search through the from field and count the number of matches. I wrote a simple SQL query with a join, but it takes too long to return the data.
So... it's quite a large amount of data. I decided not to use RDS (I was using PostgreSQL) and to move to databases like ClickHouse and so on.
However, I ran into the problem that ClickHouse, for example, doesn't support the UPDATE statement.
To resolve this issue, I decided to store all of a user's messages as one row. So the Messages table would look like this:
Here I'd like to store the messages in JSON format:
{"from":"aaa", "to":"bbe"}
{"from":"ret", "to":"fdd"}
{"from":"gfd", "to":"dgf"}
which would be collapsed into one row per user:
+----+---------+----------+------+
| id | user_id | messages | hash |   <= hash of the messages
+----+---------+----------+------+
I think that full-text search inside the messages column will save some time and resources.
Do you have any ideas? :)
Do you have any ideas? :)
In ClickHouse, the optimal way is to store data in a "big flat table".
So, you store every message in a separate row.
15 billion rows is OK for ClickHouse, even on a single node.
Also, it's reasonable to have each user's attributes directly in the messages table (pre-joined), so you don't need to do JOINs. This is suitable if the user attributes are not updated.
These attributes will have repeated values for each of a user's messages; that's OK because ClickHouse compresses data well, especially repeated values.
If user attributes are updated, consider storing the users table in a separate database and using the 'external dictionaries' feature to join it.
If a message is updated, just don't update it in place. Write another row with the modified message to the table instead and leave the old message as is.
It's important to have the right primary key for your table. You should use a table from the MergeTree family, which constantly reorders data by primary key and so keeps range queries efficient. The primary key is not required to be unique; for example, you could define the primary key as just (from) if you frequently filter on "from = ..." and those queries must be answered quickly.
Alternatively, you could use user_id as the primary key if queries by user id are frequent and must be processed as fast as possible, but then queries with a predicate on 'from' will scan the whole table (keep in mind that ClickHouse does full scans efficiently).
If you need fast lookups by many different attributes, you can simply duplicate the table with different primary keys. Typically the table compresses well enough that you can afford to keep the data in a few copies, each ordered differently for different range queries.
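A minimal sketch of such a table (the ts column and the exact types are assumptions to adapt to your data):

CREATE TABLE messages
(
    user_id UInt32,
    `from`  String,
    `to`    String,
    ts      DateTime             -- assumed ingestion/ordering column
)
ENGINE = MergeTree()
ORDER BY (user_id, ts);

A second copy of the same data with ORDER BY (`from`, ts) could then serve the "from = ..." lookups, as described above.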
First of all, when we have such a big dataset, from and to columns should be integers, if possible, as their comparison is faster.
Second, you should consider creating proper indexes. As each user has relatively few records (500 compared to 30M in total), it should give you a huge performance benefit.
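For example, if you stay on PostgreSQL, indexes along these lines are the usual starting point (a sketch; names are assumptions based on the table shown above):

CREATE INDEX idx_messages_user_id ON messages (user_id);
CREATE INDEX idx_messages_from    ON messages ("from");   -- "from" is a reserved word, hence the quotes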
If everything else fails, consider using partitions:
https://www.postgresql.org/docs/9.1/static/ddl-partitioning.html
In your case they would be dynamic and would hinder first-time inserts immensely, so I would consider them only as a last, if very efficient, resort.

Optimal View Design To Find Mismatches Between Two Sets of Data

A bit of background...my company utilizes a piece of software that stores information about a mortgage loan in independent fields. These fields are broken up across many tables in the loan database.
My current dilemma revolves around designing a view(s) that will allow me to find mismatched data on a subset of loans from the underwriting side of our software and the lock side of our software.
Here is a quick example of the data returned from the two views that already exist:
UW View
transID | DTIField | LTVField | MIField
50000 | 37.5 | 85.0 | 1
Lock View
transID | DTIField | LTVField | MIField
50000 | 42.0 | 85.0 | 0
In the above situation, the view should return the fields that are not matching (in this case the DTIField and the MIField). I have built a comparison view that uses a series of CASE statements to return either a 0 for not matched or a 1 for matched already:
transID | DTIField | LTVField | MIField
50000 | 0 | 1 | 0
This is fine in itself but it is creating a bit of an issue downstream on the reporting side. We want to be able to build a report that would display only those transIDs that have mismatched data and show which columns are not matched. Crystal Reports is the reporting solution in question.
Some specifics about the data sets: we have 27 items of the loan that we are comparing (so 54 fields in total). There are over 4,000 loans in the system and growing. There are already indexes on the transID fields.
How would you structure the view to return all the data needed for the report? We can do a good amount of work in Crystal Reports but ideally much of the logic would be handled in MSSQL.
Thanks for any assistance.
I think there should be no issue in comparing the 27 columns for a given row. Since you'll be reading each row just once and comparing its columns across the two views, it shouldn't really pose any performance problem. You can use a hash function such as HASHBYTES to assign a hash value to the combination of these 27 fields in both views, and then compare that value to decide which rows the view should return. This should give some performance improvement; testing will reveal more.
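A sketch of that idea, using just three of the 27 fields and hypothetical view names UWView and LockView:

SELECT u.transID
FROM UWView AS u
JOIN LockView AS l
    ON l.transID = u.transID
WHERE HASHBYTES('SHA2_256',
          CONCAT(u.DTIField, '|', u.LTVField, '|', u.MIField))
   <> HASHBYTES('SHA2_256',
          CONCAT(l.DTIField, '|', l.LTVField, '|', l.MIField));

The hash only tells you which rows differ; the existing CASE-based comparison view can still be joined in to show which columns are out of sync for those transIDs.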

How to store different types of address data in db?

I have to create a database combining four types of XLS files, let's call them A, B, C and D. Every year a new file of each type is created, starting from 2004. A has 7 sheets with 800-1000 rows each; B-D have one sheet with at most 200 rows.
Everyone knows that people are lazy, so in the Excel files the address data is stored differently in each sheet. One of them, from 2008, has the address data stored in separate fields, but every other sheet has this data combined into one column.
So, here is the question: how should I design the data table? Something like this?
+---------+----------+----------+-------------+--------------------------------+
| Street | House Nr | City | Postal Code | Combined Address |
+---------+----------+----------+-------------+--------------------------------+
| Street1 | 20 | Somwhere | 00-000 | null |
| Street2 | 98 | Elswhere | 99-999 | null |
| null | null | null | null | Somwhere 00-000, street3 29 |
| null | null | null | null | st. Street2 65 12-345 Elswhere |
+---------+----------+----------+-------------+--------------------------------+
There will be a lot of NULLs, so maybe the best solution would be two different tables?
The most important thing is that users will search using this data, and in the future they will add data to the database without Excel files.
There are at least two different angles here: normalization and efficiency, and they lead to different results.
Normalization
If this is the most important criterion, you would even make three tables. Obviously Combined Address needs a place of its own, but Postal Code and City also have to be stored in another table, because there is a dependency between them. Just one of the two, most probably Postal Code, would stay. (Yes, there is even something to be said about Street and Postal Code too, but I'm clearly not going to be pedantic.)
Efficiency
Normalization as an end in itself doesn't necessarily produce the best result. If you allow yourself to be a bit sloppy here and leave the model the way you posted it, coding may become easier. You could use a trigger to make sure Combined Address is never NULL, or use a (materialized) view that pretends it never is, and just search on Combined Address for the time being.
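A minimal sketch of that view, assuming a single addresses table shaped like the model above (the table and column names are placeholders):

CREATE VIEW searchable_addresses AS
SELECT id,
       COALESCE(combined_address,
                CONCAT(street, ' ', house_nr, ', ',
                       postal_code, ' ', city)) AS search_address
FROM addresses;

Searches can then go against search_address regardless of which form the original spreadsheet used.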
Imagine the effort if you use different tables and there is a need for referencing these addresses in other tables (Which table to use when? How to provide a unique id? Clearly a problem.).
So, decide on what's more important.
If I'm not mistaken we are talking about some 2,000 rows, or some 8,000 rows if it is actually '7 sheets with 800-1000 rows each'. Even if the latter applies, this isn't a number that makes data correction impracticable. If the number of different input patterns in the combined column is low, you might be able to do this (partly) automatically and just have someone proofread it.
So you might want to think about a future redesign as well and choose what's more convenient in this case.
