How are ABAP data clusters stored in the database?

It's possible to store data clusters in the database using the IMPORT and EXPORT statements, together with a dictionary table that adheres to a template (it must at least have the fields MANDT, RELID, SRTFD, SRTF2, CLUSTR and CLUSTD).
Here are two example statements which store/retrieve an entire internal table ta_test as a data cluster named testtab with id TEST in the database, using dictionary table ztest and area AA:
export testtab = ta_test to database ztest(AA) id 'TEST'.
import testtab = ta_test from database ztest(AA) id 'TEST'.
Looking at the contents of the ztest table, I see the following records (first 4 fields are the primary key):
MANDT: 200
RELID: AA
SRTFD: TEST
SRTF2: 0 (auto-incremented for each record)
CLUSTR: integer value with a maximum of 2,886
CLUSTD: a 128-character hexadecimal string
I've also noticed that the amount of data stored this way is a lot less than the data that was inside the internal table (for instance, 1,000,000 unique records in the internal table result in only 1,703 records in the ztest table). Setting compression off on the EXPORT statement does increase the number of records, but it's still a lot less.
My questions: does anyone know how this works exactly? Is the actual data stored elsewhere and does ztest contain pointers to it? Compression? Encryption? Is the actual data accessible directly from the database (skipping the ABAP layer)?

The internal format of data clusters is not documented (at least not in an official documentation). From my experience, it does contain the entire data and not just pointers: Transporting the table entries to a different system -- as you do frequently when transporting ALV list layouts -- is sufficient to move the contents over. Moreover, the binary blob does not seem to contain much information about the data structure - if you change the source/target structure in an incompatible way, you risk losing data. Direct access from the database layer is not possible (this is actually stated in numerous places all over the documentation). It might be possible to reverse-engineer the marshalling/unmarshalling algorithm, but why bother when you've got the language statement to access the contents at hand?

Related

How does SQLite DB save data of multiple tables in a single file?

I am working on a project to create a simplified version of SQLite Database. I got stuck when trying to figure out how it manages to store data of multiple tables with different schemas in a single file. I suppose it should be using some indexes to map the data of different tables. Can someone shed more light on how it's actually done? Thanks.
Edit: I suppose there is already an explanation in the docs, but I'm looking for an easier way to understand it better and faster.
The schema is the list of all entities (tables, views, etc.) in the database as a whole, rather than the database consisting of many schemas on a per-entity basis.
The data itself is stored in pages, each page being owned by an entity. It is these pages that are saved to the file.
The default page size is 4K. You will notice that the file size will always be a multiple of 4K. You could also experiment: create a database with some tables, note its size, then add some data, and if the added data does not require another page, see that the size of the file stays the same. This demonstrates how it's all about pages rather than a linear/contiguous stream of data.
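A quick way to see this for yourself (a minimal sketch; the file and table names are placeholders): SQLite exposes the page size and the page count via PRAGMAs, and their product equals the file size on disk.

-- open or create a database, e.g. sqlite3 test.db
PRAGMA page_size;    -- typically 4096
PRAGMA page_count;   -- number of pages currently in the file
-- file size on disk = page_size * page_count
CREATE TABLE t1 (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO t1 (name) VALUES ('a'), ('b');
PRAGMA page_count;   -- grows only when the new rows actually need another page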
The schema itself is saved in a table called sqlite_master. This table has the following columns:
type (the type of entity, e.g. table, index, view or trigger),
name (the name given to the entity),
tbl_name (the table to which the entity applies),
rootpage (the page number of the entity's root b-tree page),
sql (the SQL used to generate the entity, if any)
Note that another schema table, sqlite_temp_master, may also exist if there are temporary tables.
For example, running SELECT * FROM sqlite_master; lists one row per entity in the database. (See also section 2.6, "Storage Of The SQL Database Schema", in the SQLite file format documentation.)
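To make that concrete, here is a small sketch (the table, index and rootpage values are illustrative only, not taken from the original post):

CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT);
CREATE INDEX idx_contacts_name ON contacts (name);
SELECT type, name, tbl_name, rootpage FROM sqlite_master;
-- type   name               tbl_name  rootpage
-- table  contacts           contacts  2
-- index  idx_contacts_name  contacts  3
-- (rootpage values will vary; the sql column is omitted here for brevity)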

Does the number of fields in a table affect performance even if not referenced?

I'm reading and parsing CSV files into a SQL Server 2008 database. This process uses a generic CSV parser for all files.
The CSV parser is placing the parsed fields into a generic field import table (F001 VARCHAR(MAX) NULL, F002 VARCHAR(MAX) NULL, Fnnn ...) which another process then moves into real tables using SQL code that knows which parsed field (Fnnn) goes to which field in the destination table. So once in the table, only the fields that are being copied are referenced. Some of the files can get quite large (a million rows).
The question is: does the number of fields in a table significantly affect performance or memory usage? Even if most of the fields are not referenced. The only operations performed on the field import tables are an INSERT and then a SELECT to move the data into another table, there aren't any JOINs or WHEREs on the field data.
Currently, I have three field import tables, one with 20 fields, one with 50 fields and one with 100 fields (this being the max number of fields I've encountered so far). There is currently logic to use the smallest file possible.
I'd like to make this process more generic, and have a single table of 1000 fields (I'm aware of the 1024 columns limit). And yes, some of the planned files to be processed (from 3rd parties) will be in the 900-1000 field range.
For most files, there will be less than 50 fields.
At this point, dealing with the existing three field import tables (plus planned tables for more fields (200, 500, 1000?)) is becoming a logistical nightmare in the code, and dealing with a single table would resolve a lot of issues, provided I don't give up much performance.
First, to answer the question as stated:
Does the number of fields in a table affect performance even if not referenced?
If the fields are fixed-length (*INT, *MONEY, DATE/TIME/DATETIME/etc, UNIQUEIDENTIFIER, etc) AND the field is not marked as SPARSE or Compression hasn't been enabled (both started in SQL Server 2008), then the full size of the field is taken up (even if NULL) and this does affect performance, even if the fields are not in the SELECT list.
If the fields are variable length and NULL (or empty), then they just take up a small amount of space in the Page Header.
Regarding space in general, is this table a heap (no clustered index) or clustered? And how are you clearing the table out for each new import? If it is a heap and you are just doing a DELETE, then it might not be getting rid of all of the unused pages. You would know if there is a problem by seeing space taken up even with 0 rows when doing sp_spaceused. Suggestions 2 and 3 below would naturally not have such a problem.
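For illustration, a minimal sketch (dbo.FieldImport100 is a placeholder name for one of the staging tables) comparing the space reported before and after clearing a heap:

EXEC sp_spaceused N'dbo.FieldImport100';   -- reserved / data / unused space

DELETE FROM dbo.FieldImport100;            -- on a heap, empty pages may stay allocated
EXEC sp_spaceused N'dbo.FieldImport100';   -- reserved space can still be high at 0 rows

TRUNCATE TABLE dbo.FieldImport100;         -- deallocates the pages
EXEC sp_spaceused N'dbo.FieldImport100';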
Now, some ideas:
Have you considered using SSIS to handle this dynamically?
Since you seem to have a single-threaded process, why not create a global temporary table at the start of the process each time? Or drop and recreate a real table in tempdb? Either way, if you know the destination, you can even dynamically create this import table with the destination field names and datatypes. Even if the CSV importer doesn't know the destination, at the beginning of the process you can call a proc that does know it and have it create the "temp" table; the importer can then still generically import into a standard table name with no fields specified, and it won't error as long as the fields in the table are NULLable and there are at least as many of them as there are columns in the file.
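A minimal sketch of that idea (all object names are placeholders; a real proc would take the destination table name as a parameter):

-- Build and create a per-run staging table using the destination's column names,
-- with every field NULLable varchar(max) so the generic importer cannot fail.
DECLARE @sql nvarchar(max);

SELECT @sql = N'CREATE TABLE ##CsvImport ('
    + STUFF((SELECT N', ' + QUOTENAME(c.name) + N' varchar(max) NULL'
             FROM sys.columns AS c
             WHERE c.object_id = OBJECT_ID(N'dbo.DestinationTable')
             ORDER BY c.column_id
             FOR XML PATH('')), 1, 2, N'')
    + N');';

EXEC (@sql);
-- The CSV importer can now bulk-load into ##CsvImport without knowing the schema.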
Does the incoming CSV data have embedded returns, quotes, and/or delimiters? Do you manipulate the data between the staging table and destination table? It might be possible to dynamically import directly into the destination table, with proper datatypes, but no in-transit manipulation. Another option is doing this in SQLCLR. You can write a stored procedure to open a file and spit out the split fields while doing an INSERT INTO...EXEC. Or, if you don't want to write your own, take a look at the SQL# SQLCLR library, specifically the File_SplitIntoFields stored procedure. This proc is only available in the Full / paid-for version, and I am the creator of SQL#, but it does seem ideally suited to this situation.
Given that:
all fields import as text
destination field names and types are known
number of fields differs between destination tables
what about having a single XML field and importing each line as a single-level document with each field being <F001>, <F002>, etc? By doing this you wouldn't have to worry about number of fields or have any fields that are unused. And in fact, since the destination field names are known to the process, you could even use those names to name the elements in the XML document for each row. So the rows could look like:
ID LoadFileID ImportLine
1 1 <row><FirstName>Bob</FirstName><LastName>Villa</LastName></row>
2 1 <row><Number>555-555-5555</Number><Type>Cell</Type></row>
Yes, the data itself will take up more space than the current VARCHAR(MAX) fields, both due to XML being double-byte and the inherent bulkiness of the element tags to begin with. But then you aren't locked into any physical structure. And just looking at the data will be easier to identify issues since you will be looking at real field names instead of F001, F002, etc.
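If you went that route, getting the data back out on the destination side could look something like this (a sketch only; the staging table name is made up, and the element names follow the example rows above):

INSERT INTO dbo.Contacts (FirstName, LastName)
SELECT
    n.x.value('(FirstName/text())[1]', 'varchar(50)') AS FirstName,
    n.x.value('(LastName/text())[1]',  'varchar(50)') AS LastName
FROM dbo.ImportStaging AS s
CROSS APPLY s.ImportLine.nodes('/row') AS n(x)   -- ImportLine is the xml column
WHERE s.LoadFileID = 1;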
In terms of at least speeding up the process of reading the file, splitting the fields, and inserting, you should use Table-Valued Parameters (TVPs) to stream the data into the import table. I have a few answers here that show various implementations of the method, differing mainly based on the source of the data (file vs a collection already in memory, etc):
How can I insert 10 million records in the shortest time possible?
Pass Dictionary<string,int> to Stored Procedure T-SQL
Storing a Dictionary<int,string> or KeyValuePair in a database
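The T-SQL side of the TVP approach is roughly this (a sketch; the type, procedure and table names are made up, and the client code that streams the rows is omitted):

-- Table type matching the generic staging layout (only two fields shown)
CREATE TYPE dbo.CsvRowType AS TABLE (F001 varchar(max) NULL, F002 varchar(max) NULL);
GO
-- The client passes all parsed rows as a single streamed, read-only parameter
CREATE PROCEDURE dbo.ImportCsvRows
    @Rows dbo.CsvRowType READONLY
AS
BEGIN
    INSERT INTO dbo.FieldImport (F001, F002)
    SELECT F001, F002
    FROM @Rows;
END;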
As was correctly pointed out in comments, even if your table has 1000 columns and most of them are NULL, it should not affect performance much, since NULLs will not waste a lot of space.
You mentioned that you may have real data with 900-1000 non-NULL columns. If you are planning to import such files, you may come across another limitation of SQL Server. Yes, the maximum number of columns in a table is 1024, but there is a limit of 8060 bytes per row. If your columns are varchar(max), then each such column will consume 24 bytes out of 8060 in the actual row and the rest of the data will be pushed off-row:
SQL Server supports row-overflow storage which enables variable length columns to be pushed off-row. Only a 24-byte root is stored in the main record for variable length columns pushed out of row; because of this, the effective row limit is higher than in previous releases of SQL Server. For more information, see the "Row-Overflow Data Exceeding 8 KB" topic in SQL Server Books Online.
So, in practice you can have a table with only 8060 / 24 = 335 nvarchar(max) non-NULL columns (strictly speaking, even a bit less, since there are other headers in the row as well).
There are so-called wide tables that can have up to 30,000 columns, but the maximum size of the wide table row is 8,019 bytes. So, they will not really help you in this case.
Yes. Large records take up more space on disk and in memory, which means loading them is slower than loading small records, and fewer of them fit in memory. Both effects will hurt performance.

How to transfer only new records between two different databases (ie. Oracle and MSSQL) using SSIS?

Do you know how to transfer only new records between two different databases (i.e. Oracle and MSSQL) using SSIS? There is no problem transferring new data only between two tables in the same database and server, but is it possible to do such an operation between completely different servers and databases?
PS. I know about the solution using Lookup, but it is not very efficient if you need to check and add a lot of records (50k and more) several times per day. I would like to operate with new data only.
You have several options:
Timestamp based solution
If you have a column which stores the insertion time in the source system, you can select only the records created since the last load. With the same logic, you can transfer modified records too; just mark the records with the timestamp value when they change.
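For example (a T-SQL flavoured sketch; the table and column names are made up, and @LastLoadTime would come from wherever you track the previous successful load):

DECLARE @LastLoadTime datetime = '2019-01-01T00:00:00';  -- high-water mark of the last load

SELECT *
FROM dbo.SourceTable
WHERE CreatedAt  > @LastLoadTime
   OR ModifiedAt > @LastLoadTime;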
Sequence based solution
If there is a sequence in the source table, you can load the new records based on that sequence. Query the last value from the destination system, then load everything which is larger than that value.
CDC based solution
If you have CDC (Change Data Capture) in your source system, you can track the changes and you can load them based on the CDC entries.
Full load
This is the most resource hungry solution: you have to copy all data from the source to the destination. If you do not have any column which marks the new records, you should use this solution.
You have several options to achieve this:
TRUNCATE the destination table and reload it from source
Use a Lookup component to determine which records are missing
Load all data from source to a temporary table and write a query which retrieves the new/changed records.
Summary
If you have at least one column which marks the new/modified records, you can use it to implement a differential/incremental load with SSIS. If you do not have any clue which columns/rows have changed, you have to load (or at least query) all of them.
There is no solution which enables a single-query (INSERT .. SELECT) approach across multiple servers without transferring all data. (Please note that a multi-server query using Linked Servers still transfers the data from the source system.)
What about variables? Is it possible to use the same variable between different databases and servers in SSIS?
I would like to read the last id number from a destination table and pass it to the query against the source table (on a different server!).
I can set a variable in a database scope like this:
DECLARE @Last int;
SET @Last = (SELECT TOP 1 Id FROM dbo.Table_1 ORDER BY Id DESC);
SELECT *
FROM dbo.Table_2
WHERE ID > @Last;
However, this works only between two tables in the same database (as a SQL command). I can create a variable for an entire SSIS package in Variables --> Add variable, but I don't know if it is possible to use the variable in a similar way as above, i.e. to keep information about the last id in a destination table and pass it to another table on a source server as a data limit.
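One common way to wire this up (sketched here with made-up names; the mappings are configured in the task and source editors) is an Execute SQL Task against the destination connection that fills a package variable, followed by a parameterized source query against the source connection:

-- Execute SQL Task (destination connection), ResultSet = "Single row",
-- result column mapped to the package variable User::LastId
SELECT MAX(Id) AS LastId FROM dbo.Table_1;

-- OLE DB Source (source connection), data access mode "SQL command",
-- with the ? parameter mapped to User::LastId
SELECT *
FROM dbo.Table_2
WHERE ID > ?;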

Use XML column instead of denormalization?

We have a database storing ~5m contacts, each with multiple addresses.
The DB is normalized, a separate Addresses-Table with FK to Contacts, properly indexed.
~10m Addresses.
Address again references Provinces & Countries tables.
Since this is a platform where one Contact is viewed at a time and no search is needed on Address properties (and if one is, we use Lucene.Net)... would it make sense to just put the Address information into an XML field on the Contact?
<Addresses>
<Address Street="" City="" ... />
<Address Street="" City="" ... />
</Addresses>
It's not really denormalization, but it still gets rid of 3 joins (Address/Country/Province).
Or, asked another way: will 5 million records joined to 10 million records even put any strain on the database (say, for example, with ~50 requests hitting the DB concurrently)?
Would it be premature optimization to do such a thing? (Even when I'm absolutely sure that I will never query for Address properties...)
We are using SQL Server 2012
If you use this column as NoSQL-style storage (just to store values and read the whole document back), this solution can be used. But there are some limitations:
Hard to search data in this column (but possible)
Hard to modify data in this column (but possible)
You can't get part of the data in this column using EF, for example (only via stored procedures)
No foreign keys (you have to maintain data consistency yourself)
And you get these benefits:
Simple storage (one table with all data)
Simple queries
In short, yes, you can use this solution, because you say you won't query parts of this data or change them.
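To illustrate the "hard but possible" point above, pulling a single contact's addresses back out of such a column could look like this (a sketch; the column name AddressesXml and the key value are assumptions, not from the question):

SELECT
    a.x.value('@Street', 'nvarchar(200)') AS Street,
    a.x.value('@City',   'nvarchar(100)') AS City
FROM dbo.Contacts AS c
CROSS APPLY c.AddressesXml.nodes('/Addresses/Address') AS a(x)
WHERE c.ContactId = 42;   -- the contact currently being viewed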

Does a full table scan read through all columns (for every row)?

Say I have a table with 3 columns - firstname, middlename, lastname - in that order .. there are also no indexes, and all the fields are varchar.
If I do a full table scan to determine if a given firstname exists in the table.. Will Oracle (or MSSQL) have to read through all the other columns as well? (or is it smart enough to skip over them?).
What if I'm searching on the 2nd column instead of the 1st one? Will the first column have to be read? What if the 1st column is a varchar with close to 2000 bytes of data? Will all the bytes have to be read, or will they somehow be skipped over?
For a programmer the definition of 'read' is probably 'look at the memory address and read every byte', but for a database this definition is incorrect. In a database, 'read' means 'access the disk and fetch the data into memory'. Since both Oracle and SQL Server are page-oriented row stores, then yes, all the columns will be 'read'. Things get complicated once you consider off-row storage and such, and things get really muddy if you consider the buffer pool. But for that level of detail you need to express the question in more detail (what are you trying to achieve and why are you asking?) and you need to understand the storage format in great detail as well.
As a side note, both SQL Server and Oracle support columnstore format which is explicitly designed for reading in column oriented fashion (i.e. do not read columns not needed), but it is very unlikely that this is what you want and columnar storage is a very, very, special case.
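For completeness, the columnstore option mentioned above would look roughly like this in SQL Server (a sketch; the table and index names are placeholders):

-- Nonclustered columnstore index; a scan that only needs firstname reads
-- only that column's segments. Note: in SQL Server 2012/2014 the base table
-- becomes read-only while a nonclustered columnstore index exists.
CREATE NONCLUSTERED COLUMNSTORE INDEX ix_people_columnstore
ON dbo.People (firstname, middlename, lastname);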
All data in a data page is read; only data that is not queried and is stored out of row is skipped. When a varchar(max) column exceeds the 8K data page, it continues in a new set of pages, and those pages are not read when the field is not queried.
