Using the Snowflake UI, after a given query has completed, one can open the History tab to examine the corresponding metadata.
Something confusing occurs when I compare the Bytes Scanned field (seen via the History tab, Image 1) with the Scanned Bytes value (seen after clicking the query ID hyperlink, Image 2).
Why are these different? Do they mean different things?
IMAGE 1
IMAGE 2
As far as I know, "Scanned Bytes" shows the total bytes read from Snowflake tables. If you are executing a COPY command, it is normal to see zero bytes (I also see it show 0 when querying secure views).
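If you want to cross-check the number outside the UI, Snowflake also exposes it in the ACCOUNT_USAGE views; a minimal sketch (the query ID below is a placeholder for your own, and note that this view can lag behind real time):
-- BYTES_SCANNED here should correspond to the profile's "Scanned Bytes"
SELECT query_id, query_type, bytes_scanned
FROM snowflake.account_usage.query_history
WHERE query_id = '<your-query-id>';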
I have a large string in my PostgreSQL database. It's a base64-encoded MP3, and I have to select the column with that large string and get all the data with one select. If I write a normal select
SELECT * FROM public.song_data WHERE id=1;
it returns just 204 kB of that string, although the string is 2.2 MB.
DataGrip also shows me just 204 kB of data from that string. Is there a way to get all the data with just one select?
That's strange. Are you sure your data was not trimmed somewhere? You can use the length function to check the actual size.
postgres=# select length('aaa');
┌────────┐
│ length │
╞════════╡
│      3 │
└────────┘
(1 row)
Two MB is nothing for Postgres, but some clients (or protocols) can have problems with it. Sometimes it is necessary to use the functions lo_import and lo_export as a workaround for client/protocol limits. For selecting data from a table you have to use a SELECT statement; there is no other way. Theoretically you can transform any string into a large object and then download that large object from the database with lo_export, which uses the special large-object protocol. For 2 MB that should not be necessary, I think.
Please check whether your data was stored in Postgres correctly. The theoretical limit for text and varchar is 1 GB; the practical limit is lower, about 100 MB, which is still significantly higher than 2 MB.
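For example, to compare what is actually stored with what the client displays, something like the following should help (a minimal sketch; mp3_base64 stands in for whatever your column is really called):
-- length() counts characters, octet_length() counts bytes
SELECT id,
       length(mp3_base64) AS char_count,
       octet_length(mp3_base64) AS byte_count
FROM public.song_data
WHERE id = 1;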
Postgres has a special data type for binary data: bytea. It converts to hex encoding by default, and base64 encoding is supported too.
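For illustration, if the MP3 were stored as bytea (in a hypothetical column named mp3_data), you could keep raw bytes in the table and produce base64 only when a client needs it:
-- encode() converts raw bytea to base64 text; decode() goes the other way
SELECT id, encode(mp3_data, 'base64') AS mp3_base64
FROM public.song_data
WHERE id = 1;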
You can select the number of characters you want to show using left with negative values (which removes characters from the end) and concat:
SELECT left(concat(public.song_data, ' '), -1) FROM mytable;
The client was actually the problem. DataGrip can't print 2 MB of data. I tried other clients (DBeaver and HeidiSQL) and it was OK. Then I selected the 2 MB row with a PHP select and got all the data.
I need help with something I'm trying to do and cannot find help anywhere.
I'm trying to upload a file to the host via ISPF (ISPF -> Command -> "Send File to Host"). The problem I'm having is that the file has variable-length records (it was exported from a DB2 database via a shell script), and it's not working well.
What I mean is:
In windows, the file looks like this:
This is line one
This is the second line
And this is the third
But in Host it always ends being like this:
This is line one This is
the second line and this
is the third
Or similar, depending on the "Record length" I set when allocating the data set.
I don't know if the problem is how I'm creating the file on the host, with the send parameters, or maybe with the TXT file itself.
I tried creating the data set with different record formats (F, FB, V, VB) and it was the same with all of them.
I also tried modifying the send parameters here:
Send parameters
And checked the txt file, but it seems to be ok.
Well, thanks in advance for the help, and sorry for my poor English!
UPDATE 03/18
Hi! I'm still trying to solve this, but now I have more info!
It seems the problem is with the exported file, not the configuration of the terminal.
I'm using a Linux script to export the file from a DB2 database, and I'm trying to upload it from a Windows PC (which has the E3270 terminal).
I read a lot and noticed that the file exported from DB2 on Linux only uses the "new line" code to mark an end of line (0A in hex), while Windows uses "carriage return + new line" ("0D 0A" in hex).
Could the problem be there?
I tried creating a new TXT file in Windows (where each line ends with 0D 0A), and it worked great! Then I tried modifying the exported file: I added a space at the end of a line and changed that space's hex value (20) to 0D, so I had 0D 0A (the editor wouldn't let me insert a new hex byte), but it didn't work. That throws off my whole theory, haha, but maybe I'm doing something wrong.
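For reference, the usual way to do that conversion in bulk, rather than editing the hex by hand, would be something like this on the Linux side (export.txt is just a placeholder name, and this assumes the unix2dos utility or GNU sed is available):
# convert LF line endings to CR+LF before transferring
unix2dos export.txt
# or, with GNU sed, append a carriage return to every line
sed -i 's/$/\r/' export.txt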
Well, thanks!
From the host output, the file (data set) is being treated as fixed-length records of 24 bytes. It needs to be specified as Variable (VB) in the send.
From here (Personal Communications 6.0.0 > Product Documentation > Books > Emulator User's Reference > Transferring Files) it appears that you can specify this as follows:
Record Format
Valid only for VM/CMS and MVS/TSO when APPEND is not specified for
file transmission. You can select any of the following:
Default
Fixed (fixed length)
Variable (variable length)
Undefined (undefined mode for MVS/TSO only)
If you select the Default value, the record format is selected
automatically by the host system.
Specifying Variable for VM file transfer enables host disk space to be
used efficiently.
Logical Record Length (LRECL)
Valid only for VM/CMS and MVS/TSO when APPEND is not specified for
file transmission.
Enter the logical record length to be used (host record byte count) in
the LRECL text box. If Variable and Undefined Mode are specified as
the record format, the logical record length is the maximum record
length within a file. The maximum value is 32767.
The record length of a file sent from a workstation to the host system
might exceed the logical record length specified here. If so, the host
file transfer program divides the file by the logical record length.
When sending a text file from a workstation to a host, if the text
file contains 2-byte workstation codes (such as kanji codes), the
record length of the file is changed because SO and SI have been
inserted.
To send a file containing long records to the host system, specify a
sufficiently long logical record length.
Because the record length of a workstation file exceeds the logical
record length, a message does not appear normally if each record is
divided. To display a message, add the following specification to the
[Transfer] item of the workstation profile:
DisplayTruncateMessage = Y
As I don't have access I can't actually look into this further, but I do recall that the file transfer can be a little confusing to use.
I'd suggest using 32767 as the LRECL, along with Variable, and perhaps having a look at the whole page that has been linked. Something on the PC side will have to know how to convert the file (i.e. at each LF, determine the length of the record and prefix the record with that record length, which if I recall correctly is 2 bytes/a word), so you might have to use Variable in conjunction with another selectable parameter.
If you follow the link, you will see that Record Format is part of Defining Transfer Types; you may have to define a transfer type as follows:
Click Edit -> Preferences -> Transfer from the session window.
Click the tab for your host type or modem protocol.
The property page for the selected host or modem protocol opens. The items that appear depend on the selected host system.
Enter transfer-type names in the Transfer Type box, or select them from the drop-down list.
Select or enter the required items (see Items to Be Specified).
To add or replace a transfer type, click Save. To delete a transfer type, click Delete.
A dialog box displays, asking for confirmation. Click OK.
I am facing an issue with a significantly large database that I have to reorganize. There are two columns: one contains the Service Code of an item, and the next contains the Description of the relevant item. Below is an example:
TSB Trim Booklet
LMN Loading Manual
GLM Grain Loading Manual
etc.
There are a total of 170 different items.
The problem is this: in a different Excel file, there is a column containing only the Descriptions of the items (around 16,000 rows, in mixed order), without the 3-letter Service Code.
How can I link them quickly?
Assumptions: you want to take the service code from file 1 and apply it to the descriptions in file 2, and a given description always has the same service code.
Use the following formula in file 2 (the big one you want to add service codes to):
=INDEX([file1]Sheetname!$A:$A,MATCH([file2]Sheetname!A2,[file1]Sheetname!$B:$B,0))
Where
[file1]Sheetname!$A:$A
is the column with service codes in the file/sheet with both the code and the description
[file2]Sheetname!A2
is the cell with description in the file/sheet with just descriptions
and
[file1]Sheetname!$B:$B
is the column with descriptions in the file/sheet with both the code and the description
I have encountered some errors with the SDP where one of the potential fixes is to increase the sample size used during schema discovery to 'unlimited'.
For more information on these errors, see:
No matched schema for {"_id":"...","doc":{...}
The value type for json field XXXX was presented as YYYY but the discovered data type of the table's column was ZZZZ
XXXX does not exist in the discovered schema. Document has not been imported
Question:
How can I set the sample size? After I have set the sample size, do I need to trigger a rescan?
These are the steps you can follow to change the sample size. Beware that a larger sample size will increase the runtime of the algorithm, and there is no indication of progress in the dashboard other than the job remaining in the 'triggered' state for a while.
Verify the specific load has been stopped and the dashboard status shows it as stopped (with or without error)
Find a document https://<account>.cloudant.com/_warehouser/<source> where <source> matches the name of the Cloudant database you have issues with
Note: Check https://<account>.cloudant.com/_warehouser/_all_docs if the document id is not obvious
Substitute "sample_size": null (which scans a sample of 10,000 random documents) with "sample_size": -1 (to scan all documents in your database) or "sample_size": X (to scan X documents in your database where X is a positive integer)
Save the document and trigger a rescan in the dashboard. A new schema discovery run will execute using the defined sample size.
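For illustration only, after the edit the relevant part of the _warehouser document would look something like this (the _id and _rev values here are placeholders, and any other fields in the document should be left as they are):
{
  "_id": "<source>",
  "_rev": "<current-revision>",
  "sample_size": -1
}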
I'm converting one of our Delphi 7 projects to Delphi XE3 because we want to support Unicode. We're using MS SQL Server 2008/R2 as our database server. After changing some database fields from VARCHAR to NVARCHAR (and the fields in the accompanying ClientDatasets to ftWideString), random crashes started to occur. While debugging I noticed some unexpected behaviour by the TClientDataset/DbExpress:
For an NVARCHAR(10) database column I manually create a TWideStringField in a ClientDataset and set the 'Size' property to 10. The 'DataSize' property of the field tells me 22 bytes are needed, which is expected, since TWideStringField's encoding is UTF-16, so it needs two bytes per character plus some space for storing the length. Now when I call 'CreateDataset' on the ClientDataset and write the dataset to XML (using .SaveToFile), the field is defined in the XML file as
<FIELD WIDTH="20" fieldtype="string.uni" attrname="TEST"/>
which looks ok to me.
Now, instead of calling .CreateDataset, I call .Open on the TClientDataset so that it gets its data through the linked components -> TDatasetProvider -> TSQLDataset (.CommandText = a simple select * from table) -> TSQLConnection. When I inspect the properties of the field in my watch list, Size is still 10 and DataSize is still 22. After saving to an XML file, however, the field is defined as
<FIELD WIDTH="40" fieldtype="string.uni" attrname="TEST"/>
...the width has doubled?
Finally, if I call .Open on the TClientDataset without creating any field definitions in advance at all, the Size of the field will afterwards be 20 (incorrect!) and DataSize 42. After saving to XML, the field is still defined as
<FIELD WIDTH="40" fieldtype="string.uni" attrname="TEST"/>
Does anyone have any idea what is going wrong here?
Check the field type and its size at the SQLCommand component (which comes before the DatasetProvider).
Size doubling may be the result of two implicit "conversions": first, the server provides NVarchar data, which is stored into an ANSI string field (and every byte becomes a separate character); second, that is stored into the ClientDataset's field of type WideString, and each character becomes 2 bytes (so the size doubles).
Note that in prior versions of Delphi a string field size mismatch between the ClientDataset's field and the corresponding Query/Command field did not result in an exception, but starting from one of the XE versions it often results in an AV. So you have to check string field sizes carefully during migration.
It sounds like changing the column data type has created unexpected issues for you. My suggestion is to:
1. Back up the table (there are multiple ways of doing this; pick your poison, figuratively speaking).
2. Delete the table.
3. Recreate the table.
4. Import the data from the old table into the newly created table.
See if that helps.
SQL tables DO NOT like it when column data types get changed, and unexpected issues may arise from doing just that. So try it; worst case scenario, you have wasted maybe ten minutes of your time trying a possible solution.