I'm trying to capture a table from an application built in PowerBuilder. Silk Test always records the table as a Control instead of a table, which doesn't allow any table functions on it. Can anybody guide me with this?
e.g. Control abcd
locator "[#priorlabel='abcd'][1]"
Silk Test does not provide specific support for PowerBuilder. You might be able to automate the table using a combination of Click(), TypeKeys(), TextClick(), and ImageClick().
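For example, here is a rough Silk4J (Java) sketch of that approach, using the locator from the recording above. The Desktop/Control classes and the click()/typeKeys() calls are my assumptions about the Silk4J API and should be checked against your Silk Test version:
import com.borland.silktest.jtf.Control;
import com.borland.silktest.jtf.Desktop;

public class TableWorkaround {
    public static void main(String[] args) {
        Desktop desktop = new Desktop();
        // Locator copied from the recording in the question.
        Control table = desktop.find("[#priorlabel='abcd'][1]");
        table.click();                  // give the control focus
        table.typeKeys("<Down><Down>"); // walk rows with the keyboard instead of a table API
    }
}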
I'm testing out a trial version of Snowflake. I created a table and want to load a local CSV file called "food", but I don't see any "load data" option like the one shown in tutorial videos.
What am I missing? Do I need to use a PUT command somewhere?
I don't think Snowsight has that option in the UI. It's available in the classic UI, though: go to the Databases tab, select a database, go to the Tables tab, and select a table; the option will be at the top.
If the classic UI is limiting you or you are already using Snowsight and don't want to switch back, then here is another way to upload a CSV file.
A prerequisite is that you have SnowSQL installed on your device (https://docs.snowflake.com/en/user-guide/snowsql-install-config.html).
Start SnowSQL and perform the following steps:
Use the database you want to upload the file to. You need various privileges for creating a stage, a file format, and a table. E.g. USE MY_TEST_DB;
Create the file format you want to use for uploading your CSV file, e.g.
CREATE FILE FORMAT "MY_TEST_DB"."PUBLIC".MY_FILE_FORMAT TYPE = 'CSV';
If you don't configure the RECORD_DELIMITER, the FIELD_DELIMITER, and other options, Snowflake uses defaults. I suggest you have a look at https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html. Some of the auto-detection features can make your life hard, and sometimes it is better to disable them.
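For illustration, a more explicit file format could look like the following; the option values are assumptions about the file, not recommendations:
CREATE OR REPLACE FILE FORMAT "MY_TEST_DB"."PUBLIC".MY_FILE_FORMAT
    TYPE = 'CSV'
    FIELD_DELIMITER = ','
    RECORD_DELIMITER = '\n'
    SKIP_HEADER = 1                        -- skip the header row with the column names
    FIELD_OPTIONALLY_ENCLOSED_BY = '"'     -- allow quoted fields that contain commas
    EMPTY_FIELD_AS_NULL = TRUE;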
Create a stage using the previously created file format:
CREATE STAGE MY_STAGE file_format = "MY_TEST_DB"."PUBLIC".MY_FILE_FORMAT;
Now you can put your file into this stage:
PUT file://<file_path>/file.csv @MY_STAGE;
You can find documentation for configuring the stage at https://docs.snowflake.com/en/sql-reference/sql/create-stage.html
You can check the upload with
SELECT d.$1, ..., d.$N FROM @MY_STAGE/file.csv d;
Then, create your table.
CREATE TABLE MY_TABLE (col1 varchar, ..., colN varchar);
Personally, I prefer to first create a table with only varchar columns and then create a view or a table with the final types. I love the try_to_* functions in Snowflake (e.g. https://docs.snowflake.com/en/sql-reference/functions/try_to_decimal.html).
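For example, a sketch of that second step; the column names and types are made up:
CREATE VIEW MY_TABLE_TYPED AS
SELECT
    TRY_TO_DECIMAL(col1, 10, 2) AS price,  -- NULL instead of an error on bad values
    TRY_TO_DATE(col2)           AS order_date,
    col3                        AS note
FROM MY_TABLE;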
Then, copy the content from your stage into your table. If you want to transform your data at this point, you have to use an inner SELECT (a sketch follows below). If not, the following command is enough.
COPY INTO MY_TABLE FROM @MY_STAGE/file.csv;
I suggest doing this without the inner SELECT because then the option ERROR_ON_COLUMN_COUNT_MISMATCH works.
Be aware that the schema of the table must match the format. As mentioned above, if you go with all columns as varchars first and then transform the columns of interest in a second step, you should be fine.
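For completeness, here is what a COPY with an inner SELECT could look like. This is only a sketch that reorders columns; the column mapping is made up, and only a limited set of functions is allowed inside such a SELECT:
COPY INTO MY_TABLE FROM (
    SELECT d.$2, d.$1, d.$3    -- e.g. swap the first two CSV columns to match the table
    FROM @MY_STAGE/file.csv d
);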
You can find documentation for copying the staged file into a table at https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html
You can check the dropped lines as follows:
SELECT error, line, character, rejected_record FROM TABLE(VALIDATE("MY_TEST_DB"."PUBLIC".MY_TABLE, job_id => 'xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'));
Details can be found at https://docs.snowflake.com/en/sql-reference/functions/validate.html.
If you want to add those lines to your success table, you can copy the dropped lines to a new table and transform the data until the schema matches the schema of the success table. Then, you can UNION both tables (see the sketch below).
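A rough sketch of that repair flow, with made-up table names (MY_TABLE_FIXED is the hypothetical table you produce after fixing the rejected lines; '_last' refers to the most recent COPY job in the session):
-- park the rejected lines somewhere you can work on them
CREATE TABLE MY_TABLE_REJECTED AS
    SELECT rejected_record
    FROM TABLE(VALIDATE(MY_TABLE, job_id => '_last'));

-- after transforming MY_TABLE_REJECTED into MY_TABLE_FIXED with matching columns:
SELECT * FROM MY_TABLE
UNION ALL
SELECT * FROM MY_TABLE_FIXED;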
You can see that there is quite a lot to do just to load a simple CSV file into Snowflake. It becomes even more complicated when you take into account that every step can cause specific failures and that your file might contain erroneous lines. This is why my team and I are working at Datameer to make these types of tasks easier. We aim for a simple drag-and-drop solution that does most of the work for you. We would be happy if you would try it out here: https://www.datameer.com/upload-csv-to-snowflake/
BACKGROUND
After moving our MSSQL DB to an ELK stack, version 5.2, we realised we needed a relational visualiser. By downgrading to 2.4.1, we were able to link Kibi.
TRIED
I have created the relations between the tables as shown below:
However, when trying to build a simple line graph to compare the tables tblaccounts and aspnetusers, it simply uses the values in tblaccountusers. None of the Kibi documentation seems to help, from what I have seen.
THE PROBLEM
The problem with this is that I need the actual values from both tables to be used, joined through the child table tblaccountusers, so that account names are displayed rather than ID numbers such as 1, 2, etc.
If anyone has any guidance or links to help with this, please comment below.
Scenario: I'm in the IntelliJ IDEA DB console and looking at
SELECT * FROM TableXY;
I want to see the definition of TableXY. One way of doing it is:
ctrl+click on the table name: Looks up the table in the Database window.
F4: Opens the table editor.
Select the Text tab.
The problem is that I'm on a DB with a lot of tables and the first step takes forever because IDEA loads the full list of tables.
Is there a way to jump to the table editor directly?
I am not sure if this is what you are looking for, but to quickly view the table definition you can use the Quick Documentation pop-up:
Place your cursor within the table name and hit CTRL+Q (or F1 on Mac). This will show you some information about the table, the first rows, and the table definition (output from SHOW CREATE TABLE).
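If you only need the DDL, you can also run that statement yourself in the console, assuming a MySQL-family database (which the pop-up's output suggests); TableXY is the table from the question:
SHOW CREATE TABLE TableXY;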
You can also configure the Quick Documentation pop-up under Settings > Tools > Database (see the IntelliJ IDEA online help).
I am developing a web application using Oracle ADF. I have a bounded task flow containing a page fragment, and in that fragment I have a table. I am generating this table from a managed bean. The following is my table:
I have pasted "#{pageFlowScope.tableUtilBean.tableList}" into the value field of the table in the Property Inspector. My table is generated successfully.
I have a method in a managed bean called generateTable(). The table is generated after executing a query: if the query result contains 10 records, the table will have 10 rows.
My problem is that if the query result has 100 records, this method executes 100 times and the query runs 100 times. Because of this, it takes too much time to generate the table. I need to make sure that this method executes only once.
Please help me. How do I achieve this?
Thanks in advance.
In your task flow, try to create a Method Call activity and make it the default activity. This method call should invoke #{pageFlowScope.tableUtilBean.generateTable} before the fragment is loaded.
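Alternatively, you can guard the getter itself, since JSF may call a value-bound getter many times per request. Below is a minimal sketch assuming the bean and property names from the EL above; the row type (a plain Map) is a placeholder for whatever your rows actually look like:
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class TableUtilBean {

    // Cached query result; built at most once per pageFlowScope lifetime.
    private List<Map<String, Object>> tableList;

    // JSF calls this getter repeatedly while rendering the table,
    // so run the query only on the first call.
    public List<Map<String, Object>> getTableList() {
        if (tableList == null) {
            generateTable();
        }
        return tableList;
    }

    public void generateTable() {
        tableList = new ArrayList<Map<String, Object>>();
        // ... execute the query once here and add one Map per row ...
    }
}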
And when you have a query that produces the result, why are you populating the table from a managed bean?
Just create a ViewObject from the SQL query and drop it on the page as an af:table.
Make use of ADF Business Components.
Ashish
We're using Magento 1.4.0.1 and want to use an extension from a 3rd-party developer. The extension does not work because of a join to the table "sales_flat_shipment_grid":
$collection = $model->getCollection()->join(
    'sales/shipment_grid',
    'increment_id=shipment',
    array(
        'order_increment_id' => 'order_increment_id',
        'shipping_name'      => 'shipping_name',
    ),
    null,
    'left'
);
Unfortunately, this table does not exist in our database, so the error "Can't retrieve entity config: sales/shipment_grid" appears. If I comment this part out, the extension works, but I guess it does not work properly.
Does anybody know something about this table? There is a backend option for the catalog to use "flat tables", but that applies only to the catalog, and those tables already exist no matter which option is checked.
As is obvious from the table name, this table contains information about shipments and backs the shipments grid in the backend. The problem is that this table was introduced in 1.4.1.1, so you won't find it in your store.
I see 3 ways of solving the problem:
You can create this table and write a script, run by cron, that fills it with the necessary data (see the sketch after this list)
You can rewrite the SQL query in that 3rd-party extension so that it takes the necessary data from other sources
You can upgrade your Magento at least to 1.4.1.1 (highly recommended)
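For the first option, here is a rough sketch of the table. The columns are guessed from what the extension's join uses plus the 1.4.1.1 schema, so verify the exact definition against a 1.4.1.1 installation before relying on it:
CREATE TABLE sales_flat_shipment_grid (
    entity_id int(10) unsigned NOT NULL,            -- shipment entity id
    store_id smallint(5) unsigned DEFAULT NULL,
    total_qty decimal(12,4) DEFAULT NULL,
    increment_id varchar(50) DEFAULT NULL,          -- what the extension joins on
    order_increment_id varchar(50) DEFAULT NULL,
    created_at datetime DEFAULT NULL,
    order_created_at datetime DEFAULT NULL,
    shipping_name varchar(255) DEFAULT NULL,        -- also used by the extension
    PRIMARY KEY (entity_id)
);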