In Netezza I need to convert columns to rows. For example, given:
emp   id   dept
abc   10   CS
xyz   20   Maths
I need the output as:
abc
10
CS
xyz
20
Maths
You can use UNION ALL to accomplish this, but you need to decide what data type the target column should be.
You don't specify the types for your sample data, but it might be something like this:
create table row_split (emp varchar(40), id integer, dept varchar(15));
You would then need to cast each column to a type that can accommodate the data of every source column; here, varchar(40) is the obvious choice.
select emp::varchar(40) result_column from row_split
union all
select id::varchar(40) result_column from row_split
union all
select dept::varchar(40) result_column from row_split;
RESULT_COLUMN
---------------
Maths
xyz
20
CS
abc
10
(6 rows)
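As a sanity check of the same pattern, here is a minimal sketch using SQLite through Python as a stand-in engine (SQLite and the in-memory database are assumptions made purely for illustration; Netezza itself uses the :: cast syntax shown above):

```python
import sqlite3

# Stand-in for the Netezza table; SQLite is used here only for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("create table row_split (emp text, id integer, dept text)")
conn.executemany("insert into row_split values (?, ?, ?)",
                 [("abc", 10, "CS"), ("xyz", 20, "Maths")])

# Cast every column to one common type so the UNION ALL branches agree.
rows = conn.execute("""
    select cast(emp as text) as result_column from row_split
    union all
    select cast(id as text) from row_split
    union all
    select cast(dept as text) from row_split
""").fetchall()

# Without an ORDER BY, the row order is not guaranteed (as seen above,
# where Netezza returned the six values in arbitrary order).
print([r[0] for r in rows])
```

The six values come back in no guaranteed order unless you add an ORDER BY, which is why the Netezza result set above looks shuffled.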
There is a much easier way to do this using nzsql, the Netezza command-line query tool (if you are accessing the Netezza database from Linux/Unix).
nzsql offers an output field delimiter option.
Let's assume the data is stored in an employee table, and that the environment variables for host, username, password, and database are set and exported:
export NZ_USER=<username>
export NZ_PASSWORD=<password>
export NZ_DATABASE=<database>
export NZ_HOST=<hostname>
nzsql -F $'\n' -Atc 'select * from employee;'
abc
10
CS
xyz
20
Maths
The only downside to this solution is that it works only on Linux/Unix, where the nzsql utility is available.
If you are trying to convert columns to rows, then shouldn't you be expecting:
emp    abc   xyz
id     10    20
dept   CS    Maths
If you are trying to transpose a matrix, you may use the TRANSPOSE procedure.
Related
I have a requirement for a table design where I need to maintain the files uploaded for each application. Is there a simple way I can achieve this without any trigger/sproc?
ID  AppName  FileName  fileorder
1   abc      file1     1
2   abc      file2     2
3   abc      file1     3
4   xyz      test1     1 - starts afresh
5   xyz      test2     2
6   abc      file3     4 - resumes from the previous value for 'abc'
7   xyz      test3     3 - resumes from the previous value for 'xyz'
8   grt      file1     1 - starts afresh
No, there is nothing you can write in a CREATE TABLE that will automatically populate the fileorder column this way. The only way you can do this is with some custom coding.
I don't know your reason for wanting a column like this, but for most reasons I can think of, you're probably better off not storing this value at all, and instead either calculating it at query time or creating a VIEW that includes the calculation.
Instead of writing a value of fileorder to the table you'd be better off writing a query to read the table that looks like this:
SELECT ID, AppName, FileName, ROW_NUMBER() OVER(PARTITION BY AppName ORDER BY ID) AS fileorder
FROM YOUR_TABLE -- Whatever your table name is
WHERE AppName in ('abc','xyz','grt') -- Any relevant WHERE clause
ORDER BY ID;
Unfortunately, you can't use Window Functions in a computed column, which would be the only way outside of a stored procedure or a trigger to do this.
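To illustrate the query-time approach, here is a small sketch using SQLite through Python (the engine and table name are assumptions made for the demo; the ROW_NUMBER() logic is the same as in the query above, and SQLite supports window functions from version 3.25 on):

```python
import sqlite3

# Illustrative stand-in for the table described in the question.
conn = sqlite3.connect(":memory:")
conn.execute("create table files (ID integer, AppName text, FileName text)")
conn.executemany("insert into files values (?, ?, ?)",
                 [(1, "abc", "file1"), (2, "abc", "file2"), (3, "abc", "file1"),
                  (4, "xyz", "test1"), (5, "xyz", "test2"), (6, "abc", "file3"),
                  (7, "xyz", "test3"), (8, "grt", "file1")])

# ROW_NUMBER() restarts at 1 for each AppName partition, numbering by ID,
# which reproduces the desired fileorder column without storing it.
rows = conn.execute("""
    select ID, AppName, FileName,
           row_number() over (partition by AppName order by ID) as fileorder
    from files
    order by ID
""").fetchall()
for r in rows:
    print(r)
```

Note how 'abc' resumes at 4 for ID 6 and 'grt' starts afresh at 1, exactly matching the sample data in the question.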
I'm trying to take a raw data set that adds a new column for each new batch of data and convert it to a more traditional table structure. The idea is to have the script pull the column name (the date) into a new column and then stack each date's data values on top of one another.
Example
Store     1/1/2013   2/1/2013
XYZ INC   $1000      $2000
To
Store     Date       Value
XYZ INC   1/1/2013   $1000
XYZ INC   2/1/2013   $2000
thanks
There are a few different ways that you can get the result that you want.
You can use a SELECT with UNION ALL:
select store, '1/1/2013' date, [1/1/2013] value
from yourtable
union all
select store, '2/1/2013' date, [2/1/2013] value
from yourtable;
See SQL Fiddle with Demo.
You can use the UNPIVOT function:
select store, date, value
from yourtable
unpivot
(
value
for date in ([1/1/2013], [2/1/2013])
) un;
See SQL Fiddle with Demo.
Finally, depending on your version of SQL Server you can use CROSS APPLY:
select store, date, value
from yourtable
cross apply
(
values
('1/1/2013', [1/1/2013]),
('2/1/2013', [2/1/2013])
) c (date, value);
See SQL Fiddle with Demo. All versions will give a result of:
| STORE | DATE | VALUE |
|---------|----------|-------|
| XYZ INC | 1/1/2013 | 1000 |
| XYZ INC | 2/1/2013 | 2000 |
Depending on the details of the problem (i.e. source format, number and variability of dates, how often you need to perform the task, etc), it very well may be much easier to use some other language to parse the data and perform either a reformatting function or the direct insert into the final table.
The above said, if you're interested in a completely SQL solution, it sounds like you're looking for some dynamic pivot functionality. The keywords being dynamic SQL and unpivot. The details vary based on what RDBMS you're using and exactly what the specs are on the initial data set.
I would use a scripting language (Perl, Python, etc.) to generate an INSERT statement for each date column you have in the original data and transpose it into a row keyed by Store and Date. Then run the inserts into your normalized table.
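As a sketch of that scripting approach (Python here; the target table name and the in-memory data layout are assumptions), the transpose amounts to pairing each date column header with the store's value in that column:

```python
# Wide input, as in the question: one header row, one data row per store.
header = ["Store", "1/1/2013", "2/1/2013"]
data_row = ["XYZ INC", "$1000", "$2000"]

# Pair each date column with its value and emit one INSERT per pair.
# "final_table" is a hypothetical name for the normalized target table.
store = data_row[0]
inserts = []
for date, value in zip(header[1:], data_row[1:]):
    inserts.append(
        f"INSERT INTO final_table (store, date, value) "
        f"VALUES ('{store}', '{date}', '{value}');"
    )
for stmt in inserts:
    print(stmt)
```

A real script would read the header and rows from the source file and use parameterized statements rather than string formatting, but the transpose logic is the same.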
I have a table like this:
CREATE TABLE [Mytable](
[Name] [varchar](10),
[number] [nvarchar](100));
I want to find [number] values that include alphabetic characters.
The data should be formatted like this:
Name | number
---------------
Jack | 2131546
Ali | 2132132154
but sometimes the number is inserted malformed, containing alphabetic and other non-numeric characters, like this:
Name | number
---------------
Jack | 2[[[131546ddfd
Ali | 2132*&^1ASEF32154
I want to find these malformed rows.
I can't use LIKE, because LIKE makes my query very slow.
Updated to find all non-numeric characters:
select * from Mytable where number like '%[^0-9]%'
Regarding the comments on performance: using CLR and regex might speed things up slightly, but the bulk of the cost for this query is going to be the number of logical reads.
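For illustration only, the same "contains any non-digit" test can be expressed with a regex outside the database; this Python sketch mirrors the sample rows from the question:

```python
import re

# Sample rows from the question: two clean, two malformed.
rows = [("Jack", "2131546"), ("Ali", "2132132154"),
        ("Jack", "2[[[131546ddfd"), ("Ali", "2132*&^1ASEF32154")]

# A value is malformed if it contains any character outside 0-9,
# i.e. the same predicate as the LIKE '%[^0-9]%' pattern above.
non_numeric = re.compile(r"[^0-9]")
malformed = [(name, num) for name, num in rows if non_numeric.search(num)]
print(malformed)
```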
A bit outside the box, but you could do something like:
bulk copy the data out of your table into a flat file
create a table that has the same structure as your original table but with a proper numeric type (e.g. int) for the [number] column.
bulk copy your data into this new table, making sure to specify a batch size of 1 and an error file (where rows that won't fit the schema will go)
rows that end up in the error file are the rows that have non-numerics in the [number] column
Of course, you could do the same thing with a cursor and a temp table or two...
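The cast-and-route idea behind that bulk-copy trick can be sketched in plain Python (the row data here is illustrative): rows that fail the numeric conversion end up in an error bucket, just as they would land in the bulk copy's error file:

```python
# Illustrative rows: one clean, one malformed.
rows = [("Jack", "2131546"), ("Ali", "2132*&^1ASEF32154")]

# Try to load each value into a numeric type; route failures aside,
# mirroring the batch-size-1 bulk copy with an error file.
good, errors = [], []
for name, num in rows:
    try:
        good.append((name, int(num)))
    except ValueError:
        errors.append((name, num))
```

Whatever ends up in `errors` is your list of rows with non-numerics in the [number] column.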
I'm hoping to return a single row with a comma separated list of values from a query that returns multiple rows in Oracle, essentially flattening the returned rows into a single row.
In PostgreSQL this can be achieved using the array and array_to_string functions like this:
Given the table "people":
id | name
---------
1 | bob
2 | alice
3 | jon
The SQL:
select array_to_string(array(select name from people), ',') as names;
Will return:
names
-------------
bob,alice,jon
How would I achieve the same result in Oracle 9i?
Thanks,
Matt
Tim Hall has the definitive collection of string aggregation techniques in Oracle.
If you're stuck on 9i, my personal preference would be to define a custom aggregate (there is an implementation of string_agg on that page), so that you would have
SELECT string_agg( name )
FROM people
But you have to define a new STRING_AGG function. If you need to avoid creating new objects, there are other approaches but in 9i they're going to be messier than the PostgreSQL syntax.
In 10g I definitely prefer the COLLECT option mentioned at the end of Tim's article.
The nice thing about that approach is that the same underlying function (that accepts the collection as an argument), can be used both as an aggregate and as a multiset function:
SELECT deptno, tab_to_string(CAST(MULTISET(SELECT ename FROM emp
WHERE deptno = dept.deptno) AS t_varchar2_tab), ',') FROM dept
However in 9i that's not available. SYS_CONNECT_BY_PATH is nice because it's flexible, but it can be slow, so be careful of that.
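For comparison only: engines with a built-in string aggregate make this a one-liner. This Python/SQLite sketch uses SQLite's group_concat, which is not available in Oracle 9i, where you would use the custom string_agg or SYS_CONNECT_BY_PATH approaches described above:

```python
import sqlite3

# Stand-in for the "people" table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("create table people (id integer, name text)")
conn.executemany("insert into people values (?, ?)",
                 [(1, "bob"), (2, "alice"), (3, "jon")])

# group_concat flattens the rows into one comma-separated string.
# Like most string aggregates, the concatenation order is not
# guaranteed without an explicit ordering.
(names,) = conn.execute(
    "select group_concat(name, ',') from people").fetchone()
print(names)
```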
As in subject... is there a way of looking at an empty table schema without inserting any rows and issuing a SELECT?
SELECT *
FROM SYSIBM.SYSCOLUMNS
WHERE
TBNAME = 'tablename';
Are you looking for DESCRIBE?
db2 describe table user1.department
Table: USER1.DEPARTMENT
Column     Type     Type
name       schema   name        Length   Scale   Nulls
---------- -------- ----------- -------- ------- ------
AREA       SYSIBM   SMALLINT    2        0       No
DEPT       SYSIBM   CHARACTER   3        0       No
DEPTNAME   SYSIBM   CHARACTER   20       0       Yes
For DB2 AS/400 (V5R4 here) I used the following queries to examine for database / table / column metadata:
SELECT * FROM SYSIBM.TABLES -- Provides all tables
SELECT * FROM SYSIBM.VIEWS -- Provides all views and their source (!!) definition
SELECT * FROM SYSIBM.COLUMNS -- Provides all columns, their data types & sizes, default values, etc.
SELECT * FROM SYSIBM.SQLPRIMARYKEYS -- Provides a list of primary keys and their order
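The same catalog-query idea exists in most engines; purely for comparison, this Python/SQLite sketch reads column metadata with PRAGMA table_info (SQLite-specific; on DB2 you would query SYSIBM.SYSCOLUMNS as shown above):

```python
import sqlite3

# Illustrative table modeled loosely on the DEPARTMENT example above.
conn = sqlite3.connect(":memory:")
conn.execute("create table department ("
             "area smallint not null, dept char(3) not null, "
             "deptname char(20))")

# PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk)
# for each column - the schema without inserting any rows.
cols = conn.execute("pragma table_info(department)").fetchall()
for cid, name, ctype, notnull, default, pk in cols:
    print(name, ctype, "No" if notnull else "Yes")  # Nulls, DESCRIBE-style
```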
Looking at your other question, DESCRIBE may not work. I believe there is a system table that stores all of the field information.
Perhaps this will help you out. It's a bit more coding, but far more accurate.