I needed to insert an array field into a database and was pleased to notice that PostgreSQL has that functionality. But now I am not able to insert the data using the table's ActiveRecord.
I have tried the calls below with no success:
$active_record->array_column = $_array_of_values;
which gives me the exception
Exception Raised:CDbCommand failed to execute the SQL statement: SQLSTATE[22P02]: Invalid text representation: 7 ERROR: array value must start with "{" or dimension information
I have also tried this using
foreach ($_array_of_values as $value) {
    $active_record->array_column[] = $value;
}
which tells me
Indirect modification of overloaded property FeatureRaw::$colors_names has no effect
Can anyone help me with this?
Thanks!
Data must be inserted in the form (text representation of an ARRAY):
INSERT INTO tbl (arr_col) VALUES ('{23,45}')
Or:
INSERT INTO tbl (arr_col) VALUES ('{foo,"bar, with comma"}')
So you need to enclose your array values in '{}' and separate them with commas. Use double quotes ("") around text values that contain a comma.
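An alternative that sidesteps these quoting rules is the ARRAY constructor, where values are ordinary literals:
INSERT INTO tbl (arr_col) VALUES (ARRAY[23,45]);
INSERT INTO tbl (arr_col) VALUES (ARRAY['foo', 'bar, with comma']);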
I listed more syntax variants to insert arrays in a related answer.
For those who have the same problem:
I didn't check the Yii1 behavior, but in Yii2 you can simply insert the array as a properly formed string, as Erwin Brandstetter mentioned in his comment:
$activeRecord->arrayField = '{' . implode(',',$array_values) . '}';
Of course, you need to make additional efforts when your $array_values contains strings with commas, etc. And you still need to convert the value back to an array after you load the ActiveRecord.
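The reverse conversion can be as simple as this one-liner (a sketch assuming plain values without commas or embedded quotes):
$array_values = explode(',', trim($activeRecord->arrayField, '{}'));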
You can perform these conversions in the ActiveRecord's beforeSave() and afterLoad(), and then you will not need to convert values manually.
UPD. Recently I made a simple behavior for Yii2 to use array fields with ActiveRecord without manual field building: kossmoss/yii2-postgresql-array-field. It is a more generalized way to solve the problem and I hope it will help. For those who use Yii1: you can investigate the package code and create your own solution compatible with your framework.
How to add leading zeros in ADF data flow from the expression builder
For example: I have a column with the numeric value “000001”, but it comes through as just 1 from the SQL DB. If I put the entire value in single quotes it comes through, but I need a dynamic way to implement this without hardcoding.
I agree with @Larnu's comments that even if we give 00001 to an int-type column, it will be stored as 1 only.
So we have to supply such values in single quotes ('00001') to use them like that, or import the incoming data as string instead of int.
As you are using an ADF data flow, if you want the value 00001 you can generate it using a derived column transformation on the SQL source. How you build the expression depends on how your leading zeros vary, so adjust it to your requirement.
Sample demo:
concat('0000', toString(id))
Result: a string value with the leading zeros prepended (e.g. 00001 for id = 1; the original screenshot is omitted here).
Use that column as per your requirement; after that you can convert it back to the original value with toInteger().
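If you instead need a fixed total width (say 6 characters) regardless of how long id is, the lpad function in the same expression builder should do it dynamically (worth verifying against your data flow):
lpad(toString(id), 6, '0')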
Requirement:
Need the file to be exported in the below format, where gender, age, and interest are columns and the value after : is the data for that column. Can this be achieved using Snowflake? If not, is it possible to export the data using Python?
User1234^gender:male;age:18-24;interest:fishing
User2345^gender:female
User3456^age:35-44
User4567^gender:male;interest:fishing,boating
EDIT 1: Solution as given by @demircioglu
It displays NULL values instead of the other column values.
Below is the EMPLOYEES table data (screenshot omitted).
When I ran the below query:
SELECT 'EMP_ID'||EMP_ID||'^'||'FIRST_NAME'||':'||FIRST_NAME||';'||'LAST_NAME'||':'||LAST_NAME FROM tempdw.EMPLOYEES ;
Create your SQL with the desired format and write it to a file
COPY INTO @~/stage_data
FROM
(
SELECT 'User'||User||'^'||'gender'||':'||gender||';'||'age'||':'||age||';'||'interest'||':'||interest FROM table
)
file_format = (TYPE=CSV compression='gzip')
The file format here is not important, because each line will be treated as a single field due to your delimiter requirements.
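To download the export from the user stage afterwards, something like this should work from SnowSQL (the local path is illustrative):
GET @~/stage_data file:///tmp/export/;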
Edit:
The CONCAT function (aliased as ||) returns NULL if any of its inputs is NULL.
To eliminate NULLs you can use the NVL2 function, so your SQL will have a series of NVL2s.
NVL2 checks its first parameter: if it is not NULL, it returns the first expression (the second argument); if it is NULL, it returns the second expression (the third argument).
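For illustration:
SELECT NVL2('x', 'first', 'second');   -- 'x' is not NULL, returns 'first'
SELECT NVL2(NULL, 'first', 'second');  -- returns 'second'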
So for User column
'User'||User||'^' will turn into
NVL2(User,'User','')||NVL2(User,User,'')||NVL2(User,'^','')
P.S. I am leaving it up to you to create the rest of the SQL, because Stack Overflow's function is to help find the solution, not to spoon-feed it.
No, I do not believe multiple different delimiters like this are supported in Snowflake at this time. Multi-byte and multi-character delimiters are supported, but they will need to be specified as the same delimiter repeated for either record or line.
Yes, it may be possible to achieve this with post-processing, Python scripts, or even transformative SQL statements. This is not really my area of expertise, so if someone has an example for you, I'll let them add it to the discussion.
I am using PostgreSQL 9.5.14, and have a column in a table that contains JSON arrays that I need to parse for their contents.
Using a select I can see that the structure of the JSON is of this kind:
SELECT rule_results from table limit 5;
Result:
[{"rule_key":"applicant_not_lived_outside_eu"},{"rule_key":"family_assets_exceed_limit"},{"rule_key":"owned_a_deed"}]
[]
[]
[{"rule_key":"family_category","details":"apply_with_parents_under_25"}]
[]
I have been unable to create an SQL command to give me the values of the rule_key keys.
I've attempted to use the PostgreSQL documentation on JSON functions to find a solution:
https://www.postgresql.org/docs/9.5/functions-json.html
SELECT rule_results::json->'rule_key' as results from table;
This gives me null values only.
SELECT jsonb_object_keys(rule_results::jsonb) from table;
This results in the error msg "cannot call jsonb_object_keys on a scalar", which seems to mean that the query is limited to a single row.
This looks simple enough, an array with key:value pairs, but somehow the answer eludes me. I would appreciate any help.
demo: db<>fiddle
Different solutions are possible; it depends on what you are expecting in the end. But all of them would use the function json_array_elements(), which expands every array element into its own row. With that you can do whatever you want.
This results in one row per value:
SELECT
    value -> 'rule_key'
FROM
    data,
    json_array_elements(rule_results)
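If you want the keys as plain text rather than json values, the ->> operator does that (same table and column names as above); note that rows with empty arrays simply contribute no rows:
SELECT
    value ->> 'rule_key'
FROM
    data,
    json_array_elements(rule_results)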
I am trying to insert into the table but got an exception that the array index is out of bounds.
Please give me an idea of how to do this and where I am making a mistake.
I have tried many conversions but haven't succeeded.
Following is the code:
Object[]param={ID,name,Fathername,Address,Mob,Gender,Qualification};
try{
db=Display.getInstance().openOrCreate("students.db");
db.execute("insert into clStudent values('?','?','?',?,'?','?','?')", (Object[]) param) ;
You should drop the single quotes around the question marks in your query string.
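That is, the statement string should read as follows; the parameter binding supplies the quoting itself:
insert into clStudent values(?,?,?,?,?,?,?)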
I am using PostgreSQL 9.4.1 and have run into an issue with a column that I need to update. The column is of type JSON with elements in the following format:
["a","b","c",...]
["a","c","d",...]
["c","d","e",...]
etc.
so that each element is a string. It is my understanding that each of these elements is considered a key of the JSON array (please correct me if I am a bit confused here; I haven't ever worked with JSON datatype columns before, so I'm still trying to get a grip on them). There is no actual pattern to these arrays, and their contents depend on user input from somewhere else. My goal is to update any of the arrays that contain a particular element (say "b", for the purpose of explaining my question more thoroughly) and replace "b" with, say, "b1". Meaning that:
["a","b","c",...]
would be updated to
["a","b1","c",...]
I have found a few ways listed on this site (I don't currently have the links, but I can find them again if necessary) to update VALUES for a particular KEY, but I haven't found a way mentioned to change the KEY itself. I have already found a way to target the specific rows of interest by doing something similar to:
SELECT *
FROM TableA
WHERE column::jsonb ?| array['b', other string elements of interest]
Any suggestions would be greatly appreciated. Thanks for your help!
So I went ahead and gave that a try (it looks like it should work, and it's more or less what I ended up doing), and I figured out what I was trying to do! What I got was this:
UPDATE TableA
SET column = REPLACE(column::TEXT, 'b', 'b1')::JSON
WHERE column::jsonb ?| array['b']
And now that I think about it, I probably don't even need the last where condition because the replace won't affect anything that doesn't have 'b' in it. But that worked for me, and it looks like yours probably should too! Thanks for the help!
I wanted to rename a specific key in a JSON array column.
I tried this and it worked on PostgreSQL 9.4:
UPDATE Your_Table_Name
SET Your_Column_Name = replace(Your_Column_Name::TEXT,'Key_Old_Name','Key_New_Name')::json
WHERE Your_Column_Name::jsonb ? 'Key_Old_Name'
Basically, the solution is to go over the list of json_array_elements and, using a CASE condition on the json value, replace the certain value with the other one. After that, you need to rebuild the new JSON array using array_agg() and to_json(); the description of aggregate functions in PostgreSQL is here.
Possible query can be the following:
-- Sample DDL and JSON data
CREATE TABLE jsontest (data JSON);
INSERT INTO jsontest VALUES ('["a","b","c"]'::JSON);
-- QUERY
WITH result AS (
SELECT to_json( -- creating updated json structure
array_agg( -- create array with new element "b1"
CASE WHEN element::TEXT = '"b"' -- here we process array elements to find "b"
THEN to_json('b1'::TEXT)
ELSE element
END)) as new_json
FROM jsontest,json_array_elements(jsontest.data) as element
)
UPDATE jsontest SET data = result.new_json FROM result;
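To verify the result on the sample row (the expected output is shown as a comment):
SELECT data FROM jsontest;
-- ["a","b1","c"]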