Hi, I am getting an XML parsing error in the following string.
How can I find the error?
<ControllerSetupData>
<MasterSetupData ControllerId="0" ControllerModelId="1" ControllerTypeId="2"
EcolabAccountNumber="040242802" TabId="0" TopicName="test 78" Factors Multiplier="10"
ControllerNumber="78" OZSecondMultiplier="10" InjectionQuantityMultiplier="10"
InstallDate="05/02/2016" Active="false"/>
<DynamicSetupData>
<Data ControllerId="0" ControllerModelId="1" FieldGroupId="6" FieldId="21"
Value="10.225.134.21.1.1" FieldTagValue="" EcolabAccountNumber="040242802"/>
<Data ControllerId="0" ControllerModelId="1" FieldGroupId="6" FieldId="79" Value="78"
FieldTagValue="" EcolabAccountNumber="040242802"/>
</DynamicSetupData>
</ControllerSetupData>
In your XML, at:
<ControllerSetupData>
<MasterSetupData ControllerId="0" ControllerModelId="1" ControllerTypeId="2"
EcolabAccountNumber="040242802" TabId="0" TopicName="test 78" Factors Multiplier="10"
^^^^^^^
the attribute name Factors must be followed by = and a value. The space in Factors Multiplier="10" splits what was probably meant to be a single attribute name (e.g. FactorsMultiplier="10") into a bare attribute Factors with no value, which is not well-formed XML.
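If it helps to locate such errors programmatically, here is a quick sketch using Python's standard xml.etree.ElementTree, which reports the line and column of the first well-formedness error. The snippet is a minimal reconstruction of the problem, not your full document, and the joined attribute name FactorsMultiplier is an assumption about the intent:

```python
import xml.etree.ElementTree as ET

# Minimal reproduction of the problem: a space inside an attribute name.
broken = '<MasterSetupData ControllerId="0" Factors Multiplier="10"/>'

try:
    ET.fromstring(broken)
except ET.ParseError as e:
    line, col = e.position
    print(f"Parse error at line {line}, column {col}: {e}")

# Joining the name into one attribute (assumed intent) parses cleanly:
fixed = '<MasterSetupData ControllerId="0" FactorsMultiplier="10"/>'
elem = ET.fromstring(fixed)
print(elem.get("FactorsMultiplier"))  # -> 10
```

The parser stops at the first violation, so after fixing one attribute, re-run it to surface any remaining errors.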
ERROR: invalid input syntax for type integer: "1;CA-2016-152156;08/11/2016;11/11/2016;Second Class;CG-12520;Claire Gute;Consumer;United States;Henderson;Kentucky;42420;South;FUR-BO-10001798;Furniture;Bookcases;Bush Somerset Collection Bookcase;261"
CONTEXT: COPY orders, line 2, column row_id: "1;CA-2016-152156;08/11/2016;11/11/2016;Second Class;CG-12520;Claire Gute;Consumer;United States;Hend..."
Please help me solve this.
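The error shows the entire semicolon-separated row landing in the first column (row_id), which suggests COPY was run with the wrong delimiter; in PostgreSQL the usual fix is something like COPY orders FROM '...' WITH (FORMAT csv, DELIMITER ';'). A small Python sketch (row data and column name taken from the error message above) illustrates the splitting difference:

```python
import csv
import io

row = '1;CA-2016-152156;08/11/2016;11/11/2016;Second Class;CG-12520'

# With the default comma delimiter the whole line stays one field,
# so "1;CA-2016-..." would be handed to the integer row_id column.
comma_fields = next(csv.reader(io.StringIO(row)))
print(len(comma_fields))   # -> 1

# With delimiter=';' the first field is a clean integer.
semi_fields = next(csv.reader(io.StringIO(row), delimiter=';'))
print(semi_fields[0])      # -> 1
```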
I am querying data from an internal stage file in Snowflake, as below.
I get the following error:
SQL compilation error: Format argument for function 'TO_TIMESTAMP_TZ' needs to be a string
Any idea what could be going wrong here?
Thanks
SELECT
TO_VARCHAR(stg.$10),
TO_VARCHAR(stg.$45),
TO_NUMBER(stg.$1,20),
TO_NUMBER(stg.$18,20),
TO_VARCHAR(stg.$8),
TO_VARCHAR(stg.$42),
TO_NUMBER(stg.$19,20),
TO_VARCHAR(stg.$11),
TO_VARCHAR(stg.$16),
TO_VARCHAR(stg.$49),
TO_TIMESTAMP_TZ(stg.$47::STRING,'21'),
TO_TIMESTAMP_TZ(stg.$50::STRING,10),
TO_VARCHAR(stg.$48),
TO_NUMBER(stg.$36,19,6),
TO_NUMBER(stg.$27,19,6),TO_VARCHAR(stg.$12),
TO_TIMESTAMP_TZ(stg.$13::STRING,10),
TO_TIMESTAMP_TZ(stg.$5::STRING,10),
TO_NUMBER(stg.$22,20),
TO_NUMBER(stg.$21,19,6),
TO_NUMBER(stg.$20,19,6),
TO_TIMESTAMP_TZ(stg.$2::STRING,10),
TO_NUMBER(stg.$39,19,6),TO_VARCHAR(stg.$35),
TO_VARCHAR(stg.$4),
TO_NUMBER(stg.$40,19,6),
TO_NUMBER(stg.$32,19,6),
TO_NUMBER(stg.$33,19,6),
TO_VARCHAR(stg.$34),
TO_VARCHAR(stg.$37),
TO_VARCHAR(stg.$38),
TO_VARCHAR(stg.$17),
TO_VARCHAR(stg.$23),
TO_NUMBER(stg.$14,19,6),
TO_NUMBER(stg.$28,19,6),
TO_NUMBER(stg.$29,19,6),
TO_NUMBER(stg.$41,6,2),
TO_VARCHAR(stg.$31),
TO_NUMBER(stg.$30,19,6),
TO_VARCHAR(stg.$24),
TO_VARCHAR(stg.$25),
TO_NUMBER(stg.$26,19,6),
TO_VARCHAR(stg.$9),
TO_TIMESTAMP_TZ(stg.$3::STRING,10),
TO_VARCHAR(stg.$15),
TO_VARCHAR(stg.$44),
TO_VARCHAR(stg.$43),
TO_VARCHAR(stg.$53),
TO_TIMESTAMP_TZ(stg.$51::STRING,10),
TO_TIMESTAMP_TZ(stg.$50::STRING,10),
TO_VARCHAR(stg.$52),
TO_TIMESTAMP_TZ(stg.$7::STRING,10),
TO_TIMESTAMP_TZ(stg.$6::STRING,10),
'T_RPDB_POLICY_1_0_0',
TO_TIMESTAMP_TZ(CURRENT_TIMESTAMP::STRING)
FROM '#INTERNAL_POLICY_STAGE/T_RPDB_POLICY.CSV.gz' (file_format => '"JVCO"."STAGING".CSV') stg;
Try
select to_timestamp_tz(CURRENT_TIMESTAMP::string)
The second argument is the format string (varchar type), and you are passing a number.
select to_timestamp_tz('04/05/2013 01:02:03', 'mm/dd/yyyy hh24:mi:ss');
https://docs.snowflake.com/en/sql-reference/functions/to_timestamp.html
Hi all, ijson newbie here. I have a very large .json file (168 GB) and I want to get all possible keys, but some values in the file are written as NaN. ijson creates a generator and outputs dictionaries; when it reaches one specific item, my code throws an error. How can I get a string instead of a dictionary for the value? I tried parser = ijson.items(input_file, '', multiple_values=True, map_type=str), but it didn't help.
import ijson
import json

def parse_json(json_filename):
    with open('max_data_error.txt', 'w') as outfile:
        with open(json_filename, 'r') as input_file:
            # outfile.write('[ '
            parser = ijson.items(input_file, '', multiple_values=True)
            cont = 0
            max_keys_list = list()
            for value in parser:
                for i in json.loads(json.dumps(value, ensure_ascii=False, default=str)):
                    if i not in max_keys_list:
                        max_keys_list.append(i)
                print(value)
            print(max_keys_list)
            for keys_item in max_keys_list:
                outfile.write(keys_item + '\n')

if __name__ == '__main__':
    parse_json('./email/emailrecords.bson.json')
Traceback (most recent call last):
File "panda read.py", line 29, in <module>
parse_json('./email/emailrecords.bson.json')
File "panda read.py", line 17, in parse_json
for value in parser:
ijson.common.IncompleteJSONError: lexical error: invalid char in json text.
litecashwire.com","lastname":NaN,"firstname":"Mia","zip":"87
(right here) ------^
Your file is not valid JSON (NaN is not a valid JSON value); therefore any JSON parsing library will complain about this, one way or another, unless it has an extension to handle this non-standard content.
The ijson FAQ, found in the project description, has a question about invalid UTF-8 characters and how to deal with them. The same answers apply here, so I would suggest you go and try one of those.
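As one hedged workaround, assuming (as in your traceback) that the NaN tokens appear only as bare values and never inside quoted strings, you can pre-clean the input before parsing. Note that Python's built-in json module happens to tolerate NaN, while strict parsers such as ijson's yajl backend do not:

```python
import json
import math
import re

line = '{"lastname":NaN,"firstname":"Mia","zip":"87544"}'

# The stdlib json module accepts the non-standard NaN token...
record = json.loads(line)
assert math.isnan(record["lastname"])

# ...but strict parsers reject it, so replace bare NaN with null first.
# CAUTION: this regex assumes NaN never occurs inside string values.
cleaned = re.sub(r'\bNaN\b', 'null', line)
record = json.loads(cleaned)
print(record["lastname"])  # -> None
```

For a 168 GB file you would apply the substitution streamingly (chunk by chunk or line by line) rather than loading the whole file.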
I migrated a column type from HSTORE to JSONB and am using this snippet of code...
from sqlalchemy.dialects.postgresql import ARRAY, JSONB
if employment_type:
base = base.filter(Candidate.bio["employment_type"].cast(ARRAY).contains(employment_type))
and am getting this error...
127.0.0.1 - - [28/Mar/2016 12:25:13] "GET /candidate_filter/?employment_type_3=true HTTP/1.1" 500 -
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/flask/app.py", line 1836, in __call__
return self.wsgi_app(environ, start_response)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1820, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/Library/Python/2.7/site-packages/flask/app.py", line 1403, in handle_exception
reraise(exc_type, exc_value, tb)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/Library/Python/2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/Library/Python/2.7/site-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/surajkapoor/Desktop/lhv-talenttracker/app/views.py", line 660, in investor_filter
base = base.filter(Candidate.bio["employment_type"].cast(ARRAY).contains(employment_type))
File "/Library/Python/2.7/site-packages/sqlalchemy/dialects/postgresql/json.py", line 93, in cast
return self.astext.cast(type_)
File "/Library/Python/2.7/site-packages/sqlalchemy/dialects/postgresql/json.py", line 95, in cast
return sql.cast(self, type_)
File "<string>", line 2, in cast
File "/Library/Python/2.7/site-packages/sqlalchemy/sql/elements.py", line 2314, in __init__
self.type = type_api.to_instance(type_)
File "/Library/Python/2.7/site-packages/sqlalchemy/sql/type_api.py", line 1142, in to_instance
return typeobj(*arg, **kw)
TypeError: __init__() takes at least 2 arguments (1 given)
Candidate.bio["employment_type"] is an array of integers, and I'm simply trying to query all the rows that contain a specific integer.
Also, .cast() works perfectly on the same column when casting to Integer...
if internship:
base = base.filter(Candidate.bio["internship"].cast(Integer) == 1)
SQLAlchemy is probably having difficulty constructing the WHERE clause because it can't figure out what type bio->'employment_type' is.
If the contains method were called on a String object it would generate a LIKE clause, but for JSONB or ARRAY it needs to generate the @> (containment) operator.
To give SQLAlchemy the necessary hints, use explicit casting everywhere, i.e. write your query like:
from sqlalchemy import cast
if employment_type:
casted_field = Candidate.bio['employment_type'].cast(JSONB)
casted_values = cast(employment_type, JSONB)
stmt = base.filter(casted_field.contains(casted_values))
In my example, I have a JSONB column named bio with the following data:
{"employment_type": [1, 2, 3]}
Edit: Casting to JSONB works:
>>> from sqlalchemy.dialects.postgresql import JSONB
>>> employment_type = 2
>>> query = (
... session.query(Candidate)
... .filter(Candidate.bio['employment_type'].cast(JSONB).contains(employment_type)))
>>> query.one().bio
{"employment_type": [1, 2, 3]}
Original answer:
I couldn't get .contains to work on Candidate.bio['employment_type'], but we can do the equivalent of the following SQL:
SELECT * FROM candidate WHERE candidate.bio @> '{"employment_type": [2]}';
like this:
>>> employment_type = 2
>>> test = {'employment_type': [employment_type]}
>>> query = (
... session.query(Candidate)
... .filter(Candidate.bio.contains(test)))
>>> query.one().bio
{"employment_type": [1, 2, 3]}
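For intuition, SQLAlchemy's .contains on a JSONB column compiles to Postgres's @> containment operator, and for arrays, containment means every right-hand element appears in the left-hand array. A plain-Python analogy (this is an illustration, not SQLAlchemy code):

```python
def jsonb_array_contains(haystack, needle):
    """Rough Python analogue of Postgres's jsonb @> for arrays of scalars."""
    return all(item in haystack for item in needle)

bio = {"employment_type": [1, 2, 3]}

print(jsonb_array_contains(bio["employment_type"], [2]))   # -> True
print(jsonb_array_contains(bio["employment_type"], [5]))   # -> False
```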
I am trying to use the Perl AI::ExpertSystem::Advanced module, and I want to use signs in the array of initial facts. The documentation of this module shows this example:
my $ai = AI::ExpertSystem::Advanced->new(
viewer_class => 'terminal',
knowledge_db => $yaml_kdb,
initial_facts => ['I', ['F', '-'], ['G', '+']);
but there is something wrong (a syntax error). I think one ] is missing at the end of the code.
First question: what is the correct form? When I run the example, my terminal shows a lot of errors.
Second question: can I use a file to store the initial facts?
Thanks for your answers.
Error log:
When I use the example from the documentation:
syntax error at mix.pl line 24, near "])"
Global symbol "$ai" requires explicit package name at mix.pl line 26.
Missing right curly or square bracket at mix.pl line 27, at end of line
Execution of mix.pl aborted due to compilation errors.
When I put the ] in its correct place at the end of the expression: initial_facts => ['I', ['F', '-'], ['G', '+']]);
Attribute (initial_facts) does not pass the type constraint because: Validation failed for 'ArrayRef[Str]' with value ARRAY(0x3268038) at C:/Perl64/lib/Moose/Meta/Attribute.pm line 1274.
Moose::Meta::Attribute::verify_against_type_constraint('Moose::Meta::Attribute=HASH(0x3111108)', 'ARRAY(0x3268038)', 'instance', 'AI::ExpertSystem::Advanced=HASH(0x30ef068)') called at C:/Perl64/lib/Moose/Meta/Attribute.pm line 1261
Moose::Meta::Attribute::_coerce_and_verify('Moose::Meta::Attribute=HASH(0x3111108)', 'ARRAY(0x3268038)', 'AI::ExpertSystem::Advanced=HASH(0x30ef068)') called at C:/Perl64/lib/Moose/Meta/Attribute.pm line 531
Moose::Meta::Attribute::initialize_instance_slot('Moose::Meta::Attribute=HASH(0x3111108)', 'Moose::Meta::Instance=HASH(0x32673d8)', 'AI::ExpertSystem::Advanced=HASH(0x30ef068)', 'HASH(0x3118298)') called at C:/Perl64/lib/Class/MOP/Class.pm line 525
Class::MOP::Class::_construct_instance('Moose::Meta::Class=HASH(0x2eb2418)', 'HASH(0x3118298)') called at C:/Perl64/lib/Class/MOP/Class.pm line 498
Class::MOP::Class::new_object('Moose::Meta::Class=HASH(0x2eb2418)', 'HASH(0x3118298)') called at C:/Perl64/lib/Moose/Meta/Class.pm line 274
Moose::Meta::Class::new_object('Moose::Meta::Class=HASH(0x2eb2418)', 'HASH(0x3118298)') called at C:/Perl64/lib/Moose/Object.pm line 28
Moose::Object::new('AI::ExpertSystem::Advanced', 'viewer_class', 'terminal', 'knowledge_db', 'AI::ExpertSystem::Advanced::KnowledgeDB::YAML=HASH(0x3118478)', 'verbose', 1, 'initial_facts', 'ARRAY(0x3268038)') called at mix.pl line 20
This is a bug in the documentation (and possibly in the module itself).
To set the object up with negative initial facts you need to create the dictionary object first.
my $initial_facts_dict = AI::ExpertSystem::Advanced::Dictionary->new(
stack => [ 'I', ['F', '-'], ['G', '+'] ]);
my $ai = AI::ExpertSystem::Advanced->new(
viewer_class => 'terminal',
knowledge_db => $yaml_kdb,
initial_facts_dict => $initial_facts_dict,
);