How can one use the 'as' keyword in gremlin-python?

When attempting to translate a query I wrote and tested in the Gremlin CLI to gremlin-python, I'm encountering unexpected token 'as' errors on my .as('foo') expressions. How can one use the Gremlin as keyword when using gremlin-python?

When a Gremlin step conflicts with a Python reserved word, the step gets suffixed with an underscore, so for as() you would instead use as_(). A full listing of all the steps can be found here, but currently they are:
Steps - and_(), as_(), from_(), is_(), in_(), not_(), or_(), with_()
Tokens - Scope.global_
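The reason for the suffix can be checked directly with Python's keyword module; a minimal sketch (the commented traversal is illustrative, assuming a standard gremlin-python traversal source g):

```python
import keyword

# Every mangled step name corresponds to a Python reserved word,
# which is why gremlin-python needs the trailing underscore.
steps = ["and", "as", "from", "is", "in", "not", "or", "with"]
assert all(keyword.iskeyword(s) for s in steps)

# Gremlin CLI:    g.V().as('foo').out().select('foo')
# gremlin-python: g.V().as_('foo').out().select('foo')
```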

Related

What is the difference between macro expansion and parameter dereferencing in SOLR?

I'm using Solr 8.11.
I've noticed in one of my ReRankQueries with LTR, that on an efi, it matters if I use:
efi.some_name=${queryParam} vs efi.some_name=$queryParam.
The single resource I found on the web was a blog post from 2014, which calls the ${} notation macro expansion.
From what I experimented with, it seems that ${queryParam} is evaluated and then pasted there as-is; for example, if queryParam="multi worded string", then efi.some_name=multi worded string, and this requires the value to be enclosed in ".
For $queryParam, it seems that the query parser knows to associate the full value with efi.some_name.
What is the difference between these two?
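No answer is recorded here, but the behavior described is consistent with plain textual substitution. A small Python sketch of the substitution semantics (an illustration only, not Solr code; the parameter names are the question's own):

```python
# Macro expansion: ${queryParam} is replaced textually *before* the
# parameter string is parsed, so a multi-word value splits on whitespace.
params = {"queryParam": "multi worded string"}

raw = "efi.some_name=${queryParam}"
expanded = raw.replace("${queryParam}", params["queryParam"])
# expanded == "efi.some_name=multi worded string"

# A whitespace-splitting parser now sees only the first word,
# which is why the expanded form needs surrounding quotes:
first_token = expanded.split()[0]
# first_token == "efi.some_name=multi"

# Dereferencing ($queryParam) resolves the name to its whole value
# after parsing, so the full string stays attached to efi.some_name.
```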

Combining mule expression language and literals

I am building an application using Mule 4.2.2 in which I have to retrieve data from MongoDB. For this I am using the MongoDB connector version 6.3.0. I am using the "Find Documents" operation as shown in the image below, where you can see I have mentioned the query as
{"eventCode": $[vars.eventCode]} where eventCode is the field on which I am querying and eventCode is the variable where I am storing the incoming eventCode.
When I run the mule application I see an error in the logs that says -
org.bson.json.JsonParseException: Invalid JSON input. Position: 15. Character: '#'.
I thought that I could combine literals and Mule expressions using #[], but that doesn't seem to work. Any pointers on how to solve this?
If it is an expression, then you cannot use #[...] inside it. Just write the expression:
{"eventCode": vars.eventCode}
If it is not an expression (is the fx button clicked?), you might need to enclose the entire expression in #[...]:
#[{"eventCode": vars.eventCode}]

Snowflake and Regular Expressions - issue when implementing known good expression in SF

I'm looking for some assistance in debugging a REGEXP_REPLACE() statement.
I have been using an online regular expressions editor to build expressions, and then the SF regexp_* functions to implement them. I've attempted to remain consistent with the SF regex implementation, but I'm seeing an inconsistency in the returned results that I'm hoping someone can explain :)
My intent is to replace commas within the text (excluding commas within double-quoted text) with a new delimiter (#^#).
Sample text string:
"Foreign Corporate Name Registration","99999","Valuation Research",,"Active Name",02/09/2020,"02/09/2020","NEVADA","UNITED STATES",,,"123 SOME STREET",,"MILWAUKEE","WI","53202","UNITED STATES","123 SOME STREET",,"MILWAUKEE","WI","53202","UNITED STATES",,,,,,,,,,,,
RegEx command and Substitution (working in regex101.com):
([("].*?["])*?(,)
\1#^#
regex101.com Result:
"Foreign Corporate Name Registration"#^#"99999"#^#"Valuation Research"#^##^#"Active Name"#^#02/09/2020#^#"02/09/2020"#^#"NEVADA"#^#"UNITED STATES"#^##^##^#"123 SOME STREET"#^##^#"MILWAUKEE"#^#"WI"#^#"53202"#^#"UNITED STATES"#^#"123 SOME STREET"#^##^#"MILWAUKEE"#^#"WI"#^#"53202"#^#"UNITED STATES"#^##^##^##^##^##^##^##^##^##^##^##^#
When I try to implement this same logic in SF using REGEXP_REPLACE(), I use the following statement:
SELECT TOP 500
A.C1
,REGEXP_REPLACE((A."C1"),'([("].*?["])*?(,)','\\1#^#') AS BASE
FROM
"<Warehouse>"."<database>"."<table>" AS A
This statement returns the result for BASE:
"Foreign Corporate Name Registration","99999","Valuation Research",,"Active Name",02/09/2020,"02/09/2020","NEVADA","UNITED STATES",,,"123 SOME STREET",,"MILWAUKEE","WI","53202","UNITED STATES","123 SOME STREET",,"MILWAUKEE","WI","53202","UNITED STATES"#^##^##^##^##^##^##^##^##^##^##^##^#
As you can see when comparing the results, the SF result set is only replacing commas at the tail-end of the text.
Can anyone tell me why the results between regex101.com and SF are returning different results with the same statement? Is my expression non-compliant with the SF implementation of RegEx - and if yes, can you tell me why?
Many many thanks for your time and effort reading this far!
Happy Wednesday,
Casey.
Lazy matching with .*? is a PCRE-style feature that Snowflake's POSIX ERE regex engine does not support. To see this, in regex101.com change your "flavor" to anything other than PCRE (PHP); you will see that your ([("].*?["])*?(,) regex no longer achieves what you are expecting.
I believe that this will work for your purposes:
REGEXP_REPLACE(A.C1,'("[^"]*")*,','\\1#^#')
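The replacement pattern can be sanity-checked outside Snowflake; a quick sketch using Python's re module (an assumption for illustration only — Python's flavor is not identical to Snowflake's POSIX ERE, but this pattern avoids lazy quantifiers entirely, so both engines treat it the same way):

```python
import re

# A shortened version of the sample row from the question.
text = '"Foreign Corporate Name Registration","99999",,"Active Name",'

# ("[^"]*")* captures an optional quoted field (empty fields leave the
# group unmatched); the trailing comma is consumed and replaced.
result = re.sub(r'("[^"]*")*,', r'\1#^#', text)
# Every comma becomes #^#, with quoted fields preserved via \1.
```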

Google App Engine JDO query filter error with multiple String methods [duplicate]

I am working on a GAE Django project where I have to implement the search functionality. I have written a query and it fetches the data according to the search keyword.
portfolio = Portfolio.all().filter('full_name >=',key).filter('full_name <',unicode(key) + u'\ufffd')
The issue with this query is, that it is case sensitive.
Is there any way through which I can make it to work, without depending upon the case of the keyword?
Please suggest.
Thanks in advance.
You need to store normalized versions of your data at write time, then use the same normalization to search.
Store the data either all uppercase or all lowercase, optionally removing punctuation and changing all whitespace to a single space and maybe converting non-ASCII characters to some reasonable ASCII representation (which is, of course, trickier than it sounds.)
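A minimal sketch of such a write-time normalizer (the helper name and the exact rules are illustrative assumptions; adapt them to your data):

```python
import re

def normalize(text):
    # Strip punctuation, collapse runs of whitespace, and lowercase,
    # so "John  Q. Public" and "john q public" compare equal.
    text = re.sub(r'[^\w\s]', '', text)
    text = re.sub(r'\s+', ' ', text).strip()
    return text.lower()

# Store normalize(full_name) in a separate property at write time,
# then run the same prefix query against that property with a
# normalized search key.
```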
An alternative solution to this problem - where the datasets are small - is to filter the results in Python after you have fetched them from the datastore:
for each_item in list_of_results:
    # lowercase both sides so the comparison is case-insensitive
    if your_search_term.lower() in each_item.name.lower():
        # your results action

using mongo-c-driver can not use regular expressions containing forward slash (/)

I am trying to use the C-API for MongoDB. I want to find records with a name matching a regular expression containing slashes (/). If I run the mongo command line I get 40 results for the query:
db.test_006.find({'name':/^\/TEST::TOP:NEWSTAT\/_data_[0-9]*/})
which is correct. When I try to code this in C I cannot get it to match.
using:
bson_append_regex( &query, "name","/TEST::TOP:NEWSTAT/" , "" )
I find all the records and more.
All combinations of \\/ or even [/] or [\\/] find no records at all. I also tried \\x2F.
Is this just broken? Am I missing something? There is plenty of information about compiling regular expressions for use with other languages (Python, Java, etc...)
Thanks,
-Josh
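No answer is recorded for this one, but one detail worth noting: in the shell syntax /pattern/, the slashes are delimiters, not part of the pattern, so the string passed to bson_append_regex should contain the inner slashes unescaped. A quick check of the anchored pattern in Python's re (an illustration only; '/' is not a regex metacharacter in either engine):

```python
import re

# The pattern from the shell query, minus the /.../ delimiters;
# '\/' in the shell form is just an escaped delimiter, so plain '/' here.
pattern = r'^/TEST::TOP:NEWSTAT/_data_[0-9]*'

assert re.match(pattern, '/TEST::TOP:NEWSTAT/_data_42')
assert not re.match(pattern, '/TEST::TOP:OTHER/_data_42')
```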
