I'm trying to add a vertex that will be linked to another vertex, with a conditional property value on the edge between them.
So far this is what I came up with (it runs with no errors, but I'm not able to get any results):
g.V().has('label', 'product')
.has('id', 'product1')
.outE('has_image')
.has('primary', true)
.inV()
.choose(fold().coalesce(unfold().values('public_url'), constant('x')).is(neq('x')))
.option(true,
addV('image')
.property('description', '')
.property('created_at', '2019-10-31 09:08:15')
.property('updated_at', '2019-10-31 09:08:15')
.property('pk', 'f920a210-fbbd-11e9-bed6-b9a9c92913ef')
.property('path', 'product_images/87wfMABXBodgXL1O4aIf6BcMMG47ueUztjNCkGxP.png')
.V()
.hasLabel('product')
.has('id', 'product1')
.addE('has_image')
.property('primary', false))
.option(false,
addV('image')
.property('description', '')
.property('created_at', '2019-10-31 09:08:15')
.property('updated_at', '2019-10-31 09:08:15')
.property('pk', 'f920a930-fbbd-11e9-b444-8bccc55453b9')
.property('path', 'product_images/87wfMABXBodgXL1O4aIf6BcMMG47ueUztjNCkGxP.png')
.V()
.hasLabel('product')
.has('id', 'product1')
.addE('has_image')
.property('primary', true))
What I'm doing here is trying to set the primary property on the newly added edge between the image vertex and the product vertex, depending on whether the product is already connected to an image whose edge has primary set to true.
If a product already has an image with an edge property primary:true, then the newly added image linked to the product should get an edge with the property primary:false.
Seed for the Azure Cosmos DB graph:
//add product vertex
g.addV('product').property(id, 'product1').property('pk', 'product1')
//add image vertex
g.addV('image').property(id, 'image1').property('public_url', 'url_1').property('pk', 'image1');
//link products to images
g.V('product1').addE('has_image').to(V('image1')).property('primary', true)
I'm surprised that your traversal runs without errors, as I hit several syntax problems around your use of option(), plus some issues with your mixing of T.id and the property key "id" (the latter might be part of why this didn't work as-is, but I'm not completely sure). Of course, I didn't test on CosmosDB, so perhaps they took such liberties with the Gremlin language.
Anyway, assuming I have followed your explanation correctly, I think there is a way to vastly simplify your Gremlin. I think you just need this:
g.V('product1').as('p').
addV('image').
property('description', '').
property('created_at', '2019-10-31 09:08:15').
property('updated_at', '2019-10-31 09:08:15').
property('pk', 'f920a210-fbbd-11e9-bed6-b9a9c92913ef').
property('path', 'product_images/87wfMABXBodgXL1O4aIf6BcMMG47ueUztjNCkGxP.png').
addE('has_image').
from('p').
property('primary', choose(select('p').outE('has_image').values('primary').is(true),
constant(false), constant(true)))
Now, I'd say this is the most idiomatic approach for Gremlin. As I've not tested it on CosmosDB, I can't say whether it will work for you, but perhaps the console session below can show you that it satisfies your expectations:
gremlin> g.V('product1').as('p').
......1> addV('image').
......2> property('description', '').
......3> property('created_at', '2019-10-31 09:08:15').
......4> property('updated_at', '2019-10-31 09:08:15').
......5> property('pk', 'f920a210-fbbd-11e9-bed6-b9a9c92913ef').
......6> property('path', 'product_images/87wfMABXBodgXL1O4aIf6BcMMG47ueUztjNCkGxP.png').
......7> addE('has_image').
......8> from('p').
......9> property('primary', choose(select('p').outE('has_image').values('primary').is(true), constant(false), constant(true)))
==>e[31][product1-has_image->25]
gremlin> g.E().elementMap()
==>[id:31,label:has_image,IN:[id:25,label:image],OUT:[id:product1,label:product],primary:true]
gremlin> g.V('product1').as('p').
......1> addV('image').
......2> property('description', '').
......3> property('created_at', '2019-10-31 09:08:15').
......4> property('updated_at', '2019-10-31 09:08:15').
......5> property('pk', 'f920a210-fbbd-11e9-bed6-b9a9c92913ef').
......6> property('path', 'product_images/87wfMABXBodgXL1O4aIf6BcMMG47ueUztjNCkGxP.png').
......7> addE('has_image').
......8> from('p').
......9> property('primary', choose(select('p').outE('has_image').values('primary').is(true), constant(false), constant(true)))
==>e[38][product1-has_image->32]
gremlin> g.E().elementMap()
==>[id:38,label:has_image,IN:[id:32,label:image],OUT:[id:product1,label:product],primary:false]
==>[id:31,label:has_image,IN:[id:25,label:image],OUT:[id:product1,label:product],primary:true]
gremlin> g.V('product1').as('p').
......1> addV('image').
......2> property('description', '').
......3> property('created_at', '2019-10-31 09:08:15').
......4> property('updated_at', '2019-10-31 09:08:15').
......5> property('pk', 'f920a210-fbbd-11e9-bed6-b9a9c92913ef').
......6> property('path', 'product_images/87wfMABXBodgXL1O4aIf6BcMMG47ueUztjNCkGxP.png').
......7> addE('has_image').
......8> from('p').
......9> property('primary', choose(select('p').outE('has_image').values('primary').is(true), constant(false), constant(true)))
==>e[45][product1-has_image->39]
gremlin> g.E().elementMap()
==>[id:38,label:has_image,IN:[id:32,label:image],OUT:[id:product1,label:product],primary:false]
==>[id:45,label:has_image,IN:[id:39,label:image],OUT:[id:product1,label:product],primary:false]
==>[id:31,label:has_image,IN:[id:25,label:image],OUT:[id:product1,label:product],primary:true]
If that looks right but doesn't work properly in CosmosDB, it is because of line 9, which uses a Traversal as an argument to property(), something that isn't yet supported in CosmosDB. The remedy is to simply invert that line a bit:
g.V('product1').as('p').
addV('image').
property('description', '').
property('created_at', '2019-10-31 09:08:15').
property('updated_at', '2019-10-31 09:08:15').
property('pk', 'f920a210-fbbd-11e9-bed6-b9a9c92913ef').
property('path', 'product_images/87wfMABXBodgXL1O4aIf6BcMMG47ueUztjNCkGxP.png').
addE('has_image').
from('p').
choose(select('p').outE('has_image').values('primary').is(true),
property('primary', false),
property('primary', true))
I find this approach only slightly less readable, as the property() doesn't align with the addE(), but it's not a terrible alternative.
Suppose I want to query the Neptune graph with "group-by" on one property (or more), and I want to get back the list of vertices too.
Let's say, I want to group-by on ("city", "age") and want to get the list of vertices too:
[
{"city": "SFO", "age": 29, "persons": [v[1], ...]},
{"city": "SFO", "age": 30, "persons": [v[10], v[13], ...]},
...
]
Or, get back the vertex with its properties (as valueMap):
[
{"city": "SFO", "age": 29, "persons": [[id:1,label:person,name:[marko],age:[29],city:[SFO]], ...]},
...
]
AFAIK, Neptune supports neither lambdas nor variable assignments. Is there a way to do this with one traversal and no lambdas?
Update: I'm able to get the vertices, but without their properties (i.e. not as valueMap).
Query:
g.V().hasLabel("person").group().
by(values("city", "age").fold()).
by(fold().
match(__.as("p").unfold().values("city").as("city"),
__.as("p").unfold().values("age").as("age"),
__.as("p").fold().unfold().as("persons")).
select("city", "age", "persons")).
select(values).
next()
Output:
==>[city:SFO,age:29,persons:[v[1]]]
==>[city:SFO,age:27,persons:[v[2],v[23]]]
...
If I understand it correctly, then ...
g.V().hasLabel("person").
group().
by(values("city", "age").fold())
... or ...
g.V().hasLabel("person").
group().
by(valueMap("city", "age").by(unfold()))
... already gives you what you need; it's just a matter of reformatting the result. To merge the maps in the keys and values together, you can do something like this:
g.V().hasLabel("person").
group().
by(valueMap("city", "age").by(unfold())).
unfold().
map(union(select(keys),
project("persons").
by(values)).
unfold().
group().
by(keys).
by(select(values)))
Executing this on the modern toy graph (city replaced with name) will yield the following result:
gremlin> g = TinkerFactory.createModern().traversal()
==>graphtraversalsource[tinkergraph[vertices:6 edges:6], standard]
gremlin> g.V().hasLabel("person").
......1> group().
......2> by(valueMap("name", "age").by(unfold())).
......3> unfold().
......4> map(union(select(keys),
......5> project("persons").
......6> by(values)).
......7> unfold().
......8> group().
......9> by(keys).
.....10> by(select(values)))
==>[persons:[v[2]],name:vadas,age:27]
==>[persons:[v[4]],name:josh,age:32]
==>[persons:[v[1]],name:marko,age:29]
==>[persons:[v[6]],name:peter,age:35]
I've been trying to figure out a way to get a list of all the coins that Coinbase has listed (not necessarily for trade), but can't figure it out. In the early days it was easy: you could just log in and see the list of 4 basic coins that were supported (and could hard-code those values in a program and/or script).
But now they list many coins, some of which, as I understand it, are not available to actually trade but are listed for educational purposes (as stated on their site when looking at such coins).
I was wondering if anyone has figured out a way to get a list of those coins (all supported and simply listed), perhaps with a tag indicating which are actually tradable.
I looked at the API and the REST API (using a simple GET request over HTTPS or using cURL for testing) has the following endpoints:
curl https://api.coinbase.com/v2/currencies - This lists all the Fiat currencies.
and:
curl https://api.pro.coinbase.com/products - This lists all the supported trading pairs (which is not what I'm looking for....)
Any ideas, short of logging in and parsing the HTML (which could break, since the site can be reformatted at any time)?
Any help would be greatly appreciated!
Perhaps not really what you asked, but you could also use https://api.pro.coinbase.com/currencies:
import requests
import json
uri = 'https://api.pro.coinbase.com/currencies'
response = requests.get(uri).json()
for currency in response:
    if currency['details']['type'] == 'crypto':
        print(currency['id'])
This will return the coins available for trading.
I'm not sure if this is the response that you want or not. I first tried the first URL you listed, but its response didn't look like it included the available coins. I then tried the URL below instead, and its response does list a lot of currencies. You can parse it by loading it as JSON and looking for the fields that you want.
Also, I didn't see a language posted with your question, so I'm using Python 3 below. If you're a Linux person you can also just use a curl GET from the command line. The language doesn't matter; you just need to make a GET request to that URL and parse the response however you see fit.
To get one particular field you can use a line like response['data']['rates']['BTC'] to extract '0.00029200' out of the response/JSON string.
>>> r = requests.get("https://api.coinbase.com/v2/exchange-rates")
>>> response = json.loads(r.text)
>>> pprint.pprint(response)
{'data': {'currency': 'USD',
'rates': {'AED': '3.67',
'AFN': '75.22',
'ALL': '108.84',
'AMD': '487.59',
'ANG': '1.79',
'AOA': '311.37',
'ARS': '37.32',
'AUD': '1.38',
'AWG': '1.80',
'AZN': '1.70',
'BAM': '1.71',
'BAT': '9.00418244',
'BBD': '2.00',
'BCH': '0.00879160',
'BDT': '83.80',
'BGN': '1.71',
'BHD': '0.377',
'BIF': '1824',
'BMD': '1.00',
'BND': '1.58',
'BOB': '6.90',
'BRL': '3.65',
'BSD': '1.00',
'BTC': '0.00029200',
'BTN': '71.11',
'BWP': '10.41',
'BYN': '2.15',
'BYR': '21495',
'BZD': '2.02',
'CAD': '1.31',
'CDF': '1631.00',
'CHF': '0.99',
'CLF': '0.0242',
'CLP': '656',
'CNH': '6.71',
'CNY': '6.70',
'COP': '3174.95',
'CRC': '608.98',
'CUC': '1.00',
'CVE': '96.90',
'CZK': '22.50',
'DJF': '178',
'DKK': '6.52',
'DOP': '50.44',
'DZD': '118.30',
'EEK': '14.61',
'EGP': '17.68',
'ERN': '15.00',
'ETB': '28.52',
'ETC': '0.25542784',
'ETH': '0.00944599',
'EUR': '0.87',
'FJD': '2.10',
'FKP': '0.76',
'GBP': '0.76',
'GEL': '2.66',
'GGP': '0.76',
'GHS': '4.98',
'GIP': '0.76',
'GMD': '49.52',
'GNF': '9210',
'GTQ': '7.74',
'GYD': '208.55',
'HKD': '7.85',
'HNL': '24.49',
'HRK': '6.49',
'HTG': '78.37',
'HUF': '276',
'IDR': '13940.00',
'ILS': '3.63',
'IMP': '0.76',
'INR': '70.93',
'IQD': '1190.000',
'ISK': '120',
'JEP': '0.76',
'JMD': '132.72',
'JOD': '0.710',
'JPY': '109',
'KES': '100.60',
'KGS': '68.70',
'KHR': '4015.00',
'KMF': '429',
'KRW': '1114',
'KWD': '0.303',
'KYD': '0.83',
'KZT': '380.63',
'LAK': '8559.50',
'LBP': '1511.15',
'LKR': '178.40',
'LRD': '160.75',
'LSL': '13.53',
'LTC': '0.03208728',
'LTL': '3.22',
'LVL': '0.66',
'LYD': '1.385',
'MAD': '9.53',
'MDL': '17.05',
'MGA': '3465.0',
'MKD': '53.78',
'MMK': '1519.04',
'MNT': '2453.75',
'MOP': '8.08',
'MRO': '357.0',
'MTL': '0.68',
'MUR': '34.23',
'MVR': '15.49',
'MWK': '728.47',
'MXN': '19.14',
'MYR': '4.10',
'MZN': '61.87',
'NAD': '13.53',
'NGN': '361.50',
'NIO': '32.60',
'NOK': '8.43',
'NPR': '113.78',
'NZD': '1.45',
'OMR': '0.385',
'PAB': '1.00',
'PEN': '3.33',
'PGK': '3.36',
'PHP': '52.13',
'PKR': '139.30',
'PLN': '3.73',
'PYG': '6084',
'QAR': '3.64',
'RON': '4.14',
'RSD': '103.53',
'RUB': '65.47',
'RWF': '886',
'SAR': '3.75',
'SBD': '8.06',
'SCR': '13.67',
'SEK': '9.05',
'SGD': '1.35',
'SHP': '0.76',
'SLL': '8390.00',
'SOS': '582.00',
'SRD': '7.46',
'SSP': '130.26',
'STD': '21050.60',
'SVC': '8.75',
'SZL': '13.52',
'THB': '31.23',
'TJS': '9.43',
'TMT': '3.50',
'TND': '2.968',
'TOP': '2.26',
'TRY': '5.18',
'TTD': '6.77',
'TWD': '30.72',
'TZS': '2317.00',
'UAH': '27.70',
'UGX': '3670',
'USD': '1.00',
'USDC': '1.000000',
'UYU': '32.58',
'UZS': '8380.00',
'VEF': '248487.64',
'VND': '23287',
'VUV': '111',
'WST': '2.60',
'XAF': '573',
'XAG': '0',
'XAU': '0',
'XCD': '2.70',
'XDR': '1',
'XOF': '573',
'XPD': '0',
'XPF': '104',
'XPT': '0',
'YER': '250.30',
'ZAR': '13.27',
'ZEC': '0.02056344',
'ZMK': '5253.08',
'ZMW': '11.94',
'ZRX': '4.04721481',
'ZWL': '322.36'}}}
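Extracting one particular rate from a response shaped like the one above is plain nested-dictionary indexing. A minimal sketch, run against a hard-coded sample of the structure (numbers copied from the session output) rather than a live request:

```python
import json

# Trimmed-down sample shaped like the /v2/exchange-rates response above.
sample = json.loads(
    '{"data": {"currency": "USD",'
    ' "rates": {"BTC": "0.00029200", "ETH": "0.00944599", "EUR": "0.87"}}}'
)

# Rates are returned as strings, so convert before doing any arithmetic.
btc_rate = float(sample['data']['rates']['BTC'])
print(btc_rate)  # 0.000292
```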
The following code:
import requests
uri = 'https://api.pro.coinbase.com/currencies'
response = requests.get(uri).json()
for currency in response:
    if currency['details']['type'] == 'crypto':
        print(currency['id'])
will provide this output:
COTI
BTC
ETH
LTC
BCH
ZEC
XTZ
XRP
XLM
EOS
ALGO
DASH
ATOM
CGLD
FIL
ADA
ICP
SOL
DOT
DOGE
OXT
KNC
MIR
REP
COMP
NMR
ACH
BAND
ZRX
BAT
LOOM
UNI
YFI
LRC
CVC
DNT
MANA
GNT
REN
LINK
BAL
ETC
USDC
RLC
DAI
WBTC
NU
AAVE
SNX
BNT
GRT
SUSHI
MLN
ANKR
CRV
STORJ
SKL
AMP
1INCH
ENJ
NKN
OGN
FORTH
GTC
TRB
CTSI
MKR
UMA
USDT
CHZ
SHIB
BOND
LPT
QNT
KEEP
CLV
MASK
MATIC
OMG
POLY
FARM
FET
PAX
RLY
PLA
RAI
IOTX
ORN
AXS
QUICK
TRIBE
UST
REQ
TRU
WLUNA
You can use:
curl -X GET https://api.exchange.coinbase.com/products
Refer to:
https://docs.cloud.coinbase.com/exchange/reference/exchangerestapi_getproducts
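Per the linked reference, each object returned by /products includes a `status` field (e.g. "online") next to the trading-pair `id`, which gives you the tradable/not-tradable tag you were after. A minimal sketch against hard-coded sample data rather than a live call (the pairs and statuses here are purely illustrative):

```python
def tradable_pairs(products):
    """Return the ids of products whose status marks them as tradable."""
    return [p['id'] for p in products if p.get('status') == 'online']

# Illustrative sample; a live call would be something like:
#   products = requests.get('https://api.exchange.coinbase.com/products').json()
sample_products = [
    {'id': 'BTC-USD', 'status': 'online'},
    {'id': 'ETH-USD', 'status': 'online'},
    {'id': 'OLD-USD', 'status': 'delisted'},
]
print(tradable_pairs(sample_products))  # ['BTC-USD', 'ETH-USD']
```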
Here is my graph
g.addV('user').property('id',1).as('1').
addV('user').property('id',2).as('2').
addV('user').property('id',3).as('3').
addE('follow').from('1').to('2').
addE('follow').from('1').to('3').iterate()
Below is my approach when a user wants to follow another user; suppose 2 wants to follow 3.
I first check whether a follow edge exists between 2 and 3:
if(g.V().has(id, 2).outE('follow').inV().has(id, 3).hasNext())
{
    // the follow edge exists, so 2 is already following 3: drop the follow edge and add an unfollow edge between 2 and 3
}
else if(g.V().has(id, 2).outE('unfollow').inV().has(id, 3).hasNext())
{
    // 2 previously unfollowed 3 and wants to follow again: drop the unfollow edge and add a follow edge between 2 and 3
}
else
{
    // there are no edges between 2 and 3, so 2 is following 3 for the first time: add a follow edge between 2 and 3
}
But the drawback of this approach is that it needs to query twice every time, which impacts performance. Can you suggest a better approach?
You can build if-then-else semantics with choose(). A direct translation of your logic there would probably look like this:
gremlin> g.addV('user').property(id,1).as('1').
......1> addV('user').property(id,2).as('2').
......2> addV('user').property(id,3).as('3').
......3> addE('follow').from('1').to('2').
......4> addE('follow').from('1').to('3').iterate()
gremlin> g.V(3).as('target').
......1> V(2).as('source').
......2> choose(outE('follow').aggregate('d1').inV().hasId(3),
......3> sideEffect(addE('unfollow').from('source').to('target').
......4> select('d1').unfold().drop()).constant('unfollowed'),
......5> choose(outE('unfollow').aggregate('d2').inV().hasId(3),
......6> sideEffect(addE('follow').from('source').to('target').
......7> select('d2').unfold().drop()).constant('followed'),
......8> addE('follow').from('source').to('target').constant('followed-first')))
==>followed-first
gremlin> g.E()
==>e[0][1-follow->2]
==>e[1][1-follow->3]
==>e[2][2-follow->3]
gremlin> g.V(3).as('target').
......1> V(2).as('source').
......2> choose(outE('follow').aggregate('d1').inV().hasId(3),
......3> sideEffect(addE('unfollow').from('source').to('target').
......4> select('d1').unfold().drop()).constant('unfollowed'),
......5> choose(outE('unfollow').aggregate('d2').inV().hasId(3),
......6> sideEffect(addE('follow').from('source').to('target').
......7> select('d2').unfold().drop()).constant('followed'),
......8> addE('follow').from('source').to('target').constant('followed-first')))
==>unfollowed
gremlin> g.E()
==>e[0][1-follow->2]
==>e[1][1-follow->3]
==>e[3][2-unfollow->3]
gremlin> g.V(3).as('target').
......1> V(2).as('source').
......2> choose(outE('follow').aggregate('d1').inV().hasId(3),
......3> sideEffect(addE('unfollow').from('source').to('target').
......4> select('d1').unfold().drop()).constant('unfollowed'),
......5> choose(outE('unfollow').aggregate('d2').inV().hasId(3),
......6> sideEffect(addE('follow').from('source').to('target').
......7> select('d2').unfold().drop()).constant('followed'),
......8> addE('follow').from('source').to('target').constant('followed-first')))
==>followed
gremlin> g.E()
==>e[0][1-follow->2]
==>e[1][1-follow->3]
==>e[4][2-follow->3]
Using Solr 6.0.1.
I have the following field type declaration:
<fieldType name="customy_icu" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<tokenizer class="solr.ICUTokenizerFactory"/>
<filter class="solr.LengthFilterFactory" min="1" max="100"/>
<filter class="solr.NGramTokenizerFactory" minGramSize="2" maxGramSize="20"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
<analyzer type="query">
<tokenizer class="solr.ICUTokenizerFactory"/>
<filter class="solr.LengthFilterFactory" min="1" max="100"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>
customy_icu is used for storing text data in Hebrew (words are read/written from right to left).
When the query is "מי פנים", I get the results in an incorrect order: product_3351 ranks higher (more relevant) than product_3407, but it should be the other way around.
Here is the debug output:
<str name="product_3351">
2.711071 = sum of:
2.711071 = max of:
0.12766865 = weight(meta_keyword:"מי פנים" in 882) [ClassicSimilarity], result of:
0.12766865 = score(doc=882,freq=1.0), product of:
0.05998979 = queryWeight, product of:
8.5126915 = idf(), sum of:
4.7235003 = idf(docFreq=21, docCount=910)
3.7891912 = idf(docFreq=55, docCount=910)
0.0070471005 = queryNorm
2.1281729 = fieldWeight in 882, product of:
1.0 = tf(freq=1.0), with freq of:
1.0 = phraseFreq=1.0
8.5126915 = idf(), sum of:
4.7235003 = idf(docFreq=21, docCount=910)
3.7891912 = idf(docFreq=55, docCount=910)
0.25 = fieldNorm(doc=882)
2.711071 = weight(name:"מי פנים" in 882) [ClassicSimilarity], result of:
2.711071 = score(doc=882,freq=1.0), product of:
0.6178363 = queryWeight, product of:
9.99 = boost
8.776017 = idf(), sum of:
4.8417873 = idf(docFreq=22, docCount=1071)
3.93423 = idf(docFreq=56, docCount=1071)
0.0070471005 = queryNorm
4.3880086 = fieldWeight in 882, product of:
1.0 = tf(freq=1.0), with freq of:
1.0 = phraseFreq=1.0
8.776017 = idf(), sum of:
4.8417873 = idf(docFreq=22, docCount=1071)
3.93423 = idf(docFreq=56, docCount=1071)
0.5 = fieldNorm(doc=882)
</str>
and
<str name="product_3407">
2.711071 = sum of:
2.711071 = max of:
2.711071 = weight(name:"מי פנים" in 919) [ClassicSimilarity], result of:
2.711071 = score(doc=919,freq=1.0), product of:
0.6178363 = queryWeight, product of:
9.99 = boost
8.776017 = idf(), sum of:
4.8417873 = idf(docFreq=22, docCount=1071)
3.93423 = idf(docFreq=56, docCount=1071)
0.0070471005 = queryNorm
4.3880086 = fieldWeight in 919, product of:
1.0 = tf(freq=1.0), with freq of:
1.0 = phraseFreq=1.0
8.776017 = idf(), sum of:
4.8417873 = idf(docFreq=22, docCount=1071)
3.93423 = idf(docFreq=56, docCount=1071)
0.5 = fieldNorm(doc=919)
</str>
The product 3351 has name field value:
סאבליים סופט מי פנים
And product 3407 has name field value:
מי פנים מיסלרים
http://screencast.com/t/2iBwLQqu
How can I boost product 3407 so that it appears higher in the result list?
Thanks a lot!
If you have a specific query where you want to boost a document to the top of the result set, irrelevant of its own score, use the Query Elevation Component.
There is no automagic boosting for "appears earlier in the document", but there are a few ways to work around it. See How to boost scores for early matches for a couple of possible solutions.
"Relevancy" is a fluid term, and you have to implement the kind of scoring that you feel is suitable for your application beyond the standard rules. The debugQuery output you've included shows that the documents are scored identically on relevancy by default.
You can use an elevate.xml file to make a particular document appear at the top of the result set for a specific search term. Example:
<elevate>
  <query text="מי פנים">
    <doc id="your_product_ID" />
  </query>
</elevate>
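For elevate.xml to be picked up, the Query Elevation Component also needs to be registered in solrconfig.xml and attached to a request handler. A rough sketch of a typical configuration (the handler name and queryFieldType here are assumptions; adjust them to your schema):

```xml
<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <!-- field type used to analyze the incoming query text before lookup -->
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
</searchComponent>

<requestHandler name="/elevate" class="solr.SearchHandler">
  <arr name="last-components">
    <str>elevator</str>
  </arr>
</requestHandler>
```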